Global Media Tech Regulation Tracker – April 2023

Welcome to the FIPP Global Media Tech Regulation Tracker! This live doc is updated monthly, bringing you the latest policy, regulatory, and legal updates from around the media tech world.  


March 29th: Is the quest to incorporate Artificial Intelligence in our lives moving too quickly? Might we be creating entities that we don’t understand and will struggle to control?

That’s the view of a group of high profile researchers who have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing fears over the “profound risks to society and humanity” they claim this software poses.

The letter, published by the nonprofit Future of Life Institute, calls for… 

“All AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Signatories include Twitter owner and serial entrepreneur Elon Musk, author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs.

March 27th: Digiday looks at TikTok’s latest issues from a marketer’s perspective. It says that while the ad money heading towards the platform shows no signs of slowing down, sentiment towards the China-based social network is evolving.

“For some marketers, the bloom is coming off TikTok. This isn’t necessarily the case when it comes to ad spending — marketers remain invested in that regard. This is more so about the sentiment. That intense enthusiasm for the app that has emanated from marketers over the last four years or so is making way for reservation and, in some cases, trepidation.

“From a creator and marketing perspective, I value the reach and sheer force that a viral TikTok video can give me,” said Kara Harms, CEO of lifestyle blog Whimsy Soul. “But, I don’t believe in putting all my eggs in one basket. I’ve done the work to migrate my TikTok followers over to other platforms such as Instagram, email newsletter and blog posts.”

A year ago a comment like this would’ve been an outlier. Nowadays, not so much.

That’s due to several factors. Some are old, like TikTok’s well-documented measurement issues, while others are newer, like more competition. None, however, is as topical as the geopolitical tensions over TikTok.”

FIPP Congress Early Bird offer ends this Saturday 1 April

Time is running out to save on FIPP World Media Congress tickets. Our exclusive Early Bird offer ends soon. Register on or before 1 April to save €300 on each ticket! Congress takes place from 6-8 June in Cascais, Portugal.

March 1st: Inkstick Media takes a look at some of the furor surrounding TikTok and the calls to ban the platform. The author argues that social media is essentially a reflection of society, and that rather than prohibiting platforms, legislators need to look more closely at how they can regulate them.

“Facebook regularly experiences major security breaches. Instagram is linked to psychological harm. And censorship accusations toward Twitter are a bipartisan affair. TikTok isn’t worse than the competition, and banning it has significant costs, such as harming consumers and small businesses, provoking retaliation by the Chinese government, and stifling free speech. But this doesn’t mean that TikTok can’t be better regulated. Congress should craft regulations that are universal, limited, and targeted to the issues.”

Feb 28th: Above the Law takes a long look at the way that the media industry adheres to regulations as well as regulating itself due to commercial necessity.

It suggests that this might provide a blueprint for the way that AI would be governed moving forward – guided by regulations:

“AI-generated content and LLMs (large language models) are in their infancy. New policies, guidelines, and styles will need to be developed for AI-generated content. After all, content is content, whether generated by humans, machines, or humans and machines.

What is the role of AI training in creating consistent output? What is the role and responsibility of those that generate AI content to review it, similar to that of an editor?”

Feb 24th: Digiday has the latest on several cases being examined by the US Supreme Court that, it says, shed additional light on brand safety, the pace of innovation, and the future of digital advertising.

In late February the USA’s highest court heard separate oral arguments about Google and Twitter, and whether social networks should be held responsible for terrorist content that families of victims claim led to the deaths of their relatives. And although the focus of each case is quite narrow, experts say the stakes are much higher and could have a potentially far broader impact on the future of free speech, content moderation and how platforms sell advertising.

“It could transform both the ways that ads are hosted and recommended on the algorithms and also the way non-advertising content is recommended,” said Jeffrey Rosen, CEO of the National Constitution Center, a nonpartisan nonprofit focused on constitutional education.

Feb 24th: Facebook’s owner Meta says it will now give content violators up to seven chances before sending them to Facebook jail, reports Fortune. Yet the reprieve only applies to minor violations. Offences that Meta deems more serious will still be met with temporary or even permanent bans.

Feb 17th: Techxplore reports that TikTok, Twitter, Apple’s App Store, Amazon and several other online platforms have announced user figures in Europe that bring them under stricter EU regulations for policing internet content.

The companies published their numbers ahead of a deadline made compulsory under the new EU Digital Services Act (DSA) that puts internet behemoths operating in Europe under monitoring by the European Commission.

The platforms all said they had more than 45 million monthly active “recipients” of their services. That is the threshold above which they are categorised as a “Very Large Online Platform” (VLOP) or a “Very Large Online Search Engine” (VLOSE) under the DSA.

Jan 31st: A top European Union official told Elon Musk on Tuesday that Twitter Inc. will have to do more over the coming months to prepare for the bloc’s new social-media regulations.

As reported in the WSJ, Thierry Breton, the EU’s commissioner for the internal market, told Mr. Musk during a video call that there were only a few months left before major online platforms like Twitter will have to be fully compliant with the Digital Services Act. Mr. Musk has previously said that he intends to comply with the EU’s new rules.

Jan 17th: FIPP has the lowdown on the current state of play in the US on the Journalism Competition and Preservation Act, which would, if passed, force social media companies to compensate news content producers for the content that is shared on the former’s platforms.

The bill’s future looks uncertain though. Writing in Columbia Journalism Review, Emily Bell says she thinks that “alternative potential federally mandated funding sources are unlikely to pass in a Republican-controlled House.” The recent undignified spat over the election of the speaker of the House also suggests that enacting legislation in the coming years could be problematic. So the bill in its current format is dead for now.

The article also reports that in 2022 the UK Government suggested it was considering Australian-style regulation forcing big tech to partner with the media. Press Gazette concluded that a deal similar to the Australian one would yield as much as £170 million per year for publishers. The government hasn’t yet followed up with any proposed legislation.


March 24th: The CEO of TikTok, Shou Zi Chew, faced tough questioning from Congress in late March, reports The Daily Beast.

Much of the antipathy toward TikTok was really directed toward its parent company, ByteDance, and the Chinese government, whose control of the company and potential misuse of American users’ data was seen as the real threat associated with the app.

The Daily Beast reports that the chair of the House Energy and Commerce Committee, Rep. Cathy McMorris Rodgers (R-WA), who convened the hearings, said “We do not trust TikTok will ever embrace American values—values for freedom, human rights, and innovation.”

The highest ranking Democrat on the committee, New Jersey’s Frank Pallone, added “I’m not convinced that the benefits [of TikTok] outweigh the threats it poses to the American people.”

March 24th: The Guardian, and indeed many other news outlets, reported the surprise news that the governor of Utah, Spencer Cox, has signed sweeping social media legislation requiring explicit parental permissions for anyone under 18 to use platforms such as TikTok, Instagram and Facebook.

Cox also signed a bill prohibiting social media companies from employing techniques that could cause minors to develop an “addiction” to the platforms. The former is the first state law in the US prohibiting social media services from allowing access to minors without parental consent. “We’re no longer willing to let social media companies continue to harm the mental health of our youth,” Cox, a Republican, said in a message on Twitter.

March 23rd: While there has been a lot of discussion in recent years about cookies and their use in advertising, AdWeek has shifted the debate by looking at the role of pixels.

It points out that the FTC recently published a deep dive into pixels and their potential risks to consumers, noting that “companies using tracking pixels that impermissibly disclose an individual’s personal information (which may include health information) to third parties” may be in violation of a host of state and federal laws.

The explainer came after the regulatory body slapped fines on discount drug provider GoodRx in February and digital therapy firm BetterHelp earlier this month, for $1.5 million and $7.8 million respectively, both for sharing sensitive customer information with third parties like Google and Meta via pixels.

Feb 24th: Lawmakers across the US are pushing a variety of bills aimed at boosting privacy protection for kids’ personal information, limiting their access to social media without parental involvement, or keeping them off sites that include explicit content such as pornography.

The measures would rely on companies like Meta Platforms, Inc., Alphabet Inc., and TikTok Inc. to know how old their online users are—posing the conundrum of determining age without gathering too much sensitive information about a person’s identity.

More from BloombergLaw.

Jan 31st: Editor and Publisher reported how seven leading journalism, media and pro-consumer antitrust advocacy organisations sent a joint letter to President Biden calling on him to highlight, in his upcoming State of the Union address on Feb. 7, the importance of local journalism.

The letter stressed the urgent need for congressional action to preserve a strong democracy and a free press. Specifically, it urges President Biden to call on Congress to advance the bipartisan Journalism Competition and Preservation Act (JCPA) (S. 673 and H.R. 1735). The legislation, which would give small, local news outlets the ability to join together in negotiations to level the playing field with Big Tech platforms, was mothballed at the end of 2022.

Jan 24th: Republican senator Josh Hawley is to introduce legislation to ban TikTok in the US, according to a statement Tuesday, one month after a bill—sponsored by the Republican legislator—banning the social media app from federal devices was approved by Congress.

“TikTok is China’s backdoor into Americans’ lives,” Hawley tweeted, explaining that he planned to introduce the broader ban but not yet offering any details, and claiming the app “threatens our children’s privacy as well as their mental health.” More from Forbes.


Feb 27th: Various stakeholders have warned that the draft National Media Development Policy released by Papua New Guinea’s Department of Information and Communications Technology (DICT) on February 5 could undermine media freedom if approved by the government.

Asia Pacific Report says that the DICT asked stakeholders to share their input within 12 days, but this was extended for another week after Papua New Guinea’s Community Coalition Against Corruption (CCAC) criticised the short period for the consultation process.

The draft policy lays the framework “for the use of media as a tool for development.” The state emphasised that “it includes provisions for the regulation of media, ensuring press freedom and the protection of journalists, and promoting media literacy among the population.”

Jan 19th: Australia is continuing its tough stance against spreaders of disinformation by considering a new bill that would impose a compulsory code of conduct on digital platforms. Michelle Rowland, the communications minister, said that the Australian Communications and Media Authority will also be given new information-gathering powers to assess how platforms, including social media companies, respond to misinformation and disinformation.

The move, which was reported in The Guardian, follows the Digital Industry Group Inc – whose members include Google, Apple, Meta, Twitter and TikTok – toughening up its voluntary code of conduct in December.

Jan 19th: The Australian government is putting pressure on influencers and content creators on social media to come clean about their commercial deals and income streams.

The country’s competition and consumer watchdog, the Australian Competition and Consumer Commission (ACCC), has launched a national sweep of online platforms such as Instagram and TikTok, and is warning those with large followings to be more up-front about whether they are getting paid for product placement. More from ABC.


Jan 16th: The recently elected Brazilian government helmed by President Luiz Inacio Lula da Silva has reportedly got social media practices on its radar and is planning a wide variety of new regulations.

The Brazilian Report says that the new administration will push through a bill, which stalled in Congress last year, and create a regulatory framework for both social media and the media generally. Among the topics it hopes to address are the sustainability of journalism and the protection of individual and collective rights.

Jan 9th: Facebook owner Meta has removed posts from the platform that praised or supported the anti-democratic demonstrators in Brazil who early in January stormed the Supreme Court and presidential palace.

CBS reports that Meta is working on efforts related to Brazil’s election, such as removing posts that questioned the legitimacy of votes.


March 26th: The Toronto Star has been focusing on the issue of deep fakes. It highlights a couple of recent examples – one involving the U.S. President Joe Biden and Prime Minister Justin Trudeau apparently white water rafting on the Ottawa River, the other showing the arrest of ex-President Donald Trump.

The columnist argues “And here’s your news flash: while we can be amused by the fun such a picture can create, there is nothing funny in the least about its threat to people’s privacy, safety, sanity, and our very way of life.

Some moments in history demand a comprehensive public policy response to extinguish nascent, but dangerous, developments.

We are at that moment.

Make no mistake, the latest advancements in generative AI represent a threat entirely different from those posed by social media, the internet, and other innovations we have failed to adequately legislate over the last two decades.”

Feb 25th: Canada’s ongoing run-in with Google over content has taken yet another turn. Prime Minister Justin Trudeau said in late February it was a “terrible mistake” for Alphabet Inc’s Google to block news content in reaction to a government bill that would compel the tech giant to pay publishers in Canada for news content.

Google said this week it was testing blocking some Canadian users’ access to news as a potential response to the Trudeau government’s “Online News Act,” which is expected to be passed into law.

Trudeau, speaking to reporters in Toronto, said the blocking of news in Canada was an issue “bothering” him.

“It really surprises me that Google has decided that they’d rather prevent Canadians from accessing news than actually paying journalists for the work they do,” he said.

“I think that’s a terrible mistake and I know Canadians expect journalists to be well paid for the work they do.”

Dec 15th: Calls for more scrutiny of TikTok come as countries around the world move to prohibit the controversial social media app, whose parent company is based in mainland China. In an article in the Toronto Star, Conservative foreign affairs critic Michael Chong insists that Ottawa must investigate TikTok over national security concerns as more jurisdictions in the United States move towards banning the app.

Chong said the app’s reach and ability to manipulate algorithms and laws in China requiring companies there to cooperate with the government, including on intelligence operations, could present a national security threat to Canada. “I think the government needs to take this threat much more seriously than they have,” Chong said. “If you look at what our closest allies have done, they’ve all taken some action.”

He said algorithms could be manipulated for foreign influence operations, such as pushing disinformation meant to politically divide Canadians, and data the app collects on Canadians themselves could be used in espionage operations.


March 24th: The Sydney Morning Herald looks at the latest updates from China’s market regulator on rules for online advertising.

The agency has published updated rules on online advertising, including oversight of recommendation algorithms used by apps such as Douyin, the Chinese version of TikTok, that are used to push commercials to targeted individuals.

The amended Internet Advertising Management Measures, a big update from “provisional” regulation published in 2016, from the State Administration for Market Regulation, will take effect on May 1 this year, impacting a highly competitive, evolving market worth over US$70 billion.

While the updated regulations still focus on limiting pop-up online ads, they also lay the groundwork for the state to rein in powerful push algorithms. According to the update, anyone who uses recommendation algorithms in online advertising “must record the rules for algorithms as well as advertising logs”.

Feb 27th: Chinese media regulators are studying measures to curb addiction among youths to short videos, says Bloomberg.

The National Radio and Television Administration held a meeting Feb. 22 to consider ways to tighten oversight of the short video industry. The powerful agency called for the sector’s “healthy development” and improvements in content quality.

Feb 23rd: The Guardian reports that Chinese regulators have clamped down on access to ChatGPT, as Chinese tech firms and universities push forward with developing domestic artificial intelligence bots.

ChatGPT, the discussion bot created by US-based OpenAI, is not officially available in China, where the government operates a comprehensive firewall and strict internet censorship. But many had been accessing it via VPNs, and some third-party developers had produced programs that gave some access to the service.


March 15th: The EU’s antitrust chief says that the metaverse requires competition checks, reports the World Economic Forum.

Margrethe Vestager says it is “time for us to start asking what healthy competition would look like” in the virtual space.

Regulatory scrutiny of digital markets has been escalating worldwide in the last three years, Vestager says, adding that “there’s a much wider political debate that digital markets need careful attention”.

Bloomberg says that EU officials have already started to look into how AI tools such as ChatGPT are changing the landscape when it comes to regulating digital spaces.

Feb 25th: Ongoing discussions about the impact of the metaverse have been covered by BeInCrypto. EU Commissioner Yvo Volman said in late February that metaverse regulations must prevent discrimination and protect user privacy.

Speaking at the DG Connect event in Brussels, Data Director Yvo Volman said that the bloc must consider issues of inclusion, equality, and user privacy protection in its upcoming legislation, slated for May 2023.

While acknowledging the potential for the metaverse in surgery and learning, he emphasised that people must be equipped with tools to protect themselves in these virtual spaces.

Feb 24th: The Conversation has more details on the new regulatory framework: the European Media Freedom Act (EMFA).

Introducing the new framework, EU commissioner Thierry Breton said it contains “[…] common safeguards at EU level to guarantee a plurality of voices and that our media are able to operate without any interference, be it private or public.”

He said a new European watchdog would be set up to ensure transparency in media ownership. Another key feature will require EU member states to test the impact of media market concentrations on media pluralism and editorial independence.

Feb 23rd: EU Observer has an interesting take on how the EU might respond to the threats and opportunities posed by AI. It says the launch of ChatGPT in November last year has sparked a worldwide debate on Artificial Intelligence systems. Amidst Big Tech’s proclamations that these AI systems will revolutionise our daily lives, the companies are engaged in a fierce lobbying battle to water down regulations.

In April 2021, EU commissioners Margrethe Vestager and Thierry Breton presented a proposal for a European legal framework on AI. It was celebrated as the first global attempt to regulate AI — a technology that, as the commission observed, would “have an enormous impact on the way people live and work in the coming decades.”

Jan 30th: CNBC reports on the latest discussions the EU is having about TikTok.

EU Commissioner of the Internal Market Thierry Breton warned TikTok CEO Shou Zi Chew in a meeting this month the bloc could ban the app if it didn’t comply with new rules on digital content well ahead of a September 1 deadline.

That’s a marked shift from the EU’s near silence on TikTok, while U.S. lawmakers have been aggressive — banning the app from federal devices in December over national security concerns. A proposed bipartisan bill also seeks to block the app from operating in the U.S.

Jan 24th: Tech HQ has a story about how companies need to up their game to meet the demands of Europe’s GDPR legislation on data use and security. In the article Sasha Grujicic, Chief Operating Officer at NowVertical, a company specialising in big data and analytics, outlined the potential benefits companies can unlock by being prepared to meet the demands – or at least the spirit – of data privacy rules like Europe’s GDPR legislation. Among them, the opportunity to not be fined $414 million while working with other people’s data.


March 27th: India Law Business Journal looks at the current legislation governing personalised advertising in India. It describes how it has become a tightly focused subject in the field of data protection and consumer protection, and suggests that the country’s legislators might consider an EU GDPR-style approach.

March 24th: India has announced sweeping rules that could force social media companies to break into encrypted messages and take down posts New Delhi deems contentious, reports the FT.

“Government officials said that the new guidelines would help end “double standards” by making platforms more accountable to the law. The rules, which apply to almost everything online, follow a government stand-off with Twitter earlier this month after it refused to block accounts tweeting about widespread farmers’ protests.

“We want them to be more responsible, more accountable,” said IT minister Ravi Shankar Prasad in New Delhi, “if they [won’t], then whatever provisions are there in the law will take their course.” Prasad described the rules as “soft touch oversight” and called on companies to “self-regulate”. The new rules require companies to take down offensive content that threatens the “unity, integrity, defence, security or sovereignty of India” or “causes incitement” within 36 hours of an order, according to a copy of the draft legislation seen by the Financial Times.”

Feb 27th: The Hindu has an update on how the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, issued two years ago and administered by the Ministry of Information and Broadcasting (I&B), are being observed.

It says the body was given the task of regulating content on OTT and online platforms. India’s approach can be termed as a light-touch ‘co-regulation’ model where there is ‘self-regulation’ at the industry level and final ‘oversight mechanism’ at the Ministry level.

More here.

Jan 25th: Exchange4Media reports on an interesting discussion about the future of laws governing media in the country. Several of the participants suggested that while the country has good laws governing the media, they are not always enforced as well as they should be. The discussion also covered the impact that AI-driven content generator ChatGPT could have on the country’s media.


March 24th: According to The Times, the metaverse is starting to develop too quickly for legislators to keep up. In an article behind its paywall it quotes senior Meta executives who suggest that legislators from both the US and the EU need to focus on delivering regulations that can enable the metaverse to grow while at the same time offering a degree of protection to its users.

March 2nd: Who owns the copyright of AI-generated content? That is one of many pertinent questions about the future of automated content asked in an article published by Press Gazette.

“AI’s legal and ethical ramifications, which span intellectual property (IP) ownership and infringement issues, content verification and moderation concerns and the potential to break existing newsroom funding models, leave its future relationship with journalism far from clear-cut,” concludes the author.

Feb 22nd: AI chatbots are likely to face scrutiny in the long-discussed Online Safety Bill, reports TechRadar.

The website highlights how Lord Stephen Parkinson, a junior Parliamentary Under-Secretary in the Department for Culture, Media and Sport, confirmed the plans to include AI-generated content into the scope of the proposed legislation.

This comes as both search engines and social media platforms are in the process of integrating their service with software like ChatGPT, revealing the potential risks involved with artificial intelligence tools.

Currently being discussed in the House of Lords, the Online Safety Bill’s end goal is “to make the UK the safest place in the world to be online” by making tech executives liable for breaching rules. “Content generated by artificial intelligence ‘bots’ is in scope of the Bill, where it interacts with user-generated content, such as on Twitter. Search services using AI-powered features will also be in scope of the search duties outlined in the Bill,” said Lord Parkinson, The Telegraph reported.

Jan 22nd: How tech can potentially make live streams safe for younger audiences is the subject of a lengthy story published in the Financial Times. The article reports on the response from Meta, TikTok, YouTube and others to ensure that incidents of self-harm are not broadcast. Tech companies are exploring tactics such as more effective age-verification techniques and encryption to counter claims that they are not doing enough to prevent the distribution of harmful content.


Your first step to joining FIPP's global community of media leaders

Sign up to FIPP World x