[New!] FIPP Global AI in Media Tracker – June 2025
Welcome to the FIPP Global AI in Media Tracker. This live document is updated regularly with the latest developments, insights and breakthroughs in artificial intelligence (AI) and its influence on the media sector. Whether you’re interested in regulatory changes, innovative solutions or industry news, this tracker is your go-to resource for staying informed.
GLOBAL
May 5, 2025 – Global news industry pushes for AI code of practice.
Thousands of public and private news outlets worldwide joined an initiative by the European Broadcasting Union (EBU) and WAN-IFRA calling on AI developers to ensure AI is safe, reliable and beneficial for the news ecosystem. The “News Integrity in the Age of AI” initiative, unveiled at the World News Media Congress in Poland, proposes five key principles for a joint code of practice – including that news content must only be used in generative AI with the publisher’s authorisation, that up-to-date quality news should be fairly compensated when used, and that AI outputs should clearly attribute original sources. BroadbandTV News.
Why it matters: This united front by publishers demands responsible AI use that supports quality journalism (through consent, fair value, attribution and transparency) rather than undermining it. By setting standards and demanding cooperation, media organisations are proactively shaping AI’s role in journalism and aiming to safeguard news quality and integrity worldwide as AI tools become more prevalent in news production.
May 3, 2025 – Journalists demand ethical AI on World Press Freedom Day.
Marking World Press Freedom Day (themed around AI’s impact on journalism), the International Federation of Journalists (IFJ) urged that AI be put at the top of the agenda in social dialogue between media organisations and journalists’ unions. The IFJ’s statement – echoing its 2024 recommendations – insisted that “AI cannot replace human journalists, and its output must not be considered ‘journalism’ without appropriate human oversight and fact-checking”. ifj.org.
Why it matters: The global journalism community is drawing a red line: AI should only assist, not supplant, editorial judgment and verification. By calling for collective action to ensure AI serves ethical journalism (and protects jobs), groups like IFJ are pushing newsrooms worldwide to adopt principles of transparency, accuracy and human accountability in any AI integrations.

FIPP World Media Congress – Madrid, Spain, 21-23 October 2025
Balancing AI and editorial integrity will be one of the key topics at this year’s Congress. Join the conversation and shape the future of media.
AMERICAS
June 2, 2025 – AP launches AI training for newsrooms.
The Associated Press introduced an “AI in the Workplace” course offering journalists practical tools and ethical guidance on AI. The programme includes real-world newsroom use cases and emphasises editorial standards. AP Workflow.
Why it matters: As AI tools proliferate, newsroom education is key to responsible integration.
May 29, 2025 – Google’s AI search mode sparks publisher fears.
Google launched a new “AI Mode” in U.S. search, showing fewer links to original sources. Tests by Press Gazette found the AI often omitted credit to the original reporter.
Why it matters: AI search summaries could divert traffic away from news sites and diminish attribution. PressGazette.
May 29, 2025 – New York Times strikes AI content deal with Amazon.
The New York Times agreed to license content to Amazon for training Alexa’s AI, its first such generative AI deal. The Times previously sued OpenAI and Microsoft for unlicensed use.
Why it matters: This signals a new business model for monetising journalism in the AI era. The Verge.
May 21, 2025 – Push for US legislation against AI impersonation (No Fakes Act).
Tech and creative industry leaders – including a major music artist – testified before a Senate Judiciary subcommittee to urge support for the proposed No Fakes Act, which would outlaw unauthorised AI-generated replicas of a person’s voice or likeness. They warned that Americans, from celebrities to ordinary people, are at risk of having their voices or images cloned by AI to defraud or misinform the public, and advocated holding companies liable for producing such deepfakes without consent. Read more here.
Why it matters: The bipartisan No Fakes Act, reintroduced in April, would establish federal protections (with notice-and-takedown mechanisms) against these abuses. This matters for journalism because preventing AI-driven impersonations – whether of public figures, sources or officials – is crucial to safeguarding truth and trust in news. If enacted, it would fill a legal gap and set a precedent for how governments can curb AI-fuelled disinformation while permitting beneficial uses of the technology.
May 19, 2025 – US enacts law targeting deepfake porn and AI abuse.
President Donald Trump signed the bipartisan Take It Down Act, which imposes stricter penalties for the creation or distribution of non-consensual intimate imagery and certain AI-generated deepfakes. Read more.
The law, passed after growing alarm over AI-driven “revenge porn” and fake sexual content, holds perpetrators accountable for using AI to fabricate harmful images. This development is important for news and media because it’s among the first federal laws directly addressing AI harms, signalling that lawmakers are starting to grapple with how generative AI can be weaponised.
Why it matters: By criminalising malicious deepfakes, the act aims to protect individuals’ privacy and dignity and, by extension, helps uphold trust in authentic media at a time when AI can easily distort reality.
May 16, 2025 – US House moves to block state AI laws.
A federal bill passed by the House would prevent states from regulating AI until 2035. California lawmakers oppose the move, citing public safety and local protections. AP News.
Why it matters: This centralised approach to AI governance could significantly impact media oversight and platform accountability.
May 9, 2025 – US Copyright Office weighs in on AI training and news content.
The US Copyright Office released a report analysing whether using copyrighted material to train generative AI models is fair use or infringement. News publishers welcomed the report’s careful stance, which recognises content owners’ rights. It noted that when AI outputs essentially summarise or replicate news articles, that use is likely not transformative and implicates copyright. The News/Media Alliance praised the report’s conclusion that existing US copyright law is capable of handling new AI technology and the key issue is ensuring AI developers respect the law. News Media Alliance.
Why it matters: This development is significant because it bolsters publishers’ efforts to protect their investment in original journalism. With over 40 AI copyright lawsuits underway, the Copyright Office’s guidance may influence how courts and policymakers balance innovation with the rights of news creators, potentially leading to norms (or laws) that require AI firms to license content or limit the scope of unpermissioned scraping of news for AI training.
EUROPEAN UNION
May 27, 2025 – Consortium plans major AI data centre.
A consortium of German companies, including SAP, Deutsche Telekom, Ionos and the Schwarz Group, is in discussions to build a significant AI data processing centre. This initiative is part of the European Union’s strategy to establish AI “gigafactories” and reduce dependence on non-European AI infrastructure. The project aims to secure part of the EU’s $20 billion funding initiative to enhance AI capabilities across the continent. Reuters.
Why it matters: This move demonstrates Germany’s leadership in strengthening Europe’s AI infrastructure, fostering technological independence and competitiveness on a global scale.
May 7, 2025 – Nordic publisher sees broad newsroom uptake of AI tool.
Amedia, Norway’s largest local news publisher, reported that 51% of its journalists are using its in-house generative AI “sandbox” every week. The company’s head of editorial AI outlined how Amedia launched an editorial AI hub in 2024, bringing together journalists, developers and data scientists to experiment with AI across language, personalisation, and content formats. After initial skepticism, over 500 reporters have tried the AI Sandbox for tasks like drafting articles, and more than half the newsroom routinely integrates it into work. INMA.
Why it matters: This surge in adoption demonstrates a pragmatic embrace of AI in European newsrooms, even outside the EU’s biggest markets. It shows that publishers are finding ways to boost reporting efficiency and output with AI assistance, while attempting to maintain editorial quality. Amedia’s experience – focusing on improving journalism rather than replacing it – could serve as a model for other European outlets navigating AI’s learning curve.
(Note: The EU’s regulatory backdrop continues to evolve – the EU AI Act, which entered into force in 2024, has begun phasing in rules like requiring AI transparency and banning certain manipulative uses. While most provisions aren’t active yet, Europe’s publishers are anticipating compliance by building ethical AI practices into their workflows.)
April 2024 – Mistral AI and AFP collaborate on fact-based chatbot.
French AI startup Mistral has entered into a multimillion-euro agreement with Agence France-Presse (AFP) to integrate AFP’s news articles into its chatbot, Le Chat. This partnership aims to provide reliable, fact-based journalism through AI, countering the influence of less regulated AI content sources. Over 2,000 AFP articles in six languages will feed into Le Chat daily, supporting the dissemination of trustworthy news content. Financial Times (paywall)
Why it matters: This collaboration underscores a proactive approach by European media to harness AI for disseminating accurate information, reinforcing the importance of journalistic integrity in the digital age.
UNITED KINGDOM
March 31, 2025 – BBC charts a cautious, collaborative AI strategy.
The BBC announced that it will open talks with leading AI tech providers as part of a push to safeguard trusted news. Unlike some UK rivals (such as The Guardian or The Times, which have struck partnerships to license content to AI firms), the BBC has so far avoided formal deals. Its research found that nine out of ten AI chatbot answers to news queries had issues, with half containing “significant” inaccuracies. PressGazette.
Citing the “growing threat to trusted information” from generative AI, the BBC is seeking cooperation with companies like OpenAI, Google and others to find solutions to AI-driven distortions. At the same time, BBC News is investing in its own AI-driven projects (for example, faster multilingual news translations and automated subtitles for audio content) with human oversight.
Why it matters: Britain’s leading public broadcaster is setting the tone for responsible AI adoption. The BBC aims to embrace AI innovation to reach younger audiences and personalise content, but firmly on its own public-service terms, prioritising accuracy, transparency and intellectual property protection. This approach may influence other UK media outlets, and even regulators, as they grapple with AI’s opportunities and risks in journalism.
AFRICA
May 3, 2025 – African journalists urge safeguards against AI risks.
The Federation of African Journalists (FAJ) marked World Press Freedom Day by calling for the responsible use of AI to safeguard journalism in Africa. In a statement celebrating African reporters’ contributions to democracy, FAJ warned that AI is transforming news production in ways that bring “immense risk” alongside innovation. The federation emphasised that AI’s influence “must not dilute journalistic ethics, compromise editorial independence, or silence critical voices” and that automated content systems should not displace community narratives from underrepresented regions. FAJ urged the African Union, national governments, media houses and tech companies to develop enforceable safeguards – insisting that AI should support, not replace, human journalists. FAJ website, ifj.org
Why it matters: This regional stance is important because it highlights a proactive effort in the Global South to shape AI’s impact on media. By demanding transparency, fairness, and data privacy protections, African journalists want to prevent scenarios where unchecked AI could amplify disinformation or surveillance and undermine press freedom. It’s a call to ensure Africa’s adoption of AI in media happens on ethical and equitable terms, preserving the hard-won gains in independent journalism.
ASIA
April 17, 2025 – Japan’s Nikkei launches AI service with built-in source attribution.
Leading Japanese publisher Nikkei Inc. introduced “Nikkei Kai,” a generative AI-powered research and information service aimed at business professionals. The service uses retrieval-augmented generation (RAG), essentially combining a custom search of trusted media databases with OpenAI-style text generation. It produces concise analytical reports on markets and industry trends, while indicating the sources of information in its output. Notably, Nikkei Kai is designed to handle all necessary rights clearances for the data it uses so that companies can utilise its AI-generated insights without risking copyright or compliance violations. IT Business Today.
Why it matters: Nikkei’s launch is significant as it shows a major Asian news organisation actively innovating to monetise AI responsibly. By building an in-house AI product that emphasises accuracy, attribution and respect for content ownership, Nikkei is addressing two key concerns – quality control and intellectual property – that have made many newsrooms cautious about generative AI. This move could pave the way for other Asian media to explore new AI-driven products (such as paywalled AI news summaries or business intelligence tools) that augment their journalism and generate revenue, while maintaining credibility and legal safety.
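For readers unfamiliar with the retrieval-augmented generation (RAG) pattern described above, it can be sketched in a few lines. This is an illustrative toy, not Nikkei’s actual system: the archive, the keyword-overlap scoring and all names are invented for demonstration, and a production service would use vector search over a licensed news database plus a large language model for the generation step.

```python
# Toy sketch of retrieval-augmented generation (RAG) with source
# attribution. Everything here is a simplified assumption for
# illustration, not any publisher's real implementation.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    source: str
    text: str

# Stand-in for a trusted, rights-cleared media database.
ARCHIVE = [
    Article("Chip demand surges", "Example Wire",
            "Semiconductor demand rose sharply on AI infrastructure spending."),
    Article("Retail sales flat", "Example Daily",
            "Consumer retail sales were unchanged quarter over quarter."),
]

def retrieve(query: str, archive: list[Article], k: int = 1) -> list[Article]:
    """Rank articles by naive keyword overlap with the query
    (a real system would use embeddings and vector search)."""
    terms = set(query.lower().split())
    scored = sorted(archive,
                    key=lambda a: len(terms & set(a.text.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_with_attribution(query: str) -> str:
    """Compose a grounded response, citing each retrieved source
    (a real system would pass the retrieved text to an LLM)."""
    hits = retrieve(query, ARCHIVE)
    body = " ".join(a.text for a in hits)
    cites = "; ".join(f"{a.title} ({a.source})" for a in hits)
    return f"{body}\nSources: {cites}"

print(answer_with_attribution("semiconductor demand"))
```

The design point mirrored here is that every generated answer carries explicit source attribution, which is what distinguishes a service like Nikkei Kai from general-purpose chatbots that answer without citing where the information came from.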
March 13, 2025 – Press freedom concerns over state use of AI in India.
Media freedom advocates are raising alarms about government deployment of AI to monitor and potentially intimidate the press. In the state of Maharashtra, authorities approved a plan to spend 100 million rupees (US $1.4M) on an AI-based media monitoring system that will scrape and analyse news reports, tagging them as “positive” or “negative” in tone toward the government. Officials claim the system will help provide “the truth or facts” to counter critical coverage, but the Committee to Protect Journalists (CPJ) and local media bodies fear it could be used to harass outlets that publish unfavourable news. cpj.org
Why it matters: This development exemplifies how AI tools can be misused by those in power, potentially leading to automated censorship or self-censorship. The backlash in India underscores a broader point for Asia: as governments adopt AI in the media sphere (for surveillance, content moderation, etc.), strong safeguards for press freedom and independent oversight will be needed to prevent technology from becoming a tool of information control.
AUSTRALIA
May 18, 2025 – News Corp mandates AI training for its journalists.
Media giant News Corp Australia (publisher of The Australian, Daily Telegraph and more) has begun rolling out mandatory AI “bootcamps” for all editorial staff as it accelerates the use of AI in news production. In an internal memo, the Murdoch-owned company informed journalists they must learn to use its proprietary “NewsGPT” tool and other AI functions integrated into its content management system. The move, which comes as News Corp enters pay negotiations with its journalists’ union, is a proactive effort to embed AI in daily workflows and ensure reporters are “confident using the tools now available”. Capital Brief (paywall).
Why it matters: This is an example of major newsrooms adapting to AI at scale. It shows how news organisations are investing in upskilling their workforce so that AI becomes a routine reporting aid (for tasks like drafting articles, research or personalising content). However, it also raises questions about the impact on jobs and editorial quality. The significance of News Corp’s strategy is twofold: it illustrates the competitive drive to boost efficiency with AI in the media business and it highlights the need for transparency and training so that journalists can harness AI ethically.