AI talking points at the FIPP World Media Congress 

When FIPP President & CEO James Hewes delivered his opening remarks at this year’s FIPP World Media Congress, he wasted little time in addressing what would become one of the biggest topics of discussion at the event. 

“The existential challenge that might be there from AI, not only to us but to some of the businesses in our ecosystem – we have to think very deeply about how we use it, not only as publishers, but also in the broader environment in which we work,” he told delegates. 

“It’s clear that we’re all thinking quite deeply about the impact that this technology could have on our world and on our businesses. We’re not treating this like some technologies in the past, where we just let them run free. We’re actually having sensible conversations.” 

The comments were a precursor to a lot of debate at the event in Cascais about the transformative power of AI. Here are some of the biggest talking points across the sessions at the Congress. 

Collaboration is key 

With the rise of artificial intelligence affecting everyone in media, it makes sense for those in the industry to compare notes when it comes to the challenges ahead, while also collaborating with technology platforms. 

Research carried out by Journalism AI, a partnership between the Google News Initiative and the London School of Economics that empowers news organisations to use artificial intelligence responsibly, shows there’s a big appetite in the industry for more opportunities to experiment and to collaborate both with other newsrooms and tech platforms.  

“We do believe that the best responses to the challenges that are ahead of us will come from good partnerships in the field, with academia, with technologists, and of course with journalists,” said Ana Rocha de Paiva, Senior News Programme Manager, EMEA, Google, who revealed that Journalism AI drew up a questionnaire that reached 116 news professionals from 71 organisations across 32 countries. 

Her sentiments were echoed by Isabel Reis Rodriguez, Chief Marketing and Digital Development Manager for Portuguese media conglomerate, Cofina. 

“In the end, working in a collaborative way is mandatory,” she said during a session exploring how, in the AI age, copyright is a critical catalyst for quality assurance and revenue generation. “That’s the key to a successful business model in the future. 

“Quality assurance is everything, so copyright is crucial for publishers and content creators and businesses, but it’s crucial also for platforms. They need us. They need our content. They need quality content. 

“And AI can help us to get closer to our audiences, can help with behavioural predictions, with mitigating churn, growing subscriptions, optimising revenue and contextual personalisation.  

“There are lots of experiences, lots of approaches, but we need to think in a collaborative way – generative platforms need media, and we will use AI.” 

Adopting a clear strategy 

Artificial intelligence can certainly make media organisations more efficient, but research by Journalism AI shows most newsrooms lack an effective AI strategy. 

“While 68% of our respondents believe AI can contribute to more efficiency in the newsroom, the big question remains: how exactly are we going to do that, and how exactly can we use this technology for good?” Ana Rocha de Paiva pointed out.  

“And 63% of our respondents told us they don’t really have an AI strategy, and that’s mostly because they lack the technological and financial resources. When we probed a little further and asked exactly what resources they were lacking, more than 40% told us they lack the understanding and the ability to know exactly how to work with the technology, or they lack the ability to hire people who understand the technology and can do something with it.” 


The statistics are further backed up by a recent survey conducted by the World Association of News Publishers (WAN-IFRA) which shows that while 49% of managers say journalists are free to use new AI tools, only 20% say they have guidelines in place.  

“This worries me a bit, because as publishing companies we need to have a very clear approach to how we want to use these technologies,” said Steffen Damborg, a digital transformation specialist and author of Mastering Digital Transformation, who works with WAN-IFRA. “It shouldn’t be the individual journalist at the Financial Times or the New York Times who decides how the New York Times will approach this new technology. This is a management decision.  

“We need the editors-in-chief to step up. We need the publishers to step up to make these guidelines, to make the rules for the journalists. It is vital for each of our companies that we have a clear policy.” 

Lingering concerns over accuracy 

Inaccuracy remains a major concern for the media industry and consumers when it comes to generative AI like ChatGPT. 

“We have to remember that accuracy is not the object of this technology. It is designed holistically, modelling plausibility – not accuracy, not veracity, and not using cited sources,” said Lexie Kirkconnell-Kawana, Chief Executive of Impress, the UK’s independent press monitor and advocate for trusted news. 

“The broader question we might want to be asking may therefore actually be one about advertising and advertising regulation, whether generative AI can be marketed as accurate. Also, whether users understand that what they are looking at is not accurate.” 

While problems around accuracy might have been worsened or thrown into relief by AI, they are not new, and they need a concerted effort from media organisations to rectify, Kirkconnell-Kawana pointed out. 

“To solve this, what we need is consensus across the news publishing industry to ensure that our house is in order, before we start pointing the finger,” she stressed. “And that requires cultural and structural change on how news is organised and regulated.” 

Steffen Damborg also highlighted issues around the lack of accuracy, as well as authenticity, when it comes to AI. 

“If we ask a journalist, what do you fear the most? Well, can you really rely on the output from these algorithms?” he said. “So, inaccuracy is the biggest concern as a journalist. You don’t want to publish anything that is not accurate.  

“And then of course, plagiarism. It’s also a huge concern. Do I have a guarantee that what I publish hasn’t been written by somebody else? That’s also a huge problem in journalism.” 

Speaking up for ChatGPT, Richard Lee, the Chief Integration Officer of The News Lens Group, said generative AI has been a big help for editors and translators.  

“Adding new languages to news sites has become vastly more efficient with new LLM-based AI workflows,” said Lee, pointing out that generative AI-assisted workflows are simply faster than traditional methods. 

“It’s really time efficient. Humans need to sleep, we need to eat. So, while breaking news about semiconductors will take a day or probably two days to get translated by a human, with AI it takes less than three minutes.  

“And then it takes less than an hour for our human editors to check the content and fix some minor issues, and then we are ready to publish. So, AI enables us to deliver the news fast and also in a cost-efficient way.” 
