Rise of the machines: Finding the best ways to use generative AI
No topic is creating greater angst and more excitement in media circles than the rapid rise of generative artificial intelligence. Seen as both an existential threat and a golden opportunity to expand and become more efficient, the wave of new technology washing over the industry has forced publishers to identify the AI that works best for them and to come up with ways to integrate it effectively into their operations.
In the latest FIPP Innovation in Media World Report, Juan Señor and Jayant Sriram of Innovation Media Consulting take a closer look at how media companies are leveraging AI to create new forms of content, enhance user experiences and drive business growth.
“The impact of AI on the media industry is not just about technological advancements,” the authors point out. “It raises fundamental questions about the role of journalism, the ethics of data collection and analysis, and the relationship between humans and machines.
“As AI technologies continue to evolve, media organisations need to understand the opportunities and challenges they present, and to develop strategies to leverage them effectively.”
Gremlins in the system
Of all the new AI tools, nothing has generated more column inches, sometimes literally, than OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT). Since its launch in November 2022, the powerful language model-based chatbot has caused feverish debate over its potential to replace journalists and, as some headlines have suggested, lead to the “death of artistry”.
One of the major concerns surrounding generative AI involves inaccuracy and a lack of truth, with OpenAI’s own limitations section freely admitting: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging.”
Equally important is that there is a cut-off date for the information ChatGPT sends back your way. If you push the chatbot hard enough about current developments, it will repeat one line back to you: that its training cut-off date is 2021.
AI’s opacity means sourcing is also a dead end. ChatGPT does not cite the sources behind the text it generates. While there are guesses floating around, OpenAI has not published a list of all the training sources used for ChatGPT.
“When working with tools like ChatGPT, keep in mind that it is smart, but not that smart,” the authors of the report point out. “It is a machine that has no intentions – it does not want to help or mislead you; it has no concept of what is real and no morals. It is just what it says on the tin – it generates text based on a lot of information it has been trained on.”
Because of this, media organisations need to fact-check absolutely everything tools like ChatGPT generate – and that verification goes beyond what would be needed if the text had been written by a human.
While large language models such as GPT-3 can enable journalists to use AI to write their stories, doing so raises ethical questions around authorship and plagiarism. If a journalist relies too heavily on AI to write their stories, who should be credited as the author? Additionally, there is the potential for the AI to inadvertently plagiarise other sources, raising questions about journalistic integrity and accuracy.
Three golden rules
So, where does this leave media organisations? How do you balance the good with the bad of generative AI? To stay one step ahead of ChatGPT and the like, publishers need to have a clear mission and vision around AI and address some key questions. For starters, media organisations have to realise they are in a position of strength.
“Conversational AI can do many things, but it relies solely on what has happened and been documented,” the authors of the report stress. “Compared to virtually every other industry, news publishers have the upper hand here because only editors and human beings can tell original stories and report from the ground. AI cannot literally write the future of our industry, but it can help make the work of journalism more efficient.”
Another crucial question is how newsrooms can leverage the power of AI for journalistic work without becoming over-reliant on it. While it is useful for generative AI like ChatGPT to produce large volumes of content, news organisations have to formulate a code or guidelines for what it can and should not be used for.
And thirdly, how can we hold AI accountable? “Generative AI systems like ChatGPT have been trained to a large degree by the content that we have produced. This raises important issues about sourcing, trust, and payments to content creators, especially as many of these AI are becoming commercialised,” the report points out.
Introducing AI by stealth
While there has been a lot of theorising about how generative AI will be phased into media companies, some publishers have taken the mechanical bull by the horns and experimented with ChatGPT.
BuzzFeed, for instance, announced earlier this year that it was using OpenAI’s publicly available software, similar to ChatGPT, to help produce its popular quizzes.
“To be clear, we see the breakthroughs in AI opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good,” the company’s CEO, Jonah Peretti, said in a memo to staffers.
According to BuzzFeed, it does not plan to use AI to write journalistic articles – a line most publishers are not eager to cross. Science and tech news platform Futurism did note recently, however, that BuzzFeed had quietly started publishing fully AI-generated articles produced by non-editorial staff. The publication identified around 40 articles, all of which appear to be SEO-driven travel guides, and noted that they were comically bland and similar to one another.
BuzzFeed wasn’t the only brand to introduce AI by stealth. Earlier this year it came to light that CNET, a massively popular tech news outlet, had been employing the help of “automation technology” in its financial explainer articles, possibly from around November 2022.
According to Futurism, the news sparked outrage, with critics pointing out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of AI text generators, at least at the time, was notoriously poor.
Meanwhile, The Verge pointed out that the AI tools were being used with little transparency to readers or staff. CNET never formally announced the use of AI until readers noticed a small disclosure. “We didn’t do it in secret,” CNET editor-in-chief Connie Guglielmo told the group. “We did it quietly.”
Other media groups have taken a harder line on using AI to write articles, while keeping the door open for other types of uses. “We will never have an article written by a machine,” Neil Vogel, CEO of Dotdash Meredith, told Axios. “We’re not denialists. We actually think it’s an incredible opportunity for us,” he added, noting that the company has already begun to use AI for some tasks, such as sourcing images.

A helping (robotic) hand
What are the ethical ways in which journalists and newsrooms could leverage the power of ChatGPT on a daily basis? According to Marcela Kunova at journalism.co.uk, the options range from generating summaries, emails and social posts to providing quotes and context for articles.
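To make that concrete, here is a minimal sketch of what such a newsroom helper could look like in Python, assuming the official openai client library and an API key in the environment; the model name, prompts and the draft_summary_and_social_post helper are illustrative choices, not anything prescribed by the report.

```python
# A rough sketch of a newsroom helper that asks a ChatGPT-style model for an
# article summary and a suggested social post. Assumes the official `openai`
# Python client (v1.x) and an OPENAI_API_KEY environment variable; the model
# name, prompts and helper name are illustrative, not taken from the report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_summary_and_social_post(article_text: str) -> str:
    """Return a short summary plus one suggested social post for an article."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an editorial assistant. Summarise the article in "
                    "three sentences, then draft one neutral social media post. "
                    "Do not add facts that are not in the article."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.3,  # keep the output close to the source text
    )
    return response.choices[0].message.content
```

Whatever comes back is a draft, not copy: as the report stresses, everything the model produces still needs human fact-checking before publication.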
Generative AI can be used in several ways in our swiftly evolving world of multimedia, including helping with image generation, transcribing interviews and fact-checking articles. Teams of computer scientists around the globe have already developed AI systems designed to detect manipulated media, misinformation and fake news.
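Transcription is arguably the lowest-risk of these uses. A sketch along the same lines, assuming OpenAI’s hosted Whisper model via the same Python client (the transcribe_interview helper and the file name are hypothetical), might look like this:

```python
# A minimal sketch of interview transcription using OpenAI's Whisper API.
# Assumes the official `openai` Python client (v1.x); the helper name and
# audio file path are hypothetical examples.
from openai import OpenAI

client = OpenAI()

def transcribe_interview(audio_path: str) -> str:
    """Send an audio file to the Whisper API and return the raw transcript."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcript.text

# Example: text = transcribe_interview("interview.mp3")
# The transcript is a first draft; names and quotes should still be
# checked against the recording before anything is published.
```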
A team from Drexel University recently published a new approach to detecting forged and manipulated videos: a system that combines forensic analysis with deep learning to catch fakes that would slip past human reviewers or existing detection tools.
A number of publishers have also started touting the cost-saving potential of the new technologies. Take Gannett, for instance, which has invested in AI and machine-learning tools to simplify routine tasks, such as quickly selecting and cropping images, personalising content and gathering datasets to inform readers on where to watch various sporting events.
Doug Horne, CFO at Gannett, said during the company’s earnings call earlier this year that these measures are aimed at achieving annual savings of at least $220 million this year.
Should journalists be worried?
The most provocative argument against generative AI remains the possibility that it will eventually make journalists redundant. This is not something writers should worry about, according to the authors of the Innovation in Media report:
“AI-based tools will never replace human journalists. As we know, CNET recently made the mistake of overestimating AI’s ability, yielding not only a series of articles rife with factual errors but a broader reckoning for the company and perhaps the industry at large. If AI replaced humans, which it would do ineptly, it would merely flood the internet with even more unreliable (but plausible sounding) junk.
“Much like previous technological advancements, tools like GPT could be part of a broader shift and redelegation of how journalism is done and change how reporters do their jobs — freeing them up to spend more time interviewing sources and digging up information and less time transcribing interviews and writing daily stories on deadline.”
Download the full Innovation in Media 2023 World Report here.
