How news coverage, often uncritical, helps build up the AI hype
“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, told the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”
While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.
Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor in the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots”.
Of course, there are articles focused on this possibility, complete with a still image of a T-600 from one of the Terminator movies, just as, more widely, the independent researcher Nirit Weiss-Blatt has documented a tendency in some quarters to follow the old journalistic proverb that “if it bleeds, it leads.” In her work on what she calls the “AI panic”, she writes about journalism that tends to foreground claims about the possible extinction risk AI might pose in the future while, in effect, diverting attention away from current, real-world problems ranging from discrimination and inequality to the environmental impact of energy- and water-hungry technologies.
More broadly, several pieces of research on media coverage of AI suggest that news coverage is anything but dominated by negative angles and critical voices. Instead, it comes across as industry-led and generally positive, even uncritical.
When J. Scott Brennen, now Head of Online Expression Policy at the Center on Technology Policy at UNC-Chapel Hill, led work we did here at the Reuters Institute in 2018 mapping how UK media cover AI, he showed that nearly 60% of news articles across outlets were indexed to industry products, initiatives, or announcements. Furthermore, 33% of the unique sources identified in the coverage analysed came from industry, almost twice as many as from academia. In a more qualitative accompanying analysis, the same team of authors suggested that AI coverage in the UK tended to “construct the expectation of a pseudo-artificial general intelligence: a collective of technologies capable of solving nearly any problem”.
A team of researchers found a very similar situation in Canada. In their report, they write that, whatever those at the receiving end may feel, “tech news tends to be techno-optimistic”, and that AI is generally covered more as business news than as science and technology news. Overall, they find, “very few critical voices are heard in legacy media” in Canada.
Of course, industry voices are not the only ones shaping AI coverage – national political context also plays an important role. As Weili Wang and coauthors show in their comparative analysis of AI coverage in the UK, China and India from 2011 to 2022, debates about AI tend to reflect national priorities, preoccupations, hopes and fears, all of which are projected onto emerging AI technologies.
What about generative AI?
These studies generally predate the explosive growth in coverage of generative AI over the last couple of years, but they remain important context for how news media have dealt with AI over the past decade. Early work on headlines in the UK suggests some of the patterns documented above are recurring in coverage of generative AI, with headlines “oscillating between promising potential for solving societal challenges while simultaneously warning of imminent and systemic dangers”, so perhaps closer to the concerns highlighted by Weiss-Blatt.
More broadly, reviewing 30 published studies of news media coverage of AI, Saba Rebecca Brause and her coauthors find that, while there are of course exceptions, most research so far finds not just a strong increase in the volume of reporting on AI, but also “largely positive evaluations and economic framing” of these technologies.
So perhaps, as Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), has written on X: “The same news orgs hype stuff up during ‘AI summers’ without even looking into their archives to see what they wrote decades ago?”
There are some really good reporters doing important work to help people understand AI—as well as plenty of sensationalist coverage focused on killer robots and wild claims about possible future existential risks.
But, more than anything, research on how news media cover AI suggests that Gebru is largely right – the coverage tends to be led by industry sources, and often takes claims about what the technology can and cannot do, and might be able to do in the future, at face value in ways that contribute to the hype cycle.