OK computer? Understanding public attitudes towards the uses of generative AI in news

A new report commissioned by the Reuters Institute paints a nuanced picture of public attitudes towards AI in Mexico, the UK and the US
16th July 2024

While much of the news industry has set its sights on generative artificial intelligence and how it can support the production and distribution of news, little attention has been paid to how audiences feel about AI-powered innovation in journalism. This is no small matter in an already saturated media ecosystem where trust cannot be taken for granted. What kinds of applications are news audiences comfortable with, and which, by contrast, spark unease? How should news organisations think about disclosing the use of AI in newsrooms? And how might the use of AI affect, and be shaped by, trust in news?

To tackle these questions, the Reuters Institute commissioned strategic insight agency Craft to conduct qualitative research with 45 digital news users (15 per country) in Mexico, the UK, and the US. The research, carried out between January and February 2024, took participants on a three-phase deliberative journey: an initial interview documenting their baseline attitudes towards and understanding of AI; a week-long digital assignment in which they tried out and reflected on 25 use cases of AI in news; and a final interview in which they reflected on how these experiences had shaped their perceptions.

This report details how news consumers form their attitudes towards AI technologies, how they feel about different uses of AI in news, and why. Beyond initial reactions prompted by generalised suspicion and concerns about complete automation of journalistic content, the findings paint a nuanced picture, highlighting applications that are more or less likely to sit well with audiences. Here are five key findings from the report:

1. General attitudes towards AI are significantly shaped by broader cultural narratives. 

At a time when most people have little to no technical knowledge about, or direct experience with, generative AI, perceptions of these technologies draw heavily on popular culture, media narratives, and everyday conversations. Mediated discourse about AI is largely negative, and this colours how people think about AI in news specifically: they typically approach it with suspicion.

2. Individual level factors condition how people approach the use of generative AI in news. 

We identified three broad types of people, based on a range of relevant factors: 

  • Traditionalists tended to be most fearful of technological change and had the lowest levels of knowledge about and experience with AI
  • Sceptics were cautious and critical in their outlook towards technology, and were more clued up on generative AI and LLMs
  • Optimists were generally trusting of technological progress and most focused on its personal benefits

3. Comfort levels with the use of AI in news varied considerably across applications. 

Initial negative reactions to the use of AI by journalists typically defaulted to the assumption that AI would be used for content creation. As participants interacted with more use cases, their attitudes became more nuanced and, on balance, more positive. In addition to cultural and personal factors, comfort levels depended on four key elements:

  • Where in the process of news creation and distribution generative AI is used 
  • The type of information being conveyed, and whether human interpretation and feeling were required or desired 
  • The medium itself – text, illustrations, photos, and videos were viewed differently
  • How much human oversight there would be

4. Most see human oversight as imperative for the responsible application of AI. 

Participants viewed oversight as a principle of good journalistic practice more generally, and especially so when it came to AI. However, the expected level and nature of oversight varied by where in the process generative AI is used.

5. Disclosure of AI use requires a nuanced approach. 

Just as comfort levels varied across different parts of the news creation and distribution process, so did expectations around disclosure: participants felt disclosure (e.g. labelling) was less important for some behind-the-scenes tasks than when AI was used to deliver content in new ways, and especially when it was used to create new content. People saw disclosure as much less important when AI assisted journalists than when it augmented or automated content creation.

Attitudes towards AI, in general and in relation to news, will almost certainly continue to evolve as people become more accustomed to, informed about, and experienced with AI. We are at a critical juncture that presents news organisations with both opportunities and risks.

Approaching AI implementation both responsibly—with rigorous oversight, appropriate disclosure, and due consideration of audience comfort levels—and strategically—communicating how AI innovation can benefit journalism and users alike—can help news organisations more effectively navigate this uncertain period.
