Our podcast: Digital News Report 2024. Episode 2. How audiences think about AI and journalism

"There are areas that people think should remain in the hands of humans," says Amy Ross Arguedas in an episode about AI and the future of news
25th June 2024

The podcast

Spotify | Apple

In this episode of our Digital News Report 2024 series, we explore what people think about the use of AI in journalism. We look at how AI is being used in newsrooms, levels of comfort that people have with AI and journalism, and questions around transparency and trust when it comes to AI.

Speakers

Amy Ross Arguedas is a Postdoctoral Research Fellow at the Reuters Institute for the Study of Journalism and works on the Digital News Project. Amy completed her M.A. and Ph.D. in the Media, Technology, and Society program in the Department of Communication Studies at Northwestern University in 2020. Prior to pursuing her doctorate, Amy worked as a journalist for five years at the Costa Rican newspaper La Nación, where she covered various beats.

Our host Federica Cherubini is Director of Leadership Development at the Reuters Institute. She is an expert in newsroom operations and organisational change, with more than ten years of experience spanning major publishers, research institutes and editorial networks around the world.

The transcript

Awareness of AI | Comfort with AI in journalism | Transparency around AI in news | Trust and AI in journalism | How newsrooms may navigate this issue

Federica: So ChatGPT was launched to the public in late 2022. And since then, there has been growing interest in how generative artificial intelligence could be used and is being used in areas across society. Journalism is no exception. And with the potential for AI to transform how news is produced, what did you set out to explore in the research, Amy?

Amy: We've seen really rapidly growing interest in generative AI within journalism industry circles. But so far, audience research has been quite limited. So we didn't really know very much at all, especially at a global scale and beyond a handful of global North countries, about how much information about AI audiences are encountering in general, or how they might feel about the integration of generative AI into news production. So for this year's Digital News Report survey, we set out to measure some baseline levels of awareness about AI and also around comfort with AI being used in news among the general population across 28 of the countries that we include in the survey. And then we supplemented this with in-depth qualitative research in three countries: so Mexico, the United Kingdom, and the United States. And in this qualitative component of our research, we basically presented people with a wide variety of possible uses of AI in journalism, including some current uses already. And we gave them some space to experiment with and reflect on these different uses. And this allowed us to go into much greater detail and specifics than we could in the survey, and also to capture some areas of nuance and variation, and also kind of get a better understanding of the sensemaking behind these general comfort levels that we see in the survey.

Awareness of AI 

Federica: I imagine many of our listeners are familiar with the concept of AI, even if they may not have used it themselves necessarily. According to your survey, what is the level of awareness like around the world?

Amy: So even though interest in AI is very high in industry circles, the survey reminds us that awareness among the general public is still relatively low. So we measured this by asking people how much they have heard or read about AI. And we see that, on average, only around 45% of our respondents, so less than half, say that they have read or heard a large or moderate amount about AI. And then about 40% have heard or read a small amount and 9% say nothing at all.

Federica: Does this vary by audience and demographics?

Amy: Yeah, we do find some important differences in awareness when we look at different demographic groups. And we see this globally. So for example, people under the age of 35 tend to have higher awareness: 56% of them have heard or read a large or moderate amount about AI, compared to only 42% among those who are over the age of 35. So that's a 14 percentage point difference. And we see some similar differences based on gender and education levels as well, with men and those with higher levels of education also having higher levels of awareness.

Federica: As well as the raw data, you also asked respondents to describe what they knew about AI already, and what its growth could mean for society. Why was this important for your research? And what type of things did they say?

Amy: Yeah, so in our qualitative research, we had the opportunity to learn a little bit more about where people were encountering information about AI and also just how they were forming their opinions about it more generally. And so we learned that although a small subset of people were already using AI tools like ChatGPT for their work or for their schooling, and a lot of people had come across information about it in the news or on social media, for example, for other people, things like science fiction television series or films were also playing an important role in shaping their perceptions. And this is important because in the absence of personal experience with AI, these kinds of popular portrayals are going to play an especially important role in shaping people's perceptions of the technology, which in turn is going to shape their attitudes towards using news made with or by AI, which is something that most have likely not seen that much of at the moment.

Comfort with AI in journalism 

Federica: And how comfortable are people with journalism being produced by AI versus being produced by humans?

Amy: Our survey data tells us that comfort with the use of AI in journalism is still generally quite low. And this is the case in both of the scenarios that you've mentioned. So we find, across all of the countries, only around a third of respondents say that they feel comfortable using news made mostly by humans with the help of AI, so 36%. And the proportion is even smaller, just around one in five, when it comes to news that's made mostly by AI with some human oversight. Now, we do see some country-level differences here. Respondents in the United Kingdom, for example, tend to express below-average levels of comfort in the scenario of news being made mostly by AI, compared to the US, where we tend to have above-average levels of comfort. And we also see that people with greater AI awareness tend to feel relatively more comfortable with using news that's been made with or by AI.

Federica: Could you describe for our listeners some of the ways that AI is being used or could potentially be used by newsrooms? And then also what people think of these different use cases?

Amy: Yeah, so AI is not entirely new to journalism. Of course, you know, it's been used for mostly backend tasks in a lot of newsrooms for some time already: for things like monitoring trending topics online, transcribing interviews, and personalising recommendations for readers. And now what we're seeing is a real boom in interest in the use of generative AI specifically, and for increasingly public-facing applications. So for things ranging from summarising past articles, for example, or translating news into different languages, to chatbots and even synthetic news readers.

Federica: Is there a pattern in what people see as acceptable and unacceptable uses of generative AI? And what could explain this?

Amy: Yeah, so in our qualitative research, we found that people tend to be most comfortable with the more behind-the-scenes uses of AI. So cases where journalists are using AI as a tool to help make their work more efficient, and in ways that are not directly visible to audiences. In other words, cases where we could say the journalists are still very much in the driver's seat. We find that the public tends to be much less comfortable with, but in some cases still open to, the use of AI for delivering news in new ways or different kinds of formats. And this is especially the case when it comes to improving their own experiences as users or increasing accessibility. So here we're talking about things like using AI to provide, you know, more simplified versions of news articles, or where news content can be transformed into different formats, so for example from audio into text, or vice versa. And then lastly, we find that people tend to be the least comfortable with the use of AI when it comes to generating entirely new content. So for example, creating news articles from scratch.

Transparency around AI in news 

Federica: As AI becomes more and more infused in the process of journalism, newsrooms are going to have to decide how open they are with their audiences about how and when they're using it. And labelling journalism as AI-generated, even partially, may show transparency but also draw attention to something that audiences instinctively dislike. What do audiences expect when it comes to transparency around the use of AI?

Amy: This is an area of a lot of interest for newsrooms, and one that can be really challenging to navigate, especially when we think about how AI can be used for so many different things within newsrooms. When we asked our participants in the qualitative research about their expectations around the disclosure of the use of AI in journalism generally, they often demanded, you know, complete, total transparency for everything. But when we started probing into more specific kinds of AI applications, we started to see that not everyone found it necessary across all use cases. So some people saw labelling as less important when it came to behind-the-scenes kinds of uses where human journalists are using AI to make their work more efficient, but they did find it absolutely necessary when it came to AI producing public-facing outputs, and especially those that are made mostly by AI. And a lot of people reasoned that this would shape how they approached the information or, you know, their decision about whether or not to use it in the first place.

Trust and AI in journalism 

Federica: Closely related to questions around transparency is the question of trust. Are there any correlations between trust in news in general and levels of comfort around the use of AI in news?

Amy: Yes, so in our survey data, we looked at comfort levels with the use of AI in news among people who have high versus low levels of trust in news in general, so people who do versus do not trust most of the news most of the time. And here, what we find is that trusting audiences tend to be more comfortable with the use of AI in journalism, particularly when it comes to using news produced mostly by humans with the help of AI, which suggests that these are audiences who are kind of more trusting of publishers’ ability to use AI in responsible ways. And these gaps in comfort levels ranged from, you know, 24 percentage points in the US to 10 percentage points in Mexico. And we also saw evidence of trust in news shaping comfort with the use of AI in our qualitative data, in general, but also at the outlet level. So for example, individuals who trusted specific news organisations, especially those that they saw as being particularly reputable or prestigious, they also tended to be more open to those organisations, in particular, using AI.

How newsrooms may navigate this issue 

Federica: So you said, of course, that the use of AI in general is not new. But we can say that, relatively speaking, we're still in the early days of generative AI being used actively in news. Are there any guiding principles that newsrooms could take from your research around how they adopt this technology for the benefit of their audiences going forward?

Amy: Yeah, we think our research highlights areas where audiences seem to be more comfortable, and other areas where, to the contrary, news organisations are probably going to want to tread more lightly or even avoid altogether. So audiences seem to be more open to AI uses that are behind the scenes, and those where AI can help improve their experiences as users, so helping provide them with more personalised experiences and more accessible information. But they tend to be much less comfortable when it comes to the use of AI for creating public-facing content, and also when it comes to more sensitive or important topics, such as politics. And they also tend to be more resistant when it comes to images, so things like synthetic videos and pictures that may come across as real, and in those cases where mistakes or so-called hallucinations tend to be, you know, seen as more consequential. And overall, there's clear consensus that a human should always be in the loop and that complete automation should be off limits.

News organisations are also going to want to think very carefully, beyond how they implement AI, about how they're going to communicate this to audiences. So going overboard with labelling, for example, or using language that's really vague, carries the risk of scaring off people who already tend to have low levels of trust in news, and also people who have kind of lower levels of knowledge about AI, because they're going to tend to default to more negative assumptions. And that's what we've seen in our qualitative data. But at the same time, you know, failing to provide audiences with information that they may want, in order to decide what news they want to use and what news they want to trust, could also prove damaging. So publishers are going to have to thread this needle very carefully.

And lastly, I would say that it's worth keeping in mind that we are very much in the early stages of AI. And public perceptions are going to continue to evolve as people become better acquainted with these technologies, and in many cases, as people start to use them more and more in their everyday lives. But for now, it's quite clear that there are areas that people think should remain in the hands of humans. And we think that these kinds of work, which tend to require things like human emotion and human judgement and human connection, these are areas where publishers are going to want to keep humans front and centre.

Federica: Fantastic, Amy, thank you so much for joining us today and helping us understand the issue better.

Amy: Thank you.

 

Listen to the whole DNR24 series
