Our podcast: How people are using generative AI, and what this means for news

We discuss findings from our major survey report into how people are using generative AI, what AI features they see online, how they see it transforming society and their own lives, and what this all means for news.

In this episode of Future of Journalism we discuss one of the hottest topics in journalism right now: how people are responding to the growing role of AI in news and wider society. We’ll look at how generative AI tools are being used, how people engage with AI-generated answers in online searches, and AI’s role in newsrooms and wider society. The discussion draws on findings from our 'Generative AI and news report 2025'.

The podcast

Spotify | Apple Podcasts

Speakers:

Dr Felix M. Simon is a (political) communication researcher and Research Fellow in AI and Digital News at the Reuters Institute for the Study of Journalism. Before joining us, he was a doctoral student at the Oxford Internet Institute (OII), where he is a Research Associate.

Host Mitali Mukherjee is the Director of the Reuters Institute and is a political economy journalist with more than two decades of experience in TV, print and digital journalism.

The transcript

AI use overall | AI-generated search answers | Perceptions of AI across society | Attitudes to AI and news

AI use overall 

Mitali: So let's do a context-setting question first, as we always do, Felix. Before we look more closely at AI and news as a subset, give us a bit of a bigger-picture feel for how generative AI, or GenAI, as I'm going to say through this conversation, is used more broadly, and how that compares with the previous survey from last year. How often are these tools used? Is there a popularity rating here? Who's using it?

Felix: Yeah, I think it's a fantastic question. It's always good to contextualise these things in many ways. I think the vibe for a lot of people was that AI use was already up. That's the sense lots of people have if you're reading the news, that's the impression you easily get, and it's something we can also confirm with our findings across those countries. Almost everyone has heard of at least one AI system by now. That's one of the most striking findings. It's really hard to find people who don't know what AI is and what these systems are. And the uptake also jumped year on year in comparison to 2024: the share of people who say they've ever used any AI system that we asked about rose from 40% to 61%, and weekly use of any of those tools rose from 18% of people saying they use them on a weekly basis to 34% this year, which is a quite dramatic increase. It's a doubling.

In many ways, when it comes to the tools that are most popular with people, ChatGPT is, probably unsurprisingly, in the lead. It's interesting to speculate why that might be the case. In many ways, it's because it's the earliest one that was made publicly available, and it also leads in our survey with 22% weekly use across countries, followed by systems like Gemini from Google, Meta’s suite of AI systems and Microsoft's Copilot. But basically the core user base for these big platform systems has again doubled year on year, which is quite striking. When we look at who's using it, again there's an interesting picture. Probably unsurprisingly, the users skew slightly younger, so 59% of the 18-to-24 year olds used any GenAI tool or system in the last week, versus just 20% of those aged 55 and older. It's something we see quite commonly with new technology: younger people are the first adopters, the early adopters. And that also seems to play out for generative AI, but if you drill down into the data, this age gap is driven mainly by ChatGPT, where 47% of the youngest cohort use it versus just 9% of the oldest.

One thing that I always say when I talk about this data is that it's important to remember that, despite these impressive numbers, there are still lots of people who haven't heard of or haven't used any AI system, many are only light or occasional users, and even for the top tools, large shares only use them monthly or once or twice. So despite the doubling that we've seen in weekly use, there's still a large part of the population in all those countries who only use them occasionally. That might change in the future, but at least that's the situation as it presents itself to us for now.

Mitali: Very interesting. As I said at the start as well, Felix, there's quite a wide variance geographically in terms of countries. Did you see a difference in usage country-wise? Because there are sometimes cultural differences in how people approach technology or how willing they are to embrace it.

Felix: In many ways, what we see is happening across countries, but there are some outliers. We're not quite sure if that's just down to cultural reasons. It could also be the demographic profiles, with some societies simply having a larger number of younger people, and therefore you see a different picture. But at least for weekly use, for instance, Argentina is the highest, where 51% of people say they used it weekly, followed by the USA with 36%, and then other countries like Denmark and the UK, with the lowest numbers in France and Japan, where it was 27% and 26% respectively in terms of weekly use of any GenAI system. ChatGPT is the top tool across countries; there's not a big difference there. The runners-up vary a bit, but they are all from the big platform companies. So in some cases it's Gemini first, in some cases it's one of the Meta AI systems, and that ranking differs by country, but broadly it's those four. And in many ways, it's only a small difference by country, I would say, at least for weekly use. It's not as striking as you would probably expect it to be.

Mitali: Are there clear verticals emerging though, Felix, in terms of what purposes people use these GenAI tools for? And that's aside from using them to solve a math problem or to check whether you have the right recipe.

Felix: Not bad uses either. I think you're completely right. It's something we asked about both last year and this year, and what we see is that information-seeking, information-retrieval tasks are now, in many ways, the top use case. That wasn't the case last year, but this year we found that 24% across countries say they use it weekly for that purpose, up from 11% in 2024, and it has thereby overtaken forms of media creation, say creating a poem or writing an email, which is at 21% now. The other thing we saw, and we unfortunately didn't ask about this last year, but we did this time, was that people also use it for social interaction. So 7% of people this year said they use AI systems for some form of social interaction, as a friend, as an advisor, although what exactly that looks like is a bit hard to say with our data, and follow-up research will hopefully dive deeper into what that means, or what it means to people. But at least when you ask people about it, that's what's emerging. And it's also higher for younger cohorts again, where 13% of the 18-to-24 year olds say that they've used an AI system in some shape or form to interact with it in a social way.

If you look more closely at these information-retrieval tasks that I talked about, we did see, for instance, that researching a topic is now at 15%, people asking for specific advice at 11%, and answering factual questions at 11% weekly. So people really seem to be embracing those systems as a way to make sense of the world and of things they are interested in. Creation tasks, again, remain mostly niche: 9% of people in our survey across countries said that they have used it for image generation. I personally was a bit surprised by that, because I would have expected this to be a lot higher, given we've seen quite a lot of development and progress when it comes to the image-generation abilities of those systems. If you look at the recent news about OpenAI’s Sora, for instance, or Google's Nano Banana, which are really quite good at creating realistic imagery in a variety of formats, I would have personally expected that more people make use of this, but basically we haven't seen those sorts of big increases. So it seems to suggest that the early adopters have done their work there, and this need has been saturated for now.

And when it comes to news, obviously something we're quite interested in at the Reuters Institute, that is small, even though it's rising. So this year we see that 6% of people say that they weekly use any GenAI system for news consumption or for news purposes in any shape or form, which is up from 3% last year, and it's strongest in Argentina and the USA. So there we do see these country differences play out. But again, it's a very small number in comparison to some of the other tasks people are using it for.

And if you then go further and ask, OK, what are these people who use it specifically for news purposes actually using it for? What do they do? We do find that asking for the latest news is the strongest use case there, but things like summarising, validating and rewriting news content also seem to be quite pronounced. And again, there's a clear age pattern among those who have used an AI system for getting news: younger people are more likely to ask those systems to help them make stories easier to understand, 48% versus 27% for the oldest cohort. But it's very important to remember again that these are very, very small segments of the overall population, and most people that we did survey didn't use it directly for news. They used it for other purposes, mostly around information-seeking.

AI-generated search answers 

Mitali: But just on that point about young audiences, I think it's important to point out that it ties in with a lot of our other research: when younger audiences are asked why it is that they don't turn to the news, sense-making is something that they point to as a challenge. So it ties in that many of them are using GenAI for that sense-making of news. And we'll dig into this in more detail, of course, but let's talk a bit about AI search, which is quite common now in many people's online experiences with the rollout of these AI-generated searches. What's the level of exposure? Does it vary across countries? And maybe it would be useful for people listening to have a placeholder on top of this question, which is: what is it that we're talking about when we talk about AI search?

Felix: Yeah, it's really important to broaden it out, as you say. In many ways, what we mean when we talk about this is things like getting an AI-generated answer in response to a search question. Many people will have had this experience, at least according to our data: they have a question, they go to Google or Microsoft’s Bing, the two dominant search engines, they type in their question, and then, instead of a list of links, a list of sources that they would follow, they get a short generated answer in a box, often with a list of sources they can then consult if they want to. So when we ask about seeing this, that's what we refer to. It's also important, before I go into the results, to say that this is where survey research becomes a bit tricky at times, because, of course, people are not always perfect at recalling exactly whether they've seen something, and stated behaviour can often differ to a degree from actually observed behaviour if you were to sit down with them and see what they actually do and have seen. But nevertheless, it gives you a fairly accurate and reasonably good idea of how common this is. And this is where we did find that across countries, 54% say that they saw an AI-generated search answer in the last week in response to one of their queries. And much to the point you made, this has now been broadly rolled out across countries. It's probably not very surprising that this is the case, and our data quite clearly shows this.

It's again interesting to see how that breaks down by country. Argentina really is the standout here, where 70% of people say that they've seen one. The UK is at 64%, the USA at 61%, followed by Denmark at 54%, Japan at 46% and France with just 29%, which might feel very low. But if we look into this in France, the reason is that at the time of the research, Google, the dominant search engine, with I think roughly 85% market share for search in France, hadn't yet rolled out AI Overviews or AI summaries, so people couldn't see those within the dominant search engine in France, and that explains this somewhat lower number.

Mitali: Let's dig a bit deeper with that one. How do people engage with these answers in terms of actually clicking through to the sources that they cite, as you said, that come up in that box with an AI-generated answer and any findings or learnings in terms of trust with these AI overviews?

Felix: Yeah, we did ask about both, and in many ways that's where the picture is a bit worrying, especially for publishers, because our data seems to confirm some of the recent industry reports we've heard about click-throughs, and the number of people reaching publishers’ sites, and the sites of any content creator on the internet, declining. We found that of those who see those answers, of those who said they have seen one, just 33% say they always or often also click through to the sources that are being displayed, and 28% say that they rarely or never do this, although younger users seem to be slightly more happy to click through. So in many ways, what we find here, and again, what people say might differ from what they actually do, is that these answers are good enough in many ways. People are happy with what they see and don't necessarily feel the need to actually click through to some of the underlying sources from which these answers are being generated.

In terms of trust in those answers, again, it's a finding that I personally find highly fascinating, also because it slightly goes against the grain of what some experts might expect, or might have expected. We did find that 50% of those who've seen those answers say that they trust them, broadly speaking, slightly higher in Japan and Argentina, a lot lower in the UK, and that the people who say they trust them are also more likely to click on one of those links. I think it's important to say that this trust is conditional in many ways. It's not just that people look at it and say, ‘Oh yes, I see this, I trust this.’ When we asked people in a more open way to explain why they trusted or distrusted those answers, lots of people said that they find them useful. They think that because they draw on a lot of information, the answer they get will be correct and therefore useful. But they also don't trust them for all kinds of topics. So what came through in those answers was that for very high-stakes topics, health, for instance, or politics, trust was a lot lower than for something like, say, celebrity gossip or maybe gardening advice, or the recipe for a pizza. So in many ways, it's not that people are just dupes who see those answers and think, ‘oh yeah, that must clearly be correct.’ They still have heuristics by which they assess them. And in many ways, these AI answers act a bit like a first pass, and then people will, at least in their own retelling, go on to contextualise this and follow up with other sources.

Mitali: What do both those findings imply for news, more broadly, Felix?

Felix: Well, in many ways, I think there's a discovery pressure. We do see that AI answers are now widely seen, and that a fair number of people don't actually click through. So there is this referral risk for publishers, though it's a bit difficult to say what that looks like at scale and how bad it will be for the industry in general or for different publishers. That's where a lot of follow-up work will be needed. And in many ways, at least when you talk to publishers, they say in some cases that the big impacts have not yet been seen. But that picture varies: other publishers would say, if you ask them, that they have already seen traffic, and the number of people coming to their content, go down as a result of those AI Overviews. So it will disintermediate, to a degree, the relationship between audiences and publishers, or content creators, even further. In many ways, it pours cold water on an idea that some pundits have, but also some people in the publishing industry and in academia: that because we, as people who study this, who think about this all day long and are quite concerned about the quality of information, often don't find these answers great, because we see how they are flawed, because we see and know from research and from talking to people that they can have issues, users will see it that way too. They don't necessarily, and that's what the data clearly shows for us. There's trust in those answers. We might not always like this, but convenience and usefulness clearly seem to be a selling point of those answers, and people seem to like them for that reason. So I think that's where we as experts have to be a bit humble, because our view of those things might be quite different from what general users think of those answers.

Perceptions of AI across society 

Mitali: And I think it's also important here to share with people listening that part of our approach at the Reuters Institute while studying AI has been not just to explore what engagement with GenAI and news looks like for audiences, but more broadly what this means for the public, how they're using AI in their daily lives, and what it means for society at large. And that was really fascinating, Felix, because in your report you did foray into that as well: people's general perceptions about AI's prevalence across not just news but different sectors in society. And which sectors were the ones where they felt that AI could actually improve interactions?

Felix: I mean, that's an area of the research and the report that we owe in particular to Rasmus Kleis Nielsen, who was leading on this, and I'm very glad he did, because it really produced incredibly interesting findings. In many ways, people think that AI is increasingly used across sectors. So 41% said in our survey that they think GenAI is used always or often across the various sectors we asked about. 67% say it is for search engines, 68% for social media. So people really expect GenAI to be used quite a lot. 51% say that for the news media, so again, still a fairly large number of people thinking that it's being used there. And in terms of what it does to the experience of this across sectors, we did find that 29% expect GenAI to make their personal interactions with those sectors better, while 22% thought the opposite would be the case. In many ways, it's interesting that the optimists seem to outnumber the pessimists for things like healthcare, science and also search, probably because, first, there's lots of media coverage on those topics. So we hear a lot of stories about healthcare being improved by AI advances, we hear how AI can potentially, and already has, pushed science forward in many ways, and with search, at least, thanks to things like AI summaries but also search results in general, people have seen that it does make a difference. That's probably why we see slightly more optimism around this.

Unfortunately, there are also pessimists, and the pessimists lead for things like the news media and the use of AI by governments, politicians and parties. Again, it's a bit difficult for us to say why that is the case, but in many ways we can reason from existing research and from other work we've done at the Institute that it has to do with general attitudes towards those sectors, which will also shape how people think about them in terms of AI. So if you have a negative attitude about the news media, if there's low trust in them, you're more likely to also think that AI use in this context will not necessarily be good for you. And the same with government, politicians and parties, where institutional trust, or trust in those actors, is low. It would not be surprising to find that people therefore also think slightly more pessimistically or negatively about AI used by those very actors, and what it means for their interaction with them.

Mitali: For a while, and perhaps it's changing now, a lot of the conversation within the news industry was like that as well, Felix, where it was very binary: AI was either terrible or it was great. Leaving the industry aside, what did we find in terms of how people are approaching this? Are they generally more optimistic or pessimistic about AI's role in their own lives, and then by extension, is it different or the same for wider society?

Felix: I think mostly, at least for the effect of AI on their own life, we see more optimists than pessimists. So in four out of six countries that we have in our research, there's more optimism when it comes to the effects people expect AI to have on their own life, if you ask them about it. The UK is the big exception, where there are more pessimists, and that generally shines through in the data.

Mitali: I wonder why!?

Felix: Who knows! When it comes to society, in many ways, there are more pessimists than optimists in three out of six countries, including the USA, and especially for the US we've seen that views have shifted towards a more negative stance since 2024. I would personally expect, and again, my expectation is not necessarily backed up by the research and the data at hand, that that is probably to do with general shifts in politics in the US, which will probably predispose people in certain ways. And it's also interesting to look at a very clear gender gap, which we did find in this context: women are generally less likely than men to expect benefits from AI, for both their own lives and society. Again, it would be very interesting to see why that is the case, but at least in our data it was very obvious.

Attitudes to AI and news 

Mitali: At the start of this year, as we always do, we published something called the Trends and Predictions report, which looks at how news leaders are thinking about the year. And there was certainly a really clear shift and leaning towards investing more time and effort in AI features within newsrooms, and they've been rolled out over the course of the last few months or more. What did we find in our research, Felix, in terms of how frequently people say they have actually come across such features?

Felix: Yeah, that's where the data gets maybe a little bit frustrating for some of the publishers who've been pushing for the adoption of AI features and have rolled them out. We did actually find that most people across countries don't seem to see those features regularly. So 60% said they do not regularly encounter audience-facing AI features on news sites and apps. Of course, this is because we asked the general population; that number would probably be higher if we just looked at, say, the subscribers of the Financial Times, a news outlet which has rolled out quite a lot of AI features very visibly. If you asked just that subgroup, the percentage would look quite different, and the same with maybe a place like Rappler or Der Spiegel. But at least for the population at large, most people say they don't regularly see those features. If we ask those people who say they have seen them what the most-seen features are, it's topped by AI summaries with 19%, followed by news chatbots at 16%, and then things like AI audio and AI video, generally more seen in the USA, Japan and Argentina. And in many ways, if you look at the industry trends and what has happened, that's probably not very surprising, because AI summaries really are the low-hanging fruit, something that's not necessarily hard to implement, and a very obvious way of using AI in a news context, and chatbots are used in a similar way. Things like Ask the Post AI, which is exactly that, a chatbot you can ask, are something that has been talked about for a long time, especially since those chatbots became more publicly available. So it's probably not surprising that, among people who have seen any of those features, these are the two most common ones.

Mitali: Is there greater nuance in terms of how comfortable people are with AI's role in news specifically? And within that, are there certain areas that they're more comfortable with, versus spaces where they're not comfortable seeing AI as part of the process?

Felix: Yeah, I mean, that's one of the key things we did find, with Rasmus and Richard really picking up on this: there is this comfort gap. Across countries, only 12% in our survey say that they are comfortable with news made entirely by AI; that rises to 21% for news made mostly by AI with some human oversight, to 43% when news creation is human-led with some help from AI, and it tops out at 62% for entirely human-generated news. So there seems to be this clear gap between completely human-made news, handmade news, whatever you want to call it, and news entirely made by AI. That gap has also widened since 2024 in many ways. It also varies a bit by country: the gap, as we find, is largest in the UK and Denmark, and a lot smaller in Japan and Argentina. And, something that we've also found in past research, for instance led by a wonderful colleague, Amy Ross Arguedas, it changes from task to task. People are much more comfortable with back-end tasks, so things like spelling, grammar, editing and translation, and comfort is a lot lower for rewriting things for different audiences or for creating realistic images, artificial presenters or authors, where in the latter cases only 26% and 19% respectively said that they are comfortable with this.

I think it's also interesting to think about the transparency aspect, which is quite important to a lot of news organisations, where they try to make clear if they're using AI and where that is happening, because they want audiences to understand, and in many ways that has been seen as a good recipe for hopefully increasing trust and helping people understand what is happening. Unfortunately, this is again an area where we find that only 19% across countries say that they see AI labels daily, despite 77% of those people also saying that they use news daily. So despite some very common news usage, it seems that most people actually don't see AI labels. Again, there's a bit of variation by country, but that's clearly an issue for publishers, because, yes, you can be transparent, you can make those efforts, but if no one sees them, what effect will that have? And at least for now, it seems that most people actually don't recall seeing those labels.

Mitali: What do our findings point to in terms of how people think AI could potentially transform the news? And again, are there country variations in those expectations?

Felix: Yeah, in many ways that's again where we see a continuation of what we already found last year. We could say lots of attitudes have hardened a bit. When we ask people about the expected benefits, there still is the view that news will be cheaper to make with the help of AI, and more up to date. But there are also strong expected downsides, where people think it will be less transparent and less trustworthy. The picture is much the same as last year, just that attitudes have slightly sharpened, so people seem to have become more entrenched in their views. There's a bit of a country pattern there, where Japan and Argentina show people being certainly more positive, and the UK is again a country where people are more negative and expect the news to become slightly worse for them with respect to AI use by news organisations.

Mitali: Just looking at the core elements of this contract between journalism and the public: we've explored trust, we've explored reach. On confidence, Felix, how much confidence do people have in journalists and newsrooms using AI responsibly? Because, as it should be, a lot of the conversation is around the ethics of this usage, along with the technological developments.

Felix: That's right, and unfortunately it seems like, again, I'm the bearer of slightly bad news for news. In many ways, human checks are seen as something that happens only on a very limited basis. Only 33% in our surveys across countries think that editors always or often check AI outputs. It's a lot higher in Japan, where 42% say so, and in Argentina it's at 44%, but in the UK only 25% of people think that AI-generated outputs are checked by an editor before they go into production, before people get to see them, which is clearly indicative of the fact that people don't seem to have a lot of trust in a lot of news organisations to use that technology in a responsible way. In many ways, if you look more closely at the link with trust, those who strongly trust news organisations are far likelier to believe that those checks happen, at 57%, versus only 19% among the strong distrusters. So that's a very clear finding there: basically, existing trust in news organisations and in news also shapes the way you think about how those organisations are using AI. And in many ways, this also continues when we ask people whether they expect differences between news organisations in terms of how responsible they are in their AI use, where we do see a lot of difference. So 43% say that they actually expect different outlets to do differently good jobs at handling AI responsibly, versus 28% who say that they think there will only be small differences. So again, people expect some outlets to be a lot better at adopting and integrating AI in a responsible way. And that's not necessarily pretty reading, I would say, for quite a lot of publishers out there.

Mitali: But perhaps important reading for the news industry and for publishers, Felix.

Felix: Medicine serves a purpose, yes.

Mitali: Thanks very much for joining us today. It was a pleasure.

Felix: Thank you.
