AI and the Future of News 2026: what we learnt about its impact on newsrooms, fact-checking and news coverage
Eduardo Suárez, Tai Nalon, Clara Jiménez Cruz and Chris Morris, during the session on AI and fact-checking. | Marina Adami
On 17 March we hosted AI and the Future of News, a conference with journalists and experts from the University of Oxford and around the world. More than three thousand people signed up to follow this gathering, which included presentations from our researchers as well as panel discussions on how AI is used in newsrooms, how it’s being covered and how it’s impacting society as a whole. Here you can find a reading list with projects, articles and research shared or mentioned by the speakers. Below is a summary of the panel discussions at the event.
Jump to: How newsrooms cover AI | How investigative journalists use AI | How AI is transforming fact-checking | How AI is transforming society | How the Guardian is using AI
Watch the event
1. How newsrooms cover AI
The first panel, chaired by our director of leadership development Federica Cherubini, discussed news coverage of AI, including the stories that are not being told. It featured journalists Joanna Kao from the Pulitzer Center, Niamh McIntyre from the Bureau of Investigative Journalism, and Akshat Rathi from Bloomberg, who joined online.
McIntyre challenged the jargon and mystification that still appear in many news reports about AI, which she said make the subject more intimidating for non-expert journalists and serve the interests of tech companies over news audiences.
Kao agreed. “A lot of what we see”, she said, “are actually big tech narratives, stories that make AI seem really scary, really complicated, using a lot of big words. This intimidates people from asking more questions about how the technology actually works.” She pointed to another problem she sees in news coverage: journalists talking about AI without explaining what the ‘AI’ they’re referring to actually is.
Rathi, a climate journalist, drew parallels between his beat and AI reporting: “Looking at what's happened with climate is a real warning for how we should be thinking about AI. Yes, it might have benefits, but at what cost?”
Both issues are closely intertwined because some of the protagonists of climate and AI stories are the same, Rathi said: “It was these tech companies that were at the forefront of the sustainability story, trying to say, ‘We will be net zero by 2030, we will have all our data centers powered by renewable energy.’”
These same companies, he added, have now stepped back from their climate commitments, in some cases reversing them, and have pivoted to AI.
How can journalists demystify AI? Kao recommended that reporters discuss their approach to AI stories with their peers. Collaboration can address other issues too, she said: working with researchers from academia or think tanks can fill the information gaps left by scant self-reporting from AI companies.
“A lot of the media coverage about AI shows up without any experts being noted and I think that's a real red flag for how [some AI] coverage is happening,” Rathi said.
Rathi encouraged journalists to collaborate with colleagues from other desks. “One of the things that has been the most powerful for our newsroom evolution on climate,” he said, “has been this idea where the climate desk and the environment desk don't just sit on a desk and do their own stories, but actually work with desks across the newsroom.”
What prevents journalists from finding the AI stories that are not being told? McIntyre pointed to the lack of public information, and said her investigations rely on laborious source development.
“What I've found quite useful in my own work is actually going to the lowest-paid workers in a tech company, like the data workers, the labelers, the moderators, etc.,” she said, adding that these workers are often forgotten and may be more likely to speak to reporters than better-paid colleagues.
Another way to find stories on AI, Kao said, “is just following up with what people say. We have a lot of stories about announcements where people claim things will happen, but we don't often then follow up and see whether those claims actually came to be.”
Go deep:
- A recent piece from Akshat: 'How Big Tech is obscuring AI’s climate impact'. | Read
- A recent piece from Niamh and other colleagues: 'King of slop: How anti-migrant AI content made one Sri Lankan influencer rich'. | Read
2. How investigative journalists use AI
The second panel of the conference, moderated by our own Tania Montalvo, looked at the evolving role of AI in investigative journalism with the help of Elfredah Kevin-Alerechi from the Colonist Report, Ryan McNeill from Reuters, and Sondre Ulvund Solstad from The Economist.
Kevin-Alerechi explained how AI has allowed smaller news outlets like hers to expand their work despite their limited resources.
She said that AI has allowed her team to enhance their investigative work, particularly in areas such as data analysis and content generation. But she cautioned against relying too much on AI without verification. "The biggest opportunity is that it will enhance our work as investigative journalists, but then we should not give it 100% trust," she said.
This point was echoed by McNeill, who works on geospatial investigations, and Ulvund Solstad, a senior data journalist. As vibe-coding becomes more widespread, McNeill warned journalists against blindly trusting anything AI spits out.
“You have to be able to justify every single coding decision that you make,” he said. “So if you're running around and you're coding and you don't know what it's doing, you won’t be able to catch errors.”
Ulvund Solstad agreed that journalists shouldn’t embrace vibe-coding without inspecting and understanding the underlying code. But he acknowledged that AI can allow many people, not only journalists, to enter the world of coding, and he encouraged everyone to be transparent about the process they follow when investigating huge amounts of data.
All panellists agreed that while AI is a powerful tool, its effectiveness and trustworthiness are contingent upon careful management, transparency, and the quality of the underlying data. Journalists need to be cautious about AI's limitations and ensure that they verify and fact-check the information it produces, especially when covering complex and often under-researched topics.
Kevin-Alerechi, for example, explained that AI’s ability to generate useful insights depends heavily on the quality and availability of the data it is trained on. When covering rural or underserved areas, particularly in developing countries, the lack of reliable or accessible government data becomes a significant problem.
“AI cannot give you data that is not already available,” said McNeill. “That's why I tell journalists that AI cannot take your job – because you can go to the field.”
McNeill described a recent Reuters investigation into atrocities under the regime of Bashar al‑Assad. The investigation relied heavily on AI‑enabled analysis of vast, chaotic evidence obtained by reporters on the ground, who took tens of thousands of pictures of documents from the regime’s security forces. Using custom-made AI tools, Reuters journalist Allison Martell built the infrastructure to translate, index and search these documents, exposing the regime’s plan to move a mass grave to hide its atrocities.
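Reuters has not published Martell’s tooling, so the snippet below is only a minimal sketch of what a translate-index-search pipeline over photographed documents could look like: the ocr() and translate() functions are hypothetical stubs for whatever OCR and machine-translation services a newsroom might plug in, and SQLite’s built-in FTS5 full-text index stands in for the search layer.

```python
import sqlite3
from pathlib import Path

# Minimal sketch of a translate-index-search pipeline for photographed
# documents. ocr() and translate() are hypothetical stubs, not real APIs.

def ocr(image_path: Path) -> str:
    """Stub: extract the original-language text from a document photo."""
    raise NotImplementedError("plug in an OCR engine here")

def translate(text: str) -> str:
    """Stub: machine-translate the extracted text into English."""
    raise NotImplementedError("plug in a translation model or API here")

def build_index(image_dir: Path, db_path: str = "documents.db") -> sqlite3.Connection:
    """OCR, translate and index every document photo in a directory."""
    conn = sqlite3.connect(db_path)
    # FTS5 gives fast full-text search across tens of thousands of documents.
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(filename, original, english)"
    )
    for img in sorted(image_dir.glob("*.jpg")):
        original = ocr(img)
        conn.execute(
            "INSERT INTO docs VALUES (?, ?, ?)",
            (img.name, original, translate(original)),
        )
    conn.commit()
    return conn

def search(conn: sqlite3.Connection, query: str) -> list[tuple[str, str]]:
    """Full-text search over the translated documents."""
    return conn.execute(
        "SELECT filename, english FROM docs WHERE docs MATCH ?", (query,)
    ).fetchall()
```

The design point is the separation of stages: once every photo has been transcribed and translated into one searchable table, reporters can run keyword queries across tens of thousands of documents without handling the originals again.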
Go deep:
- The AI-powered Reuters investigation into atrocities in Syria. | Read
- Our story on how Kevin-Alerechi’s Nigerian newsroom used AI for a flooding investigation. | Read
- An example of Ulvund Solstad’s work around the war in Ukraine. | Read
3. How AI is transforming fact-checking
AI is reshaping nearly every newsroom beat, but the transformation has been particularly acute for fact-checkers, who are tasked with debunking a growing volume of AI-generated falsehoods while facing a technology that is both a disruptive force and a powerful new instrument.
In a panel moderated by our own Eduardo Suárez, Clara Jiménez Cruz from Maldita, Tai Nalon from Aos Fatos, and Chris Morris from Full Fact examined how fact-checking is evolving amid rapid advances in generative AI and the proliferation of misinformation.
All three described the rise of generative AI as a watershed moment comparable to the early days of the internet: disruptive, democratising, and destabilising. On one hand, AI has accelerated the production and spread of misleading content. On the other, it has enabled small fact-checking teams to operate at a scale previously unimaginable.
“We are in danger of getting to a place where no one believes anything they read or see or hear anywhere,” said Morris, who is the CEO of the British fact-checker Full Fact. “But there are things that this technology can do for us which allow us to address this problem at a huge scale in ways that even a newsroom of 100 people couldn’t do.”
Nalon, the founder of Brazilian fact-checker Aos Fatos and a Journalist Fellow of the Reuters Institute, illustrated the scale of the shift. In 2025, 16% of the 619 claims her team fact-checked involved AI-generated content, compared with 7% the previous year. Much of this growth was driven by fabricated visuals.
“[In Brazil] AI-generated fast content reached over 32 million views across TikTok,” she said. “Additionally, there were 2.1 million interactions, including likes and shares, on Facebook and Instagram related to AI-powered disinformation. These overall figures help us understand the scale of what is happening and the magnitude of what we are dealing with.”
Jiménez Cruz, CEO and co-founder of Spanish fact-checker Maldita and Chair of the European Fact-checking Standards Network, noted that AI has made the job more demanding, often forcing her team to respond to obvious AI-generated viral content. But she also said it has enabled the development of new tools.
Maldita and Full Fact have developed systems that use large language models to detect and classify claims across millions of sentences. A newer iteration integrates a generative AI component built around a ‘harms’ model to prioritise the most damaging narratives.
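Neither organisation has published its pipeline, but the broad shape of such a system is easy to sketch: split incoming text into sentences, classify each one as a checkable claim or not, then rank the claims by estimated harm. In the illustrative Python below, keyword heuristics are hypothetical stand-ins for both the large language model classifier and the ‘harms’ model a production system would call.

```python
import re
from dataclasses import dataclass

# Illustrative claim-detection pipeline. In production, classify_claim() and
# harm_score() would call trained models or an LLM; keyword heuristics are
# hypothetical stand-ins here.

CLAIM_MARKERS = re.compile(
    r"\b(\d+%?|million|billion|caused|proved|according to)\b", re.IGNORECASE
)
HARM_TOPICS = {"vaccine": 3, "election": 3, "migrant": 2, "climate": 1}

@dataclass
class Claim:
    sentence: str
    harm: int

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter on end-of-sentence punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def classify_claim(sentence: str) -> bool:
    """Stand-in for an LLM classifier: does this sentence assert a checkable fact?"""
    return bool(CLAIM_MARKERS.search(sentence))

def harm_score(sentence: str) -> int:
    """Stand-in for a 'harms' model: how damaging would this claim be if false?"""
    lowered = sentence.lower()
    return sum(weight for topic, weight in HARM_TOPICS.items() if topic in lowered)

def prioritise(text: str) -> list[Claim]:
    """Extract checkable claims and sort them, most potentially harmful first."""
    claims = [Claim(s, harm_score(s)) for s in split_sentences(text) if classify_claim(s)]
    return sorted(claims, key=lambda c: c.harm, reverse=True)

if __name__ == "__main__":
    sample = ("The weather was lovely. 90% of migrants arrived illegally, "
              "according to the post. Vaccines caused a billion deaths.")
    for claim in prioritise(sample):
        print(claim.harm, claim.sentence)
```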
Aos Fatos has also used AI to develop its own tools. One of those tools is Fátima, an AI chatbot that answers audience questions. They are now developing Busca Fatos, a newsroom tool designed to fact-check live coverage and provide real-time context to audiences.
The panel also emphasised that misinformation spreads through emotionally resonant narratives rather than isolated facts. “I don’t think we can construct narratives that are as emotionally engaging as disinformation,” said Jiménez Cruz. “I do think we need to work more through trusted proxies: community figures and influencers who already have established connections with these audiences.”
Go deep:
- An essay arguing AI’s impact on elections might be overblown. | Read
- An investigation from Maldita on the TikTok polarisation industry. | Read
- A piece on how Full Fact used AI during the 2024 UK election. | Read
- Aos Fatos’ story on how Big Tech lobbying shaped Brazilian lawmakers’ positions on AI rules. | Read
4. How AI is transforming society
In a wide-ranging panel chaired by our researcher Felix Simon, panellists with backgrounds in safety, policy, economics and law discussed AI’s impact on society. The discussion featured Keegan McBride, director of science and technology at the Tony Blair Institute for Global Change; Max Kasy, professor of economics at the University of Oxford; Natali Helberger, professor in law and digital technology at the University of Amsterdam; and Carina Prunkl, senior research fellow at the University of Oxford’s Institute for Ethics in AI.
A lot of the discussion revolved around regulation. Helberger stressed her concerns about a growing divide over AI policy between policymakers and AI adopters. “We innovate over the heads of people,” she said. “That potentially could lead to an AI legitimacy bubble of regulations that pretend to promote the public interest. But if this bubble pops, what I fear we might see is democratic disruption and the loss of trust and legitimacy of the institutions that purport to regulate AI for the greater good.”
Kasy said he was sceptical that AI would benefit the greater good: “There were massive investments of hundreds of billions of dollars to build these systems and data centers and investors want their money back from somewhere, and it seems it's going to come from ads, warfare and political control and surveillance.”
For him, regulators should scrutinise the behind-the-scenes decisions that shape how AI systems operate and what they do. “It's often framed in these ‘man versus machine’ terms that Hollywood and tech CEOs love,” he said, “but the story is not about us losing control of the machine, but about who controls the machine.”
Another danger that tends to fly under the radar, according to Prunkl, is the one presented by open-source models. As these are publicly available to be downloaded and modified, anyone could remove the safeguards that are required in proprietary models such as Gemini or ChatGPT.
McBride argued that policy-makers shouldn’t underestimate the dangers of falling behind: “We can have debates [on whether] it is good or bad. But you can't ignore the fact that there are so many resources being poured into this… So the question is: what do you lose if you don't take this seriously? And I think there's quite a lot at stake if you fall behind on this, because exponential functions are quite scary.”
Kasy pushed back on the idea that AI is improving exponentially, saying that the reason GenAI saw a jump in its abilities is because of improvements in the training data and computation, not because it got ‘smarter’. “At least within the current paradigm,” he said, “I think it's unlikely that anything like an exponential explosion is happening.”
“The big picture story is one where we do have a lot of agency,” said Kasy, “and so often a lot of this discourse is divided between the pessimists and the optimists, the doomers and the boomers. But the reality is much more one where there's a million choices to be made along the way, both in the development and the deployment and usage side of this technology.”
“AI doesn't overcome us,” agreed Helberger. “We make choices.”
The panellists had different views on regulation, with McBride doubting its effectiveness and Kasy in favour of employing different policy levers to distribute power more equitably and give people impacted by AI more of a say over how it works.
“The problem at the moment,” Helberger said, “is that it's the AI companies that have the epistemic authority to decide what is a risk and what is not a risk, so in effect, the communities have no role in these risk assessments.”
Ekaterina Hertog, associate professor in AI and society at the Oxford Internet Institute and Institute for Ethics in AI at the University of Oxford, also discussed the contours of regulation during the following panel.
Asked about social media bans for teenagers, she acknowledged it’s an emotive topic for many people, but warned that implementing bans could endanger young people in other ways. Normalising online age verification, for example, could erode their privacy and expose them to bad actors harvesting their personal identification through malicious sites.
“If we introduce that, I would like to ensure that it doesn't make the situation worse and doesn't expose young people to more risks,” she said.
Go deep:
- The International AI Safety Report Prunkl co-authored. | Read and download
- A book authored by Kasy: The Means of Prediction. | Check out and buy
- A report co-authored by McBride: 'Sovereignty in the Age of AI'. | Read
5. How the Guardian is using AI
During a final session chaired by the Institute’s director, Mitali Mukherjee, Chris Moran from the Guardian explained how his approach to AI has evolved in the past few years and why the newspaper’s focus shifted from building products to providing journalists with training and internal tools.
“About a year ago I realised the really consequential AI tools weren’t the experimental ones we were building, but the generic systems any journalist could open in a browser,” Moran said. “That’s why we shifted our focus to mandatory training, so everyone understood what these probabilistic models are doing, and what can go wrong when you use them in journalism.”
The kind of training the Guardian provides, Moran said, is not about “10 prompting techniques that will change your life” but about teaching journalists how AI models work, how they often rely on low-quality sources, and why reporters should be sceptical about any output they produce. The Guardian recently updated its AI principles and its editorial code to reflect the fact that AI is now widely available to journalists and audiences alike.
Unlike the FT or the Washington Post, the Guardian hasn’t created a public-facing chatbot and remains cautious about building one in the near future. Why? Moran pointed to issues around accuracy, values and voice, and suggested it would be difficult to measure its quality.
“It might be based on our work, but it isn’t edited, commissioned or accountable in the way our journalism is,” Moran said. “Chatbots generate novel answers that can change from one query to the next. Traditional journalism doesn’t work like that. Our pieces are fixed, attributable and can be scrutinised. That difference in accountability really matters to us.”
On the other hand, the Guardian has employed AI tools in some of its more complex reporting projects.
An example shared by Moran is this investigation into 100 years of anti-immigration rhetoric in the British Parliament. The project is the product of a collaboration between the data journalism team, in-house data scientists, and researchers from University College London. They built a custom machine learning model tailored to immigration‑related language and sentiment, used an LLM for annotation, and examined language changes over a century and subtle shifts in how immigration is framed.
“This project couldn’t have been done without AI”, Moran said. “Most of the work was carefully building and tuning a model over more than a year, and being very transparent about the methodology so readers can see how it was done.”
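The Guardian’s custom model is not public, but the core idea, measuring how the framing of one topic shifts across decades of speeches, can be illustrated in miniature. In this sketch the FRAMES keyword lookup is a crude hypothetical stand-in for the trained classifier, and the decade-level aggregation mirrors the kind of longitudinal comparison the investigation relies on.

```python
from collections import Counter, defaultdict

# Illustrative framing tracker. The Guardian trained a custom model for this
# task; the FRAMES keyword lookup below is a hypothetical stand-in.

FRAMES = {
    "threat": {"flood", "swamp", "invasion", "illegal"},
    "economic": {"jobs", "wages", "labour", "workers"},
    "humanitarian": {"refugee", "asylum", "sanctuary", "persecution"},
}

def classify_speech(text: str) -> Counter:
    """Count how often each frame's vocabulary appears in one speech."""
    words = set(text.lower().split())
    return Counter({frame: len(words & vocab) for frame, vocab in FRAMES.items()})

def frames_by_decade(speeches: list[tuple[int, str]]) -> dict[int, Counter]:
    """Aggregate frame counts per decade from (year, speech text) pairs."""
    totals: defaultdict[int, Counter] = defaultdict(Counter)
    for year, text in speeches:
        totals[(year // 10) * 10] += classify_speech(text)
    return dict(totals)

if __name__ == "__main__":
    corpus = [
        (1925, "we cannot let foreign workers undercut wages and jobs"),
        (1968, "a flood of newcomers threatens to swamp our towns"),
        (2015, "we owe sanctuary to every refugee fleeing persecution"),
    ]
    for decade, counts in sorted(frames_by_decade(corpus).items()):
        print(decade, dict(counts))
```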
Moran also presented two internal tools: one developed to summarise speeches and compare stances from different members of Parliament, and an internal chatbot to help journalists find insights in the Guardian’s archive. He also presented AI-powered tag pages, which the Guardian launched on the day of our event.
The Guardian used an AI model to extract storylines from all the articles published under a tag and to generate very short titles for each of these storylines. The goal was to increase clarity and context in these pages, and to guide readers through the key narratives.
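The Guardian has not detailed how these pages are generated, but a toy version of the task, bundling a tag’s articles into storylines and giving each bundle a short title, might look like the sketch below, with keyword overlap standing in for the AI model at both steps.

```python
from collections import Counter

# Toy storyline builder for a tag page. The Guardian's production system uses
# an AI model to extract and title storylines; keyword overlap and word
# frequency are hypothetical stand-ins for both steps here.

STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "and", "for", "as", "by"}

def keywords(headline: str) -> set[str]:
    """Lower-case, de-punctuate and drop stopwords from a headline."""
    return {w.strip(",.") for w in headline.lower().split()} - STOPWORDS

def group_storylines(headlines: list[str], min_overlap: int = 2):
    """Greedily bundle headlines that share at least min_overlap keywords."""
    storylines: list[tuple[set[str], list[str]]] = []
    for h in headlines:
        kw = keywords(h)
        for vocab, members in storylines:
            if len(vocab & kw) >= min_overlap:
                members.append(h)
                vocab.update(kw)
                break
        else:
            storylines.append((kw, [h]))
    return storylines

def short_title(members: list[str], n_words: int = 3) -> str:
    """Title a storyline with its members' most frequent keywords."""
    counts = Counter(w for h in members for w in keywords(h))
    return " ".join(w for w, _ in counts.most_common(n_words)).title()

if __name__ == "__main__":
    tag_articles = [
        "Water company fined over sewage spills in rivers",
        "Regulator says sewage spills by water company doubled",
        "Drought warning issued for south of England",
    ]
    for vocab, members in group_storylines(tag_articles):
        print(short_title(members), "->", len(members), "articles")
```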
Moran warned against journalists using Claude and other models for tasks in which they have no expertise. But he pointed to accessibility as an area where AI can be very helpful for newsrooms, producing easy-read versions of their articles with clear guardrails and established frameworks.
“AI is extremely well‑suited to transforming well‑structured text,” he said. “Using it to generate alt‑text or easy‑read versions of our articles, under clear frameworks, feels like an obvious public good—so long as we test it carefully.”
The Guardian is one of the founding members of the SPUR coalition, an alliance that also includes the BBC, Sky News, the Telegraph and the Financial Times. “What’s encouraging is that very different publishers are coming together to think about attribution, business models, and how our journalism is used in AI systems,” Moran said. “That kind of collaboration across rivals is rare, and AI makes it necessary.”
Go deep:
- Chris Moran’s piece on how the Guardian is using AI. | Read
- The Guardian’s AI-powered investigation into 100 years of anti-immigration rhetoric. | Read
- Landing page for the SPUR coalition. | Check it out
For more on the research sessions:
- Our research fellow Felix Simon presented findings from our own research on what the public thinks of generative AI in news.
- Our director of research Richard Fletcher presented findings from our research on how UK journalists are adopting AI, a partnership with Neil Thurman and Sina Thäsler-Kordonouri from LMU Munich.
If you want to know more:
- Explore our research on: How audiences use generative AI | How audiences think about news personalisation
- Read our reporting on: How AI is transforming freelance journalism | How AI will reshape the news in 2026 | AI-generated prose | AI’s impact on journalism education | AI language gaps