
The final panel discussion of our event, featuring our Acting Director, Mitali Mukherjee, along with Chris Summerfield, Victoria Nash and Roxana Radu. | John Cairns
On 26 March we hosted AI and the Future of News, a conference with experts from the University of Oxford and journalists from around the world. Hundreds of people joined us online for this gathering, which included presentations from our researchers as well as panel discussions on how AI is used in newsrooms, how it’s being covered and how it’s impacting society as a whole. Here’s a summary of the panel discussions at the event.
Our speakers were critical of the AI coverage they have seen so far, pointing out that it’s often incomplete, focusing too much on hype, and not delving deeper into the issues related to this topic.
Jazmín Acuña, co-founder and editorial director of Paraguayan newspaper El Surtidor, said stories about AI in Latin America tend to lack a critical angle. “Most stories focus on products and capabilities. And when the stories are not part of the tech section, they are usually about how the government or private firms are adopting AI,” she said.
This was echoed by Sannuta Raghu, head of the AI Lab at Scroll in India, who said that Indian coverage was often led by engineering influencers and independent journalists rather than mainstream media.
According to UK-based Victoria Nash, Director of the Oxford Internet Institute, negative stories about AI and technology are the ones that drive most of the attention. “For me, the stories that are missing are the positive ones. We have a good degree of coverage of the risks, particularly when it comes to children and young people, but we don’t cover the potential benefits,” she said.
Katharina Schell, deputy editor-in-chief of Austrian wire service APA, said AI coverage in Austria and Central Europe is also still very hype-driven and technology-focused.
“The attitude that's seeping through all this coverage is either euphoric or scared, and both of these stances are not ideal,” she said. “Sometimes I have the impression that the coverage sounds almost clueless, so it’s very much buying into the hype, but then asking what this will do to us. It often ends with this question, and I don’t think that’s good journalism.”
One example of an incomplete AI story, Schell said, is the coverage around Italian newspaper Il Foglio’s experiment with an AI-generated supplement. The articles she’s read about this left Schell with many questions: “What is the objective of this experiment? What's the workflow? Is there a human in the loop? Is the journalist in control? These questions weren’t asked.”
For Schell, every journalist should learn the basics of reporting on AI. “The basic grasp is not rocket science,” she said.
According to Acuña, the stories that are still missing are the ones focused on human rights: “We are not asking enough, ‘What is the social purpose of certain forms of technology? At what cost are they advancing in our region?’”
Acuña’s newsroom is reporting on the health problems experienced by a community near a noisy data centre. “After publishing that report, we would like to give people decibel monitors, so that they can measure sound levels in their communities and hopefully have a crowd-sourced follow-up report,” she said.
During the event’s first panel discussion, our own Felix Simon explained that AI is the most recent (and perhaps most significant) addition to the trend of tech companies attracting and controlling access to audiences.
“Platform companies are increasingly controlling access to audiences and publishers have found various ways already of dealing with this, from full blown denial, in some cases, to collaboration to confrontation,” he said.
This trend started with search engines and social media platforms, which collect vast amounts of data, helping them position themselves on the frontlines of AI development. News organisations now find themselves competing for attention not only with social networks but also with new AI-powered platforms that position themselves as low-cost information providers.
Andrew Strait, former associate director at the Ada Lovelace Institute, spoke critically of the way AI companies are framing their quest to build artificial general intelligence (AGI), a nebulous concept widely understood to mean AI that is as good as, if not better than, a human across a range of capabilities.
“The way that AGI is being talked about, both in the press and the marketing materials, and even by some governments unfortunately reflects this idea that everything will be worth it in the end,” he said. “But we need innovation that’s moving towards very specific products that have a very clear value add for society, for people, for certain communities, not this kind of everything system.”
Strait also said that an ‘opt out’ solution is not going to work: as we’ve seen with web crawlers, it’s very difficult for news organisations to separate themselves and their content from the AI sphere. There are other technical solutions news organisations can pursue, such as secure multiparty computation, where someone can access and train a model on a publication’s data without the data ever leaving its platform or a secure device.
Some of our speakers also discussed their own AI experiments and their implications for news organisations.
Schell from Austria’s APA said that AI experiments within newsrooms need to have a purpose beyond just seeing what the technology can do. “Experiments are fine, and they are great fun,” she said. “You learn a lot, but what's going to happen when you're finished with the experiment? This is something we see a lot in Europe. You have shiny prototypes, proofs of concept and so on, and then you don’t know what to do with them.”
It’s also important to acknowledge the limitations of AI applications, Raghu said. She gave the example of Scroll’s ‘human-in-the-loop’ policy, which means the amount of content powered by AI systems is constrained by how much her colleagues can actually monitor and verify.
Our speakers presented a few newsroom experiments with artificial intelligence as well as reporting projects where AI has had a prominent role. Acuña explained how her outlet, El Surtidor, had created a chatbot to explore the story of women jailed for drug-related crimes.
“All the answers that you get in the chatbot are words from the interviews [we did with these women],” she said. “We didn’t want to run the risk of hallucinations and misrepresent such a sensitive story.”
Our Austrian alumna Schell, who’s the author of this recent project on how to label content produced with the help of AI, explained how APA is now developing a tool that provides alternative text for infographics. “It’s sort of a blind spot in the efforts to provide access to information for visually impaired people,” she said.
Journalist and engineer Dylan Freedman presented some of the recent AI-assisted articles from his team at the New York Times. He stressed that AI is extremely useful for tasks akin to finding a needle in a haystack, and explained how it allowed them to analyse hours of interviews with nominees Pete Hegseth and Tulsi Gabbard before their US Senate confirmation hearings, and hundreds of Zoom calls for a long exposé of the election-denying movement.
“It would be physically impossible to listen to all of these videos,” he said. “But AI was able to help pinpoint examples there. We then found examples and talked with the reporters and refined what we were looking for and were able to get very good examples that we were able to show in our story.”
He also presented this piece on how some keywords are disappearing from US government websites as a project that mixed machine learning and old-fashioned reporting. His team created over 5,000 screenshots of government websites, used AI to extract the text from those screenshots, and compared each site’s text from before and after Trump’s inauguration.
“Part of the challenge is that a word like equity can be used in various contexts,” Freedman said. “But large language models are able to go beyond keyword search by intuiting if a change is ideologically motivated. It was also useful for finding changes in Spanish and other languages.”
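Freedman did not share code, but the workflow he described (snapshot the pages, extract the text, diff the two versions, then let a model judge the changes) can be sketched roughly as below. The term list, helper names and the keyword stub standing in for the LLM step are illustrative assumptions, not the Times’s actual pipeline.

```python
import difflib

# Illustrative only: the terms the NYT piece tracked are not specified here.
TRACKED_TERMS = {"equity", "diversity", "climate"}

def changed_lines(before: str, after: str) -> list[str]:
    """Lines added or removed between two text snapshots of the same page."""
    diff = difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm="")
    return [l for l in diff
            if l[:1] in "+-" and not l.startswith(("+++", "---"))]

def flag_change(line: str) -> bool:
    """Stand-in for the LLM step: in the real workflow a language model would
    judge whether an edit looks ideologically motivated; this stub only checks
    whether a tracked term was touched, purely for illustration."""
    return any(term in line.lower() for term in TRACKED_TERMS)

# In practice the text would come from OCR of the before/after screenshots.
before = "Our office promotes equity in access to public health services."
after = "Our office promotes access to public health services."

flagged = [l for l in changed_lines(before, after) if flag_change(l)]
print(flagged)  # ['-Our office promotes equity in access to public health services.']
```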
Liz Lohn, director of product, AI and editorial tech at the Financial Times, explained how her newspaper had created an AI playground, “a pretty basic internal tool that connects our existing content, either published or still in draft, to an LLM in a safe way that doesn’t send that content in any places we don’t want it to go, and allows everyone in the newsroom to experiment with prompts.” This piece is a good primer on the tool.
Lohn presented two audience-facing projects the FT is now experimenting with. The first one is a discussion prompt that invites readers to engage in the newspaper’s comments section. The model reads the article and suggests a relevant question that pops up on readers’ screens two-thirds of the way through the piece.
“Users who engage with comments engage more with the content itself and are more likely to continue subscribing,” Lohn said. “About 60% of people don't even know that the comment section exists. So the problem was around the section’s visibility in the first place.”
The second project she presented was a feature showing three AI-generated bullet points at the top of the FT’s pieces. Lohn’s team realised that readers were pasting FT articles into ChatGPT and asking the chatbot to summarise them. So they decided to do this themselves in a way that minimises hallucinations and keeps journalists at the wheel.
The first version only showed one bullet point, and readers had to expand or collapse the rest. But the team didn’t want the summary to get in the way of people who didn’t want it, so now readers need to click to generate these summaries.
“The most exciting outcome is that no one has needed to change anything factual and it’s been live for several months now. Only some small stylistic changes have been made to the actual output,” she said.
Lohn stressed they hadn’t seen much of a negative impact on audience engagement. “Not even on the article depth,” she said. “We have even seen a little bit of a positive impact on overall engagement.”
Nathalie Malinarich, director of growth, innovation and AI at the BBC, provided many details about an internal deepfake detector used by journalists at BBC Verify. For now, they are testing its accuracy while carrying out the human checks and processes they would normally do. She said the tool was around 90% accurate and explained they would like to make its results more explainable, detailing how it knows something is fake.
Malinarich also pointed to sports as an area they’ve experimented with. They’ve generated transcripts of hundreds of commentaries from BBC Local Radio, identified the key moments of each match and, after human checks, published them as live text updates on the BBC Sport app.
News organisations have to reckon with AI to ensure the longevity of the information ecosystem in which they operate. One of the aspects that’s changing is the relationship between audiences and news.
Strait from the Ada Lovelace Institute highlighted the growing prominence of AI chatbots, which are especially popular among young people and could eventually replace search. This is concerning, he said, because it is much harder for audiences to get accurate answers to their queries from generative AI’s probabilistic systems.
“These are not tools that are deterministic, that are designed to give a single answer,” he explained. “By their structure, by design, they are stochastic and probabilistic. That means that if you query it 100 times, it's going to give you 100 slightly different versions of the same thing.”
Klaudia Jaźwińska, researcher at the Tow Center for Digital Journalism, detailed some of the research her organisation has carried out on AI and journalism. It found that AI chatbots were confidently wrong rather than declining to answer; often cited news content inaccurately, even when given verbatim extracts; could surface information from news articles whose publishers had deliberately blocked their crawlers; and that content licensing deals were no guarantee of accurate citation.
Another issue that came up in the panels is how this push-and-pull between Big Tech and news organisations will impact the news ecosystem. Various panellists pointed out that tech companies have too much influence over potential legislation, pitting the promise of economic growth against regulation.
Matt Rogerson, director of global public policy and platform strategy at the Financial Times, was concerned about the message Big Tech is sending to the US government: that the industry is essential to national economic security.
“The challenge that we've got is that the level of power that those companies have over the government now is quite astonishing. Those companies are now arguing to the White House that copyright should basically not exist because otherwise, China will win,” he said.
The question of copyright is at the heart of many of these legislative discussions between Big Tech and the news industry. Many news organisations are striking deals with AI companies. As those deals are confidential, though, there is no expectation that the industry will converge on common licensing agreements.
Simon said that, while he understands why these deals are secret, the lack of transparency creates problems for a news ecosystem in which collective action would be useful.
“I can see why both publishers and AI developers don’t want to disclose the details, but it creates a big collective action problem for all players involved. We have absolutely no idea what the value of that data is. It's very difficult to independently assess it,” said Simon, who argued for more clarity in this piece that we published a few months ago.
Our panellists expressed concerns that the use of AI could disadvantage speakers of minority and Indigenous languages, many of which are primarily oral.
Acuña explained that over seven million people speak the Indigenous language Guarani in Paraguay. Since it is primarily an oral language, there are few written records that can be used to train AI models. Raghu pointed out that this can be a problem even when a language is widely spoken.
“There are 500 million Hindi speakers. It baffles me that the training data used for a language like Hindi is still so archaic,” she said. “DeepSeek gives you gibberish. ChatGPT gives you very, very archaic outputs: words we don't use today, but probably were used in the early 1900s, late 1800s.”
Roxana Radu, Associate Professor of Digital Technologies and Public Policy at the Blavatnik School of Government, pointed out that there has not been a comprehensive acknowledgement of who is left out of the AI conversation. This creates additional divides on top of the digital divide we were already used to.
“There’s a lot at the level of international disparities,” she explained. “Not everybody has access to AI in the same way. Some countries are building sovereign capabilities and others have not even started that conversation.”
Chris Summerfield, Director of the UK AI Safety Institute, said that all industries are still trying to reckon with what AI is and what it means for their sectors. He explained how the idea of AI doing the job of a person still persists. He challenged that notion by putting forth an alternative way of thinking about AI: as a part of the natural course of institutional development and as a tool for moulding and shaping interactions among humans, rather than something that will replace us.
“We live in a Wild West moment in which there is a chocolate box of AI tools, which are available to those who are willing and able to use them,” he said. “But what we haven’t followed up with is a series of standards or protocols that define exactly what best practice is for implementation in each sector.”
The panellists also grappled with the ethical implications AI can have for society. Nash, from the Oxford Internet Institute, pointed out the influence Big Tech has on academia as more research funding comes from the industry. She said academics must make sure this does not affect the kind of questions they can ask about the industry, and highlighted the importance of ethical guidelines so that individuals are not exploited by AI systems.
“We need to focus on the ethical guidelines we need to ensure that all individuals in their everyday online activities are treated responsibly, so that moments of financial or emotional vulnerability are not capitalised upon,” she said.