Rebuilding trust: journalism’s role in an AI-driven world

Sashka Koloff presenting the findings from her Reuters Institute fellowship project at the Frontline Club, London. Credit: John Cairns

When I began my fellowship at the Reuters Institute for the Study of Journalism, I set out to explore how public service media could safeguard trust in the age of artificial intelligence. But it quickly became clear that the challenges facing the media cannot be separated from deeper questions about journalism’s core purpose.

At the heart of this inquiry lies a deceptively simple question: what unique value does journalism offer in today’s information environment? 

The answer may seem obvious. Journalism exists to deliver information that is accurate, relevant, impartial, and independent. Its credibility depends on rigorous methods and ethical standards. These principles form an implicit contract with the public: that journalism can be trusted to seek truth, verify facts, and serve the public interest. 

But that contract is under severe strain. Trust in news is depressingly low: globally, only 40% of people say they trust most news most of the time. Polarisation, information overload, and deliberate misinformation have left audiences feeling disoriented. AI-generated content is confusing an already distrustful public. Political attacks that brand the media "fake news" and journalists "enemies of the people" further erode public confidence.

Journalism’s trust deficit cannot be disentangled from broader societal shifts: the erosion of institutional authority, the strategic manipulation of information, and the fragmentation of public discourse. Research shows that citizens are more likely to distrust news that challenges their worldview. As A.G. Sulzberger, publisher and chairman of the New York Times, noted, in today’s hyper-polarised societies audiences reduce journalism to a binary: “Are you with us, or against us?”

Any journalist looking at the scope of these challenges is likely to feel despair. But there is hope. Over three months, I spoke with newsroom leaders, AI experts and standards editors for ideas on how to reaffirm journalism’s role in a noisy and confusing world.

Trust vs trustworthiness 

What makes journalism trustworthy in the eyes of the public? Unfortunately, there is no clear answer. A helpful starting point, however, is to distinguish between trust and trustworthiness.

Trust is inherently subjective. People often place trust in sources that affirm their worldview, even when they lack accuracy or integrity. In this sense, public trust can be misplaced, rooted more in familiarity or ideological alignment than in factual reliability.

Trustworthiness, by contrast, aims to be objective. It speaks to whether a source deserves trust, based on the quality of its reporting, its adherence to ethical standards, and its commitment to transparency and editorial rigour.

As Professor Charlie Beckett, Director of Polis at the London School of Economics, told me, chasing universal trust is pointless and “a complete misunderstanding of what trust is and why people trust things”.

In short, news organisations hoping to earn the trust of everyone are chasing a fantasy. The challenge needs reframing.

Is transparency the solution to the trust problem?

In recent years transparency has emerged as a strategy to demonstrate trustworthiness. Rather than striving to be universally trusted, many news organisations are now focused on proving they are worthy of trust. This shift reframes the challenge: from expecting audiences to believe, to showing them why belief is justified. 

Whether it’s working remains uncertain. A review of the available academic literature suggests not. I am a little more hopeful. It seems newsrooms are too, as many step up transparency measures. Efforts to better explain how stories are produced, why editorial decisions are made, and to publicly acknowledge mistakes reflect sound journalistic practice and a commitment to continuous improvement. They also exemplify an emerging model: not to demand trust, but to earn it through sustained, transparent engagement directly with the public.

How four news organisations are tackling the trust problem

For this project, I examined trust efforts at four legacy media organisations: the BBC, the New York Times, Schibsted Media, and my employer, the Australian Broadcasting Corporation.

Despite vastly different geographical, political and societal contexts, there were striking similarities. All four share a common belief that transparency, audience engagement, and relevance are essential to earning public trust.

Here’s how:

  • Make verification visible: Show how facts were checked, what sources were used, and how editorial decisions were made.
  • Put a human face to journalism: Audiences respond to the voices and values of individual reporters, not just institutional brands.
  • Design with the audience: Schibsted Media’s innovation lab, IN/LAB, develops formats in partnership with hard-to-reach audiences, particularly young people, to make journalism that is more relevant to them.
  • Tailor transparency to context: Excessive or poorly placed AI disclaimers can confuse or even undermine trust. Effective transparency when labelling generative AI content must be concise and meaningful.

AI – the new frontier for trust

Generative AI threatens to amplify the trust crisis. While it offers efficiency and new capabilities, its careless deployment has already damaged newsrooms’ reputations. Fake book lists, misleading AI-generated summaries, and even fabricated interviews have exposed not just technological flaws, but editorial failures. 

But the problem is not the technology itself, but rather how it is used. As one academic told me, AI brings new challenges that require us to be more vigilant; the additional problem with AI is that it can make you lazy.

The bigger challenge lies online, where the surge of misinformation and disinformation threatens to overwhelm both the public and journalists. A multi-country study undertaken earlier this year underscored these concerns, revealing that both journalists and audiences are deeply uneasy about generative AI’s potential to mislead and deceive. Journalists said they feel “poorly equipped” to detect fake, AI-generated material, and most newsrooms lack clear systems to verify it.

Sophisticated AI tools can now generate synthetic content – images, audio, video and text – that is virtually indistinguishable from human-created journalism. This growing problem was highlighted by a recent viral video claiming to show a fire at Iran’s Evin Prison. As reported by ABC News Verify, the video was AI-generated, featuring visual inconsistencies and fabricated elements that misled many viewers, and sadly some news organisations, too.

These problems will only intensify as the tools become more sophisticated and accessible, opening a new battleground for newsroom fact-checkers. AI is worsening an already fragile trust environment. As Dr Anya Schiffrin, director of the Technology, Media and Communications Specialisation at Columbia University's School of International and Public Affairs, told me: “Nobody trusts anything anymore. And I think AI is making it worse. You’ve got a hard job in that situation. Journalists need to keep trying, but it is difficult to see our way out when there is that massive confusion.”

Newsrooms of the future – how journalism can harness the power of AI

Newsrooms can no longer treat generative AI as optional. It’s already reshaping how the public interacts with information. As tech analyst and venture capitalist Mary Meeker recently noted, the pace of AI adoption is “unprecedented” and materially faster than any prior technological revolution.

Meanwhile, the adoption of generative AI in most newsrooms has been, in the words of AI expert David Caswell, “broad but shallow”, driven more by a desire for efficiency than genuine innovation. One reason is structural: legacy media organisations often struggle with what Caswell describes as “bureaucracy, legacy assumptions, and risk avoidance.” In other words, even when the opportunity is clear, organisational inertia gets in the way.

But by doing nothing – or too little, too slowly – media companies risk disempowerment. As Jane Barrett, Head of Reuters AI Strategy, said: “When it comes to AI, we need to change our thinking. Humans should be in control, not just in the loop.” AI strategist Nikita Roy also argued that newsrooms must lead the conversation, not follow the pace set by tech companies: “Journalism needs to get in front of AI. We need to understand it. We need to start building tech that aligns with our values, not just adopt tech that is being pushed on to us.” From interviews with newsroom leaders and AI strategists, six key priorities emerged:

  • Train your newsroom in AI literacy: Many journalists lack the skills or confidence to use AI tools responsibly. Training is essential to bridge this gap and ensure informed, ethical use.
  • Build hybrid newsrooms: Editorial, product and design teams must collaborate. Integrated, hybrid teams at organisations like Reuters and the New York Times are demonstrating how this model fosters innovation and alignment.
  • Put yourself in the shoes of your audience: The news industry has long expected audiences to adapt to its formats. Generative AI offers an opportunity to reimagine storytelling through flexible, audience-driven formats – adapting to what audiences want, rather than the other way around.
  • Know your red lines and stick to them: Understanding public expectations is critical. Reuters, for example, has a clear red line: no AI-generated imagery.
  • Design flexible, evolving AI policies: Rigid AI policies are often unhelpful. Vague catch-all “principles” leave newsrooms confused. Instead, establish precise guidelines that adapt with public understanding.
  • Think from “first principles”: Rather than retrofitting new tools to old workflows, journalism must lead the conversation and think from a “first principles” position. AI strategist Nikita Roy said, “The question now becomes what is journalism really for in an age where AI can completely remix your content and produce completely personalized podcasts and newsletters? Why do people truly need us? It’s about going back and drilling down to the foundations of why we exist in the first place and building up from there.”

In truth, the biggest challenge news organisations face is the rapid shift in how information is distributed. Generative AI tools like OpenAI’s ChatGPT and Google’s Gemini now offer fast, confident answers – often without attribution or verification – reducing the perceived need for traditional news sources. The disintermediation of journalism, which began with social media, is accelerating with AI. The scale and speed of this change is extraordinary. In April, ChatGPT reported over 800 million weekly users and 1 billion daily interactions. Earlier this year, OpenAI’s paid subscriptions grew from two million to three million in just a few months. Control over information flow is shifting even further from newsrooms to a handful of dominant tech platforms.

We must urgently define and double down on the unique value journalism provides, by doing what AI cannot: engaging directly with audiences, exercising human judgment, and telling stories rooted in care and curiosity. If not, our industry risks becoming irrelevant.

For more of Sashka’s work, including detail on newsrooms’ trust measures and recommendations on how to set up for AI, see the full PDF version below.

Meet the authors

Sashka Koloff

A recipient of three Walkley Awards for Excellence in Journalism, Sashka now serves as standards editor for the Australian Broadcasting Corporation's Content Division. In this role, she works to safeguard truth and trust in the ABC's storytelling.