Neither humans-in-the-loop nor transparency labels will save the news media when it comes to AI

“The notion of a human needing to intervene in every decision undercuts the idea of speeding up various tasks,” argues our researcher Felix M. Simon
Ana Cristina García waves to Captcha, a robot by Hidoba Research, during the AI for Good Global summit in Geneva. REUTERS/Denis Balibouse


21st November 2024

Labelling the use of AI and having a human-in-the-loop are the no-brainers of the current AI discourse in the news. You can hardly go to any roundtable, media conference, or read any industry blog post without coming across these terms. If we just have labelling and humans in the loop, so the thinking seems to go in some corners, all will be well with AI. If only it were so simple.

First, let’s look at the evidence. Recent surveys have indeed found that most people are uncomfortable with news being created completely with AI but are fine with news created mostly by humans with some assistance from AI. So audiences say they want the human in the loop. 

This also applies to many people’s views on the disclosure of AI use. Almost everyone wants at least some forms of content to be labelled, although there’s little consensus on exactly what should be labelled and how. 

Many publishers, worried about how their use of AI might affect their reputation and people’s trust, are following suit and introducing both approaches. But what audiences say they want might not always overlap with how they will respond when they actually get it. And both human-in-the-loop approaches and AI labels bring with them risks and shortcomings that suggest that responsible AI is easier said than done.

What to label and how?

Let’s look at transparency labels first. What this usually means is disclosing in some form that an AI system was involved in the production or distribution of a piece of content. Take for instance German regional tabloid Express. For articles mostly generated with the help of AI, the Express has a separate byline (Klara Indernach, which abbreviates to KI, the German term for AI). They also add a disclaimer under these pieces:

This text was created with the support of artificial intelligence (AI) and edited and checked by the editorial team (name of the responsible editor). You can find out more about our rules for dealing with AI here.

Filipino news outlet Rappler also labels its use of AI in the automated article summaries that users can create at the click of a button:

This is an AI generated summarisation, which may have errors. For context, always refer to the full article.

But despite these efforts, bigger questions loom.

For one, it is unclear which uses of AI in news production such labels should apply to. Should back-end tasks, like using AI to transcribe interviews, be made transparent to audiences too, or just front-end uses such as summarisation? Or both? And how should such labels appear across different modalities and media formats? It’s also uncertain what exactly labels are supposed to achieve and how effective they actually are. For labels to make any difference, people need to notice them in the first place. While these design issues can and likely will be addressed, they highlight that there’s not going to be a one-size-fits-all solution. 

Second, there is a risk that labels might backfire, potentially decreasing trust in the labelled content. And the binary that labels can create, as AI expert Claire Leibowicz recently argued, can itself be problematic. To paraphrase her words: Just because something was made with AI does not mean it is misleading, less credible or not trustworthy. And just because something is not made with AI does not mean it is accurate or trustworthy.

A human in the loop? 

What about having a human in the loop, another feted approach to responsible AI use in news? By way of a quick definition, this refers to humans actively participating in the decision-making process alongside AI systems. This interaction can occur at various stages, but in news often refers to a human editor reviewing (and correcting) an AI’s initial output. Necessary and laudable as this is, by itself this approach is not a silver bullet either. 

For one, the process needs to work. But humans are fallible creatures. In some contexts, we prefer algorithmic judgements to human ones. And we often accept incorrect decisions without verifying whether the AI was right. Even when given an explanation by the AI, we do not necessarily become less over-reliant. The risk is obvious: journalists defaulting to an AI system’s judgement or output, simply approving content because it’s easier when they are worn out or under time pressure.

Second, the promise of ‘human-in-the-loop’ approaches sits awkwardly alongside AI’s central selling point: scalability. The notion of a human needing to validate or intervene in every decision fundamentally undercuts the idea of speeding up or scaling various tasks. Of course, there are gradients to how often human oversight is required, but the tension remains. The whole point of AI lies in minimising human touch points, not relying on them.

Third, for human-in-the-loop approaches to have a broader effect on trust, audiences again need to be aware in the first place that humans are involved. They also need to trust the process. So far, audiences don’t seem to believe journalists are doing a good job here. In a recent six-country survey by the Reuters Institute, only one-third thought that journalists always or often check generative AI output. That number may have increased by now, but it’s hardly a reassuring finding. 

Where does this all leave us? In short, you cannot simply ‘hack’ your way to trust and acceptance around AI in news – and the same might be true in other contexts. And things have to be seen in context, too. General trust in media and levels of news consumption depend on numerous factors: partisanship, broader habits, trust in other institutions. 

In digital news environments, people also make snap judgments about trustworthiness based on various cues – the visuals used, advertising, familiarity with the brand and so on. And public acceptance of AI in news stems just as much from people’s own experiences with these technologies and their general attitudes toward technology as it does from broader cultural narratives. 

While some of these factors are shaped by news media coverage, which means the media has some influence over them, they cannot necessarily be directly addressed through socio-technical interventions such as AI labels or the like. 

Forms of responsible AI aimed at maintaining or increasing trust, while certainly well-advised and helpful in some ways, could end up playing only a minor role in this overall picture. 

Perhaps we have to think of them as something akin to airbags: an interesting mix of useful and effective inventions and regulatory requirements stipulating their use. But even the best airbag is really only meant to avert the worst. Acting on its own while the brakes fail, the chassis rusts, and the steering wheel tilts constantly in one direction, it is of little use to either the driver or the passengers.


This article originally appeared on NIKKEI Digital Governance. It represents the views of the author, not the view of the Reuters Institute.

