Key AI concepts to grasp in a new hybrid journalism era: transparency, autonomy, and authorship


When we bring generative AI into the news production cycle, we must be aware that we are making it a co-author, and grapple with a new understanding of autonomy and authorship, argues Austrian fellow Katharina Schell. Credit: Dall-E prompted by Katharina Schell

17th January 2025

As we begin 2025, you might feel you’ve read more than enough about AI in journalism – and that your newsroom has more pressing problems to worry about.

As deputy editor-in-chief at Austria Presse Agentur (APA), I’ve grappled with AI and the implementation of AI-informed tools like TextAssistant. And, as the editorial representative in APA’s Trusted AI team, I’ve wrestled with policy. Discussions and white papers all seem to end in the same vague recommendations for “transparency” in the form of labelling.

My project argues that we have been unduly preoccupied with the technological aspects of AI, while paying too little attention to the fundamental aspects of journalism it affects. Among the key questions that remain unanswered:

  • What factors should be considered when determining whether to label the involvement of artificial intelligence (AI) in journalistic production?
  • What characteristics should the labels possess?
  • What information should they convey? And
  • How can the labelling process be integrated into the workflow of media production and distribution?

This winter, I used a three-month fellowship at the Reuters Institute for the Study of Journalism to pull together a practical framework and accompanying tools that help publishers, editorial teams and journalists make well-informed and plainly explainable decisions on whether, when and how to disclose the use of AI in their journalistic products. Contingent on these decisions, the project also sets out a range of formal options for the labelling itself.

But before we get there, here are the key concepts we need to grapple with as journalists.

The transparency paradox

Transparency has been a buzzword since the mid-2010s, when it was first hailed as the home remedy for misinformation and declining trust. Now, AI has been cast as a fresh risk factor, adding to the fear that opaque systems undermine credibility.

I reviewed 13 editorial AI guidelines, and the majority mandate “transparency” but often fail to define what that means in practice.

At the same time, “trust in journalism” is assumed to hinge on disclosure, but we lack any evidence that transparency alone bolsters trust.

In reality, media organisations’ AI strategies often assume an a priori mistrust of the technology. Audiences, it is argued, will feel deceived if they discover AI was involved without their knowledge. Such “advance mistrust” is understandable when “transparency” has historically been seen as a remedy for digital disinformation. Yet here lies a paradox: AI is deemed untrustworthy by default, and yet audiences are supposedly expected to stay loyal as long as it is used “transparently”.

Moreover, labelling AI can backfire. Some studies suggest that attaching “AI-generated” labels reduces perceived accuracy and willingness to share the content. Transparency initiatives meant to build trust can thus inadvertently erode it. If a fully human-written article goes unlabelled while an AI-assisted piece gets flagged, the latter may unfairly appear less credible. The worst-case scenario: generic AI slop, not labelled as such, would be considered more trustworthy than thorough and reliable journalism produced with the help of carefully and ethically implemented AI systems under the full control of the human journalist.

Autonomy, agency and AI

Autonomy is a bedrock principle of journalism, traditionally understood as freedom from undue external influence.

With AI, we are inviting a new “player” into the newsroom. In many cases, technology is framed as an “assistant” that sits below editors in the hierarchy, but it still exercises a degree of “second-order” autonomy when it comes to editorial decisions and generating or revising text.  

Newsrooms make countless decisions every day: Which sources to use, which ones to discard, which story to tell, which angle to choose, which audiences to address, when and where to distribute... When we outsource decisions to semi-autonomous systems, we need to be aware that we are granting them agency. One crucial question is: Does the machine have more agency than the journalist?

There is no doubt that our notion of journalistic autonomy is being challenged. And as we reshape journalistic autonomy, we inevitably alter our sense of who “authors” the story, too.

Redefining “authorship”

We often overlook the notion of the journalist as an author. Generative AI can produce text that reads convincingly “human”, leading to headlines about “robot writers”. But, in reality, any “AI-written” article still depends on prompts, edits, and final review by humans. As such, we are dealing with distributed authorship: an interplay between human authorship and algorithmic assistance.

This shift challenges the idea of full human control and invites us to examine the balance of power. If the journalist retains ultimate responsibility and accountability – choosing sources, refining text, applying news judgment – then AI is a co-author but not a solo actor. However, if AI’s role in shaping content expands, how do we disclose this authorship dynamic without simply stamping “AI-generated” on an article?

The two key factors – “editorial decision-making agency” and “authorial autonomy” – describe the balance of power in the hybrid journalistic process. I used them to build a decision matrix for or against disclosing AI across exemplary use cases. The classification is not risk-based; instead, it identifies tasks with a high impact on autonomy, which may indicate a greater need for scrutiny to maintain ethical standards, transparency, explainability and journalistic integrity.
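To make the logic tangible, here is a minimal sketch of how such a decision matrix might be expressed in code. The factor levels, thresholds and recommendation texts are illustrative assumptions, not the actual classification developed in the project.

```python
# Hypothetical sketch of a disclosure decision matrix built on the two
# key factors described above. Levels and thresholds are illustrative
# assumptions, not APA's actual classification.

from enum import IntEnum


class Level(IntEnum):
    """How much of a task is handed over to the AI system."""
    NONE = 0       # human only
    ASSISTIVE = 1  # AI suggests, human decides (e.g. spell checking)
    SHARED = 2     # AI drafts or selects, human reviews every output
    DELEGATED = 3  # AI acts largely on its own (e.g. automated summaries)


def disclosure_recommendation(editorial_agency: Level,
                              authorial_autonomy: Level) -> str:
    """Map the two key factors to a disclosure recommendation.

    editorial_agency   - how far AI shapes editorial decisions
                         (story selection, angle, audience, distribution)
    authorial_autonomy - how far AI generates or rewrites the text itself
    """
    impact = max(editorial_agency, authorial_autonomy)
    if impact == Level.NONE:
        return "no disclosure needed"
    if impact == Level.ASSISTIVE:
        return "general transparency (policy statement) is sufficient"
    if impact == Level.SHARED:
        return "indirect labelling via metadata, plus specific transparency on request"
    return "direct, visible labelling plus full procedural disclosure"


# Example: AI suggested the angle (shared agency) but a human wrote the text.
print(disclosure_recommendation(Level.SHARED, Level.ASSISTIVE))
```

The point of the sketch is the shape of the reasoning: whichever of the two factors hands more power to the machine drives the level of disclosure required.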

Why labelling falls short

Many guidelines call for “labelling” AI involvement, but this approach can be problematic:

  • Normalisation: If AI is no longer an exception, labelling it as “other” could quickly become obsolete.
  • Skewed focus: Calling attention only to AI risks diminishing human editorial effort and overshadowing the broader journalistic process.
  • Potential distrust: Labels may inadvertently heighten suspicion, especially among users unsure how AI was used.
  • Limited utility: Vague labels like “AI-generated” do not explain which tasks the technology performed or how heavily humans intervened.

Instead of merely describing content, we might need procedural insights: letting audiences see how a piece of journalism was created, which steps involved AI, and what kind of human oversight was applied. This concept can be implemented through multi-layered disclosure that focuses on the entire editorial workflow, not just the AI element.

[Graphic: circle showing the stages of disclosure – full; indirect (metadata); and direct, both specific (e.g. labels) and general (e.g. policy statements)]

A more holistic approach looks at editorial and authorial autonomy as key factors in ascribing the type and level of labelling required. If AI decisions significantly affect editorial choices (selecting stories or deciding angles) or authorial processes (generating or rewriting text), then a form of disclosure may be warranted. But this disclosure works best as part of a procedural or metadata-driven system, rather than a simplistic label.

  • Direct labelling: Visible cues for users on a media outlet’s own site.
  • Indirect labelling: Machine-readable metadata for syndication, archives, or research (a sketch of what such a record might contain follows this list).
  • General transparency: Readily accessible explanations of how news is produced, including editorial standards and fact-checking processes.
  • Specific transparency: A drill-down option (e.g., “click for more”) showing exactly what role AI had in writing, editing, or distributing the content.
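
To illustrate what indirect labelling could look like in practice, here is a minimal sketch of a machine-readable disclosure record for a single article. The field names and values are assumptions chosen for illustration, not an existing industry metadata standard.

```python
# Hypothetical example of machine-readable "indirect labelling" metadata
# that could accompany a syndicated article. Field names are illustrative
# assumptions, not an established standard.

import json

disclosure_record = {
    "article_id": "example-2025-01-17-001",   # placeholder identifier
    "human_accountable_editor": True,          # a named editor signed off
    "workflow_steps": [
        {"step": "source selection", "ai_involved": False},
        {"step": "first draft", "ai_involved": True,
         "tool_role": "text generation from agency copy",
         "human_oversight": "line-by-line edit and fact-check"},
        {"step": "headline", "ai_involved": True,
         "tool_role": "suggested variants",
         "human_oversight": "editor chose and rewrote"},
        {"step": "final review", "ai_involved": False},
    ],
    "general_policy_url": "https://example.org/editorial-ai-policy",  # placeholder
}

# Serialised for syndication partners, archives or researchers.
print(json.dumps(disclosure_record, indent=2))
```

A record like this can travel with the story through syndication and archiving, so downstream users and researchers can reconstruct which steps involved AI and what kind of human oversight was applied.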

By aligning these measures with broader user-centric strategies, publishers empower audiences to gauge the credibility of content without reflexively lowering their trust simply because “AI” is mentioned.

Conclusion

Journalism is entering a hybrid era in which humans and technology collaborate. If the industry clings to a narrow “label AI” mindset, we risk sending the wrong signals: that journalists are unquestioningly adopting an untrustworthy technology without human intervention and oversight.

By labelling journalism – that is, providing a clear, layered account of the authorial process – we highlight our continued commitment to quality, ethics, and accountability.

An initiative like this will require effort, from metadata systems to user education. But for journalism to evolve to meet the demands of a hybrid era, so too must our notions of autonomy, authorship, and transparency. Let’s stop merely stamping AI with a label and instead offer an accounting of how we practise our craft. Users will be best served by a journalism that reveals its process—even when it’s supported by machines.