AI newsroom guidelines look very similar, says a researcher who studied them. He thinks this is bad news

Tomás Dodds, who’s looked at 37 AI guidelines in 17 countries, warns against “rigid, top-down” principles and calls for celebrating a diversity of approaches
Assistant Professor at Leiden University and Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, Tomás Dodds.

3rd May 2024

Newsrooms around the world are trying to figure out the best ways to apply generative AI to their work without falling into a hype trap. This is a delicate balance, especially at a time of uncertainty about the technology and the business of news. How are journalists and media managers reacting to the rise of this new technology? Are they fearful, hopeful or both? Which kinds of guidelines are they putting in place? 

These are the questions at the heart of my conversation with researcher Tomás Dodds, who’s studied these issues in the last few years and who recently shared his findings at a talk at the Oxford Internet Institute. Dodds is an Assistant Professor at Leiden University and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. Our conversation was edited for length and clarity. 

Q. For your research, you and your collaborators (Mathias-Felipe de Lima Santos and Justin Yeung) analysed 37 AI guidelines in newsrooms in 17 countries across the world. What commonalities have you found in those guidelines and what is missing? 

A. We found that most of the guidelines highlight the importance of having editorial values in place that supersede the use of these technologies. The need to protect users and users’ privacy is most prevalent in European newsrooms, which I think correlates with the regulations and decisions that the EU is putting in place, which is really good. 

We also see that guidelines have highlighted the need to ensure the ethical implementation of AI in journalism, which includes having human oversight, the explainability of AI systems and the disclosure of automated content. That was super interesting. But what was more interesting was a high degree of isomorphism. This means that these guidelines were very similar to each other, and that made us a little suspicious of how these guidelines were being made, especially given how quickly they emerged in Europe and North America. 

As soon as we started doing interviews, my suspicions were confirmed: a lot of these guidelines were made from the top down. They were made individually by an editor-in-chief or sometimes by parent companies, without any consultation of journalists. 

How can we create guidelines from the bottom up? How can we create guidelines involving journalists and all the stakeholders involved in news production? It shouldn’t surprise us that journalists are still making decisions based on their gut feeling. Even with all of these guidelines in place, journalists are still going to make decisions based on what they and their community feel is important. 

If you impose guidelines from the top down, they are not going to be very effective because journalism is based on gut feeling. So we need to encourage newsrooms to have a conversation with their journalists and ask them about how these technologies should be put in place. 

Q. What is driving this pressure to publish AI guidelines in newsrooms? 

A. I think we all are. I do have the impression that we in academia have rushed to write papers and speak at conferences about how journalists are creating guidelines, and we’ve put a lot of pressure on newsrooms to come up with a response. We asked journalists to do all the legwork and have everything solved within weeks, and this can actually be dangerous. 

What I saw is that journalists are trying to understand how these technologies work, and how they should use them. They are trying to educate themselves. And yet we kind of pushed them to have an answer to this question really quickly and that could be counterproductive because if you end up with really rigid, top-down guidelines, that’s not going to correspond with how journalists actually want to use AI.

Q. This idea brings me to the hype around AI and the fact that journalists, including us at the Reuters Institute, are focusing on this topic. Do you think this hype is warranted or are we in the industry focusing too much on AI? 

A. We need to try to understand what hype is and what hype does, which is to simplify very difficult concepts like innovation or how these technologies actually work. By hyping these technologies, we are reducing the framework of our understanding of what these technologies actually are, and how we should appropriate them. 

At the same time, hype also distracts us from the consequences that these technologies could have in real life. When we hype technologies, what we're doing is kind of creating a myth around them, and not putting the focus on the political, cultural and social consequences that these technologies are going to have.

However, what I’ve seen in my research is that journalists are really trying to avoid falling for that hype in different ways. One of those ways is by refusing to engage in this conversation on the terms set by big tech platforms. That is a huge shift in tech journalism. Until not very long ago, most tech journalism was about the newest shiny gadget. Now newsrooms are investing in tech journalism by hiring experts on these matters who can actually contribute to the debates about how newsrooms should approach these technologies. These nerds are way more critical and more likely to avoid hype. 

It’s also important to say that when we are avoiding hype, we are not stopping the conversation with these big tech platforms because they are still very important social actors that we need to engage with. It’s our job as journalists to hold them accountable. 

Q. When you look at attitudes towards AI, do you see any differences between top editors and the rank-and-file? 

A. We need to understand that work hierarchies do not necessarily correspond with knowledge hierarchies. Our research suggests that knowledge about AI inside newsrooms is very siloed. Most of the people we have interviewed believe that AI is important, but they are trying to acquire knowledge that is concentrated in these silos. 

This is problematic because it means that there are some journalists that are working with AI systems, unsupervised by their editors. It doesn’t mean that they are doing unethical things, but some of them are doing things that dance on the edge of the ethical borders of journalism. 

The way that journalists decide what is ethical or what isn’t is through discussions with their editors and their peers. So when your peers, editors and supervisors don't know or don't understand the technologies you're working with, it's very difficult for them to determine what is ethical and what is not. It’s very difficult to regulate something you don’t understand. That's why we need to create information-sharing practices that allow newsrooms to break these silos and take advantage of the people who know a lot about AI.

Q. We have seen first hand how quickly AI can change and I have a feeling a lot of things are going to continue to change very quickly in the next few years. How prepared do you think newsrooms are for these changes? 

A. I am now collaborating on a project with Rodrigo Zamith and Seth Lewis, and Rodrigo made this beautiful analogy the other day, that the innovations in journalism in terms of technology have always kind of pointed towards acceleration. We have social media to write faster and shorter, we have bigger audiences, podcasts, digital journalism. Everything was about acceleration, what he called a hamster wheel of acceleration in journalism. 

Generative AI made some journalists step out of this hamster wheel for once and look at the technology with a more critical eye. We think this happened because of the threats generative AI poses to journalism as it redefines basic concepts of what it means to be a journalist. What does it mean to be a journalist when you don't have to write headlines or summaries? This made a lot of journalists step out of this wheel and approach these technologies in a way that they didn’t do with social media. 

With social media, we jumped to the promise of distribution and monetisation. So the most important thing that newsrooms did this time was for once to stop that hamster wheel, step out and look at it from outside and say, ‘We are going to get a little better at preparing because we know that we don’t know how these systems work.’ 

Q. How important is it for the news industry to be on the same page when it comes to the adoption of AI?

A. I am aware that most of my research was conducted in the Netherlands in newsrooms that have big budgets and the luxury of pausing and deciding how to approach or negotiate with these platforms. Not every newsroom in the world is going to have that level of power. This means that you will probably see newsrooms with fewer resources that won’t have the luxury of stepping out of the hamster wheel, and they will be more dependent on these tech platforms even when they are aware of the consequences.

But I don’t think it’s fair to ask them to behave in the same way because each newsroom is an ecosystem in itself. Eventually journalism is going to regulate itself, so you can’t ask newsrooms to behave the same way. That’s why the fact that the guidelines are the same is kind of problematic. Isomorphism in journalism is bad news. You want to have diversity and newsrooms innovating on how they approach these technologies.

Academics, policymakers and other stakeholders need to avoid making a call for journalists to approach these technologies in the same way. We need to celebrate the diversity of approaches and strategies that newsrooms are trying to come up with. We shouldn’t force journalists to behave the way we want them to behave. We need to help journalists define for themselves how they want to do it.
