This newsroom has been experimenting with AI since 2020. Here is what they have learned

“Look at your mission, understand what you really want to do with technology and do not rush it,” says Uli Köppen, head of AI at Bayerischer Rundfunk
Uli Köppen, head of the AI and Automation Lab at the German public broadcaster Bayerischer Rundfunk.

23rd May 2024

The generative AI boom of 2023 has prompted many newsrooms to experiment with AI systems in their work. This year, for example, both The New York Times and The Washington Post established editorial positions to lead AI initiatives in their newsrooms. This is the job Uli Köppen has been doing for the last four years as head of the AI and Automation Lab at the German public broadcaster Bayerischer Rundfunk.

Köppen is one of the true pioneers in this field. So I was interested in what she makes of the current hype cycle and how she thinks about the months ahead. Our conversation was edited for length and clarity.

Q. Your team was established almost three years before the explosion of generative AI in newsrooms. Why did AI become a priority for Bayerischer Rundfunk back in 2020?

A. We founded our data team nearly 10 years ago. For data teams, it comes kind of naturally to stumble over AI. So, for us in the beginning, AI was just one tool in our investigations toolset. I also had the chance to take a break from the newsroom to do a fellowship at Harvard. During that time, I focused on automation in different industries. When I came back, I had the chance to apply that to our work at Bayerischer Rundfunk. So we built a team on top of the data team: our AI and Automation Lab.

It was fortunate that we already had this environment between programming and journalism, so we could onboard people who did not necessarily have a journalistic background. In this fruitful environment, they could thrive and we could build our AI and Automation Lab. The idea behind the lab was to shift the skills and mindset we already had in our data team to the product side: to produce automated texts and automated graphics, to build tools for the newsroom, and to take over mundane tasks. We experimented a lot with some of the things people are doing right now.

Q. How have you seen this technology and its applications to journalism change over time?

A. In the beginning, we had to do a lot of explaining about why we had a natural language generation expert on our team. People were kind of curious about what we were doing there. Fortunately, we had the trust [of the leadership] because we had already been through this cultural change with our data team, using data for journalism, investigations and tools for the newsroom. So we had an environment that trusted us, but nevertheless we had to explain a lot.

Now people are absolutely into it. They are really eager to use it and want to know where it can be used and where we should use it. We are having regular meetings with all the people in our newsroom. People can just jump in and present use cases. So there is a lot of bottom-up movement right now, which is great. 

Of course, the market is absolutely agile. I can't follow all the tools that are published right now. This is not what I want to do because I wouldn't do anything else with my life if I did that. 

Q. How has the advent of generative AI changed things in your role?

A. Generative AI and the explosion of the market have certainly shifted me to a more strategic role, as we have to set up a governance system and make long-term decisions around AI and automation at the broadcaster.

Q. You mentioned that you work in both product and content within Bayerischer Rundfunk, and that there is a lot of bottom-up interest in AI technologies. How embedded is AI right now within the newsroom?

A. Our idea was always to intertwine the practical work with the technical work and strategy. In the beginning we were building prototypes around ideas, and then got the newsroom on board. 

However, we learned it is much better to get the newsroom on board when we start a project from scratch. This is what changed in these four years, and it’s a very beautiful thing because it is interdisciplinary.

What we developed over time is the flow of ideas from our team to the management to really extract strategy ideas out of our prototypes, and to get knowledge from the team to the senior management of the organisation. That’s also my work: to translate that and to make it available so that our management can make the right decisions. 

If you're using AI, this is the kind of holistic approach you need. You have to make decisions that are kind of overarching. What kind of data management do we have? What APIs do we need? What resources do we have to shift? You need the top managers on board if you have to make decisions like that.

Q. Many newsrooms are hoping to integrate AI into their work, and many outlets have established positions like the one you have held since 2020. What advice would you give to newcomers?

A. The most important thing to look at is your mission. You need to understand what you really want to do with technology and not rush it. You need a problem you want to solve, and the problem should be connected to your mission. Then you should go out and find a solution, and also be open to non-technology solutions.

What fascinates me about AI is that you can use AI or automation projects as a vector to work more digitally in the newsroom. Many times, if you are looking at a tool, you have to revise the workflow. So you are looking at legacy workflows and you have to make them hybrid: supporting a journalist with an algorithm, for example. And in doing that, you have all the cultural change, you have the workflow changes, and then you have a tool that sits on top of that.

But the tool is just the tip of the iceberg. Everything beyond that is much more work and also much more interesting, because it’s about being more digital and thinking about what you really want to do with this kind of technology. That's the foundation for working with technology.

Q. What kind of projects is the AI and Automation Lab at Bayerischer Rundfunk currently doing? 

A. We are paying attention to algorithmic accountability reporting, and to using AI and automation to make reporters' lives easier and to help our users digest information better.

On the algorithmic accountability front, we are always doing several investigations. Algorithmic accountability reporting is a beat around algorithms and how we are using them as a society. We have just published a white paper on our methods, which might be interesting for other journalists, because we're always interested in sharing the knowledge we're building within our teams. We are trying to set up statistical experiments around data and to explain to our users what's going on with their data and in the AI world.

Q. When you are investigating these stories, how do you use your AI knowledge and your skill sets?

A. Our team has a very unusual combination of skill sets. We are in between content and product. We are an interdisciplinary team between journalism, programming and product management. In programming, we also have very specialised colleagues. 

For example, one colleague is a computational linguist who specialises in natural language generation. The beauty of this combination between content and product is that this person is able to jump between building products around natural language generation, like automated texts, and chipping in when we need his skill set for investigations. So if a reporter has questions, they can always talk to this person and get deep knowledge of natural language generation.

We are also always reaching out to universities. So when setting up statistical experiments, we always try to get scientists on board who are far more specialised in the topic than we are.

Q. What other projects around AI are you currently exploring? 

A. What we're exploring, like many newsrooms around the world, is AI services for the newsroom. What is really helpful for journalists? Do we need summaries? Do we need headline suggestions? Do they have to be in our content management system? That’s what we are exploring right now. 

Then we are putting up a whole governance system: how do we want to use AI within our broadcaster? We put out our guidelines in 2020, and a second version is coming up. We build internal guidelines to empower newsrooms to make decisions, and we are setting up an AI ethics board as well. This still has to be signed off by our CEO. But these are the suggestions we are making right now to really make the use of AI safe.

Q. How do you strike the balance between letting journalists experiment with AI while at the same time ensuring they are using it ethically and responsibly?

A. We’re fostering an experimental mindset within the newsroom. At the same time, we’re putting up guardrails to ensure that the use of AI happens within our journalistic ethical guidelines. The key is to empower the newsroom to decide for themselves whether a certain tool is safe to use. Therefore, we’re preparing a checklist that should help to make this decision.

Q. How do you see the potential of AI in the years ahead? 

A. What is already clear is that AI is going to be part of nearly all the tools we are already using. So the custom solutions we are building right now with our teams will become less and less important, because we already rely on so many tools.

For example, our content management systems and the Microsoft products we use will be intertwined with AI products and AI services. AI services will be normal. You won't recognise them as AI anymore, which is good for the workflow but bad for literacy, because people should still know what they are using.

I’m always happy if workflows are seamless, but I still think that for our users in the newsroom, and also our users outside the newsroom, people should be aware of what kind of technology is behind what. There are so many implications behind AI… What kind of data goes in? What happens with my data? Is this safe to use? What are the disadvantages? Is the output true? Is it factually correct or not? Do I have to check it? There are so many questions, and I am always looking at those two sides.

Q. Right now, most AI tools being used by newsrooms are made by tech companies. Should news organisations invest money and resources on developing their own tools or CMS?

A. It depends. If there are off-the-shelf solutions, it is usually better to use those to save time and money. But oftentimes those solutions don’t exist, and news organisations at least have to customise technology to their needs. If a newsroom needs to build internal knowledge around a tool or a certain technology, it might make sense to consider building or customising something. Of course, we’re not talking about large language models – this is completely out of the scope of media organisations, at least right now.

Join our free newsletter on the future of journalism

In every email you'll find original reporting, evidence-based insights, online seminars and readings curated from hundreds of sources - all in five minutes.

  • Twice a week
  • More than 20,000 people receive it
  • Unsubscribe any time