Focus on the humans, not the robots: tips from the author of AP guidelines on how to cover AI

"As journalism about AI becomes a staple of reporting across beats, it's key to think about basic journalistic questions," says Garance Burke

Journalist Garance Burke. Credit: Douglas Zimmerman

5th September 2023

2023 has been the breakout year for generative AI, with many industries, including journalism, studying the implications this new technology may have on their fields. In August the Associated Press updated its AP Stylebook with a chapter on journalistic coverage of artificial intelligence, covering suggested reporting approaches, common pitfalls, and general guidelines.

Garance Burke is a global investigative journalist with The Associated Press and the person who led the development of the AI chapter in the latest edition of the AP Stylebook. I spoke with Garance about how journalists can improve their coverage of these new technologies based on these guidelines. 

Q. What factors did you take into account when writing the artificial intelligence chapter in the latest edition of the AP Stylebook?

A. I'm based in San Francisco, where AI is having a moment. So it's very clear to me that there's a real need to bring more journalistic rigour to our coverage of these technologies. 

I was really fortunate to have a fellowship at Stanford University back in 2020, and had the time and space to take more programming classes and think about how these models are built and how to best understand their impacts in the world. 

I'm an investigative reporter. So when writing this guidance, I thought a lot about how to translate these complicated statistical concepts into terms that reporters can understand, so that we can do deep and accurate work when chronicling these systems, both their promise and their perils.

Q. How should journalists approach their coverage of AI? 

A. I think we're in a bit of a hype cycle, or what some might call a doom cycle. But rather than just staying in that esoteric debate about whether AI models are good or bad, coverage of AI tools should really get back to journalistic basics, which includes thinking about: how do these systems actually work? Where are they deployed? How well do they perform? Are they regulated? Who's making money as a result? And who's benefiting? And also, very importantly, which communities may be negatively impacted by these tools? 

As journalism about AI becomes a staple of reporting across beats and platforms, it's really important to just think about these basic journalistic questions rather than feeling as if the very concepts behind these models are too difficult for an average journalist to parse. 

Q. You mentioned the term ‘hype cycle’. How can journalists decide what is and is not worth covering when it comes to AI?

A. We did include some do's and don'ts in this Stylebook chapter about AI, and one particular thing to keep in mind is to beware of developers who describe their tools as breakthrough or revolutionary technologies, because few of these systems really are. A lot of that tends to be a thinly disguised marketing ploy.

In this particular situation, it helps to consult with experts and build sources: folks who audit these systems, academics who've studied the data, technologists who have worked at these companies or currently do, and regulators who understand these tools from a different perspective. That will help reporters come to deeper and more accurate ways of writing about these tools, rather than sticking to the sort of breathless, first-in-class, best-in-class kind of ‘hypey’ coverage.

Q. I have also seen a lot of coverage that verges towards a more alarmist or ‘doom and gloom’ tone. How can journalists voice their concerns about these tools without veering towards hyperbole?

A. We're very conscious that we're putting forward this new AI chapter of the AP Stylebook during this big wave of disruption. AI technologies are increasingly starting to impact journalism itself and the beats that we cover in the real world. 

Bringing a basic journalistic lens to bear will help our audiences understand the ways in which these technologies may alter our daily lives.

One thing to keep in mind is that we have a big responsibility as journalists because in many countries these tools are not particularly well-regulated. This is all happening in real time and so we have a real responsibility to explain and visualise these technologies accurately and with nuance, as our coverage in turn ends up having lasting impacts on law, public policy, and beyond. 

Just taking a step back and thinking about how to bring more critical lenses to these tools is really important right now.

Q. What is the most common pitfall you see in current AI coverage? 

A. One thing that has been pretty evident in recent coverage is just humans' fascination with robots. There's a lot of coverage out there that ascribes human emotions or capabilities to AI models or implies that they have thoughts, feelings, or exert human-like agency in the world. That doesn't really further the public's understanding of what these technologies can do. 

Journalists will produce much better work by focusing on the humans who develop and supervise these systems, and by trying to go beyond the tool itself to look at the human data that was captured to train these models and the humans who made very specific choices about how these models optimise or predict.

Q. Can you point to one story or one angle that you think has gotten lost in current AI coverage? 

A. You don't need to be an engineer, or to have access to the entire code base, to really interrogate these systems.

One element that often gets lost is that as journalists, we can be like water. When we don't find answers through the front door, we can seep underneath. We can talk to the people who, off the record, have misgivings about the tool that they deployed. Or we can find the paperwork that shows the government contract that proves a surveillance technology or a predictive tool is being deployed in our communities. 

We can do our own AI experiments testing these models if we get access to APIs. And if we get, say, a huge cache of documents, we can use these technologies in our reporting by letting the AI do the cumbersome work that humans are otherwise tasked with, like going through thousands and thousands of PDFs.
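To make that concrete, here is a minimal Python sketch of the kind of document-triage experiment Burke describes. The pypdf library is real, but ask_model is a hypothetical placeholder for whichever model API a newsroom has vetted access to, and the file paths and prompt are invented for illustration.

```python
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf


def extract_text(pdf_path: Path) -> str:
    """Pull the raw text out of one PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: wire this to whichever model API you use."""
    raise NotImplementedError


# Triage a cache of documents: ask the model to flag anything worth a closer read.
for pdf in sorted(Path("documents").glob("*.pdf")):
    text = extract_text(pdf)[:8000]  # crude truncation to respect context limits
    note = ask_model(
        "In two sentences, summarise this document and flag any mention "
        f"of surveillance or predictive tools:\n\n{text}"
    )
    print(f"{pdf.name}: {note}")
```

Anything the model flags would still need to be read and verified by a human before it goes anywhere near a story.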

It's important to keep in mind that we have a lot of resources that we can employ as journalists to explain these technologies if we come at it from the perspective of finding the humans who are building these systems and, very importantly, the people who are most impacted by them. 

These systems can often carry human biases in the training data that's used to build them. So it's important to think about the methodological choices that were made. 

If you have a child welfare algorithm that runs on data that might over-represent families of colour, what kinds of choices does that lead the model to make going forward? Or if you have a large language model that's trained on a vast array of text from the internet that includes conspiracy theories, what does the model then know as its basis for language?
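As a toy illustration of that first scenario, using entirely synthetic data and scikit-learn (this models no real child-welfare system): when one group is over-represented among historically flagged cases, a model trained on that history learns to reproduce the skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # two synthetic demographic groups
risk = rng.normal(0.0, 1.0, n)    # "true" underlying risk, identical across groups

# Historical labels: group 1 was flagged more often at the same risk level,
# so the bias lives in the data, not in the learning algorithm itself.
flagged = (risk + 0.8 * group + rng.normal(0.0, 1.0, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, risk]), flagged)

# Identical risk score, different group -> very different flag probability.
print(model.predict_proba([[0, 0.5]])[0, 1])  # group 0
print(model.predict_proba([[1, 0.5]])[0, 1])  # group 1: noticeably higher
```

The point for reporters: asking what is in the training data, and who is over- or under-represented in it, is often the fastest route to understanding why a model behaves the way it does.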

Just keeping in mind how human biases can influence the ways in which these tools work is important, in addition to knowing that you have the agency yourself to deploy AI tools and experiment with testing the systems that you're reporting on. Doing new, digitally savvy open-source work in your newsroom is also a good place to keep pushing.

Q. As AI tools continue to develop and evolve, what will you keep your eyes on when updating the guide in the future?

A. This process, as with most at the AP Stylebook, involved consulting some of our internal experts at the AP, consulting with external experts, and sending the guidance around for comment. We're very open to hearing from folks as to what they think about this first AI chapter. We're very conscious that the Stylebook is read by people all over the world, inside and outside of journalism, and so we're going to keep updating it as AI evolves.

Being based in San Francisco and seeing the degree to which large language models are morphing and expanding every day, it's going to be really important to think about generative AI in particular and how these models are adopted in multiple spheres of life that we cover as journalists, so that we can stay on top of the ways in which generative AI is changing. 

Q. How would you like people to use these guidelines? 

A. My hope is that this guide helps more people write about these technologies accurately and in depth. All of us need to really be thinking about how AI models are influencing our world and what that means, not just the people who are creating the tools or folks who have deep computational backgrounds. So my hope is that this gives journalists around the world from many different backgrounds the feeling that they too can do it.