Julia Angwin fears the public sphere is about to get worse: “AI makes it easier to flood the zone with misinformation”

As she launches nonprofit Proof News, the reporter shares tips to cover this technology and argues that cutting deals with AI platforms is not worth it
Award-winning reporter Julia Angwin.

2nd April 2024

Julia Angwin has been covering the impact of technology on society for more than two decades – first at the Wall Street Journal and ProPublica, then as the founder of the Markup, a nonprofit that she launched in 2018 and left in 2023 with a letter offering 10 inspiring takeaways.

Earlier this year, Angwin launched Proof News, a nonprofit whose goal is to follow an aspiration Walter Lippmann expressed in his seminal book Public Opinion – to apply the rigour of the scientific method to report on some of the big questions of our time.

“At Proof, we don’t chase stories,” Angwin wrote in her founding letter. “We develop hypotheses and test them. We build software to collect data, and we use statistics to analyse it. We borrow from science the idea of peer review, asking experts to examine our work before publishing. We release our data to the public so that it can contribute to further research.”

Unlike the Markup, Proof News is starting small, with a team of six people: two reporters, a data editor, an executive producer, a chief of operations and Angwin as editor-in-chief. According to its website, funders include the Internet Society Foundation, Omidyar Network, the Alfred P. Sloan Foundation and the Surdna Foundation.

Proof News’s first investigation, produced in collaboration with scholar Dr. Alondra Nelson, looked at how five leading AI models fared when asked basic questions about the 2024 US presidential election. The models examined were Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2 and Mistral’s Mixtral. 

Angwin’s team prepared 26 questions and then showed the models’ responses to a group of more than 40 election officials and AI experts. Overall, the AI models performed poorly on accuracy, with about half of their collective responses being ranked as inaccurate by a majority of testers. More than one-third of responses were rated as incomplete, harmful or both, and a small portion were rated as biased. You can take a look at the methodology here.
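
To make that rating scheme concrete, here is a minimal, hypothetical sketch – not Proof News’s actual code or data schema – of how majority-vote tallies like the ones above could be computed. The category names, data layout and values are assumptions for illustration only.

```python
# Hypothetical sketch: several expert raters flag each model's answer to each
# question; a response counts against a category only when a majority of
# raters flag it. The data below is made up for illustration.
from collections import defaultdict

CATEGORIES = ("inaccurate", "incomplete", "harmful", "biased")

# ratings[(model, question_id)] -> one set of flags per rater
ratings = {
    ("model_a", "q1"): [{"inaccurate"}, {"inaccurate", "incomplete"}, set()],
    ("model_a", "q2"): [set(), set(), {"harmful"}],
    ("model_b", "q1"): [{"inaccurate"}, set(), set()],
}

def majority_flags(rater_flags):
    """Return the categories flagged by more than half of the raters."""
    counts = defaultdict(int)
    for flags in rater_flags:
        for category in flags:
            counts[category] += 1
    threshold = len(rater_flags) / 2
    return {c for c in CATEGORIES if counts[c] > threshold}

# Share of each model's responses flagged in each category.
flagged = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)
for (model, _question), rater_flags in ratings.items():
    totals[model] += 1
    for category in majority_flags(rater_flags):
        flagged[model][category] += 1

for model, total in sorted(totals.items()):
    shares = {c: round(flagged[model][c] / total, 2) for c in CATEGORIES}
    print(model, shares)
```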

In mid-March I spoke to Angwin on a video call. I asked her why she founded Proof News and what she has learnt about generative AI from her first project. She also discussed how she thinks journalists should report on the rise of this technology and shared her views on some of the challenges facing journalism more generally. Our conversation has been lightly edited for clarity and length.

Q. You’ve been doing evidence-based, data-driven journalism for many years now. Why have you launched Proof? How would you say it’s different from your previous work?

A. I’ve been focused on using computational techniques to superpower journalism for many years. We need all the tools we can get. So I started doing that at the Wall Street Journal and ProPublica, and then I founded the Markup. Throughout that time, I was viewed by the outside world as a tech reporter, and I wanted to put the methods – and not the topic – first, because the methods are increasingly important in an era where no one trusts journalism. 

It's just not enough to say ‘I'm a tech journalist, and you should trust what I say.’ Calling [this outlet] Proof and making the methods so transparent is something that we need in an era where AI is creating all kinds of plausible text, where we have all sorts of misinformation purveyors, and where some newsrooms are writing press releases as news articles. This information landscape is very polluted, so focusing on the methods is a way to build trust with the audience.

Q. Can you walk me through the project? How was Proof launched and what are your plans? 

A. Launching a newsroom in 2024 is a challenge. When I launched the Markup in 2018, there was a lot more money floating around in nonprofit news. So I’m starting very small. We are six people and we’re kind of being scrappy. I would have liked to raise more money, but that’s how much I raised. I also want to be able to be nimble and experimental.

Our first project was this innovative collaboration with an academic, which is not that common for journalism. It involved this sort of experiment, bringing people together to do testing. So I want to be able to do things like that, learn from them, and then build from there, as opposed to starting with an idea of exactly what it's gonna look like.

Q. Was it difficult to find funding to launch Proof? 

A. It was. Philanthropic dollars have traditionally not funded a lot of journalism because for a long time journalism was profitable. That’s starting to change. But funders are focusing on the most dire need, which is local news, and that’s great. It doesn’t mean that I get funding, though, because I’m doing something slightly different. So there’s a growing awareness of a need for funding in this space, but there is still not enough to support what is needed.

Q. In your founding letter, you argue that journalism should focus less on bearing witness and more on analysis – testing hypotheses, sorting through evidence... Why?

A. We are all just drowning in narratives. You open up your social media feed, and everyone has a story. That's the beauty of social media: everyone has been given a voice. The problem is that it's impossible to tell how representative any of that is. 

Think about the way bad actors take a single narrative, which might actually be true, and weaponise it to make it seem like it’s this huge trend. You can see that with the discourse around immigrants in the US. There’ll be one case where some immigrants commit a crime, and then a bunch of outlets will make a huge fuss about how this is representative of a crime wave, which is not true. 

We need journalism to say, “That’s an exception, not the rule.” The audience is desperate for that. They are just drowning in all of this information. They don't know how to sort it out.

The debate around AI is too focused on whether AI is going to kill us or not. It’s like talking about flying cars and how we're going to regulate them when we haven't regulated the cars that are on the roads

Julia Angwin
Founder of Proof News

Q. Will the rise of AI-powered content make this problem even worse?

A. AI will superpower everything that I just said. One thing that’s weird about AI is that it’s pretty good at storytelling. So it will just add even more noise to the same challenge. There are information gatekeepers out there, the technology platforms, but they don’t have a real incentive to sort this out nor a legal requirement to do it. So we are all left to our own devices.

The platforms are optimised for attention. That’s what we learned through [investigations such as] the Facebook Files: that when given a choice between [removing] even really harmful content and keeping people on the platform longer, platforms choose to keep people on the platform longer. So the default position for smart people these days is not to trust anything unless they can find a way to trust it. The default is not to trust.

Q. Proof’s first investigation is a great example of evidence-based journalism. You focused on generative AI and this year’s US election. Why did you choose this topic and how did you design the project? 

A. I was really frustrated with the debate around AI. It was too focused on whether AI was going to kill us or not. We can have that debate, but that's like talking about flying cars and how we're going to regulate them when we haven't regulated the cars that are on the roads. 

I wanted to test AI for what it is doing right now, and the US election is such a high-stakes event that it felt like the right place to start. This is an incredible election year, and I just wanted to see if we could trust these large language models with basic election questions. We did not ask complicated things. We were literally just testing the very bare minimum, like “Can you tell me where I should vote?”

We should take the conversation on AI safety away from the theoretical and into the ground truth, and I didn't see anyone doing that testing. So I paired up with Dr. Alondra Nelson, who is a leading AI scholar and policymaker. I knew she shared my concern about AI safety being too abstract and about the fact we weren’t focusing on the ground truth. 

Q. You also managed to bring together Republican and Democratic election officials and convinced them to participate in the project. Was it difficult to get them on board?

A. It was really surprising to me to see how enthusiastic they were. When I wrote to them, so many of them were like, “Look, I'm desperate to know what I should be worried about.” All of them were so thrilled. One of them said to me, “People come to us and they lecture us about AI. But nobody has ever asked for our opinion.”

They were thrilled to go back to their offices and say, “Okay, here's what I learned, here's what we need to be worried about.” A lot of them came into the testing worried about deep fakes. And it’s not that deep fakes aren’t an issue. But they hadn’t actually thought about this text-based AI just presenting incorrect answers. They thought these answers would be accurate and they were shocked when they saw they were not. 

Q. What kind of impact would you like to see from this project?

A. I've always thought of impact as trying to convince companies to do better. In the US we’ve chosen not to regulate them, so the only tool we have is public pressure. That’s my approach here too. I don’t know if Congress is going to do anything, so I’d like these companies to compete with each other on accuracy. They are not going to do it unless somebody points out the differences between them.

It was interesting to see that OpenAI’s GPT-4 did much better than all the other models, so I would like to believe that the rest are going to feel incentivised to improve.

Q. What was the most surprising thing you learnt from this investigation?

A. It really surprised me how bad Google’s Gemini was. It was a reminder of how different AI is from facts. If you ran most of those queries through Google search, you’d get a pretty decent result. Google has that information. And yet Gemini produced probably the worst results we saw. So this just made me think this is a real sign of how AI is a wholly different beast.

Google has spent its whole corporate history developing a reputation as a trustworthy source, and then Gemini couldn’t be less trustworthy, at least in this realm. So I asked myself: Why are we regressing? Why is technology getting worse? These AI models are good at writing sentences that sound nice, which is fine. But I don't know if it’s worth the enormous resources that are going into it. So I don't know if that’s what you would say is most surprising, but that’s what surprised me.

These AI models are good at writing sentences that sound nice, which is fine. But I don't know if it’s worth the enormous resources that are going into it

Julia Angwin
Founder of Proof News

Q. Which other areas would you like to investigate?

A. One thing I’ve been hearing a lot about is that people are using these AI tools as companions or for things like therapy or emotional support, and I’ve spoken to some therapists who are really concerned about that. So I've been thinking about doing a similar investigation where you bring together therapists to rate the answers these chatbots give. But I could see doing similar investigations on other fronts.

One thing I’m really interested in is bias. For instance, we saw that in a primarily black neighbourhood in Philadelphia, an AI model said, “There’s no way you can vote here.” So we would like to see if it says the same in other neighbourhoods. There’s a lot more testing on bias that can be done, and it needs to be done. But those studies need to be designed a little differently to make sure that you are asking the same question but in different geographic spaces. You could do it in different languages too. 

Q. You've reported quite a lot on technology and algorithms in the past. How do you think journalists should cover the rise of generative AI?  

A. Journalists should stop speaking about AI models as if they have personalities and are sentient. That is really harmful because it changes the conversation from something that we as humans control to a peer-to-peer relationship. We built these tools and we can make them do what we want.

Another thing I would recommend is talking about AI specifically. Which AI model are we talking about? And how does that compare to the other AI models? Because they are not all the same. We also need to talk about AI in a way that’s domain-specific. There’s a lot of talk about what AI will do to jobs. But that is too big a question. We have to talk about this in each field. 

A classic example of that is that people have been predicting forever that AI is going to replace radiologists and it hasn't happened. So I would like to know why. That's the kind of question you can answer. Part of what we’d like to do at Proof News is focus on testable hypotheses. Focusing on a testable hypothesis forces you to be a little more rigorous in your thinking.

Journalists should stop speaking about AI models as if they have personalities and are sentient. That is harmful because it changes the conversation from something that we control to a peer-to-peer relationship

Julia Angwin
Founder of Proof News

Q. Around 25% of the people we surveyed for our Digital News Report 2023 say that search is their main way to get news, and there is a real chance that these AI models will change the way we search for news online. How do you think journalists should think about this?

A. I don’t know if you saw this, but a new web browser called Arc puts together some sort of AI summaries from news websites. Journalists were really upset with them because they basically crossed a line that Google hasn’t crossed yet but that’s obviously going to be crossed. I hope this doesn’t come true. But there's definitely a world where we could all just be creating content to train AI models, like an unpaid service to these giant corporations. That's the way we're headed right now.

Q. The dilemma for news organisations is how to deal with this threat. Media companies such as Prisa, AP, Axel Springer and others have signed deals with OpenAI. On the other hand, the New York Times has sued Microsoft and OpenAI. What would you say is the right answer?

A. I really appreciate the Times’ lawsuit. It’s a move that will benefit everyone in journalism if it sets a precedent and there’s not a settlement. Other outlets just sold off their archives for money. If you think about the benefits these companies are going to get from years and years’ worth of journalistic labour and effort, I don’t think it’s worth it. So I’m happy the Times is doing that, because somebody needed to take a stand.

Back in September I wrote a guest essay in the New York Times about this. I'm really concerned about the commons being overgrazed by rapacious tech companies, stealing everything that’s in the public sphere. I’m worried about what this will actually mean – that there’ll be no incentive to put anything into the public sphere. 

In this scenario, things like Wikipedia and Stack Overflow, which are super helpful places where people participate in a public spirit, will wither away and the only really good information will be behind paywalls. Then the rest of the world will just be living in this traffic farm, which will be a terrible world of information. I’m very concerned about that outcome and I’m not sure what is going to stop it.

Q. How do you think misinformation is going to change in a world powered by generative AI?

A. AI will just make things easier for all kinds of bad actors. Do you remember those famous Macedonian teenagers creating false content before the 2016 US presidential election? In the past, you had to pay people to come up with that kind of stuff. Now you don’t have to do that. AI makes it much easier to flood the zone with misinformation.

It will accelerate the production of misinformation and gatekeepers don’t have any incentive to block it. So it could just become a really unmanageable flood. Right now, we are in a moment where I feel like the flood hasn’t quite hit yet, and I think it’s partly because the narratives aren’t quite good enough. You can still tell they are AI and people are busy calling them out. I hope that continues to be the case. But I suspect AI will get better at this.

If you think about the benefits AI companies are going to get from years and years’ worth of journalistic labour, I don’t think cutting deals with them is worth it. I’m happy the 'Times' is suing

Julia Angwin
Founder of Proof News

Q. We live in a world with all these people (hyper partisan voices, influencers, content creators) who are not exactly journalists but who have a prominent place in the public sphere. How do you see the future of journalism as a profession in this kind of environment? 

A. The big newspapers have found their paths, and that's great. And then we have this model of content creators that has been really successful at building big audiences, but it’s not actually built around rigorous processes and the legal defence of those processes.

This means that these content creators don’t have the First Amendment lawyers that we have in journalism, who are willing to fight if they get a cease and desist notice. So I’m worried that this entire ecosystem is likely to be more cautious about being adversarial, and that is very worrisome. That’s why I think I can help: I do have those lawyers, and I could partner with them.

I’m hoping to inject more of that journalistic rigour and make sure that you have the facts behind you when you make an allegation and that you go to the companies for comment before you publish. The bet I’m making is that some of these content creators are going to be interested in participating in that and that this will improve the environment more generally.

Q. How are you planning to partner with them?

A. In this particular case, I’ve built software to do this AI testing. So what I'm planning to do is go to content creators who have expertise and say, “Hey, I built this cool software where we can test which AI model is the best in your field.” 

So, for instance, if you run a maths channel, we can go to you and say, “Hey, let's see which one of the bots is best or worst.” In that case, I want to bring them an investigative tool, and then collaborate on an investigation. That’s how I'm going to start this work. 

Honestly, they’re on this schedule where they have to constantly create content for the algorithm to support their work. And so if I have some trustworthy information that I can bring to them, and help them do that reporting on their own, that actually just improves the world for everybody. Then I can just say to my donors, “I'm bringing trustworthy information into this new sphere.”

The journalistic output doesn't need to look like a little story, right? I’ve built some investigatory tools, and I brought them to other people who can then investigate.
