Are fears about online misinformation in the US election overblown? The evidence suggests they might be

The days right before and after the vote can be vulnerable to misinformation. Experts explain what we know about its origins, its spread and its true impact
Republican presidential nominee Donald Trump campaigns in Michigan. 19 October 2024. Brian Snyder / Reuters

24th October 2024

A recent survey suggested that one-quarter of Republicans agreed that Kamala Harris is not a US citizen and that over a third of Democratic voters endorsed false statements claiming assassination attempts on Donald Trump had been staged. 

Run by a group of American political scientists, the Bright Line Watch survey shows that millions of Americans are prepared to say that they believe statements which are demonstrably false. But where does the blame lie? In a knife-edge US presidential election, which could be followed by a dangerous uncertainty over the results, how concerning is the influence of online misinformation including that which is sowed by foreign actors or involves the use of AI?

According to several experts who study the spread, consumption and impact of online misinformation, and a wide body of evidence on the subject, including academic papers and post-election studies in the US and around the world, online misinformation has not been proven to have the profound impact on election outcomes or political beliefs that some media coverage often suggests. 

What data says about exposure

Concerns about misinformation are not a new phenomenon in the US. The Russian government used social media to interfere in the 2016 presidential election. Four years later, Trump and his advisers spread the false rumour that the election was stolen from him.

According to this year’s Digital News Report, 72% of online news users in the US said they were concerned about what is real and what is fake on the internet, up by 8 points from the year before. This percentage is much higher than the 59% average across the 47 markets covered by the report. Respondents across all countries said the platforms where they found it most difficult to discern trustworthy information were X and TikTok.

But concern is not the same as exposure. A review of evidence published in Nature earlier this year documents “a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information.” 

Most people see very little misinformation online, and those who see the most tend to be the ones most receptive to it, who actively seek it out.

This phenomenon played out during the 2016 election. A 2019 study in Science looked at the spread of ‘fake news’ on Twitter (as it was known then) published by hundreds of websites which created deliberately misleading or editorially-flawed content. Scholars continue to debate whether such content should even be considered online misinformation, but the pattern here was clear: ‘fake news’ was highly concentrated among users, with 80% of all ‘exposures’ to it seen by just 1% of individual users. On the other side of the equation, around 0.1% of users were responsible for sharing 80% of ‘fake news’ sources.

For the average user scrolling through their feeds, these tweets comprised little over 1% of the total political content they were exposed to.

This doesn’t mean misinformation doesn’t circulate through more opaque platforms. Messaging apps such as Signal, Telegram or WhatsApp are much harder to study. But it’s likely that what is known about misinformation plays out in these spaces as well, argues Sacha Altay, a French expert in the study of misinformation at the University of Zurich: exposure is fairly limited, is concentrated among partisans and is not particularly persuasive for those not inclined to believe it in the first place. 

“My mom is not going to all of a sudden open Telegram and Signal and look for unofficial information channels. Why would she do that? She trusts FranceInfo and the mainstream news,” Altay says.

Another potential source of online misinformation is the proliferation of ‘pink slime’ websites, which pose as local news sites but are often run by organisations with partisan interests. NewsGuard, which aims to counter misinformation, estimated in June 2024 that there were 1,264 such websites, fractionally more than the number of websites belonging to actual news organisations. Despite their apparent abundance, a Stanford University study found less than 4% of Americans viewed one in the months surrounding the US election.

What we get wrong about misinformation

“There's this common refrain in [the study of] persuasion, which is that ‘persuasion works on other people’,” says Jennifer Allen, a post-doctoral researcher at the University of Pennsylvania and an expert in digital persuasion and misinformation. The phenomenon she points to is also known as the third-person effect and is a nod to the idea that we generally think of others as being far more susceptible to false content than ourselves despite the evidence saying otherwise.

“Political or persuasive content has very small effects on people's political attitudes or voting choices or behaviour,” she says. “There is this assumption that people see misinformation and it causes them to take all these crazy actions. And I think both of those things are in question. One, people don't really see that much misinformation. And two, it's not necessarily driving them to take these actions that they wouldn't otherwise take.”

So what explains so many widespread false beliefs, particularly when it comes to politics? Brendan Nyhan, a professor at Dartmouth College who studies misperceptions about politics and healthcare, wrote in the Journal of Economic Perspectives: “Many seem especially susceptible to misperceptions that are consistent with their beliefs, attitudes, or group identity… These tendencies can be especially powerful in contexts like politics.” 

In other words, it is likely that a partisan political environment is conducive to the adoption of false beliefs, rather than just misinformation pushing people into opposing camps.

The freewheeling, sometimes chaotic nature of social media, where everyone has a voice, is also suggestive of a place where misinformation proliferates. Social platforms “make some things visible that were not visible before,” argues Altay, but this doesn’t mean these false beliefs wouldn't be alive and well without the help of digital platforms.

Journalists may also misunderstand public attitudes towards news when they are considering questions about misinformation. As worrying as it might seem to reporters or news junkies, “most people are just not very interested in politics and current affairs and the news,” says Altay.

And what do they do when they are interested? They turn to mainstream news outlets, he says. So it’s not just that average users don’t see a tidal wave of political misinformation. It’s more that many don’t see that much political news online at all.

Just as journalists may misjudge the public’s appetite for political news, they may also incorrectly attribute some people’s false beliefs or misperceptions to the kind of lies they came across on social media rather than to a more complex set of issues. 

“When you see people behaving in a way that you don't understand,” says Allen, “it can be comforting to assume that it's because they were tricked.” Her research also looks at vaccine hesitancy, a phenomenon that often comes from “longstanding reluctance” and mistrust in healthcare institutions, rather than a belief in “outright falsehoods [or] manipulation.” 

“The reality is that all these decisions are really complicated,” Allen says. “There are a lot of things that go into it. People's political attitudes and political identities are very hard to change.”

If the extent of exposure to misinformation and its influence are so limited, why are they so ingrained in public discourse? The Nature review mentioned above points to assertions from “public intellectuals and journalists” who often make sweeping claims about this topic which the evidence doesn’t support. Online misinformation has become an easily available explanation for why people believe certain things, which is hard to let go of.

Is foreign influence so dangerous?

A recent article featuring lawmakers and analysts speaks of “foreign adversaries” taking advantage of a social media-led news ecosystem to “mess with America’s decision” and argues this could be “devastating to public confidence in the results”. 

Buried in the final paragraph, Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, is quoted as saying: “Malicious actors, even if they tried, could not have an impact at scale such that there would be a material effect on the outcome of the election.”

This election season has already seen a specific scare when the US Department of Justice revealed Russian money was funding prominent right-wing news creators at digital outlet Tenet Media to pump out pro-Russian content to unsuspecting viewers.

An analysis by Allen pushed back against concerns that these influencers had much sway at all: the videos had less reach, featured less pro-Russia content and were less polarising than those posted by Fox News or Tucker Carlson.

If covert Russian-backed media was largely ineffective, what about overt Russian media? Following the invasion of Ukraine, governments around the world sought to curb the ability of state-owned news channels RT and Sputnik to operate due to their support for the war. One study, co-written by the Reuters Institute’s Director of Research Richard Fletcher, looked at 21 countries that took such measures. It found that their websites and apps didn’t reach even 5% of the digital population in any given month between 2019 and 2021. In the US, just prior to the invasion, they barely reached 1% of the population online.

According to a study published in Nature, 70% of Twitter users were exposed to content from foreign election interference campaigns by Russia’s Internet Research Agency (IRA), as well as smaller campaigns, including from China, Venezuela and Iran during the 2016 US election.

Despite this apparent foreign disinformation blitz, people actually saw nine times as many posts coming from politicians and 25 times as many posted by news organisations. Even the content that did reach people’s screens had little impact as far as electoral politics is concerned. It mostly reached partisan Republican voters – those “least likely to need influencing” – and there was “no evidence of a meaningful relationship between exposure (...) and changes in attitudes, polarisation, or voting behaviour.”

So while it may be concerning that some overseas agents could be trying to influence US voters, their mission seems largely ineffective and journalists could do a better job of reflecting that. “There's a lot of secondary coverage of misinformation in the mainstream media that I don't think adequately characterises how little it's seen,” says Allen. “There needs to just be much more careful reporting and contextualisation of efforts.”

Bringing attention closer to home, some scholars argue that an insular right-wing arm of the domestic news ecosystem is more effective at promoting “disinformation, propaganda, and just sheer bullshit,” than social media or Russian influence campaigns.

The authors of Network Propaganda, which looked at media coverage in the years either side of the 2016 election, described the US right-wing ecosystem of hyper-partisan sites as publishing “decontextualised truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world.” In an interview with Boston Review, lead author Yochai Benkler said: “Its defining characteristic is pushing content that reinforces identity and political in-group membership.”

Whether stories are true or false is not really the point, he argues, and this ecosystem is therefore operating to a different standard from the rest of the American news media, which largely adheres to traditional journalistic practices and standards.

Does AI pose a new misinformation risk?

The 2024 presidential election will be the first where generative AI applications are available at scale, prompting fears that the technology is “supercharging the threat of election disinformation” and that “anyone can create high-quality ‘deepfakes’ with just a simple text prompt” to deceive voters. 

A contributor to a recent podcast by the Brookings Institution said of the threat of AI: “We could end up in a situation of an election being decided [emphasis added] based on false narratives.” Fears often centre around the quantity of misinformation that can be generated using AI, the increasing quality as the technology improves and the ability to create personalised misinformation targeted at recipients.

In a recent piece published by Harvard’s Misinformation Review, a group of researchers including Altay and the Reuters Institute’s Felix Simon argued that, overall, “current concerns about the effects of generative AI are overblown” and in keeping with other “moral panics surrounding new technologies” that emerged in the past. 

Could individuals be hyper-targeted with AI-generated content? Possibly, but AI-generated content doesn’t distribute itself and there is little evidence from political campaigns that direct, personalised messages are very persuasive. As with the rise of existing technologies, argues Altay, AI will have “a huge influence in what we do. But it doesn't mean it's going to determine who we vote for.”

The US election is just one of dozens of electoral processes taking place in 2024. During the Indian election, generative AI was used widely, often to inject an air of fun into campaigns, including by ‘resurrecting’ famous singers or cloning politicians’ voices to make personalised campaign calls. 

In keeping with the tenor of Indian politics, violent rhetoric, particularly against Muslims from Prime Minister Narendra Modi’s campaign, was also in force and generated using AI. However, say Harvard Kennedy School’s Vandinika Shukla and Bruce Schneier, “the harm can be traced back to the hateful rhetoric itself and not necessarily the AI tools used to spread it.”

This sentiment was recently echoed by Ritu Kapur, co-founder of Indian news outlet The Quint, who said: “We didn't need AI for misinformation in the Indian elections. We have plenty coming from politicians.”

How journalists unwittingly assist disinformation campaigns 

When the Department of Justice announced its indictment against Tenet Media in September, it became a major news story: it involved Russian money, a shady plot and one of the country’s most popular YouTubers just two months before election day. However, given the limits of the influence operation, journalists should have offered “much more careful reporting and contextualisation,” argues Allen. This often means showing that the amounts of money involved and the overall reach were relatively small.

Another case shows how the news media’s overemphasis of the scale or success of such campaigns may play into the hands of those running them. 

The 2022 ‘Doppelganger’ campaign created a network of fake websites posing as real ones to push information in line with Russia’s war aims. Thomas Rid, an expert in information warfare who saw leaked campaign documents, wrote: “The biggest boost the campaigners got was from the West’s own anxious coverage of the project” and that “far more people likely read the secondary coverage of the exposed forgery campaigns than ever viewed the primary disinformation.” 

One of the company’s stated achievements was “the publication of a number of journalistic and industry investigations into Russian disinformation campaigns,” including by Meta and the Washington Post. Mainstream media coverage of the operation was one of the goals the campaign pursued.

It’s much more difficult to understand exposure to online misinformation today than even two years ago, says Allen: Twitter removed its free API, Facebook replaced CrowdTangle with a less useful tool, and TikTok “is not transparent” about its data. Another concern is that much of the evidence on the spread and influence of misinformation comes from Western Europe and the US, so researchers cannot draw strong conclusions about the influence of misleading content in the Global South.

What is lost in pointing the finger at online misinformation

Even if social media and AI-driven misinformation are not persuasive enough to swing an election, this doesn’t mean misinformation is not problematic in other ways. Frequent false claims by Donald Trump about the reliability of the 2020 election result circulated among angry crowds outside vote-counting centres and culminated in the storming of the Capitol on 6 January 2021. In this election, with a potential photo finish, genuine uncertainty around the results and possible delays in vote counting could leave space for false claims to flourish at a time when they could be most harmful.

But focusing on online misinformation as the cause of unease or even unrest can distract from larger questions as to why people accept something that isn’t true or that is so at odds with prevailing beliefs. It can also let more powerful people off the hook, Altay says: “This focus on social media diverts our attention from politicians, from older institutions, or even from some media organisations that are much more powerful and influential and sometimes spread misinformation that I think is more consequential than what’s spread online by regular users.”

After all, it was misinformation coming from the very top that spread the lie about the 2020 election being stolen, and even four years later the public's beliefs about the scale of election fraud are "wildly exaggerated". As Professor Nyhan says, "the claim about Trump winning was of course spread widely by him and via the mainstream media." The extent of this misperception would have been very different if only ordinary people had shared those claims on their social media channels. 

Because Trump was using every channel to promote these baseless claims, “people still would have likely learned about and believed [it] without the help of social media,” Allen suggests. Would the assault on the US Capitol have happened without Trump’s encouragement? The evidence suggests it’s very unlikely.

Evidence shows that millions of Americans hold false beliefs, including in the arena of politics. Reporters have also proved that overseas actors are trying to influence what people believe and that AI has now become a major campaigning tool in elections. But the evidence from previous elections suggests such fears about the actual impact of online misinformation are largely unwarranted, at least for now. 

Being interested in the pursuit of truth inevitably means journalists are concerned when they repeatedly see misinformation themselves and they may want to sound the alarm in their coverage. But it is in large part due to the work of journalists, fact-checkers and the dominance of mainstream news outlets that online misinformation does not gain as much of a foothold as we might think. The US election might be more immune to these false narratives than many outspoken pundits say.
