How the coronavirus pandemic is changing social media
“War is the locomotive of history,” said Leon Trotsky in 1922, arguing that social developments that would normally unfold over decades can take place in months when conflict is raging. The coronavirus pandemic is similar in nature, bringing about radical changes to almost every field of human existence in an unusually short space of time.
This is especially true for the field of social media.
As a journalist I have been reporting on the changes tech companies have made in recent months in response to a barrage of virus-related misinformation. These changes include actively boosting credible health information, moderating more of the dubious content, limiting message forwarding and restricting ads. But these steps have not been enough to stem the tide of virus misinformation, which is constantly changing form, with elaborate conspiracy theories now more significant than dodgy remedies.
More changes will become apparent in the near future as new waves of misinformation hit different countries in different ways. Here’s an overview of how the first stage of the pandemic has changed social media and how it may change it in the months ahead.
Promoting credible information
In early March, many people just wanted to know more about this mysterious virus. How deadly was it? Who was most at risk? What could they do to protect themselves? With so many unknowns, the online environment was ripe for health-related falsehoods.
Social media gives “everyone in the world an equal opportunity at getting their message out”, says Alastair Reid, digital editor of First Draft, a non-profit organisation.
There are few things more resonant than a virus that could kill you and your family.
In March posts claiming that lemons cure the virus, that coughing techniques build immunity, and that your skin colour can protect you gained traction on Facebook and other platforms. All of these claims were false.
Many of these early hoaxes were things which supposedly protect people from the virus, like “vitamin C and garlic soup”, says Cristina Tardáguila, associate director of the International Fact-Checking Network.
Social media companies’ immediate response was to proactively promote health-related information from sources they deemed more reliable.
For example, Facebook established a “COVID-19 information centre” sharing information about the virus from government sources and credible media outlets. This centre was promoted alongside all sorts of posts relating to the pandemic, including misleading ones.
YouTube started directing people watching coronavirus-related videos to “get the latest facts about coronavirus” from official sources like the World Health Organisation and national governments. Twitter, Instagram and TikTok did similar things.
“Expert information” resources have also been placed alongside Google search results and in banner ads on websites, says Chloe Colliver from the Institute for Strategic Dialogue, a think tank which analyses social media.
All the experts I spoke to for this piece agreed that promoting these reliable resources is a good thing. But the approach has limitations. A series of bullet points from a government agency does not necessarily grab the reader, and many people will never click on these links. Not every government has been equally trustworthy, or equally trusted by its citizens, throughout the crisis. And there have been questions about coronavirus to which even the experts do not know the answer, meaning a link to a trusted website will not resolve every doubt.
Before the pandemic, misinformation outbreaks generally clustered around specific events like elections or protests. However, like a world war, COVID-19 arrived in some countries before others but ended up affecting pretty much every inhabited corner of the planet. This was a recipe for misinformation going global and bouncing between different countries in unpredictable ways.
In my own reporting I looked at how misleading health claims were sometimes rooted out quickly in English but kept going viral in languages spoken by fewer people, such as Romanian.
All major social media companies are based in the US, a society which has particular norms surrounding free speech and information which do not necessarily apply to other countries.
This means that the norms of the tech giants do not always apply well when they travel overseas.
“These companies are looking through American-coloured glasses,” says Whitney Phillips, assistant professor of communication and rhetorical studies at Syracuse University.
Her research reveals that the United States exports elements of its information ecosystem to the Global South: “alt-right” groups in Brazil, for example, take their cues from similar American organisations. “Polluted information doesn’t care about borders,” she says.
Limiting forwarded WhatsApps
Misleading content on the internet is usually more complex than outright fakery.
Recent research by the Reuters Institute found that significantly more coronavirus misinformation is “reconfigured” content — existing and often true information spun, twisted, recontextualised or reworked — than content that is entirely fabricated.
But during the pandemic there has been lots of content that is simply false. For example, there is absolutely no link between 5G mobile technology and the spread of the virus.
In normal times, utterly baseless claims get little traction. But in those panicked days in March, when supermarket shelves in many European countries were emptying as hospital beds were filling up, some entirely fabricated rumours exploded into the mainstream.
As the severity of Britain’s outbreak was becoming clear, my phone started pinging with friends, relatives and work contacts asking about a rumour that troops were occupying the streets of London to impose martial law. A bit of digging revealed that this was nonsense: photos were being used out of context to spread fear.
Nonsense spreading on WhatsApp is nothing new, but the panic and uncertainty of the pandemic’s early days provided perfect conditions for people to forward misinformation. The encrypted nature of WhatsApp makes it hard for researchers and journalists to see what is happening on the platform, so they rely on tip-offs, or on misinformation reaching such a boiling point that it spills over onto other parts of the internet. Sometimes domestic politicians and news organisations amplify it themselves: a fake Bill Gates quote that emerged on WhatsApp, for instance, was reported by major British news outlets, as I discovered.
False coronavirus rumours spread on WhatsApp in Nigeria, Egypt and many other countries. The private nature of the platform makes it difficult to know how serious the problem is.
In response to countless examples from all over the world, in April WhatsApp made a big change to the way the platform works, placing stricter limits on how many people a message can be forwarded to at once.
This goes further than similar measures imposed in 2018, after mob lynchings in India were linked to forwarded WhatsApp messages. Before then, users could forward a message to 250 groups at once; the limit was cut to 20, then to five in 2019, and now to just one.
WhatsApp says this series of measures has dramatically reduced forwarding. It has also provoked anger in countries like Spain, where far-right activists and politicians have wrongly presented it as censorship, as Clara Jiménez Cruz, co-founder and head of Spanish non-profit news organisation Maldita.es, explained in a recent seminar at the Reuters Institute.
Despite this change, however, it is still very easy for falsehoods to spiral across social media with incredible pace, with platforms and regulators struggling to keep up.
As well as quacks and scaremongers, the virus has also been a boon for scammers.
I investigated one UK website that illegally sold coronavirus testing kits for people to use at home and was promoted via Facebook ads.
This is just one example of people trying to profit from the virus by selling things people were suddenly desperate for, like testing kits, face masks and hand sanitiser. Social media companies acted by restricting ads seeking to profit from public health issues.
If you search ‘masks’ in Facebook’s marketplace, you will see no results. But journalists and researchers all over the world are still uncovering examples of people finding ways around these rules and profiting from the pandemic.
Misinformation is now moving away from issues directly related to health, says Cristina Tardáguila. But new forms of coronavirus-related misinformation are ramping up: the dubious use of statistics to “prove” whether easing lockdowns is good or bad, politicians making false claims about social distancing rules, and complex conspiracy theories replacing fake cures as the main form of misleading content.
In my own reporting I looked at how the decades-old ‘Agenda 21’ conspiracy theory about a shadowy world government has been reignited by the pandemic, how photos of people supposedly breaching social distancing rules can be misleading, and how pseudoscientific social media outlets can be superspreaders of misinformation.
“There are a huge amount of mistruths, falsehoods and conspiracy theories around vaccines and the pandemic,” says Alastair Reid of First Draft. “They are combining in a way we’ve never really seen before as some kind of uber-conspiracy, uniting a lot of different strands.”
This sort of misinformation is often far harder to moderate than outright false content, weaving together facts and falsehoods to create simple but misleading narratives about the virus.
“The politicisation of coronavirus disinformation is already well underway and will only continue as major elections like the US presidential election near,” says Chloe Colliver. She worries the crisis “will be used globally to spread disinformation undermining people’s trust in democratic systems and processes, or disincentivising them from voting”.
Leading Italian politician Matteo Salvini has been a major source of coronavirus misinformation, as have Mike Sonko, governor of Nairobi, and prominent politicians from all over the world.
In late May President Trump got into a high-profile spat with Twitter, which took the bold step of marking his tweets as potentially misleading for the first time and linking users to articles rebutting his false claims about the US electoral process.
Fact-checkers and researchers have questioned the consistency of these policies: Trump’s earlier misleading tweets about coronavirus cures have not been flagged.
Although the Trump tweets first flagged by Twitter did not directly relate to the virus, Twitter spokesperson Katie Rosborough admitted that the pandemic prompted the company to reevaluate its approach to fact-checking more generally.
“COVID was a game changer,” she said.
Facebook has taken steps too, recently deactivating dozens of ads placed by Trump’s re-election campaign attacking “far-left groups”, which included a symbol once used by the Nazis. It had previously removed posts by Brazilian President Jair Bolsonaro, but Trump is seen as a higher-risk target.
Reddit is also clamping down harder on misinformation, banning pro-Donald Trump sections of the site that had long been associated with racist content.
Platforms had generally thought it was “too risky to invoke the wrath of Trump and his political base”, says Alastair Reid of First Draft. However, recent developments suggest this may be changing.
Social media platforms moderate content with detailed policies explaining what is and is not allowed. For example, Facebook prohibits “dehumanising speech or imagery” including comparisons with “insects”, “subhumanity” and “filth, bacteria, disease and faeces”. This sort of moderation is an incredibly complex business: the same phrase may be inoffensive in one language but a provocative racist slur in another.
Companies have always had some sort of “harm principle” prohibiting posts which lead directly to physical harm, such as inciting violent protests or encouraging suicide. But during the pandemic the definition of “harm” has vastly expanded as people all around the world have trawled social media for information about the deadly virus.
Whitney Phillips thinks social media companies put too much emphasis on “negative freedoms” — freedom from external restraint — rather than the broader concept of “positive freedom”.
“We’re thinking about preserving individual rights, rather than minimising collective harms and maximising collective benefits” she says, prioritising freedom of speech rather than, for example, the public health benefits of encouraging people to wear masks.
During the pandemic, social media companies have shown some signs of going further than before when it comes to removing content. As well as deleting posts, Facebook has made use of third-party “fact-checkers” to flag potentially misleading content.
While it claims to remove content which directly leads to physical harm, Facebook does not tend to remove “general conspiracy theories”, though it may flag them as containing false information and decrease their reach.
But when it comes to a deadly virus these lines are often blurred.
Whitney Phillips says “false political information absolutely threatens people’s health. Not in the way that COVID-19 does, but with profound consequences for safety and well-being nevertheless.”
Platforms have worked to remove content such as the viral ‘Plandemic’ video which “wrongly claimed a shadowy cabal of elites was using the virus and a potential vaccine to profit and gain power”. They have also clamped down on conspiracy theorists like David Icke when they are seen to cross the line from “general conspiracy theories” — which are allowed — into claims which may actively lead to physical harm such as by encouraging people to ignore public health advice.
These conspiracies do often harm ordinary people, such as a woman I interviewed who had her photos stolen to spread a conspiracy theory about death certificates being manipulated by governments.
Ordinary people play a role in spreading them too — like the people who pretend to be “bots” to wind up their political opponents.
Although platforms try to keep on top of misleading content, there is still a huge amount of it out there, says Chloe Colliver of the Institute for Strategic Dialogue. Research by her institute shows that “disinformation-hosting websites received tens of millions more interactions on public Facebook in 2020 than content linking to the websites of the Center for Disease Control or the World Health Organisation”.
“The business models of the companies still enable, recommend, and target misleading or false information to users,” says Colliver.
Facebook recently announced an ‘Oversight Board’ which will have the final say over content moderation decisions. Board members include former Guardian editor Alan Rusbridger, former Danish PM Helle Thorning-Schmidt, and Nobel peace laureate Tawakkol Karman.
Chloe Colliver describes it as “a move in the right direction” but says “the processes of the board itself are not transparent enough to build real public trust in the system”.
The pandemic has led to drastic changes to social media in a short space of time as companies have reacted to wave upon wave of misinformation.
Important developments include the widespread promotion of positive health information, the further limiting of WhatsApp forwarding, and a general beefing up of moderation when it comes to health.
But experts think companies could be doing more, and are particularly worried about the coming months seeing a rise in more complex, political misinformation that is harder to root out than bogus health cures.
European countries are generally in a better place than three months ago, with cases falling and lockdowns beginning to ease. However, the global picture is much darker, with infections rising in Brazil, Mexico, India and Pakistan, countries which account for around a quarter of the world’s population.
Experts warn that the only long-term solution to this crisis will be a vaccine. But misinformation researchers fear the hunt for one could be undermined by falsehoods shared on social media. “We can all see this rumbling in the distance, and it’s disorienting,” says Whitney Phillips.
“The concern is that Facebook is not going to treat it as health information, it’s going to treat it as political information. I can’t imagine that it’s not going to be a nightmare.”
Joey D'Urso is a reporter specialising in UK politics, social media, the internet, and misinformation. He's worked for BBC News and BuzzFeed News.
If you want to know more...
- Read this piece on how fact-checkers are dealing with the pandemic.
- Read our factsheet on types, sources, and claims of COVID-19 misinformation.
- Watch our seminar with Clara Jiménez Cruz, co-founder of the Spanish factchecker Maldita.es, on how she and her team are fact-checking the pandemic.
- Read our report on how people in six countries are navigating the information about the coronavirus.