Our podcast: Should platforms have the power to ban leaders like Donald Trump?
The topic
Following the suspension or barring of Donald Trump by many of the largest social media and tech platforms, after his supporters stormed the Capitol building in January 2021, we explore the issues surrounding these decisions. When should the likes of Facebook and Twitter weigh in on potentially harmful political speech on their platforms? How can these rules be applied fairly and consistently? What are the implications for freedom of expression? And what lessons can we draw from similar events around the world?
The voices
- Rasmus Nielsen, Director of the Reuters Institute
- Nikhil Pahwa, Indian journalist, digital rights activist, and founder of MediaNama, a mobile and digital news portal
The podcast
On Spotify | On Apple | On Google
The transcript
Rasmus: Welcome to Future of Journalism, a podcast from the Reuters Institute for the Study of Journalism. I’m Rasmus Nielsen. Should platform companies have the power to ban leaders like Donald Trump? And what, if anything, will the steps that companies such as Twitter, Facebook, YouTube and others have taken in recent days mean for the future of online political speech across the world? I’ll discuss these issues with our guest today, Nikhil Pahwa. Nikhil is a journalist and digital rights activist, and Founder, Editor and Publisher of MediaNama, a site that covers digital technology and policy in India. He also started the Save the Internet campaign in India, one of the largest grassroots campaigns for net neutrality ever, with more than a million participants, and is Co-Founder of the Internet Freedom Foundation. Thanks for joining us, Nikhil.
Nikhil: Thanks for having me on, Rasmus. Just one clarification. I’m no longer with the Internet Freedom Foundation, but I guess I did help co-found it.
Rasmus: A man wearing many hats, then, though sometimes no longer all the hats he has worn.
On the fallout from the Capitol storming
Rasmus: So, let’s just quickly recap some of the key steps that different platforms have taken in recent days, in response to the Capitol Hill riot in the United States. January 6th, 4:17 pm local time, four hours after he had encouraged his supporters to march on the Capitol – resulting in hours of violence and five deaths – President Donald Trump tweeted a video telling his supporters in DC that they have to go home now, but also reiterating his false claims of a rigged election, the nominal motivation for the riots. Ten minutes later, Twitter removed the ability to like, reply or retweet the video due to the risk of violence. This decision started an avalanche of escalating content moderation decisions across the tech industry. Facebook removed the post from Trump’s page a little more than an hour later. YouTube followed suit a few minutes after Facebook. And that evening, Twitter, Facebook, Instagram and Snapchat all introduced short-term locks on the President’s accounts.
The day after, Twitch, TikTok and Reddit all took similar steps, and Facebook extended the block on Trump’s account indefinitely. Twitter then permanently suspended Trump’s account. Even companies that may never have imagined they would have to take a stance on elected officials and others inciting a violent attack on the US Congress – including the exercise equipment and media company Peloton, which blocked the ‘Stop the Steal’ hashtag from being used or created within its apps – were confronted with the reality that all platforms are at least sometimes political platforms. Conservative politicians, right-wing talk radio stars and Fox News hosts increasingly protested what they saw as censorship, and some of them turned to Parler, an alt-tech microblogging and social networking service that has grown its user base to an estimated four million active users in the aftermath of the 2020 US election, and which is popular with, among others, some Trump supporters, conspiracy theorists and right-wing extremists, as well as conservative politicians, influencers and media figures.
Parler, however, was suspended from Google’s Play store on January 8th. Apple took a similar step on January 9th, followed by Amazon Web Services removing Parler from its cloud-hosting service. This saw Parler go offline entirely on January 10th, drawing further attention to the power that private, for-profit platforms – especially the biggest ones, including Facebook, Google, Apple and Amazon – exercise over political discourse, and provoking increasingly intense debate across the globe about how they exercise that power. Some have applauded the crackdown on Trump and some of his most ardent supporters mobilising around false claims that the 2020 election was stolen, even though many also characterised these decisions as too little, too late – a position that’s also been taken by some in the Indian Congress Party, who say that social media companies have allowed this to go on for far too long.
But there’s also been increasing criticism, including from many prominent politicians across the globe – ranging from German Chancellor Angela Merkel to the prominent Russian dissident Alexei Navalny – who say that they don’t believe that private, for-profit companies have the right to determine who should and who should not be allowed to speak online. In India, the BJP IT Cell Head called the decisions taken in recent days a dangerous precedent, and one BJP MP called unregulated big tech companies a threat to democracy.
On the global context
Rasmus: Nikhil, you’ve been observing this process from India, where most of the platform companies have far more users than they do in the United States, but where they have also often been criticised for lax enforcement of the content policies that the decisions in the United States were based on. Especially lax, it seems, when it comes to prominent politicians inciting violence.
What do you make of what happened in the United States? And what do you think it means for India and the rest of the world?
Nikhil: Rasmus, if you look at the way platforms have behaved over the last many years, it’s really clear that they tend to side with power. Whether it’s Facebook doing nothing about the Rohingya violence in Myanmar that was being fostered through the platform – until the United Nations raised that issue – or Facebook, for example, siding with Duterte and allowing hate speech in the Philippines and, to some extent, the persecution of Maria Ressa and incitement of hate against her … and even if you look at India, WhatsApp not doing enough to contain hate speech especially targeted at Muslims, given that the ruling BJP appears to have an anti-Muslim bias – it’s clear that platforms tend to lean towards power, and effectively do nothing about those people who can hurt them.
Now, until Trump lost the election, he was in that same position. He wielded significantly more power than the platforms. If you remember, earlier in the year Trump had also wielded the threat of amending Section 230 of the Communications Decency Act – which basically provides platforms with safe harbour – and the removal of safe harbour would’ve impacted the sustainability of platforms. So, as long as there are politicians who have power and can wield it, these platforms won’t do much about hate speech and free speech. And Trump was given free rein for a very long time. The moment he lost power, they turned. They felt more confident about these bans because he lost, and because he’s probably not in a position to retaliate as of now, because he also doesn’t have much support even amongst the Republican Party.
So, it’s essentially the fear of retaliation that, at times, prevents platforms from acting. And they have the discretion, they have the freedom, to choose where they act and where they don’t. And now that Trump’s out in the cold, his speech on these platforms is at their mercy, just like regular people like you and me. So, I think platforms tend to not necessarily be consistent with how they deal with speech.
If you remember, for many years, Twitter allowed ISIS accounts to fester on the platform. Let’s not forget that, for the longest time, YouTube did nothing about white nationalism channels that were spewing hate, until a large number of advertisers – including WPP, Omnicom and, I think, Dentsu – all of them came together and refused to advertise on – or threatened to pull advertising from – YouTube. And I remember this being mentioned on Google’s earnings call, that they’re dealing with the situation, that they’re trying to ensure that these advertisers stay, and that’s what they eventually did. They de-monetised those channels. So, platforms act when they’re threatened by those who have power over them.
On journalism pressuring platforms to act
Rasmus: So, just to be clear I understand your view of this: the default, in your view, is often lax and inconsistent enforcement of existing content policies. And then we should expect to see enforcement mostly in situations where someone powerful is pushing for it – whether that is commercial interests around advertising, or political interests, like a government – and mostly enforced against those who are relatively powerless to fight back or to hurt the company in question.
Nikhil: Well, also, when they’re shamed into it. So, when journalists write about these issues – about accounts that haven’t been taken down, accounts that may be spewing hate speech – that’s when platforms often act on it. We’ve seen this especially in India, where there were a number of accounts spewing hate speech, and once the press wrote about it, that’s when Facebook started taking them down. In one case in particular, the Wall Street Journal did an extremely impactful story about an Indian politician called T. Raja Singh, who had been inciting violence on his channel for the longest time. And it was alleged that Facebook’s India Head of Policy did nothing about it, or rather advocated against Facebook banning his profile and his account, because it might lead to repercussions with the ruling party.
Now, none of this was proven, and this was based on a report that the Wall Street Journal did, based on sources. But, you know, it was after that report that the account was banned. So, they sometimes act when the media reports on it, and I think that’s where the role of the media is extremely important in highlighting the duplicity of platforms.
On prospects for increased accountability
Rasmus: So, I wonder, you point to the important role of journalism here but, also, I suppose there is a question about the role of policy. I mean, do you think that some of the steps we are seeing around President Trump, or some of the examples you provide of journalists scrutinising the hypocrisy and inconsistencies of policy enforcement, mean that we should expect platforms to be more accountable on issues like hate speech and incitement to violence in the future?
Nikhil: I think that’s especially important. And you know, one of the things that I’ve noticed is that it’s some of India’s most celebrated women journalists – whether it’s Barkha Dutt or Nidhi Razdan, or Rana Ayyub as well – who tend to be at the receiving end of just some horrifying language and abuse on platforms, especially Twitter. And even when these complaints are raised, the platforms really don’t act in dealing with either hate speech or incitement to violence. And I think there need to be more consistent takedowns from these platforms.
Now, I know it isn’t easy for the platforms. When you have, I don’t know how many – 500 hours – of video being uploaded on YouTube every hour, when you have billions of updates and messages on different platforms taking place every day, it’s obvious that the implementation is going to be inconsistent.
But I think there is still … it’s important for them – especially with verified profiles and verified accounts, and people who are in positions of power – to be more conscious of their activity, and of activity against them, as well. Obviously, I would prefer that everyone’s treated with the same level of attention and care, but I do understand that sometimes it’s not possible for platforms, simply given the scale. And algorithms are never going to be perfect in implementing those community guidelines. And even as we’ve seen in the past, the human moderators are stretched, in terms of reviewing this content. So, it’s not an easy job, I understand that. But when decisions are taken, then there needs to be greater accountability, and the buck needs to stop somewhere. It can’t be lost to obfuscation. And platforms can’t just resort to templated, boilerplate responses regarding the banning or shutting down of accounts for many, many users.
On an 'extremely difficult problem to solve'
Rasmus: Right now, in most countries around the world – leaving aside the subset of issues that are clearly illegal and criminal forms of speech – the buck mostly seems in practice to stop with the platform companies themselves, despite, now, more than a decade of increasingly intense discussion around the need for greater regulation and oversight of how online speech functions. Do you think that there is a case to be made that policy makers have sometimes deliberately let this policy vacuum continue to exist? That they are sometimes, in fact, interested in a situation where there is no clear regulation of online political speech?
Nikhil: I don’t think that is deliberate. I think it’s just an extremely difficult problem to solve. If you think about it, platforms are essentially private property – private spaces – performing an important public function. So, I’ve always held that protecting the platforms that enable our speech is just as important as protecting our speech itself. And so, the safe-harbour provisions are extremely important, and they need to be protected so that platforms can go about allowing us to say what we want to say.
The other challenge with the Internet is that it’s often very difficult, even for users, to distinguish between publishing and communicating. We tend to publish the way we talk to someone and, in that sense, it’s difficult to govern this kind of speech, because we don’t necessarily pay as much attention to it as, let’s say, a journalist would, or as a media publishing house would. And so, because this is speech in its purest form – without editing, without any self-censorship by the user in many cases – do we really want to restrict it if it’s not harmful, and it’s not incitement to violence, and it’s not leading to censorship of someone else? So, there are no easy answers here.
Now, from a legality standpoint, because platforms are, again, private spaces performing a public function, there is a reticence from regulators to try and censor them or try and control them, because that would lead to over-censorship. And we have to be very careful about how we regulate platforms, because we don’t want to prevent free speech, either.
And I think the other challenge is that there are platforms that are very, very large, and there are groups and platforms which are very small, as well. And they need to be able to have their own community guidelines, to enforce the kind of behaviour they would want in that community. So, for example, the same principle applies to Twitter as to a football forum that only wants active members discussing football. Right? If you post a message about cricket or baseball in that group, they should be able to ban you, because it’s not in line with how they would want that communication to take place. And the same freedom for Twitter and Facebook to enforce community guidelines needs to be there. Because, in many countries, pornography isn’t illegal, as an example, but there are platforms that don’t want pornography because they want minors to be able to use the platform, as well. So, you know, I find it difficult to justify strong regulation of platforms when it comes to controlling what they allow and what they disallow.
Essentially, I believe that community guidelines should be in conformity with the laws around free speech in that particular country, but also that there needs to be some regulation in terms of how platforms impose those community guidelines on users. So, can they justify that they have enforced these guidelines on a best-efforts basis? Can they ensure that, if a certain kind of speech is illegal in that country, it has gone through sufficient levels of scrutiny to decide whether a particular sentence or a particular update is in line with that? And obviously, this is going to be difficult, so it has to be on a best-efforts basis.
Rasmus: I mean, I think it’s worth just highlighting what you point to here, and reminding listeners that, as inconsistent, limited and imperfect as content moderation enforcement is on platforms, in many cases it still goes well above and beyond what is legally required, and is far more interventionist and restrictive than the law demands. And in a situation like this, where there are many calls for platforms to do more – also against legal, but potentially problematic and even harmful, forms of speech – there are also some who are pushing in the opposite direction. I will just point to one example from Europe where, under proposed legislation in Poland, it might become illegal for social media services to remove content or block accounts if the content does not break Polish law. So, a far more permissive approach to content moderation than what we are seeing from most of the current platforms.
On platforms responding when pressured
Rasmus: Nikhil, I wanted to get your views on an issue that’s a little bit separate from the complexity of regulating platforms, and of recognising the great variety among them in terms of size, function and functionality. And that is the issue of how the same platforms sometimes seem to behave quite differently in different markets. Is there a risk here that, in effect, users of platforms – whether we’re talking about YouTube or Facebook or Twitter or other products and services – that people who live in poor countries are effectively treated as second-class users by platforms that tend to focus more energy and attention on resolving issues in rich countries, where they make more money and often face more immediate political pressure?
Nikhil: I think, Rasmus, it boils down more to whether there is immediate political pressure, and whether that market matters to them or not. So, for example, Myanmar may have been a market that Facebook was largely ignoring when there was incitement to violence against the Rohingya Muslims there. Right? India is a market where platforms have been extremely conscious of speech because of the size of the market, and because many of these platforms have their largest user base in this country. At times, what has happened – and it is in the nature of platforms and in the nature of the Internet and the nature of intermediaries to allow this to happen – is that they tend to only regulate when they need to regulate, when they’re called upon to regulate.
So, as an example, I remember watching this chat between Sarah Lacy of what was then Pando Daily and Brian Chesky, I think, the Founder of Airbnb. And he talked about how they allowed certain instances – violations – on Airbnb to continue because, as a percentage, they weren’t really big enough. And it’s only when these problems became big that they dealt with them. So, platforms usually operate in unregulated spaces as aggregators, and they almost look at instances or incidents on a percentage basis, of whether this is a big enough problem for them to solve or not. Now, if you have 100 million users and 1% of that base gets affected by something, that’s a million people. But they don’t necessarily act on it unless they feel the need to do it, because there is a cost to acting on these issues. And so, platforms are able to scale faster and with far more aggression because they tend to ignore what they see as edge cases, even though those edge cases may have a direct impact on people.
And in India, WhatsApp failed to do that in its early years. In 2013, the first set of WhatsApp-instigated riots took place in a town called Muzaffarnagar, where there was violence between the Hindu community and the Muslim community. And I think, at that point in time, they did nothing. It was only, I think, in 2018, after the Indian IT Minister started calling out WhatsApp for fostering misinformation and fake news, and because the 2019 elections were coming up, that WhatsApp started making changes to its platform. It started reducing the velocity of the spread of messages on the platform. So, you could only forward a message to five people after that change, as opposed to 256 people before that. And so, just the pace at which hate speech would spread reduced drastically.
So, these are important changes that they have made, and I think we are better for it now. But for the five years following that first set of riots, we had Internet shutdowns in India, and a lot of these were a function of hate speech spreading on WhatsApp. I think, for a few years, India had over 100 Internet shutdowns. In some parts of the country, the Internet was shut down for six months. And some of this can be directly triangulated to hate speech spreading on WhatsApp. In fact, I spoke with a local police commissioner who said just as much: ‘We don’t have a choice, because these messages are spreading so fast that we don’t know how to stop it, and we have no choice but to shut the Internet down.’
Rasmus: It’s a forceful reminder that, while blitzscaling and frictionless entry and growth hacking may be great for the bottom line and for investors, it’s not always great for society. And that, while some network effects are positive, not all network effects are positive, and some of them are creating very real problems. Both on smaller and especially in the larger platforms.
On whether platforms should have the power to ban political leaders like Trump
Rasmus: Nikhil, I thought maybe I should end with a straight question. You’ve offered us a lot of nuance, but there is also a fundamental question here, I suppose. You’re pointing out and reminding us that platforms are private, for-profit companies that perform a sort of public function. And, as private property, the current state of affairs is that they clearly are able and allowed to ban you or me as individual users. But to return to the question we started with – whether they should be allowed to ban political leaders, whether Donald Trump or others – what do you think of the decision that they made?
Nikhil: In my opinion, they should be free to ban political leaders just as they should be free to ban you or me, if we are violating the community guidelines, and particularly if we are violating the laws of the country. The only thing is that we would expect consistency from them in the application of this approach. It shouldn’t be that they’ve allowed Trump to drip-feed hate and foster hatred against some communities over a period of time, and then choose to ban him because he no longer has the power.
Rasmus: Some call consistency the hobgoblin of small minds, but I think there’s a stronger case to be made that consistency is absolutely essential to due process and the rule of law, and thus also to the protection of free speech online. Nikhil, thank you so much for being with us today and outlining your views on the situation in the US, and what it means for India and the rest of the world.
Subscribe to our podcast
On Spotify | On Apple | On Google
If you want to know more...
- Read this interview on content moderation with our former Visiting Fellow Richard Allan.
- Read this piece we published in 2016 on how journalists should cover powerful people who lie.
- Read this section from Digital News Report 2020 on how people want the media to cover politics.