On May 2, Facebook
banned the accounts of several high-profile individuals, including Infowars founder Alex Jones, Nation of Islam founder Louis Farrakhan, far-right provocateur Milo Yiannopoulos and white supremacist politician Paul Nehlen.
In each case, the rationale behind the ban was rather obvious. Whether it was Jones’
distasteful conspiracy theories, Farrakhan’s [anti-Semitism](https://www.apnews.com/70da24ff7b344c098e91d9b3505a98e0), Yiannopoulos’s
Islamophobia or Nehlen’s
racism, each of these hate-mongers had given Facebook more than enough reason to ban them. Nonetheless, critics —
including noted [defender](https://www.forbes.com/sites/michaeltnietzel/2019/03/05/trumps-free-speech-executive-order-oh-the-ironies/#70720f0b27db) of free speech Donald Trump — attacked the decision as a subversion of these individuals’ right to free expression.
Yet this is an issue that goes beyond Trump’s Twitter tantrums, for the problem of extremist content on social media platforms is a global one. In the United Kingdom, the far-right provocateur Stephen Yaxley-Lennon — better known as Tommy Robinson — was
banned from Facebook for hate speech. Similarly, politicians from
Denmark to
Myanmar have been barred from the platform for violating Facebook’s terms of use. In addition to the people it has banned, Facebook has attracted just as much controversy for the people it hasn’t.
When it comes to such global hate-mongers, Facebook and other social media behemoths face a seemingly difficult decision: should they crack down on these extremists, or should they attempt to preserve freedom of speech?
Yet, when one considers the offline consequences of these online actions, the decision becomes far simpler. The hate speech spouted by Jones, Farrakhan, Yiannopoulos and their international counterparts cannot be confined to the virtual world. This was illustrated in December 2016 by the patently false Pizzagate conspiracy theory — promoted by Jones on his YouTube channel — which led to a shooting in a Washington, D.C. pizzeria.
Globally, multiple studies have shown how hateful content on these platforms leads to violence, from facilitating brutality in the Libyan Civil War to spurring anti-immigrant hate crimes in Germany. When hateful content drives such crimes, clinging to some absolutist notion of freedom of speech is not only misguided but also irresponsible.
These offline consequences are what differentiates these extremists from other problematic individuals such as flat-earthers. The latter may be ludicrous or even abominable, but their words are unlikely to inspire acts of violence. The same can’t be said of people like Jones, Farrakhan and Yiannopoulos.
If Facebook and other social media platforms aren’t willing or able to crack down on hateful fanatics, then governments are left with no choice but to take more drastic action, at least temporarily. Such was the case in the aftermath of the terrorist attacks in Sri Lanka, where the government
shut down a number of social media platforms — including Facebook — in order to prevent the spread of misinformation and the incitement of violence. The fact that such an action was deemed necessary by a government is an indictment of Facebook’s influence.
Some may suggest that Facebook’s decision is a political step by a supposedly “apolitical” platform. Yet this notion of Facebook has always been something of a myth. The firm became political the moment it
designed news feed algorithms which
incentivized content with “negative, primal” emotions, thus promoting more polarized politics. When this polarization results in hate, it is not only Facebook’s right but also its responsibility to crack down on extremism.
Many on the political right have seen Facebook’s decision as another sign that the firm is biased against conservatives. However, does the conservative movement really want to be associated with the xenophobia that these types of figures embody?
Individuals like Jones, Farrakhan and Yiannopoulos should not be described as conservatives or liberals. In reality, they are merchants of hate — individuals who have taken age-old prejudices, commodified them and sold them in a new bottle. Until May 2, Facebook was the marketplace for such despicable transactions.
Some —
such as New York Times columnist Bret Stephens — have agreed with the decision in principle, but expressed concern about where it may lead. According to this argument, Facebook’s decision suggests a broader shift from a mere content distributor to a content moderator, one that may lead to “bans on people whose views are hateful mainly in the eyes of those doing the banning.” This is certainly a valid concern. Facebook’s decisions over the past two years do not exactly inspire confidence. Trusting the company with even more power feels like insanity.
Yet what this argument ignores is that the firm’s profit motive will always guide it toward more absolutist notions of freedom of speech. At the moment, the firm’s bottom line is untouched by the offline consequences of hateful content but buoyed by the advertising revenue such content generates. Thus, the company will always err on the side of caution when it comes to banning such provocateurs. The decision to ban these hateful figures is not some nefarious plot, but rather a laudable betrayal of the firm’s most basic capitalist instinct, a pushing away of the invisible hand, if you will.
Ultimately, Facebook is an unethical monstrosity of a company that deserves to be
broken up. Yet it also deserves appreciation on the rare occasion that it gets it right.
This was one of those times.
Abhyudaya Tyagi is Opinion Editor. Email him at feedback@thegazelle.org.