The Ungovernability of Digital Hate Culture

Social media and the Internet play an important role in the proliferation of hateful and extreme speech. Looking at contemporary networks of digitally mediated extreme right-wing communication, this essay explores the form, dynamics, and potential governance of digital hate culture. It focuses on the cultural practices and imagination present in the networks of digital hate culture to illuminate how two frames, the Red Pill and white genocide, unify the different groups that take part in these networks. After providing a high-level overview of these networks, this essay explains three formal features of digital hate culture that make it ungovernable: its swarm structure, its exploitation of inconsistencies in web governance between different actors, and its use of coded language to avoid moderation by government or private sector actors. By outlining its cultural style and ungovernable features, this essay equips policy professionals and researchers with an understanding of contemporary digital hate culture and offers suggestions for future approaches to countering and disrupting the networks on which it depends.

By
Bharath Ganesh
December 19, 2018

Introduction: The Dark Side of “Democratic” Digital Media

Social media has played an increasingly important role in domestic and international politics. Until recently, commentators and researchers on social media lauded its capacity to equalize the political playing field, giving voice to marginal groups and new actors. Iran’s Green Revolution in 2009 and the Arab Spring in 2011 were two early events, among many others, in which social media platforms—particularly Twitter—were used to circumvent state surveillance, connect and coordinate protests, and share information.1 While social media has been used by activists trying to counter authoritarian regimes and organize peaceful protests, its low barriers to entry have allowed extreme groups to exploit its benefits.

The printing press and the newspaper had a democratizing effect similar to social media’s, one that prompted cynical reflections from the Danish philosopher Søren Kierkegaard. His qualm was that the press desituated knowledge from experience by making it readily available even to those who had no experience of the topic they were reading about.

Everybody can become a commentator: “The new power of the press to disseminate information to everyone in a nation led its readers to transcend their local, personal involvement and overcome their reticence about what didn’t directly concern them.”2 This has been taken to a new limit with the Internet, where “anyone, anywhere, anytime, can have an opinion on anything. All are only too eager to respond to the equally deracinated opinions of other anonymous amateurs who post their views from nowhere.”3 The democratizing force of social media thus also presents a risk: a clever group of these “anonymous amateurs” have the power to spread extreme views, bigotry, and propaganda through their commentary on anything, anywhere, any time.

The explosion of digital hate culture represents the dark side of the democratizing power of social media. Digital hate culture thrives on this democratizing force. Its proponents write political blogs in the hope of attracting patronage from anonymous funders, while others fashion themselves into YouTube pundits who educate audiences with misleading narratives based on secondary sources, repeating extreme worldviews and building on conspiracy theory and falsehood.4 In a new media culture in which anonymous entrepreneurs can reach massive audiences with little quality control, the possibilities for those vying to become digital celebrities to spread hateful, even violent, judgements with little evidence, experience, or knowledge are nearly endless.5 Digital hate culture grew out of the swarm tactics of troll subcultures but has been co-opted for a political purpose, with automated accounts or “bots”—some of which were associated with recent Russian information operations—adding to existing groups of users seeking to hijack information flows on social media platforms.6 As Angela Nagle writes, those participating in digital hate culture are zealots in a “war of position” seeking to change cultural norms and shape public debate.7 I use the term “digital hate culture” rather than “alt-right,” “neo-Nazi,” “white nationalist,” “white supremacist,” “fascist,” or “racialist”—all subgroups that have a home in digital hate culture—to refer to the complex swarm of users that form contingent alliances to contest contemporary political culture and inject their ideology into new spaces. Digital hate culture is united by “a shared politics of negation: against liberalism, egalitarianism, ‘political correctness’, and the like,” more so than by any consistent ideology.8

This approach, centered on the naming of digital hate culture as an Anglophone phenomenon that is ungovernable, takes some distance from recent work by Alice Marwick, Rebecca Lewis, Whitney Phillips, and Robyn Caplan. Where their work provides theories of networked harassment and draws useful relationships between troll culture, gaming, and misogyny in the contemporary social media landscape, the focus here is on particular practices of hate as a challenge for governance and security rather than for media.9 For example, Phillips articulates the unintended ways in which journalists might amplify the voices of alt-right activists by reporting on their use of social media to affect public opinion.10 This opens up an important dilemma that journalists must negotiate, weighing the social benefits and potential harms of publishing a story that covers extremist right-wing views.11 The contributions in this piece provide a different perspective on the implications of such hatred by focusing on governance and security. Further, while Marwick and Lewis focus on practices of the US alt-right and detail many of the coalitions that are covered below, the present focus is explicitly transatlantic and attempts to take a broader view of the challenges that digitally mediated hatred and right-wing extremism raise for American and European governments.12

In introducing the concept of digital hate culture in the context of politics and international relations, I caution readers against reading digital hate culture as a global phenomenon, though similar cultural processes are at play throughout the world. The digital hate culture referred to here is linguistically, spatially, and culturally bound to Anglophone social media activity in North America and Europe, and it has strong resonances with hate culture and right-wing extremism in other European languages (e.g. French, German, and Swedish). Many scholars have noted that hateful exchange on social media is localized in a multitude of ways and is not the exclusive domain of the white identity politics that this essay focuses on.13 Consequently, it is important that readers remember that my reference to digital hate culture is to a network of users that post predominantly in English and perform an identity rooted in discussions that Jessie Daniels refers to as “networked white rage.”14 While the phrase digital hate culture has a certain generality to it, this naming avoids flattening the internal contestation between the tenuously related but heterogeneous coalitions gathered under the umbrella term “alt-right.”

To effectively combat digital hate culture, we need to understand its formal characteristics. In doing so, I focus on the strategies and tactics used by exponents of digital hate culture rather than the hateful language that they express. This draws out the dangerous elements of its speech and its ungovernability. Digital hate culture goes beyond offense; it employs dangerous discursive and cultural practices on the Internet to radicalize the public sphere and build support for radical right populist parties. By explaining its characteristics, I explore the cultural politics of digital hate and the codes that it uses to flout hate speech laws and content regulation by private actors. In doing so, I argue that digital hate culture is ungovernable, but with the right knowledge and tools democratic processes can work towards managing its dangerous effects. In developing a critical perspective on digital hate culture, I hope to offer policy professionals and researchers new ways of thinking that can better disrupt and destabilize these ungovernable networks.

The Red Pill and White Genocide: Common Sense in Digital Hate Culture

It is important to understand the broad frames that bring digital hate culture together. Digital hate culture builds on a cultivation of common sense amongst its audiences that ultimately seeks to radicalize those who listen. What is unique about digital hate culture is that it is centered on a collective identity rather than the loose, ephemeral connections of coordinated action that scholars of digital cultures have identified.15 The concept of the swarm is a useful starting point for understanding the structure of digital hate culture, a structure that (as I will discuss in the following section) contributes to its ungovernability. Yet the swarm encountered here challenges existing concepts of digital swarms, particularly that offered by philosopher Byung-Chul Han: “For a crowd to emerge, a chance gathering of human beings is not enough. It takes a soul, a common spirit, to fuse people into a crowd. The digital swarm lacks the soul or spirit of the masses.”16

Unlike the generic digital swarm to which Han refers, the swarm that composes digital hate culture is different in quality. Despite its tenuous coalitions and the fragmentation and fracturing that many observers of the “alt-right” have identified, digital hate culture does have a “common spirit” that is based on the tropes of the Red Pill and white genocide.17 Just as with religions or political revolutionaries, there are many ways in which this common spirit is interpreted, expressed, and actualized. This common spirit is a particular characteristic of right-wing cybercultures that have made use of new media in order to expand the audiences with whom they engage. The processes that bind contemporary right-wing extremism online should be understood as forming a community around forms of intimacy, sense, and feeling that are maligned or considered unacceptable in mainstream society.18

Digital hate culture emerged through the appropriation of cultural practices from a range of groups: the so-called “manosphere,” an antifeminist coalition of men’s rights activists, bloggers, pickup artists and alleged experts in sexual strategy, and the Red Pill community; gamer and nerd subcultures; and a recently aligned coalition of neo-Nazis, anti-Semites, Islamophobes, libertarians, Christians, atheists, conservative nationalists, and so-called “race-realists” who profess a eugenic view of interracial competition. This collection of different viewpoints has led to two unifying frames by which we can understand the culture of online hate. The first refers to an idea that emerged from the manosphere called the Red Pill. The second refers to a radicalization of the idea that Western civilization and culture is facing an existential threat from non-white people and the liberals who appease them, which is summarized by the phrase “white genocide.” I provide an overview of these frames in order to illuminate the common sense that unites this group of digital subcultures.

The philosophy of the red pill emerged from the “interconnected organizations, blogs, forums, communities, and subcultures” of the manosphere:

Central to the politics of the manosphere is the concept of the Red Pill, an analogy which derives from the 1999 film The Matrix, in which Neo is given the choice of taking one of the two pills. Taking the blue pill means switching off and living a life of delusion; taking the Red Pill means becoming enlightened to life’s ugly truths. The Red Pill philosophy purports to awaken men to feminism’s misandry and brainwashing.19

To be “red-pilled” in the parlance of the manosphere is to internalize these “truths” and to develop sexual strategy based on exploiting the purported hard-wired sexual inclinations of all women.20 In digital hate culture, this concept has been taken further, particularly after the #Gamergate sexist harassment campaign in 2014 and heightened activity on 4chan, Reddit, and Twitter in response to Donald Trump’s campaign in 2016. Often used as a reference to a state of mind, being “red-pilled” in the context of digital hate culture refers to the idea that leftist political ideologies (which, for the purveyors of hate, encompass the entire spectrum of feminists, Marxists, socialists, and liberals) have deluded the population and conspired to destroy Western civilization and culture.21 The most significant delusion is the notion of equality, which betrays the “truth” that races are inherently unequal and, in doing so, ensures the degradation of white society.22 This departs significantly from the antifeminist masculinity that defines the red pill in the manosphere: it is not simply feminism that is a terrible conspiracy, but the entire “liberal” or “leftist” project that seeks the equality of people of different ethnicities and cultures.

In the context of digital hate culture, the red pill awakens its taker to uncomfortable truths that cannot be spoken in polite society. Almost all of the Internet celebrities prominent in digital hate culture—such as Richard Spencer, the “father” of the alt-right, and Paul Joseph Watson, an editor at Infowars.com—speak about the difficulty of swallowing the red pill. A more recent entrant into digital hate culture, blogger and minor alt-right YouTube celebrity Bre Faucheux, describes what taking the red pill awakens:

Once I became awake to the things that are happening in the West, things such as the implementation of communism through feminism, the intentional shift in demographics through mass immigration, the collapse in national identity, the destruction of the nuclear family, the generational hollowing-out of IQ, the left-wing mainstream media lies, the suppression of knowledge regarding race realism, and the very real threat of white genocide, I found myself in a constant state of melancholy. I couldn’t even go to the mall to buy myself a pair of jeans...without noticing the trends that I had been red-pilled about taking place all around me.23

In this context, taking the red pill means becoming aware of a totalizing view of the West as under threat by both immigrants and a range of intersecting ideologies that “appease” migrants and threaten Western civilization. This presents an extreme worldview in which all migrants, liberals, and leftists are enemies of Western, white society. It parallels the “absolutist worldviews” that sociologists have used to explain the violent radicalization of terrorists.24 While there is a multitude of processes that lead to radicalization and political violence, researchers in the area have shown that identification with extreme worldviews that are “maligned” by mainstream society is an important factor in driving individuals towards political violence.25 Across different strands of extremism, the Internet and its multimedia play a key role in enculturating individuals into extreme communities.26

In a sense, being awakened to white genocide is the outcome of a person’s adoption of the red pill mentality. White genocide, a concept that has had salience in extreme right-wing cybercultures for some time, encapsulates the way this state of mind is mobilized by this network to create a stable, collective identity.27 A recent study of Twitter users found that, as a hashtag, the concept was involved in the recruitment and enculturation of audiences into alt-right, neo-Nazi, and white nationalist ideologies.28 The mentality that white genocide refers to has long been under development; in surveys of white supremacist users of web forums that preexisted sites such as YouTube, Reddit, and Twitter, Dentice and Bugg find that many of their respondents “believed” themselves “to be under attack from unenlightened whites, antiracist activists, and non-white out-groups” and have “crafted a definition of ‘whiteness’ that protects their identity from more mainstream whites who support diversity and display tolerant attitudes towards social issues such as gay rights.”29 White genocide directs the initiate to an extreme worldview that fosters a toxic and potentially violent mentality and serves as a “master frame.”30 This frame has been utilized in electoral politics by radical right-wing populist parties, whose conception of the “pure” people being threatened by allegedly criminal and violent migrants has been a salient frame for the expansion of their social media audiences in Europe and the United States.31 By dropping “Red Pills” on audiences through the use of digital media, online hate cultures engage in a kind of “alternative education” focusing on a white identity politics and the alleged threats it faces.32 By deploying this frame, “the contemporary hate movement is grounded in its ability to repackage its messages of white male supremacy in ways that make them more palatable and appealing to a very different population.”33

Waking up to the purported white genocide is the thread that connects the disparate blogs, forums, social media accounts, and websites on which digital hate culture is built. Thus the red pill mentality assembles collectives out of proponents of different ideologies, commitments, and movements. Sharing, posting, and commenting on white genocide and taking the red pill is a kind of zealotry that transcends the divides between extreme right-wing groups: it seeks to use “alternative media” and “truth” to convert others to the totalizing, extreme worldview that it endorses. In doing so, it creates a cyberculture in which a broad, transnational coalition of groups can congregate and mobilize under a shared mentality and worldview.

The Ungovernable Practices of Digital Hate Culture

The amorphous, fluid structure of digital hate culture prevents it from being easily governed. As a swarm brought together by its shared mentality, it connects a set of agents with a collective identity and sense of community. This swarm is ungovernable, rather than ungoverned, due to three characteristics: its decentralized structure, its ability to quickly navigate and migrate across websites, and its use of coded language to flout law and regulation. It exists in an interstitial zone formed through engagements between users across websites and platforms, taking advantage of different regulatory regimes between governments, the private sector, and civil society. It is important to note that this space is not specific to digital hate culture—any number of actors exploit the regulatory inconsistencies between countries and websites for their own agendas. Digital hate culture exploits these inconsistencies in a strategic manner, which this section will explore with a few examples.

The red-pilled zealot claims to be aware of a truth hidden from and maligned by the allegedly ignorant masses, who are unaware of the conspiracies, ideologies, culture, and governments that are purportedly causing the downfall of Western civilization. In this worldview, all of society is set against the West’s righteous defenders and wants to suppress their truth: Efforts to call them “racist,” brand them guilty of hate speech, or censor their websites or users only strengthen their resolve and buttress the claim that they speak a truth that is being suppressed by power. In doing so, they have managed to spread hate while avoiding legal repercussions and rapidly expanding their audiences.

Digital hate culture exploits the gaps of an interstitial zone in which regulation of content is contested between governments and the private sector. It would be a mistake to refer to this interstitial zone as an “ungoverned space”; as Deibert and Rohozinski explain, the Internet is “very much a governed space” in which the materiality of connections, wires, cables, routers, and signals, as well as code and software, have a significant effect on what actors can do online.34 Attempts by private companies to govern or regulate speech are often contested, which leads to differences between particular websites, servers, and hosts.35 It might be more useful, then, to move to a concept of ungovernability rather than assert that the Internet—and social media in the present case—represents an ungoverned zone. Digital hate culture rapidly migrates from one host that might shut a site down to one with completely different community guidelines or terms of service.36 Social media platforms and forums all have different codes of conduct, some of which are defined bottom-up by users. On Reddit, for example, each subreddit, which can be thought of as a discussion forum pertaining to a specific topic, has its own community regulations, moderators, and codes of conduct. However, Reddit may at times shut down a subreddit at its discretion. By contrast, Twitter and Facebook have general codes of conduct that are specific in their limitations on hate speech, racist speech, nudity and pornography, harassment, and spam, but depend on users to report this content. Other platforms, like the now-infamous 4chan message board, have minimal levels of moderation and regulation.

Differences between web hosts allow digital hate culture to exploit inconsistencies. For example, after the Unite The Right rally in Charlottesville, Virginia, Nazi sympathizer James Alex Fields, Jr. murdered Heather Heyer, a counter-protester, in a vehicular attack. The Daily Stormer, a neo-Nazi website, wrote of Ms. Heyer, “most people are glad she is dead...she is the definition of uselessness.” Following this, both GoDaddy and Cloudflare terminated their contracts with The Daily Stormer, briefly taking it down from the Internet.37 The Daily Stormer has since switched to a new host and maintains a site on the dark web. Ultimately, by navigating web hosts with different regulations and different degrees of willingness to take a content-neutral position, The Daily Stormer found another home on the Internet at a new URL. Without a consensus across all web hosts, it is almost impossible to prevent the migration of digital hate culture to other, less-regulated web hosts. This forces actors trying to disrupt these sites to contend with a decentralized network that can reappear by exploiting different regulations from another provider.

While websites and the services used to host them are important parts of digital hate culture, hate cultures spread many of their ideas through social media platforms, where they are vulnerable to actions taken by the companies that own them.38 On 18 December 2017, Twitter carried out coordinated suspensions of numerous extreme right-wing accounts. In what members of this swarm referred to as the “#TwitterPurge,” numerous alt-right users such as Richard Spencer lost their “Verified by Twitter” status and a number of accounts were shut down. Some of these suspended accounts quickly reappeared. A popular user, Tara McCarthy, who runs an alt-right YouTube series titled Reality Calls, also faced an account suspension. She quickly migrated to another account and has since regained her thousands of followers on Twitter. To thwart Twitter’s attempt at suspension, she merely changed her account from “TaraMcCarthy_14” to another account that had been created in 2015, “TaraMcCarthy444.”

Twitter’s purge also affected activists involved in social movements that exist in both online and offline contexts. Among them were the leaders of Britain First, a notorious counter-jihad group well-known for its intimidation tactics during its mosque invasions and “Christian patrols.” After Britain First’s leaders Paul Golding and Jayda Fransen had their accounts suspended by Twitter, they were courted by the leaders of an emerging “free speech” platform based in Texas, Gab.ai, to migrate their activities there.39 Gab’s lax community regulations on hate speech attract users banned from Twitter. The platform already had a reputation as one of the key hubs for members of the alt-right, anti-Semites, and neo-Nazis, and the event increased its visibility.40

In addition to its cross-platform migration, digital hate culture circumvents government regulation and platform moderation through the dynamic development of coded language, avoiding legal repercussions for its content. Terms like “white genocide” or “rapefugees” (discussed below) allow exponents of digital hate to share their common sense while staying outside the reach of community guidelines on social media platforms and of hate speech laws. These terms are not immediately recognizable as extreme or hateful without an understanding of their context, an area in which social media platforms and governments are struggling to keep up.
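To make this moderation gap concrete, the sketch below shows a minimal term-list filter of the kind that underlies basic keyword moderation. The blocklist and posts are hypothetical, invented purely for illustration rather than drawn from any platform’s actual rules: a static list catches only terms it already knows, so coded neologisms such as “rapefugees” or context-dependent phrases such as “white genocide” pass through until reviewers learn the swarm’s vocabulary, by which point new terms have often been coined.

```python
# Hypothetical sketch of static term-list moderation.
# The blocklist and example posts are invented for illustration only.

BLOCKLIST = {"explicit slur"}  # stands in for terms already recognized as hate speech


def flag_post(text: str) -> bool:
    """Return True if the post contains any term on the static blocklist."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


posts = [
    "post containing an explicit slur",                # caught: matches the list
    "the rapefugees are flooding across the border",   # missed: coded neologism
    "wake up to the white genocide",                   # missed: hateful only in context
]

for post in posts:
    print(flag_post(post), "-", post)
```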


Image 1: “Islamic Rape of Europe,” wSieci Magazine, 2016.41

In 2016, following reports of the rape and sexual assault of women on New Year’s Eve in Cologne, Germany, a Polish magazine ran a story about the “Islamic Rape of Europe.” On the cover was a white woman draped in the European Union flag fighting off brown hands pulling her hair and attempting to grope her. It repeats a common trope in digital hate culture that casts all Muslims as rapists and pedophiles.42 Where its exponents cannot allege that “Asians and Arabs have a predilection for being rapists,” they can—and do—get away with suggesting that “Muhammad was a paedo” and that “Islamic rape gangs” populated with and supported by refugees are running amok across Europe. This is presented as legitimate “criticism” of religion despite its evident bigotry. In exploiting the criminality of a group of non-white men, they take the moral high ground as protectors of European women and indict all Muslims for the same crime. Any attempt at regulation of such language is immediately rebuffed by a swarm claiming oppression: They simply point out that they are only offering a viewpoint that the “politically correct” government and media are trying to silence.

As I write this, a social media stunt quite fortuitously unfolds on Twitter that illustrates how collective identification with this victim narrative is used to mobilize digital protest. Lauren Southern, who has blamed Muslims for the Holocaust and spread anti-Muslim conspiracy theories, was on a bus from France to the United Kingdom when she was denied entry due to the Border Force’s claim that she might incite racial hatred.43 This came alongside the denial of entry of her colleagues Brittany Pettibone and Martin Sellner, prominent leaders in digital hate culture and far-right activism, whom she had earlier joined on the “Defend Europe” mission that sought to disrupt NGO vessels saving migrants coming to Europe via the Mediterranean Sea, endangering migrants’ lives.44 Southern posted photos of the letter she received from the Border Force denying her entry, and very quickly the swarm mobilized its support. Tommy Robinson, founder of the English Defence League and well-known counter-jihad activist, rushed to Calais to interview Southern and rapidly posted a video to YouTube. In less than 24 hours, the pundit Paul Joseph Watson pinned a video about her detention to his Twitter feed in which he compared cases of Muslims who were allowed into the UK and later committed terrorist attacks with the cases of Southern, Pettibone, and Sellner, who were denied entry. Watson sarcastically opines: “It can’t possibly be true that our bureaucracy is so poisoned by political correctness that it keeps 25-year-old conservatives locked up for days over their political opinions while ignoring industrial-scale grooming scandals.”45 All the codes of white genocide are at play in this quote: The bureaucracy is “poisoned,” white conservatives are censored and treated like terrorists, and all the while brown rapists and jihadists are on the hunt. Watson cloaks the implications of his statement, which reinforces the extreme worldview of digital hate culture, in a more reasonable veneer by decontextualizing the actions of Southern, Pettibone, and Sellner and focusing attention on the “problem” Muslims. In proper form for the digital hate culture swarm, a Twitter user tagged Southern, Pettibone, Sellner, Watson, and other alt-right figures in a meme (see Image 2) with a link to a webpage that allowed viewers to make their own version.

"UK Logic” meme in response to detention of Lauren Southern

Image 2: “UK Logic” meme in response to the detention of Lauren Southern.46

In the meme, Lauren Southern is on the top stating, “The West is the best” to a frowning UK border agent, who refuses her entry. On the bottom is a generic jihadi whom the agent is only too happy to allow in. What this illustrates is that the swarm quickly reacts to uses of regulatory power with drama, sarcasm, and humor in order to turn them into social media events, shape the public debate, and maximize its publicity by flooding platforms with misleading content repeating the notion that those expressing “alternative” viewpoints are victims of an out-of-control, illiberal power structure. By assuming the victim position that is hardwired into the cultural practices of digital hate, its exponents are able to increase their media visibility and catalyze support across the swarm, turning legal challenges into PR stunts. In this way, they boost their web presence amongst an audience that assumes that any such attack can only result from the capture of government by politically correct liberals seeking to appease Muslims at the expense of conservative white voices.

Digital Hate Culture as a Security Challenge

There is a growing consensus that digital hate culture is fueling hate crimes and, in more limited cases, terrorist attacks. In a speech at Policy Exchange, a leading right-leaning think tank in London, Mark Rowley, Assistant Commissioner of London’s Metropolitan Police and the UK’s most senior counter-terrorism officer, highlighted the role of online media in influencing both militant Islamists and extreme right-wing attackers. Pointing to Darren Osborne, who was in communication with Tommy Robinson in the weeks prior to his vehicular attack at Finsbury Park Mosque in North London, Rowley stated: “Osborne had grown to hate Muslims largely due to his consumption of large amounts of online far right material including, as evidenced at court, statements from former EDL leader Tommy Robinson, Britain First and others...There can be no doubt that the extremist rhetoric he consumed fed into his vulnerabilities and turned it into violence.”47 Justice Cheema-Grubb, who delivered Osborne’s sentence at Woolwich Crown Court on 2 February 2018, wrote that Osborne’s exposure to extreme right-wing material began after he viewed a documentary about child sexual exploitation perpetrated by British-Pakistani Muslim men and that he was later “rapidly radicalized over the Internet encountering and consuming material put out in this country and the USA from those determined to spread hatred of Muslims.”48 Osborne’s radicalization shows that the ease of accessing such material allows vulnerable, angry people to direct their violent urges at innocent targets, which is in line with findings that those who seek out extremist content on the Internet are more likely to be involved in political violence.49 The fact that such content is readily available on the Internet expedites this process. Osborne was angry about a BBC video he had seen about a case of sexual exploitation, and he turned to the Internet to make sense of it. On Twitter, he encountered Tommy Robinson and Jayda Fransen, both of whom sent him circulars. Robinson’s email was a circular about crimes committed against a woman. He wrote: “Police let the suspects go...why? It is because the suspects are refugees from Syria and Iraq...It’s a national outrage...I know you will be there for her and together, we’ll get her the justice she and her family have been denied.”50 It is likely that Robinson was inviting him to a demonstration in an email sent to hundreds of other followers. What matters in this story is that Robinson never recommended that Osborne execute a vehicular attack; rather, the half-truths and the blanket labeling of Muslims as violent sexual predators and terrorists radicalized Osborne and gave him a rationale and impetus for action.

Such radicalization threatens the security of non-white North Americans and Europeans with politically motivated violence. Recent research on the surge of hate crimes against Muslims in the UK, for example, shows that this surge correlates with the rise of digital hate culture. There is a striking relationship between online digital hate culture and the insults hurled at victims of hate crimes, the threats stuffed in their letterboxes, and the vandalism of their communities. While future research is necessary to establish these causal linkages, this growing body of work has already demonstrated the relationship between online groups, the discourse of digital hate culture, and the growth of hate crimes.51 Digital hate culture ought to be understood as a security threat, but one that requires a coalition of actors to counter.

This would not be the first time that hate and extremism on social media were the objects of scrutiny by law enforcement and the military. When ISIS became increasingly visible on social media, civil society and technology companies responded with a rapid deployment of censorship, counter-narratives, and strategic communication. Shutting down hundreds of accounts and using social media to track and investigate potential extremist activity drastically reduced ISIS’s social media channels, limiting them to encrypted messaging, while the efficacy of its propaganda “significantly diminished.”52 Unlike ISIS’s material, digital hate culture is readily available today. I draw this comparison not to equate the two but to point out that, prior to these actions, it was easy to use YouTube and Twitter to find networks supporting those interested in joining ISIS. Shutting down accounts and forcing content to come down made these pathways less accessible. The difference, of course, is that there is wide consensus that jihadists should not enjoy the benefits of social media. On the other hand, those like the believers in white genocide who cloak their antipluralism and bigotry in half-truths and extreme worldviews are able to exploit digital communication channels. If we take this seriously as a security threat, we need to ask whether censorship by state and non-state actors remains a valid option.

I suggest that a new global discussion on free speech—one that is not undertaken on the terms of radical right populists and the purveyors of digital hate culture—needs to consider the censorship of digital hate culture across governments and technology companies. By attending to the factors that make it ungovernable and reconsidering the limits of free speech in light of extreme right-wing radicalization tactics, governments, technology companies, and civil society can challenge digital hate culture. An example of such a process is underway in the United Kingdom, where a recent inquiry by the House of Commons Home Affairs Committee scrutinized actions taken by Twitter, Facebook, and Google after consulting with a range of groups representing victims of hate crime. The committee recommended that these companies take significantly more action to remove illegal and extremist content and that the UK Government review legislation on hate speech, extremism, and social media.

However, actors are currently focused on one part of a bigger problem: They typically attend to content rather than the cultures and the virtual spaces that these groups inhabit. By focusing on censoring content and banning certain users, they do not address the connective infrastructure that brings the swarm together. In 2015, Reddit shut down a number of subreddits that were dedicated to hate speech, such as “r/fatpeoplehate” and “r/CoonTown.” In a study of what happened to users after that shutdown, scholars found that, despite the migration of those users to other subreddits, hate speech did not increase because the spaces for articulating hate were severely curtailed. Reddit’s intervention limited the connectivity between members of this swarm (at least on Reddit’s website) and disrupted a hateful milieu.53

Rather than focus on content moderation, governments, technology companies, and civil society should cooperate to disrupt the individuals, accounts, and movements that use emotive messaging and extreme worldviews. The #TwitterPurge was something of an experiment in this regard, despite providing publicity for the fringe platform Gab.ai. By taking away verified badges from users like Richard B. Spencer but not from colleagues like Brittany Pettibone, and by failing to take down the accounts to which users migrated, Twitter did not actually shut down the space for extremist discussion and exchange, unlike the removal of entire communities (the approach taken by Reddit). Enough members of these networks maintained their accounts and were able to turn the purge into a media stunt, using their platform to inspire user migration to new platforms. Technology companies and governments need to consider how to increase the barriers that digital hate cultures face in capturing the attention of audiences rather than simply deactivating accounts or censoring content. They can accomplish this by paying closer attention to the tools that digital hate culture uses to flout governance and regulation, and by working closely with other actors to leverage their considerable advantages in data science and user data.

There are design, legislative, and security implications in moving away from content moderation as the primary approach to disrupting digital hate culture, but addressing this gap is crucial to countering the growing specter of antipluralism and hate crime. First, different platforms with different designs have varying options when it comes to disrupting networks. Shutting down a subforum is relatively easy, but a platform like Twitter has to consider how to attack networks such that their users disperse and fragment. It is crucial that technology platforms work closely with a range of civil society actors to inform their disruption activities by considering the authority of speakers, their audiences, and the context in which they speak, in addition to the content they produce.54 Platforms must consider which actors to disrupt and think clearly about the ways in which highly authoritative speakers in online networks exploit collective sentiments to direct hate and anger towards specific groups of people. Waiting for the law to catch up leaves social media vulnerable to being hijacked by extremists. However, there needs to be a legislative conversation based on the formal features of digital hate culture that considers the limits of free speech online and the role of bigotry, extremism, and hate in spreading extreme right-wing views. Legislators need to understand that political violence is being organized through a geographically disparate swarm and that laws need to adapt to this dynamic threat. Finally, disrupting these networks requires a strategic mindset. The racist, antipluralist, and misogynistic dog-whistles of Trump’s campaign and early administration have deepened divisions in the United States and energized digital hate culture while contributing to the fear and insecurity that non-white Americans face. As recent surges in hate crime and incidents of terrorism indicate, failing to deal with digital hate culture will affect the stability of metropolitan areas, alienate populations, and fuel support for autarkic and anti-global policies that will significantly affect American soft power. More importantly, the culture of misinformation and half-truth that this swarm spreads makes voters vulnerable to information manipulation by wealthy political interest groups as well as foreign powers.

In order to effectively manage digital hate culture, its three features of ungovernability need to be clearly understood. Its swarm structure allows it to rapidly mobilize support and spread its worldview quickly across subcultures, communities, and the Internet. This also enables it to dynamically migrate from one place to another when authorities, private companies, or other actors shut down the infrastructure it relies on or censor its accounts. The example of #TwitterPurge is particularly telling in that the unclear selection criteria that Twitter used to shut down accounts only enraged the swarm and mobilized it to encourage users to move to a more extreme space, Gab.ai. In a similar vein, any attempt at censoring or taking legal action against these individuals must contend with their response, which identifies any use of power against them as further evidence of the allegedly repressive liberal establishment. With a clear understanding of how its coded hate operates, and by identifying the infrastructure that this swarm depends on—user accounts, forums, and networks—rather than focusing only on content, digital hate culture can, and should, be effectively managed. While this group will always find another digital home, making it increasingly difficult for audiences to access extreme content is an important step in combating the growing threat of hate in North American and European democracies.


Bharath Ganesh is a researcher at the Oxford Internet Institute working on the Data Science in Local Government and VOX-Pol projects. His current research explores the spread and impact of data science techniques in local governments across Europe and right-wing and counter-jihad extremism in Europe and the United States, and uses big data to study new media audiences and networks. He is developing new projects to study hate speech and extremism online and regulatory responses to this problem. Broadly, Bharath’s research focuses on the relation between technology, media, and society. Bharath holds a PhD in Geography from University College London (2017).

Endnotes

1 Zeynep Tufekci and Christopher Wilson, “Social Media and the Decision to Participate in Political Protest: Observations From Tahrir Square,” Journal of Communication 62, no. 2 (2012), 363–79; Yannis Theocharis et al., “Using Twitter to Mobilize Protest Action: Online Mobilization Patterns and Action Repertoires in the Occupy Wall Street, Indignados, and Aganaktismenoi Movements,” Information, Communication & Society 18, no. 2 (2015), 202–20.

2 Hubert Dreyfus, On the Internet (Routledge, 2001), 75

3 Ibid., 78

4 Angela Nagle, Kill All Normies: Online Culture Wars From 4Chan And Tumblr To Trump And The Alt-Right (John Hunt Publishing, 2017); Christopher Stokel-Walker, “Alt-right’s ‘Twitter’ is hate speech hub,” New Scientist 237, no. 3167 (2018), 15.

5 Nigel Warburton, Free Speech: A Very Short Introduction (Oxford University Press, 2009); Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science 359, no. 6380 (2018), 1146–51.

6 Whitney Phillips, This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture (MIT Press, 2015); Aaron Kessler, “Who Is @TEN_GOP in the Mueller Indictment?,” CNN, 17 February 2018, https://www.cnn.com/2018/02/16/politics/who-is-ten-gop/index.html; Adrienne Massanari, “#Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures,” New Media & Society 19, no. 3 (1 March 2017), 329–46; Savvas Zannettou et al., “The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources,” Proceedings of the 2017 Internet Measurement Conference (2017), 405–417; Savvas Zannettou et al., “What Is Gab? A Bastion of Free Speech or an Alt-Right Echo Chamber?,” ArXiv:1802.05287 [Cs] (2018), http://arxiv.org/abs/1802.05287.

7 Nagle, 40-54.

8 Evan Malmgren, “Don’t Feed the Trolls,” Dissent Magazine (Spring 2017), https://www.dissentmagazine.org/article/dont-feed-the-trolls-alt-right-culture-4chan; George Hawley, Making Sense of the Alt-Right (Columbia University Press, 2017).

9 Alice Marwick and Robyn Caplan, “Drinking male tears: language, the manosphere, and networked harassment,” Feminist Media Studies 18, no. 4 (26 March 2018), https://www.tandfonline.com/doi/abs/10.1080/14680777.2018.1450568.

10 Whitney Phillips, The Oxygen of Amplification (New York: Data & Society Research Institute, 2018), 22 May 2018, https://datasociety.net/pubs/mm/oxygen_of_amplification.pdf.

11 Ibid., 9.

12 Alice Marwick and Rebecca Lewis, Media Manipulation and Disinformation Online (New York: Data & Society Research Institute, 2017), 15 May 2017, https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf.

13 See Gabriele De Seta, “Wenming Bu Wenming: The Socialization of Incivility in Postdigital China,” International Journal of Communication, 12 (2018), 2010-30; Matti Pohjonen and Sahana Udupa, “Extreme Speech Online: An Anthropological Critique of Hate Speech Debates,” International Journal of Communication 11 (2017), 1173-91.

14 Jessie Daniels, “The algorithmic rise of the alt-right,” Contexts 17, no. 1 (2018), 60-65.

15 For two examples on digital social networks and emotion and identity, see Jodi Dean, “Affective Networks,” MediaTropes 2, no. 2 (2010), 19–44; Zizi Papacharissi, Affective Publics: Sentiment, Technology, and Politics, Oxford Studies in Digital Politics (Oxford University Press, 2015).

16 Byung-Chul Han, In the Swarm: Digital Prospects (MIT Press, 2017), 10.

17 Hawley, 24.

18 Les Back, “Aryans Reading Adorno: Cyber-Culture and Twenty-First Century Racism,” Ethnic and Racial Studies 25, no. 4 (2002), 628–651. See also Jessie Daniels, “Cloaked Websites: Propaganda, Cyber-Racism and Epistemology in the Digital Era,” New Media & Society 11, no. 5 (2009), 659–683; Pete Simi and Robert Futrell, American Swastika: Inside the White Power Movement’s Hidden Spaces of Hate (Rowman & Littlefield, 2015).

19 Debbie Ging, “Alphas, Betas, and Incels: Theorizing the Masculinities of the Manosphere,” Men and Masculinities (10 May 2017), online preprint, doi: 10.1177/1097184X17706401.

20 J. B. Mountford, “Topic Modelling The Red Pill,” Social Sciences 7, no. 3 (March 9, 2018), 42; Rachel M. Schmitz and Emily Kazyak, “Masculinities in Cyberspace: An Analysis of Portrayals of Manhood in Men’s Rights Activist Websites,” Social Sciences 5, no. 2 (May 12, 2016), 18.

21 Annie Kelly, “The Alt-Right: Reactionary Rehabilitation for White Masculinity,” Soundings 66, no. 66 (August 15, 2017), 68–78.

22 Heidi Beirich, “Hate Across the Waters: The Role of American Extremists in Fostering an International White Consciousness,” in Ruth Wodak, Majid KhosraviNik, and Brigitte Mral, Right-Wing Populism in Europe: Politics and Discourse (Bloomsbury, 2013); Leonard Weinberg and Elliot Assoudeh, “Political violence and the Radical Right,” in Jens Rydgren, The Oxford Handbook of the Radical Right (Oxford University Press, 2018).

23 Bre Faucheux, “Coping With Being Red Pilled,” Bre Faucheux (blog), 29 January 2017, https://brefaucheuxblog.com/2017/01/29/coping-with-being-red-pilled/. Quote transcribed from Bre Faucheux, “Coping with being Red Pilled,” YouTube video, 12:07, 28 January 2017, https://www.youtube.com/watch?v=tHsiR_DUzwo.

24 Anja Dalgaard-Nielsen, “Violent Radicalization in Europe: What We Know and What We Do Not Know,” Studies in Conflict & Terrorism 33, no. 9 (2010), 797–814; Clark McCauley and Sophia Moskalenko, “Mechanisms of Political Radicalization: Pathways Toward Terrorism,” Terrorism and Political Violence 20, no. 3 (2008), 415-433.

25 McCauley and Moskalenko, 420.

26 Thomas J. Holt, Joshua D. Freilich, and Steven M. Chermak, “Internet-Based Radicalization as Enculturation to Violent Deviant Subcultures,” Deviant Behavior 38 (2016), 855–869.

27 Dianne Dentice and David Bugg, “Fighting for the Right to Be White: A Case Study in White Racial Identity,” Journal of Hate Studies 12, no. 1 (2016), 101; Adam Klein, Fanaticism, Racism, and Rage Online: Corrupting the Digital Sphere (Springer, 2017).

28 J.M. Berger, Nazis vs. ISIS on Twitter: A Comparative Study of White Nationalist and ISIS Online Social Media Networks (George Washington University Program on Extremism: 2016), https://cchs.gwu.edu/sites/cchs.gwu.edu/files/downloads/Nazis%20v.%20ISIS%20Final_0.pdf.

29 Dentice and Bugg, 121.

30 Jens Rydgren, “Is extreme right-wing populism contagious? Explaining the emergence of a new party family,” European Journal of Political Research 44, no. 3 (2005), 413-437.

31 With regards to Europe, see Caterina Froio and Bharath Ganesh, “The transnationalisation of far-right discourse on Twitter: Issues and actors that cross borders in Western European democracies,” European Societies (forthcoming); with regards to the United States, see Bart Bonikowski, “Ethno-nationalist populism and the mobilization of collective resentment,” British Journal of Sociology 68, no. S1 (2017), S181-S213.

32 Olivier Jutel, “American Populism, Glenn Beck and Affective Media Production,” International Journal of Cultural Studies (9 January 2017), online preprint, doi: 10.1177/1367877916688273.

33 Barbara Perry, “‘White Genocide:’ White Supremacists and the Politics of Reproduction,” in Abby L. Ferber, Home-Grown Hate: Gender and Organized Racism (Psychology Press, 2004).

34 Ronald J. Deibert and Rafal Rohozinski, “Under Cover of the Net: The Hidden Governance Mechanisms of Cyberspace,” in Anne Clunan and Harold A. Trinkunas, eds., Ungoverned Spaces: Alternatives to State Authority in an Era of Softened Sovereignty (Stanford University Press, 2010).

35 Tarleton Gillespie, “Regulation of and by platforms,” in Jean Burgess, Thomas Poell, and Alice Marwick, eds., The SAGE Handbook of Social Media (SAGE Publications, 2017); Kate Crawford and Tarleton Gillespie, “What is a flag for? Social media reporting tools and the vocabulary of complaint,” New Media & Society 18, no. 3 (2014), 410-428; Sarah Roberts, “Digital Detritus: ‘Error’ and the logic of opacity in social media content moderation,” First Monday 23, no. 3 (2018).

36 Zannettou et al., “The Web Centipede.”

37 Lily Hay Newman, “The Daily Stormer’s Last Defender In Tech Just Dropped It,” Wired, 16 August 2017, https://www.wired.com/story/cloudflare-daily-stormer/.

38 See Marwick and Lewis, 24-29.

39 Sarah Marsh, “Britain First signs up to fringe social media site after Twitter ban,” The Guardian, 20 December 2017, https://www.theguardian.com/world/2017/dec/20/britain-first-gab-social-media-twitter-ban.

40 Jacob Davey and Julia Ebner, The Fringe Insurgency (London: Institute for Strategic Dialogue, 2017), https://www.isdglobal.org/wp-content/uploads/2017/10/The-Fringe-Insurgency-221017.pdf.

41 Ishaan Tharoor, “The so-called ‘Islamic rape of Europe’ is part of a long and racist history,” The Washington Post, 18 February 2016, https://www.washingtonpost.com/news/worldviews/wp/2016/02/18/the-so-called-islamic-rape-of-europe-is-part-of-a-long-and-racist-history/?utm_term=.37ccbabf79f4.

42 Anton Törnberg and Petter Törnberg, “Muslims in social media discourse: Combining topic modelling and critical discourse analysis,” Discourse, Context & Media 13, no. B (2016), 132-142; Joanne Britton, “Muslims, Racism and Violence after the Paris Attacks,” Sociological Research Online 20, no. 3 (2015), 1–6; Faith Matters, “Facebook report: Rotherham, hate, and the far-right online,” Tell MAMA (2014), accessed March 25, 2018, https://tellmamauk.org/rotherham-hate-and-the-far-right-online/.

43 Hope Not Hate, “Brittany Pettibone and Lauren Southern are not ‘conservative’ activists or ‘journalists,’” Hope Not Hate, March 14, 2018, https://www.hopenothate.org.uk/2018/03/14/brittany-pettibone-lauren-southern-not-conservative-activists-journalists/.

44 Ibid.; Hope Not Hate, “Defend Europe heads to the Med,” Hope Not Hate, July 14, 2017, https://www.hopenothate.org.uk/2017/07/14/defend-europe-heads-med/.

45 Paul Joseph Watson, “The Truth About Broken Britain,” YouTube video, 6:58, 12 March 2018, https://www.youtube.com/watch?v=h45n1CSetiM.

46 @centerrationale, Twitter Post, 12 March 2018, 6:12 p.m., https://twitter.com/centerrationale/status/973366036635504642.

47 Lizzie Dearden, “Darren Osborne: How Finsbury Park terror attacker became ‘obsessed’ with Muslims in less than a month,” The Independent, 1 February 2018, https://www.independent.co.uk/news/uk/crime/darren-osborne-finsbury-park-attack-who-is-tommy-robinson-muslim-internet-britain-first-a8190316.html; Policy Exchange, “Extremism and Terrorism: The need for a whole society response,” Policy Exchange, 26 February 2018, https://policyexchange.org.uk/pxevents/the-colin-cram-phorn-memorial-lecture-by-mark-rowley/.

48 R v. Darren Osborne, Judiciary of England and Wales, “Sentencing Remarks of Mrs Justice Cheema-Grubb,” 2 February 2018, https://www.judiciary.gov.uk/judgments/r-v-darren-osborne-sentencing-remarks-of-mrs-justice-cheema-grubb/.

49 Lieven Pauwels and Nele Schils, “Differential Online Exposure to Extremist Content and Political Violence: Testing the Relative Strength of Social Learning and Competing Perspectives,” Terrorism and Political Violence 28, no. 1 (2016), 1–29.

50 Lizzie Dearden, “Finsbury Park terror suspect Darren Osborne read messages from Tommy Robinson days before attack, court hears,” The Independent, 23 January 2018, https://www.independent.co.uk/news/uk/crime/tommy-robinson-darren-osborne-messages-finsbury-park-attack-mosque-van-latest-court-trial-muslims-a8174086.html.

51 Tell MAMA, Annual Report 2016, The Geography of anti-Muslim hatred (London: Faith Matters, 2016), https://www.tellmamauk.org/wp-content/uploads/pdf/tell_mama_2015_annual_report.pdf; Brian Blakemore, Policing Cyber Hate, Cyber Threats and Cyber Terrorism (Routledge, 2016); Imran Awan and Irene Zempi, “‘I Will Blow Your Face Off’—Virtual and Physical World Anti-Muslim Hate Crime,” The British Journal of Criminology 57, no. 2 (2017), 362–80.

52 Maura Conway et al., “Disrupting Daesh: measuring takedown of online terrorist material and its impacts,” (VOX-Pol: 2017), http://www.voxpol.eu/download/vox-pol_publication/DCUJ5528-Disrupting-DAESH-1706-WEB-v2.pdf.

53 Eshwar Chandrasekharan et al., “You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined through Hate Speech,” Proc. ACM Hum.-Comput. Interact. 1, no. 2 (2017), 1-22.

54 Susan Benesch, “Dangerous Speech: A Proposal to Prevent Group Violence,” Dangerous Speech Project, 23 February 2013, https://dangerousspeech.org/guidelines/.