Terrorist Communications: Are Facebook, Twitter, and Google Responsible for the Islamic State’s Actions?
In May 2016, four of the world’s largest Internet companies pledged to monitor, combat, and prevent terrorists’ use of their social media platforms to conduct operations. One month later, Twitter, Facebook, and Google were sued over deaths caused by the Islamic State in 2015, on the grounds that they had allegedly allowed and facilitated terrorist communication. A growing demand for responsible and accountable online governance calls into question the global norms of cybersecurity and jurisdiction, and the very definition of terrorism. This paper explores the legislative precedent for countering terrorist communications, including the evolution of the First Amendment, communications and information law, and limitations imposed by public opinion. Using these legal trajectories to analyze monitoring and censorship in both past and current counterterrorism strategies, the contours of the future cyber landscape become clear. Cyber norms will imminently and inevitably depend on public-private partnerships, with liability split between the government and the private companies that control the majority of the world’s information flows. It is imperative for actors to identify each sector’s competing and corollary priorities, as well as their legal and normative restrictions, in order to form partnerships that can survive the unpredictable court of public opinion and provide sustainable counterterrorism solutions.
This article appeared in The Cyber Issue in Winter 2016.
In June 2016, Reynaldo Gonzalez filed suit against Twitter, Google, and Facebook for the wrongful death of his daughter in the November 2015 Islamic State attacks in Paris.[1] The lawsuit claims that the defendants knowingly provided material support for terrorist activity via their Web sites and alleges that private media companies should be held liable for any actions resulting from this support.
Is private industry liable for allowing, hosting, and ignoring dangerous online material? Answers to this question should address American and global precedents for governing expression during wartime and peace, the continuing tension between civil liberties and security, and public-private partnerships. Both legal and normative analyses of this question must account for the relationships between private companies and the public, notably the May 2016 pledge by Facebook, Twitter, YouTube, and Microsoft to counter online hate speech. That pledge emerged in the wake of terrorist attacks in Paris, Brussels, and elsewhere, in response to public demand for private industry to take the initiative.
In highlighting these questions, this essay will explore Western legal and political precedents, recognizing the West’s leadership in creating global cyber norms.
Precedents
If you establish a censorship of the press, the tongue of the public speaker will still make itself heard, and you have only increased the mischief. — Alexis de Tocqueville
Wartime Precedents
In November 1755, Ben Franklin said, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”[2] Nearly a century later, Alexis de Tocqueville argued that censorship is not only dangerous but also absurd, particularly in a democracy, and prevents full participation within society.[3] The debate between civil liberties and security pervaded the national psyche even before the United States engaged in arguably its first violent conflict as an independent nation, the Civil War.
As the United States established itself as a war-fighting nation, these absolutist positions evolved. The Espionage Act of 1917 permitted the government to punish anyone who made statements with the intent to obstruct military or civilian wartime activities.[4] In Schenck v. United States (1919), the U.S. Supreme Court introduced the “clear and present danger” test, holding that abridgments of speech do not violate the First Amendment when justified by a clear and present danger. In retrospect, one can argue that the most dramatic upheavals over the First Amendment have overwhelmingly occurred during times of war.[5]
Precedents for the Balance of Security and Freedom
Alongside the tumultuous debate over the relationship between wartime and civil liberties, courts grappled with the effects of expression, particularly within the context of modern wars, which lack definable or limited durations.
Whitney v. California (1927) reasoned that the state, in exercise of its police power, had the ability to punish those who abuse their rights to freedom of speech “by utterances inimical to the public welfare, tending to incite crime, disturb the public peace, or endanger the foundations of organized government and threaten its overthrow.”[6] The decision held that such a tendency, even in the absence of a direct link between expression and result, was a sufficient basis for accountability.
When Terminiello v. Chicago (1949) determined that riot-inciting rants fell within the protection of the First Amendment, Justice Robert H. Jackson dissented:
The choice is not between order and liberty. It is between liberty with order and anarchy without either. There is danger that, if the court does not temper its doctrinaire logic with a little practical wisdom, it will convert the constitutional Bill of Rights into a suicide pact.[7]
In 1969, Brandenburg v. Ohio overruled the Schenck “clear and present danger” standard, holding that the government may punish speech only when it is directed to inciting imminent lawless action and is likely to produce such action, and that people should not be held responsible for unintended consequences.[8]
In the wake of these decisions, policymakers have struggled to define a frame of reference for imminent danger, reasonable expectations, and the relationship between expression and tangible consequences. Justice Oliver Wendell Holmes lamented in 1925 that “every idea is an incitement…no rational application of the concept is possible.”[9] This difficulty is especially relevant in a global community where action can be incited by speech half a world away.
The Need for Norms
Now that actors can express themselves freely and anonymously online, many of these legal frameworks are becoming obsolete. Yet governments have been slow to respond. Political scientist Stuart Gottlieb said:
[T]he 9/11 attacks revealed to one and all that a crafty enemy could orchestrate heinous acts of violence using the architecture of America’s openness – liberal visitor and immigration laws, unfettered freedom of communication, unrestricted domestic travel, and open financial networks.[10]
However, it was not until 2004 that the U.S. government made its first regulatory attempt to address online speech, with the passing of the Global Internet Freedom Act, legislation that, rather than restricting online expression, set protections for it.[11]
Precedent shows that governments have historically been unable to craft timely legislation governing evolving online activity. Here, norms have a greater chance of defining public- and private-sector responsibility, and of holding Internet “owners” accountable. Online stability will be established in the court of public opinion.
I’m not suggesting an abandonment of government regulatory efforts. Rather, these efforts must evolve in full recognition of the eminent, resourceful players, and offer both partnership and incentives for those players to lead.
Online Jurisdiction and Responsibility
In May 2016, four of the world’s most influential Internet companies—Facebook, Twitter, YouTube, and Microsoft—signed a pledge to abide by the European Union Code of Conduct on Countering Illegal Hate Speech Online.[12] The participating parties pledged to “protect not only information or ideas that are favorably received or regarded as inoffensive or as a matter of indifference, but also those that offend, shock or disturb the State or any sector of the population.”[13]
- Twitter’s head of public policy for Europe said, “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society…there is a clear distinction between freedom of expression and conduct that incites violence and hate.”
- Google’s public policy and government relations director said, “We’re committed to giving people access to information through our services, but we have always prohibited illegal hate speech on our platforms.”
- Facebook’s head of global policy management said, “As we make clear in our Community Standards, there’s no place for hate speech on Facebook.”
- Microsoft’s vice president of EU government affairs said, “We value civility and free expression, and so our terms of use prohibit advocating violence and hate speech on Microsoft-hosted consumer services. We recently announced additional steps to specifically prohibit the posting of terrorist conduct.”
Vague statements like these are neither groundbreaking nor widely lauded. Companies have often pledged to monitor, flag, and remove content that runs contrary to their community guidelines. Altruism and good intentions aside, however, it is important to consider whether these pledges would effectively advance the ultimate goals of content policing.
The Realities of Monitoring, Flagging, and Removing Content
A George Washington University study explored the implications of intervening in online terrorist activity.[14] The study monitored suspected accounts over a thirty-day period, following the trajectories of accounts that were suspended and removed, and noting when “similar” accounts re-emerged. The researchers closely tracked the number of quality followers engaging with each account in order to ascertain the “follower effect” of these suspensions. This research speaks to an ongoing policymaker debate over the intelligence gain-loss calculus of online terrorist accounts: some have argued that allowing such accounts to continue operating helps law enforcement officials track external terrorist activity, while others have argued that the mere presence of terrorist propaganda can have devastating consequences for impressionable online users.
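To make that measurement concrete, the sketch below is a minimal illustration, in Python, of how a “follower effect” could be estimated from periodic follower-count snapshots. The account name and all numbers are hypothetical, and the code is not drawn from the researchers’ actual methods or data; it simply compares the audience an account held before its first suspension with the audience it manages to rebuild afterward.

```python
# Minimal sketch (not the GWU study's actual code) of estimating the "follower
# effect" of account suspensions from daily follower-count snapshots.
# The account name and all numbers below are hypothetical.

from statistics import mean

# day -> follower count for one tracked account; missing days mark suspensions
snapshots = {
    "repeat_offender_A": {
        1: 4200, 2: 4350, 3: 4500,      # suspended on day 4
        9: 300, 10: 520, 11: 640,       # returns under a new handle
        20: 150, 21: 210, 22: 260,      # suspended and returns again
    },
}

def follower_effect(history: dict[int, int]) -> dict:
    """Compare audience size before the first suspension with the audience
    the user manages to rebuild across all later returns."""
    days = sorted(history)
    # A suspension shows up as a gap of more than one day between snapshots.
    gaps = [i for i in range(1, len(days)) if days[i] - days[i - 1] > 1]
    if not gaps:
        return {"suspended": False}
    first_gap = gaps[0]
    before = [history[d] for d in days[:first_gap]]
    after = [history[d] for d in days[first_gap:]]
    return {
        "suspended": True,
        "suspensions_observed": len(gaps),
        "avg_followers_before": round(mean(before)),
        "avg_followers_after": round(mean(after)),
        "audience_retained_pct": round(100 * mean(after) / mean(before), 1),
    }

for account, history in snapshots.items():
    print(account, follower_effect(history))
```

On these invented numbers, the account rebuilds less than a tenth of its original audience, which is the shape of result the study reports for repeat offenders.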
The study offered two major conclusions. First, the data demonstrated that suspending accounts for suspected terrorist affiliation both temporarily halted any “terrorist speech” and diminished the size of that user’s audience (or followers) in the long term.[15] Even when suspended users found methods to return to their audiences, either by creating new accounts or finding ways to adopt other accounts as their own, these suspensions typically succeeded in having a “significant, detrimental effect on these repeat offenders, shrinking both the size of their networks and the pace of their activity.”[16]
Second, the study showed that these suspensions led suspected terrorist users to find alternative platforms. Telegram Messenger emerged as a favored alternative to Twitter, due in large part to its willingness to allow the dissemination of official Islamic State propaganda via press releases and statements.[17] Yet even though Telegram remains popular, Islamic State leaders have continued to encourage the group’s supporters to return to Twitter.[18]
Alongside these moves to alternative platforms, Islamic State users have also employed a number of countermeasures in an effort to remain on the Twitter platform, which is evidently favored by the group for its wide dispersal ability and diverse audience. These methods have included reverse shoutouts, ghanima accounts, and account donation.
- Reverse Shoutouts: Through a number of accounts, leaders have assisted Islamic State supporters in rebuilding their audience reach following suspension. One document, attributed to a user called Baqiya Shoutout, provided instructions for communities of users to help rebuild their follower networks, including the use of a disposable e-mail to create a new account, the avoidance of following the randomly suggested accounts provided by Twitter, and the following of major, trusted accounts within the community to avoid the inclusion of bot followers.[19]
- Ghanima Accounts: In mid-2015, Islamic State leaders began circulating instructions to their followers to identify kuffar (unbeliever) accounts on Twitter, usually abandoned by previous owners, and coopt them for their own purposes. These accounts are called ghanima (spoils of war), which are overtaken using a number of measures, including temporary e-mail generators and password resets.[20]
- Account Donation: In an effort to provide friends with available accounts, certain Islamic State supporters create Twitter accounts in large numbers (sometimes with the help of automated applications), and provide them to users whose accounts have been suspended. Using this method, users such as ENG ISIS, a self-proclaimed hacker, have reportedly created more than 52,000 accounts for donation.[21]
The takeaway from the study is that the identification of suspected terrorist Twitter accounts, and their subsequent removal, had a clear short-term impact on the release of information and on the size of the audience exposed to it. The study also demonstrated that continued and repeated suspensions ultimately shrank the size of any suspected user’s following, resulting in longer-term prevention of online propaganda.
With this in mind, the notion that private entities can successfully deplete the resources of terrorist groups lends credence to the proposal that they could theoretically be held liable for continued terrorist activity on their platforms. It is the alleged failure to act on this capability that prompted Gonzalez’s lawsuit in 2016.
Legal Impetus for Jurisdiction and Responsibility
The Gonzalez team alleged that:
For years, Defendants have knowingly permitted the terrorist group ISIS to use their social networks as a tool for spreading extremist propaganda, raising funds and attracting new recruits. This material support has been instrumental to the rise of ISIS, and has enabled it to carry out numerous terrorist attacks, including the November 13, 2015 attacks in Paris where more than 125 were killed, including Nohemi Gonzalez.[22]
Experts cited within the lawsuit corroborated the claim that these platforms directly served the Islamic State, and that the companies’ leaderships had known for years how useful their platforms were to the group and still failed to act. According to FBI Director James Comey, “ISIS has perfected its use of Defendant’s sites to inspire small-scale individual attacks, ‘to crowd source terrorism’ and ‘to sell murder.’”[23] The lawsuit elaborates that the Islamic State successfully used the defendants’ sites to attract more than 30,000 foreign recruits between 2013 and 2016, including as many as 4,500 Westerners and 250 Americans, and has also used the platforms to “spread propaganda and incite fear by posting graphic photos and videos of its terrorist feats.”[24]
According to this argument, the relevant content on the platforms would be considered a conduit to terrorist activity, a category that has historically fallen outside the purview of online monitoring and suspension. Citing former secretary of state Hillary Clinton’s statement that “resolve means depriving jihadists of virtual territory, just as we work to deprive them of actual territory,” the lawsuit argues that the private entities in charge of the platforms should be held responsible for knowingly allowing terrorist activity that was reasonably expected to result in violence.[25]
Digital Jurisdiction—U.S. and Global Regulations
While Gonzalez’s suit introduces the concept of digital liability for online communications and operational activity, it is important to look at regulatory precedents for this liability, many of which suggest that companies like Facebook and Twitter cannot—and should not—be held responsible for private or public communications that occur on their platforms. Historically, their jurisdiction applies only to the platforms themselves, and not to individual content. Precedent also demonstrates the difficulty of linking digital activity to physical actions, particularly because of the amount of time that often passes between the two. This suggests that companies cannot reasonably predict which questionable content should be flagged or removed, or when. For purposes of this paper, the analysis is limited to major U.S. and European regulatory efforts.
U.S. Codes
Part of the 1996 Communications Decency Act, 47 U.S.C. § 230, provides for the protection of private blocking and screening of offensive material, commonly referred to as “safe harbor.”[26] The law espouses a minimalist regulatory approach to Internet platforms (for our purposes, companies like Facebook and Twitter), providing that providers are not the owners of the content posted on their platforms, are not liable for objectionable content posted by users, and should not be held responsible for any action taken to limit access to that content, “whether or not such material is constitutionally protected.”[27] The code shields companies from the liability that comes with “involving oneself” with digital content, and differentiates the liability of a Web site owner like Facebook from that of a company that owns a physical space and might be held liable for someone injured within that space.
Another law, 18 U.S.C. § 2339A, creates liability for persons who provide or facilitate “material support” to any terrorist actor or conspiracy.[28] Although the Gonzalez lawsuit does not invoke this code, the precedent of liability for terrorist association and support, whether conscious or not, raises an important question within the digital space. International law has nominally defined the levels of association that link facilitators to terrorist actions; someone housing a terrorist for a night, for example, could be held responsible for that terrorist’s actions. The code thus suggests a framework under which the hosts of digital servers used for terrorist communications could be held liable.
European Framework Decision on Combating Racism and Xenophobia
The social media pledges to the EU Code of Conduct stemmed from the EU’s original Framework Decision on Combating Racism and Xenophobia. The framework defines forms of conduct that are punishable as criminal offenses, including “public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, color, descent, religion or belief, or national or ethnic origin.”[29] Such incitement can also take the form of the online distribution of pictures or other material, a common strategy on the part of the Islamic State. The framework also deems it criminal to publicly condone, deny, or grossly trivialize crimes of genocide, crimes against humanity, and war crimes, as defined by the statute of the International Criminal Court.
While liability for the above offenses (notably the direct publication of material inciting violence) would rest with individual users rather than with the service providers, the framework also states that “instigating, aiding or abetting in the commission of the above offenses is also punishable.”[30] Here, one can draw parallels with the U.S. difficulty of identifying facilitators of terrorist crimes. How does one aid and abet an extremist Facebook post? If a user likes a post, is he or she liable? If a user views the post and does not report it, thereby allowing it to spread, is that also criminal facilitation?
EU E-Commerce Directive
Article 15 of the EU E-Commerce Directive expands on this notion of digital liability, addressing the extent of the obligation that service providers have to monitor and identify troubling content. It states:
[M]ember States shall not impose a general obligation on providers when providing the services covered by Articles 12, 13 and 14, to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activities.[31]
While it is important first to ascertain whether Facebook and Twitter would fall under this “provider” classification, the second half of the provision is more significant: member states may oblige providers to promptly inform the competent public authorities of alleged illegal activities on their services. Although it is left to governments to impose this reporting duty, such a standard could legitimately hold a private entity liable for failing to report, either in a timely manner or at all.
Implications for Digital Norms
These notions of facilitation, allowance, and justifiable ignorance underpin the allegations in the Gonzalez lawsuit, and they challenge existing precedent. Gonzalez is not suing the content creators or the terrorist actors, nor is he suing the defendants for negligence or for failing to notice extremist content. Rather, the lawsuit alleges that once companies identify dangerous content that could reasonably be expected to result in terrorist violence, they are legally obligated to remove it.
This is an important accusation, and it challenges a number of established digital and legal norms regarding the scope of responsibility.
First, the suit suggests that service providers have the same level of ownership over their platforms as private landlords do over their physical spaces. This suggestion holds them responsible for any activity that takes place within their confines—in this case, without regard to the cyber borders that digital historians and lawyers have struggled, and largely failed, to define geographically. The lawsuit simply disregards that normative melee.
Second, the prosecution attempts to assign liability to private actors for failing to actively combat online behavior. It is important to note that this liability, in this specific lawsuit, does not call on companies to increase their monitoring or ensure that they have an adequate ability to identify the behavior in the first place. It presumes that these companies already have this capability, and that any allowance of extremist behavior is done knowingly and consciously. In this case, proven ignorance would in fact be bliss.
Third, Gonzalez’s criticism of the platforms essentially treats “dangerous behavior” as an objective determination, one that most reasonable parties could identify and grade for the danger associated with any given piece of content. This is outrageously broad. Of course it seems logical that a Facebook security department could identify radical language, recruitment communications, or direct calls for public violence. However, this fails to acknowledge the social media savvy that many terrorist groups, and the Islamic State in particular, possess. Code words and private messages are only a few of the countermeasures that extremists use, making it difficult to determine the level of liability that could be directed at private companies for failing to act on certain pieces of content. Timing also plays a key role in this dilemma. At what point should service providers have recognized extremist content patterns and determined that external, non-digital activity was resulting from that content? If Facebook does not report worrisome content immediately, has it already shown gross negligence, or is this passive facilitation of online terrorist activity?
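The difficulty is easy to see even in a toy example. The sketch below, in Python, shows a naive keyword filter of the sort a reasonable observer might imagine a platform running; the watchlist, messages, and code words are invented for illustration and bear no relation to any company’s actual moderation systems. The filter catches explicit language but misses coded phrasing entirely, leaving the harder judgment to human reviewers.

```python
# Minimal sketch, not any platform's actual moderation logic: a static keyword
# watchlist applied to two hypothetical posts, illustrating why "obvious danger"
# is hard to automate when extremists use coded language.

FLAGGED_TERMS = {"attack", "recruit", "weapons"}  # illustrative watchlist only

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any term from the static watchlist."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

explicit = "Join us and attack the unbelievers"
coded = "The wedding is on Friday, bring your gifts"  # hypothetical code words

print(naive_flag(explicit))  # True  -- the "obvious" case a filter catches
print(naive_flag(coded))     # False -- coded language slips through unchanged
```

A filter like this also says nothing about when a flagged post should have been recognized as dangerous, which is precisely the timing question the lawsuit leaves open.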
Fourth, in an extension of the third observation, it would certainly be difficult to judge the level of danger associated with content, and the reasonable expectation that it might escalate outside the digital realm. Within this scope, could one argue that liability and culpability increase with a user’s level of influence? If Twitter flags an account with a greater number of legitimate followers, is it right to assume that content from that account is inherently more dangerous than identical content from a less popular account? The presence of crowds played a huge part in the First Amendment arguments of the twentieth century. If an online following is crowd-like, should Twitter knowingly anticipate that violence might occur? Can it be held liable if it does not? This concept could also differentiate the respective social media platforms, as certain platforms might statistically be more closely affiliated with terrorist activity, and more likely to be an integral part of extremist planning. By contrast, the Gonzalez lawsuit assumes that all defendant platforms were similarly used for the planning behind the Paris attacks, and does not differentiate between their liability levels.
Lastly, with regard to legal precedent, it is crucial to consider the connection between extremist content, private liability, and the First Amendment. While government entities have historically been limited in their censorship abilities under the First Amendment, private companies carry no such burden. Considering the pledge that Facebook, YouTube, Twitter, and Microsoft made to abide by EU governmental regulations, does this hold them accountable to the same standards as governments, or does it grant them greater freedoms within the same scope? The legal implications of digital partnerships will surely continue to pervade the global debate.
Recent discussions regarding cyber proxies have yielded similar questions, as experts have theorized that hacker groups that are not affiliated with state governments could legally perform similar operations under less national and international scrutiny. Perhaps the notion of public-private partnerships offers benefits greater than the combination of resources and expertise—or perhaps they allow actors to reach for the same goal of eradication, under broader and more inclusive standards.
There is a demonstrable public call for action on this front. In August 2016, a campaign titled #DisarmTheiPhone rallied for Apple to remove the gun emoji from all Apple devices.[32] The campaign succeeded. However, while some applauded the initiative to prevent inflammatory and triggering speech, others argued that fighting for government limitations is meaningless when a select few private-sector actors control the most influential media platforms and the largest numbers of online users. Still others have said that online expression is more likely to incite malleable children with nearly unlimited online access. Does the immaturity of a platform user demonstrate a greater expectancy of radicalization? Does the quantity of platform users, or more specifically the number of Twitter followers, demonstrate a greater intent to reach a large audience?
These questions are enormously complex. Answers must consider public concerns about overzealous censorship and the slippery slope of judging the reasonable expectation that an online user will become radicalized. Certain online activity may have obvious terrorist tendencies; other activity may simply represent a disgruntled youth looking for non-violent expressive outlets online. To this day, our ability to determine who is most at risk of radicalization is limited. If the government cannot identify these vulnerable populations, and companies have yet to do so, how can we expect individual online users to judge, with any reasonable expectation, the effect their own expression will have?
Even as companies carefully define what they see as “clear and present danger” or reasonable expectations, the public will continue to argue that expression cannot be pigeonholed. It is a common worry that companies looking to monitor or censor such activity will inherently infringe upon First Amendment rights and follow the agendas of corporate leadership (who, notably, are not held accountable through democratic processes).
Given fears of dictatorial companies and the control they possess over billions of Internet users worldwide, these questions cannot be set aside for years, or until legislation begins to address modern concepts of Internet ownership. They need to be addressed and answered quickly in the public domain, so long as we continue our yet-to-be-defined and seemingly endless war on terror.
With these concerns in mind, and given the limitations that governments and the private sector each face, portraits of likely cyber futures converge on a single picture: cyber norms will inevitably depend on public-private partnerships, resulting in shared safety responsibilities and public accountability. Of course, each sector faces unique barriers to conducting successful counterterrorism strategies, and these must be addressed. While responsibility should naturally fall to whichever actor is best equipped to mitigate risks at any particular time, these barriers will influence how heavily the partnerships weigh on each sector. Government control could either face inherently less scrutiny, thanks to mandatory disclosures and freedom-of-information opportunities, or incite protests over privacy infringements. Information companies, meanwhile, could be held hostage by citizens wielding their financial capital, since corporate survival depends directly and inescapably on sustained profit. If the majority of Twitter users boycott the platform over a botched censorship attempt, Twitter is likely to prioritize its financials over its patriotic duty.
These public-private enterprises will necessitate a complex balancing of priorities and stakeholders. It is imperative that actors account for these varying priorities—profit, constitutional rights, national security—and delegate responsibility in a way that best combats vulnerabilities.
Nicole Softness is a graduate student at Columbia University’s School of International and Public Affairs, studying international security policy.
Notes
[1] Gonzalez v. Twitter, Inc., Google, Inc., and Facebook, Inc., 16 CV 03282-DMR (filed N.D. Cal., 14 June 2016).
[2] “Reply to the Governor,” Votes and Proceedings of the House of Representatives, 1755-1756 (Philadelphia, PA: Pennsylvania Assembly, 1756), 19-21.
[3] Alexis de Tocqueville, Democracy in America: Historical-Critical Edition of De la Démocratie en Amérique, Eduardo Nolla, ed., James T. Schleifer, trans. (Indianapolis, IN: Liberty Fund, 2010).
[4] Act of 15 June 1917, ch. 30, § 3, 40 Stat 219, as amended 18 U.S.C. § 2388(a) (1964).
[5] Thomas I. Emerson, “Freedom of Expression in Wartime,” University of Pennsylvania Law Review 116, no. 6 (1968), 975.
[6] Whitney v. California, 274 U.S. 357 (1927).
[7] Linda Greenhouse, “The Nation; ‘Suicide Pact,’” New York Times, 21 September 2002.
[8] Brandenburg v. Ohio, 395 U.S. 444 (1969).
[9] Gitlow v. New York, 268 U.S. 652 (1925).
[10] Stuart Gottlieb, ed., Debating Terrorism and Counterterrorism, 2nd Edition (CQ Press, 2013), 338.
[11] Global Internet Freedom Act, H.R. 48, 108th Cong. (2003-2004).
[12] Julia Fioretti and Foo Yun Chee, “Facebook, Twitter, YouTube, Microsoft Back EU Hate Speech Rules,” Reuters, 31 May 2016.
[13] European Commission, Code of Conduct on Countering Illegal Hate Speech Online, May 2016.
[14] J.M. Berger and Heather Perez, “The Islamic State’s Diminishing Returns on Twitter: How Suspensions Are Limiting the Social Networks of English-speaking ISIS Supporters” (occasional paper, George Washington University, February 2016).
[15] Ibid., 9.
[16] Ibid., 9.
[17] Ibid., 18.
[18] Ibid., 19.
[19] Ibid., 15.
[20] Ibid., 16.
[21] Ibid., 17.
[22] Gonzalez v. Twitter, Inc., Google, Inc., and Facebook, Inc., 2.
[23] Ibid., 3.
[24] Ibid., 9 and 12.
[25] Ibid., 22.
[26] Protection for Private Blocking and Screening of Offensive Material, 47 U.S.C. § 230 (1996).
[27] For purposes of this paper, we operate under the premise that companies such as Facebook, Twitter, YouTube, Google, and Microsoft are owners of platforms and services, and that, while they are not considered to be owners of the content published by users on their platforms, they are accountable for consequences resulting from their platforms, and do have jurisdiction over those platforms. This differentiates these companies from media companies or newspapers.
[28] Providing Material Support to Terrorists, 18 U.S.C. § 2339A (1994).
[29] Council of the European Union, Framework Decision 2008/913/JHA on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law, 28 November 2008.
[30] Ibid.
[31] European Parliament and Council of the European Union, Directive 2000/31/EC on Electronic Commerce, Article 15, 8 June 2000.
[32] Jonathan Zittrain, “Apple’s Emoji Gun Control,” New York Times, 15 August 2016.