Robot Wars: How Bots Joined Battle in the Gulf

In the summer of 2017, Twitter bots—automated accounts—were deployed to boost messaging on both sides of the diplomatic dispute between Saudi Arabia and Qatar. Some appeared commercial, made available by users outside the Gulf region to anyone willing to pay for them. Others appeared locally focused. Together, such bot deployments significantly distorted Twitter traffic. The Gulf dispute showcases the range of different bots and their impact.

By
Ben Nimmo
September 19, 2018

As the diplomatic row between the Kingdom of Saudi Arabia and the State of Qatar raged in the summer of 2017, the two waged a parallel battle on Twitter.1

Time and again, on both sides of the argument, automated networks of high-volume Twitter accounts—otherwise known as “botnets,” short for robot networks—amplified each side’s messages and boosted their hashtags.

These botnets took a range of forms. Some appeared commercial, hired from abroad to provide quick and unsubtle amplification. Others appeared to be locally based, posing as citizens of Saudi Arabia or Qatar in an apparent attempt to pass unnoticed. At least one botnet seemed based in Turkey, joining the fray in mid-September in a bid to support Qatar.

A study conducted in cooperation with the BBC Arabic Service revealed thousands of bots in more than a dozen different botnets that were active between May and September 2017.2 These were the most obvious networks; it is likely that the two sides deployed many more.

Together, they seriously distorted the conversation on Twitter, creating artificial spikes and surges in traffic as the two sides struggled to push their messages out. They represent a case study in the variety, scale, and impact of bot intervention in online debate.

Defining Bots

On Twitter, a bot is a user account pre-programmed to behave in a certain way without human intervention.3 Many bots are apolitical; for example, they can share news stories, extreme weather alerts, or even photographs or poetry.

Some bots, however, are used for political influence. They typically try to hide their automated nature, using a human profile picture and biography. Such bots spread favorable political messages or falsehoods.4 They can also amplify posts and hashtags, attempting to make them trend.5 Typically, such bots post at far higher rates than human users, with the most active bots posting more than 1,000 times a day.

There are many ways to identify bots, and developers have created a number of online tools to ease the process.6 A rule of thumb published by the Atlantic Council’s Digital Forensic Research Lab is to look at their activity, anonymity, and amplification.7 An account is likely to be a bot if it posts, on average, more than 144 times a day—the equivalent of one post every five minutes for twelve hours at a stretch—gives no verifiable personal information, and exclusively posts retweets or shares from websites.
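The "activity, anonymity, amplification" rule of thumb lends itself to a simple screening script. The Python sketch below is one possible encoding of it; the 144-posts-per-day threshold comes from the rule described above, while the AccountSummary fields and the retweet-share cutoff are illustrative assumptions, not part of any published methodology.

```python
from dataclasses import dataclass

# Threshold from the rule of thumb: one post every five minutes, twelve hours a day.
ACTIVITY_THRESHOLD = 144


@dataclass
class AccountSummary:
    posts_per_day: float   # average posts per day over the account's lifetime
    has_real_name: bool    # any verifiable personal information in the profile
    has_biography: bool
    retweet_share: float   # fraction of recent posts that are retweets or shared links


def looks_like_bot(account: AccountSummary) -> bool:
    """Flag accounts that trip all three indicators: activity, anonymity, amplification."""
    hyperactive = account.posts_per_day > ACTIVITY_THRESHOLD
    anonymous = not (account.has_real_name or account.has_biography)
    amplifier = account.retweet_share >= 0.99  # (almost) exclusively retweets/shares
    return hyperactive and anonymous and amplifier
```

In practice such a filter only produces candidates; manual inspection of the flagged accounts, as in the cases described below, remains the decisive step.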

A botnet is a group of bots that are all pre-programmed to do the same thing.8 Botnets can range in number from a few dozen to more than 100,000; there are rumors of botnets numbering in the millions lying dormant online.9 Simpler botnets often feature many accounts with similar or identical profile pictures and biographies, and post the same content at the same time. This makes them relatively easy to identify.
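Because the simplest botnets post identical content at identical times, a first pass at finding them can be as crude as grouping tweets by text and timestamp. The sketch below assumes a list of (account, timestamp, text) records has already been collected; the minimum group size of 20 is an arbitrary illustrative threshold.

```python
from collections import defaultdict
from datetime import datetime
from typing import Iterable, List, Set, Tuple


def candidate_botnets(
    tweets: Iterable[Tuple[str, datetime, str]], min_size: int = 20
) -> List[Set[str]]:
    """Group accounts that posted identical text in the same second."""
    groups = defaultdict(set)
    for account, timestamp, text in tweets:
        key = (timestamp.replace(microsecond=0), text)
        groups[key].add(account)
    return [accounts for accounts in groups.values() if len(accounts) >= min_size]
```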

Bots in the Qatar-Saudi Row

Bots were deployed heavily to support both sides in the Saudi-Qatar dispute. Their presence throughout the row illustrates a number of important points on the use of bots in online political debates.

First, it shows the sheer range and prevalence of automated accounts. Whether local or foreign, political or commercial, they significantly contributed to online traffic.

Second, their presence was on such a large scale that it clearly distorted the messaging. The following analysis only covers the most obvious bot incidents; many more are likely scattered through the traffic. The bot accounts involved numbered in the thousands and accounted for thousands of posts.

Third, similar to manipulation attempts observed elsewhere,10 the bots formed one element of more sustained and sophisticated internet campaigns, which also featured apparently coordinated human users.

Finally, the deployment and fate of these botnets show the capabilities—and the limitations—of Twitter. The bots distorted traffic considerably during the summer of 2017, showing their impact. However, most were suspended or fell dormant before the end of the year, highlighting Twitter’s increasing will and ability to crack down on mass automated posting.11

Commercial Botnets

Some of the earliest botnets deployed in the Saudi-Qatar dispute appear commercial. Such botnets are a widespread feature of life on Twitter; they can typically be identified by the incoherent range of subject matter on which they post, often involving many languages and many advertisements.

Botnets such as these are created en masse and rented out to any user who is willing to pay for retweets, likes, and follows—either for their own account or for somebody else’s. They are thus the easiest and quickest way to obtain artificial amplification.12

On 24 May 2017, for example, pro-Qatar users launched the hashtag #قطرليستوحدها, or “Qatar is not alone.”13 An automated scan of tweets that used the hashtag, charting the frequency of posts minute by minute, showed two significant spikes in activity in the initial hours, with traffic more than doubling in the course of a single minute. Spikes such as these are often an indicator of botnet involvement.14
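A minute-by-minute scan of this kind can be approximated with a simple frequency count. The sketch below flags any minute in which traffic at least doubles relative to the minute before; the input is assumed to be a list of tweet timestamps, and the doubling factor mirrors the pattern seen on this hashtag rather than a general rule.

```python
from collections import Counter
from datetime import datetime, timedelta
from typing import Iterable, List, Tuple


def traffic_spikes(
    timestamps: Iterable[datetime], factor: float = 2.0
) -> List[Tuple[datetime, int, int]]:
    """Return (minute, count, previous_count) wherever traffic jumps by `factor` or more."""
    per_minute = Counter(ts.replace(second=0, microsecond=0) for ts in timestamps)
    spikes = []
    for minute in sorted(per_minute):
        previous = per_minute.get(minute - timedelta(minutes=1), 0)
        if previous and per_minute[minute] >= factor * previous:
            spikes.append((minute, per_minute[minute], previous))
    return spikes
```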

The first spike, at 19:26, consisted entirely of retweets of a single post: a prayer of support for the Emir and people of Qatar.15 This post was retweeted 300 times, with scores of retweets coming in just two seconds, at 19:26:37 and 19:26:38.

A detailed visual analysis of the accounts that retweeted this post confirmed they were bots. For example, three simultaneous retweets came from accounts named “Blue,” with the handles @81stFT, @82ndFT, and @91stFT.16 All were just a few days old, but had posted more than 5,000 times each. All shared the same content, in a variety of languages—mostly not in Arabic. All only posted retweets. These were obviously bots, but commercial ones, with no discernible interest in Gulf affairs or in geopolitics more broadly.

Another account called @AmelieTrand, which retweeted the same post in the same second, was also a bot. Its profile picture came from an online stock photo provider called Pexels,17 and was also used by a commercial Russian bot called @290Veronica identified in May 2017.18

The Amelie account also appears commercial, posting retweets in Russian, Arabic, and English. The predominance of Russian-language posts and the use of the same profile picture as “Veronica” suggest this was a Russia-based commercial bot, but a solid identification would require more evidence, such as a Russia-registered web address or a large number of authored posts.

Supporters of Saudi Arabia also turned to apparently commercial bots to promote their messages. On 21 July, supporters of Qatar launched another hashtag, #تميم_المجد, or “Tamim the Glorious,” in honor of the Emir. In response, supporters of Saudi Arabia used a botnet to attack the hashtag. The attack began when an account called @al_muhairiuae posted a photoshopped image of the Emir designed to make him look foolish.19

The tweet was retweeted almost simultaneously by a series of apparently commercial bots, many of them with the color “pink” or “pinky” in their usernames, and featuring Korean pop imagery in their profile pictures. Usernames included @PinkyRed9, @PinkyBrown13, and @PinkyWhite13.20

These accounts posted identical content in a mixture of English and Arabic on a range of mostly non-political themes. Combined with the Korean imagery, this suggests they were a commercial botnet that an unknown user rented to amplify the anti-Qatar tweet and to subvert the pro-Qatar hashtag, which had only just started to trend.

Local Botnets

Other botnets involved in the dispute appeared locally focused. For example, the second spike in traffic on “Qatar is not alone,” described above, was generated by more than 200 retweets of a post from an account called @albajran, relating a call between the Emir of Qatar and the Emir of the State of Kuwait.21

These retweets came a few seconds apart from a group of accounts that used photographs of famous footballers as their profile pictures. These accounts behaved like bots, posting only retweets, often in the same order and at the same time. Some of their content was political, but it was interspersed with what appeared to be automated tweets quoting the Quran and the Hadith. They thus constituted a botnet, but one that was locally focused and probably locally run, rather than a commercial, foreign-based one.
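Retweeting the same posts in the same order is a strong coordination signal, and it recurs in the pro-Saudi networks described below. A minimal way to surface it, assuming each account's recent retweet IDs have already been collected into a dictionary, is to group accounts by their exact timeline fingerprint, as in the sketch below; the window of 50 retweets and the minimum group size of 5 are illustrative choices.

```python
from collections import defaultdict
from typing import Dict, List, Sequence


def identical_timelines(
    timelines: Dict[str, Sequence[str]], last_n: int = 50, min_group: int = 5
) -> List[List[str]]:
    """Group accounts whose most recent retweets are identical and identically ordered."""
    fingerprints = defaultdict(list)
    for handle, retweet_ids in timelines.items():
        fingerprints[tuple(retweet_ids[:last_n])].append(handle)
    return [handles for handles in fingerprints.values() if len(handles) >= min_group]
```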

Local botnets also boosted pro-Saudi posts. Most of these had a more overtly political flavor, systematically amplifying pro-Saudi news and opinions. For example, on 27 August 2017, the General Supervisor of the Center for Studies and Media Affairs at the Saudi Royal Court tweeted a series of posts launching the hashtag #قذافي_الخليج, or “Qadhafi of the Gulf,” an insult aimed at the Emir of Qatar.22

A mixture of humans and bots amplified these posts explosively, achieving more than 18,000 retweets in the course of the scan. Among the amplifier accounts were apparent bots @lamia_gh99 and @Lhe1112, both of which almost exclusively posted retweets and whose most recent posts featured identical retweets in identical order.23 Dozens of other accounts, now suspended, posted exactly the same retweets at the same time, marking this as a botnet.

These and other bot insertions that tweeted pro-Saudi messages were relatively crude, featuring large numbers of retweets in just a few seconds; on one occasion, 54 retweets in one second. The accounts largely focused on local issues, posted in Arabic, and had names and profile pictures appropriate to the region. Some even gave Saudi Arabia as their location. This suggests that the pro-Saudi messaging was supported by domestically focused, and perhaps domestically run, botnets rather than by commercial ones.

Enter the Turks

One of the more intriguing bot insertions came on 19 September 2017 and featured a revival of the “Tamim the Glorious” hashtag.24 The hashtag performed strongly on its relaunch, generating more than 90,000 retweets in the space of a few hours. A machine scan recorded two distinct spikes in traffic, which appeared driven by botnets.

Interestingly, a network of obvious bots whose primary language and focus appeared to be Turkish drove the first spike, at 18:39. These bots retweeted the same pro-Qatar post 449 times in just two seconds.25

These bots were suspended by Twitter before they could be archived;26 their hyper-rapid retweeting of identical posts clearly triggered Twitter’s automated detection systems. However, prior to joining the traffic on Qatar, their posts were all in Turkish and concerned life and sports in Turkey. As such, they were most probably a Turkish botnet.

Their insertion is intriguing because it mirrors the support Turkey showed to Qatar throughout the row. Someone in Qatar could have hired the bots; equally, the person behind the Turkish botnet could have made them available voluntarily. It is theoretically possible that a user in another country set them up to mimic a Turkish botnet, although creating so many apparently Turkish accounts—a relatively intricate and time-consuming operation—for a single short-lived intervention would be unusual.

With the accounts suspended, there is insufficient evidence to draw a firm conclusion. Nevertheless, their intrusion into the Gulf debate casts an intriguing international sidelight on the larger diplomatic struggle.

Bots and the Minister

One final, and striking, feature of the pro-Saudi messaging was the extent to which it was driven by a single user: General Supervisor al-Qahtani. He launched pro-Saudi or anti-Qatari hashtags repeatedly during the dispute, triggering an explosion in the volume of traffic and generating so many retweets that they dominated the debate. For example, the machine scan of posts on “Qadhafi of the Gulf” showed how traffic on the hashtag rose almost vertically, reaching peak flow within the first few minutes, and then declining sharply but with a number of distinct spikes in the traffic.

The combination of an almost vertical takeoff in the traffic and a series of brief but intense secondary spikes strongly suggests artificial amplification. An eyeball scan of the first few minutes of traffic suggested that it came from a mixture of human users—some with verified accounts—and bots; analysis of the subsequent spikes showed that they were primarily bot-driven.

Once again, most of the bots involved appeared locally focused rather than commercially available: their posts were in Arabic and focused on local personalities and events, suggesting locally based botnets. This need not mean that they were government-run, given that commercially operated botnets are available in many countries. However, it does distinguish them from the more overtly foreign and commercial bots that gave Qatar some of its support.

The scan also revealed how much al-Qahtani shaped the traffic. Of the ten most-retweeted posts to use the hashtag that day, the top four came from his account. Together, they generated 18,662 retweets, accounting for almost 55 percent of all traffic on the hashtag over the period scanned. Al-Qahtani’s posts regularly dominated traffic on other hashtags that he launched during the dispute. Retweets of his posts, without additional comment, accounted for anywhere from 25 percent to 65 percent of all traffic on the hashtags, a highly unusual dominance for any user.
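The dominance figure can be computed directly from a hashtag scan by dividing plain retweets of one user's posts by the total number of tweets collected. The sketch below assumes each scanned tweet has been reduced to a small record; the field names are illustrative assumptions, not a real Twitter API schema.

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class ScannedTweet:
    is_retweet: bool
    retweeted_author: Optional[str]  # handle of the original author, if a retweet
    has_comment: bool                # True for quote tweets with added commentary


def author_share(tweets: Iterable[ScannedTweet], author: str) -> float:
    """Fraction of scanned traffic that is plain retweets of `author`'s posts."""
    tweets = list(tweets)
    if not tweets:
        return 0.0
    plain_retweets = sum(
        1 for t in tweets
        if t.is_retweet and t.retweeted_author == author and not t.has_comment
    )
    return plain_retweets / len(tweets)
```

Applied to the “Qadhafi of the Gulf” scan, this ratio for al-Qahtani’s account would come out at roughly 0.55.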

As with “Qadhafi of the Gulf,” machine scans of his other hashtag drives revealed an almost vertical takeoff in the traffic, a very early peak, a rapid decline, and a series of secondary spikes, all of which are characteristics of artificial amplification. However, manual scans of the accounts that amplified him revealed a large proportion of apparently human users. This pattern is typical of coordinated human users reacting to an agreed signal, albeit with additional bot amplification.

Conclusion

The Gulf powers fought out their dispute on Twitter as much as through diplomatic channels. Both sides deployed bots of various types run by both domestic and foreign sources to boost their competing messages. The focus of these Arabic-language hashtags was clearly local and regional rather than international; this was a question of messaging to the domestic population and to Arabic-language rivals, rather than the non-Arabic-speaking world.

The Twitter campaigns distorted the online debate and artificially inflated the chosen hashtags. Bots were the primary vehicle, but there are indications that coordinated human teams may have played a significant role, especially on the Saudi side.

There are insufficient data to show whether these online campaigns had any effect on the offline conversation. What is clear is that they visibly distorted traffic on Twitter itself. The platform became a battleground for the feuding governments and their supporters. On Twitter, at least, the science-fiction idea of a war between robots became a reality.


Ben Nimmo researches patterns of online disinformation and influence for the Atlantic Council’s Digital Forensic Research Lab (@DFRLab). A former journalist with dpa, the German press agency, and a former NATO press officer, he specializes in exposing attempts to manipulate online discourse through false narratives or false social-media accounts. His research into bots on Twitter has included exposing a Russian botnet that was amplifying election-related messaging in Germany, an American botnet amplifying political messaging in South Africa, and a 100,000-strong botnet that was used to harass journalists writing on Russia.



Notes

1 The dispute began in early June 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt cut ties with Qatar, accusing it of supporting terrorist groups and siding with Iran. For an example of early reporting, see Patrick Wintour, “Gulf Plunged into Diplomatic Crisis as Countries Cut Ties with Qatar,” The Guardian, 5 June 2017, https://www.theguardian.com/world/2017/jun/05/saudi-arabia-and-bahrain-break-diplomatic-ties-with-qatar-over-terrorism.

2 وثائقي “حرب الشاشات” (“War of the Screens” documentary), BBC Arabic Service, 15 May 2018, http://www.bbc.com/arabic/media-44130982.

3 Emilio Ferrara et al., “The rise of social bots,” Communications of the ACM 59, no. 7 (2016): 96–104.

4 Chengcheng Shao et al., “The spread of fake news by social bots,” arXiv preprint arXiv:1707.07592 (2017).

5 Yubao Zhang et al., “Twitter Trends Manipulation: A First Look Inside the Security of Twitter Trending,” IEEE Transactions on Information Forensics and Security 12, no. 1 (January 2017), 144–156.

6 Clayton Allen Davis et al., “BotOrNot: A system to evaluate social bots,” Proceedings of the 25th International Conference Companion on World Wide Web (International World Wide Web Conferences Steering Committee, 2016).

7 Ben Nimmo, “#BotSpot: Twelve Ways To Spot A Bot,” Atlantic Council Digital Forensic Research Lab, 29 August 2017, https://medium.com/dfrlab/botspot-twelve-ways-to-spot-a-bot-aedc7d9c110c.

8 Nitin Agarwal et al., “Examining the use of botnets and their evolution in propaganda dissemination,” Defence Strategic Communications 2 (March 2017), 87–112.

9 A botnet of more than 3 million accounts was reported in October 2016. “Chapter 32. The Stealth Botnet,” SadBotTrue, http://sadbottrue.com/article/51. In October 2017, an American bot herder known only as “Microchip” claimed that a network of 5 million accounts still exists. Without proof, this claim should be treated with skepticism, but it underlines the persistence of such rumors. Microchip (@Microchip), “Microchip Gab Post,” Gab (2017), http://archive.is/anrqq.

10 Notably the Russian campaign against the United States, 2014–17. See Adam Badawy et al., “Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign,” USC working paper (February 2018), https://arxiv.org/pdf/1802.04291.pdf.

11 Twitter updated its rules on mass automated posting in February 2018. See Yoel Roth, “Automation and the use of multiple accounts,” Twitter Developer Blog, 21 February 2018, https://blog.twitter.com/developer/en_us/topics/tips/2018/automation-and-the-use-of-multiple-accounts.html.

12 Nicholas Confessore et al., “The follower factory,” New York Times, 27 January 2018, https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html.

13 Translations provided by the BBC Arabic service.

14 For an example of multiple botnets producing such spikes, see Ben Nimmo, “Battle of the Botnets,” Atlantic Council Digital Forensic Research Lab, 28 July 2017, https://medium.com/dfrlab/battle-of-the-botnets-dd77540fad64.

15 Hassan Bink (@hassanbink703), “Hassan Bink Twitter Post,” Twitter (2017), http://archive.is/1TZq0.

16 Blue (@81stFT), “Blue Twitter Post,” Twitter (2017), http://archive.is/JQfHL; Blue (@82ndFT), “Blue Twitter Post,” Twitter (2017), http://archive.is/XHzU8; Blue (@91stFT), “Blue Twitter Post,” Twitter (2017), http://archive.is/byT8v. As of 22 May 2018, the accounts had not been suspended, but had not posted since July 2017.

17 “Girl Lying on Yellow Flower Field During Daytime,” Pexels, https://www.pexels.com/photo/girl-lying-on-yellow-flower-field-during-daytime-160699/.

18 Ben Nimmo, “The many faces of a botnet,” DFRLab, 25 May 2017, https://medium.com/dfrlab/the-many-faces-of-a-botnet-c1a66658684.

19 Al Muhairi (@al_muhairiuae), “Al Muhairi Twitter Post,” Twitter (2017), https://twitter.com/al_muhairiuae/status/888450912485871617.

20 Pinkyred (@Pinkyred9), “Pinkyred Twitter Profile,” Twitter (2017), http://archive.is/fHrak; Pinkybrown (@Pinkybrown13), “Pinkybrown Twitter Profile,” Twitter (2017), http://archive.is/jR5c6; Pinkywhite (@pinkywhite13), “Pinkywhite Twitter Profile,” Twitter (2017), http://archive.is/pn7kV.

21 Albajran (@albajran), “Albajran Twitter Post,” Twitter (2017), https://twitter.com/albajran/status/867495802175926273, archived at http://archive.is/H1xJM.

22 Saudq1978 (@saudq1978), “Saudq1978 Twitter Post,” Twitter (2017), https://twitter.com/saudq1978/status/901854641880752128.

23 As of the time of writing, neither account had posted since November 2017. Lamia_gh99 (@lamia_gh99), “Lamia_gh99 Twitter Profile,” Twitter (2017), http://archive.is/k2iFc; LHe1112 (@LHe1112), “LHe1112 Twitter Profile,” Twitter (2017), http://archive.is/6Sfnt.

24 The remaining traffic on the hashtag for this date can be viewed at https://twitter.com/search?f=tweets&q=%D8%AA%D9%85%D9%8A%D9%85_%D8%A7%D9%84%D9%85%D8%AC%D8%AF%20since%3A2017-9-18%20until%3A2017-09-19&src=typd. Many bot accounts have already been suspended.

25 Abdullah N Al Thani (@ANAALThani), “Abdullah N Al Thani Twitter Post,” Twitter (2017), https://twitter.com/ANAALThani/status/909892197146779648, archived at http://archive.is/osLop.

26 The handles of the suspended accounts included @Selmanzkan8, @Gul675, @bad_girl_ecem, and @hidayetabac.