Artificial Intelligence-Enhanced Disinformation and International Law: Rethinking Coercion
Abstract
The rapid progress of artificial intelligence (AI) makes differentiating truth from falsehood an increasingly daunting task. This piece argues that democratic states must clarify which foreign disinformation operations should be deemed coercive and distinguish them from permissible influence operations.
When do foreign disinformation operations violate the principle of non-intervention under customary international law? Such operations have existed for as long as states have competed with one another, and although states have never directly alleged violations of international law to address them, the quality and reach of their content will be distinctly different moving forward. This raises the question: what sets today apart from yesterday? I asked OpenAI’s ChatGPT. Its response: “Amplified by the rapid advancements in AI, disinformation operations have taken on a new level of sophistication and potency. The power to generate realistic text, images, and videos has birthed a digital landscape where distinguishing fact from fiction has become an arduous task.”
The automation of information production and distribution makes traditional disinformation methods increasingly effective and pervasive and enables the adoption of new techniques. To confront these risks successfully, democratic states need to establish joint response frameworks grounded in a clear, shared understanding of AI-enhanced disinformation in the context of international law.
Background: Disinformation and the Principle of Non-Intervention
The principle of non-intervention protects the right of every sovereign state to conduct its own affairs without interference from others. To qualify as a wrongful intervention under international law, a disinformation operation must affect the target state’s sovereign functions, such as elections or health services, and cause the target state to act in ways it would not otherwise willingly undertake[1]. Absent this element of coercion, such activities are generally deemed permissible.
Governments’ understandings of the line between coercive and non-coercive disinformation operations are varied and limited. Germany[2] claims to apply a scale-and-effects test to determine whether disinformation operations constitute coercion. Poland[3] and New Zealand[4] recognize that broadly targeted campaigns can be coercive if they affect a state’s sovereign function. The U.K.[5] notes that covert operations seeking to interfere in electoral processes could, in some cases, be considered coercive. Canada[6] and Finland[7] do not deem coercion possible in this context, while other states, such as the U.S.[8], Italy[9], and the Netherlands[10], have adopted middle-ground positions.
When these positions were drafted from 2019 onwards, governments were primarily concerned with the impact of distribution methods enabled by social media, such as bots, trolls, and microtargeting, i.e., the use of personal data in advertising to deliver tailored content to specific audience segments based on their individual characteristics and behaviors. The 2018 Cambridge Analytica scandal was particularly relevant in these discussions, as it involved the unauthorized harvesting of personal data from millions of Facebook users to manipulate public opinion for political purposes[11]. At the time, the debate over the relevance of coercion revolved around the covert nature of these distribution methods[12]. The emergence of generative AI has added to these challenges by enabling automated content production.
The Challenges of AI-Enhanced Disinformation
The main challenges of AI-enhanced disinformation are not limited to increased scale, efficiency, and speed, or to lower costs for content production and delivery. These capabilities have already altered[13], and will likely continue to alter, the nature of threat actors, their behaviors, and the content they produce[14].
AI may be employed to present false evidence that persuades the public to push their governments to delay or cancel international commitments, such as climate agreements[15]. During the COVID-19 pandemic, far less sophisticated disinformation campaigns persuaded citizens to delay or outright refuse life-saving vaccines[16]. Deepfakes could be used to impersonate public figures or news outlets, make inflammatory statements about sensitive issues to incite violence, or spread false information to interfere with elections.
Offering a glimpse of things to come, AI-generated deepfake videos featuring computer-generated news anchors were distributed by bot accounts on social media last year as part of a pro-China disinformation campaign[17]. At the outset of Russia’s invasion of Ukraine, a deepfake video circulated online falsely depicting Ukrainian President Zelensky urging his country to surrender to Russia[18].
Shaping public opinion relies on the ability to persuade. In a recent study by Bai et al., AI-generated messages proved as persuasive as, and in some cases more persuasive than, human-produced messages, and were rated higher on perceived factual accuracy and logical reasoning, even when discussing polarizing policy issues[19]. The scale and sophistication of disinformation operations will only increase as AI technologies evolve and become cheaper and more readily available.
Foreign Policy Implications
It is important to stress that disinformation is not a level playing field: authoritarian states hold offensive and defensive advantages over democracies. Democracies are built on transparency and accountability. When they engage in disinformation operations, they risk eroding these core principles and their citizens’ trust. Additionally, democracies have open information spaces and refrain from adopting measures limiting freedom of speech.
In contrast, autocratic states face fewer constraints on engaging in deceptive practices and tightly control their information environments[20]. This asymmetrical information contest, bolstered by AI advancements, could lead to enhanced threat scenarios within democratic states[21]. In particular, because information disseminates rapidly across open societies, domestic efforts to safeguard against these threats, however crucial, can be undermined by interference originating in states with limited regulatory and monitoring capabilities.
Recommendations
The international law debate on coercion must be reignited to better define whether and why certain disinformation activities should be deemed wrongful acts and how they differ from permissible influence operations. This distinction is necessary so that target states can take appropriate response measures to compel the cessation of another state’s ongoing violation while ensuring their own actions remain within the bounds of legality.
While it is appropriate to maintain some level of strategic ambiguity, this effort should include specific references to the nature, methods, or effects of disinformation operations that are deemed coercive. This shift in approach should be articulated in states’ national positions on the applicability of international law to cyberspace to send a clear signal to adversaries.
Establishing agreement on this distinction is equally important for other reasons. First, it would make it possible to identify, categorize, track, and compare wrongful disinformation operations across different states. This, in turn, would improve understanding of the threat environment, including the scope and depth of sophisticated transnational campaigns, and facilitate public attribution to responsible actors. Second, effective joint response mechanisms depend on a shared understanding of the issue at hand. Without this shared foundation, responses will likely be lackluster, inconsistent, and short-lived.
Eugenio Benincasa is a Senior Cyberdefense Researcher at the Center for Security Studies (CSS) at ETH Zurich. Prior to joining CSS, he worked as a Government Officer at the Italian Presidency of the Council of Ministers in Rome and as a Research Fellow at the research institute Pacific Forum in Honolulu, where he focused on cybersecurity policy. Eugenio holds an MA in international affairs from Columbia University’s School of International and Public Affairs, where he focused on International Security Policy.
[1] Harriet Moynihan, “The Application of International Law to State Cyberattacks: Sovereignty and Non-Intervention,” International Law Programme (Chatham House, December 2019), https://www.chathamhouse.org/sites/default/files/publications/research/2019-11-29-Intl-Law-Cyberattacks.pdf.
[2] “On the Application of International Law in Cyberspace” (The Federal Government, March 2021), https://www.auswaertiges-amt.de/blob/2446304/32e7b2498e10b74fb17204c54665bdf0/on-the-application-of-international-law-in-cyberspace-data.pdf.
[3] “The Republic of Poland’s Position on the Application of International Law in Cyberspace” (Ministry of Foreign Affairs Republic of Poland, December 2022), https://www.gov.pl/web/diplomacy/the-republic-of-polands-position-on-the-application-of-international-law-in-cyberspace.
[4] “The Application of International Law to State Activity in Cyberspace” (Department of the Prime Minister and Cabinet, December 2020), https://www.dpmc.govt.nz/publications/application-international-law-state-activity-cyberspace#:~:text=New%20Zealand%20is%20a%20champion,of%20responsible%20state%20behaviour%20online.
[5] “International Law in Future Frontiers” (Attorney General’s Office, May 2022), https://www.gov.uk/government/speeches/international-law-in-future-frontiers.
[6] “International Law Applicable in Cyberspace” (Government of Canada, April 2022), https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/peace_security-paix_securite/cyberspace_law-cyberespace_droit.aspx?lang=eng#a4.
[7] “International Law and Cyberspace” (Finnish Government, October 2020), https://um.fi/documents/35732/0/Cyber+and+international+law%3B+Finland%27s+views.pdf/41404cbb-d300-a3b9-92e4-a7d675d5d585?t=1602758856859.
[8] “National Position of the United States of America (2021)” (NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), August 2021), https://cyberlaw.ccdcoe.org/wiki/National_position_of_the_United_States_of_America_(2021)#cite_note-1.
[9] “Italian Position Paper on International Law and Cyberspace” (Ministero degli Affari Esteri, November 2021), https://www.esteri.it/mae/resource/doc/2021/11/italian_position_paper_on_international_law_and_cyberspace.pdf.
[10] “Letter to the Parliament on the International Legal Order in Cyberspace” (Government of the Netherlands, July 2019), https://www.government.nl/documents/parliamentary-documents/2019/09/26/letter-to-the-parliament-on-the-international-legal-order-in-cyberspace.
[11] Carole Cadwalladr and Emma Graham-Harrison, “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach,” The Guardian, March 17, 2018, https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.
[12] Michael N. Schmitt, “‘Virtual’ Disenfranchisement: Cyber Election Meddling in the Grey Zones of International Law,” Chicago Journal of International Law 19, no. 1 (August 16, 2018), https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1736&context=cjil.
[13] Tia Sewell, “FBI Warns That Deepfakes Will Be Used Increasingly in Foreign Influence Operations,” Lawfare, March 12, 2021, https://www.lawfaremedia.org/article/fbi-warns-deepfakes-will-be-used-increasingly-foreign-influence-operations.
[14] Josh A. Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova, “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” (Georgetown University’s Center for Security and Emerging Technology, OpenAI, Stanford Internet Observatory, January 2023), https://arxiv.org/pdf/2301.04246.pdf.
[15] Victor Galaz, Stefan Daume, and Arvid Marklund, “A Game Changer for Misinformation: The Rise of Generative AI” (Stockholm Resilience Centre, June 16, 2023).
[16] Francesco Pierri, Brea L. Perry, Matthew R. DeVerna, Kai-Cheng Yang, Alessandro Flammini, Filippo Menczer and John Bryden, “Online Misinformation Is Linked to Early COVID-19 Vaccination Hesitancy and Refusal,” Scientific Reports, April 26, 2022, https://www.nature.com/articles/s41598-022-10070-w.
[17] The Graphika Team, “Deepfake It Till You Make It” (Graphika, February 2023), https://public-assets.graphika.com/reports/graphika-report-deepfake-it-till-you-make-it.pdf.
[18] Bobby Allyn, “Deepfake Video of Zelenskyy Could Be ‘tip of the Iceberg’ in Info War, Experts Warn,” NPR, March 16, 2022, https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.
[19] Hui (Max) Bai, Jan G. Voelkel, Johannes C. Eichstaedt, and Robb Willer, “Artificial Intelligence Can Persuade Humans on Political Issues,” OSF Preprints, n.d., https://osf.io/stakv/.
[20] Paul Bischoff, “North Korea, China & Russia among Worst Countries for Internet Censorship,” Business & Human Rights Resource Centre (blog), January 15, 2020, https://www.business-humanrights.org/en/latest-news/north-korea-china-russia-among-worst-countries-for-internet-censorship/.
[21] “Increasing Threat of Deepfake Identities” (U.S. Department of Homeland Security, 2022), https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.