Zero Trust and Visual Vulnerability: What Does the Deep Fake Era Mean for the Global Digital Economy?

Maggie Munts
September 15, 2022

The internet is moving toward a zero-trust paradigm in which synthetic media will be ever-present and indistinguishable from authentic media. Over the last year, text-to-image AI platforms like DALL-E 2[1] have become increasingly sophisticated and easy to access.[2] Most recently, Stability AI[3] announced that its technology would be open to the public[4] without any guardrails against damaging imagery. Under the deceptive cry of “by the people, for the people,” the public diffusion of Stable Diffusion may effectively destroy any semblance of visual trust online.

Conversations about deep fakes, cheap fakes, and visual misinformation have primarily centered around ramifications for geopolitics,[5] less so the private sector. After all, what does a deep fake of Volodymyr Zelenskyy[6] have to do with annual revenue? As it turns out, quite a lot. A zero-trust paradigm brings the global digital economy and private sector’s image-based vulnerabilities sharply into focus.

Today, visual authenticity is at the heart of global brand integrity in our digitized economy. #TikTokMadeMeBuyIt[7] is more than a savvy marketing tagline with 7.4 billion views. According to 2019 data from Google,[8] more than half of consumers say that digital video content helps them decide what to buy.

High reliance on user-generated content means global brands are no longer built top-down in boardrooms and deployed to the public. Instead, brands are increasingly democratized constructions, a dialogue between consumers, communities, and companies. Anyone with a smartphone can contribute to or detract from a given brand’s identity. One viral video or photo can disrupt business overnight.

Last October, international retailer Lidl recalled its branded oat milk[9] after a Scottish TikTok user ran a viral video campaign claiming it was lumpy and smelled bad. The first video garnered 3 million views before it was removed by the platform. This was a notable event: one user and a few viral videos had a significant impact on an internationally recognized brand in a matter of weeks. The Lidl video was authentic and helped to protect consumers, but it also illustrated that companies are vulnerable to similar scenarios with synthetic or fabricated media. 

Unprecedented access to sophisticated synthetic media and image deception technologies, from text-to-image AI to pixel erasers, leaves the digital economy more exposed than ever to visual misinformation and fraud. In a zero-trust internet, brands with a digital presence need to assess their exposure, secure their content, and seek additional brand integrity protections. This means securing influencer networks, protecting creator likenesses, and verifying user-generated content from reviews to product placements.

Enterprise is particularly exposed to image deception and synthetic media in the realm of reviews, and fake reviews hurt consumers too. Amazon’s barrage of lawsuits against thousands of Facebook group administrators[10] for facilitating fake product reviews tackles only one loophole in a relatively fragile system. Crowdsourced review platforms like Amazon, Yelp, Google, and TikTok have become increasingly popular, highly visual destinations for customers around the world to evaluate buying decisions. Anyone with access to a smartphone can post an image, authentic or faked, of a given product. That’s 6.4 billion people, or 78% of the world’s population.[11]

Creator likenesses are another area of vulnerability for brands that rely on the persona of a group or a single individual to sell product and underpin value. This past May, scammers used a deep fake of Elon Musk[12] and crypto billionaire Chris Anderson to promote a fake cryptocurrency platform. Though of poor quality, the deep fake circulated on Twitter for several days before Musk set the record straight.[13] Musk’s persona has effectively become synonymous with the Tesla and SpaceX brands, whose combined market cap currently sits just shy of $1 trillion.

As globally recognized brands, influencers, and celebrities continue to expand and monetize their digital presence, not only are they at greater risk for image and likeness abuses, but the downsides of compromised reputation are amplified. Nowhere is the harm of image and likeness abuse more evident than the proliferation of AI-driven non-consensual porn.[14]

The global digital economy has created a permissive environment in which a wide range of players may turn to visual deception for their own ends. In a 43-page report[15] published in December 2021, the US Department of Homeland Security emphasized the growing threats posed by deep fakes, including corporate espionage and stock manipulation. Protecting visual attack surfaces in a zero-trust digital landscape will require new strategies and technologies.

In-house cybersecurity and marketing teams must collaborate on an ongoing basis to assess exposure to synthetic media and develop proactive strategies to preempt visual threats to brand value. Key tasks for these teams include securing content pipelines, vetting influencer networks, educating staff on synthetic media risk, monitoring social media channels, and working with platform partners.

Now more than ever, brands and industry leaders need to join the larger conversation on visual trust and misinformation, engaging with ongoing efforts such as the Coalition for Content Provenance and Authenticity[16] and initiating conversations about standards within industry. The public release of deep fake technologies like Stable Diffusion puts the need for cross-sector conversation and norm creation at the fore.

Content provenance technologies that help verify media can also help brands protect visual identity at scale, by authenticating photos and videos in the moment of capture. As the internet becomes increasingly zero-trust, authenticated media will likely become a key building block of industry and society.
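The core idea behind capture-time authentication can be sketched in a few lines: bind a cryptographic fingerprint of the pixels to the capture device at the moment the photo is taken, so any later edit invalidates the record. The sketch below is a deliberately simplified, hypothetical illustration using a shared secret; real provenance standards such as C2PA use public-key signatures, certificate chains, and signed manifests rather than HMAC, and the function and key names here are invented for the example.

```python
import hashlib
import hmac

# Assumption for illustration only: a secret provisioned to the camera at
# manufacture. Real systems use per-device private keys and certificates.
CAPTURE_DEVICE_KEY = b"hypothetical-device-secret"

def sign_at_capture(image_bytes: bytes) -> str:
    """Produce a provenance tag binding the exact pixels to the device."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return hmac.new(CAPTURE_DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Re-derive the tag; any post-capture edit to the bytes invalidates it."""
    expected = sign_at_capture(image_bytes)
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw image bytes..."
tag = sign_at_capture(original)
assert verify(original, tag)             # untouched media verifies
assert not verify(original + b"x", tag)  # a single changed byte fails
```

The design point this illustrates is why authentication must happen at capture: once a signed record exists, downstream platforms can check integrity without trusting the uploader, which is what makes authenticated media viable as a building block for a zero-trust internet.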

In the digital age, images and video are the most powerful vehicles for delivering value, meaning, and identity across the globe. Instantly, a single piece of media can change a brand’s global perception, valuation, and future direction in the digital economy. Future-proofing brand integrity now requires brands to contend with image deception technologies. To survive in the zero-trust era, industry needs to adapt to the growing intersection between online visual integrity and public trust.


Maggie Munts is a former Amazon PM and current Master of International Affairs candidate at Columbia University's School of International and Public Affairs, specializing in digital governance and development.


[1] “DALL·E 2,” accessed April 11, 2024,

[2] Kyle Wiggers, “OpenAI Expands Access to DALL-E 2, Its Powerful Image-Generating AI System,” TechCrunch (blog), July 20, 2022,

[3] “Stability AI,” Stability AI, accessed April 11, 2024,

[4] “Stable Diffusion Public Release,” Stability AI, accessed April 11, 2024,

[5] Robert Chesney and Danielle Citron, “Deepfakes and the New Disinformation War,” Foreign Affairs, December 11, 2018,

[6] Bobby Allyn, “Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn,” NPR, March 16, 2022, sec. Technology,

[7] “TikTok - Make Your Day,” accessed April 11, 2024,

[8] “How In-Store Shoppers Are Using Video,” Think with Google, accessed April 11, 2024,

[9] “TikToker Wins ‘lumpy’ Oat Milk War with Lidl,” October 12, 2021,

[10] Annie Palmer, “Amazon Sues Thousands of Facebook Group Administrators over Fake Reviews,” CNBC, July 19, 2022,

[11] “Global Smartphone Penetration 2016-2022,” Statista, accessed April 11, 2024,

[12] Edward Ongweso Jr, “Scammers Use Elon Musk Deepfake to Steal Crypto,” Vice (blog), May 27, 2022,

[13] Elon Musk (@elonmusk), “https://twitter.com/elonmusk/status/1529484675269414912?s=20&t=B3oRTgIxHS293TcMvwUsbw,” X (formerly Twitter), accessed April 11, 2024,

[14] “A Horrifying New AI App Swaps Women into Porn Videos with a Click,” MIT Technology Review, accessed April 11, 2024,

[15] “Increasing Threat of Deepfake Identities” (United States Department of Homeland Security, 2021),

[16] “About - C2PA,” accessed April 11, 2024,