Unprecedented access to sophisticated image-deception technologies leaves the digital economy more exposed than ever to visual misinformation and fraud. New AI text-to-image platforms have democratized deep fake creation, letting anyone generate synthetic, photorealistic images in minutes simply by typing a short description of what they want to depict. As synthetic media evolves rapidly, financial risks to the private sector grow with it. Global industry must reckon with these imminent visual threats as the internet becomes increasingly ‘zero trust.’ Until online platforms act to distinguish synthetic from authentic media, brands will need to protect themselves by pursuing new strategies to secure and signal image authenticity in a zero-trust online landscape.

In March 2022, a video of Volodymyr Zelensky instructing Ukrainian forces to lay down their weapons spread across social media. It was not real, but rather a ‘deep fake,’ generated by AI to mimic President Zelensky’s likeness and damage the Ukrainian war effort. In the last six months, technologies to create deep fakes have become increasingly sophisticated, efficient, and accessible. Visual truth online is increasingly under threat, with the private sector’s vulnerabilities sharply in focus.

Synthetic media is an umbrella term that refers to digital content generated by AI or algorithmic means, often with the intention of appearing real. The internet is moving toward a zero-trust paradigm in which synthetic visual media, like the deep fake of Zelensky, will be ever-present and indistinguishable from authentic media.

Historically, making a deep fake video took hours, even days, and required specific technical knowledge and resources concentrated in the hands of a ‘techie’ minority. But with the newest technology, anyone can create photorealistic images and videos in a matter of minutes just by typing short descriptions of what they want to depict into text-to-image AI platforms such as OpenAI’s DALL-E 2. Outputs are shareable across social media and bear an uncanny resemblance to user prompts.

DALL-E 2 does have limited guardrails that attempt to address ethical concerns. But Stability AI, another text-to-image platform, recently announced that the code behind its generator, Stable Diffusion, would be open to the public without any guardrails. Under deceptive cries of “by the people, for the people,” Stability AI’s move may effectively accelerate the destruction of visual trust online.

Meanwhile, Google’s new text-to-video generators Imagen Video and Phenaki, along with Meta’s forthcoming ‘Make-A-Video,’ signal that text-to-image synthetics will quickly move into video and are set to become ubiquitous online, raising serious questions about whether the global digital economy and private sector are adequately prepared.

Today, visual authenticity is at the heart of global brand integrity in the digitized economy. According to data from Google, more than 50% of consumers say that video content helps them decide what to buy. High reliance on user-generated content means that global brands are no longer built top-down in boardrooms but are instead more democratized constructions. One viral image can disrupt business overnight.

In October 2021, international retailer Lidl recalled its oat milk from its network of nearly 1,000 stores across the UK after a Scottish TikTok user ran a viral video campaign claiming the oat milk was lumpy and smelled bad. The video garnered 3 million views before it was removed, showing that in the age of user-generated content, companies’ images have become more vulnerable to collective pressure from individual users. The Lidl video was authentic and aimed to protect consumers, but fabricated media could do far more harm.

Peloton offers a similar example. In December 2021, an HBO series depicted a main character dying after riding a Peloton bike, causing an 11.3% drop in the exercise equipment maker’s stock price overnight, roughly equivalent to a $130 million reduction in market cap. Even though consumers knew the show was fictional, the emotional impact of the episode had substantial financial consequences for an internationally distributed brand.

The Lidl and Peloton cases involved authentic media that humans created without malicious intent. But if an adversary used text-to-image AI to generate a synthetic image or video that was specifically designed to manipulate stock prices or disrupt operations, consequences could be immediate and severe.

As synthetic visual media advances and legal and regulatory protections lag behind, business risk continues to grow. Moreover, current content moderation practices on social media sites do not screen for image and video authenticity. Platforms use algorithmic and manual detection to flag and remove flagrant language and graphic images but do not take responsibility for verifying whether images are authentic or synthetic.

In 2019, expert analysis placed the total deep fake count at 15,000 and projected that number would double monthly, reaching the millions by the end of 2020. Now, two years later, with new easy-to-use technologies, deep fake count is set to grow at an unprecedented pace.

The digital economy is more exposed than ever to the financial consequences of visual deception and image-related fraud. In a zero-trust internet, businesses with a digital presence need to assess their exposure and take action to protect their brands. This means pressuring platforms to verify user-generated content, securing influencer networks, and protecting creator likenesses. 

Open-source review platforms like Amazon, Yelp, and Google have become increasingly popular, highly visual destinations for consumers around the world to evaluate buying decisions. Anyone with access to a smartphone can post an image, authentic or faked, to review a given product. That means 6.4 billion people, or 78% of the world’s population, now have the power to change consumer decision-making through photo and video reviews. Moreover, industry research indicates that 84% of consumers trust online reviews as much as they trust recommendations from friends.

Creator likenesses are another vulnerability for brands that rely on personas to sell products and underpin value. In May 2022, scammers used deep fakes of Elon Musk and TED curator Chris Anderson to promote a fake brand of cryptocurrency. Musk’s persona has effectively become synonymous with the Tesla and SpaceX brands, whose combined market cap currently sits just shy of $1 trillion. Though low in quality, the deep fake still circulated on Twitter for several days, putting a nearly $1 trillion valuation at risk.

As international brands, influencers, and celebrities continue to expand and monetize their digital presence, they are at greater risk for costly, image-related abuses because of synthetic media technologies. In a 43-page report in December 2021, the US Department of Homeland Security emphasized the growing threats posed by deep fakes, including corporate espionage and stock manipulation.

The global digital economy has created a permissive environment in which anyone can turn to visual deception for their own ends. Until online platforms take action to distinguish synthetic from authentic media for users, brands will need to protect themselves from synthetic media by pursuing new technologies and strategies to secure visual attack surfaces in a zero-trust online landscape.

Content provenance and authenticity technologies are an available option to increase trust and transparency in media at scale. Secure content provenance technologies, like those offered by Microsoft-backed software provider Truepic, authenticate photos and videos at the moment of capture, recording who took a given photo, where, when, and on what device, and cryptographically verifying that its pixels have not been altered.
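The core idea behind capture-time provenance can be illustrated with a minimal sketch. This is not Truepic’s actual implementation or the C2PA standard (real systems use device-held asymmetric keys and signed manifests); the key, field names, and HMAC scheme below are purely illustrative of the concept: hash the pixels at capture, bind them to metadata, and sign the bundle so any later edit is detectable.

```python
import hashlib
import hmac
import json

# Illustrative secret. Real provenance systems use per-device
# asymmetric keys in secure hardware, not a shared HMAC secret.
DEVICE_KEY = b"example-device-signing-key"

def sign_capture(pixels: bytes, metadata: dict) -> dict:
    """At capture time: hash the pixels, bind them to capture
    metadata (who, where, when, device), and sign the bundle."""
    record = {
        "pixel_hash": hashlib.sha256(pixels).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(pixels: bytes, record: dict) -> bool:
    """Re-derive the signature from the presented pixels and claimed
    metadata; any change to either invalidates the record."""
    claimed = {
        "pixel_hash": hashlib.sha256(pixels).hexdigest(),
        "metadata": record["metadata"],
    }
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"raw sensor bytes..."
rec = sign_capture(image, {"device": "example-phone", "time": "2022-11-01T12:00Z"})
print(verify_capture(image, rec))         # untouched image verifies: True
print(verify_capture(image + b"x", rec))  # any pixel change fails: False
```

The design point is that authenticity is established once, at the sensor, rather than inferred after the fact by trying to detect synthetic artifacts, which is a losing arms race as generators improve.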

As the internet becomes increasingly zero-trust, authenticated media and content provenance will likely become key building blocks for industry and society. Founded in February 2021 by companies including Microsoft and Adobe, the Coalition for Content Provenance and Authenticity (C2PA) is a joint initiative by technology and media companies to develop an industry standard for content provenance.

Now more than ever, brands and industry leaders need to join the larger conversation on visual trust, engaging with ongoing efforts like the C2PA, initiating conversations about standards within industry, and sharing best practices. The public release of deep fake technologies like Stability AI’s Stable Diffusion brings the need for cross-sector conversation and norm creation to the fore.

In-house cybersecurity, marketing, and legal teams can also collaborate to assess exposure to synthetic media and develop proactive strategies to preempt visual threats to brand value. Securing content pipelines will be a crucial task for these teams, as well as vetting influencer networks, educating teams on synthetic media risk, monitoring social media channels, and working with platform partners to improve media standards.

In the digital age, images and video are the most powerful vehicles for delivering value, meaning, and identity across the globe. A single piece of media can instantly reshape a brand’s global perception, valuation, and future direction in the digital economy. Future-proofing brand integrity requires brands to contend with synthetic media technologies and image deception. To survive in the zero-trust era, industry needs to adapt to the growing intersection between online visual integrity and public trust.


Maggie Munts is a Master of International Affairs candidate at Columbia University's School of International and Public Affairs, specializing in digital governance and development. She was formerly a product manager at Amazon, and currently consults for Truepic on visual trust in online marketplaces.