Artificial Intelligence Is Set to Question Our Identity and Privacy

Even as governments push to implement online authentication for digital transactions, AI threatens to convincingly play the part of a human online.

By Nicole Softness
July 15, 2017

Each day, news sites publish alluring updates on the capabilities of artificial intelligence (AI) technology. Corporate executives congratulate each other on these new tools and wait with bated breath for the financial rewards that will come from algorithmic technologies that predict and even replicate human behavior. Even innovators in the security community reap the benefits of this new technology, which allows them to map out human activities, identify trends and aberrations, and craft more precise policy and operational responses.

Predictive technology is good. It allows us to better study the indicators that trigger and influence human behavior, and as all good investors know, knowledge is the silver bullet of wealth and security. 

And yet, the closer we get to the creation of credible AI, or anything that would pass the skepticism of another human, the closer we get to an inevitable clash of priorities.

That clash is between replication and authentication – the debate that will soon be (or should be) dominating our news feeds. Every step AI takes toward replicating human activity undercuts efforts to use the internet to verify humans through their digital footprints. Countries like Estonia and Brazil have made remarkable progress in that arena.

Ronaldo Lemos and Gabriel Aleixo, technological and political thought leaders in Brazil, have been huge proponents of an effort to enable more digital activities, including online political involvement and financial transactions. This effort rests on blockchain, a technology that essentially creates a tamper-evident record of online interactions. These records are supposedly foolproof, allowing participants to trust the reliability of online transactions and communications. People aren’t going to conduct sensitive personal activity online if they don’t trust that a human (or in these cases, a specific individual) is at the other end.
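To make that idea concrete, here is a minimal Python sketch of the hash-chained record a blockchain keeps – an illustration of the general technique only, with hypothetical field names, not the design of the Brazilian or any other national system.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list, record: dict) -> None:
    """Append a record, linking it to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain: list) -> bool:
    """A tampered record breaks every hash link after it."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because each block’s hash covers the previous block’s hash, altering any past record invalidates every later link – which is what makes the ledger tamper-evident, and why participants can trust the record without trusting each other.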

There are significant benefits to be gained from these efforts. Lemos and Aleixo describe a unique part of Brazil's constitution, which states that any petition signed by 1 percent of the voters must be recognized by Congress as an official draft bill and voted upon. Before today's technologies, the notion of acquiring 1.5 million signatures was laughable. Now, if Brazilian authorities were to use the blockchain model, they could verify the identity of voters (based on unique identification numbers and taxpayer information) and allow them to vote online. Online voting means higher voter turnout. And higher voter turnout is in turn good for democracy.

Similarly, Estonia has used blockchain and cryptographic hashing to create an authentication model capable of supporting nationwide digital life, which has supposedly added 2 percent a year to its GDP. From birth, Estonian citizens receive unique eleven-digit numerical IDs, which allow them to conduct financial and voting activities online. Following the introduction of e-residency cards in 2014, the former Prime Minister of Estonia even began an initiative to use genome data from the country’s 1.3 million citizens to develop precision medicine for diagnostics and treatment.

All of this sounds good. But the problem is that AI technology could get to a point where it can convincingly play the part of a human online. That’s not quite in sync with efforts to authenticate online behavior, or reap any of its benefits.

So how close are we?

Social Science

It might seem academic and impractical, but social science has in fact significantly informed efforts to replicate human behavior. Even if they don’t realize it, technology investors are drawing from it.

Web scraping and metadata tools – which are used to quickly collect and analyze large amounts of online information – are a big reason for recent social science successes. The field was once predicated on the assumption that observable indicators had to be studied in isolation from their environments, but technology has changed this. Big data tools let scientists strip environmental context from indicators when that is useful, while also encouraging the study of indicators in direct conjunction with their environments.
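As a rough illustration of what such collection looks like in practice, the sketch below pulls the metadata tags from a web page using the common Python libraries requests and BeautifulSoup; the URL and the choice of fields are placeholders, and real research pipelines are far larger.

```python
import requests
from bs4 import BeautifulSoup

def collect_metadata(url: str) -> dict:
    """Fetch a page and return its <meta> tags as a name -> content map."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    meta = {}
    for tag in soup.find_all("meta"):
        name = tag.get("name") or tag.get("property")
        if name and tag.get("content"):
            meta[name] = tag["content"]
    return meta

# Hypothetical usage: run over a list of article URLs and aggregate.
# pages = [collect_metadata(u) for u in article_urls]
```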

Researchers are also working to observe facets of human behavior that they previously could not study. The University of California, Davis published a study examining how humans perceive space, comparing physical blueprints to sketches drawn by individuals. The differences show how humans value spaces – did they leave out a door they never use? Did they draw the kitchen larger because they eat a lot? These ‘mistakes’, as we might call them, are at the heart of what distinguishes humans, and they present the largest obstacle to credible AI.

Tools like RankBrain, an AI system launched by Google in 2015, are trying to do this. As opposed to simpler search engines like Yahoo, the RankBrain algorithm responds to the intent behind a searcher’s query rather than its literal keywords, and it handles unprecedented searches and unfamiliar content with its best guess. What could be more human?
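Nobody outside Google knows how RankBrain works internally, but the shift from literal keywords to intent can be illustrated with a toy Python sketch: here a hand-written synonym map stands in for the learned word representations such a system reportedly uses, so that a query about “automobile repairs” can match a document that only says “car repair”.

```python
from collections import Counter
from math import sqrt

# Toy stand-in for learned word embeddings (assumption: the real
# representations are proprietary and far richer than a synonym map).
SYNONYMS = {"automobile": "car", "film": "movie", "repairs": "repair"}

def normalize(text: str) -> Counter:
    """Lowercase, split, and collapse synonyms into shared tokens."""
    return Counter(SYNONYMS.get(w, w) for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query: str, docs: list) -> str:
    """Return the document closest to the query's (normalized) intent."""
    q = normalize(query)
    return max(docs, key=lambda d: cosine(q, normalize(d)))

print(best_match("automobile repairs", ["car repair guide", "movie reviews"]))
# -> "car repair guide", despite sharing no literal keywords with the query
```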

Security Benefits of AI

But if we’re going to look at the challenge that these AI inroads pose to online authentication, we should also look at the benefits – and not just the financial ones.

Using technology to capture previously inaccessible data on human activity, and then predictively mapping human behavior, could provide a tremendous boost to the intelligence and law enforcement communities.

Developed in the counter-terrorism community, Activity Based Intelligence (ABI) – a man-hunting tool – allows analysts to disambiguate (a counterterrorism term for ‘resolve’) individual human identities from larger communities of alternative identities. By bringing together raw data from multiple sources, analysts can combine micro-signals to identify a certain person crossing the street as the terrorist they’ve been looking for, given that they walk at a certain speed, take a certain route, and behave a certain way.
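A heavily simplified Python sketch of that fusion step might look like the following; the signals, weights, and reference profile here are all hypothetical, and a real ABI system would fuse far more sources with learned rather than hand-set weights.

```python
from math import exp

# Hypothetical reference profile for one known individual.
PROFILE = {"walk_speed_mps": 1.4, "route_id": 7, "stride_len_m": 0.75}
WEIGHTS = {"walk_speed_mps": 2.0, "route_id": 1.0, "stride_len_m": 3.0}

def match_score(observation: dict) -> float:
    """Fuse micro-signals into a single 0-to-1 identity score."""
    score = 0.0
    for key, ref in PROFILE.items():
        if key == "route_id":
            # Categorical signal: it either matches or it doesn't.
            score += WEIGHTS[key] * (1.0 if observation.get(key) == ref else 0.0)
        else:
            # Continuous signal: decay the score as it deviates from the profile.
            diff = abs(observation.get(key, 0.0) - ref)
            score += WEIGHTS[key] * exp(-10 * diff)
    return score / sum(WEIGHTS.values())

# A pedestrian whose speed, route, and stride all fit scores near 1.0.
print(match_score({"walk_speed_mps": 1.42, "route_id": 7, "stride_len_m": 0.74}))
```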

These behavioral indicators play a big role in disambiguating individuals. It’s possible to copy fingerprints, steal someone’s computer, or knock out someone’s teeth. It’s far harder to copy a person’s exact gait, or to recognize and duplicate their driving patterns. At some point, everyone has characteristics that are both observable with new technology and inherently unique.

The Future

Of course, it’s unlikely that ABI will continue to expand at such a steady rate. The more we are able to uniquely identify people through these digital actions, the less unselfconsciously people will practice them. Innocent people, fearful of being monitored, will start practicing anti-detection behaviors. ABI will adapt in turn, and the cycle will continue.

However, even as the technologies and capabilities shift, this debate between privacy and security isn’t going anywhere. At some point, people are going to recognize its inherent place in the field of AI, and start to question the very basis of big data analysis and technological capabilities. The day of reckoning is upon us.


Nicole Softness is a graduate student at Columbia University’s School of International and Public Affairs in New York, studying International Security & Cyber Policy. Her research focuses on the intersection of counterterrorism, social media, and public-private security partnerships. She is currently the Research Assistant for Columbia’s Initiative on the Future of Cyber Risk.