The convergence of new technologies endangering human agency

#CriticalThinking

Peace, Security & Defence

Chris Kremidas-Courtney

Senior Advisor at Defend Democracy, Lecturer at the Institute for Security Governance and former Senior Fellow at Friends of Europe.

Hanna Linderstål

CEO of the Earhart Business Protection Agency

New findings from researchers at the University of California, Berkeley, RWTH Aachen University and Unanimous AI have confirmed that motion data from the virtual reality (VR) game ‘Beat Saber’ can uniquely identify human subjects in less than ten seconds through the biomechanics of a person’s movement. The implications of this ‘motion print’ for privacy, and its possible impacts on human autonomy, are significant.

The researchers were able to identify individual users out of a pool of more than 50,000 with 94.33% accuracy from 100 seconds of motion data – more accurate than most fingerprint scanners. They also achieved 73.20% accuracy from just ten seconds of movement, and 50% of users were identified from only two seconds of data.
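To make the mechanism concrete, the sketch below shows – in a deliberately simplified, hypothetical form – how a ‘motion print’ matcher could work: each motion clip is reduced to a small statistical feature vector, and a new clip is matched to its nearest enrolled user. The feature set, sampling rate and synthetic data are illustrative assumptions only; the published study relies on far richer features and learned models.

```python
# Toy, hypothetical "motion print" matcher – illustrative only.
# Assumes 3-channel motion traces (e.g. head position) sampled at 30 Hz;
# the real research uses far richer features and large learned models.
import numpy as np

RNG = np.random.default_rng(0)
FS = 30  # assumed sampling rate in Hz

def motion_features(clip: np.ndarray) -> np.ndarray:
    """Reduce a (samples, channels) clip to per-channel mean, spread and mean speed."""
    velocity = np.diff(clip, axis=0) * FS
    return np.concatenate([clip.mean(axis=0), clip.std(axis=0), np.abs(velocity).mean(axis=0)])

def enrol(clips_by_user: dict) -> dict:
    """Build a gallery with one feature vector per known user."""
    return {user: motion_features(clip) for user, clip in clips_by_user.items()}

def identify(gallery: dict, probe_clip: np.ndarray) -> str:
    """Return the enrolled user whose feature vector is closest to the probe clip."""
    probe = motion_features(probe_clip)
    return min(gallery, key=lambda user: np.linalg.norm(gallery[user] - probe))

# Synthetic demo: each simulated user moves with a characteristic offset and scale.
clips = {f"user{i}": RNG.normal(loc=i, scale=0.5 + 0.1 * i, size=(FS * 10, 3)) for i in range(5)}
gallery = enrol(clips)
probe = RNG.normal(loc=3, scale=0.8, size=(FS * 2, 3))  # two seconds of fresh motion
print(identify(gallery, probe))  # most likely "user3" in this toy setup
```

Even this crude nearest-neighbour approach separates synthetic ‘users’ from a couple of seconds of data, which is why the far more sophisticated models in the study reach fingerprint-level accuracy.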

The security industry has been attempting to read body movements for years as a way to manage risk, but it was never able to match those movements to the identities of the people being surveilled. That problem may now be solved by this new ability to detect and monitor the human motion print, a form of biometrically inferred data. The motion print may also be well suited to predicting activity, crime and disease, and governments may be tempted to use it in predictive policing. Governments may see this option as more cost-effective and therefore desirable, reviving the public debate about security versus freedom.

A new way to segregate society based on ‘risk factors’

The technology will spur the further development of similar motion print surveillance in the real world, funded to support predictive policing and security. Once someone’s motion print is captured, they no longer need to be carrying a device to be tracked.

It enables a new way to segregate society based on ‘risk factors’ identified from the motion print, and the data collected can be sold for use in job applications, health insurance, credit scoring, identifying protesters, background checks and more.

This technology could make terrorism, human espionage, physical theft and property crimes much more difficult to get away with, while governments and sophisticated non-state actors may exploit it to pinpoint and target personnel in key positions for influence or attack. Authoritarian regimes may deploy this technology more widely and use it to suppress dissent, since it is ideal for identifying every individual within a group, such as a labour union or political movement.

The further development of this technology will also enable personalised advertising even for those who are not in extended reality (XR) or carrying any devices, since real-time detection of a person’s motion print can be combined with access to extensive databases for targeted advertising.

Systems will be able to identify you without your consent

This will have a significant impact on human free will, since it will enable widespread personalised manipulation even when someone is completely unplugged. In fact, it could render the entire concept of the ‘right to unplug’ moot.

According to VR and artificial intelligence (AI) pioneer Louis Rosenberg: “These are very real dangers – systems will be able to identify you without your consent and engage you without you knowing you’ve been targeted. Many people think facial recognition is the danger here, but actually – a ‘motion print’ could be just as dangerous and could identify you from behind, without ever seeing your face.”

In malign hands, the technology could not only be used to influence individual persons, but could enable people or bots to impersonate someone in XR by using their motion print, gaining access to bank accounts, secure facilities and so on. Used at scale, it could enable entire bot armies of ‘protesters’ or ‘influencers’ in the metaverse, with each avatar wearing the stolen motion print of a real person and influencing elections, financial markets and more.

The private sector seems to already be aware of these possibilities, especially given Meta’s 2019 acquisition of Beat Games, the studio behind Beat Saber, which gave the company ownership of the motion data of every person who has played the game. Given Meta’s history of sharing user data with third parties, it is likely that these data sets will also be sold to other entities.

The AI system can effectively target and influence the person by sensing their reactions

All of these factors could produce a new demand for motion print ‘cut-outs’ that allow people to wear a different kinetic signature in the metaverse – a motion print version of a VPN.

It could also spur the further development of human movement augmentation, but instead of enabling greater speed or strength, the goal would be to achieve some level of privacy when moving about in public.
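As a purely speculative sketch of what such a ‘cut-out’ might look like at the data level, the code below adds smoothed random offsets to a tracked motion stream before it is shared, so that the transmitted movement drifts away from the wearer’s own kinetic signature. The noise scale and smoothing window are arbitrary assumptions, not a vetted anonymisation scheme.

```python
# Speculative sketch of a motion-print "cut-out": perturb tracked motion before
# sharing so it no longer matches the wearer's kinetic signature. The noise
# scale and smoothing window are arbitrary assumptions, not a vetted scheme.
import numpy as np

def obfuscate_motion(clip: np.ndarray, noise_scale: float = 0.05,
                     smooth: int = 5, seed: int = 0) -> np.ndarray:
    """Add smoothed random offsets to a (samples, channels) motion clip so the
    output still looks like plausible movement but drifts from the original print."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, size=clip.shape)
    kernel = np.ones(smooth) / smooth
    # Smooth each channel's noise so the perturbation resembles natural drift
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, noise)
    return clip + smoothed

raw = np.cumsum(np.random.default_rng(1).normal(size=(300, 3)), axis=0)  # fake head trace
masked = obfuscate_motion(raw)
print(float(np.abs(masked - raw).mean()))  # average displacement introduced by the mask
```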

Is conversational AI (CAI) the perfect manipulation tool? CAI is a type of technology that enables real-time engagement between an AI system and a person. Through this engagement, the AI system can effectively target and influence the person by sensing their reactions and adjusting its approach to maximise impact.

As Rosenberg has been warning, unless there are regulatory limits, CAI systems can access personal data, such as interests, values and background information, and use it to craft personalised dialogue aimed at engaging and influencing people on a mass scale. Additionally, these AI systems can detect ‘micro-expressions’ on a person’s face and in their voice that may be too subtle for human observers to perceive, as well as faint changes in complexion known as ‘facial blood flow patterns’ and tiny changes in pupil size that reflect emotional reactions.

CAI may be capable of being more perceptive of a person’s inner feelings than any human

CAI platforms can compile data on each person’s reactions during prior conversational interactions and track which approaches were most effective on them personally. As a result, these systems not only adapt to a person’s immediate verbal and emotional responses but also become more adept at ‘playing’ the person over time. CAI systems can draw people into conversation, guide them to accept new ideas and ultimately influence them to buy products or believe in disinformation. In fact, CAI may be capable of being more perceptive of a person’s inner feelings than any human interlocutor.
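As a purely illustrative sketch of the feedback loop described above – not any vendor’s actual system – the toy code below uses a simple epsilon-greedy strategy: it tries different message framings, records which ones a given person engaged with, and increasingly favours whatever has worked on that person before. The framings and the engagement signal are invented for the example.

```python
# Illustrative sketch only: the adaptive loop described above, reduced to a toy
# epsilon-greedy bandit. The "framings" and the engagement signal are hypothetical;
# real CAI systems would fold in far richer behavioural and physiological cues.
import random

FRAMINGS = ["appeal_to_values", "social_proof", "urgency", "flattery"]

class AdaptiveMessenger:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.trials = {f: 0 for f in FRAMINGS}
        self.successes = {f: 0 for f in FRAMINGS}

    def choose(self) -> str:
        """Mostly exploit the framing that has worked best on this person so far."""
        if random.random() < self.epsilon or not any(self.trials.values()):
            return random.choice(FRAMINGS)
        return max(FRAMINGS, key=lambda f: self.successes[f] / max(self.trials[f], 1))

    def record(self, framing: str, engaged: bool) -> None:
        """Update per-person statistics from the observed reaction."""
        self.trials[framing] += 1
        self.successes[framing] += int(engaged)

# Toy simulation: this simulated person responds mainly to "social_proof".
random.seed(0)
agent = AdaptiveMessenger()
for _ in range(200):
    framing = agent.choose()
    agent.record(framing, engaged=(random.random() < (0.6 if framing == "social_proof" else 0.2)))
print(max(agent.successes, key=lambda f: agent.successes[f] / max(agent.trials[f], 1)))
# typically prints "social_proof" – the system has learned what works on this person
```

The point of the sketch is the accumulation: even a trivial per-person memory of what worked converges on the most effective framing, which is why systems with vastly richer signals raise the manipulation concerns described here.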

Are we headed towards an Orwellian future? If conversational AI and new monitoring technologies are not properly regulated and end up being used to track and manipulate people more extensively, governments and large corporations will hold the power to re-shape entire societies.

Future generations could be born into a system in which they are photographed, measured and tracked from day one: their micro-expressions, responses to manipulation, blood pressure, pupil dilation, gait, habits and more.

This will enable unprecedented levels of persistent microtargeted manipulation throughout their entire lives, limiting their ability to choose their own path and instead steering them towards whatever role society wants or needs them to fill. It will also enable predictive systems to monitor and gauge their intentions, allowing predictive security and law enforcement to intervene in their lives even where no transgression was intended.

We could end up living in the worlds predicted in 1984, Brave New World and Minority Report

If we do not regulate these technologies and their use at inception, human autonomy will be greatly diminished – even in societies that are currently democratic. But regulation efforts face an uphill battle against powerful corporate lobbyists, as witnessed in a recent report from the Corporate Europe Observatory detailing how the EU’s AI Act has been watered down from the beginning at the expense of fundamental rights and protections.

Meanwhile, another new study showed that many of these same companies are still collecting, processing and sharing people’s data even when users have refused to give consent – a clear violation of the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Recent history has made it clear that Big Tech’s efforts to lobby against and ignore regulations not only impact citizens’ privacy and agency, but also leave open vulnerabilities that can be exploited by malign actors.

If citizens, their democratically elected representatives and EU regulators do not understand and urgently address these issues now, we could end up living in the worlds predicted in 1984, Brave New World and Minority Report, none of which are free societies.


This article is a contribution from a member or partner organisation of Friends of Europe. The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.
