From countering disinformation to cognitive self-determination

#CriticalThinking

Peace, Security & Defence

Chris Kremidas-Courtney

Senior Advisor at Defend Democracy, Lecturer at the Institute for Security Governance and former Senior Fellow at Friends of Europe.

“The future cannot be predicted, but futures can be invented.” – Dennis Gabor, Nobel Laureate and author of “Inventing the Future” (1963)

For years now, democratic societies have been adapting to the digital age, as the ubiquity of mobile phones and social media applications has changed the way humans interact in the most significant ways since the invention of the telephone and electricity.

We’ve experienced how these revolutionary new technologies created new vulnerabilities to information manipulation, resulting in divided societies, suboptimal pandemic responses and an erosion of belief in facts and democratic institutions.

Initially slow to react, governments, civil society and institutions have responded vigorously and have made strides in reducing the impact of disinformation and other information manipulation. Most recently, at the fourth ministerial meeting of the EU-US Trade and Technology Council in Luleå, Sweden, both sides agreed to adopt a “common methodology of identifying, analysing and countering FIMI [foreign information manipulation and interference].”

While these are big achievements, new technological developments under way will change the entire cognitive landscape. We may be moving into an era of persistent monitoring, microtargeting and manipulation tailored for each person, on a mass scale.

Today, only 52% of internet traffic comes from humans, with bots making up the rest, and the trendlines point towards humans becoming a minority online in the near future. The challenge we now face is not just countering disinformation but addressing broader threats to our cognitive self-determination.

We are already seeing how the dawn of extended reality (XR) technologies will change our relationship with the digital world from an interactive one to an immersive one within the metaverse. This will enable targeted disinformation and information manipulation that ‘feels’ real. Discerning reality from unreality will become even more challenging as these technologies move us from post-truth to post-reality.

Outside of the digital world, the recent discovery of the human motion print (HMP) enables sensors to uniquely identify and track persons who are not carrying or interacting with any electronic device. Further development of this technology will allow personalised influence in everyday life, combining real-time detection of a person’s HMP with existing databases for targeted advertising and influence campaigns.

The convergence of these technologies with artificial intelligence (AI), especially in the form of conversational AI, could empower a new era of persistent monitoring and manipulation in which the lines between human free will and the microtargeted choices presented in a broadly manipulated landscape become blurred.

What does human agency or democracy mean when new technologies could enable each person to live in a “Truman Show” which is not of their own design?

These developments coincide with breakthroughs in neurotechnology – the science that monitors, records and can modify brain activity. Advances in neurotechnology are being tested to address perennial problems such as depression and drug addiction, as well as to keep train drivers awake and to find the ideal time to work on a challenging project.

Brain-computer interfaces are expected to replace headsets to access the XR metaverse, further blurring the lines between reality and unreality. Brain monitoring sensors are soon likely to be built into everything from bike helmets to in-ear headphones.

Experiments now underway are finding that neurotechnology powered by AI systems can decipher stories and images from a person’s brain with remarkable accuracy, even turning thoughts into text.

These developments raise concerns about privacy and manipulation given the new potential to read and decipher human minds and use brain stimulation to change thought patterns. What does privacy mean if we don’t even have the privacy of our own thoughts?

The development of these invasive technologies is outpacing our ability to govern them.

Nita A. Farahany, Professor of Law & Philosophy at Duke University and author of the new book “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology”, has proposed a “right to cognitive liberty”, which encompasses the right to freedom of thought and the right to mental privacy.

The combination of these dynamics and technologies – disinformation, XR, AI, HMP and neurotechnology – has the potential to fundamentally impact our societies. Yet our approach to them is splintered, with various civil society organisations and public institutions limited by their mandates or by funding lines tied to specific threats and challenges. Funding and emphasis on more holistic approaches remain rare.

In a recent podcast with Friends of Europe, Farahany urged the EU to take a more holistic approach and to update and refine our definition of human rights for the coming age.

In 2021, Chile enacted a new law, enshrined within its constitution, establishing the rights to personal identity, free will and mental privacy. It became the first country in the world to legislate on human rights for the future and to seek to safeguard citizens from advanced forms of cognitive monitoring and manipulation.

Since each of these sectoral laws requires long, laborious negotiations, we also need a more foundational, overarching statement of principles. Instead of just focusing on what we are trying to prevent, we should identify what we want to protect.

So, while the EU seeks to implement the AI Act and update the Digital Services Act, as well as the Code of Practice on Disinformation, it’s time for a broad collaborative effort to define and update the human rights we need to protect our cognitive self-determination in this new era.

These newly defined and updated human rights should be written into the EU’s AI Act, Digital Services Act, the laws of member states and an updated EU Charter of Fundamental Rights. In addition, updating human rights to protect cognitive self-determination should be on the agenda for the 2024 EU-US Human Rights Consultations and 2024 G7 summit in Italy.


The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.
