The world needs a time out on AI development

#CriticalThinking

Louis Rosenberg

CEO and Chief Scientist of Unanimous AI

Chris Kremidas-Courtney

Senior Advisor at Defend Democracy, Lecturer at the Institute for Security Governance and former Senior Fellow at Friends of Europe.

In recent years, we have seen a steady march of technological developments that have improved our lives but also introduced new risks to democracy and social cohesion. These previous tech revolutions unfolded over periods of five to ten years, yet democratic governments and institutions struggled to keep pace, often acting only after much damage had already been done.

The current revolution in artificial intelligence (AI), which began in earnest last year, is moving much faster than any technological advance in recent memory. Within the next 12 to 18 months, we will be talking to our computers and they will be talking back, performing human-level tasks through fully automated means. This is coming, and it is happening at a time when the European Union's AI Act will not yet be in force, nor will it be sufficient to protect democratic institutions and citizens' fundamental rights.

Last week, over 2,000 tech leaders and experts (including the authors) signed the 'Pause Letter' issued by the Future of Life Institute, calling on all AI labs worldwide to pause the training of their next-generation large-scale AI systems, namely new large language models (LLMs), for at least six months so that governance can catch up.

First and foremost, the concern is not that these large-scale AI systems are about to become sentient, suddenly developing a will of their own and turning on humanity. These systems don't need a will of their own to be dangerous: they need only be wielded by malign actors who can use them to influence, undermine and manipulate our societies in ways that challenge human agency and threaten democratic institutions.

This is a very real danger and we’re not ready to handle it. AI development over the next 12 to 24 months could mark an epochal moment in human history on par with the computing revolution, the internet revolution and the mobile phone revolution. But unlike these prior transitions, which happened over periods of years and even decades, the AI revolution is poised to roll over us like a thundering avalanche of change.

And that avalanche is already in motion. ChatGPT is currently the most popular LLM in the public sphere. Remarkably, it reached 100 million users in only two months; Facebook took more than four years to reach that milestone. We are clearly experiencing an unprecedented rate of technological change, and regulators and policymakers are deeply unprepared for the new risks coming our way.

On Friday of last week, Italy temporarily banned ChatGPT, citing privacy concerns over the use of people's personal data to train the popular chatbot, but concerns about AI go well beyond privacy.

The most urgent risks fall into two distinct categories. Firstly, there are risks associated with generative AI systems that can produce human-level content and replace human-level workers. Secondly, there are risks associated with conversational AI that can enable human-level dialogue and will soon hold conversations with citizens that are indistinguishable from real human engagement. Let's explore each.

Generative AI refers to the ability of LLMs to create original content in response to human requests. AI-generated content now ranges widely, from photos, images, artwork and videos to essays, poetry, computer software, music and scientific articles. In the past, generative content was impressive but not passable as human-level output. That has rapidly changed: current AI systems can create outputs that easily fool us into believing they are authentic human creations or genuine videos and photos captured in the real world. These capabilities are now being deployed at scale, creating several significant risks for societies.

While several studies indicate concerning impacts on Western workforces, the more urgent danger is that AI-generated content can look and feel authentic, and often comes across as authoritative, yet can easily contain factual errors. No accuracy standards or governing bodies are in place to ensure that these systems, which will become a major part of the global workforce, do not propagate errors ranging from subtle mistakes to wild fabrications. For these reasons, we urgently need to ramp up regulatory authorities that can create and enforce accuracy standards for AI-generated content.

Another major risk is the potential for bad actors to launch AI-generated influence campaigns that spread propaganda, disinformation, deepfakes and other information manipulations. Malign actors already spread damaging content, but generative AI will enable them to do so at scale, flooding the world with artifacts that look authoritative but are completely fabricated. To help tackle this threat, we need watermarking systems that identify AI-generated content as synthetic, enabling the public to more easily discern what is real. Again, this means we need to rapidly put protections in place and ramp up regulatory authorities to enforce their use.

Next we come to conversational AI, a form of generative AI that can engage users in real-time dialogue through text or voice. These systems have recently advanced to the point where AI can hold coherent conversations, tracking flow and context over time. These technologies are deeply concerning because they introduce a totally new threat vector for targeted influence that regulators are unprepared for: conversational influence.

As every salesperson knows, the best way to convince someone to buy or believe something is to engage them in conversation: you make your points, observe their reactions and adjust your tactics to address their resistance or concerns. With the release of GPT-4, it is now clear that AI systems will be able to engage users in authentic-seeming conversations as a form of targeted influence. The concern is that third parties using application programming interfaces (APIs) or plugins will embed promotional objectives in what seems like natural dialogue, and that unsuspecting users will be manipulated into buying products they don't want or believing information that is untrue.

This AI manipulation problem has suddenly become an urgent risk. In the near future, malign actors will target individuals with conversational content customised to their specific values, interests, history and background, adjusting its persuasive tactics in real time to optimise impact. Unless regulated, these methods will be used not just for predatory sales tactics but to drive propaganda, disinformation and other information manipulations. Left unchecked, AI-driven conversations could become the most powerful form of targeted persuasion humans have ever created. Again, we urgently need to put regulations in place, potentially banning or heavily restricting the use of AI-mediated conversational influence.

At this point, there are two main questions for the EU and its member state governments. Firstly, are we ready to grant so much power to systems we don't yet fully understand? And secondly, can they quickly agree on a set of new regulatory guardrails to protect democracy and human agency, and implement meaningful monitoring and enforcement measures before GPT-5 or other next-generation systems are released into the world?

If not, then it is time for the EU, the United States, Japan, the United Kingdom, South Korea, Canada, Australia and other democracies to call a time out on AI development until governments and institutions can get ahead of these new developments. The issue is urgent because AI technologies are currently evolving faster than any technological advance in recent history, and possibly with even greater impact.


The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.
