AI regulation: getting it right from the get-go

#CriticalThinking

Peace, Security & Defence

Jamie Shea

Senior Fellow for Peace, Security and Defence at Friends of Europe, and former Deputy Assistant Secretary General for Emerging Security Challenges at the North Atlantic Treaty Organization (NATO)

This article is linked to State of Europe – the festival of politics and ideas.


State of Europe is a fixture and a highlight of the European calendar. The reason is simple: it is a forum for today’s top leaders from the worlds of politics, business and civil society, from Europe and beyond, to connect, debate and develop ideas on key policy areas that will define Europe’s future.

The State of Europe high-level roundtable involves sitting and former (prime) ministers, CEOs, NGO leaders, European commissioners, members of parliaments, influencers, artists, top journalists and European Young Leaders (EYL40) in an interactive and inclusive brainstorm – a new way of working to generate new ideas for a new era.

The 2023 roundtable focused all of its attention on deliberating 10 policy choices for a Renewed Social Contract for Europe that will be disseminated ahead of the 2024 European elections and ensuing new mandate. The 10 policy choices will be the result of year-long multisectoral and multi-stakeholder consultations and will take into consideration the voices and opinions of over 2,000 European citizens.

As Friends of Europe progresses on its road towards a Renewed Social Contract for Europe by 2030, State of Europe serves as an opportunity for entrepreneurs, politicians, legislators, corporates, civil society, citizens and thought leaders to brainstorm solutions and ways out of the current polycrisis. The big-ticket items and trends that demanded our attention at the 2023 event included: money, debt, hardship, conflict, corruption and elections.

Learn more about State of Europe and the 2023 edition, ‘10 policy choices for a Renewed Social Contract for Europe’.

Inventors of technology often experience doubts, if not remorse, about their discoveries when they realise the full potential, for bad as well as good, of the inventions that they have let loose upon humanity. When I was at school and studying German literature for my A-level exams, one of our prescribed texts was the play “The Physicists”, by the Swiss writer Friedrich Dürrenmatt. It showed physicists – one of them posing as Einstein – whose work had split the atom and opened the path to building the first nuclear bomb, voluntarily incarcerating themselves in an asylum and pretending to be mad. The purpose of this exercise was to convince governments that a nuclear weapon invented by madmen could not possibly work and that programmes to develop the bomb, like the Manhattan Project in the United States, should therefore be abandoned.

Despite this fictitious and admittedly far-fetched example of scientists renouncing their work, most inventors tend to emphasise the benefits to humanity of their research, whether in health and genetic engineering or in the expansion of cyberspace, virtual reality and the planetary connectivity of social media. It is left to others in law enforcement, intelligence agencies and the military to see the other, darker side of the coin. Technology is often value-neutral and depends on the use to which it is put. The same nuclear energy that could eliminate humanity in a single afternoon also gives us carbon-free electricity and radiation for cancer treatments. The same software that allows us to talk to everyone everywhere also allows criminals to steal our identities and raid our bank accounts, and the same drones that deliver our parcels and monitor our crop yields can inflict instant death on anyone, anywhere, at any time. Benefits and dangers are indeed inseparable.

So, policymakers have to try to retrofit security into products that were not originally designed to be secure and whose very nature – usable by everyone, connected to everyone and affordable to everyone – is the antithesis of security. Given the rapid dissemination of cheap but high-performance technology in a globalised world, products have already become universal by the time the security community intervenes. New versions, upgrades and innovations come on stream before engineers have found ways to make the previous versions secure. It is an eternal game of catch-up, of closing the stable door after the horse has bolted.

This week, Friends of Europe is debating the form and content of a Renewed Social Contract for Europe at its annual festival of politics and ideas: the State of Europe roundtable in Brussels. One key feature of this revised and modernised social contract has to be how governments and industry can better protect Europe’s 600+ million citizens against harmful technologies that manipulate humans more than humans manipulate technology. This manipulation can serve commercial gain, as when a person’s data is abused for theft, sales and advertising; it can restrict rights and individual freedoms, such as the freedom to engage in political protest; or it can serve even more cynical purposes, as when terrorists plot attacks, steal cryptocurrency or acquire knowledge of the design of explosives or chemical or biological weapons. The public will not accept fatalism in the face of these abuses, nor will it accept liability for this level of risk or absorb the damage without complaint or compensation. People expect products to be safe and regulated before they enter the marketplace, and they expect the costs of things going wrong to be shared between governments, the private sector and the individual consumer.

Cyberspace – where we have now spent three decades trying to improve firewalls, design more secure software, constantly apply security upgrades, expose bots and fakes, and deter cyber-criminals through better detection and stiffer penalties – is not an edifying example. Can we reboot, learn from previous mistakes and do better with the new generation of technology, particularly artificial intelligence (AI), especially when this new technology seems to harbour even more existential threats to life as we know it if we get the calculus wrong? It certainly raises the stakes for a new European social contract if the fundamentals of human existence need to be protected, and not merely a person’s access to healthcare, jobs, education and security.

We cannot wait to see how far AI develops and whether the good balances the bad before we apply the brakes

It is perhaps for these reasons that last week’s first-ever AI Safety Summit, hosted in the United Kingdom by Prime Minister Rishi Sunak, attracted so much attention, despite the all-consuming conflict between Israel and Hamas in Gaza. The summit was held symbolically at Bletchley Park, the country house outside London where Alan Turing and his fellow code-breakers cracked the German Enigma cipher during World War Two, and where Colossus, the world’s first programmable digital computer, was put to work. It attracted leaders from 28 countries, as well as the United Nations Secretary-General, António Guterres, and the President of the European Commission, Ursula von der Leyen. The US Vice President, Kamala Harris, also showed up. What was most unusual, however, was the presence of the Chinese Vice Minister of Science and Technology, Wu Zhaohui, at a predominantly Western event. The United States has long accused Beijing of trying to become the world leader in AI by illicitly obtaining data across the globe and by using AI tools, such as facial recognition and social credit scoring, to increase state control over the Chinese population. Yet the UK argued that, given China’s massive investment in AI technology and its use, it was all the more important to have Beijing at the table in any discussion of the future regulation of AI.

The UK was also determined to make this summit a multi-stakeholder event, as befits a technology where innovation and private sector investment are in the driving seat. So Elon Musk of X participated, as did Sam Altman, co-founder of OpenAI, the Microsoft-backed company that launched the first frontier AI application, ChatGPT, on the market one year ago. Demis Hassabis, CEO of Google DeepMind, was also in attendance, alongside leading AI developers from companies such as Inflection and Anthropic. The guest list included two well-known Chinese online retail and communication companies, Tencent and Alibaba, as well as civil society representatives from The Alan Turing Institute and the Future of Life Institute. Finally, there were academics such as Stuart Russell and Geoffrey Hinton, the latter often described as one of the ‘Godfathers of AI’. Just over 100 participants at Bletchley cannot speak for the entire global community producing AI or affected by it, but they were a sufficient critical mass to kick off the discussion on whether it is possible to define a common set of rules of the road and, if so, what those rules should look like. Some governments have already started to define their own norms and standards without much heed to what others are doing. The EU has come up with its own AI Act, which is currently going through the European Parliament. So, is it possible to adopt a common approach before we are confronted with dozens of competing and incompatible regulatory regimes that each target different aspects of the AI challenge?

Certainly, it is none too soon to have this public policy debate, given the warnings about future AI impacts that have come from the inventors and developers of the technology itself, particularly Musk and Altman. Many potential threats and risks have been identified. In its new AI Act, the EU has worried about human rights, particularly in the fields of mass surveillance, wiretapping and facial recognition, as well as data privacy. In both North America and Europe, there have been concerns about election interference and discrimination. We have become familiar with deep fake videos and voice or face reproductions, which have become technically more sophisticated and lifelike. But AI can also help political campaigners to target disinformation and craft tailored messages for key groups of swing voters or minorities, either to encourage them to vote or to discourage them from voting. Some of this activity may well be legal and legitimate, building on existing polling or data analytics while enabling a more sophisticated and differentiated approach. Yet distinguishing between fact and fiction, or breaking out of ideological bubbles, will become increasingly difficult as AI-generated messaging pushes people deeper into those bubbles. The creative industries, such as music or cinema, are also worried about fakes impersonating well-known performers, and about the legal issues, such as copyright or royalties, that arise in determining the owners and beneficiaries of AI-generated content or hybrids of AI and human creativity. What percentage of AI content in a given product would, for instance, change the liability or safety rules? Yet the most alarming scenarios are in the field of security: more deadly weapons relying on AI-generated targeting and intelligence or, as Rishi Sunak warned just before the summit, the use of AI to make cyber-attacks more devastating or to enable terrorists to mass-produce biological weapons. The Ukrainian army has already demonstrated how cheap drones can be turned into deadly bombs using AI.

At the extreme end of the spectrum, some have painted a more existential picture of AI-powered robots or computer networks taking over the world and enslaving less intelligent mankind, as the systems learn from each other dynamically and develop the capacity to self-programme and self-upgrade outside the human loop. Senior figures at Microsoft and Alphabet even co-signed a public letter warning that AI posed an existential-level risk to humanity. Closer to home, most of us are convinced that AI-led work processes will in time lead to the disappearance of millions of jobs – more among the white-collar workforce, such as doctors, teachers and bankers, than among blue-collar workers in construction or agriculture. Yet the optimists point out that, initially at least, AI will make us more efficient in our existing jobs by automating many of our more routine processes and by giving us more and better data on which to base our decisions and business strategies. The bottom line is that we need more intelligence about the intelligence we have just created. We cannot wait to see how far AI develops and whether the good balances the bad before we apply the brakes.

It would be a pity to deny ourselves the full benefit of AI because of a panicky overreaction to perceived risks and dangers

Despite the warnings, many in the AI innovation community will see only the benefits and play down the dangers in order to maximise their profits, delaying meaningful regulation for as long as they can. We can recall Mark Zuckerberg denying for years that Facebook was anything other than a wonderful tool to connect people, empower free speech and facilitate a global conversation. The advertising revenue that it generated made him a billionaire several times over, able to maintain Facebook’s near-monopoly position in the market, promote its chosen products and services, and keep out potential rivals. It was only after governments, the US Congress and the European Commission and Parliament acted to hold him to account that Zuckerberg finally moved seriously on things like fact-checking, data protection, content moderation and action against child pornography, violent extremism and fake identities. At least AI gurus like Musk and Altman have been willing to acknowledge the more negative side and start a conversation with governments without waiting to be named, shamed and pressured.

The summit produced the Bletchley Declaration, which in essence was a commitment by the various stakeholders from government, the private sector and civil society to continue talking and to try to figure out some basic common rules. The AI thinker Yoshua Bengio was commissioned to write a report on the current status of AI technology and the trends and risks ahead. A report like this can be useful in determining how humans can remain in the loop – with a stop switch built into algorithm programming – for as long as possible. It should also help governments decide where to apply the precautionary principle to new AI developments, particularly in the field of safety and testing, and perhaps impose limits on technologies and their capacity for modification, in the way that speed limits or safety features like warning sounds can be built into new car engines. A few days before the summit, the US announced the setting up of its first national AI Safety Institute. The UK is also establishing a safety institute, and just last week the UN announced the formation of an AI Advisory Board of experts from government, the private sector and academia. Sunak proposed the model of the highly respected and influential UN Intergovernmental Panel on Climate Change (IPCC), whose work has done so much to make the reality of man-made global warming irrefutable except to the most bunkered-down climate change deniers. Its scientific analyses and reports on climate trends and carbon emissions are now the global authority and the baseline from which COP agendas, government and UN initiatives, and climate mitigation and adaptation efforts are developed. That said, the IPCC provides data and assessments rather than policy solutions, so it might not be the exact model we need for AI, which has to be addressed much faster and with a greater sense of urgency than the leisurely pace at which we have approached climate change.

As the debate moves forward, there are some fundamental questions that governments will need to manage and – sooner or later – resolve.

First, how quickly should we impose regulation? And with a light touch initially, or with a heavy hand from the start? ‘Don’t rush to regulate’ is a mantra of a business community fearful that profits and innovation will be curtailed by premature, unnecessary and ill-suited government regulation. ‘Insight before oversight’ is a favourite slogan of Musk. Sunak himself seemed to support a cautious, step-by-step regulatory approach in his opening address to the summit. Certainly, it would be a pity to deny ourselves the full benefit of AI because of a panicky overreaction to perceived risks and dangers, some of which may be overhyped. Yet this can also be self-interested pleading by business: the same companies that are warning about AI are continuing to invest heavily in it. So even if we need to think long and hard to get the regulation right, some basic guidelines or guardrails could be agreed upon beforehand in an international code of conduct. The parallel is cyberspace, also a notoriously difficult area to regulate: some countries prefer a voluntary approach, hoping to secure good information exchange with the tech companies, while others, including the EU, advocate strict legal obligations to respect safety standards, with large fines for data breaches and for inadequate consumer protection and incident prevention. The EU’s Network and Information Security Directive and Digital Services Act are good examples of the top-down approach. The model of confidence and transparency measures in cyberspace – whereby states accept certain principles on safety and human rights, for instance, and agree to factor these into domestic regulation – may be the way to go here. These cyber-security confidence measures began in regional groupings, such as the Council of Europe, the Organization for Security and Co-operation in Europe (OSCE) and the G7, as open frameworks that other states could adhere to at a later stage. An additional thought: in regulating AI, governments need to preserve an open market, level the playing field and prevent the big fish in AI innovation and marketing from using regulation to keep out the small fish, particularly small and medium-sized enterprises. So, as happened recently with the FTX cryptocurrency platform, governments need to be on their guard that when industry itself calls for regulation, it is not a disguised form of protectionism.

Second, how do we balance multilateralism with ‘mini-lateralism’? We have seen that achieving global agreements is difficult, not only in view of the large number of states that need to be brought on board – 193 currently in the UN General Assembly – but also because of the more competitive and confrontational nature of the modern international order. It is several decades since we concluded a universal treaty like those on nuclear non-proliferation (NPT) or chemical weapons (CWC). The Group of Governmental Experts working in the United Nations on the responsible use of cyberspace has toiled for years without agreement on a set of norms, let alone a binding treaty. The United States and its allies see cyberspace as a multi-stakeholder domain for which governments share responsibility alongside civil society and industry. China, Russia and a number of authoritarian states, by contrast, see cyberspace as equivalent to airspace or national territory – an area over which those states claim jurisdiction and control, with firewalls on what comes in or goes out. So there is probably no alternative to a patchwork approach, with different constellations of countries working at varying speeds but trying to secure enough buy-in from other states to establish a dominant, leading set of standards.

Tech companies will need to be incentivised or forced to integrate safeguards into AI technology

The United States seems determined to lead in organising the Western camp. There were media reports that the United States was not particularly happy about the Chinese participation, and Harris gave her major address on AI safety in London, away from the Bletchley venue: “Let us be clear: when it comes to AI, America is the global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way no other country can.” As with climate change, it is good when the United States takes the lead and builds a coalition behind it, but the test of US diplomacy will be to build a regulatory regime that does not simply privilege its own companies and commercial interests. For its part, the EU will have more influence on US policy if it develops an AI innovation industry and a venture capital market of its own. The EU cannot afford to be simply the regulator of US technological dominance, as has been the case with Google and Facebook. The US-EU Trade and Technology Council would be a good place to take up these issues, and AI norms could become a core part of a revived transatlantic agenda beyond security cooperation in NATO or regarding Ukraine.

Third, governments have to remain in the lead and frame the AI issue as one of society and human rights, not simply one of technology and economics. The multi-stakeholder approach is certainly the way to go, but governments need to keep control of the key decision points, for instance, the introduction of mass surveillance technologies or the storage of vast amounts of data on people’s private as well as professional lives. These decisions also need to be subject to parliamentary scrutiny. Parliaments must organise and equip themselves to exercise this supervisory role effectively, which means having adequate access to legal and technical expertise to scrutinise governments and the private sector and to hold them to account. They need to check that governments do not expand AI use on the sly, in an incremental way that hides significant leaps forward and inflection points. Parliaments can also flag issues requiring broader civil society debate. One responsibility is to verify the functioning of AI systems that have already been introduced, not only to focus on what lies ahead. In London, civil rights groups have complained that AI facial recognition technology repeatedly makes mistakes in identifying members of the black community in police investigations, and they question the reliability of these systems when used on different target groups.

Fourth, we need to help the world’s poorer and low-income countries to enter the AI economy and universe. AI may be most useful in helping poorer countries to leapfrog stages of the development ladder, for instance, in targeting aid and measuring its effectiveness, building more efficient health services and agriculture, coping with the impact of climate change and producing more climate-resilient crops. Low-income countries should not be kept waiting until the advanced industrial economies have harvested all the benefits of AI and used it to further increase their productivity and technological advantage over their poorer neighbours. A useful initiative at the Bletchley Park meeting was therefore the announcement by the UK Foreign Secretary, James Cleverly, of a joint UK-US initiative to help 46 countries in Africa gain rapid access to AI technology. It will also produce local-language versions of AI systems to help disseminate their use. The initial funding of $100mn may be a drop in the ocean compared to Africa’s requirements, but it at least sends an important signal.

Finally, we need to pursue an approach of ‘safety by design’ rather than ‘safety as an afterthought’. This means that tech companies will need to be incentivised, or forced, to integrate safeguards into AI technology from the conceptual stage, rather than leaving it to governments to tell them, years later and with only limited effect, how and where to modify their technologies to make them safer. This will require AI developers to open their systems to the scrutiny of AI safety institutes at an early conceptual stage and to work with regulators in tandem with their own research and development activities. In terms of intellectual property, patent protection and licensing, it will be quite a challenge to reverse the usual cycle of ‘innovation and commercialisation first, regulation second’. Government financing may help to persuade industry to cooperate, but will it also undermine fair competition? A big debate lies ahead here. One interesting idea discussed at the Bletchley meeting is to require AI developers to spend 30% of their research and development budgets on safety features.

When it comes to AI regulation, we can borrow a phrase from Winston Churchill: “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” The UK can take the credit for starting the debate at the international level. Sunak has shown that the post-Brexit UK can still use its scientific and technological base – demonstrated in developing one of the first vaccines against COVID-19 – to convene a high-level gathering and try to bring the three global power blocs – the US, the EU and China – together. Whether it could have done this just as effectively, and with even more influence on the emerging AI debate, had it still been an EU member is a topic we will need to leave for another day. The torch now passes to South Korea, which will organise the next AI Safety Summit in 2024, and to France, which will host the one after that six months later. This sense of urgency is encouraging, as experts tell us that AI will never again develop as slowly as it is developing today. The pace is already fast and will only get faster. So the next summits need to be more than stocktaking exercises; they must quickly put in place an effective international code of conduct and a transparency and verification mechanism. A UN agency, like those that monitor nuclear programmes (the IAEA in Vienna) or chemical weapons (the OPCW in The Hague), may need to be established soon. We need to drive AI before it starts driving us.


The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.
