In banning autonomous weapons, Europe must take the lead


Frank Sauer

Senior Research Fellow and Lecturer at Bundeswehr University Munich

On 12 September 2018, the European Parliament adopted a resolution calling for an international, legally binding prohibition on ‘lethal autonomous weapon systems’ (LAWS). This term refers to weapons that are not under human control but are operated solely by algorithms.

That same week, YouGov published a survey on artificial intelligence (AI) in Germany. It showed that 71% of Germans oppose handing control of weapons in warfare over to AI. Such concerns are neither new nor limited to Europe. As early as 2013, a YouGov survey in the United States reported that 55% of Americans opposed LAWS. Opinion polls conducted by the Open Roboethics Initiative (2015) and by IPSOS (2017) indicate a similar trend at the global level.

In Europe and around the world, most people oppose weapon systems that select and engage targets solely on the basis of their software, preferring instead systems controlled by human judgment and decision-making. The one exception is autonomous defensive systems that fire exclusively on incoming munitions.

Thanks to pressure from expert communities and civil society, LAWS became the subject of diplomatic discussions at the United Nations’ Geneva offices in 2014. Since then, not a single country has gone on record in favour of LAWS.

However, a few states, notably the US, want to first explore the potential benefits of autonomy in weapon systems and consider any regulatory action premature for now. Russia is similarly stalling, and China sends mixed signals. France and Germany are proposing a non-binding political declaration as a first step towards a ban. Twenty-five other states, including Austria, are calling for an immediate, outright ban.

Although there is no consensus yet, LAWS are clearly met with widespread wariness. This stands in stark contrast to earlier processes in Geneva: anti-personnel landmines, for example, were openly endorsed by numerous countries and were nevertheless eventually prohibited, albeit outside the UN framework. While a ban on LAWS is not yet within reach at the UN, the risks of allowing ‘killer robots’ to roam future battlefields are almost universally acknowledged amongst security experts and diplomats alike.

These risks are indeed manifold. For instance, the behaviour of LAWS is inherently unpredictable in scenarios where multiple algorithmically controlled weapon systems interact with one another. This has experts deeply worried about unwanted, split-second military escalation, triggered and unfolding too fast for human cognition and intervention.

There are also serious doubts about the ability of LAWS to adequately comply with essential elements of international humanitarian law, such as the distinction between civilians and combatants or the proportionate use of military force. Moreover, the notion of delegating the legally required human judgment to a machine is questionable in itself, regardless of the machine’s performance. After all, the guiding principle of respect for human dignity dictates that humans, not machines, should not only remain legally accountable for such life-or-death decisions but also feel the ethical gravity of the killing that takes place in war.

If states parties in Geneva are to be believed, no one wants to see weapons engaging inhabited military objects, soldiers or, unintentionally, civilians without meaningful human control and accountability. LAWS nevertheless present the international community with the classic security dilemma: no one may be keen on being the first mover, yet everyone wants to be ready to follow in case the line is eventually crossed. And since autonomy in weapon systems is less about hardware than about software, the proliferation and military application of LAWS would then be extremely rapid. Consequently, if we keep following the current path of this arms race, we will, against our better judgment, stumble into a dark future in which states end up wielding – and facing – the weapons they never really wanted.

None of this is without precedent, of course. The situation is a textbook case for arms control. We know from history (think of nuclear weapons under New START, or of conventional armed forces in Europe under the CFE Treaty) that arms control is the best diplomatic and military tool to build initial trust, establish binding rules and verify compliance in cases where the overall strategic risks outweigh the tactical benefits. The hardest part is mustering the initial political will to start the process. With public opinion so unambiguously against LAWS, Europe should assume leadership in achieving this end. We cannot simply wait for the US, Russia, China or the rest of the world to take the lead, and we must not let ourselves be disheartened by recent setbacks such as the erosion of the INF Treaty.

Regulating autonomy in weapon systems is a challenge. Unfortunately, it is not comparable to banning landmines: defining and prohibiting a discrete piece of military hardware will not be sufficient. We will soon be able to make almost any weapon autonomous, and no one will be able to tell a weapon’s level of autonomy simply by looking at it from the outside.

The challenge, then, is that we must rethink our fundamental relationship with technology. Rather than regulating only the numbers or capabilities of weapons, we need to regulate the relationship between humans and machines in warfare. To effectively ban LAWS, Europe needs to lead in making meaningful human control over weapon systems the clearly defined and verifiable baseline value in military practice.

This would not hamper civilian progress in AI or the development of precision weapons that seek to minimise collateral damage. Instead, this strategy would chart a course to a brighter future in which AI is used for the betterment of humanity, rather than the outsourcing of killing to machines.
