Is the influence of artificial intelligence on modern security far greater than we think?

#CriticalThinking

Vanessa Müller

Vanessa Müller is a Researcher at Facing Finance, which is part of the Campaign to Stop Killer Robots.

This year’s Munich Security Conference in mid-February kicked off quite spectacularly. On its eve, a public side-event introduced what scientists and experts around the world have labelled the third revolution in warfare, after gunpowder and nuclear arms: lethal autonomous weapon systems. And how better to do so than by having a robot deliver the opening speech? Sophia, the first robot to receive citizenship of a country, claimed that the use and development of artificial intelligence would make the world a better place.

In some respects, Sophia is certainly right. Artificial intelligence could reduce misdiagnosis in healthcare, cut resource consumption in agriculture or improve disaster relief programmes. However, the same artificial intelligence and machine learning applications that detect objects, such as pedestrians, in the environment of a self-driving car could be used to search for, identify, select and attack a target in war without human intervention.

Developments in artificial intelligence enhance offensive military capabilities, thereby creating a classic security dilemma for states. Certainly, there is a long history of automation in defence systems; to date, however, humans have remained in control of the decision to use lethal force.

Much like today’s tele-operated drones, autonomous weapon systems create a false image of a clean and bloodless war through physical detachment from war itself. This reflects a deep misunderstanding of how autonomous weapon systems will affect modern security beyond the weapon system itself. Not only is the delegation of life-and-death decisions from humans to algorithms inherently morally wrong, it will also likely facilitate the decision to go to war. The possession of autonomous weapons lowers the political costs of war in human, monetary and domestic terms. Shifting risks away from one’s own soldiers will likely lower the threshold for policy-makers to go to war, ultimately increasing the number of conflicts and contributing to international instability.
In contrast to nuclear weapons, which require hard-to-obtain raw materials and a specific military development trajectory, autonomous weapons such as microdrones can be cost-efficiently mass-produced and are likely to be obtained not only by governments but also by non-state actors. They could be 3D-printed or manufactured from components ordered via Amazon, and the crude software required is within the reach of advanced computer science students. Lethal autonomous weapon systems are thus also likely to be deployed by dictators to suppress opposition members, by warlords to carry out ethnic cleansing or by terrorists to destabilise nations.

Proposed operational advantages, such as the speed of autonomous weapon systems, should also be viewed with caution. Certainly, they react faster than humans and can pre-empt conventional adversary weapon systems that await human approval for lethal decisions. However, if autonomous operational speed outpaces human decision-making and control, regional and international stability are jeopardised, possibly leading to higher levels of violence.

Human judgement, intuition and an understanding of a rapidly changing environment are critical to avoiding escalation. High-speed decisions about life and death made by autonomous weapon systems without human control could have severe international consequences, especially if the systems malfunction, are hacked or are simply not programmed for the variability of today’s chaotic battlefields.

Artificial intelligence also carries the novel risk of being hacked and altered. A change to a GPS position would not be questioned by an autonomous weapon system; as a result, it might turn against friendly forces or civilians, causing unintended escalation in a crisis. By contrast, a human operator in control of a weapon system would be able to notice that something is wrong, leaving room for intervention.
Progress in artificial intelligence and machine learning applied to autonomous weapon systems poses not only risks to international security but also numerous legal and ethical concerns. Most importantly, autonomous weapons cross a moral line by delegating the decision over life and death to machines. The decision to use force and the value of a human life are trivialised if no one has to carry any moral burden for that decision.

Furthermore, who should bear responsibility in the case of a humanitarian disaster caused by an autonomous weapon system?

The concept of accountability puts humans, and agencies led by people, at its centre. Algorithms developed through deep learning cannot be held responsible. Hence, even if we assign accountability to a state, a programmer or a military commander as a substitute, this would hardly be meaningful if none of these entities understands how and why the algorithm made its decision.

A final knock-out criterion for autonomous weapon systems is their failure to comply with international humanitarian law. As experts in artificial intelligence and robotics have repeatedly reported, the principles of distinction, proportionality and military necessity cannot be satisfied by current artificial intelligence, and perhaps never will be. A pre-emptive ban on autonomous weapon systems is therefore urgently needed to halt the dangerous arms race between nations that has already begun.

Who are you listening to when it comes to artificial intelligence in weapon systems? To the more than 3,700 of the world’s top scientists in artificial intelligence and robotics, such as Barbara J. Grosz or Mustafa Suleyman, and the more than 20,000 other endorsers of an open letter calling for a pre-emptive ban on autonomous weapons, among them Elon Musk, Jaan Tallinn and Stephen Hawking? Or rather to Sophia, a robot that merely simulates an understanding of the world?