Discussion summary: AI in defense - what would a code of conduct for AI look like?


This is a summary of the recently concluded discussion on the seventh edition of Debating Security Plus (DS+). DS+ is a global online brainstorm that brings together a community of security experts from around the world throughout the year to discuss the changing nature of warfare and its implications for global thinking on peace, security and defence.


Artificial intelligence in security and defence is developing at lightning speed and the international community must quickly catch up with technological advances if it is to regulate its use on the battlefield. This was the premise of the Debating Security Plus discussion on AI and defence.

What would a code of conduct for AI look like?

Courtney Weinbaum argues that an AI ‘code of conduct for defence’ should draw from other defence codes of conduct and extend the principles of the UN Universal Declaration of Human Rights and the Geneva Conventions. Zhanna Malekos Smith puts forward a ‘warrior-in-the-design code of conduct’ for the armed forces, whereby humans would always verify targets prior to AI engagement. This code of conduct would integrate AI and armed robots to enhance, rather than supplant, human capability in combat.

Rod Thornton instead draws attention to the need for more trust in the international system, stressing that distrust could fuel the emergence of a ‘doomsday’ AI weapon. He warns that political leaders will not want to seem irresponsible and put their citizens at risk by limiting their own development of AI weapons if other states cannot be trusted to do the same.

What role does Europe play?

Olli Ruutu, Deputy Chief Executive of the European Defence Agency (EDA), presents the EDA’s efforts to develop common European guidelines on the use of AI for military purposes. The agency currently oversees 30 collaborative projects related to artificial intelligence. Yet Estonia’s Ambassador to the EU Political and Security Committee, Rein Tammsaar, calls on the EU to do even more and triple AI investment. He argues that “overregulation and restrictions can undermine exploitation of AI in the defence field, including vis-à-vis other actors not constrained.”

What’s the way ahead?

Hugh Gusterson’s intervention contrasts with many of the other contributions, making the hard-hitting argument that AI technologies, like bioweapons and landmines, should be banned from the battlefield. On autonomous drones he contends that “Western powers may be the first to deploy them, but they will not be the last. If we create them, they will spread.”

Raluca Csernatoni of Carnegie Europe argues for a shift in narratives, nuancing the debate away from both hype about AI revolutionising warfare and crippling anxieties over its potentially damaging consequences. She warns that framing the debate as an arms race could cultivate an “insecurity strategic culture premised on antiquated Cold War rhetoric.”
