Iran war must become Europe’s ‘Sputnik’ moment for AI

#CriticalThinking

Digital & Data Governance

Momodou Malcolm Jallow

Member of the Swedish Parliament

The war in Iran has forced us to confront a long-delayed reckoning: artificial intelligence is straining human rights. Europe’s lead in responsible AI counts for little if it remains on the sidelines.

When America launched its bombing campaign, with predictable regional blowback and civilian casualties, Europe chose caution. British and EU leaders continue to stress the humanitarian toll and to invoke international law as the path to de-escalation.

Alas, this restraint did not spare Europe from the conflict. An Iranian drone has already struck Cyprus, while missiles heading for the continent were intercepted by Turkey. The effectiveness of these weapons owes much to AI-based targeting and guidance systems.

This is the evolution of warfare that policymakers have dreaded. AI is steadily pushing conflict towards greater autonomy in the use of force, yet concern for the humanitarian consequences struggles to keep pace.

That moral reckoning may have surfaced already. Within the first day of bombing, American forces struck an Iranian school, killing more than 175 people – mostly children. Investigations are ongoing, but analysts have questioned whether outdated intelligence or automated targeting systems played a role. At the same time, the flood of AI-generated images and manipulated footage circulating online has made it harder to verify what happened, further complicating accountability in wartime. Whatever the final explanation, experts warn AI-assisted targeting can misidentify civilians and increase civilian casualties.

I fear we are approaching a civilisational dilemma for the rules-based order Europe claims to champion. AI is advancing faster than law or ethics can keep pace. In that widening gap, excessive force can easily be blamed on the algorithm rather than the decision-maker.

This erosion of accountability may widen further under pressure from Washington. After all, American officials have already urged the EU to soften its AI rules, claiming they disadvantage American firms and reportedly using trade leverage to push deregulation.

Europe’s economic dependence is fast becoming a strategic vulnerability. American firms have already shown interest in European developers like Mistral and Aleph Alpha. If those acquisitions materialise, we may soon be importing Silicon Valley’s laissez-faire approach to AI along with them. That risk is hardly theoretical. Grok’s arrival in Europe, replete with racist tropes in its outputs, shows just how quickly such malignant technology can cross borders.

Europe should lead the push for a global compact on responsible AI use

A continent that prides itself on human rights and responsible AI cannot accept a false choice between ethics and security. That tension reflects decades of reliance on American protection. The answer is European technological sovereignty.

Europe must back its own AI industry and shield strategic technologies from takeovers that would erode its human rights-centred approach. With generative AI investment already tripling in a year, the continent has momentum. Now the EU must match it with greater financing.

But domestic action alone will not stop the next Minab-style disaster. AI is now a global force, and responsible AI must become a global ambition in parallel. That is why Europe should lead the push for a global compact on responsible AI use to guide the deployment of these technologies. Dismiss international law all you wish, but its influence in shaping national laws should not be underestimated. Here in Sweden, our AI strategy acknowledges this by explicitly tying national policy to international cooperation and global rulemaking.

Sceptics will call this naïve. The UN has already tried to set global AI standards, only to see Washington withdraw over fears of centralised control. But Europe has leverage the UN never did. London hosts DeepMind, Paris has Mistral, and companies such as Saab and BAE Systems shape future defence technologies. If responsible standards are embedded there, those norms will inevitably spread worldwide.

It is through the slow and frustrating political work of building global consensus that we begin to see AI for what it is, rather than the cure-all it is sold as across the Atlantic. The current wave of large language models reveals clear limitations. They can hallucinate facts, fabricate sources and misread technical data while sounding serenely confident. As British AI expert Sachin Dev Duggal observes, they optimise for what sounds plausible rather than what is demonstrably true. Feed them vague human prompts, and the problem worsens further. The result is confident nonsense and a growing risk that misplaced trust in AI will one day produce the next humanitarian catastrophe. For systems intended to avoid human rights abuses, that risk is unacceptable.

That is why neuro-symbolic AI deserves serious attention. By combining neural pattern recognition with rule-based reasoning and structured knowledge, it aims to address the central weakness of current models: their inability to explain how they reached a conclusion. Duggal’s SeKondBrain AI offers a glimpse of this approach in the civilian world, building structured knowledge graphs and persistent memory to ground answers in evidence rather than probabilistic guesswork.
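
To make the contrast with probabilistic guesswork concrete, the sketch below pairs a neural-style confidence score with a tiny, hand-written knowledge graph and a symbolic check that returns a traceable chain of facts. It is an invented illustration of the general neuro-symbolic approach, not a description of SeKondBrain or any deployed system; every name and fact in it is hypothetical.

```python
# A deliberately toy sketch of the neuro-symbolic idea described above, not a
# description of SeKondBrain or any deployed system. A pattern-recognition
# model's bare confidence score is paired with a small, auditable knowledge
# graph; a symbolic check walks the graph and returns a human-readable chain
# of facts that justifies its verdict. All names and facts are invented.

# Explicit (subject, relation, object) facts that a reviewer can inspect.
KNOWLEDGE_GRAPH = {
    ("site_A", "is_a", "school"),
    ("school", "protected_under", "international_humanitarian_law"),
}


def neural_suggestion(site: str) -> float:
    """Stand-in for a neural model: confident, but unexplained on its own."""
    return 0.92


def symbolic_check(site: str):
    """Chain facts about the site and return (verdict, explanation trace)."""
    trace = []
    for subj, rel, obj in KNOWLEDGE_GRAPH:
        if subj == site and rel == "is_a":
            trace.append(f"{site} is_a {obj}")
            for subj2, rel2, obj2 in KNOWLEDGE_GRAPH:
                if subj2 == obj and rel2 == "protected_under":
                    trace.append(f"{obj} protected_under {obj2}")
                    return "flag_as_protected", trace
    return "no_rule_applied", trace


if __name__ == "__main__":
    site = "site_A"
    print(f"model confidence for {site}: {neural_suggestion(site):.2f}")
    verdict, trace = symbolic_check(site)
    print(f"symbolic verdict: {verdict}")
    print("explanation:", " -> ".join(trace))
```

The point of the toy is not the code itself but the output: the verdict arrives with an inspectable chain of reasoning that a human can contest before any decision is taken.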

Translated into responsible AI use, this means AI capable of explaining how it reached a conclusion. In military planning, intelligence and border control, that clarity reshapes responsibility. Decisions can be questioned and corrected before they become disasters. Of course, this will require far closer collaboration between science and lawmaking – an area where Europe already leads.

Until that happens, I will weather the familiar mockery of Brussels and the tired cliché that while America innovates and China copies, Europe regulates. When it comes to reining in AI before it tramples rights the world takes for granted, that is no insult.


The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.
