In February 2020, the European Commission published a White Paper on Artificial Intelligence (AI), formulating two goals: achieving an ecosystem of excellence in AI, and creating an ecosystem of trust. In other words: boosting innovation and securing protection.
Like all technologies, AI can be a force for good as well as harm – often unintentionally so. The immediate reflex to turn to AI to help tackle the COVID-19 outbreak, together with the vehement warnings about the adverse effects this may have, is a topical case in point. Coordinating AI policies at the EU level could secure a level playing field of protection and benefits for European citizens, while simultaneously leveraging Europe’s strengths on the global market.
The fact that the White Paper constitutes the fourth Commission document on AI in less than two years corroborates its importance – and this comes in addition to the work of the High-Level Expert Group on AI, which produced Europe’s Ethics Guidelines for Trustworthy AI and a set of Policy Recommendations.
This Expert Group helped pave the way for the White Paper, which draws on its seven requirements for Trustworthy AI, its plea for a risk-based approach and its call to safeguard fundamental rights. Besides offering a blueprint for AI regulation, the White Paper also heralds Europe’s ambition to become the first global regulator in this field.
The political willingness to act may be present, but the vision is not
The EU, however, is not alone in the new regulatory playground that is emerging around AI. The first-mover advantage that can be gained from setting the standards means the race to AI has also become a race to AI regulation. Governments and international organisations have sprung into action, each vying for pole position to govern AI with their own standards.
That a comprehensive governance framework for AI is lacking is evident. Current rules contain jurisdictional, substantive and procedural gaps, which fail to offer adequate protection against AI’s risks and hamper innovation due to legal uncertainty. Yet what this governance framework should ultimately look like is less clear.
The White Paper highlights a number of legal gaps and indicates routes to address them – but it only scratches the surface. The political willingness to act may be present, but the vision is not. Such a vision is nonetheless indispensable if the EU wishes to both claim and maintain regulatory leadership in AI.
How can this claim be strengthened? Europe’s approach to AI requires a comprehensive vision that is holistic, long-term and consistent.
The use of AI does not merely create, but mainly exacerbates existing discrimination and inequality
First, while it may be politically appealing to adopt a new all-encompassing ‘AI regulation’, there is no such thing as a single ‘AI’. Not only does this umbrella term cover various techniques with varying types of risks, but its meaning also changes over time.
Furthermore, most risks that must be tackled are not AI-specific. Similar harm can often stem from simple algorithms, mere statistics or even manual tools. By focusing only on AI, any regulation will thus be over- and under-inclusive. If today companies falsely claim to use AI to attract investments, tomorrow they will do the opposite in order to escape ill-scoped regulation.
This narrow focus also disregards the role of the human behind the machine. Bias in AI is a mere reflection of bias in our society. The use of AI does not merely create, but mainly exacerbates existing discrimination and inequality. It is essential to tackle these underlying issues too: fixing AI will not fix society.
A holistic perspective is therefore needed, together with careful reflection on whether technology-neutral regulation should be abandoned in favour of AI-specific rules. We need to capture the actual harm – not just the technology.
Guidance is needed on how to align AI with existing legislation
Second, our knowledge of AI, of its impact on society, and of the most efficient ways to tackle its adverse effects is progressively growing. But this learning curve risks being overlooked in favour of the here and now. The White Paper neglects longer-term risks and considers only current ones, for which it proposes measures that will take years to implement and may hence be too little too late.
The White Paper, for example, recommends ex-ante conformity assessments for high-risk AI applications. Yet this demands not only the operationalisation of still very abstract requirements – and a methodology to tailor these requirements to specific applications – but also significantly more oversight capacity and know-how. It is clear that for certain AI applications such measures are needed and that work should start right away to put them in place. But even in the unlikely event that Parliament and Council adopted these assessments swiftly, their implementation – and hence their protection – would only kick in years from now.
More urgently, guidance is needed on how to align AI with existing legislation. Moreover, mandatory transparency and documentation obligations should be introduced to at least enable the ex-post assessment of systems that pose risks. This can be realised in the intermediate term to increase protection until a fully-fledged ex-ante verification mechanism – and all the resources, know-how and political consensus it requires – materialises.
A realistic timeline should hence be drawn up, listing all the actions to take in the short, medium and longer term, and categorising these actions based on available knowledge, the likelihood and extent of risk, and implementation speed.
It’s time for Europe to solidify its ambition through a comprehensive vision for AI governance that is holistic, long-term and consistent
Finally, Europe needs a more consistent approach. Rather than securing a marriage between innovation and protection, the White Paper addresses these issues as separate topics, and in that order of priority. Regulatory sandboxes – which help discover legal gaps and uncertainties – go unmentioned. Nor does the White Paper consider the possibility of stimulating AI applications that may actually foster fundamental rights rather than hamper them.
Trust can be the basis of excellence, but as long as the dual issues of protection and innovation are juxtaposed rather than folded into each other, the uneasy balance between the two is almost certainly doomed. Europe’s aspiration for a competitive advantage built on quality- and value-oriented AI will remain just that – an aspiration.
It takes courage to take a moral stance amidst the global competition for AI, where ethics is too often an afterthought or mere sugar-coating. While Europe’s ambition to do the right thing is praiseworthy, it needs to do so in the right way. An approach whereby AI is deployed with good intentions but without a solid framework is today more precarious than ever – especially given AI’s prevalence in current discussions on the coronavirus and the tensions it inevitably creates with fundamental rights. It’s time for Europe to solidify its ambition through a comprehensive vision for AI governance that is holistic, long-term and consistent.