Since the founding of the ultimately ill-fated League of Nations in 1920, global governance has been dominated by a succession of supranational bodies, culminating in the foundation of the United Nations and the diverse Bretton Woods institutions at the end of the Second World War. These organisations have largely worked to ensure peace among the Western powers, not only in the security sphere but also in finance and trade. In a changing world, however, with the rise of new powers, it is imperative that the voices and needs of emerging nations are also adequately reflected. Given that 21st century global concerns focus far more than ever before on hybrid threats, human rights and the environment, is it time to draw a line under the past 99 years of global governance and look to re-evaluate and reform our established systems?
This article is part of Friends of Europe’s upcoming discussion paper on global governance reform, in which we ask the ‘unusual suspects’ to share their views on what reforms are necessary to make the rules-based order work for us all.
Novel technologies referred to under the umbrella term artificial intelligence (AI) are assuming an increasingly important role in human society. AI technologies are highly transformative and will affect virtually all aspects of our existence, promising previously unimaginable benefits but also posing daunting challenges. As society grows ever more anxious about the disruptive powers of this technology, pressure on policymakers to regulate AI increases.
The stakes are high. History has shown that technological innovation can be both a blessing and a curse, and much depends on the quality of the regulatory environment we create to shape it. Importantly, this includes the governance framework (the institutional architecture structuring the collaboration of all parties involved in policymaking), which determines the quality of regulation.
Well-designed regulation can correct market failures by incentivising socially optimal behaviour, thereby ensuring all members of society benefit from the innovation. Misguided or inappropriately implemented policy interventions, however, can have a deleterious social impact. First, they tend to make some segments of society worse off, aggravating inequalities and creating tensions between the winners and losers of innovation.
A second danger of bad policies is that they may irrevocably damage public trust in the new technologies, which may inhibit their adoption and as a result deprive society of the potentially significant economic benefits that would otherwise accrue from them. Time is of the essence when it comes to trust: during transitional periods of regulation, high levels of uncertainty among businesses about liability exposure and the ability to generate revenues, along with safety concerns among consumers, may devastate emerging markets.
Fragmented and uncoordinated domestic measures lead to inefficiencies and may even create international tensions
Insufficient expertise and resources on the part of regulators are among the key practical obstacles to realising the true potential of new technologies. These problems are not specific to the regulation of emerging technologies, but apply to most modern regulatory domains due to their immense complexity. Accordingly, the paradigm today is to move away from a state monopoly over regulation in favour of a decentred, collaborative co-creation of policies by diverse stakeholders encompassing government, public sector agencies, as well as industry, academic, and civil society bodies.
This approach not only efficiently harnesses the expertise of all relevant stakeholders, but also ensures that the policy design appropriately reflects the often-conflicting interests of these divergent groups. With respect to issue areas like AI that have transnational impact, the international relations literature stresses the necessity of international coordination. Fragmented and uncoordinated domestic measures lead to inefficiencies and may even create international tensions. Moreover, the authority and legitimacy of emerging national and transnational norms is determined by a complex interplay of domestic and transnational power dynamics.
When it comes to international AI governance, expertise and resource constraints pose a substantial challenge, given the rapid pace at which the technology continues to develop and the interdisciplinary skillset required to understand and solve the regulatory problems that arise as a consequence. The uncertainties of AI, the power wielded by the big tech companies, and the intensifying AI race between countries and regional blocs to secure a competitive advantage only add layers of complexity to the issue.
So how do we seize the opportunity to establish an AI regulatory framework that fosters trust and balances innovation with safety, all while achieving a socially optimal outcome? How do we ensure that the rules are accepted by all stakeholders?
More needs to be done at the intergovernmental level if we want to effectively coordinate domestic AI policies
AI regulation is not a challenge any one country can or should attempt to tackle alone. On the contrary, the international community should strive to create a robust, consistent and widely accepted AI regulatory framework. Ideally, each country should establish a framework that accounts for national interests and domestic stakeholders.
While national solutions will be partially constrained by cultural and other path dependencies, states should seek to employ diverse regulatory strategies that ensure multi-stakeholder involvement and include both state-driven and self-regulatory elements. These domestic frameworks should form the basis of, and be complemented with, an international AI regulatory framework, organised within the remit of a new or an existing intergovernmental organisation.
Given AI’s novelty and potential for significant disruption, the substantial uncertainties surrounding it, and the urgency to develop sustainable AI policies, soft legal instruments and informal organisational structures facilitating international collaboration and consensus-building are preferable – at least initially – to hard law and formal institutional arrangements. International policy initiatives – like the European Commission’s Guidelines for Trustworthy AI, the OECD Principles on AI and the soon-to-be-launched OECD AI Observatory, the G20 AI Principles, and the French-Canadian endeavour to establish an International Panel on Artificial Intelligence (IPAI) – are a promising start. Yet more needs to be done at the intergovernmental level if we want to effectively coordinate domestic AI policies and come up with globally acceptable solutions in a timely manner.
Is it still appropriate or tenable to invoke sovereignty to justify unilateral national decisions that have a transnational or global impact?
Devising a global AI regulatory framework is admittedly an ambitious goal. The international community should keep several key practical considerations in mind. First, communication barriers between various disciplines and stakeholders, especially between AI experts and policymakers, must be broken down. Collaboration on paper is not enough; we need to get better at actually listening to each other.
Second, we will need to give serious thought to whether the notion of sovereignty in its current form hinders or facilitates international governance. Is it still appropriate or tenable to invoke sovereignty to justify unilateral national decisions that have a transnational or global impact?
Third, rules are not of much value if they are ignored. Without respect for rules and policies and the will for sincere international coordination, even the best regulatory arrangements will be useless. For people to trust the system, the interests of all – and not just a few privileged actors or countries – must be accounted for.
Finally, while it is easy to blame the regulatory challenges of AI on the technologies themselves, maybe it would be helpful to recognise that many of our problems really stem from human nature.