1 of 59

April 27, 2024 | EAGx Nordics 2024

Dr. Matthijs M. Maas,

Matthijs.maas@law-ai.org | mmm71@cam.ac.uk | @MatthijsMMaas

Senior Research Fellow | Research Affiliate

Crafting International Institutions for Advanced AI

1

16/04/2024

2 of 59

Choosing our future with AI: busy years for AI governance

3 of 59

A question of international governance? Pick your institution...

4 of 59

Goals: an overview of challenges around crafting international institutions for AI

  • Challenges | Background & Foundations of AI governance
    • Why international AI governance? Rationales
    • When to regulate?
    • How to regulate? The toolbox of regulation
  • Choices | Key Questions in AI governance
    • Existing or new instruments?
    • (De)centralization? One world AI organization or many?
    • What institutional functions? 7 Models
    • A roadmap



6 of 59

Why international AI governance? More AI – More problems?


7 of 59

Why international AI governance? Scoping AI’s governance challenges

  • As a technology aimed at automating human performance across diverse domains, AI systems raise potential new issues in... any human domain
  • As already seen with conventional (narrow) AI systems used in different fields
  • Particularly critical for increasingly advanced AI systems at the frontier, whether:
    • Narrow but paradigm-shifting AI systems (e.g. AlphaFold) that achieve superhuman performance on narrow tasks in significant domains
    • General-purpose AI systems (e.g. GPT-4, Claude, etc.) that can be deployed with adequately competent performance across diverse tasks (coding, art, business planning, etc.), and which can display emergent capabilities


(Morris et al. 2023)

8 of 59

Why international AI governance? Technical and political dimensions of global AI issues

9 of 59

Why international AI governance? Three clusters of global AI issues

  • Conventional AI Governance to govern diverse, broad-spectrum but domain-specific societal impacts from different narrow AI systems.
  • Military AI Governance to govern diverse international security impacts resulting from the adoption of different types of AI systems in military roles
  • Advanced AI Governance to govern cross-spectrum societal-scale risks resulting from the deployment of increasingly capable paradigm-shifting and/or general-purpose AI systems, especially in terms of their potential misuse, catastrophic misalignment, or systemic effects.

11 of 59

Why international AI governance? Addressing coordination problems, securing global public goods

  • Some AI problems could be adequately addressed at a (sub)national level...
  • But many AI issues could benefit from international coordination, so we…
    • Seize benefits of standardisation
    • Avoid costs of trans-jurisdiction regulatory uncertainty
    • Avoid regulatory arbitrage
    • Avoid norm fragmentation...
  • Some key AI issues may require international cooperation of some form, so we…
    • Address the most extreme risks of advanced system deployment
    • Achieve sufficiently broad and lasting compliance to ensure safe outcomes
    • Secure global public goods; avoid a race to the bottom


12 of 59

Why international AI governance? Possible substantive goals

  • Bans/caps/controls on development or use
    • Compute thresholds and input/performance caps: bar training or running models above a certain size or compute input (e.g. the MAGIC proposal)
    • Bans on dangerous capabilities (e.g. autonomous replication; long-term-horizon planning, etc.)
    • Bans on harmful application (e.g. cyberwar; manipulation; at-scale disinformation, etc.)
    • Arms control and/or confidence-building measures around non-use of particular military AI systems
  • Non-proliferation to limit development or use
    • Non-proliferation of malicious capabilities or applications (e.g. hacking; surveillance, bioweapon-capable foundation models)
  • Shape paths & forms of development, applications, access, use
    • Licensing & safety assurances: mandate pre-deployment performance evaluations
    • Distribution of benefits (e.g. ‘Windfall Clause’); access provisions
  • Early warning and notification of risks
    • (cf. WHO ‘Public Health Emergency of International Concern’; Interpol ‘Red Notice’ [Gutierrez 2023])
  • […]


13 of 59

When international AI governance? Why decide today?

  • Risk of waiting, because of the ‘Collingridge dilemma’ (1981):
    • Early on in tech lifecycle, we face an information problem: the technology’s critical features, uses and impacts cannot be easily predicted (or agreed upon), ...
    • …until it is more widely used… by which time we face a power problem: control or governance is difficult…
          • Because the technology has become widely deployed, in path-dependent ways
          • Because established (unequal) stakes or interests are clear and entrenched;
          • Because governance has begun to focus on certain ‘regulation niches’ which lock-in issue framings, narratives, solution portfolios...


14 of 59

When international AI governance? Why decide today?

  • On the one hand, international lawyers can ‘jump the gun’ on technology
    • (e.g. 1950s debate on a ‘Center of the Earth Treaty’; 1960s-70s debates on a ‘global regime for weather control’; 1968-1982 deep seabed mining provisions in UNCLOS)
  • Yet transformative AI is already near (here?), and historical windows of opportunity matter:
    • In the late 1940s, the failure of the early Baruch Plan for global control of nuclear technology led to decades of delay in effective nuclear weapons governance
    • 1970: US diplomat George Kennan proposed an ‘International Environmental Agency’; today, sustained fragmentation of international environmental law...

  • Choosing the right initial structure of the international AI governance regime may be critical to its long-term success, as it is not always easy to amend later


15 of 59

How international AI governance? The toolbox of regulation (1/6)

  • Unilateral actions
    • National policies (invention secrecy doctrine; export controls, hardware backdoors; ‘left-of-launch sabotage’) at the ‘AI source’
      • Too late?
    • Signaling / threats: (Military) deterrence; diplomatic pressure & coercion
      • Very adversarial; risk of miscalculation; spoil the political conditions for other tools
  • Unilateral regulatory policies
    • Domestic/regional regulation with extraterritorial effects (e.g. ‘Brussels Effect’; regulate intermediaries such as cloud or hardware providers to shape behaviour)
      • assumes some adequate pre-existing legal framework to project outwards; can spark competition
    • Unilateral changes to existing treaties (to apply them to AI) (e.g. Explanatory Memoranda; Treaty Renunciation / Reservation)
      • assumes some adequate pre-existing int’l legal framework to amend; and likely splits that existing regime if other states do not agree


16 of 59

How international AI governance? The toolbox of regulation (2/6)

  • Bilateral agreements
    • Confidence-Building Measures (CBMs), e.g. to avoid misunderstanding or accidental escalation around military AI; ‘Pre-deployment agreements’ (Belfield 2022)
      • Useful for key misuse or structural risks (e.g. use in nuclear NC3); but too narrow to cover all AI use cases, given dissemination?
  • Non-binding / self-regulatory standards
    • E.g. codes of conduct / codes of practice (e.g. around Responsible Scaling Policies)
      • Non-enforceable; and a mixed track record in efficacy: many industry codes are too narrowly scoped, ineffective, or fail to get broad sign-on


17 of 59

How international AI governance? The toolbox of regulation (3/6)

  • Gradual legal norm development in public international law
    • Customary International Law is broad, can provide some guidance...
      • “the basic norms of international peace and security law, such as the prohibitions on the use of force and intervention in the domestic affairs of other states […];
      • the basic principles of international humanitarian law, such as the requirements of humanity, distinction and proportionality […];
      • the basic principles of international human rights law, including the principles of human dignity and the right to life, liberty, and security of the person […];
      • the basic principles of international environmental law, including the no-harm principle, the obligation to prevent pollution, the obligation to protect vulnerable ecosystems and species, the precautionary principle, and a range of procedural obligations relating to cooperation, consultation, notification, and exchange of information, environmental impact assessment, and participation […].
      • The general customary rules on state responsibility and liability for harm also apply.” [Rayfuse, 2017]
    • Promising because technology-neutral and widely accepted;
    • but also vague; whether and how these norms apply to AI is unclear and contested → needs interpretation


18 of 59

How international AI governance? The toolbox of regulation (4/6)

  • Gradual legal norm development in public international law (cont’d)
    • Treaty law establishes a range of directly and indirectly applicable State obligations
      • Directly applicable rules: e.g. data collection and processing protections under Convention 108+ (Modernised Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data) → but: relatively narrow scope of obligations around data processing
      • Directly applicable (but unclear) rules: e.g. classification of rentable ‘AI advisor’ systems as a ‘good’ or ‘service’ under the General Agreement on Tariffs and Trade (GATT) vs. the General Agreement on Trade in Services (GATS); autonomous ships under UNCLOS; self-driving cars under the 1949 & 1968 Road Traffic Conventions → but: relatively domain-specific (rather than covering general-purpose AI)
      • Indirectly applicable rules: wide range of cross-cutting treaty obligations under international human rights law, disaster law, international criminal & humanitarian law; environmental and economic law, etc. → May offer significant normative force & establish State obligations… but: need interpretation and clarification; siloed responses, slow; path dependency?


19 of 59

How international AI governance? The toolbox of regulation (5/6)

  • Gradual legal norm development in public international law (cont’d)
    • International court decisions, jurisprudence & case law
      • Could eventually clarify customary & treaty norms’ applicability to advanced AI; can even take into account evolving scientific consensus (van Aaken)
      • But: case law reactive & slow (e.g. ICJ 1996 Advisory Opinion on nuclear weapons; ...)
      • Also: attempts (by courts or States) to enable ‘adaptive interpretation’ by analogy may not pass the ‘laugh test’ (cf. ‘orbital weapons platforms’ banned as a form of ‘balloon bombardment’ under the Hague Conventions of 1899 and 1907? Probably not…)


20 of 59

How international AI governance? The toolbox of regulation (6/6)

  • Coordinated adaptation of existing institutions
    • Multilaterally agreed treaty Amendment / Additional Protocols?
    • Ongoing processes to extend various existing domain-specific treaty regimes for AI (e.g. UN CCW, ICAO, IMO, ITU, Road Traffic Conventions; ...),
      • Slow process; fragmented; insufficient for general-purpose AI?
  • Establish new regimes (institutions; treaties) for AI
    • Many proposals for new AI treaties or international institutions
      • Viable? Along what design?
  • New types of institutions
    • E.g. private regulatory markets? (Clark and Hadfield); smart contracts (e.g. Buterin), etc.
      • Outside the Overton Window, barring historical shocks?



22 of 59

  • Challenges | Background & Foundations of AI governance
    • Why international AI governance? Rationales
    • When to regulate?
    • How to regulate? The toolbox of regulation
  • Choices | Key Questions in AI governance
    • Existing or new instruments?
    • (De)centralization? One world AI organization or many?
    • What institutional functions? 7 Models
    • A roadmap


23 of 59

Choices in International AI Governance: understanding the evolving atlas

  • Military AI governance (2009-present)
  • Conventional AI governance (2015-present)
  • Advanced AI governance (2022-present)

24 of 59

Choices in International AI Governance: understanding the evolving atlas

  • Military AI governance (2009-present)
    • Initial global attention focused on governing ‘killer robots’:
      • 2009-2014 initial activism; 2014-2023 institutionalization at UN Convention on Certain Conventional Weapons.
      • …halting and limited progress in the face of AI weaponization, yielding mostly nonbinding guidelines (2019) and states’ ‘political declarations’ (2023)
  • Conventional AI governance (2015-present)
    • 2015-present: Initial rise of global soft law (160+ sets of AI ethics principles)
    • 2016-2021: first non-binding club initiatives launched: OECD Principles on AI & G20 adoption (2019); Global Partnership on AI GPAI (2020)
    • 2018-present: Growing engagement within the UN system: UN HLPDC ‘Roadmap for Digital Cooperation’; 2021: UNESCO adopts the Recommendation on the Ethics of Artificial Intelligence
    • 2018-2023: ongoing push for binding law (regional regimes): EU AI Act and Council of Europe draft framework convention
  • Advanced AI governance (2022-present)

25 of 59

Choices in International AI Governance: understanding the evolving atlas

  • Military AI governance (2009-present)
  • Conventional AI governance (2015-present)
  • Advanced AI governance (2022-present)
    • 2022-2024: the rescoping of existing governance efforts for ‘General-Purpose AI systems’ (EU AI Act)
    • 2023-present: wave of new governance initiatives by industry (Frontier Model Forum, AI Alliance), UN (SG’s High-Level Advisory Body on AI), club initiatives (UK AI Safety Summit & Bletchley Declaration; G7 Hiroshima Process; 18-state Guidelines for Secure AI)
    • 2024 (March): UNGA Resolution A/78/L.49, ‘Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development’ | 2024 (September): UN Summit of the Future; Global Digital Compact

26 of 59

Choices in International AI Governance: problems with a fragmented ‘regime complex’ of many AI governance initiatives

  • Institutional (in)adequacy?
    • Repurposed old institutions were not originally created to address AI (or even ICT) issues
    • Newer bodies lack clarity around their institutional missions, and/or adequacy for advanced AI
  • Many regimes and norms remain non-binding
  • Fragmentation in membership and issue areas
  • Lack of clarity about ideal institutional forms to be pursued

27 of 59

Key questions for international AI governance

Given this state of affairs… should we pursue…

  1. Existing or new institutions?

If new…

  2. One central world AI organization or a ‘regime complex’ of many overlapping institutions? (Centralization or decentralization)

If one...

  3. What institutional functions should it pursue? (7 models)

→ a roadmap

28 of 59

Choice 1: existing institutions or establish new instruments?


(Schmitt 2021)

29 of 59

Choice 1: existing institutions or establish new instruments?

“[g]lobal governance solutions […] must take one of two approaches:

international actors can attempt to create an encompassing regime that can address all dimensions of the problem, or

international actors can accept that policy solutions will be crafted, coordinated, and implemented within a larger regime complex.

[…] although the first option might be more efficient and effective, it is rarely the solution adopted” [Alter & Raustiala 2018]


30 of 59

Choice 1: existing institutions or establish new instruments?

Reasons why we might prefer settling for already-existing institutions:

  • Establishing new international organisations, or fully empowering them, has historically taken a long time
    • (e.g. the IAEA was established in 1957; it took a decade to assume a leading role in nonproliferation with the NPT; and its verification role was only significantly increased in 1997, through the Additional Protocol) (Sepasspour 2023).
  • Even if new institutions are created, old models often prevail, as there are many pressures towards institutional mimicry in international affairs:
    • “when states create new IGOs, they model their design based on the features of pre-existing organizations with overlapping memberships, governance tasks, and issue areas” (Reinsberg and Westerwinter, 2023).
    • Reason: a design strategy for boundedly rational designers faced with complexity, as it reduces their uncertainty while lowering the costs of institutional establishment


31 of 59

Choice 2: De/Centralization in AI governance? What are the consequences of different architectures? (Cihon et al. 2021)

  • Should AI governance be centralized? Do we need one encompassing ‘world AI organization’?
  • What are the legal, political and functional results of fragmentation or centralization? Considerations around what centralization does in terms of...


Pro centralization
  • Political power, legitimacy, authority (viz. states; other regimes)
  • Efficiency & ease of participation in a focal forum

Con centralization
  • Slowness; brittleness
  • Breadth vs. depth dilemma (lowest common denominator)

Implications depend on institutional design
  • Centralization averts ‘forum-shopping’; but that strategy can be used both to stall and to advance effective policy action
  • Top-down policy coordination vs. gradual bottom-up policy convergence

32 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 1. Scientific consensus-building institution:
    • Functions: to (1) establish a scientific consensus on an issue; (2) increase general policymaker and public awareness of an issue; and (3) facilitate common knowledge or shared perception of an issue amongst States (to motivate or enable some international action)
    • Past example: the Intergovernmental Panel on Climate Change (IPCC)
    • AI proposals: Commission on Frontier AI (Ho et al. 2023); Intergovernmental Panel on Information Technology; Global AI Observatory (GAIO)
    • Notes: aims to be non-political—as in the IPCC’s mantra to be “policy-relevant and yet policy-neutral, never policy-prescriptive”.

33 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 2. Political consensus-building and norm-setting institution:
    • Functions: to help States come to greater political agreement and convergence on how to respond to a (usually) clearly identified and (ideally) agreed issue or phenomenon.
      • Allow for debate and negotiation
      • Help begin negotiations on more stringent institutions
      • Exert normative pressure on states to take action
      • Formulate and share non-binding (soft-law) guidelines
    • Past example: conferences of parties to a treaty (COPs), such as in the United Nations Framework Convention on Climate Change; OECD, G7, G20
    • AI proposals: French proposal for ‘World AI Organisation’ (2024); International Agency for AI (IAAI) (Marcus and Reuel 2023)
    • Notes: either aligns policies such that no further international governance is needed, or establishes the foundations for such organizations; in the meantime, it sets informal standards of behaviour that can see uptake

34 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 3. Policy-coordinator institution:
    • Functions: help align and coordinate policies, standards, or norms, in order to ensure a coherent international approach to a common problem, by…
      • Directly regulating
      • Assisting in the implementation of policies
      • Harmonizing policies
      • Certifying compliance with standards
      • Monitoring and enforcing
    • Past example: WTO; ICAO; IMO
    • AI proposals: Generative AI global Governance Body (Chowdhury, 2023), International AI Organization (Trager et al. 2023)
    • Notes: faces potential challenges around incentivizing state participation and the adoption of its regulations; risk of state disinterest

35 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 4. Restrictions-enforcing institution:
    • Functions: prevent the production, proliferation or irresponsible deployment of a dangerous or illegal technology, product or activity, by…
      • Non-proliferation regimes & export control lists, registering and/or tracking of key resources
      • Confidence-building measures
    • Past example: IAEA
    • AI proposals: ‘Advanced AI Governance Organization’ (Ho et al. 2023), International Autonomous Incidents Agreement (IAIA) (Horowitz & Scharre 2021), Multinational AGI Consortium (Hausenloy et al. 2023)
    • Notes: difficulties in safeguarding AI systems in the same way; intensely intrusive levels of oversight required

36 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 5. Stabilization and emergency response institution:
    • Functions: respond to AI incidents in the international system, in order to provide early warning, reduce ongoing vulnerabilities, and minimize impacts
    • Past example: Financial Stability Board
    • AI proposals: Geotechnology Stability Board (Bremmer and Suleyman 2023)
    • Notes: inadequacy of responding to (rather than preventing) risks of advanced AI systems

37 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 6. International Joint Research institution:
    • Functions: conduct multilateral scientific collaboration between States on accelerating AI research or AI safety research
    • Past example: CERN, ITER, ISS;
    • AI proposals: AI Safety Project (Ho et al. 2023); Multilateral Artificial Intelligence Research Institute, Multinational AGI Consortium (Hausenloy et al. 2023)
    • Notes: challenges around security concerns and model leaking; differentially accelerating AI capabilities

38 of 59

Choice 3: if new international institution, what model? (Maas & Villalobos 2023)

  • 7. Benefit- & access-distributing institution:
    • Functions: provide global (unrestricted or conditional) access to a technology or its benefits
    • Past example: Gavi, the Vaccine Alliance; IAEA (nuclear fuel bank under dual mandate)
    • AI proposals: ‘Frontier AI Collaborative’ (Ho et al. 2023); Fair and Equitable Benefit Sharing Model (Adan, forthcoming)
    • Notes: challenges around organizing participation in conditional access-focused institutions

39 of 59

A roadmap for an AI international organisation?

Beyond the overall model of the AI governance regime…

And its substance or purpose…

what are general institutional design considerations?

One roadmap…

40 of 59

A roadmap for an AI international organisation? (1/5)

Scope:

  • Goals and mandate: global process convergence on a few well-considered designs, tailored to different AI issue clusters (e.g. conventional, military, and advanced AI)
    • (rather than based on thin analogies)
  • Overall institutional model: a mix of models, adopting optimal features from existing ones, rather than focusing on institutional ‘copying’ or mimicry. E.g.
    • A single institution with multiple functions and majority voting
    • Two or more institutions with different but complementary functions & strong coordination
  • Instrument choice: complex binding treaty under a main international organization, with assistance, monitoring, and enforcement mechanisms

41 of 59

A roadmap for an AI international organisation? (2/5)

Process of negotiation & entry into force

  • Negotiation forum: United Nations special conference with specific mandate
  • Negotiation process: one State or a small group of States champion the initiative and host rounds of negotiations; process takes 2-5 years
  • Signature / buy-in:
    • All major States with the capacity (or potential) to produce advanced AI systems or that host labs that can do so sign binding treaty and accede to enforcement mechanism;
    • Global South also signs treaty
  • Ratification and entry into force
    • Enters into force soon after treaty is signed, given large number of quick ratifications

42 of 59

A roadmap for an AI international organisation? (3/5)

Overall constellation (drawing on: Llerena 2023, Gutierrez 2023)

  • Regime Complex Design:
    • Secretariat
      • administrative units;
      • technical monitoring; + member state liaison
      • Depositary (entrusted with treaties, informs States of status & changes to treaty),
    • Funding mechanism (e.g. UNGA; Party-determined budget; trust fund)
    • Requirements for individual state Parties’ designated national bodies of experts (Management Authorities; Scientific Authorities, e.g. AI Safety Institutes)
    • Global impartial body of experts (e.g. World AI Organization),
    • Global advocacy body of experts to ensure advocacy for treaty execution
    • Oversight Commission of State representatives to oversee regime implementation & updating of risk categories;
    • Processes for revising & updating rules (e.g. International Board that votes to change classification schemas around AI models or compute inputs)
    • Representative body of all parties, vote on amendments and review overall direction of regime (e.g. Conference of Parties model)

43 of 59

A roadmap for an AI international organisation? (4/5)

Overall constellation cont’d (drawing on: Llerena 2023, Gutierrez 2023)

  • Regime Complex Design:
    • Transparency mechanism: regular reporting requirements on the state and efficacy of the treaty and institution
    • Oversight mechanism;
      • Mandated record-keeping systems tracking e.g. high-risk models, high-performance compute chips
      • bilateral (open, e.g. inspections) or unilateral (‘closed’, e.g. satellites) monitoring arrangements
    • Complaints mechanism;
      • internal arbitration board
      • provisions to permit complaints lodged with the UN Security Council;

44 of 59

A roadmap for an AI international organisation? (5/5)

Instrument design:

  • Enforcement & arbitration
    • Strong enforcement mechanism which States buy into (e.g. IAEA, ICAO)
    • Justiciability mechanism (e.g. providing standing and resolution of disputes before the International Court of Justice or some new international court or treaty arbitral body)
  • Coordination with other regimes
    • Strong ‘orchestration’ of AI regime complex: coordination with other international organizations to ensure norms and rules on AI are mutually reinforcing with those in other adjacent regimes (data, security, trade, …)
  • Amendments to future changes in AI
    • Technology-neutral mandate or scope of regime
    • General majority (rather than consensus) voting rules
    • Flexible or relaxed amendment voting thresholds
    • Internal body able to interpret mandate
    • Explicit provisions that allow for evolutive interpretation by courts (e.g. ECHR)

45 of 59

Open questions on institutional models

  • Need for hybrid models?
    • Given that many institutional functions might be required to govern advanced AI systems, is there a need for combined institutions with a ‘dual mandate’, like IAEA? (Law & Ho 2023)
  • Founding form (formality/informality; membership scope)
    • Should this organization be established formally, or are faster-moving informal ‘club’ approaches adequate?
    • How broad is its founding membership? Can narrow clubs be expanded horizontally later?
    • Should expansive authorities be granted from the start, or allowed to ‘evolve’ over time (e.g. IAEA)?
  • Other rules & design consideration:
    • Should voting rules within the institution work on consensus or simple majority?
    • What rules, if any, for states (or AI labs) to appeal or contest judgments of the IO?
    • What rules govern the adaptation or updating of the institution’s mission, mandate or rules, to track ongoing developments in AI?

46 of 59

In sum

  • Challenges | Background & Foundations of AI governance
    • Many rationales for some forms of international AI governance
    • International AI governance is needed soon
    • A wide-ranging toolkit of possible instruments, but many conditions

  • Choices | Key Questions in AI governance
    • The existing AI governance ecology has seen many recent developments, but still faces hurdles, as it is fragmented and inadequate
    • By default, many factors push towards adapting or working from existing institutions, even if new ones might be more appropriate
    • Seven institutional models to fulfill different functions
    • De/centralization considerations can cut both ways
    • We can identify a set of general institutional design components across models

    • So many other questions!


47 of 59

Thank you!


48 of 59

BACKUP


50 of 59

  • Concepts | Background & Foundations of AI governance
    • Why international AI governance? Rationales
    • When to regulate?
    • How to regulate? The toolbox of regulation
  • Challenges | Hurdles & Developments in AI governance
    • Is AI governance viable?
    • History of international AI governance instruments: how did we get here?
  • Choices | Key Questions in AI governance
    • Existing or new instruments?
    • (De)centralization? One world AI organization or many?
    • What institutional functions? 7 Models
    • A roadmap


51 of 59

Is AI governance viable? Challenges to the international governance for AI...

  • Complex ‘regulatory surface’:
    • Definitional problems around ‘AI’ itself (or other terms: ‘general-purpose AI’, etc.)
    • Operational features of AI research and development: AI research can be discreet, discrete, diffuse, and opaque (e.g. Scherer)
  • Political economy challenges:
    • Unequal global stakes; AI politicized and perceived as a strategic linchpin by many actors (incl. great powers)
    • Importance of private sector expertise vs. relative lack of public sector expertise
    • Open-source nature of development and model dissemination
  • Future uncertainty: pervasive lack of clarity around AI landscape


52 of 59

Is AI governance viable? If negotiated and implemented, regimes for some AI could be enforceable

  • Type-1: material & centralized
    • E.g. 5G network; HPC AI chip manufacturing...
    • Clear physical presence in jurisdiction, cannot move; easily identifiable intermediaries
  • Type-2: material & decentralized
    • E.g. IoT; drones, LAWS; AI surveillance cameras...
    • Challenge of setting standards (e.g. ITU or ISO); regulatory fragmentation
  • Type-3: ‘immaterial’ and centralized
    • E.g. massive AI services (Google Translate; GPT-3).
    • Regulate through proxies, extraterritoriality
  • Type-4: ‘immaterial’ and decentralized
    • E.g. open-source DeepFake techniques (Stable Diffusion)...
    • Hard to regulate because no clear intermediaries


Different challenges for different ‘regulatory objects’ (Beaumier et al. 2020)

53 of 59

Is AI governance viable? AI-stack technologies as types of regulatory object (drawing on Beaumier et al. 2020)

54 of 59

Is AI governance viable? Where best to situate governance levers?

  • Society/victim-level | impacts?
  • User-level | applications?
  • User-dev interface level | structured access (e.g. via APIs) by users to a developer-run pretrained model
  • Developer-level | inhouse R&D and evals processes?
  • Resource inputs-level | foundations of AI development?
    • software libraries; scientific publications
    • training data (incl. human labor used in data annotation)
    • compute hardware (GPUs, TPUs) and the semiconductor supply chain
    • human talent (researchers)

(Hua & Belfield 2022)

55 of 59

History of international AI governance instruments: the evolving atlas of AI governance

  • Military AI governance (2009-present)
  • Conventional AI governance (2015-present)
  • Advanced AI governance (2022-present)

56 of 59

The evolving atlas of AI governance: Military AI governance (2009-present)

  • Initial global attention focused on governing lethal autonomous weapons systems:
    • 2009-2014: growing activism to put ‘killer robots’ on the international agenda
    • 2014-2023: institutionalization at the UN Convention on Certain Conventional Weapons (CCW); Group of Governmental Experts (GGE) process set up in 2016, meeting annually since; in 2019 it issued 11 guiding principles, but with no enforcement mechanism
    • Halting and limited progress in the face of continued AI weaponization; increasing NGO willingness to move to other fora (e.g. the 2023 Belén Communiqué)
  • 2020-present: growth in minilateral partnerships to develop military AI; & political declarations to set norms:
    • 2020: ‘AI Partnership for Defense’ (US + 13 allies) launched
    • 2023: US Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy – (but omitting discussion of restricting use in nuclear forces, found in draft)


57 of 59

The evolving atlas of AI governance: Conventional (‘narrow’) AI governance (2015-present)

  • 2015-present: Initial rise of global soft law
    • 160+ sets of AI ethics principles issued. Some normative convergence, but under-operationalized
  • 2016-2021: first non-binding club initiatives launched
    • May 2019: OECD Principles on AI → adopted by G20.
    • June 2020: ‘Global Partnership on AI’ (GPAI) launched (initially 14 countries + EU);
  • 2018-present: Growing engagement within the UN system:
    • 2020: UN HLPDC ‘Roadmap for Digital Cooperation’
    • 2021: UNESCO’s General Conference adopts the Recommendation on the Ethics of Artificial Intelligence
  • 2018-2023: ongoing push for binding law (regional regimes)
    • 2021 – European Commission presents draft AI Act for negotiation
    • 2023: Council of Europe presents revised draft of framework convention on AI, Human Rights, Democracy and the Rule of Law...


58 of 59

The evolving atlas of AI governance: Advanced (‘General-Purpose’/‘Frontier’) AI governance (2022-present)

  • 2022-2024: the rescoping of existing governance efforts for advanced AI
    • EU AI Act negotiations shifted to incorporate obligations for ‘General-Purpose AI Systems’
  • 2023-present: wave of new governance initiatives
    • New industry partnerships: Frontier Model Forum, AI Alliance
    • Renewed UN efforts: High-Level Advisory Body on AI
    • Renewed club initiatives: UK AI Safety Summit & Bletchley Declaration; G7 Hiroshima Process; 18-state ‘Guidelines for Secure AI System Development’
  • 2024: consolidation & steps towards institutionalization
    • March 2024: UNGA Resolution A/78/L.49, ‘Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development’
    • September 2024: UN Summit of the Future; Global Digital Compact


59 of 59

The evolving atlas of AI governance: …AI-specific global governance initiatives face hurdles

  • Institutional (in)adequacy?
    • Repurposed old institutions were not originally created to address AI (or even ICT) issues
    • Newer bodies (e.g. GPAI) lack some clarity around institutional missions, and/or adequacy on advanced AI
  • Membership fragmented; lack of inclusion...
    • ...of many states (especially from Global South)
    • ...of major AI actors (e.g. GPAI vs. China)
      • (but: Bletchley Declaration and recent Track-II negotiations)
  • Separate tracks around military & civilian AI applications (yet common underlying capabilities)
  • Insufficient capacity on AI expertise in public sector, falling behind on latest frontier/LLM wave
  • Lack of clarity about ideal institutional forms to be pursued
