1 of 20

Metacrisis and

AI governance

Jonah Wilberg, September 2025

2 of 20

Metacrisis

  • Research field in ‘Liminal Web’
  • Multiple global crises
  • Structural roots
  • Daniel Schmachtenberger (c. 2018)

AI governance

  • Research field in EA/AI Safety
  • Catastrophic risk from AI
  • Governance challenges
  • Allan Dafoe (c. 2018)

3 of 20

What can AI governance research learn from Metacrisis theory?

4 of 20

Research areas

AI Governance

  • Policy-focused
  • Industry-focused
  • Field-building
  • Strategy research

Metacrisis theory

  • Exploring nature of AI threats and impacts
  • Setting priorities for governance research
  • Creating a conceptual framework

5 of 20

Schmachtenberger on the Metacrisis

  1. Multiple catastrophic threats
  2. Structural roots: Moloch
  3. AI as prominent threat
  4. Civilization as misaligned system
  5. Need for a ‘third attractor’

6 of 20

  1. Multiple catastrophic threats

Global risk landscape

  • AI, nuclear, ecological, pandemics

Contribution to AI Governance?

  • Yes - but already well-known
  • Emphasis on interconnections (polycrisis)

7 of 20

2. Structural roots: Moloch

System with value erosion from

  • Perverse incentives
  • Race dynamics
  • Evolutionary selection

8 of 20

Interlude: deconfusing ‘Meditations on Moloch’

  • MoM gives 14 examples
  • Is there a common dynamic?
  • MoM suggests there is: ‘multipolar traps’
  • In fact, no single dynamic runs through all 14
  • But some examples do combine all 3 dynamics
  • Moloch is best seen as a single system containing 3 distinct ‘Moloch dynamics’
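Of the three Moloch dynamics, the race dynamic is the easiest to make concrete. A minimal sketch (illustrative numbers and function names only, not from the talk):

```python
# Toy model (illustrative only) of a 'race dynamic' multipolar trap:
# each agent holds some caution (a safety margin, in arbitrary units);
# each round, competition forces everyone to match the least cautious
# agent and undercut it slightly. Individually rational moves erode
# the shared value for everyone.

def race_to_the_bottom(cautions, rounds=6, shave=1):
    cautions = list(cautions)
    history = [min(cautions)]
    for _ in range(rounds):
        frontier = min(cautions)  # the least cautious agent sets the pace
        # rivals match the frontier and undercut it by a notch
        cautions = [max(0, frontier - shave) for _ in cautions]
        history.append(min(cautions))
    return history

print(race_to_the_bottom([9, 8, 7]))  # -> [7, 6, 5, 4, 3, 2, 1]
```

No agent wants zero caution, but no agent can unilaterally stop the slide, which is exactly why the slide frames this as a problem for policy and coordination rather than for individual actors.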

9 of 20

2. Structural roots: Moloch

Contribution to AI Governance?

  • Yes - but already well-known
  • Not surprising, given the influence of Bostrom and SSC
  • All 3 dynamics are discussed in AI governance textbooks
  • A major goal of AI governance is to address dangerous race dynamics via policy and international coordination.

10 of 20

3. AI as prominent threat

Why is AI risk distinctive?

  • General purpose technology
    • accelerates Moloch dynamics
    • augments other risks
  • Exponential technology
    • recursive self-improvement
  • Easily disseminated

Contribution to AI Governance?

  • Yes - but already well-known
  • Supports increasing focus on near-term (pre-AGI) risks

11 of 20

Interlude: Complexity science

  • Feedback loops
  • Phase transitions
  • Attractors
  • Emergence
  • Autopoietic systems
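The ‘attractor’ concept, which carries the rest of the talk, can be made concrete with a textbook dynamical-systems sketch (not a model from the talk): the system dx/dt = x − x³ has two stable attractors, at +1 and −1, and a trajectory’s fate is fixed by which basin it starts in.

```python
# Minimal illustration of attractors (textbook example): the system
# dx/dt = x - x**3 has stable fixed points at +1 and -1 and an
# unstable one at 0. Simple Euler integration shows trajectories
# settling into whichever attractor's basin they start in.

def settle(x, dt=0.1, steps=200):
    """Integrate dx/dt = x - x**3 with Euler steps until it settles."""
    for _ in range(steps):
        x += dt * (x - x**3)  # one Euler step
    return round(x, 6)

print(settle(0.2))   # positive basin -> settles at 1.0
print(settle(-0.2))  # negative basin -> settles at -1.0
```

The ‘third attractor’ framing asks, in effect, for a new stable basin that the existing dynamics do not yet contain.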

12 of 20

4. Civilization as misaligned system

  • Civilization as autopoietic collective intelligence system (‘Moloch’)
  • Evidence for misalignment
    • Resource wars
    • Wellbeing
    • Inequality
    • Ecological damage
    • Technological race dynamics
    • Past civilizational collapse
    • Interest-bearing debt, exponential growth

13 of 20

4. Civilization as misaligned system

Contribution to AI Governance?

  • Yes - alternative frame for governance work
  • AI governance research
    • typically accepts the ‘modernity as progress’ narrative (Dafoe, Karnofsky)
    • background in liberal and libertarian political theory
    • sees positive externalities in markets
    • Dafoe’s career path as example
  • Metacrisis frame suggests need for
    • greater intervention in markets
    • radical, systemic change

14 of 20

4. Civilization as misaligned system

Contribution to AI alignment research

Alignment to what?

  • one person (control problem)
    • does not ensure good outcomes
  • current norms and institutions
    • does not ensure good outcomes
  • need alignment to
    • real values (‘full-stack alignment’)
    • the biosphere

15 of 20

5. The ‘third attractor’

16 of 20

5. The ‘third attractor’

  • Avoiding attractor 1: incentives aligned to real values
    • internalising externalities
    • solving coordination problems
  • Avoiding attractor 2: distributed intelligence
    • improved collective sense-making
    • cultural enlightenment (democracy)
    • not capturable

17 of 20

5. The ‘third attractor’

Triage, Transition, Long-term

Transition requires:

  • harnessing existing system dynamics, especially positive feedback loops
  • sensitivity to timelines
  • information technology
    • digital democracy (Taiwan)
    • liminal web
    • education
    • building consensus

18 of 20

5. The ‘third attractor’

Contribution to AI Governance

Yes: AI governance tends to focus on avoiding attractor 1.

  • policy, treaties, international organisations
  • industry standards, model evals
  • advocacy aimed at politicians/industry leaders

19 of 20

5. The ‘third attractor’

Contribution to AI Governance

Reaching the third attractor requires equal attention to attractor 2.

But not completely new for AI governance research:

AI advisors

  • for decision-making (Shulman)
  • cooperative AI (Dafoe)
  • can apply across the population

Awareness raising

  • Pause AI
  • Book by Yudkowsky

20 of 20

What can AI governance research learn from metacrisis theory?

  • Multiple catastrophic threats
  • Structural roots: Moloch
  • AI as prominent threat
  • Civilization as misaligned intelligence
  • Need for a ‘third attractor’