1 of 27

Artificial Intelligence

  1. AGI
  2. Potential dangers
    1. Weaponization
    2. Rogue AIs
  3. Tragedies of the commons and coordination
  4. When do countries cooperate?

POL 102 Monday March 3

2 of 27

Artificial Intelligence

  • AGI
  • Potential dangers
    • Weaponization
    • Rogue AIs
  • Tragedies of the commons and coordination
  • When do countries cooperate?

3 of 27

A. AGI

Imagine two kinds of computer programs.

The first is the kind that beat the world chess champion in 1997

4 of 27

A. AGI

Imagine two kinds of computer programs.

The first is the kind that beat the world chess champion in 1997

  • Everyone at the time focused on the hardware - “Deep Blue” was programmed by humans to calculate 200 million possible moves per second
  • Rules for how to evaluate those moves were programmed by humans…
  • … into thousands of “clusters” of concepts…
  • …which the computer then used as guidelines for how to choose which move to make
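To make the contrast concrete, here is a minimal sketch (in Python) of how that first kind of program works: humans write the evaluation rules, and the machine's only job is to search ahead and apply them. Everything below is invented for illustration - the toy piece values, the position format, and the legal_moves / apply_move callables - and is not Deep Blue's actual code.

```python
# Illustrative sketch only: toy piece values plus a generic lookahead search,
# not Deep Blue's actual program. The position format and the legal_moves /
# apply_move callables are stand-ins a real chess engine would supply.

# Human-chosen piece values: the "rules for how to evaluate" written by people.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(position):
    """Score a position from White's point of view using the human-written rules.
    Here a position is just a list of (side, piece) pairs."""
    score = 0
    for side, piece in position:
        value = PIECE_VALUES.get(piece, 0)
        score += value if side == "white" else -value
    return score

def search(position, legal_moves, apply_move, depth, maximizing=True):
    """Look `depth` moves ahead, assuming both players pick their best move,
    and return (best score, best move)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = search(apply_move(position, move), legal_moves,
                          apply_move, depth - 1, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```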

5 of 27

Imagine two kinds of computer programs.

The first is the kind that beat the world chess champion in 1997

Chess is a great game for that kind of computer

  • Played on an 8x8 square board
  • Different pieces have different values
  • Games usually take about 40 turns
  • Each turn usually has about 20 legal moves to choose from
  • A fast computer can calculate about 8 or so moves ahead
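A quick back-of-the-envelope calculation with the rough numbers above shows why brute-force search was workable for chess at Deep Blue's speed (the figures are the slide's approximations, not exact measurements):

```python
# Back-of-the-envelope arithmetic with the rough figures from the slide.
branching = 20                      # legal moves to consider each turn
depth = 8                           # how many moves ahead the computer looks
positions_per_second = 200_000_000  # Deep Blue's reported speed

positions = branching ** depth      # 20^8 = 25.6 billion positions
seconds = positions / positions_per_second
print(f"{positions:.2e} positions, roughly {seconds / 60:.0f} minutes of raw search")
# Pruning obviously bad lines cuts this down much further, which is why
# an 8-move lookahead fit comfortably inside tournament time limits.
```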

A. AGI

6 of 27

This kind of computer will never beat a human at “go”

A. AGI

7 of 27

Imagine two kinds of computer programs.

The first is the kind that beat the world chess champion in 1997

Go is a terrible game for that kind of computer

  • Played on a 19x19 square board
  • All pieces have the same value - just depends on positions
  • Games usually take about 200-600 turns
  • Each turn usually has about 300 legal moves to choose from
  • A fast computer can calculate about 3 or so moves ahead
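Running the same rough arithmetic with Go's numbers shows why the Deep Blue approach collapses (again using the slide's approximate figures):

```python
# The same arithmetic with Go's rough figures from the slide.
branching = 300                     # legal moves to consider each turn
positions_per_second = 200_000_000  # even granting Deep Blue's speed

for depth in (3, 8):
    positions = branching ** depth
    seconds = positions / positions_per_second
    print(f"{depth} moves ahead: {positions:.1e} positions, {seconds:.3g} seconds")
# Three moves ahead is trivial (a fraction of a second), but matching chess's
# eight-move lookahead would take about 3e11 seconds, on the order of 10,000 years.
```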

A. AGI

8 of 27

So imagine our surprise in 2016…

A. AGI

9 of 27

DeepMind's AlphaGo demolished Lee Sedol 4 games to 1.

But that’s not the interesting part

The interesting part was move 37 in game 2

March 10, 2016 - DeepMind was innovative

A. AGI

10 of 27

Imagine two kinds of computer programs.

The second is the kind that beat the world go champion in 2016

  • The hardware wasn’t the limiting factor anymore - “DeepMind” was programmed by humans to calculate lots of possible moves per second, but that’s not the interesting part
  • Rules for how to evaluate those moves were programmed by the computer itself…
  • … into hundreds of thousands of “clusters” of concepts…
  • …which the computer then used as guidelines for how to choose which move to make

A. AGI

11 of 27

How to build a computer that programs itself:

  • Hardwire in hundreds of thousands of places where the computer can create “clusters” of concepts
  • Show the computer tens of thousands of games that go masters played against each other (later versions skipped this step)
  • Tell the computer to play a lot of games against itself
  • Tell the computer to define each concept cluster itself
  • Tell the computer to figure out when to use which concept clusters

The computer will make decisions that even its human creators will not be able to understand
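A heavily simplified, runnable illustration of that recipe is sketched below. It uses a tiny game (Nim: 21 stones, take 1 to 3 per turn, last stone wins) instead of Go, and a plain self-play value table instead of AlphaGo's neural networks and tree search, but it shares the key point: no human writes the evaluation rules; the program builds them by playing against itself.

```python
import random
from collections import defaultdict

# Runnable toy version of the recipe above. This is an illustrative sketch,
# not DeepMind's actual method.

values = defaultdict(float)    # the learned "evaluation" of each position
EPSILON, LEARNING_RATE = 0.2, 0.1

def legal_moves(stones):
    return [take for take in (1, 2, 3) if take <= stones]

def choose(stones):
    """Usually pick the move that leaves the opponent in the worst-looking spot."""
    if random.random() < EPSILON:                 # sometimes explore at random
        return random.choice(legal_moves(stones))
    return min(legal_moves(stones), key=lambda take: values[stones - take])

def self_play_game():
    """Play one game against itself, then nudge the values toward the outcome."""
    positions_faced = []          # stones left when each player was about to move
    stones = 21
    while stones > 0:
        positions_faced.append(stones)
        stones -= choose(stones)
    # The player who took the last stone won. Walking backward, positions the
    # winner faced get pushed toward +1, positions the loser faced toward -1.
    for i, position in enumerate(reversed(positions_faced)):
        target = 1.0 if i % 2 == 0 else -1.0
        values[position] += LEARNING_RATE * (target - values[position])

for _ in range(50_000):
    self_play_game()

# The learned values mark multiples of 4 as losing positions to face (the classic
# Nim strategy is to always leave your opponent a multiple of 4), even though no
# human programmed that rule in.
print({stones: round(values[stones], 2) for stones in range(1, 9)})
```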

A. AGI

12 of 27

Chess and Go are narrowly-defined worlds with specific rules. We can create a computer that will program itself and outperform a human within that narrowly-defined world.

Can we do that for the “real” world?

Maybe! Probably! Maybe inevitably?

  • Computer programs itself with its own clusters
  • Not obvious where “Training data” comes from

Should we do that?

Maybe! Maybe we will regardless…

A. AGI

13 of 27

Artificial Intelligence

  • AGI
  • Potential dangers
    • Weaponization
    • Rogue AIs
  • Tragedies of the commons and coordination
  • When do countries cooperate?

14 of 27

AIs connected to agents that can act online or in the physical world (like drones or robots), and that can process information and make decisions quickly and independently, could be used to…

  • Play the stock market
  • Write news stories
  • Make scientific discoveries and share them
  • Make scientific discoveries and not share them
  • Blackmail people
  • Enforce laws
  • Hurt people while enforcing laws
  • Break laws
  • Fight wars

B.1. Weaponization

15 of 27

Artificial Intelligence

  • AGI
  • Potential dangers
    • Weaponization
    • Rogue AIs
  • Tragedies of the commons and coordination
  • When do countries cooperate?

16 of 27

A few vocab words:

  • Recursive self-improvement

An AI that invents and creates a better AI…

…which in turn invents and creates a yet better AI…

…which in turn (etc)

  • Singularity

A future in which AIs run everything in ways that humans cannot predict or understand

  • Alignment

The extent to which AIs in a “singularity” world make decisions that are the kinds of decisions humans would want them to make

B.2. Rogue AIs

17 of 27

Artificial Intelligence

  • AGI
  • Potential dangers
    • Weaponization
    • Rogue AIs
  • Tragedies of the commons and coordination
  • When do countries cooperate?

18 of 27

What is the tragedy of the commons?

C. Tragedies of the commons and coordination

19 of 27

Imagine each country has a choice:

  • Permit (or fund) AI research
  • Heavily regulate (and fund) AI research that focuses on alignment
  • Do not permit (or fund) AI research

Or, to put it another way, how heavily should countries regulate AI research?

Suppose one country regulates AI research heavily. What will other countries do?
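One way to see why a single country's heavy regulation tends to unravel: treat "regulate vs. race ahead" as a two-country game. The payoff numbers below are invented purely for illustration, but they capture the commons logic: racing is each country's best response no matter what the other does, even though mutual regulation beats a mutual race.

```python
# Two-country "regulate vs. race" game. The payoff numbers are invented purely
# to illustrate the structure (higher is better); they are not estimates.
PAYOFFS = {  # (my payoff, other country's payoff)
    ("regulate", "regulate"): (3, 3),   # shared safety, shared benefits
    ("regulate", "race"):     (0, 4),   # I hold back, the other gets the edge
    ("race",     "regulate"): (4, 0),
    ("race",     "race"):     (1, 1),   # costly, risky arms race for both
}

def best_response(other_choice):
    """What should my country do, given what the other country is doing?"""
    return max(("regulate", "race"),
               key=lambda my_choice: PAYOFFS[(my_choice, other_choice)][0])

for other in ("regulate", "race"):
    print(f"If the other country chooses '{other}', my best response is "
          f"'{best_response(other)}'.")
# Racing is the best response either way, so both countries race and land on
# (1, 1) even though (3, 3) was available: the tragedy-of-the-commons structure.
```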

C. Tragedies of the commons and coordination

20 of 27

How do countries affect other countries?

  • Coordinate to regulate AI? How?
  • Bribe other countries to join?
  • Punish other countries that stay out?

C. Tragedies of the commons and coordination

21 of 27

Lots of discussion of a globally-coordinated AI research pause

How would that be enforced?

C. Tragedies of the commons and coordination

22 of 27

C. Tragedies of the commons and coordination

23 of 27

Over the past few years, the United States has tried to stop China from developing AI

  • Immigration policy and subsidies for research and production within the United States (“CHIPS Act”)
  • Bans on exports of advanced computer chips to China
  • “Secondary sanctions” - sanctions on countries or firms that don’t sanction China

C. Tragedies of the commons and coordination

24 of 27

Trump AI policy unclear

  • At first, proposed a large state-run AGI project (which many tech leaders, including Musk, opposed)
  • Signaled willingness to end limits on tech exports to China

So, possibly an AI arms race?

But maybe not

C. Tragedies of the commons and coordination

25 of 27

European Union

  • Limits on AI research and regulation of AI implementation in Europe
  • Invitation to negotiate with the US over US firms’ access to European markets

C. Tragedies of the commons and coordination

26 of 27

Artificial Intelligence

  • AGI
  • Potential dangers
    • Weaponization
    • Rogue AIs
  • Tragedies of the commons and coordination
  • When do countries cooperate?

27 of 27

Our topic over the next few weeks - Factors that affect cooperation:

  • Numbers and Relative Sizes of Actors
    Countries are more likely to cooperate when there are fewer countries involved
  • Iteration, Linkage, and Strategies of Reciprocal Punishment
    Countries are more likely to cooperate if there are lots of issues they all care about (see the sketch after this list)
  • Institutions
    Countries are more likely to cooperate if they have agreed to more specific rules about what specific things they will do
  • Information
    Countries are more likely to cooperate if they know more about what the others are doing
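A small sketch of the iteration and reciprocal-punishment point: in a one-shot prisoner's dilemma, defecting wins, but when the game repeats and one side punishes defection by defecting back ("tit for tat"), sustained cooperation pays better. The payoffs below are the standard textbook numbers, not estimates about real countries.

```python
# Iterated prisoner's dilemma sketch: textbook payoffs, not real-world estimates.
PAYOFFS = {  # (row player's payoff, column player's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(strategy_a, strategy_b, rounds=20):
    """Play repeated rounds; each strategy sees the other's past moves."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opponent_history: "defect"
# "Tit for tat": start by cooperating, then do whatever the opponent did last time.
tit_for_tat = lambda opponent_history: opponent_history[-1] if opponent_history else "cooperate"

print("defector vs. reciprocal punisher:", play(always_defect, tit_for_tat))
print("two reciprocal punishers:        ", play(tit_for_tat, tit_for_tat))
# Over 20 rounds the defector gains once (5) and then gets punished forever
# (1 per round), totaling 24; mutual cooperation earns 3 per round, totaling 60.
```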

D. When do countries cooperate?