AI Alternative Futures: Mapping the Scenario Space of Advanced Artificial Intelligence - Potential Paths, Risks, and Governance Strategies
Description: Below are drivers/influences and key aspects of AI development and risk, which I’m calling dimensions. Within each dimension are several possible conditions (uncertainties/paths) that will form the boundaries of a risk model. I've struggled to define this well, but in the end this process simply creates a risk/likelihood continuum for the model; it is less focused on any individual condition as a prediction, or even as a question at all. Condition definitions can be found here: https://tinyurl.com/aidefin; all documents are located here: https://drive.google.com/drive/folders/1N8w7XbSs6NvAnTubiO7Idg4TwiZ8sTiv?usp=sharing; subfolder to save completed version here:

Instructions: Each cell in the likelihood (column E) and impact (column F) columns has a dropdown menu to rank/score each condition (scale on the right) from most plausible and dangerous to least. Select the dropdown option that best matches your assessment (remember, this is just to create a risk-likelihood continuum, not a prediction). Likelihood scales from highly likely to highly unlikely, and impact from greatly increase to greatly decrease security. Steps include the following:

• First, for each condition below (column D), please rank the condition on its overall plausibility of occurring, given your understanding of the issue and your beliefs (from most to least plausible).
• Second, if the condition (column D) were to occur, please rank the degree of impact this outcome could have on stability, technical safety, or international security.

Survey Purpose: The ranking is used to create a continuum of risk and likelihood across all conditions in the model, which adds a layer of complexity and context to a GMA (general morphological analysis) risk model (tab 2 in the spreadsheet), which usually explores only independent relationships. If you're somewhat unfamiliar with traditional risk matrices, here's an example for reference: https://www.researchgate.net/figure/A-standard-risk-matrix_fig7_323570642. This is the first go at using this method, or even this class of methods, so I appreciate your support.
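The likelihood-impact pairing above works like a standard risk matrix. As a minimal sketch of how the dropdown rankings could be combined into a single continuum (the numeric 1-5 mapping and the multiplicative score are illustrative assumptions on my part, not part of the survey):

```python
# Hypothetical ordinal scores for the survey's dropdown labels (my assumption).
LIKELIHOOD = {
    "Highly unlikely": 1,
    "Somewhat unlikely": 2,
    "Even chance to occur": 3,
    "Somewhat likely": 4,
    "Highly likely": 5,
}
# Impact is on security, so "greatly decrease" security is treated as most severe.
IMPACT = {
    "Greatly increase": 1,
    "Somewhat increase": 2,
    "No effect": 3,
    "Somewhat decrease": 4,
    "Greatly decrease": 5,
}

def risk_score(likelihood: str, impact: str) -> int:
    """Classic risk-matrix cell value: likelihood x impact, in the range 1-25."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Conditions can then be placed on one risk continuum by sorting on the score.
conditions = [
    ("Fast takeoff (discontinuity)", "Even chance to occur", "Greatly decrease"),
    ("Incremental development", "Highly likely", "Somewhat increase"),
]
ranked = sorted(conditions, key=lambda c: risk_score(c[1], c[2]), reverse=True)
```

The multiplicative score is only one common convention; an additive or table-based scoring would order the continuum slightly differently.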

After completion: Please make a copy of your completed spreadsheet for the project, give it a unique title, and save it in the subfolder titled “Completed_Worksheets.” Be sure to select your level of experience in the separate drop-down to the right.

Steps to save the completed document: 1) Go to File; 2) select "Make a copy"; 3) at the prompt, give your file a unique title such as "survey_2" or "awesome_AI_thunderbolt" (whatever you like; please don’t overwrite others'); 4) the default folder at the prompt is "morphological model"; change this to the subfolder "Completed_Worksheets"; 5) save the document and you're done! Thank you very much! If you saved the spreadsheet locally, you can upload it to the “Completed_Worksheets” folder here: https://tinyurl.com/completeds. For other explanations or to restart, the overall project folder “morphological model” is here: https://rb.gy/zcmwto. If you have any questions or concerns, please send me a note at kyle.a.kilian@outlook.com or on Reddit (Barcoverde88).
System Dimension | Conditions/Indicators | Plausibility for X to occur | If X were to occur, rank the impact on society, technical safety, and security

Rating Scale:
• Likelihood: Highly likely; Somewhat likely; Even chance to occur; Somewhat unlikely; Highly unlikely
• Impact: Greatly increase; Somewhat increase; No effect; Somewhat decrease; Greatly decrease

Technological Transition
• Capability-Generality
  - Approximately as capable & general as current systems
  - Moderate capability and generality (limited independent decisions in several domains)
  - Human to superhuman level in narrow domains (e.g., CAIS model)
  - Human to superhuman capability and generality (standard AGI/ASI)
• Distribution of HLMI
  - Available to all users and developed openly (low resource requirements)
  - Distributed across leading institutions (multipolar development)
  - Concentrated in one group or system
• Transition/Takeoff
  - Incremental development (multiple decades or longer for HLMI)
  - Moderate uncontrolled takeoff (semi-discontinuous; months to years)
  - Moderate to fast controlled takeoff (competitive race dynamic; months to years)
  - Fast takeoff (discontinuity; hours/days/months; recursive self-improvement)
• Accelerant to HLMI
  - Compute overhang or bottleneck
  - New insight, paradigm, or architecture (e.g., quantum ML, neuroscience, neuromorphic computing)
  - Simulated embodiment or novel data type/structure
• Paradigm to HLMI
  - Current paradigm is capable of scaling to high-level systems
  - New paradigm needed to attain high-level systems
  - Current paradigm plus something else required
• Timeline
  - Less than 20 years
  - 20 to 40 years
  - Over 40 years

Level of expertise or knowledge (answer each with the dropdown; answer key: Very familiar, Familiar, Not so familiar, Unfamiliar):
• How familiar are you with long-term risk?
• How familiar are you with AI safety?
• How familiar are you with AI governance?
• How familiar are you with existential risk?

Uncertainty Rating Scale - MITRE (http://tiny.cc/Risk_Scale): 91-100%; 61-90%; 41-60%; 11-40%; 0-10%
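The MITRE-style uncertainty bins above can be sketched as a lookup from an elicited probability to its bin label. The pairing of the five percentage bands with the sheet's five likelihood labels is my assumption (the sheet implies but does not state it):

```python
# Percentage bands from the MITRE uncertainty scale above, paired (my
# assumption) with the sheet's five likelihood labels, highest band first.
BINS = [
    (0.91, "Highly likely"),        # 91-100%
    (0.61, "Somewhat likely"),      # 61-90%
    (0.41, "Even chance to occur"), # 41-60%
    (0.11, "Somewhat unlikely"),    # 11-40%
    (0.00, "Highly unlikely"),      # 0-10%
]

def likelihood_label(p: float) -> str:
    """Map a probability in [0, 1] to its uncertainty bin label."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    for lower, label in BINS:
        if p >= lower:
            return label
    return "Highly unlikely"
```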
Social-Technological Ecology
• Race dynamics for HLMI
  - Political and economic cooperation increases (race-to-the-top scenario)
  - Global markets take an inward turn toward isolationism (isolated development, no coordination)
  - AI monopolies centralize control (increased acquisitions and power)
  - Race intensifies (government-led AI arms race)
• Most dangerous risk
  - Misuse (e.g., cyber-attacks, disinformation)
  - Accidents/failures (agential influence & misaligned goals)
  - Structural (e.g., value erosion, offense/defense balance)
• Technical safety risks
  - Goal alignment failure
  - Influence-seeking and deception
  - Inner alignment failures (subtle and difficult to detect)

Governance-Control
• Safety-capability relationship for HLMI
  - Current safety techniques scale to high-level systems
  - New techniques are required to control high-level systems (from first principles)
  - Custom techniques are required for each unique instantiation
• Developer entity
  - International coalitions develop advanced AI
  - Governments develop advanced AI
  - Corporations or organizations develop advanced AI
  - Individual developer creates the first HLMI
• Developer location
  - USA-Western Europe
  - Asia-Pacific
  - Africa or Latin America/Caribbean
• International governance
  - Weak governance (decline of collective action due to competition or conflict)
  - Capable governance (modest increase in international norms and safety agreements)
  - Good governance (widespread norms, an international safety regime, and strengthened institutions)
• Corporate governance
  - Decrease in safety standards (increased competition and decreased collaboration)
  - Modest improvements in safety governance (coordination of common safety standards develops)
  - Strengthened safety governance (agreements set on standards for safe use)