1
Who do you defer to most on AI timelines?
Who do you defer to second-most on AI timelines?
Who do you defer to third-most on AI timelines?
Comments?
2
A vague cluster of short-timelines, LLM/scaling-pilled people, like Kyle McDonell
The general Constellation cluster, led by Paul and Ajeya
Me
In total, I defer about 75%. My inside-view median is 5 years, and the distribution I defer towards has median 10 years. After deferring, I have an 8 year median.
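As a minimal sketch of the mixture arithmetic in the comment above: the 5-year and 10-year medians and the 75% deference weight come from the response, while the lognormal shape and sigma = 0.5 are purely illustrative assumptions, since the response does not specify the distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative assumption: lognormal timeline distributions with sigma = 0.5.
# Only the medians (5 and 10 years) and the 75% weight come from the response.
inside_view = rng.lognormal(mean=np.log(5), sigma=0.5, size=n)   # median 5 years
deferred_to = rng.lognormal(mean=np.log(10), sigma=0.5, size=n)  # median 10 years

# "Defer about 75%": sample from the deferred-to distribution with
# probability 0.75, otherwise keep the inside-view draw.
use_deferred = rng.random(n) < 0.75
mixture = np.where(use_deferred, deferred_to, inside_view)

print(round(float(np.median(mixture)), 1))  # ~8.6 years under these assumptions
```

Under these assumptions the mixture's median lands within about half a year of the respondent's stated 8-year figure; different tail shapes would shift it somewhat.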
3
Ajeya
Daniel K
4
Ajeya
Janice
Quintin Pope
5
Ajeya / Open Phil
Daniel Kokotajlo
Me
6
Ajeya Cotra
Ben Garfinkel
Daniel Kokotajlo
7
Ajeya Cotra
Daniel Kokotajlo
Me
8
Ajeya Cotra
Daniel Kokotajlo
9
Ajeya Cotra
Daniel Kokotajlo
10
Ajeya Cotra
Daniel Kokotajlo
11
Ajeya Cotra
David Field
12
Ajeya Cotra
Eliezer Yudkowsky
Me
I'm not sure I endorse the format of this survey. E.g. my response above cashes out as something like "30% Ajeya; 15% Eliezer; 10% me; rest is on a bundle of other people plus deep uncertainty". I imagine this is very different from other people who might give the same surface answer.
13
Ajeya Cotra
Eliezer Yudkowsky
Paul Christiano
14
Ajeya Cotra
Eliezer Yudkowsky
15
Ajeya Cotra
Holden Karnofsky
Me
16
Ajeya Cotra
Holden Karnofsky
Paul Christiano
17
Ajeya Cotra
Holden Karnofsky
Rohin Shah
18
Ajeya Cotra
Holden Karnofsky
Linch Zhang
19
Ajeya Cotra
Holden?
Me / some sceptic voice
20
Ajeya Cotra
Joseph Carlsmith
Vague sense of distribution of AI safety community views
21
Ajeya Cotra
Paul Christiano
Daniel K
Could be nice to see whether people defer wholeheartedly vs adding a lot of model uncertainty
22
Ajeya Cotra
Paul Christiano
Daniel Kokotajlo
I don't feel like I defer to any of these three people the "most"; I chose somewhat arbitrarily. Often I consider arguments from the three of them and then see which ones make the most sense to me.
23
Ajeya Cotra
Rohin Shah
Myself
24
Ajeya Cotra
Samotsvety Forecasting Aggregate
Me
Cool idea!

It's a bit hard for me to rank the top 3, but I think these are definitely my top 3. When thinking normally I mainly just consider my independent impression, but if I were forced at gunpoint to give my best guess for AI timelines, I think this is roughly the order I would defer in. It's also pretty hard to separate independent impression from deference on complex topics, in my experience.
25
Ajeya Cotra
Will MacAskill (HoH skepticism)
Eliezer Yudkowsky
26
Ajeya Cotra's "Forecasting Transformative AI with Biological Anchors" (and Ajeya's update: https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines).
Samotsvety's AI risk forecasts (https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts).
Thanks, this seems valuable!
27
Ben Garfinkel
Jonas Schuett
Me
28
Bioanchors
29
Community average on Metaculus and in LessWrong posts
Daniel Kokotajlo
Ajeya
30
Connor Leahy
Daniel Kokotajlo
Inside view
31
Daniel Kokotajlo
Ajeya Cotra
Me
32
Daniel Kokotajlo
Ajeya Cotra
Me
33
EA posts
Scott Alexander
porby on LessWrong
Good question; my timelines tend to swing depending on the last post I read.
34
Eliezer Yudkowsky
Ajeya Cotra
Me
35
Eliezer Yudkowsky
Paul Christiano
Me
36
Gwern
Paul
Daniel K
37
Gwern
Yudkowsky
Me
4 years (+5/-2, 95% confidence) until human-level ability in enough domains to start achieving recursive self-improvement at a rate equal to human researchers. E.g., someone can say "I wish I had a model to predict X from this publicly available data" and there's a >1% chance an existing system can produce a working narrow AI (retrained or derivative) to predict X at roughly a human level from that kind of prompt. At that point, someone could stick an NLP generator in a loop, asking it to create prompts for a better system for producing working systems from prompts, and ranking the results in a simple way. Superhuman (in every measurable way) AGI within 2 years after self-improvement starts.
38
Holden
Ajeya
Howie
39
Holden
Ajeya
Me
Also like ‘Eliezer but more reasonable’. A lot of my thinking feels like an unclear mix of deferring and inside view.
40
Holden Karnofsky
Ajeya Cotra
Eliezer Yudkowsky
41
Holden Karnofsky
Ajeya Cotra
Joseph Carlsmith
42
Holden Karnofsky
Ajeya Cotra
Katja Grace's survey of experts
43
Holden/Ajeya combo
Buck Shlegeris
Alex Lawsen
Probably better to decide whether the public data will be de-anonymised, eh?

Good survey idea.
44
I do not know
Me
I do not know
45
I don't have well-defined AI timelines; if I did, I would probably defer to the median view of people I interact with rather than to a particular person.
I can't think of any particular person with an explicit timeline that I give substantial credence to. I think nostalgebraist on Tumblr has the most consistently correct-seeming views (e.g. ones that match my inside view).
46
Inside view
Ajeya
MIRI
47
Inside view
Daniel Kokotajlo
Metaculus / surveys of experts
48
Inside views formed while studying ways to accelerate matmul using sparsity in 2016-17
DeepMind could do it in a month if they wanted to.
49
Jan Brauner
Holden Karnofsky
Eliezer Yudkowsky
50
Joe Carlsmith
Ajeya Cotra
Metaculus
Thanks for doing this :)
51
Kelsey Piper
Holden Karnofsky
Rob Wiblin
52
Kurzweil
Sam Altman
Ilya Sutskever
53
Me
Ajeya Cotra
Buck Shlegeris
54
Me
Ajeya Cotra
Daniel Kokotajlo
55
Me
Ajeya Cotra
Eliezer Yudkowsky
56
Me
Ajeya Cotra
Metaculus
57
Me
Ajeya Cotra
Metaculus
Left off people whose views are highly correlated with Ajeya's (e.g. Paul, Holden)

My personal views are heavily influenced by Tom Davidson's report and other outside-view-ish considerations, but this isn't really deference to a particular person per se, so I didn't list it.
58
Me
Ajeya Cotra
Rohin Shah
59
Me
Ajeya Cotra
Siméon Campos
60
Me
Ajeya Cotra
61
Me
Ajeya Cotra
Paul Christiano
This seems useful, good work :)
62
Me
Daniel Kokotajlo
Ajeya Cotra
Short timelines
63
Me
Daniel Kokotajlo
Ajeya Cotra
64
Me
General attitudes among AI alignment/strategy/forecasting people I know, and especially those who have thought a lot about timelines or are at or close to leading labs
Daniel Kokotajlo
65
Me
Jan Kulveit
MIRI
Relevant things I am NOT deferring to:
Ajeya's report
Surveys of AI capabilities experts
Superforecasters, e.g. Samotsvety (because I don't have access to their reasoning; I predict I would defer if I did. 60% chance I would :-) )

Other things I would take into account (by factoring them into my inside view):
Paul Christiano
AI forecasting by "alignment people" with explained reasoning

Comment:
My top person to defer to (Jan Kulveit) is just "the person from my social circles who I think is strictly smarter and more knowledgeable about this". So I think that despite trying to form my own views, my views would magically change if he disagreed :-). (But I don't interact with him that often, so I put myself in first position.)
66
Me
MIRI
DeepMind
67
Me
Nostalgebraist
My timelines are highly uncertain because I don't see a lot of evidence that most of the engineering time spent in ML is going toward the things that I think would most accelerate AI timelines, but I could be wrong, and these projects may be happening secretly in the background.
68
Me
Oli Habryka
Eliezer Yudkowsky
I don't really know the answer for second-most and third-most.
69
Me
Paul Christiano
Bioanchors
70
Me
Paul Christiano
Buck Shlegeris
71
Me
Paul Christiano
Much of my deference is not to an individual, but to my overall guess at which timeline-related views the AI and AI safety communities think are reasonable
72
Me
73
Me
74
Me
75
Me
It's hard for me to remember how I would have answered in, say, 2010 or 2000. I remember being unimpressed with Kurzweilian predictions a long time ago. It's possible I would have deferred some to Eliezer early on if he had talked about timelines more often back then.

Nowadays, there seems to be enough clear concrete information that there is no need for deference heuristics in timelines. In contrast, probability of doom conditioned on strong AI is much harder for me to quantify, and I do defer to some degree to a mix of people in the field (e.g. Eliezer, Paul Christiano and others).
76
Me
77
Me
78
Metaculus
Posters on Twitter, no one in particular
My impressions of new large models
79
My inside view
Eliezer
80
Nick Bostrom
Gary Marcus (NYU psych professor, AI skeptic)
Eliezer Yudkowsky
I agree that deference to experts undermines independence of judgment on this issue
81
Nobody
Evidence-based/transparent reasoning from experts
Me, technically, though I've got nothing to go off of yet: there isn't a general approach others are willing to substantiate much, and I don't know where else to start.
82
Open Philanthropy
Me
Metaculus
By Open Philanthropy I particularly mean Tom Davidson and Ajeya Cotra.
83
Paul Christiano
Daniel Kokotajlo
84
Paul Christiano
Jared Kaplan
Me
85
Paul Christiano
Me
86
Paul Christiano / Cotra bio anchors report
Me
Tom D's report
87
Robert Miles
Ajeya Cotra
88
Robin Hanson
Katja Grace
Thanks for doing this
89
Samotsvety
Paul Christiano
Neel Nanda
90
Some MIRI folks
Me
91
The experts polled by Grace et al. and AI Impacts
Ajeya Cotra
Miles Brundage
92
93
94
Ajeya Cotra
Holden Karnofsky
Me
95
ML researchers
AI safety researchers
96
97
98
99
100