| Who do you defer to most on AI timelines? | Who do you defer to second-most on AI timelines? | Who do you defer to third-most on AI timelines? | Comments |
|---|---|---|---|
| A vague cluster of short-timelines LLM/scaling-pilled people, like Kyle McDonell | The general Constellation cluster, led by Paul and Ajeya | Me | In total, I defer about 75%. My inside-view median is 5 years, and the distribution I defer towards has median 10 years. After deferring, I have an 8-year median. (A sketch of this arithmetic appears after the table.) |
| Ajeya | Daniel K | | |
| Ajeya | Janice | Quintin Pope | |
| Ajeya / Open Phil | Daniel Kokotajlo | Me | |
| Ajeya Cotra | Ben Garfinkel | Daniel Kokotajlo | |
| Ajeya Cotra | Daniel Kokotajlo | Me | |
| Ajeya Cotra | Daniel Kokotajlo | | |
| Ajeya Cotra | Daniel Kokotajlo | | |
| Ajeya Cotra | Daniel Kokotajlo | | |
| Ajeya Cotra | David Field | | |
| Ajeya Cotra | Eliezer Yudkowsky | Me | I'm not sure I endorse the format of this survey. E.g. my response above cashes out as something like "30% Ajeya; 15% Eliezer; 10% me; rest is on a bundle of other people plus deep uncertainty". I imagine this is very different from other people who might give the same surface answer. |
| Ajeya Cotra | Eliezer Yudkowsky | Paul Christiano | |
| Ajeya Cotra | Eliezer Yudkowsky | | |
| Ajeya Cotra | Holden Karnofsky | Me | |
| Ajeya Cotra | Holden Karnofsky | Paul Christiano | |
| Ajeya Cotra | Holden Karnofsky | Rohin Shah | |
| Ajeya Cotra | Holden Karnofsky | Linch Zhang | |
| Ajeya Cotra | Holden? | Me / some sceptic voice | |
| Ajeya Cotra | Joseph Carlsmith | Vague sense of distribution of AI safety community views | |
| Ajeya Cotra | Paul Christiano | Daniel K | Could be nice to see whether people defer wholeheartedly vs adding a lot of model uncertainty. |
| Ajeya Cotra | Paul Christiano | Daniel Kokotajlo | I don't feel like I defer to any of these three people the "most"; I chose somewhat arbitrarily. Often I consider arguments from the three of them and then see which ones make the most sense to me. |
| Ajeya Cotra | Rohin Shah | Myself | |
| Ajeya Cotra | Samotsvety Forecasting Aggregate | Me | Cool idea! A bit hard for me to rank the top 3, but I think these are definitely my top 3. When I am normally thinking, I mainly just think about my independent impression, but if I were forced at gunpoint to give my best guess for AI timelines, I think this is roughly the order I would defer in. It's also pretty hard to separate independent impression from deference on complex topics, in my experience. |
| Ajeya Cotra | Will MacAskill (HoH skepticism) | Eliezer Yudkowsky | |
| Ajeya Cotra's "Forecasting Transformative AI with Biological Anchors" (and Ajeya's update: https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) | Samotsvety's AI risk forecasts (https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts) | | Thanks, this seems valuable! |
| Ben Garfinkel | Jonas Schuett | Me | |
| Bioanchors | | | |
| Community average on Metaculus and in LessWrong posts | Daniel Kokotajlo | Ajeya | |
| Connor Leahy | Daniel Kokotajlo | Inside view | |
| Daniel Kokotajlo | Ajeya Cotra | Me | |
| Daniel Kokotajlo | Ajeya Cotra | Me | |
| EA posts | Scott Alexander | porby on LessWrong | Good question! My timelines tend to swing depending on the last post I read. |
| Eliezer Yudkowsky | Ajeya Cotra | Me | |
| Eliezer Yudkowsky | Paul Christiano | Me | |
| Gwern | Paul | Daniel K | |
| gwern | Yudkowsky | Me | 4 years +5/-2 (95% confidence) to human-level ability in enough domains to start achieving recursive self-improvement at a rate equal to human researchers. E.g. someone can say "I wish I had a model to predict X from this publicly available data" and there's a >1% chance an existing system can produce a working narrow AI (retraining or derivative) to predict X at roughly a human level from that kind of prompt. At that point someone could stick some NLP generator in a loop, asking it to create prompts for a better system for producing working systems from prompts and ranking them in a simple way. Superhuman (in every measurable way) AGI within 2 years after self-improvement starts. |
| Holden | Ajeya | Howie | |
| Holden | Ajeya | Me | Also like ‘Eliezer but more reasonable’. A lot of my thinking feels like an unclear mix of deferring and inside view. |
| Holden Karnofsky | Ajeya Cotra | Eliezer Yudkowsky | |
| Holden Karnofsky | Ajeya Cotra | Joseph Carlsmith | |
| Holden Karnofsky | Ajeya Cotra | Katja Grace's survey of experts | |
| Holden/Ajeya combo | Buck Shlegeris | Alex Lawsen | Prob better to decide whether the public data will be deanonymised. Eh, good survey idea. |
| I do not know | Me | I do not know | |
| I don't have well-defined AI timelines, so if I did I would probably defer to the median view of people I interact with and not a particular person. | I can't think of any particular person with an explicit timeline that I give substantial credence to. I think nostalgebraist on Tumblr has the most consistently correct-seeming views (e.g. that match my inside view). | | |
| Inside view | Ajeya | MIRI | |
| Inside view | Daniel Kokotajlo | Metaculus / surveys of experts | |
| Inside views formed while studying ways to accelerate matmul using sparsity in 2016-17 | DeepMind could do it in a month if they wanted to | | |
| Jan Brauner | Holden Karnofsky | Eliezer Yudkowsky | |
| Joe Carlsmith | Ajeya Cotra | Metaculus | Thanks for doing this :) |
| Kelsey Piper | Holden Karnofsky | Rob Wiblin | |
| Kurzweil | Sam Altman | Ilya Sutskever | |
| Me | Ajeya Cotra | Buck Shlegeris | |
| Me | Ajeya Cotra | Daniel Kokotajlo | |
| Me | Ajeya Cotra | Eliezer Yudkowsky | |
| Me | Ajeya Cotra | Metaculus | |
| Me | Ajeya Cotra | Metaculus | Left off people whose views are highly correlated with Ajeya's (e.g. Paul, Holden). My personal views are heavily influenced by Tom Davidson's report and other outside-view-ish considerations, but this isn't really deference to a particular person per se, so I didn't list it. |
| Me | Ajeya Cotra | Rohin Shah | |
| Me | Ajeya Cotra | Siméon Campos | |
| Me | Ajeya Cotra | | |
| Me | Ajeya Cotra | Paul Christiano | This seems useful, good work :) |
| Me | Daniel Kokotajlo | Ajeya Cotra | Short timelines. |
| Me | Daniel Kokotajlo | Ajeya Cotra | |
| Me | General attitudes among AI alignment/strategy/forecasting people I know, and especially those who have thought a lot about timelines or are at or close to leading labs | Daniel Kokotajlo | |
| Me | Jan Kulveit | MIRI | Relevant things I am NOT deferring to: Ajeya's report; surveys of AI capabilities experts; superforecasters, e.g. Samotsvety (because I don't have access to their reasoning; I predict I would defer if I did, 60% chance :-) ). Other things I would take into account (by factoring them into my inside view): Paul Christiano; AI forecasting by "alignment people" with explained reasoning. Comment: my top person to defer to (Jan Kulveit) is just "the person from my social circles that I think is strictly smarter and more knowledgeable about this". So I think that despite trying to form my own views, my views would magically change if he disagreed :-). (But I don't interact with him that often, so I have myself in 1st position.) |
| Me | MIRI | DeepMind | |
| Me | Nostalgebraist | | My timelines are highly uncertain because I don't see a lot of evidence that most of the engineering time spent in ML is working on the things that I think would most accelerate AI timelines, but I could be wrong and these projects could be happening secretly in the background. |
| Me | Oli Habryka | Eliezer Yudkowsky | I don't really know the answer for second-most and third-most. |
| Me | Paul Christiano | Bioanchors | |
| Me | Paul Christiano | Buck Shlegeris | |
| Me | Paul Christiano | Much of my deference is not to an individual, but to my overall guess at which timeline-related views the AI and AI safety communities think are reasonable | |
| Me | | | |
| Me | | | |
| Me | | | |
| Me | | | It's hard for me to remember how I would have answered in, say, 2010 or 2000. I remember being unimpressed with Kurzweilian predictions a long time ago. It's possible I would have deferred some to Eliezer at some point early on if he had talked about timelines more often then. Nowadays, there seems to be enough clear concrete information that there is no need for deference heuristics in timelines. In contrast, probability of doom conditioned on strong AI is much harder for me to quantify, and I do defer to some degree to a mix of people in the field (e.g. Eliezer, Paul Christiano and others). |
| Me | | | |
| Me | | | |
| Metaculus | Posters on Twitter, no one in particular | My impressions of new large models | |
| My inside view | Eliezer | | |
| Nick Bostrom | Gary Marcus (NYU psych professor, AI skeptic) | Eliezer Yudkowsky | I agree that deference to experts undermines independence of judgment on this issue. |
| Nobody | Evidence-based/transparent reasoning from experts | Me, technically, though I've got nothing to go off of yet, because there isn't a general approach others are willing to substantiate much and I don't know where else to start | |
| Open Philanthropy | Me | Metaculus | By Open Philanthropy I particularly mean Tom Davidson and Ajeya Cotra. |
| Paul Christiano | Daniel Kokotajlo | | |
| Paul Christiano | Jared Kaplan | Me | |
| Paul Christiano | Me | | |
| Paul Christiano / Cotra bioanchors report | Me | Tom D report | |
| Robert Miles | Ajeya Cotra | | |
| Robin Hanson | Katja Grace | | Thanks for doing this |
| Samotsvety | Paul Christiano | Neel Nanda | |
| Some MIRI folks | Me | | |
| The experts polled by Grace et al. and AI Impacts | Ajeya Cotra | Miles Brundage | |
| Ajeya Cotra | Holden Karnofsky | Me | |
| ML researchers | AI safety researchers | | |
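
The first response quantifies its deference as a mixture: 25% weight on an inside view with a 5-year median, 75% weight on a deferred-to distribution with a 10-year median, landing at roughly an 8-year median. A minimal sketch of that arithmetic, assuming illustrative lognormal shapes; the distribution family and the sigma value are my assumptions, not the respondent's:

```python
# Sketch of the deference arithmetic in the first response above.
# Assumptions (not from the survey): both views are lognormal with
# sigma = 0.8. Only the medians (5 and 10 years) and the 75% deference
# weight come from the respondent's comment.
from scipy import stats, optimize

w_defer = 0.75                               # stated weight on the deferred-to view
inside = stats.lognorm(s=0.8, scale=5.0)     # inside view, median 5 years
deferred = stats.lognorm(s=0.8, scale=10.0)  # deferred-to view, median 10 years

def mixture_cdf(t: float) -> float:
    """CDF of the 25/75 mixture of the two timelines distributions."""
    return (1 - w_defer) * inside.cdf(t) + w_defer * deferred.cdf(t)

# The post-deference median is where the mixture CDF crosses 0.5.
median = optimize.brentq(lambda t: mixture_cdf(t) - 0.5, 0.1, 100.0)
print(f"post-deference median: {median:.1f} years")  # ~8.4 under these assumptions
```

The exact mixture median depends on the assumed distribution shapes, but for reasonable spreads it lands between the two input medians and near the respondent's stated 8 years.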