Note: to filter this list by confidence or funding level in the original spreadsheet, go to Data > Create Filter View; filter views are local to you, so no one else will see your changes.
Title | Category | Confidence | Funding Needed | Focus | Leader
---|---|---|---|---|---
Don’t Worry About the Vase | Zvi | 3 - High | 0 - None | Zvi Mowshowitz writes a lot of words, really quite a lot. | Zvi Mowshowitz
Psychosecurity Ethics at EURAIO | AI Non-Technical Research and Education | 3 - High | 0 - None | Summits to discuss AI respecting civil liberties and not using psychological manipulation or eroding autonomy. | Neil Watson
Model Evaluation and Threat Research (METR) | ML Alignment Research | 3 - High | 0 - None | Model evaluations | Beth Barnes, Emma Abele, Chris Painter, Kit Harris
Balsa Research | Zvi | 3 - High | 1 - Low | Groundwork starting with studies to allow repeal of the Jones Act | Zvi Mowshowitz
AI Safety Info (Robert Miles) | AI Non-Technical Research and Education | 3 - High | 1 - Low | Making YouTube videos about AI safety, starring Rob Miles | Rob Miles
Intelligence Rising | AI Non-Technical Research and Education | 3 - High | 1 - Low | Facilitation of the AI scenario planning game Intelligence Rising. | Caroline Jeanmaire
Cybersecurity Lab at University of Louisville | ML Alignment Research | 3 - High | 1 - Low | Allow Roman Yampolskiy to continue his research and pursue a PhD | Roman Yampolskiy
AI Safety Camp | Talent Funnels | 3 - High | 1 - Low | Learning by doing, participants work on a concrete project in the field | Remmelt Ellen and Linda Linsefors
Center for Law and AI Risk | Talent Funnels | 3 - High | 1 - Low | Paying academics small stipends to move into AI safety work | Peter Salib (psalib @ central.uh.edu), Yonathan Arbel (yarbel @ law.ua.edu) and Kevin Frazier (kfrazier2 @ stu.edu)
Existential Risk Observatory | AI Policy and Diplomacy | 2 - Medium | 1 - Low | Get the word out and also organize conferences | Otto Barten
Mathematical Metaphysics Institute | Math Decision Theory and Agent Foundations | 1 - Low | 1 - Low | Searching for a mathematical basis for metaethics. | Alex Zhu
Pour Domain | Bio Risk | 1 - Low | 1 - Low | AI-enabled biorisks, among other things. | Patrick Stadler
AI Safety Cape Town | Talent Funnels | 1 - Low | 1 - Low | AI safety community building and research in South Africa | Leo Hyams and Benjamin Sturgeon
Catalyze Impact | Talent Funnels | 1 - Low | 1 - Low | Incubation of AI safety organizations | Alexandra Bos
The Scenario Project | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI forecasting research projects, governance research projects, and policy engagement, in that order. | Daniel Kokotajlo, with Eli Lifland
Effective Institutions Project (EIP) | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI governance, advisory and research, finding how to change decision points | Ian David Moss
Artificial Intelligence Policy Institute (AIPI) | AI Non-Technical Research and Education | 3 - High | 2 - Medium | Polls about AI | Daniel Colson
Palisade Research | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI capabilities demonstrations to inform decision makers | Jeffrey Ladish
Foundation for American Innovation (FAI) | AI Policy and Diplomacy | 3 - High | 2 - Medium | Tech policy research, thought leadership, educational outreach to government | Grace Meyer
Center for AI Policy (CAIP) | AI Policy and Diplomacy | 3 - High | 2 - Medium | Lobbying Congress to adopt mandatory AI safety standards | Jason Green-Lowe
Encode Justice | AI Policy and Diplomacy | 3 - High | 2 - Medium | Youth activism on AI safety issues | Sneha Revanur
The Future Society | AI Policy and Diplomacy | 3 - High | 2 - Medium | AI governance standards and policy. | Caroline Jeanmaire
Safer AI | AI Policy and Diplomacy | 3 - High | 2 - Medium | Specifications for good AI safety, also directly impacting EU AI policy | Simeon Campos
Timaeus | ML Alignment Research | 3 - High | 2 - Medium | Interpretability research | Jesse Hoogland
Simplex | ML Alignment Research | 3 - High | 2 - Medium | Mechanistic interpretability of how inference breaks down | Paul Riechers and Adam Shai
Orthogonal | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | AI alignment via agent foundations | Tamsin Leake
Topos Institute | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | Math for AI alignment | Brendan Fong and David Spivak
Eisenstat Research | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | Two people doing research at MIRI, in particular Sam Eisenstat | Sam Eisenstat
ALTER (Affiliate Learning-Theoretic Employment and Resources) Project | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | Vanessa Kosoy’s learning-theoretic research agenda, examining intelligence | Vanessa Kosoy
MSEP Project at Science and Technology Futures | Cool Other Stuff Including Tech | 3 - High | 2 - Medium | Drexlerian nanotechnology | Eric Drexler, of course
Emergent Ventures | Talent Funnels | 3 - High | 2 - Medium | Small grants to individuals to help them develop their talent | Tyler Cowen
Convergence Analysis | AI Non-Technical Research and Education | 2 - Medium | 2 - Medium | A series of sociotechnical reports on key AI scenarios, governance recommendations, and AI awareness efforts. | David Kristoffersson
AI Standards Lab | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Accelerating the writing of AI safety standards | Ariel Gil and Jonathan Happel
Safer AI Forum | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | International AI safety conferences | Fynn Heide and Conor McGurk
Pause AI and Pause AI Global | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Get the word out and also organize conferences | Holly Elmore
Simons Institute for Longterm Governance | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Foundations and demand for international cooperation on AI governance and differential tech development | Konrad Seifert and Maxime Stauffer
Alignment in Complex Systems Research Group | ML Alignment Research | 2 - Medium | 2 - Medium | AI alignment research on hierarchical agents and multi-system interactions | Jan Kulveit
Charter Cities Institute | Cool Other Stuff Including Tech | 2 - Medium | 2 - Medium | Building charter cities | Kurtis Lockhart
Secure DNA | Bio Risk | 2 - Medium | 2 - Medium | Scanning DNA synthesis for potential hazards | Kevin Esvelt, Andrew Yao and Raphael Egger
Blueprint Biosecurity | Bio Risk | 2 - Medium | 2 - Medium | Increasing capability to respond to future pandemics, next-gen PPE, far-UVC. | Jake Swett
Manifund | Regrant to Fund Other Organizations | 2 - Medium | 2 - Medium | Regranters to AI safety, existential risk, EA meta projects, creative mechanisms | Austin Chen (austin at manifund.org)
AI Risk Mitigation Fund | Regrant to Fund Other Organizations | 2 - Medium | 2 - Medium | Spinoff of LTFF, grants for AI safety projects | Thomas Larsen
Speculative Technologies | Talent Funnels | 2 - Medium | 2 - Medium | Fellowships for Drexlerian functional nanomachines, high-throughput tools, and discovering new superconductors | Benjamin Reinhardt
Talos Network | Talent Funnels | 2 - Medium | 2 - Medium | Fellowships to other organizations, such as the Future Society, Safer AI and FLI. | Cillian Crosson (same as Tarbell for now, but she plans to focus on Tarbell)
Epistea | Talent Funnels | 2 - Medium | 2 - Medium | X-risk residencies, workshops, coworking in Prague, fiscal sponsorships | Irena Kotikova
Longview Philanthropy | AI Non-Technical Research and Education | 1 - Low | 2 - Medium | Conferences and advice on x-risk for those giving >$1 million per year | Simran Dhaliwal
Legal Advocacy for Safe Science and Technology | AI Policy and Diplomacy | 1 - Low | 2 - Medium | Legal team for lawsuits on catastrophic risk and to defend whistleblowers. | Tyler Whitmer
Apart Research | ML Alignment Research | 1 - Low | 2 - Medium | AI safety hackathons | Esben Kran, Jason Schreiber
Atlas Computing | ML Alignment Research | 1 - Low | 2 - Medium | Guaranteed safe AI | Evan Miyazono
Focal at CMU | Math Decision Theory and Agent Foundations | 1 - Low | 2 - Medium | Game theory for cooperation by autonomous AI agents | Vincent Conitzer
Carbon Copies for Independent Minds | Cool Other Stuff Including Tech | 1 - Low | 2 - Medium | Whole brain emulation | Randal Koene
Foresight | Regrant to Fund Other Organizations | 1 - Low | 2 - Medium | Regrants, fellowships and events | Allison Duettmann
Centre for Enabling Effective Altruism Learning & Research (CEELAR) | Regrant to Fund Other Organizations | 1 - Low | 2 - Medium | The Athena Hotel, aka the EA Hotel, a catered host for EAs in the UK | Greg Colbourn
Impact Academy Limited | Talent Funnels | 1 - Low | 2 - Medium | Incubation, fellowship and training in India for technical AI safety | Sebastian Schmidt
Tarbell Fellowship at PPF | Talent Funnels | 1 - Low | 2 - Medium | Journalism fellowships for oversight of AI companies. | Cillian Crosson (same as Talos Network for now, but she plans to focus here)
Akrose | Talent Funnels | 1 - Low | 2 - Medium | Various field building activities in AI safety | Victoria Brook
CeSIA within EffiSciences | Talent Funnels | 1 - Low | 2 - Medium | New AI safety org in Paris, discourse, R&D collaborations, talent pipeline | Charbel-Raphael Segerie, Florent Berthet
Stanford Existential Risk Initiative (SERI) | Talent Funnels | 1 - Low | 2 - Medium | Recruitment for existential risk causes | Steve Luby and Paul Edwards
Lightcone Infrastructure | AI Non-Technical Research and Education | 3 - High | 3 - High | Rationality community infrastructure, LessWrong, AF and Lighthaven. | Oliver Habryka, Raymond Arnold, Ben Pace
Center for AI Safety and the CAIS Action Fund | AI Policy and Diplomacy | 3 - High | 3 - High | AI research, field building and advocacy | Dan Hendrycks
MIRI | AI Policy and Diplomacy | 3 - High | 3 - High | At this point, primarily AI policy advocacy, plus some research | Malo Bourgon, Eliezer Yudkowsky
Alignment Research Center (ARC) | ML Alignment Research | 3 - High | 3 - High | Theoretically motivated alignment work | Jacob Hilton
Apollo Research | ML Alignment Research | 3 - High | 3 - High | Evaluations, especially versus deception, some interpretability and governance. | Marius Hobbhahn
Good Ancestor Foundation | Cool Other Stuff Including Tech | 3 - High | 3 - High | Collaborations for tools to increase civilizational robustness to catastrophes | Colby Thompson
SFF Itself (!) | Regrant to Fund Other Organizations | 3 - High | 3 - High | Give out grants based on recommenders, primarily to 501(c)(3) organizations | Andrew Critch and Jaan Tallinn
Institute for AI Policy and Strategy (IAPS) | AI Policy and Diplomacy | 2 - Medium | 3 - High | Papers and projects for ‘serious’ government circles, meetings with same. | Peter Wildeford
CLTR at Founders Pledge | AI Policy and Diplomacy | 2 - Medium | 3 - High | UK policy think tank focusing on ‘extreme AI risk and biorisk policy.’ | Angus Mercer
Far AI | ML Alignment Research | 2 - Medium | 3 - High | Interpretability and other alignment research, incubator, hits-based approach | Adam Gleave
ALLFED | Cool Other Stuff Including Tech | 2 - Medium | 3 - High | Feeding people with resilient foods after a potential nuclear war | David Denkenberger
MATS Research | Talent Funnels | 2 - Medium | 3 - High | Researcher mentorship for those new to AI safety. | Ryan Kidd and Christian Smith
Transluce | ML Alignment Research | 1 - Low | 3 - High | Interpretability, tools for AI control, and so forth. New org. | Jacob Steinhardt, Sarah Schwettmann
German Primate Center (DPZ) – Leibniz Institute for Primate Research | Cool Other Stuff Including Tech | 1 - Low | 3 - High | Creating primates from cultured edited stem cells | Sergiy Velychko and Rüdiger Behr
Long Term Future Fund | Regrant to Fund Other Organizations | 1 - Low | 3 - High | Grants of 4-6 figures, mostly to individuals, mostly for AI existential risk | Caleb Parikh (among other fund managers)
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) | Talent Funnels | 1 - Low | 3 - High | Fellowships and affiliate programs for new alignment researchers | Nora Ammann, Lucas Teixeira and Dusan D. Nesic
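
If you would rather filter this list in code than make a filter view in the sheet, here is a minimal sketch. It assumes the table has been exported as a CSV file with the column headers above; the file name `sff_orgs.csv` and the export step are assumptions, not part of the original sheet.

```python
# Minimal sketch: filter the table above by confidence and funding level.
# Assumes the sheet was exported as "sff_orgs.csv" (hypothetical name)
# with the same column headers as the table above.
import pandas as pd

df = pd.read_csv("sff_orgs.csv")

# The Confidence and Funding Needed columns hold strings like "3 - High";
# the leading digit gives a sortable rank (0 = None ... 3 = High).
df["confidence_rank"] = df["Confidence"].str[0].astype(int)
df["funding_rank"] = df["Funding Needed"].str[0].astype(int)

# Example: high-confidence organizations that still need at least medium funding.
picks = df[(df["confidence_rank"] == 3) & (df["funding_rank"] >= 2)]
print(picks[["Title", "Category", "Focus"]].to_string(index=False))
```

Sorting works the same way, e.g. `df.sort_values(["funding_rank", "confidence_rank"], ascending=False)` to put the most funding-hungry, highest-confidence entries first.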