Note: in the spreadsheet, go to Data > Create Filter View to filter by confidence or funding level locally; no one else will see your changes.
| Title | Category | Confidence | Funding_Needed | Focus | Leader |
|---|---|---|---|---|---|
| Don't Worry About the Vase | Zvi | 3 - High | 0 - None | Zvi Mowshowitz writes a lot of words, really quite a lot. | Zvi Mowshowitz |
| Psychosecurity Ethics at EURAIO | AI Non-Technical Research and Education | 3 - High | 0 - None | Summits to discuss AI respecting civil liberties and not using psychological manipulation or eroding autonomy. | Neil Watson |
| Model Evaluation and Threat Research (METR) | ML Alignment Research | 3 - High | 0 - None | Model evaluations | Beth Barnes, Emma Abele, Chris Painter, Kit Harris |
| Balsa Research | Zvi | 3 - High | 1 - Low | Groundwork starting with studies to allow repeal of the Jones Act | Zvi Mowshowitz |
| AI Safety Info (Robert Miles) | AI Non-Technical Research and Education | 3 - High | 1 - Low | Making YouTube videos about AI safety, starring Rob Miles | Rob Miles |
| Intelligence Rising | AI Non-Technical Research and Education | 3 - High | 1 - Low | Facilitation of the AI scenario planning game Intelligence Rising. | Caroline Jeanmaire |
| Cybersecurity Lab at University of Louisville | ML Alignment Research | 3 - High | 1 - Low | Allow Roman Yampolskiy to continue his research and pursue a PhD | Roman Yampolskiy |
| AI Safety Camp | Talent Funnels | 3 - High | 1 - Low | Learning by doing, participants work on a concrete project in the field | Remmelt Ellen and Linda Linsefors |
| Center for Law and AI Risk | Talent Funnels | 3 - High | 1 - Low | Paying academics small stipends to move into AI safety work | Peter Salib (psalib @ central.uh.edu), Yonathan Arbel (yarbel @ law.ua.edu) and Kevin Frazier (kfrazier2 @ stu.edu) |
| Existential Risk Observatory | AI Policy and Diplomacy | 2 - Medium | 1 - Low | Get the word out and also organize conferences | Otto Barten |
| Mathematical Metaphysics Institute | Math Decision Theory and Agent Foundations | 1 - Low | 1 - Low | Searching for a mathematical basis for metaethics. | Alex Zhu |
| Pour Domain | Bio Risk | 1 - Low | 1 - Low | AI enabled biorisks, among other things. | Patrick Stadler |
| AI Safety Cape Town | Talent Funnels | 1 - Low | 1 - Low | AI safety community building and research in South Africa | Leo Hyams and Benjamin Sturgeon |
| Catalyze Impact | Talent Funnels | 1 - Low | 1 - Low | Incubation of AI safety organizations | Alexandra Bos |
| The Scenario Project | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI forecasting research projects, governance research projects, and policy engagement, in that order. | Daniel Kokotajlo, with Eli Lifland |
| Effective Institutions Project (EIP) | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI governance, advisory and research, finding how to change decision points | Ian David Moss |
| Artificial Intelligence Policy Institute (AIPI) | AI Non-Technical Research and Education | 3 - High | 2 - Medium | Polls about AI | Daniel Colson |
| Palisade Research | AI Non-Technical Research and Education | 3 - High | 2 - Medium | AI capabilities demonstrations to inform decision makers | Jeffrey Ladish |
| Foundation for American Innovation (FAI) | AI Policy and Diplomacy | 3 - High | 2 - Medium | Tech policy research, thought leadership, educational outreach to government | Grace Meyer |
| Center for AI Policy (CAIP) | AI Policy and Diplomacy | 3 - High | 2 - Medium | Lobbying Congress to adopt mandatory AI safety standards | Jason Green-Lowe |
| Encode Justice | AI Policy and Diplomacy | 3 - High | 2 - Medium | Youth activism on AI safety issues | Sneha Revanur |
| The Future Society | AI Policy and Diplomacy | 3 - High | 2 - Medium | AI governance standards and policy. | Caroline Jeanmaire |
| Safer AI | AI Policy and Diplomacy | 3 - High | 2 - Medium | Specifications for good AI safety, also directly impacting EU AI policy | Simeon Campos |
| Timaeus | ML Alignment Research | 3 - High | 2 - Medium | Interpretability research | Jesse Hoogland |
| Simplex | ML Alignment Research | 3 - High | 2 - Medium | Mechanistic interpretability of how inference breaks down | Paul Riechers and Adam Shai |
| Orthogonal | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | AI alignment via agent foundations | Tamsin Leake |
| Topos Institute | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | Math for AI alignment | Brendan Fong and David Spivak |
| Eisenstat Research | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | Two people doing research at MIRI, in particular Sam Eisenstat | Sam Eisenstat |
| ALTER (Affiliate Learning-Theoretic Employment and Resources) Project | Math Decision Theory and Agent Foundations | 3 - High | 2 - Medium | This research agenda, with this status update, examining intelligence | Vanessa Kosoy |
| MSEP Project at Science and Technology Futures (Their Website) | Cool Other Stuff Including Tech | 3 - High | 2 - Medium | Drexlerian nanotechnology | Eric Drexler, of course |
| Emergent Ventures | Talent Funnels | 3 - High | 2 - Medium | Small grants to individuals to help them develop their talent | Tyler Cowen |
| Convergence Analysis | AI Non-Technical Research and Education | 2 - Medium | 2 - Medium | A series of sociotechnical reports on key AI scenarios, governance recommendations and conducting AI awareness efforts. | David Kristoffersson |
| AI Standards Lab | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Accelerating the writing of AI safety standards | Ariel Gil and Jonathan Happel |
| Safer AI Forum | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | International AI safety conferences | Fynn Heide and Conor McGurk |
| Pause AI and Pause AI Global | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Get the word out and also organize conferences | Holly Elmore |
| Simons Institute for Longterm Governance | AI Policy and Diplomacy | 2 - Medium | 2 - Medium | Foundations and demand for international cooperation on AI governance and differential tech development | Konrad Seifert and Maxime Stauffer |
| Alignment in Complex Systems Research Group | ML Alignment Research | 2 - Medium | 2 - Medium | AI alignment research on hierarchical agents and multi-system interactions | Jan Kulveit |
| Charter Cities Institute | Cool Other Stuff Including Tech | 2 - Medium | 2 - Medium | Building charter cities | Kurtis Lockhart |
| Secure DNA | Bio Risk | 2 - Medium | 2 - Medium | Scanning DNA synthesis for potential hazards | Kevin Esvelt, Andrew Yao and Raphael Egger |
| Blueprint Biosecurity | Bio Risk | 2 - Medium | 2 - Medium | Increasing capability to respond to future pandemics, next-gen PPE, Far-UVC. | Jake Swett |
| Manifund | Regrant to Fund Other Organizations | 2 - Medium | 2 - Medium | Regranters to AI safety, existential risk, EA meta projects, creative mechanisms | Austin Chen (austin at manifund.org) |
| AI Risk Mitigation Fund | Regrant to Fund Other Organizations | 2 - Medium | 2 - Medium | Spinoff of LTFF, grants for AI safety projects | Thomas Larsen |
| Speculative Technologies | Talent Funnels | 2 - Medium | 2 - Medium | Fellowships for Drexlerian functional nanomachines, high-throughput tools and discovering new superconductors | Benjamin Reinhardt |
| Talos Network | Talent Funnels | 2 - Medium | 2 - Medium | Fellowships to other organizations, such as Future Society, Safer AI and FLI. | Cillian Crosson (same as Tarbell for now but she plans to focus on Tarbell) |
| Epistea | Talent Funnels | 2 - Medium | 2 - Medium | X-risk residencies, workshops, coworking in Prague, fiscal sponsorships | Irena Kotikova |
| Longview Philanthropy | AI Non-Technical Research and Education | 1 - Low | 2 - Medium | Conferences and advice on x-risk for those giving >$1 million per year | Simran Dhaliwal |
| Legal Advocacy for Safe Science and Technology | AI Policy and Diplomacy | 1 - Low | 2 - Medium | Legal team for lawsuits on catastrophic risk and to defend whistleblowers. | Tyler Whitmer |
| Apart Research | ML Alignment Research | 1 - Low | 2 - Medium | AI safety hackathons | Esben Kran, Jason Schreiber |
| Atlas Computing | ML Alignment Research | 1 - Low | 2 - Medium | Guaranteed safe AI | Evan Miyazono |
| Focal at CMU | Math Decision Theory and Agent Foundations | 1 - Low | 2 - Medium | Game theory for cooperation by autonomous AI agents | Vincent Conitzer |
| Carbon Copies for Independent Minds | Cool Other Stuff Including Tech | 1 - Low | 2 - Medium | Whole brain emulation | Randal Koene |
| Foresight | Regrant to Fund Other Organizations | 1 - Low | 2 - Medium | Regrants, fellowships and events | Allison Duettmann |
| Centre for Enabling Effective Altruism Learning & Research (CEELAR) | Regrant to Fund Other Organizations | 1 - Low | 2 - Medium | The Athena Hotel aka The EA Hotel as catered host for EAs in UK | Greg Colbourn |
| Impact Academy Limited | Talent Funnels | 1 - Low | 2 - Medium | Incubation, fellowship and training in India for technical AI safety | Sebastian Schmidt |
| Tarbell Fellowship at PPF | Talent Funnels | 1 - Low | 2 - Medium | Journalism fellowships for oversight of AI companies. | Cillian Crosson (same as Talos Network for now but she plans to focus here) |
| Akrose | Talent Funnels | 1 - Low | 2 - Medium | Various field building activities in AI safety | Victoria Brook |
| CeSIA within EffiSciences | Talent Funnels | 1 - Low | 2 - Medium | New AI safety org in Paris, discourse, R&D collaborations, talent pipeline | Charbel-Raphael Segerie, Florent Berthet |
| Stanford Existential Risk Initiative (SERI) | Talent Funnels | 1 - Low | 2 - Medium | Recruitment for existential risk causes | Steve Luby and Paul Edwards |
| Lightcone Infrastructure | AI Non-Technical Research and Education | 3 - High | 3 - High | Rationality community infrastructure, LessWrong, AF and Lighthaven. | Oliver Habryka, Raymond Arnold, Ben Pace |
| Center for AI Safety and the CAIS Action Fund | AI Policy and Diplomacy | 3 - High | 3 - High | AI research, field building and advocacy | Dan Hendrycks |
| MIRI | AI Policy and Diplomacy | 3 - High | 3 - High | At this point, primarily AI policy advocacy, plus some research | Malo Bourgon, Eliezer Yudkowsky |
| Alignment Research Center (ARC) | ML Alignment Research | 3 - High | 3 - High | Theoretically motivated alignment work | Jacob Hilton |
| Apollo Research | ML Alignment Research | 3 - High | 3 - High | Evaluations, especially versus deception, some interpretability and governance. | Marius Hobbhahn |
| Good Ancestor Foundation | Cool Other Stuff Including Tech | 3 - High | 3 - High | Collaborations for tools to increase civilizational robustness to catastrophes | Colby Thompson |
| SFF Itself (!) | Regrant to Fund Other Organizations | 3 - High | 3 - High | Give out grants based on recommenders, primarily to 501(c)(3) organizations | Andrew Critch and Jaan Tallinn |
| Institute for AI Policy and Strategy (IAPS) | AI Policy and Diplomacy | 2 - Medium | 3 - High | Papers and projects for ‘serious’ government circles, meetings with same. | Peter Wildeford |
| CLTR at Founders Pledge | AI Policy and Diplomacy | 2 - Medium | 3 - High | UK policy think tank focusing on ‘extreme AI risk and biorisk policy.’ | Angus Mercer |
| Far AI | ML Alignment Research | 2 - Medium | 3 - High | Interpretability and other alignment research, incubator, hits based approach | Adam Gleave |
| ALLFED | Cool Other Stuff Including Tech | 2 - Medium | 3 - High | Feeding people with resilient foods after a potential nuclear war | David Denkenberger |
| MATS Research | Talent Funnels | 2 - Medium | 3 - High | Researcher mentorship for those new to AI safety. | Ryan Kidd and Christian Smith |
| Transluce | ML Alignment Research | 1 - Low | 3 - High | Interpretability, tools for AI control, and so forth. New org. | Jacob Steinhardt, Sarah Schwettmann |
| German Primate Center (DPZ) – Leibniz Institute for Primate Research | Cool Other Stuff Including Tech | 1 - Low | 3 - High | Creating primates from cultured edited stem cells | Sergiy Velychko and Rudiger Behr |
| Long Term Future Fund | Regrant to Fund Other Organizations | 1 - Low | 3 - High | Grants of 4-6 figures mostly to individuals, mostly for AI existential risk | Caleb Parikh (among other fund managers) |
| Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) | Talent Funnels | 1 - Low | 3 - High | Fellowships and affiliate programs for new alignment researchers | Nora Ammann, Lucas Teixeira and Dusan D. Nesic |
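For readers working from a static export rather than the live sheet, the same filtering can be done locally. A minimal sketch in Python, assuming the sheet has been exported as CSV text with the columns shown above (the sample rows and the helper name `filter_rows` are illustrative, not part of the spreadsheet):

```python
import csv
import io

# Hypothetical CSV export of the sheet; only a few sample rows shown.
SAMPLE = """Title,Category,Confidence,Funding_Needed
Model Evaluation and Threat Research (METR),ML Alignment Research,3 - High,0 - None
Timaeus,ML Alignment Research,3 - High,2 - Medium
Transluce,ML Alignment Research,1 - Low,3 - High
"""

def filter_rows(csv_text, min_confidence=3, max_funding=2):
    """Keep titles whose Confidence >= min_confidence and Funding_Needed <= max_funding.

    Ratings are strings like '3 - High'; the leading digit carries the level.
    """
    def level(cell):
        return int(cell.split(" ")[0])

    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["Title"] for r in rows
            if level(r["Confidence"]) >= min_confidence
            and level(r["Funding_Needed"]) <= max_funding]

# High-confidence orgs that still have meaningful room for funding:
print(filter_rows(SAMPLE))
```

Relaxing the thresholds (e.g. `min_confidence=1, max_funding=3`) returns every row, which matches the behavior of clearing a filter view.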