1. Column headers: Name with link | Summary | Structure | Length (start to finish) | Time per week | Social aspect built in? | Signing up | General notes
2. Add courses not listed or suggest updates
3. AGI safety fundamentals program (or here)
Summary: 3.5 hours of reading and 1.5 hours of discussion with a facilitator each week for 8 weeks. Focuses on technical safety more than the other courses.
Structure: Meet once a week; project at the end.
Length: 8 weeks
Time per week: 4.5 hours
Social aspect: yes
Signing up: https://airtable.com/shr41jcUXUc62n1SM
Notes:
Week 0 (optional): introduction to machine learning
Week 1: Artificial general intelligence
Week 2: Goals and misalignment
Week 3: Threat models and types of solutions
Week 4: Learning from humans
Week 5: Decomposing tasks for outer alignment
Week 6: Other paradigms for safety work
Week 7: AGI safety in context
Week 8 (several weeks later): Projects
Further material
4. Alignment 201 Curriculum
Summary: This curriculum is intended as a follow-up to the Alignment Fundamentals curriculum (the '101' to this 201 course). It consists of 7 weeks of readings, then two weeks focused on developing a literature review and/or research proposal.
As in the Alignment Fundamentals course, participants are divided into groups of 4-6 people, matched by prior knowledge of ML. Each week, each group meets with its discussion facilitator for 1.5 hours to discuss the readings and exercises. Weeks 6 and 7 branch into three parallel tracks: participants can choose to focus on Eliciting Latent Knowledge, Agent Foundations, or Science of Deep Learning.
The main focus each week is on the core readings and one exercise of your choice from those listed, for which you should allocate around 2 hours of preparation time. Approximate times to read each piece in depth are listed next to it.
Structure: Meet once a week; project at the end.
Length: 9 weeks
Time per week: 4 hours
Social aspect: yes
Signing up: https://www.agisafetyfundamentals.com/participate-details
5. AI safety camp
Summary: Actually do some AI research; more about output than learning.
Structure: Preparation (≥7 h/week for 7 weeks), weekend sprints (≥8 h/day), project (5-15 h).
Length: 7-8 weeks
Time per week: ≥7 hours during preparation
Social aspect: yes
Signing up: https://aisafety.camp/pre-sign-up/
6. Introduction to ML Safety
Summary: A course covering topics in machine learning relevant to ML safety.
Structure: Recorded lectures, written assignments, coding assignments, and readings. In ML Safety Scholars it was used as part of a structured program, but otherwise it is a self-paced online course.
Length: 40 hours
Time per week: n/a
Social aspect: no
Signing up: n/a
7. How to pursue a career in technical AI alignment
Summary: A reading list that includes both reading recommendations and general advice on building a career in the field, such as whether you need a PhD.
Structure: Reading list
Length: n/a
Time per week: n/a
Social aspect: no
Signing up: n/a
8. Victoria Krakovna resources
Summary: Big collection of varied resources, starting with basic overviews and career advice and then jumping into technical research papers.
Structure: n/a
Length: 40 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
9. How to get into independent research on alignment
Summary: Guide specifically on how to do independent research.
Structure: Blog post
Length: 10 minutes
Time per week: n/a
Social aspect: no
Signing up: n/a
10. List of longtermist courses
Summary: List of courses related to longtermism more generally, not AI safety specifically.
Structure: n/a
Length: n/a
Time per week: n/a
Social aspect: n/a
Signing up: n/a
11. AI Alignment Forum Sequences
Summary: Written by Ngo, Shah, and Christiano. They probably overlap substantially with Ngo's fellowship.
Structure: Similar to the rationality sequences
Length: 8 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
12. CHAI recommended reading materials
Summary: List of resources prioritized by importance to read.
Structure: n/a
Length: 40 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
13. 80k reading syllabus
Summary: Lots of math, game theory, and textbooks on fundamental theory.
Structure: n/a
Length: 40 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
14. Reading group
Summary: Unstructured reading group with a really good YouTube channel where they record summaries of the papers they read.
Structure: Reading group; no curriculum
Length: ongoing
Time per week: 1 hour
Social aspect: yes
Notes: The AISafety.com Reading Group meets weekly, usually Thursdays at 19:45 UTC. To join, add "soeren.elverlin" on Skype. Meetings usually start with small talk and a round of introductions; then the host gives a roughly 20-minute summary of the paper, which is uploaded to the YouTube channel. This is followed by discussion (of the article and in general), and finally the group decides on a paper to read the following week.
15. Safety and Control for Artificial General Intelligence
Summary: An actual AI safety university course (UC Berkeley). Touches multiple domains, including cognitive science, utility theory, cybersecurity, human-machine interaction, and political science.
Structure: Meets twice per week; coding and final projects.
Length: 1 semester
Time per week: 4 hours
Social aspect: yes
Signing up: n/a
16. AI Safety Support "Lots of Links"
Summary: A huge collection of resources in many categories, some of which have no overlap with other collections (e.g. Funding, Landscapes, Talks, and more).
Structure: n/a
Length: 40 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
17. Awesome Artificial Intelligence Alignment
Summary: List of resources organized by type of content. Good source of events, podcasts, and general resources.
Structure: n/a
Length: 40 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
18. Bibliography of all safety research
Summary: Not a course, but a comprehensive list of all papers published in safety.
Structure: Bibliography
Length: 60 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
19. AI alignment newsletter
Summary: Not a course, but a great resource full of summaries of and commentary on safety papers.
Structure: Newsletter
Length: ongoing
Time per week: n/a
Social aspect: no
Signing up: newsletter signup
20. Vael's recommended reading list
Summary: A more general list covering why to work on AI safety in the first place, as well as more in-depth technical recommendations, governance, biosecurity, etc.
Structure: Blog post
Length: 10 weeks
Time per week: n/a
Social aspect: no
Signing up: n/a
21. Neel Nanda's Inside View Resources
Summary: Not a course, but a great resource full of ways to form an inside view in a new field.
Structure: List of resources
Length: n/a
Time per week: n/a
Social aspect: no
Signing up: n/a
22. Existential Risks Introductory Course (ERIC)
Summary: Distance course on general x-risks, including AI. Not a technical course. Includes a 1.5-hour weekly meeting with fellow students and facilitators.
Structure: Distance course
Length: 8 weeks
Time per week: ~2 hr reading, 1.5 hr discussion
Social aspect: yes
Signing up: Expression of interest form