Pre-read: Predicting the future of advanced AI

Limits to prediction (Spring 2024)

Arvind Narayanan

(As usual, links to reading list items are in bold.)

This week is about predicting the future of advanced AI, including existential risk. The first paper is on scaling laws and emergence, two seemingly contrasting phenomena in large language models. As models are scaled up, the improvement in performance is highly predictable in one sense (aggregate measures such as loss) yet seems highly unpredictable in another (the appearance of specific capabilities).

Turning to existential risk, we start with an ambitious forecasting tournament that sought to predict several types of existential risk: AI, pathogens, nuclear war, and non-anthropogenic risks. Read the first four sections and the AI subsection of Section 5.

Then we will dive into methods for forecasting, contrasting two domains: asteroids and AI. On asteroid-driven extinction we will read a classic paper. Make note of which assumptions, theories, and data go into making predictions, and how the authors discuss uncertainty.

Then we will read a draft paper (available on Ed) by Sayash Kapoor and me that argues, in part, that forecasting AI existential risk is a fundamentally meaningless exercise (in contrast to other types of existential risk, such as asteroid impacts). We'll spend about half the class time workshopping this paper: participants will provide critical feedback to help the authors improve it.