Artificial Intelligence and Existential Risk (AI-Xrisk)
David Krueger is a PhD student at the University of Montreal and a former intern on the technical AI safety teams at DeepMind and the Future of Humanity Institute. He has also worked as a freelance career consultant with 80,000 Hours, helping people get into technical AI safety research.
Artificial intelligence may surpass human intelligence in our lifetime.
I'll explain why this might lead to human extinction and what we can do to reduce the probability of such an outcome.
I may also include arguments for prioritizing this issue over others, and/or a more detailed overview of research aimed at reducing AI-Xrisk.