
Hundreds of prominent AI scientists and other notable figures signed a statement in 2023 saying that mitigating the risk of extinction from AI should be a global priority. At 80,000 Hours, we’ve considered risks from AI to be the world’s most pressing problem since 2016.
But what led us to this conclusion? Could AI really cause human extinction? We’re not certain, but we think the risk is worth taking very seriously.
In particular, as companies create increasingly powerful AI systems, there's a concerning chance that these systems could develop dangerous behaviours, such as power-seeking and deception, and ultimately disempower humanity.
This article is written by Cody Fenwick and Zershaaneh Qureshi, and narrated by Zershaaneh Qureshi. It discusses why future AI systems could disempower humanity, what current AI research reveals about behaviours like power-seeking and deception, and how you can help mitigate the dangers.
You can see the original article — packed with graphs, images, footnotes, and further resources — on the 80,000 Hours website:
https://80000hours.org/problem-profiles/risks-from-power-seeking-ai/
Audio editing: Dominic Armstrong
Production: Zershaaneh Qureshi, Elizabeth Cox, and Katy Moore
80,000 Hours Podcast