
Justin Helps is the science educator behind Primer, a YouTube channel with 2M subscribers. We cover how he got into AI safety, debate AGI timelines, and discuss why he puts his P(Doom) at 70% by 2100 😱.
Timestamps
00:00:00 — Cold Open
00:00:38 — Introducing Justin Helps
00:02:03 — What's Your P(Doom)?™
00:03:38 — Justin's First Exposure to AI X-Risk
00:04:49 — Major Disagreements with Eliezer Yudkowsky
00:09:46 — Debating the Timeline to AGI
00:12:24 — Metaculus Prediction Market Estimates AGI by 2032
00:20:06 — Misguided Conceptions of AI's Limitations
00:25:23 — Only a 5% P(Doom) by 2040
00:28:40 — AIs Will Not Care About the Human Species
00:31:00 — Summarizing Justin's Position So Far
00:36:17 — High P(Doom), but We're Not Depressed
00:40:14 — Justin's "Computer Man" Thought Experiment
00:51:16 — Should We Pause AGI Development?
00:54:15 — AI Doom Is a Serious Concern
Links
Primer’s Video on AI Doom — https://www.youtube.com/watch?v=Qg5QXY_qZuI
Primer on YouTube — https://youtube.com/@PrimerBlobs
Primer’s Website — https://primerlearning.org/
Justin Helps on X — https://x.com/Helpsypoo
Harry Potter and the Methods of Rationality — https://hpmor.com/
Feeling Rational by Eliezer Yudkowsky — https://www.lesswrong.com/posts/SqF8cHjJv43mvJJzx/feeling-rational
Pause AI — https://pauseai.info
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or, to really take things to the next level, Donate 🙏
No transcript available for this episode.

Doom Debates