80,000 Hours Podcast

By: Rob, Luisa, and the 80,000 Hours team

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.
Episodes
  • Rebuilding after apocalypse: What 13 experts say about bouncing back
    Jul 15 2025

    What happens when civilisation faces its greatest tests?

    This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

    Learn more and see the full transcript: https://80k.info/cr25

    Chapters:

    • Cold open (00:00:00)
    • Luisa’s intro (00:01:16)
    • Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
    • Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
    • Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
    • Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
    • Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
    • Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
    • Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
    • Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
    • Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
    • Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
    • Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
    • Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
    • Dave Denkenberger on disruptions to electricity and communications (01:44:43)
    • Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
    • Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
    • Andy Weber on tech to help with pandemics (02:15:45)
    • Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
    • Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
    • Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
    • Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
    • Luisa Rodriguez on why she’s optimistic survivors wouldn’t turn on one another (03:08:02)
    • David Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
    • Zach Weinersmith on what we’d practically need to do to save a pocket of humanity in space (03:18:57)
    • Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
    • Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
    • Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
    • Luisa Rodriguez on how fast populations might rebound (04:00:07)
    • David Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
    • Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
    • Will MacAskill on the virtues of the potato (04:19:43)
    • Luisa’s outro (04:25:37)

    Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

    Content editing: Katy Moore and Milo McGuire
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Transcriptions and web: Katy Moore

    4 hrs and 27 mins
  • #220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years
    Jul 8 2025
    Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there’s a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.

    As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.

    Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.

    The explosive scenario: Once you’ve automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.

    With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.

    The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You’ve got brilliant AI researchers, but they’re all waiting for experiments to run on the same limited set of chips, so can only make modest progress.

    Ryan’s median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some in the AI companies imagine.

    And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you’ve been able to do is keep pace.

    Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We’re extrapolating from a regime that we don’t even understand to a wildly different regime,” Ryan believes, “so no one knows.”

    But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.

    In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.

    Summary, video, and full transcript: https://80k.info/rg25

    Recorded February 21, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s Ryan Greenblatt? (00:01:10)
    • How close are we to automating AI R&D? (00:01:27)
    • Really, though: how capable are today's models? (00:05:08)
    • Why AI companies get automated earlier than others (00:12:35)
    • Most likely ways for AGI to take over (00:17:37)
    • Would AGI go rogue early or bide its time? (00:29:19)
    • The “pause at human level” approach (00:34:02)
    • AI control over AI alignment (00:45:38)
    • Do we have to hope to catch AIs red-handed? (00:51:23)
    • How would a slow AGI takeoff look? (00:55:33)
    • Why might an intelligence explosion not happen for 8+ years? (01:03:32)
    • Key challenges in forecasting AI progress (01:15:07)
    • The bear case on AGI (01:23:01)
    • The change to “compute at inference” (01:28:46)
    • How much has pretraining petered out? (01:34:22)
    • Could we get an intelligence explosion within a year? (01:46:36)
    • Reasons AIs might struggle to replace humans (01:50:33)
    • Things could go insanely fast when we automate AI R&D. Or not. (01:57:25)
    • How fast would the intelligence explosion slow down? (02:11:48)
    • Bottom line for mortals (02:24:33)
    • Six orders of magnitude of progress... what does that even look like? (02:30:34)
    • Neglected and important technical work people should be doing (02:40:32)
    • What's the most promising work in governance? (02:44:32)
    • Ryan's current research priorities (02:47:48)

    Tell us what you thought! https://forms.gle/hCjfcXGeLKxm5pLaA

    Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
    Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
    Music: Ben Cordell
    Transcriptions and web: Katy Moore
    2 hrs and 51 mins
  • #219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand
    Jun 24 2025

    The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

    Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

    Links to learn more, video, highlights, and full transcript: https://80k.info/to25

    As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.

    What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high quality data drying up, that approach petered out in 2024.

    So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise in “reasoning models” that are at the frontier today.

    The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems, at a cost of over $1,000 per question.

    This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

    Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

    Recorded on May 23, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Toby Ord is back — for a 4th time! (00:01:20)
    • Everything has changed (and changed again) since 2020 (00:01:37)
    • Is x-risk up or down? (00:07:47)
    • The new scaling era: compute at inference (00:09:12)
    • Inference scaling means less concentration (00:31:21)
    • Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
    • The new regime makes 'compute governance' harder (00:41:08)
    • How 'IDA' might let AI blast past human level — or not (00:50:14)
    • Reinforcement learning brings back 'reward hacking' agents (01:04:56)
    • Will we get warning shots? Will they even help? (01:14:41)
    • The scaling paradox (01:22:09)
    • Misleading charts from AI companies (01:30:55)
    • Policy debates should dream much bigger (01:43:04)
    • Scientific moratoriums have worked before (01:56:04)
    • Might AI 'go rogue' early on? (02:13:16)
    • Lamps are regulated much more than AI (02:20:55)
    • Companies made a strategic error shooting down SB 1047 (02:29:57)
    • Companies should build in emergency brakes for their AI (02:35:49)
    • Toby's bottom lines (02:44:32)


    Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Camera operator: Jeremy Chevillotte
    Transcriptions and web: Katy Moore

    2 hrs and 48 mins
Reviews
For anyone who's interested in audiobooks, especially non-fiction work, this podcast is perfect. For people used to short-form podcasts, the 2-5 hour range may seem intimidating, but for those used to the length of audiobooks it's great. The length allows the interviewer to ask genuinely interesting questions, with a bit of back-and-forth with the interviewee.

Brilliant
