
80,000 Hours Podcast


By: Rob, Luisa and the 80,000 Hours team

About this listen

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.

All rights reserved
Episodes
  • #218 – Hugh White on why Trump is abandoning US hegemony – and that’s probably good
    Jun 12 2025

    For decades, US allies have slept soundly under the protection of America’s overwhelming military might. Donald Trump — with his threats to ditch NATO, seize Greenland, and abandon Taiwan — seems hell-bent on shattering that comfort.

    But according to Hugh White — one of the world's leading strategic thinkers, emeritus professor at the Australian National University, and author of Hard New World: Our Post-American Future — Trump isn't destroying American hegemony. He's simply revealing that it's already gone.

    Links to learn more, video, highlights, and full transcript: https://80k.info/hw

    “Trump has very little trouble accepting other great powers as co-equals,” Hugh explains. And that happens to align perfectly with a strategic reality the foreign policy establishment desperately wants to ignore: fundamental shifts in global power have made the costs of maintaining a US-led hegemony prohibitively high.

    Even under Biden, when Russia invaded Ukraine, the US sent weapons but explicitly ruled out direct involvement. Ukraine matters far more to Russia than it does to America, and this “asymmetry of resolve” makes Putin’s nuclear threats credible where America’s counterthreats simply aren’t. Hugh’s gloomy prediction: “Europeans will end up conceding to Russia whatever they can’t convince the Russians they’re willing to fight a nuclear war to deny them.”

    The Pacific tells the same story. Despite Obama’s “pivot to Asia” and Biden’s tough talk about “winning the competition for the 21st century,” actual US military capabilities there have barely budged while China’s have soared, along with its economy — which is now bigger than the US’s, as measured at purchasing power parity. Containing China and defending Taiwan would require America to spend 8% of GDP on defence (versus 3.5% today) — and convince Beijing it’s willing to accept Los Angeles being vaporised.

    Unlike during the Cold War, no president — Trump or otherwise — can make that case to voters.

    Our new “multipolar” future, split between American, Chinese, Russian, Indian, and European spheres of influence, is a “darker world” than the golden age of US dominance. But Hugh’s message is blunt: for better or worse, 35 years of American hegemony are over.

    Recorded 30 May 2025.

    Chapters:

    • 00:00:00 Cold open
    • 00:01:25 US dominance is already gone
    • 00:03:26 US hegemony was the weird aberration
    • 00:13:08 Why the US bothered being the 'new Rome'
    • 00:23:25 Evidence the US is accepting the multipolar global order
    • 00:36:41 How Trump is advancing the inevitable
    • 00:43:21 Rubio explicitly favours this outcome
    • 00:45:42 Trump is half-right that the US was being ripped off
    • 00:50:14 It doesn't matter if the next president feels differently
    • 00:56:17 China's population is shrinking, but it doesn't matter
    • 01:06:07 Why Hugh disagrees with other realists like Mearsheimer
    • 01:10:52 Could the US be persuaded to spend 2x on defence?
    • 01:16:22 A multipolar world is bad, but better than nuclear war
    • 01:21:46 Will the US invade Panama? Greenland? Canada?!
    • 01:32:01 What should everyone else do to protect themselves in this new world?
    • 01:39:41 Europe is strong enough to take on Russia
    • 01:44:03 But the EU will need nuclear weapons
    • 01:48:34 Cancel (some) orders for US fighter planes
    • 01:53:40 Taiwan is screwed, even with its AI chips
    • 02:04:12 South Korea has to go nuclear too
    • 02:08:08 Japan will go nuclear, but can't be a regional leader
    • 02:11:44 Australia is defensible but needs a totally different military
    • 02:17:19 AGI may or may not overcome existing nuclear deterrence
    • 02:34:24 How right is realism?
    • 02:40:17 Has a country ever gone to war over morality alone?
    • 02:44:45 Hugh's message for Americans
    • 02:47:12 Why America temporarily stopped being isolationist

    Tell us what you thought! https://forms.gle/AM91VzL4BDroEe6AA

    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Transcriptions and web: Katy Moore

    2 hrs and 49 mins
  • #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress
    Jun 2 2025
    AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes. (See graph; a quick sketch of the doubling arithmetic appears below.)

    These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.

    Today’s guest, Beth Barnes, is CEO of METR (Model Evaluation & Threat Research) — the leading organisation measuring these capabilities.

    Links to learn more, video, highlights, and full transcript: https://80k.info/bb

    Beth’s team has been timing how long it takes skilled humans to complete projects of varying length, then seeing how AI models perform on the same work. The resulting paper “Measuring AI ability to complete long tasks” made waves by revealing that the planning horizon of AI models was doubling roughly every seven months. It’s regarded by many as the most useful AI forecasting work in years.

    Beth has found models can already do “meaningful work” improving themselves, and she wouldn’t be surprised if AI models were able to autonomously self-improve as little as two years from now — in fact, “It seems hard to rule out even shorter [timelines]. Is there 1% chance of this happening in six, nine months? Yeah, that seems pretty plausible.”

    Beth adds: “The sense I really want to dispel is, ‘But the experts must be on top of this. The experts would be telling us if it really was time to freak out.’ The experts are not on top of this. Inasmuch as there are experts, they are saying that this is a concerning risk. … And to the extent that I am an expert, I am an expert telling you you should freak out.”

    What did you think of this episode? https://forms.gle/sFuDkoznxBcHPVmX6

    Chapters:

    • Cold open (00:00:00)
    • Who is Beth Barnes? (00:01:19)
    • Can we see AI scheming in the chain of thought? (00:01:52)
    • The chain of thought is essential for safety checking (00:08:58)
    • Alignment faking in large language models (00:12:24)
    • We have to test model honesty even before they're used inside AI companies (00:16:48)
    • We have to test models when unruly and unconstrained (00:25:57)
    • Each 7 months models can do tasks twice as long (00:30:40)
    • METR's research finds AIs are solid at AI research already (00:49:33)
    • AI may turn out to be strong at novel and creative research (00:55:53)
    • When can we expect an algorithmic 'intelligence explosion'? (00:59:11)
    • Recursively self-improving AI might even be here in two years — which is alarming (01:05:02)
    • Could evaluations backfire by increasing AI hype and racing? (01:11:36)
    • Governments first ignore new risks, but can overreact once they arrive (01:26:38)
    • Do we need external auditors doing AI safety tests, not just the companies themselves? (01:35:10)
    • A case against safety-focused people working at frontier AI companies (01:48:44)
    • The new, more dire situation has forced changes to METR's strategy (02:02:29)
    • AI companies are being locally reasonable, but globally reckless (02:10:31)
    • Overrated: Interpretability research (02:15:11)
    • Underrated: Developing more narrow AIs (02:17:01)
    • Underrated: Helping humans judge confusing model outputs (02:23:36)
    • Overrated: Major AI companies' contributions to safety research (02:25:52)
    • Could we have a science of translating AI models' nonhuman language or neuralese? (02:29:24)
    • Could we ban using AI to enhance AI, or is that just naive? (02:31:47)
    • Open-weighting models is often good, and Beth has changed her attitude to it (02:37:52)
    • What we can learn about AGI from the nuclear arms race (02:42:25)
    • Infosec is so bad that no models are truly closed-weight models (02:57:24)
    • AI is more like bioweapons because it undermines the leading power (03:02:02)
    • What METR can do best that others can't (03:12:09)
    • What METR isn't doing that other people have to step up and do (03:27:07)
    • What research METR plans to do next (03:32:09)

    This episode was originally recorded on February 17, 2025.

    Video editing: Luke Monsour and Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Music: Ben Cordell
    Transcriptions and web: Katy Moore
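    To make the “7-month rule” concrete, here is a minimal Python sketch of the extrapolation it implies. The inputs are illustrative assumptions, not METR's published model: a one-hour task horizon today and a constant seven-month doubling time.

        # Back-of-the-envelope sketch of the 7-month doubling rule above.
        # Assumptions are illustrative, not METR's code: today's 50%-success
        # task horizon is 1 hour, and it doubles every 7 months.
        DOUBLING_TIME_MONTHS = 7
        CURRENT_HORIZON_HOURS = 1.0

        def horizon_after(months: float) -> float:
            """Projected 50%-success task horizon, in hours, `months` from now."""
            return CURRENT_HORIZON_HOURS * 2 ** (months / DOUBLING_TIME_MONTHS)

        for m in (7, 14, 28, 56):
            print(f"{m:>2} months out: ~{horizon_after(m):g} hour(s)")
        # Prints: 2, 4, 16, and 256 hour(s) respectively.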
    3 hrs and 47 mins
  • Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more
    May 23 2025

    What if there’s something it’s like to be a shrimp — or a chatbot?

    For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we’re creating?

    We’ve pulled together clips from past conversations with researchers and philosophers who’ve spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty.

    Links to learn more and full transcript: https://80k.info/nhs

    Chapters:

    • Cold open (00:00:00)
    • Luisa's intro (00:00:57)
    • Robert Long on what we should picture when we think about artificial sentience (00:02:49)
    • Jeff Sebo on what the threshold is for AI systems meriting moral consideration (00:07:22)
    • Meghan Barrett on the evolutionary argument for insect sentience (00:11:24)
    • Andrés Jiménez Zorrilla on whether there’s something it’s like to be a shrimp (00:15:09)
    • Jonathan Birch on the cautionary tale of newborn pain (00:21:53)
    • David Chalmers on why artificial consciousness is possible (00:26:12)
    • Holden Karnofsky on how we’ll see digital people as... people (00:32:18)
    • Jeff Sebo on grappling with our biases and ignorance when thinking about sentience (00:38:59)
    • Bob Fischer on how to think about the moral weight of a chicken (00:49:37)
    • Cameron Meyer Shorb on the range of suffering in wild animals (01:01:41)
    • Sébastien Moro on whether fish are conscious or sentient (01:11:17)
    • David Chalmers on when to start worrying about artificial consciousness (01:16:36)
    • Robert Long on how we might stumble into causing AI systems enormous suffering (01:21:04)
    • Jonathan Birch on how we might accidentally create artificial sentience (01:26:13)
    • Anil Seth on which parts of the brain are required for consciousness (01:32:33)
    • Peter Godfrey-Smith on uploads of ourselves (01:44:47)
    • Jonathan Birch on treading lightly around the “edge cases” of sentience (02:00:12)
    • Meghan Barrett on whether brain size and sentience are related (02:05:25)
    • Lewis Bollard on how animal advocacy has changed in response to sentience studies (02:12:01)
    • Bob Fischer on using proxies to determine sentience (02:22:27)
    • Cameron Meyer Shorb on how we can practically study wild animals’ subjective experiences (02:26:28)
    • Jeff Sebo on the problem of false positives in assessing artificial sentience (02:33:16)
    • Stuart Russell on the moral rights of AIs (02:38:31)
    • Buck Shlegeris on whether AI control strategies make humans the bad guys (02:41:50)
    • Meghan Barrett on why she can’t be totally confident about insect sentience (02:47:12)
    • Bob Fischer on what surprised him most about the findings of the Moral Weight Project (02:58:30)
    • Jeff Sebo on why we’re likely to sleepwalk into causing massive amounts of suffering in AI systems (03:02:46)
    • Will MacAskill on the rights of future digital beings (03:05:29)
    • Carl Shulman on sharing the world with digital minds (03:19:25)
    • Luisa's outro (03:33:43)

    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Additional content editing: Katy Moore and Milo McGuire
    Transcriptions and web: Katy Moore

    3 hrs and 35 mins
For anyone who's interested in audiobooks, especially non-fiction work, this podcast is perfect. For people used to short-form podcasts, the 2-5 hour range may seem intimidating, but for those used to the length of audiobooks it's great. The length allows the interviewer to ask genuinely interesting questions, with a bit of back-and-forth with the interviewee.

Brilliant
