Episodes

  • #81 – Cynthia Schuck on Quantifying Animal Welfare
    Nov 21 2024

    Dr Cynthia Schuck-Paim is the Scientific Director of the Welfare Footprint Project, a scientific effort to quantify animal welfare to inform practice, policy, investing and purchasing decisions.

    You can find links and a transcript at www.hearthisidea.com/episodes/schuck.

    We discuss:

    • How to begin thinking about quantifying animal experiences in a cross-comparable way
    • Whether the ability to feel pain is unique to big-brained animals, or more widespread in the tree of life
    • How fish farming compares to poultry and livestock farming
    • How worried to be about bird flu zoonosis
    • Whether different animal species experience time differently
    • Whether positive experiences like joy could make life worth living for some farmed animals
    • How animal welfare advocates can learn from anti-corruption nonprofits

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

    1 hr and 37 mins
  • #80 – Dan Williams on How Persuasion Works
    Oct 26 2024

    Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge.

    You can find links and a transcript at www.hearthisidea.com/episodes/williams.

    We discuss:

    • If reasoning is so useful, why are we so bad at it?
    • Do some bad ideas really work like ‘mind viruses’? Is the ‘luxury beliefs’ concept useful?
    • What's up with the idea of a ‘marketplace for ideas’? Are people shopping for new beliefs, or to rationalise their existing attitudes?
    • How dangerous is misinformation, really? Can we ‘vaccinate’ or ‘inoculate’ against it?
    • Will AI help us form more accurate beliefs, or will it persuade more people of unhinged ideas?
    • Does fact-checking work?
    • Under transformative AI, should we worry more about the suppression or the proliferation of counter-establishment ideas?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

    1 hr and 49 mins
  • #79 – Tamay Besiroglu on Explosive Growth from AI
    Sep 14 2024

    Tamay Besiroglu is a researcher working on the intersection of economics and AI. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI.

    You can find links and a transcript at www.hearthisidea.com/episodes/besiroglu.

    In this episode we talk about the prospect of explosive economic growth from AI. We discuss:

    • The argument for explosive growth from ‘increasing returns to scale’
    • Does AI need to be able to automate R&D to cause rapid growth?
    • Which theories of growth best explain the Industrial Revolution; and what do they predict from AI?
    • What happens to human incomes under near-total job automation?
    • Are regulations likely to slow down frontier AI progress enough to prevent this? Might AI go the way of nuclear power?
    • Will AI hit on resource or power limits before explosive growth? Won't it run out of data first?
    • Why aren't academic economists more interested in the prospect of explosive growth, if indeed it is so plausible?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best way to support the show. Thanks for listening!

    2 hrs and 9 mins
  • #78 – Jacob Trefethen on Global Health R&D
    Sep 8 2024

    Jacob Trefethen oversees Open Philanthropy’s science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge.

    You can find links and a transcript at www.hearthisidea.com/episodes/trefethen.

    In this episode we talk about research and development for global health. We discuss:

    • Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading
    • How R&D for neglected diseases works:
      • How much does the world spend on it?
      • How do drugs for neglected diseases go from design to distribution?
    • No-brainer policy ideas for speeding up global health R&D
    • Comparing health R&D to public health interventions (like bed nets)
    • Comparing the social returns to frontier R&D (‘Progress Studies’) with global health R&D
    • Why is there no GiveWell-equivalent for global health R&D?
    • Won't AI do all the R&D for us soon?

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    2 hrs and 30 mins
  • #77 – Elizabeth Seger on Open Sourcing AI
    Jul 25 2024

    Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI.

    You can find links and a transcript at www.hearthisidea.com/episodes/seger.

    In this episode we talk about the risks and benefits of open-sourcing AI models. We discuss:

    • What ‘open source’ really means
    • What is (and isn’t) open about ‘open source’ AI models
    • How open source weights and code are useful for AI safety research
    • How and when the costs of open sourcing frontier model weights might outweigh the benefits
    • Analogies to ‘open sourcing nuclear designs’ and the open science movement

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    Note that this episode was recorded before the release of Meta’s Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older version of the definition maintained by OSI (roughly version 0.0.3). The current OSI definition (0.0.8) now does a much better job of distinguishing between different model components.

    1 hr and 21 mins
  • #76 – Joe Carlsmith on Scheming AI
    Mar 16 2024

    Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford.

    You can find links and a transcript at www.hearthisidea.com/episodes/carlsmith.

    In this episode we talked about a report Joe recently authored, titled ‘Scheming AIs: Will AIs fake alignment during training in order to get power?’. The report “examines whether advanced AIs that perform well in training will be doing so in order to gain power later”, a behaviour Carlsmith calls ‘scheming’.

    We talk about:

    • Distinguishing ways AI systems can be deceptive and misaligned
    • Why powerful AI systems might acquire goals that go beyond what they’re trained to do, and how those goals could lead to scheming
    • Why scheming goals might perform better (or worse) in training than less worrying goals
    • The ‘counting argument’ for scheming AI
    • Why goals that lead to scheming might be simpler than the goals we intend
    • Things Joe is still confused about, and research project ideas

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr and 52 mins
  • #75 – Eric Schwitzgebel on Digital Consciousness and the Weirdness of the World
    Feb 4 2024

    Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include the connections between empirical psychology and the philosophy of mind, and the nature of belief. He is the author of The Weirdness of the World.

    We talk about:

    • The possibility of digital consciousness
      • Policy ideas for avoiding major moral mistakes around digital consciousness
      • Prospects for the science of consciousness, and why we likely won't have clear answers in time
    • Why introspection is much less reliable than most people think
      • How and why we invent false stories about our own choices without realising
      • What randomly sampling people's experiences reveals about what we're doing with most of our attention
    • The possibility of 'overlapping minds'
    • How and why our actions might have infinite effects, both good and bad
      • Whether it would be good news to learn that our actions have infinite effects, or that the universe is infinite in extent
    • The best science fiction on digital minds and AI

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr and 59 mins
  • #74 – Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons
    Dec 19 2023

    Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of its Biodefence Programme.

    In this episode we talk about:

    • Where the belief that 'bioweapons are easy to make' came from and why it has been difficult to change
    • Why transferring tacit knowledge is so difficult, and the particular challenges that rogue actors face
    • Lastly, what Sonia makes of the AI-bio risk discourse, and what types of technological advances would cause her concern

    You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

    1 hr and 54 mins