
Your Undivided Attention


By: Tristan Harris and Aza Raskin, The Center for Humane Technology

About this listen

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective. © 2019-2025 Center for Humane Technology
Episodes
  • The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    Jun 12 2025

    The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

    Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

    This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

    We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    Tristan’s TED talk on the Narrow Path

    Sam’s 95 Theses on AI

    Sam’s proposal for a Manhattan Project for AI Safety

    Sam’s series on AI and Leviathan

    The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson

    Dario Amodei’s Machines of Loving Grace essay

    Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey

    The Paradox of Libertarianism by Tyler Cowen

    Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference

    Further reading on surveillance with 6G

    RECOMMENDED YUA EPISODES

    AGI Beyond the Buzz: What Is It, and Are We Ready?

    The Self-Preserving Machine: Why AI Learns to Deceive

    The Tech-God Complex: Why We Need to be Skeptics

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    CORRECTIONS

    Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is the “Paradox of Libertarianism.”

    Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner’s guide to sociopolitical collapse.”

    48 mins
  • People are Lonelier than Ever. Enter AI.
    May 30 2025

    Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

    And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them.

    How will that change us? And what rules should we set down now to avoid the mistakes of the past?

    These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    “Alone Together,” “Evocative Objects,” “The Second Self,” or any of Sherry Turkle’s other books on how technology mediates our relationships

    Key & Peele - Text Message Confusion

    Further reading on Hinge’s rollout of AI features

    Hinge’s AI principles

    “The Anxious Generation” by Jonathan Haidt

    “Bowling Alone” by Robert Putnam

    The NYT profile on the woman in love with ChatGPT

    Further reading on the Sewell Setzer story

    Further reading on the ELIZA chatbot

    RECOMMENDED YUA EPISODES

    Echo Chambers of One: Companion AI and the Future of Human Connection

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    44 mins
  • Echo Chambers of One: Companion AI and the Future of Human Connection
    May 15 2025

    AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person.

    But these AI companions are not human. They’re platforms designed to maximize user engagement—and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

    RECOMMENDED MEDIA

    Further reading on the rise of addictive intelligence

    More information on Melvin Kranzberg’s laws of technology

    More information on MIT’s Advancing Humans with AI lab

    Pattie and Pat’s longitudinal study on the psycho-social effects of prolonged chatbot use

    Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes

    Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding

    Pat’s study that found humans pre-existing beliefs about AI can have large influence on human-AI interaction

    Further reading on AI’s positivity bias

    Further reading on MIT’s “lifelong kindergarten” initiative

    Further reading on “cognitive forcing functions” to reduce overreliance on AI

    Further reading on the death of Sewell Setzer and his mother’s case against Character.AI

    Further reading on the legislative response to digital companions

    RECOMMENDED YUA EPISODES

    The Self-Preserving Machine: Why AI Learns to Deceive

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.

    42 mins
The absolute best podcast for learning about how machine learning algorithms are unregulated recipes for disaster.

The forefront of the fight against suggestion


This is a powerful “marriage” of insightful questioning and deep expertise. I could feel my IQ going up. 😉

Head Blown!


Thinking back over the past ten years, our lives have been consistently nudged by a small group of elite business people living on the west coast of Northern California. Driven by a need to maximize returns for capital investors and employee stockholders, these people stitched the disparate lives of citizens across many countries and states into expansive for-profit social networks. Now the threads tying billions of people into these social networks tug us in directions known and unknown, but primarily away from patience, presence, and connection, and toward outrage, polarization, and consumerism. While the effects of these trends on our individual and collective psychology have rarely been noticed and were generally neglected until now, a growing movement has begun to pull back the curtain. We are angry about the manipulation, and intent on fixing it.

This podcast lays out, episode by episode and in no uncertain terms, the magnitude of the issue and possible paths forward. With guests who number among the most active and influential whistleblowers on this topic, it has become a comprehensive and inspiring guide to reclaiming our freedom in the digital space. Beyond that, it lays out various competing theories for constructing a socially, economically, and politically fair society that elevates human strengths instead of exploiting human weakness.

I Wish This Was Played In Schools


This analytical summary shifted my consciousness. This format is so helpful. We should name and characterize this presentation format. I think this is the method to enable parallel learning and legislation. Thank you, Ben

Parallel Learning Legislation
