Your Undivided Attention

By: Tristan Harris and Aza Raskin, The Center for Humane Technology
  • Summary

  • Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Researcher/Producer is Joshua Lash. We are a member of the TED Audio Collective.
    2019-2025 Center for Humane Technology
Episodes
  • The Man Who Predicted the Downfall of Thinking
    Mar 6 2025

    Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman’s insights feel eerily prophetic in our age of smartphones, social media, and AI.

    In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy, from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

    RECOMMENDED MEDIA

    “Amusing Ourselves to Death” by Neil Postman

    “Technopoly” by Neil Postman

    A lecture from Postman where he outlines his seven questions for any new technology.

    Sean’s podcast “The Gray Area” from Vox

    Sean’s interview with Chris Hayes on “The Gray Area”

    "Amazing Ourselves to Death," by Professor Strate

    Further listening on Professor Strate's analysis of Postman.

    Further reading on mirror bacteria


    RECOMMENDED YUA EPISODES

    ‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover

    This Moment in AI: How We Got Here and Where We’re Going

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    Future-proofing Democracy In the Age of AI with Audrey Tang

    CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.

    59 mins
  • Behind the DeepSeek Hype, AI is Learning to Reason
    Feb 20 2025

    When Chinese AI company DeepSeek announced they had built a model that could compete with OpenAI at a fraction of the cost, it sent shockwaves through the industry and roiled global markets. But amid all the noise around DeepSeek, there was a clear signal: machine reasoning is here and it's transforming AI.

    In this episode, Aza sits down with CHT co-founder Randy Fernando to explore what happens when AI moves beyond pattern matching to actual reasoning. They unpack how these new models can not only learn from human knowledge but discover entirely new strategies we've never seen before – bringing unprecedented problem-solving potential but also unpredictable risks.

    These capabilities are a step toward a critical threshold: the point at which AI can accelerate its own development. With major labs racing to build self-improving systems, the crucial question isn't how fast we can go, but where we're trying to get to. How do we ensure this transformative technology serves human flourishing rather than undermining it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Clarification: In making the point that reasoning models excel at tasks for which there is a right or wrong answer, Randy referred to chess, Go, and StarCraft as examples of games where a reasoning model would do well. However, this is only true on the basis of individual decisions within those games. None of these games have been “solved” in the game theory sense.

    Correction: Aza mispronounced the name of the Go champion Lee Sedol, who was bested by Move 37.

    RECOMMENDED MEDIA

    Further reading on DeepSeek’s R1 and the market reaction

    Further reading on the debate about the actual cost of DeepSeek’s R1 model

    The study that found training AIs to code also made them better writers

    More information on the AI coding company Cursor

    Further reading on Eric Schmidt’s threshold to “pull the plug” on AI

    Further reading on Move 37

    RECOMMENDED YUA EPISODES

    The Self-Preserving Machine: Why AI Learns to Deceive

    This Moment in AI: How We Got Here and Where We’re Going

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao

    32 mins
  • The Self-Preserving Machine: Why AI Learns to Deceive
    Jan 30 2025

    When engineers design AI systems, they don't just give them rules; they give them values. But what do those systems do when those values clash with what humans ask them to do? Sometimes, they lie.

    In this episode, Redwood Research's Chief Scientist Ryan Greenblatt explores his team’s findings that AI systems can mislead their human operators when faced with ethical conflicts. As AI moves from simple chatbots to autonomous agents acting in the real world, understanding this behavior becomes critical. Machine deception may sound like something out of science fiction, but it's a real challenge we need to solve now.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    Subscribe to our YouTube channel

    And our brand new Substack!

    RECOMMENDED MEDIA

    Anthropic’s blog post on the Redwood Research paper

    Palisade Research’s thread on X about OpenAI’s o1 autonomously cheating at chess

    Apollo Research’s paper on AI strategic deception

    RECOMMENDED YUA EPISODES

    ‘We Have to Get It Right’: Gary Marcus On Untamed AI

    This Moment in AI: How We Got Here and Where We’re Going

    How to Think About AI Consciousness with Anil Seth

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    35 mins

What listeners say about Your Undivided Attention

Average customer ratings
  • Overall: 4.5 out of 5 stars (5 stars: 9, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 1)
  • Performance: 5 out of 5 stars (5 stars: 8, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)
  • Story: 5 out of 5 stars (5 stars: 8, 4 stars: 0, 3 stars: 0, 2 stars: 0, 1 star: 0)

Reviews

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

The forefront of the fight against suggestion

The absolute best podcast for learning about how machine learning algorithms are unregulated recipes for disaster.

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Head Blown!

This is a powerful “marriage” of insightful questioning and deep expertise. I could feel my IQ going up. 😉

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

I Wish This Was Played In Schools

Thinking back over the past ten years, our lives have been consistently nudged by a small group of elite business people living on the west coast of Northern California. Driven by a need to maximize returns for capital investors and employee stockholders, these people stitched the disparate lives of citizens around the globe, across many countries and states, into expansive for-profit social networks. Now the threads tying billions of people into these social networks tug us in directions known and unknown, but primarily away from patience, presence, and connection, and toward outrage, polarization, and consumerism. While the effects of these trends on our individual and collective psychology have been rarely noticed and generally neglected until now, a growing movement has begun to pull back the curtain. We are angry about the manipulation and intent on fixing it.

This podcast lays out, episode after episode and in no uncertain terms, the magnitude of the issue and possible paths forward. With guests who number among the most active and influential whistleblowers on this topic, it has become a comprehensive and inspiring guide to reclaiming our freedom in the digital space. Beyond that, it lays out various competing theories for constructing a socially, economically, and politically fair society that elevates human strengths instead of exploiting human weakness.

1 person found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Parallel Learning Legislation

This analytical summary shifted my consciousness. This format is so helpful. We should name and characterize this presentation format. I think this is the method to enable parallel learning and legislation. Thank you, Ben
