COMPLEXITY

By: Santa Fe Institute
  • Summary

  • The official podcast of the Santa Fe Institute. Subscribe now and be part of the exploration!
    © 2019–2024 Santa Fe Institute
Episodes
  • Nature of Intelligence, Ep. 5: How do we assess intelligence?
    Nov 20 2024

    Guests:

    • Erica Cartmill, Professor, Anthropology and Cognitive Science, Indiana University Bloomington
    • Ellie Pavlick, Assistant Professor, Computer Science and Linguistics, Brown University

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    Follow us on:
    Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education
    • Diverse Intelligences Summer Institute

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

    Talks:

    • How do we know what an animal understands? by Erica Cartmill
    • The Future of Artificial Intelligence by Melanie Mitchell

    Papers & Articles:

    • “Just kidding: the evolutionary roots of playful teasing,” in Biology Letters (September 23, 2020), doi.org/10.1098/rsbl.2020.0370
    • “Overcoming bias in the comparison of human language and animal communication,” in PNAS (November 13, 2023), doi.org/10.1073/pnas.22187991
    • “Using the senses in animal communication,” by Erica Cartmill, in A New Companion to Linguistic Anthropology, Chapter 20, Wiley Online Library (March 21, 2023)
    • “Symbols and grounding in large language models,” in Philosophical Transactions of the Royal Society A (June 5, 2023), doi.org/10.1098/rsta.2022.0041
    • “Emergence of abstract state representations in embodied sequence modeling,” in arXiv (November 7, 2023), doi.org/10.48550/arXiv.2311.02171
    • “How do we know how smart AI systems are?” in Science (July 13, 2023), doi.org/10.1126/science.adj59
    48 mins
  • Nature of Intelligence, Ep. 4: Babies vs Machines
    Nov 6 2024

    Guests:

    • Linda Smith, Distinguished Professor and Chancellor's Professor, Department of Psychological and Brain Sciences, Indiana University Bloomington
    • Michael Frank, Benjamin Scott Crocker Professor of Human Biology, Department of Psychology, Stanford University

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

    Talks:

    • Why “Self-Generated Learning” May Be More Radical and Consequential Than First Appears by Linda Smith
    • Children’s Early Language Learning: An Inspiration for Social AI, by Michael Frank at Stanford HAI
    • The Future of Artificial Intelligence by Melanie Mitchell

    Papers & Articles:

    • “Curriculum Learning With Infant Egocentric Videos,” in NeurIPS 2023 (September 21, 2023)
    • “The Infant’s Visual World: The Everyday Statistics for Visual Learning,” by Swapnaa Jayaraman and Linda B. Smith, in The Cambridge Handbook of Infant Development: Brain, Behavior, and Cultural Context, Chapter 20, Cambridge University Press (September 26, 2020)
    • “Can lessons from infants solve the problems of data-greedy AI?” in Nature (March 18, 2024), doi.org/10.1038/d41586-024-00713-5
    • “Episodes of experience and generative intelligence,” in Trends in Cognitive Sciences (October 19, 2022), doi.org/10.1016/j.tics.2022.09.012
    • “Baby steps in evaluating the capacities of large language models,” in Nature Reviews Psychology (June 27, 2023), doi.org/10.1038/s44159-023-00211-x
    • “Auxiliary task demands mask the capabilities of smaller language models,” in COLM (July 10, 2024)
    • “Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model,” in Cognitive Science (First published: 14 May 2024), doi.org/10.1111/cogs.13448
    39 mins
  • Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?
    Oct 23 2024

    Guests:

    • Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University
    • Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London; Principal Research Scientist, Google DeepMind

    Hosts: Abha Eli Phoboo & Melanie Mitchell

    Producer: Katherine Moncure

    Podcast theme music by: Mitch Mignano

    More info:

    • Tutorial: Fundamentals of Machine Learning
    • Lecture: Artificial Intelligence
    • SFI programs: Education

    Books:

    • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
    • The Technological Singularity by Murray Shanahan
    • Embodiment and the inner life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
    • Solving the Frame Problem by Murray Shanahan
    • Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick

    Talks:

    • The Future of Artificial Intelligence by Melanie Mitchell
    • Artificial intelligence: A brief introduction to AI by Murray Shanahan

    Papers & Articles:

    • “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” in New York Times (Feb 16, 2023)
    • “Bayesian Models of Conceptual Development: Learning as Building Models of the World,” in Annual Review of Developmental Psychology Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
    • “Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models,” in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
    • “Role play with large language models,” in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
    • “Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks,” arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
    • “Talking about Large Language Models,” in Communications of the ACM (Feb 12, 2024)
    • “Simulacra as Conscious Exotica,” in arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422
    45 mins

What listeners say about COMPLEXITY

Average customer ratings
Overall
  • 4 out of 5 stars
  • 5 Stars
    2
  • 4 Stars
    1
  • 3 Stars
    0
  • 2 Stars
    1
  • 1 Star
    0
Performance
  • 4 out of 5 stars
  • 5 Stars
    3
  • 4 Stars
    0
  • 3 Stars
    0
  • 2 Stars
    1
  • 1 Star
    0
Story
  • 4.5 out of 5 stars
  • 5 Stars
    3
  • 4 Stars
    0
  • 3 Stars
    1
  • 2 Stars
    0
  • 1 Star
    0

Reviews

  • Overall
    4 out of 5 stars
  • Performance
    5 out of 5 stars
  • Story
    5 out of 5 stars

I’m a big fan

The only source to follow this new way of modeling life and human behavior.

  • Overall
    2 out of 5 stars
  • Performance
    2 out of 5 stars
  • Story
    3 out of 5 stars

No clear goal for the panel

No clear point to the discussion. Everyone talked about what they think time is, but no one arrived at a takeaway or insight.
