• AI Safety...Ok Doomer: with Anca Dragan

  • Aug 28 2024
  • Length: 38 mins
  • Podcast

  • Summary

  • Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

    For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

    Thanks to everyone who made this possible, including but not limited to:

    Presenter: Professor Hannah Fry

    Series Producer: Dan Hardoon

    Editor: Rami Tzabar, TellTale Studios

    Commissioner & Producer: Emma Yousif

    Music composition: Eleni Shaw

    Camera Director and Video Editor: Tommy Bruce

    Audio Engineer: Perry Rogantin

    Video Studio Production: Nicholas Duke

    Video Editor: Bilal Merhi

    Video Production Design: James Barton

    Visual Identity and Design: Eleanor Tomlinson

    Commissioned by Google DeepMind

    Want to share feedback? Why not leave a review on your favorite streaming platform? Have a suggestion for a guest we should have on next? Leave us a comment on YouTube, and stay tuned for future episodes.
