• Chatbots, Synthetic Users and AI for User Research with Hassan Aleem
    Nov 20 2024

    AI Chatbots and Synthetic Users with Hassan Aleem

    In this special episode of the Behavioral Design Podcast, host Samuel kicks off a new mini-series featuring expert practitioners from the Nuance Behavior team.

    This week’s guest is Hassan Aleem, a respected behavioral practitioner with a Ph.D. in neuroscience and extensive experience in industries like fintech, health wearables, and public health.

    Together, Samuel and Hassan explore the fascinating intersection of AI and behavioral science. They discuss AI’s impact on user research, the opportunities and challenges of AI-powered chatbots, the role of synthetic users in behavioral research, and the potential of AI to streamline literature reviews.

    The conversation culminates in a thought-provoking discussion: can AI truly understand and design for beauty?

    This episode is packed with insights on how AI can enhance behavioral science practice while emphasizing the irreplaceable value of human expertise.

    TIMESTAMPS

    00:00 Introduction to the Behavioral Design Podcast
    02:00 Meet Hassan Aleem: Neuroscientist and Behavioral Practitioner
    02:37 Exploring AI in Behavioral Science
    03:42 The Role of AI in User Research
    10:21 Chatbots and Behavioral Design
    18:50 AI in Literature Reviews and Research
    34:59 Can AI Understand Beauty?
    40:48 Conclusion and Final Thoughts

    LINKS:

    • Hassan's LinkedIn
    • Nuance Behavior Website

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    41 mins
  • Personalized AI with Amy Bucher
    Nov 13 2024
    Using AI to Change Human Behavior

    In this episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer explore the fascinating intersection of AI and behavioral science with Amy Bucher, Chief Behavior Officer at Lirio.

    Together, they dive into the challenges and opportunities of integrating AI with behavioral science for health interventions, focusing on the critical need to design AI tools with human behavior in mind. Key topics include the role of reinforcement learning and precision nudging in behavior change, the importance of grounded behavioral insights to cut through AI hype, and Amy’s experiences with personalized health interventions.

    Amy also sheds light on the effectiveness of digital tools in behavior change and shares her vision for the future of AI in behavioral health. Tune in for an insightful discussion on how behavioral science can shape the next generation of AI-driven health interventions!

    LINKS:

    Amy Bucher:

    • Lirio Website
    • LinkedIn Profile

    Further Reading on AI and Behavioral Science:

    • How Machine Learning and Artificial Intelligence are Used in Digital Behavior Change Interventions: A Scoping Review
    • The Power of Large Behavior Models in Healthcare Consumer Engagement
    • Moral Agents for Sustainable Transitions
    • Personalized Digital Health Communications to Increase COVID-19 Vaccination in Underserved Populations: A Double Diamond Approach to Behavioral Design
    • The Patient Experience of the Future is Personalized: Using Technology to Scale an N of 1 Approach
    • Digital Twins and the Emerging Science of Self: Implications for Digital Health Experience Design and “Small” Data
    • Feasibility of a Reinforcement Learning–Enabled Digital Health Intervention to Promote Mammograms
    • Precision Nudging and Health Interventions
    • Reinforcement Learning in Behavior Change

    TIMESTAMPS:

    00:30 Behavioral Science and AI: A Crucial Intersection
    07:44 Introducing Amy Bucher
    10:43 Scoping Review on AI in Behavior Change
    16:05 Challenges and Misconceptions in AI
    22:07 Reinforcement Learning and AI Agents
    28:40 Designing Interventions with AI and Behavioral Science
    31:32 Operationalizing Behavior Change Techniques
    35:25 Challenges in Measuring Engagement
    42:43 The Role of Behavioral Science in AI
    46:53 Quickfire Round: To AI or Not to AI
    49:25 Controversial Opinions on AI
    53:52 Closing Thoughts and Acknowledgements

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    57 mins
  • Misinformation Machines with Gordon Pennycook – Part 2
    Nov 6 2024
    Debunkbot and Other Tools Against Misinformation

    In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation.

    Building on their previous conversation about misinformation’s impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively. Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques that empower individuals to better evaluate the information they encounter.

    The conversation highlights recent advancements in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression.

    This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.

    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Debunkbot - The AI That Reduces Belief in Conspiracy Theories
    • Interventions Toolbox - Strategies to Combat Misinformation

    TIMESTAMPS:

    01:27 Intro and Early Voting
    06:45 Welcome back, Gordon!
    07:52 Strategies to Combat Misinformation
    11:10 Nudges and Behavioral Interventions
    14:21 Comparing Intervention Strategies
    19:08 Psychological Inoculation and Prebunking
    32:21 Echo Chambers and Online Misinformation
    34:13 Individual vs. Policy Interventions
    36:21 If You Owned a Social Media Company
    37:49 Algorithm Changes and Platform Quality
    38:42 Community Notes and Fact-Checking
    39:30 Reddit’s Moderation System
    42:07 Generative AI and Fact-Checking
    43:16 AI Debunking Conspiracy Theories
    45:26 Effectiveness of AI in Changing Beliefs
    51:32 Potential Misuse of AI
    55:13 Final Thoughts and Reflections

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    1 hr and 3 mins
  • Misinformation Machines with Gordon Pennycook – Part 1
    Nov 4 2024

    The Role of Misinformation and AI in the US Election with Gordon Pennycook

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel explore the complex world of misinformation in the context of the U.S. elections with special guest Gordon Pennycook, a psychology professor at Cornell University.

    The episode covers the effects of misinformation on democratic participation, and how behavioral science sheds light on reasoning errors that drive belief in falsehoods. Gordon shares insights from his groundbreaking research on misinformation, exploring how falsehoods gain traction and the role AI can play in both spreading and mitigating misinformation.

    The conversation also tackles the evolution of misinformation, including the impact of social media and disinformation campaigns that blur the line between truth and fiction.

    Tune in to hear why certain falsehoods spread faster than truths, the psychological appeal of conspiracy theories, and how humor can amplify the reach of misinformation in surprising ways.


    LINKS:

    Gordon Pennycook:

    • Google Scholar Profile
    • Twitter
    • Personal Website
    • Cornell University Faculty Page

    Further Reading on Misinformation:

    • Brandolini’s Law and the Spread of Falsehoods
    • Role of AI in Misinformation
    • The Psychology of Conspiracy Theories


    TIMESTAMPS:

    00:00 Introduction

    03:14 Behavioral Science and Misinformation

    05:28 Introducing Gordon Pennycook

    10:02 The Evolution of Misinformation

    12:46 AI’s Role in Misinformation

    14:51 Impact of Misinformation on Elections

    21:43 COVID-19 and Vaccine Misinformation

    26:32 Technological Advancements in Misinformation

    33:50 Conspiracy Theories

    35:39 Misinformation and Social Media

    42:35 The Role of Humor in Misinformation

    48:08 Quickfire Round: To AI or Not to AI

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    53 mins
  • The Dark Side of AI – Halloween Special
    Oct 30 2024

    In this spine-chilling Halloween special of the Behavioral Design Podcast, co-hosts Aline Holzwarth and Samuel Salzer take listeners on a journey into the eerie intersection of AI and behavioral science. They explore the potential ethical and social consequences of AI, from our urge to anthropomorphize machines to the creeping influence of human biases in AI engineering.

    The episode kicks off with the hosts sharing their favorite Halloween costumes and family traditions before delving into the broader theme of Frankenstein as an apt metaphor for AI. They discuss the human inclination to attribute human qualities to non-human entities and the ethical implications of creating machines that mirror humanity. The conversation deepens with reflections on biases in AI development, risks of ‘playing God,’ and the tension between technological progress and human oversight.

    In a thrilling twist, the hosts read a co-authored sci-fi story written with ChatGPT, illustrating the potential dark consequences of unchecked AI advancement. The episode wraps up with Halloween-themed wishes, encouraging listeners to ponder the boundaries between human and machine as they celebrate the holiday.


    Timestamps:

    03:38 – Frankenstein: Revisiting the original story

    09:09 – Frankenstein’s Modern AI Metaphor: Parallels to today’s technology

    18:06 – Reflections on AI and Anthropomorphism: The urge to humanize machines

    36:31 – Exploring Human Biases in AI Development: How biases shape AI

    42:06 – Trust in AI: Human vs. algorithmic decision-making

    46:45 – The Personalization of AI Systems: Pros and cons of tailored experiences

    49:10 – The Ethics of Playing God with AI: Examining the risks

    55:56 – Concluding Thoughts and Halloween Wishes: Reflecting on AI’s duality

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr
  • Recommender Systems with Carey Morewedge
    Oct 23 2024
    In this episode of the Behavioral Design Podcast, we delve into the world of AI recommender systems with special guest Carey Morewedge, a leading expert in behavioral science and AI.

    The discussion covers the fundamental mechanics behind AI recommendation systems, including content-based filtering, collaborative filtering, and hybrid models. Carey explains how platforms like Netflix, Twitter, and TikTok use implicit data to make predictions about user preferences, and how these systems often prioritize short-term engagement over long-term satisfaction.

    The episode also touches on ethical concerns, such as the gap between revealed and normative preferences, and the risks of relying too heavily on algorithms without considering the full context of human behavior.

    Join co-hosts Aline Holzwarth and Samuel Salzer as they explore, together with Carey, the delicate balance between human preferences and algorithmic influence. This episode is a must-listen for anyone interested in understanding the complexities of AI-driven recommendations!

    --

    LINKS:

    Carey Morewedge:

    • Google Scholar Profile
    • Carey Morewedge - LinkedIn
    • Boston University Faculty Page
    • Personal Website

    Understanding AI Recommender Systems:

    • How Netflix’s Recommendation System Works
    • Implicit Feedback for Recommender Systems (Research Paper)
    • Why People Don’t Trust Algorithms (Harvard Business Review)
    • Nuance Behavior Website

    --

    TIMESTAMPS:

    00:00 The 'Do But Not Recommend' Game
    07:53 The Complexity of Recommender Systems
    08:58 Types of Recommender Systems
    12:08 Introducing Carey Morewedge
    14:13 Understanding Decision Making in AI
    17:00 Challenges in AI Recommendations
    32:13 Long-Term Impact on User Behavior
    33:00 Understanding User Preferences
    35:03 Challenges with A/B Testing
    40:06 Algorithm Aversion
    46:51 Quickfire Round: To AI or Not to AI
    52:55 The Future of AI and Human Relationships

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    59 mins
  • AI and Behavioral Science – What You Need to Know
    Oct 16 2024
    In the latest episode of the Behavioral Design Podcast, we are excited to launch Season 4 with an in-depth exploration of how behavioral science and AI converge, setting the stage for an engaging and thought-provoking season. This episode tackles big questions around AI’s growing influence, offering insights into both its promise and its challenges, especially as they relate to human behavior and decision-making.

    Join co-hosts Aline Holzwarth and Samuel Salzer as they introduce key themes for the season, including the profound implications of AI for behavioral science and society at large. The episode opens with breaking news from the AI world and the significance of neural networks, which serve as the foundation of modern AI systems. The hosts explain how neural networks work and contrast them with the extraordinary complexity of the human brain.

    The episode covers essential concepts for behavioral scientists, including large language models (LLMs), the backbone of generative AI, as well as prompt engineering and AI agents. These tools are transforming fields from healthcare to customer service, and the hosts break down their real-world applications, highlighting how they are used to enhance decision-making, automate workflows, and drive personalized interventions.

    Samuel and Aline debunk several common myths about AI, such as whether generative AI truly enhances creativity or whether more complex models are always better. They also explore algorithmic bias versus human bias, discussing how AI can both amplify and address societal inequities depending on how it is designed and implemented.

    In “To AI or Not to AI”, this season’s quickfire round, the hosts weigh in on whether they’d trust AI for tasks like driving their kids to daycare or offering relationship advice, sparking a thought-provoking discussion on AI’s role in everyday life.

    This episode is a must-listen for anyone curious about the evolving relationship between behavioral science and AI, offering both high-level insights and detailed explorations of the real-world implications of these technologies.

    --

    TIMESTAMPS:

    00:00 Introduction to the Behavioral Design Podcast
    02:36 Breaking News
    04:30 Understanding Neural Networks
    09:38 The Beauty and Complexity of the Human Brain
    17:37 Season Preview
    21:53 Meet Your Hosts
    29:00 Nuanced Behavior
    30:43 AI 101 for Behavioral Scientists
    44:14 Debunking AI Myths
    01:02:15 To AI or Not to AI: Quickfire Round
    01:14:45 Final Thoughts

    LINKS:

    • Geoffrey Hinton’s Talk on AI and John Hopfield’s Contributions to Neural Networks
    • Sherry Turkle’s Memoir “The Empathy Diaries”
    • Marvin Minsky and the Concept of the Brain as a Machine
    • Cassie Kozyrkov’s Blog on Machine Learning
    • Sendhil Mullainathan’s Paper on Algorithmic Fairness
    • Generative AI enhances individual creativity but reduces the collective diversity of novel content
    • Superintelligence: Paths, Dangers, Strategies
    • Biased Algorithms Are Easier to Fix Than Biased People
    • Nuance Behavior Website

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    1 hr and 18 mins
  • 2023 in Review – Season 3 Finale 🌟
    Dec 14 2023
    We've reached the end of Season 3! 🎉 In this finale, we give you the inside scoop on behavioral design in 2023. From our favorite resources to AI to films, we cover all things behavioral design so that you're in the know too! All resources are linked below. Enjoy!

    From the bottom of our hearts, thank you for supporting us throughout the year! We appreciate you! 🙏 🙌

    Gratitude:

    • A systematic review of the strength of evidence for the most commonly recommended happiness strategies in mainstream media | Nature Human Behaviour (Dunigan Folk & Elizabeth Dunn)
    • No Sweat book – Michelle Segar
    • Preregistering, transparency, and large samples boost psychology studies’ replication rate to nearly 90% | Science
    • High replicability of newly discovered social-behavioural findings is achievable | Nature Human Behaviour

    Favorite Resources:

    • BehaviorBytes
    • Women in Behavioral Science and the Women in Behavioral Science LinkedIn group – Darcie Piechowski
    • Lesson on Fraud and Whistleblowing – Zoe Ziani
    • Choice Overload: It’s not about the number – Hassan & Roos
    • 7 Routes to Applied Behavioural Science Experimentation and Observation – Affective + OECD
    • Mapping Behavioural Journeys – Common Thread
    • A Manifesto for Applying Behavioral Science – The Behavioural Insights Team
    • Behavioral Science as a Specialization – Connor Joyce
    • The Science of Context – Jared Peterson

    Top 10 films:

    • Fallen Leaves
    • Close
    • Passages
    • Luxembourg, Luxembourg
    • Past Lives
    • Beau Is Afraid
    • One Fine Morning
    • Barbie
    • Oppenheimer
    • Infinity Pool

    --

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro
    52 mins