• The Importance of AI Governance
    Apr 28 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage. They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes. Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills. If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    41 mins
  • Ensuring LLM Safety
    Apr 7 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.
    🎯 What you'll learn:
    - Why evaluations are essential for mitigating risk and supporting compliance
    - How to adopt a socio-technical mindset and think in terms of parameter spaces
    - What auditors (like BABL AI) look for when assessing LLM-powered systems
    - A practical, first-principles approach to building and documenting LLM test suites
    - How to connect risk assessments to specific LLM behaviors and evaluations
    - The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks
    Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.
    📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    28 mins
  • Explainability of AI
    Mar 31 2025
    What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think. From recommender systems to large language models, we explore:
    - 🔍 The difference between explainability and interpretability
    - Why even humans struggle to explain their decisions
    - What should be considered a “good enough” explanation
    - The importance of stakeholder context in defining "useful" explanations
    - Why AI literacy and trust go hand-in-hand
    - How concepts from cybersecurity, like zero trust, could inform responsible AI oversight
    Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.
    Mentioned in this episode:
    🔗 Link to BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
    🔗 Link to the "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
    🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    34 mins
  • AI’s Impact on Democracy
    Mar 24 2025
    In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.
    - What happens when personalized content becomes political propaganda?
    - Is YouTube the new social media without us realizing it?
    - Can regulations keep up with AI’s accelerating influence?
    - And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?
    This episode dives into:
    - The unintended consequences of algorithmic curation
    - The collapse of objective reality in the digital age
    - AI-driven misinformation in elections
    - The tension between regulation and free speech
    - Global responses—from Finland’s education system to the EU AI Act
    - What society can (and should) do to fight back
    Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.
    🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    46 mins
  • AI Literacy
    Mar 17 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world.
    Topics covered:
    - The evolution of AI education and BABL AI’s new subscription model for training and certifications
    - Why AI auditing skills are becoming essential for professionals across industries
    - How AI governance roles will shape the future of business leadership
    - The impact of AI on workforce transition and how individuals can future-proof their careers
    - The EU AI Act’s new AI literacy requirements—what they mean for organizations
    Want to level up your AI knowledge? Check out BABL AI’s courses & certifications!
    🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership
    👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    21 mins
  • Shea Visits RightsCon 2025
    Mar 3 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg, as they dive into the big takeaways from the conference and what it means for the future of AI governance.
    What’s in this episode?
    ✅ RightsCon Recap – How AI has taken over the human rights agenda
    ✅ AI Auditing & Accountability – Why organizations need to prove AI compliance
    ✅ Investors Are Paying Attention – Why AI risk management is becoming a priority
    ✅ The Role of Education – Why AI literacy is the key to ethical and responsible AI
    ✅ The International Association of Algorithmic Auditors – A new professional field is emerging
    🚀 If you're passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    24 mins
  • A Conversation with Ezra Schwartz on UX Design
    Feb 24 2025
    Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI.
    In this episode, you'll discover:
    • Ezra’s journey: from being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.
    • Beyond UI design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.
    • The role of UX in AI: learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.
    • Age tech insights: explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.
    If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen.
    👉 Connect with Ezra Schwartz:
    Website: https://www.artandtech.com
    LinkedIn: https://www.linkedin.com/in/ezraschwartz
    Responsible AgeTech Conference (which Ezra is organizing): https://responsible-agetech.org
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    33 mins
  • Interview with Mahesh Chandra Mukkamala from Quantpi
    Feb 17 2025
    🎙️ In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations.
    💡 Topics covered:
    ✔️ What is black box AI testing, and why is it crucial?
    ✔️ How Quantpi ensures model robustness and fairness across different AI systems
    ✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance
    ✔️ Challenges businesses face in AI model evaluation and best practices for testing
    ✔️ Career insights for aspiring AI governance professionals
    With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable.
    🇩🇪 Listeners in Germany can join Quantpi's "RAI in Action" event series kicking off in March: https://www.quantpi.com/resources/events
    🇺🇸 U.S.-based folks can join Quantpi's GTC session on March 20th, "A scalable approach toward trustworthy AI": https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f
    👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
    📚 Sign up for our courses today: https://babl.ai/courses/
    🔗 Follow us for more: https://linktr.ee/babl.ai
    🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!
    📢 Listen to the podcast on all major podcast streaming platforms.
    📩 Connect with Mahesh on LinkedIn: https://www.linkedin.com/in/maheshchandra/
    📌 Follow Quantpi for more AI insights: https://www.quantpi.com
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    28 mins