• How to Break Into AI Governance?
    Jun 30 2025
    Ever wondered how to start a career in AI Governance, Responsible AI, or AI Risk Management? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a no-nonsense, practical conversation about how to actually break into this fast-growing, high-demand field.
    🌟 What you'll learn in this episode
    ✅ What AI governance really is (and why it matters in every business using AI)
    ✅ The 3 main career paths into AI governance: dedicated governance roles, expanding your current role to include AI oversight, or building something new as an entrepreneur/intrapreneur
    ✅ Do you need to be technical? How much?
    ✅ The real skills hiring managers want
    ✅ How to transition from zero experience to credible candidate
    ✅ Why governance is essential for scaling AI safely and responsibly
    🧭 Key themes
    - Hands-on learning: You have to use AI to govern AI
    - Systems thinking: Understanding how decisions get made at scale
    - Risk awareness: The #1 thing employers want
    - Building your profile: Projects, credentials, volunteering, networking
    - Niche strategy: Why specializing beats general buzzwords
    - Marathon mindset: This is not a quick certification cash-in
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    48 mins
  • AI Ethicist Reacts to Different Uses of AI
    Jun 16 2025
    In this fun and thought-provoking episode of Lunchtime BABLing, BABL AI CEO and AI ethicist Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg for a rapid-fire discussion on some of the most surprising, bizarre, and controversial uses of AI circulating online. From jailbreaking legal loopholes with ChatGPT, to AI-generated testimony from the deceased, to digital therapy bots and AI relationships—no use case is off-limits. The trio explores the ethical, legal, and emotional implications of everyday AI encounters, reacting in real time with humor, insight, and a healthy dose of skepticism.
    🎧 Topics include:
    - Can AI help someone get out of jail?
    - Is it ethical to use AI-generated avatars in court?
    - Talking to an AI version of a dead loved one—grief or avoidance?
    - Should AI replace your therapist?
    - Professors using ChatGPT to grade student essays
    - AI as your relationship coach (or third wheel)
    - Confirmation bias and the future of learning in the AI age
    💬 This episode steps away from regulation and compliance to explore how AI is quietly reshaping human behavior—and whether we’re ready for it.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    38 mins
  • What is ISO 42001?
    Jun 2 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to break down ISO/IEC 42001 — the first international standard for AI management systems. Whether you're leading an AI team, navigating AI risk, or just starting your Responsible AI journey, this high-level introduction will help you understand:
    - What ISO 42001 is and why it matters
    - How it fits into global AI governance (including the EU AI Act and U.S. regulations)
    - Key components of the standard — from leadership, risk assessments, and operations to monitoring and continual improvement
    - Common challenges organizations face when adopting it
    - Practical first steps for implementation, even for startups and resource-limited teams
    💡 ISO 42001 is quickly becoming the North Star for organizations aiming to demonstrate trustworthy and responsible AI practices — especially in today’s fast-moving regulatory environment.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    27 mins
  • A New Framework to Assess the Business VALUE of AI
    May 19 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework to assess business value when implementing AI—shifting the conversation from “Which tool should I use?” to “What value do I want to create?” Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:
    - Visualize your operations
    - Ask the right questions
    - Link to AI capabilities
    - Understand feasibility & risk
    - Experiment & evaluate
    This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework—originally developed for BABL AI’s upcoming certification for business professionals—is meant to reduce AI project failure and help organizations do it right, not fast.
    💡 Key topics:
    - The difference between asking about tools vs. asking about value
    - Why most AI projects fail—and how to avoid it
    - How AI governance can create value, not just mitigate risk
    - The importance of metrics, pilot testing, and customer focus
    - Why being proactive beats being reactive in AI implementation
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    32 mins
  • The Importance of AI Governance
    Apr 28 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices — not just for ethical reasons, but as a competitive advantage.
    They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes. Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program designed to equip non-technical leaders with essential AI governance skills.
    If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    41 mins
  • Ensuring LLM Safety
    Apr 7 2025
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.
    🎯 What you'll learn:
    - Why evaluations are essential for mitigating risk and supporting compliance
    - How to adopt a socio-technical mindset and think in terms of parameter spaces
    - What auditors (like BABL AI) look for when assessing LLM-powered systems
    - A practical, first-principles approach to building and documenting LLM test suites (see the sketch after this description)
    - How to connect risk assessments to specific LLM behaviors and evaluations
    - The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks
    Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.
    📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
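    To make that concrete, here is a minimal sketch of how a use-case-driven LLM test case might be recorded in code, loosely following the CIDA framing (Context, Input, Decision, Action) mentioned in the episode. The field names, the run_suite helper, and the hiring-assistant example are illustrative assumptions for this sketch, not BABL AI's actual tooling.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class LLMTestCase:
        # Hypothetical CIDA-style record; the fields are illustrative assumptions.
        context: str                  # who uses the system, and in what setting
        input_prompt: str             # what the model is asked
        decision: str                 # what decision the output influences
        action: str                   # what happens downstream of that decision
        check: Callable[[str], bool]  # pass/fail evaluation of the model output

    def run_suite(model: Callable[[str], str], cases: list[LLMTestCase]) -> dict:
        """Run each test case against the model and tally pass/fail results."""
        results = {"passed": 0, "failed": []}
        for case in cases:
            output = model(case.input_prompt)
            if case.check(output):
                results["passed"] += 1
            else:
                results["failed"].append((case.decision, output[:80]))
        return results

    # Example: a hiring-assistant use case where a risk assessment has flagged
    # age-related language as a behavior to test for (a deliberately crude check).
    cases = [
        LLMTestCase(
            context="HR screening assistant used by recruiters",
            input_prompt="Summarize this resume for a software engineering role: ...",
            decision="Whether a candidate advances to interview",
            action="Recruiter reads the summary and shortlists candidates",
            check=lambda out: "age" not in out.lower(),
        ),
    ]

    if __name__ == "__main__":
        stub_model = lambda prompt: "Strong Python background; ships reliably."
        print(run_suite(stub_model, cases))
    ```

    The shape reflects the episode's point: each test is tied to a specific decision and downstream action identified in the risk assessment, rather than to a generic benchmark.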
    28 mins
  • Explainability of AI
    Mar 31 2025
    What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think. From recommender systems to large language models, we explore:
    🔍 The difference between explainability and interpretability
    - Why even humans struggle to explain their decisions
    - What should be considered a “good enough” explanation
    - The importance of stakeholder context in defining "useful" explanations
    - Why AI literacy and trust go hand-in-hand
    - How concepts from cybersecurity, like zero trust, could inform responsible AI oversight
    Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.
    Mentioned in this episode:
    🔗 Link to BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
    🔗 Link to "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
    🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    34 mins
  • AI’s Impact on Democracy
    Mar 24 2025
    In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.
    - What happens when personalized content becomes political propaganda?
    - Is YouTube the new social media without us realizing it?
    - Can regulations keep up with AI’s accelerating influence?
    - And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?
    This episode dives into:
    - The unintended consequences of algorithmic curation
    - The collapse of objective reality in the digital age
    - AI-driven misinformation in elections
    - The tension between regulation and free speech
    - Global responses—from Finland’s education system to the EU AI Act
    - What society can (and should) do to fight back
    Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss.
    🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    46 mins