• How will a Trump Presidency Impact AI Regulation
    Nov 18 2024
    🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations? In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖 Key topics include:
    - Federal deregulation and the push for state-level AI governance.
    - The potential repeal of Biden's executive order on AI.
    - Implications for organizations navigating a fragmented compliance framework.
    - The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.
    - How deregulation might affect innovation, litigation, and risk management in AI development.
    This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    37 mins
  • A BABL Deep Dive
    Nov 4 2024
    Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice. Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems. This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management. Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    51 mins
  • AI Literacy Requirements of the EU AI Act
    Oct 21 2024
    👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20". 📚 Courses Mentioned:
    1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
    2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
    3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
    4️⃣ BABL AI Course Catalog: https://babl.ai/courses/
    🔗 Follow us for more: https://linktr.ee/babl.ai
    In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements. Throughout the episode, Dr. Brown covers:
    - AI literacy obligations for providers and deployers under the EU AI Act.
    - The importance of AI literacy in ensuring compliance.
    - An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    20 mins
  • AI Frenzy: Will It Really Replace Our Jobs?
    Oct 7 2024
    In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into one of the most debated questions in the AI world today: Will AI really replace our jobs? Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is? They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond. If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    17 mins
  • How NIST Might Help Deloitte With the FTC
    Sep 23 2024
    Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations. Shea and Jeffery focus on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand. 📍 Topics discussed:
    - Deloitte’s Medicaid eligibility system in Texas
    - The role of the FTC and the NIST AI Risk Management Framework
    - How AI governance can safeguard against unintentional harm
    - Why proactive risk management is key, even for non-AI systems
    - What companies can learn from this case to improve compliance and oversight
    Tune in now and stay ahead of the curve! 🔊✨ 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    32 mins
  • 'The Regulatory Landscape for AI in Insurance'
    Sep 2 2024
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    34 mins
  • Where to Get Started with the EU AI Act: Part Two
    Aug 12 2024
    In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there. In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:
    - Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.
    - Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.
    - Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.
    - Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.
    - Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.
    Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future. 🔗 Key Topics Discussed:
    - What documentation and transparency measures are required to demonstrate compliance?
    - How can businesses effectively maintain and update these records?
    - How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?
    - What are the biggest challenges you foresee in complying with the EU AI Act?
    - What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?
    - How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?
    - What are the penalties for non-compliance, and how will they be determined and applied?
    - What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?
    - What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?
    - How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?
    👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    46 mins
  • Where to Get Started with the EU AI Act: Part One
    Aug 12 2024
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations. With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations. The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act. Don't miss this informative session to ensure your organization is ready for the changes ahead! 🔗 Key Topics Discussed:
    - What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? What impact will this have outside the EU?
    - What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?
    - Are there any particular high-risk AI systems that require special attention under the new regulations?
    - How do you assess and manage the risks associated with AI systems?
    - What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of?
    - How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?
    - How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act?
    📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips! 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. #AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    21 mins