How AI Happens

By: Sama
  • Summary

  • How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
© 2021 Sama, Inc.
Episodes
  • dbt Labs Co-Founder Drew Banin
    Nov 21 2024

    Key Points From This Episode:

    • Drew and his co-founders’ background working together at RJ Metrics.
    • The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
    • Initial adoption of dbt Labs and why it was so well-received from the very beginning.
    • The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
    • Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
    • Unpacking examples where LLMs struggle with specific questions, like math problems.
    • The importance of thoughtful prompt engineering and application design with LLMs.
    • What is needed to maximize the utility of LLMs in enterprise settings.
    • How understanding the specific use case can help you get better results from LLMs.
    • What developers can do to constrain the search space and provide better output.
    • Why Drew believes prompt engineering will become less important for the average user.
    • The exciting potential of vector embeddings and the ongoing evolution of LLMs.

    Quotes:

    “Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]

    “One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]

    “I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]

    “My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

    Links Mentioned in Today’s Episode:

    Understanding the Limitations of Mathematical Reasoning in Large Language Models

    Drew Banin on LinkedIn

    dbt Labs

    How AI Happens

    Sama

    28 mins
  • Saidot CEO Meeri Haataja
    Oct 31 2024

In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can hold companies accountable to AI governance practices in reality. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!

    Key Points From This Episode:

    • Insights from the AI Pact conference.
    • The reality of holding AI companies accountable.
    • What inspired her to start Saidot to offer solutions for AI transparency and accountability.
    • How Meeri assesses companies and their organizational culture.
    • What makes generative AI more risky than other forms of machine learning.
    • Reasons that use-related risks are the most common sources of AI risks.
    • Meeri’s thoughts on the impact of the EU AI Act.

    Quotes:

    “It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]

    “Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]

    “Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]

    “Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]

    Links Mentioned in Today’s Episode:

    Saidot

    Meeri Haataja on LinkedIn

    Meeri Haataja on Instagram

    Meeri Haataja on X

    How AI Happens

    Sama

    25 mins
  • FICO Chief Analytics Officer Dr. Scott Zoldi
    Oct 18 2024

    In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.

    Key Points From This Episode:

    • How Scott integrates his role as an inventor with his duties as FICO CAO.
    • Why he believes that mindshare is an essential leadership quality.
    • What sparked his interest in responsible AI as a physicist.
    • The shifting demographics of those who develop machine learning models.
    • Insight into the use of blockchain to advance responsible AI.
    • How FICO uses blockchain to ensure auditable ML decision-making.
    • Operationalizing AI and the typical mistakes companies make in the process.
    • The value of integrating data science and software engineering teams from the start.
    • A fear-free perspective on what Scott finds so uniquely exciting about AI.

    Quotes:

    “I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]

    “[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]

    “Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

    Links Mentioned in Today’s Episode:

    FICO

    Dr. Scott Zoldi

    Dr. Scott Zoldi on LinkedIn

    Dr. Scott Zoldi on X

    FICO Falcon Fraud Manager

    How AI Happens

    Sama

    34 mins
