• EP 546: AI Bias Exposed: Real-World Strategies to Keep LLMs Honest
    Jun 13 2025

    AI is powerful. Buuuuuuuut also dangerously biased. Is your team ready to face this reality? Can you even spot when your AI is lying? We're talking real-world solutions with a bias-detection pro.

    Tune in or risk your AI becoming more fiction than fact.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan and Anatoly questions on AI bias

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:
    1. Understanding Cognitive Bias
    2. Cognitive Bias in AI Models
    3. Training Data and Model Development
    4. Future of AI and Managing Bias

    Timestamps:
    02:00 Cognitive Bias Mitigation Platform
    08:50 AI Enthusiasm vs. Cautionary Tales
    12:48 AI Bias Stems from Human Bias
    16:14 Influence of System Prompts on Bias
    19:46 AI Information Parsing Challenges
    20:56 AI Training and Labeling Challenges
    24:05 "Achieve AI Success with Expertise"
    28:23 Bias and Diversity in AI Models
    31:33 Addressing Cognitive Bias in Data

    Keywords:
    Cognitive bias, AI failure, large language models, ChatGPT, Gemini, Copilot, Claude, bias reflection, AI news, AI sales tools, Microsoft, Salesforce, Microsoft 365 Copilot, Sales Agent, Sales Chat, Google, AI mode, Google One AI Premium, Gemini 2.0, OpenAI, AI agents, enterprise automation tools, confirmation bias, heuristic, framing bias, hallucination, training data, model perception, data labeling, reasoning models, agentic environments.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    32 mins
  • EP 545: How to build reliable AI agents for mission-critical tasks
    Jun 12 2025

    Every enterprise is legit rushing to build AI agents.

    But there are no instructions.

    So, what do you do?
    How do you make sure it works?
    How do you track reliability and traceability?

    We dive in and find out.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. Google Gemini's Veo 3 Video Creation Tool
    2. Trust & Reliability in AI Agents
    3. Building Reliable AI Agents Guide
    4. Agentic AI for Mission-Critical Tasks
    5. Micro Agentic System Architecture Discussion
    6. Nondeterministic Software Challenges for Enterprises
    7. Galileo's Agent Leaderboard Overview
    8. Multi-Agent Systems: Future Protocols


    Timestamps:
    00:00 "Building Reliable Agentic AI"

    05:23 The Future of Autonomous AI Agents

    08:43 Chatbots vs. Agents: Key Differences

    10:48 "Galileo Drives Enterprise AI Adoption"

    13:24 Utilizing AI in Regulated Industries

    18:10 Test-Driven Development for Reliable Agents

    22:07 Evolving AI Models and Tools

    24:05 "Multi-Agent Systems Revolution"

    27:40 Ensuring Reliability in Single Agents


    Keywords:
    Google Gemini, Agentic AI, reliable AI agents, mission-critical tasks, large language models, AI reliability platform, AI implementation, microservices, micro agents, ChuckGPT, AI observability, enterprise applications, nondeterministic software, multi-agentic systems, AI trust, AI authentication, AI communication, AI production, test-driven development, agent evals, Hugging Face space, tool calls, expert protocol, MCP protocol, Google A2A protocol, multi-agent systems, agent reliability, real-time prevention, CI/CD, mission-critical agents, nondeterministic world, reliable software, Galileo, agent leaderboard, AI planning, AI execution, observability feedback, API calls, tool selection quality.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    29 mins
  • EP 544: AI Magic - Convert Outdated Content into Engagement Gold
    Jun 11 2025

    Got old docs that need updating?

    Yeah, yeah, yeah. You can do that with AI. But that's as basic as a Pumpkin Spice Latte in October.

    What if, in a few minutes, you could not just bring your old docs to life with AI by making them interactive, but also ADD AI functionality into those docs?

    We show you how it's done.

    In our new segment, Working Wednesdays with AI, we tackle practical use cases that even non-technical people can pick up and run with.

    Don't miss this one.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. Transform Outdated Content with AI
    2. AI Tools for Efficient Document Updating
    3. Google Gemini Deep Research Capabilities
    4. AI Studio's PDF Transcription Feature
    5. Enhance Presentations with AI Studio
    6. Interactive Presentations with Gemini Canvas
    7. Embedding AI in Presentations
    8. Google AI Studio's Developer Features


    Timestamps:
    00:00 Transforming Document Creation with AI

    03:53 Rethinking AI in Content Management

    09:29 Updating Small Language Models Presentation

    11:55 AI Workflow Insights

    14:30 Google Gemini Evaluates Web Data

    17:27 Exploring Google AI Studio

    20:54 Unscripted Presentation Update Strategy

    25:17 "Deep Research and Presentation Updates"

    29:23 "Redefining 'Small' in Language Models"

    33:00 "2025: On-Device AI Revolution"

    35:43 "AI-Enhanced Slide Summarization"

    38:42 "Interactive Live AI Chat Widget"

    41:34 "AI's Efficiency Explained"

    43:41 "Everyday AI Practice Wednesdays"


    Keywords:
    Google Gemini, Gemini 2.5, AI Studio, Google AI Pro plan, Ultra plan, generative AI, convert outdated content, engagement gold, AI magic, interactive presentation, Commonplace AI tasks, Deep research, AI workflows, Google AI Studio capabilities, AI document transcription, Google Gemini Deep Research, interactive and slick interface, embedded AI capabilities, transcribe PDF presentation, factual keyword updates, targeted deep research, Google search grounding, transforming outdated documents, Canvas mode, integrate AI with presentations, automate mundane tasks, creative AI applications, AI-driven efficiency, enhancing content with AI, live AI capabilities, small language models, presentation refinement, knowledge worker efficiency, research improvements.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    45 mins
  • EP 543: Apple’s Weaponized Research: Inside its "Illusion of Thinking" paper
    Jun 10 2025

    Apple’s new AI paper says advanced AI thinking is an "illusion."

    Is this a groundbreaking scientific discovery?

    Or is it a cynical, weaponized piece of marketing dropped the weekend before WWDC to hide the fact that Apple is catastrophically behind in the AI race?

    We read the paper so you don't have to.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. Apple's Viral Illusion of Thinking Paper
    2. Critique of Apple's AI Research Methodology
    3. Apple's AI Deception and Flawed Logic
    4. Strategic Corporate Propaganda in AI Research
    5. Apple's $2 Trillion AI Market Loss
    6. AI Reasoning Models Tool Use Restrictions
    7. Tower of Hanoi and Token Limitations
    8. Apple Research's Industry Skepticism Strategy


    Timestamps:
    00:00 Daily AI Insights & Growth

    04:32 "Evaluating AI: Illusion of Thinking"

    08:30 "Apple's AI Papers: $2 Trillion Dilemma"

    11:23 Apple's Missed $2 Trillion Opportunity

    15:06 Apple's AI Oversight: Massive Blunder

    18:55 PhD AI Research: Industry Influence

    21:43 Apple Challenges AI Test Validity

    26:15 AI Model Testing Complexity

    29:16 "The Challenge of Complex Puzzles"

    33:10 AI Testing Limits: A Designed Failure

    36:45 Questioning Study Methodology

    37:54 iPhone SOS Satellite Test Fails

    41:29 Flawed Report Undermines Credibility

    46:18 "Corporate Strategy Masked as Research"

    50:25 Apple's Controversial Stance on Research


    Keywords:
    Apple's illusion of thinking, Apple's AI research paper, AI reasoning models, large reasoning models, strategic deception, cherry picked science, weaponized research, flawed logic, cherry picked testing, all or nothing grading, Apple's marketing tactics, Apple vs. Microsoft, Apple's AI failures, WWDC conference, Apple Intelligence, Ajax model, Apple's AI spending, Apple's competition, generative AI, Microsoft, Google, OpenAI, code usage restriction, token output limits, reasoning collapse, AI's reasoning limitations, Tower of Hanoi, reasoning lab, Claude 3.7 Sonnet, DeepSeek, thinking models, chain of thought processing, corporate propaganda, premeditated media strike, fear, uncertainty, doubt, strategic media strike, data contamination, Apple's research credibility, research methodology, scientific integrity.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    55 mins
  • EP 542: Apple’s controversial AI study, Google’s new model and more AI News That Matters
    Jun 9 2025

    ↳ Why is Anthropic in hot water with Reddit?
    ↳ Will OpenAI become the de facto business AI tool?
    ↳ Did Apple make a mistake in its buzzworthy AI study?
    ↳ And why did Google release a new model when it was already on top?

    So many AI questions. We’ve got the AI answers.

    Don’t waste hours each day trying to keep up with AI developments.
    We do that for you on Mondays with our weekly AI News That Matters segment.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. OpenAI's Advanced Voice Mode Update
    2. Reddit's Lawsuit Against Anthropic
    3. OpenAI's New Cloud Connectors
    4. Google's Gemini 2.5 Pro Release
    5. DeepSeek Accused of Data Sourcing
    6. Anthropic Cuts Windsurf Claude Access
    7. Apple's AI Reasoning Models Study
    8. Meta's Investment in Scale AI

    Timestamps:
    00:00 Weekly AI News Summary

    04:27 "Advanced Voice Mode Limitations"

    09:07 Reddit's Role in AI Tensions

    10:23 Reddit's Impact on Content Strategy

    16:10 "RAG's Evolution: Accessible Data Insights"

    19:16 AI Model Update and Improvements

    22:59 DeepSeek Accused of Data Misuse

    24:18 DeepSeek Accused of Distilling AI Data

    28:20 Anthropic Limits Windsurf Claude Access

    32:37 "Study Questions AI Reasoning Models"

    36:06 Apple's Dubious AI Research Tactics

    39:36 Meta-Scale AI Partnership Potential

    40:46 AI Updates: Apple's Gap Year

    43:52 AI Updates: Voice, Lawsuits, Models


    Keywords:
    Apple AI study, AI reasoning models, Google Gemini, OpenAI, ChatGPT, Anthropic, Reddit lawsuit, Large Language Model, AI voice mode, Advanced voice mode, Real-time language translation, Cloud connectors, Dynamic data integration, Meeting recorder, Coding benchmarks, DeepSeek, R1 model, Distillation method, AI ethics, Windsurf, Claude 3.x, Model access, Privacy and data rights, AI research, Meta investment, Scale AI, WWDC, Apple's AI announcements, Gap year, On-device AI models, Siri 2.0, AI market strategy, ChatGPT teams, SharePoint, OneDrive, HubSpot, Scheduled actions, Sparkify, Veo 3, Google AI Pro plan, Creative AI, Innovation in AI, Data infrastructure.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    47 mins
  • EP 541: AI & Trust: When 98% accuracy won't cut it and how Sage can fix it
    Jun 6 2025

    Your CFO just lost sleep over a single missing penny... again.

    Here's the thing about finance teams: they'll hunt for days to find ONE CENT that's off in their books. Because in accounting, even 98% accuracy = complete failure.

    So when it comes to your company's finances and AI, there's a HUGE elephant in the room: trust.

    Sage is changing the conversation around AI, trust and your books.

    Sage is a global leader in cloud-based accounting, financial management, and business management solutions.

    Sage CTO Aaron Harris joins the Everyday AI show to walk us through the new recipe for trust they're cooking up.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. AI Trust Issues with Financial Accuracy
    2. Sage's 7-Billion Parameter Model Training
    3. Sage Copilot's Accounting AI Accuracy
    4. Transparent Trust Labels in AI Usage
    5. Financial Leaders' Trust in Sage's AI
    6. Sage's AI Factory Safety Measures
    7. Sage's Industry Collaboration for AI Accuracy
    8. AI Implementation Strategy in Accounting


    Timestamps:
    00:00 AI Trust and Business Accuracy

    03:08 CFO's Role in Trust Building

    08:14 "Leveraging AI for Financial Growth"

    12:23 "Enhancing AI Trust in Finance"

    15:56 Early Machine Learning Infrastructure Pioneers

    17:53 "Sage AI Factory Overview"

    22:48 AI Transparency and Data Safety

    24:12 "Trust Label Eases Customer Evaluations"

    26:55 "Insights on AI and Industry"


    Keywords:
    AI trust, 98% accuracy, business leaders, Sage Future Conference, Atlanta, trust in AI, Sage Copilot, accounting software, global software company, Newcastle, North America headquarters, CFO, finance team, financial reports, forecast, budgets, credibility, financial accuracy, creative accounting, large language models, ChatGPT, task-based AI, accounts payable automation, invoice reading, data science, AI development, neural models, conversational interface, GPT billions of predictions, generative AI, deterministic AI, billions of documents, fine-tuned models, accounting expertise, AICPA partnership, AI factory, automated machine learning, observability, model drift, hallucination detection, Sage AI factory, industry trust signals, safety mechanisms, customer by customer basis, Sage trust label, transparency labels, trustworthiness, ethical AI, responsible AI, AI safety, AI innovation, industry standards, problem-solving, financial trustworthiness.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    28 mins
  • EP 540: Solving the AI Productivity Paradox
    Jun 5 2025

    AI makes us all more productive... so why isn't revenue soaring?

    That's the AI Productivity Paradox.

    ↳ Does that mean GenAI doesn't work?
    ↳ Or do we all collectively stink at measuring GenAI ROI?
    ↳ Or are employees just pocketing that time savings?

    Faisal Masud is a tech veteran with answers. He's the President of HP Digital Services, and he's going to help us solve the AI Productivity Paradox.


    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. AI Productivity Paradox Explained
    2. Hybrid Work's AI Integration Challenges
    3. Generative AI Impact on Large Enterprises
    4. Raising Productivity Expectations in AI Era
    5. AI Tools vs. Traditional Employment Roles
    6. Effective AI Policy Implementation in Enterprises
    7. Building Internal AI Capabilities Strategy
    8. Insights from AI-Based Easy Button History

    Timestamps:
    00:00 "Your Daily AI Insight Hub"

    03:43 Workforce Experience Platform Overview

    07:46 High Hiring Bar Enhances Productivity

    10:31 Enterprise Productivity Lag with GenAI

    15:53 AI Integration in Workflows

    19:01 Raising Expectations in Tech Management

    21:57 Hiring for Future-Ready Roles

    25:23 Staples' Innovative "Easy Button" Strategy

    27:22 Less is More for Success


    Keywords:
    AI productivity paradox, generative AI, productivity improvements, employee experience, HP Digital Services, hybrid work, employee productivity, generative AI wave, AI tools, workforce experience platform, AI PCs, employee sentiment data, hybrid work challenges, generative AI boom, overemployment, AI policy, large enterprises, business leaders, remote work, Microsoft Copilot, improved productivity, customer experience, agentic workflows, AI-enabled tasks, augmented roles, future of work, AI solutions, digital transformation, management challenges, augmented society, productivity metrics, less is more approach, efficient work processes.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    31 mins
  • EP 539: The 1 new Claude feature that changes knowledge work and how to use it
    Jun 4 2025

    Stop what you're doing and read this 👇

    ↳ Claude just dropped something new that slipped under the radar.
    ↳ Other companies have been working on it for a bit as well.
    ↳ This is the future of knowledge work.
    ↳ So you should learn now.
    ↳ Join us to find out what it is.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Have a question? Join the convo here.

    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Topics Covered in This Episode:

    1. New Claude Feature: Deep Research Mode
    2. Impact on Knowledge Work & Integration
    3. Claude's Research and Integrations Rollout
    4. Tool Use and AI Agentic Capabilities
    5. Live Demo of Claude's Feature
    6. Use Cases for Research Integrations
    7. Integration with Google Workspace & Apps
    8. Analyze Business Knowledge with Claude

    Timestamps:
    00:00 "Claude's Game-Changing New Feature"

    04:42 "Claude's $20 Plan Limitations"

    06:28 Claude's Deep Research Tool Expansion

    10:41 "Gemini 2.5 Pro Update Preview"

    12:42 "New Integrations in Claude"

    17:48 "Creating AI Agent Mashups"

    20:48 Deep Research Progress Update

    25:51 Inefficiency of Manual Data Retrieval

    28:08 "Exciting MCP and Zapier Integration"

    31:27 "Plan and Pilot AI Initiatives"

    32:59 Interactive Learning and Feedback

    39:20 Checking AI-Generated Podcast Quotes

    42:29 Initial Impressions on AI Drafting

    44:08 Rethinking Work Through Interviews

    Keywords:
    Anthropic Claude, Claude feature, knowledge work, AI tool, software engineers, deep research mode, base pro plan, $20 a month plan, new Claude feature, AI integration, AI tool use, live context, agentic tool use, Google Drive, Google Calendar, Gmail, Zapier integration, MCP mode, model context protocol, generative AI, AI work Wednesdays, large language model, AI updates, deep research capabilities, tool usability, OpenAI, Gemini, Microsoft Copilot, deep research services, contextual business knowledge, AI-driven research, AI-powered insights, Claude Pro plan, data integration, transforming knowledge work, AI research tools, AI systems integration, business intelligence, AI agent capabilities, Claude updates, data analysis, information extraction, AI collaboration tools, Anthropic integration, AI-driven innovation, AI work efficiency, information processing.

    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

    Try Google Veo 3 today! Sign up at gemini.google to get started.

    50 mins