• Speculative Decoding and Efficient LLM Inference with Chris Lott - #717

  • Feb 4, 2025
  • Length: 1 hr and 17 mins
  • Podcast


  • Summary

  • Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by the LLM encoding (prefill) and decoding (aka generation) phases, and how these interact with hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule (a back-of-envelope example of this ceiling appears below). We then dig into a variety of techniques that can be used to accelerate inference, such as KV cache compression, quantization, pruning, speculative decoding (sketched after the example), and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences, such as parallel generation and software tools like Qualcomm AI Orchestrator. The complete show notes for this episode can be found at https://twimlai.com/go/717.
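
The memory-bandwidth limit discussed in the episode is easy to see with a back-of-envelope estimate: during autoregressive decoding, every generated token must stream essentially all model weights from memory, so tokens per second is bounded by memory bandwidth divided by model size in bytes. The sketch below uses illustrative numbers (a 7B-parameter model, 4-bit weights, 100 GB/s bandwidth), not figures from the episode.

```python
# Back-of-envelope decode-speed ceiling: each generated token must read
# every weight once, so tokens/s <= memory bandwidth / weight bytes.
# All numbers are illustrative assumptions, not figures from the episode.

params = 7e9            # 7B-parameter model (assumed)
bytes_per_weight = 0.5  # 4-bit quantized weights
bandwidth = 100e9       # 100 GB/s device memory bandwidth (assumed)

weight_bytes = params * bytes_per_weight     # ~3.5 GB streamed per token
ceiling = bandwidth / weight_bytes           # ignores KV cache traffic
print(f"decode ceiling: ~{ceiling:.0f} tokens/s")  # -> ~29 tokens/s
```

This framing also shows why the techniques covered in the episode help: quantization shrinks the bytes streamed per token, KV cache compression cuts the extra per-token cache traffic the estimate ignores, and speculative decoding emits several tokens per pass over the weights.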
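Speculative decoding itself can be illustrated with a minimal sketch, assuming toy stand-in models: a cheap draft model proposes k tokens, the expensive target model verifies them, and the standard accept/resample rule keeps the output distributed exactly as if it were sampled from the target alone. The vocabulary, models, and function names below are hypothetical, chosen only for illustration.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def toy_probs(context, seed):
    # Deterministic toy next-token distribution, standing in for a model.
    rng = random.Random((hash(tuple(context)) & 0xFFFF) + seed)
    weights = [rng.random() + 0.05 for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}

def draft_probs(context):   # cheap small language model (the "drafter")
    return toy_probs(context, seed=1)

def target_probs(context):  # expensive large model whose output we must match
    return toy_probs(context, seed=0)

def sample(probs):
    return random.choices(list(probs), weights=list(probs.values()))[0]

def speculative_step(context, k=4):
    """Draft k tokens cheaply, then verify them against the target model."""
    # 1) Draft phase: autoregressively sample k tokens from the small model.
    proposals, ctx = [], list(context)
    for _ in range(k):
        q = draft_probs(ctx)
        tok = sample(q)
        proposals.append((tok, q))
        ctx.append(tok)

    # 2) Verify phase: a real system scores all k positions in one batched
    #    target-model forward pass; this toy just loops over them.
    accepted, ctx = [], list(context)
    for tok, q in proposals:
        p = target_probs(ctx)
        if random.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)  # draft token accepted as-is
            ctx.append(tok)
        else:
            # Rejected: resample from the residual max(0, p - q), which
            # keeps the overall output distribution equal to the target's.
            residual = {t: max(0.0, p[t] - q[t]) for t in VOCAB}
            z = sum(residual.values()) or 1.0
            accepted.append(sample({t: w / z for t, w in residual.items()}))
            break  # tokens drafted after a rejection are stale
    # (A full implementation also samples one bonus token from the target
    # when all k drafts are accepted; omitted here for brevity.)
    return accepted

if __name__ == "__main__":
    out = ["the"]
    while len(out) < 13:
        out += speculative_step(out, k=4)
    print(" ".join(out))
```

The payoff: when the draft model is usually right, each expensive target-model pass emits several tokens instead of one, which matters most on bandwidth-bound devices like those discussed in the episode.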
