Code: https://github.com/hao-ai-lab/LookaheadDecoding

Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/

Description:

We introduce lookahead decoding, a new, exact, and parallel decoding algorithm that accelerates LLM inference. Lookahead decoding breaks the sequential dependency in autoregressive decoding by concurrently extracting and verifying n-grams directly with the LLM, using the Jacobi iteration method. It requires no draft model and no data store, and it reduces the number of decoding steps linearly in the log(FLOPs) spent per decoding step. Below is a demo of lookahead decoding accelerating LLaMA-2-Chat 7B generation:

https://i.redd.it/k61qtr4zz22c1.gif
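
For intuition, here is a minimal sketch of the Jacobi fixed-point idea the description refers to: guess a block of future tokens, let the model re-predict all of them in one forward pass conditioned on the current guesses, and repeat until the guesses stop changing. This is an illustration only, not the repo's implementation: it uses greedy decoding on GPT-2 for brevity and leaves out the n-gram pool and verification branch of the full lookahead decoding algorithm; the model choice, block size, and iteration cap are arbitrary.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small open model so the sketch runs anywhere; the blog demo uses LLaMA-2-Chat 7B.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    @torch.no_grad()
    def jacobi_decode_block(prompt_ids, block_size=8, max_iters=16):
        # Start from an arbitrary guess for the next `block_size` tokens.
        guess = torch.full((1, block_size), tok.eos_token_id, dtype=torch.long)
        for _ in range(max_iters):
            logits = model(torch.cat([prompt_ids, guess], dim=1)).logits
            # The logits at position prompt_len - 1 + i predict the token at
            # guess position i, so a single forward pass refreshes every
            # guessed token in parallel (one Jacobi iteration).
            start = prompt_ids.shape[1] - 1
            new_guess = logits[:, start:start + block_size, :].argmax(dim=-1)
            if torch.equal(new_guess, guess):
                break  # fixed point: matches greedy autoregressive decoding
            guess = new_guess
        return guess

    prompt_ids = tok("The capital of France is", return_tensors="pt").input_ids
    print(tok.decode(jacobi_decode_block(prompt_ids)[0]))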

  • too_long_story@alien.topB
    10 months ago

    Well, but how do you marry it with batching so that FlashAttention kernels can work with it?

    Any complicated attention mask makes it hard to support batching.
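
    To make the concern concrete, here is a rough sketch (not taken from the repo) of the kind of non-causal mask a packed lookahead/verification token layout needs for a single sequence. The function name, shapes, and branch layout are made up for illustration; the point is that each sequence in a batch ends up with its own dense mask, which falls outside the simple causal-flag fast path that stock FlashAttention-style kernels expect.

        import torch

        def branch_mask(prompt_len, branch_sizes):
            # Additive attention mask for one packed sequence: a causal prompt
            # followed by independent branches (e.g. lookahead and verification
            # tokens). Each branch attends to the prompt and to itself causally,
            # but not to the other branches.
            total = prompt_len + sum(branch_sizes)
            mask = torch.full((total, total), float("-inf"))
            mask[torch.tril(torch.ones(total, total, dtype=torch.bool))] = 0.0
            # Cut attention between different branches.
            spans, start = [], prompt_len
            for size in branch_sizes:
                spans.append((start, start + size))
                start += size
            for i, (si, ei) in enumerate(spans):
                for j, (sj, ej) in enumerate(spans):
                    if i != j:
                        mask[si:ei, sj:ej] = float("-inf")
            return mask  # [total, total]; batched use needs [B, 1, total, total]

        # Two sequences with different prompt/branch layouts already need
        # per-sequence dense masks rather than a single is_causal flag.
        m0 = branch_mask(prompt_len=5, branch_sizes=[3, 3])
        m1 = branch_mask(prompt_len=7, branch_sizes=[4, 2])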