Code: https://github.com/hao-ai-lab/LookaheadDecoding

Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/

Description:

We introduce lookahead decoding, a new, exact, parallel decoding algorithm that accelerates LLM inference. Lookahead decoding breaks the sequential dependency of autoregressive decoding by concurrently extracting and verifying n-grams directly with the LLM, using the Jacobi iteration method. It requires neither a draft model nor a data store, and it reduces the number of decoding steps linearly in log(FLOPs) spent per decoding step. Below is a demo of lookahead decoding accelerating LLaMA-2-Chat 7B generation:

https://i.redd.it/c3q2lr71z22c1.gif
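For intuition, here is a minimal sketch of the Jacobi iteration for greedy decoding that the description refers to. It assumes a toy `model` callable mapping a `[1, seq_len]` tensor of token ids to `[1, seq_len, vocab]` logits; `jacobi_greedy_decode` and its parameters are hypothetical names for illustration, not the repo's API:

```python
import torch

def jacobi_greedy_decode(model, prompt_ids, n_new=16):
    """Sketch of Jacobi iteration for parallel greedy decoding.

    All n_new future positions are updated in parallel each step; the
    fixed point is exactly the sequential greedy decoding output.
    """
    prompt_len = prompt_ids.shape[1]
    # Arbitrary initial guess for the future tokens.
    guess = torch.zeros(1, n_new, dtype=prompt_ids.dtype)
    seq = torch.cat([prompt_ids, guess], dim=1)

    for _ in range(n_new):  # converges in at most n_new iterations
        logits = model(seq)  # one parallel forward pass over the whole guess
        # Re-predict every guessed position at once, each conditioned on
        # the (possibly stale) tokens to its left.
        new_guess = logits[:, prompt_len - 1 : -1, :].argmax(dim=-1)
        if torch.equal(new_guess, seq[:, prompt_len:]):
            break  # fixed point: every token is its own greedy continuation
        seq = torch.cat([prompt_ids, new_guess], dim=1)
    return seq
```

In lookahead decoding proper, the n-grams that appear along this Jacobi trajectory are cached and then verified against the model, so more than one token can be accepted per step while the output stays exact.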

    • _Lee_B_@alien.top · 1 year ago

      The blue text marks the tokens whose generation this method sped up (I think by parallelizing the inference, similar to CPU pipelining), which is what made the overall text appear more quickly.
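For concreteness, the verification step mentioned in the description is what keeps this parallel speed-up exact. Below is a minimal sketch of verifying one guessed n-gram in a single forward pass; `model` is the same toy callable as above and `verify_candidate` is a hypothetical name, not the repo's API:

```python
import torch

def verify_candidate(model, seq_ids, candidate):
    """Sketch: check a guessed n-gram with one parallel forward pass.

    Only the longest prefix of `candidate` that matches the model's own
    greedy continuation is accepted, so the result is identical to plain
    autoregressive decoding.
    """
    n = candidate.shape[1]
    ext = torch.cat([seq_ids, candidate], dim=1)
    logits = model(ext)  # scores the sequence and the guess together
    # Greedy prediction for each candidate position, conditioned on the
    # actual tokens to its left (teacher forcing over the guess).
    preds = logits[:, -n - 1 : -1, :].argmax(dim=-1)
    matches = (preds == candidate)[0].long()
    accepted = int(matches.cumprod(dim=0).sum().item())  # leading matches
    # On a mismatch, the corrected token at that position comes for free
    # from preds; if the whole guess matched, the slice below is empty.
    return torch.cat(
        [seq_ids, candidate[:, :accepted], preds[:, accepted : accepted + 1]],
        dim=1,
    )
```

The blog post describes verifying many such candidate n-grams in parallel within the same step as the lookahead branch, which is where the per-step speed-up comes from.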