TL;DR: Organize your neurons into a tree to get 78x faster inference (theoretical limit is 341x).

This was demonstrated on BERT-base, where this change preserved 96% of its downstream GLUE performance. For a quick comparison, DistilBERT offers 1.6x acceleration while preserving 97% of GLUE performance.

This is a HuggingFace Featured Paper from 11/21/2023.

Paper: https://arxiv.org/abs/2311.10770

Code: https://github.com/pbelcak/UltraFastBERT

Model: https://huggingface.co/pbelcak/UltraFastBERT-1x11-long

Abstract:

Language models only really need to use an exponential fraction of their neurons for individual inferences.

As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs).

While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference.

We publish our training code, benchmarking setup, and model weights.

This exponential acceleration was achieved on a ~180M-parameter BERT model. Just imagine how large the speedup could be on a multi-billion-parameter model such as LLaMA, if the tree trick (i.e. “fast feedforward networks”) continues to scale to larger layer sizes…
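
For intuition, here is a minimal sketch of what conditional inference through a fast feedforward layer could look like for a single token. This is an illustrative toy under my own assumptions (the per-node weight layout, the tensor names, and the GELU activation), not the authors' implementation:

```python
import torch
import torch.nn.functional as F

# Toy sketch (assumed layout, not the authors' code) of single-token inference
# through a fast feedforward (FFF) layer. A depth-11 tree has 2**12 - 1 = 4095
# node neurons, but only the 12 on one root-to-leaf path are evaluated per
# token -- hence the theoretical ceiling of roughly 4095 / 12 ≈ 341x.

DEPTH = 11                          # 11 branching decisions, 12 nodes on the path
N_NODES = 2 ** (DEPTH + 1) - 1      # 4095
HIDDEN = 768                        # BERT-base hidden size

node_in = torch.randn(N_NODES, HIDDEN)   # per-node input weights (assumption)
node_out = torch.randn(N_NODES, HIDDEN)  # per-node output weights (assumption)

def fff_forward(x: torch.Tensor) -> torch.Tensor:
    """Conditional forward pass for one token vector x of shape (HIDDEN,)."""
    y = torch.zeros_like(x)
    node = 0                                       # start at the root (heap indexing)
    for _ in range(DEPTH + 1):                     # visit 12 node neurons in total
        logit = node_in[node] @ x                  # this node's pre-activation
        y += F.gelu(logit) * node_out[node]        # its contribution to the output
        node = 2 * node + (1 if logit < 0 else 2)  # sign of the logit picks the child
    return y

print(fff_forward(torch.randn(HIDDEN)).shape)      # torch.Size([768])
```

The point is only that the work per token grows with the tree depth (logarithmic in the layer width), whereas a dense feedforward layer touches all 4095 neurons.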

  • qalis@alien.top · 10 months ago

    Very interesting development, but I’m waiting for a more production-ready version. Having to set up a separate GitHub repo, with manual installation inside, is not exactly nice. However, if this becomes fully compatible with the HuggingFace Hub, it will be huge for simpler cases.

  • luxsteele@alien.top · 10 months ago

    As someone who writes CUDA code professionally, here are my two cents on the matter: the reported speed enhancements, particularly the claimed 117.83x speedup, might be somewhat misleading. Consider, for example, the comparison of CUDA speedups. The authors contrast their CUDA fast feedforward (CUDA FFF) implementation with their own, highly unoptimized CUDA feedforward (CUDA FF) implementation.

    In an effort to ensure a fair comparison, they kept the same code structure for both CUDA FFF and CUDA FF. However, this approach meant that the CUDA FF kernel uses no shared memory and suffers significant memory divergence, because threadIdx.x is used to index the outer dimensions of the matrices.

    • we_are_mammals@alien.top · 10 months ago

      the claimed 117.83x speedup, might be somewhat misleading

      If you compare the best implementation of FFF on CUDA to the best implementation of FF on CUDA, then the speed-up they got is 3.15x:

      See page 5, “Further comparisons”: “On GPU, the PyTorch BMM implementation of FFF delivers a 3.15x speedup over the fastest (Native fused) implementation of FF.”

      The 40x that u/lexected mentioned seems to apply only when comparing to an apparently much slower FF version.

      It’s a pretty cool paper regardless, as far as I can tell from skimming it. But it could benefit from stating more clearly what has been achieved.

  • blackkettle@alien.top · 10 months ago

    It’s probably just my own internal bias, but I feel like this last week of chaos in this space has resulted in a sudden surge of cool new ideas percolating in the OSS/localllama/ML spaces.

    Thanks for sharing this!

  • we_are_mammals@alien.top · 10 months ago

    78x speedup over the optimized baseline feedforward implementation

    So they are 78x faster than MKL using the same number of cores?

  • we_are_mammals@alien.top · 10 months ago

    I think DistilBERT needs to be in Table 2, since it’s their most direct competitor: it trades off accuracy for speed, and requires extra training effort, like their approach.

    Still, if they are about 20x faster than DistilBERT using cuBLAS, that’s pretty amazing.

  • DaBobcat@alien.top · 10 months ago

    Maybe I missed it, but how did they select which neurons should be used in each layer? Max values after the activation function? Something else? And is the number of neurons to be used fixed in advance, e.g. to 12, so that they just take the 12 max values?

    • StartledWatermelon@alien.top · 10 months ago

      The output of each parent neuron is basically treated as a logit, so no activation is necessary. At inference, a logit below zero corresponds to the choice of one child node and a logit above zero corresponds to the choice of the other child node. In their deepest model there are 11 such consecutive choices to be made, a descent down a binary tree.

      The specifics of training are discussed in the authors’ previous paper. All nodes are computed during training, so there’s no speed-up at that stage compared to a vanilla dense layer.

      The number of neurons to be used is fixed in advance. Basically, it’s determined by the shape of the tree in which the neurons are organised.
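
      In (hypothetical) code, the descent could look roughly like this; the weight tensor `node_in`, the heap-style child indexing, and the shapes are illustrative assumptions rather than the authors’ implementation:

      ```python
      import torch

      # Illustrative sketch of the descent described above: each parent neuron's
      # output is treated as a logit whose sign picks the child, giving 11
      # consecutive choices and exactly 12 visited node neurons per token.

      DEPTH = 11
      HIDDEN = 768
      node_in = torch.randn(2 ** (DEPTH + 1) - 1, HIDDEN)  # 4095 node neurons

      def routed_nodes(x: torch.Tensor) -> list[int]:
          """Indices of the node neurons visited for one token vector x."""
          node, path = 0, [0]
          for _ in range(DEPTH):                         # 11 binary choices
              logit = node_in[node] @ x
              node = 2 * node + (1 if logit < 0 else 2)  # below zero -> one child, above -> the other
              path.append(node)
          return path                                    # always DEPTH + 1 = 12 nodes

      print(len(routed_nodes(torch.randn(HIDDEN))))      # 12
      # During training all 4095 nodes are computed, so the saving is inference-only.
      ```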