It was bothering me a bit that the only metric people really had for objectively understanding the ‘loss’ from quantization was perplexity.

So, after hacking on koboldcpp’s sampler code to force it to output the probabilities for a predetermined sequence (so I could make a fair comparison)…

[Graph: Mistral 7b Avg Quantization Differences]

Ta-da!

This is Mistral 7b GGUF’s various popular quantizations compared to the fp16 base model, as measured by KL divergence. Specifically, I’m measuring how similar each quant’s token probabilities are to the original fp16 probabilities, over a predetermined sequence of ~350 tokens of Wikipedia text (a sketch of the computation follows the list below).

This means (if we adapt the scale for readability):

  • fp16 = ~0 measured KL change from original probabilities (because it’s the original)
  • Q8_0 = ~0.06 avg. measured KL change from original probabilities
  • Q6_K = ~0.1 avg. measured KL change from original probabilities
  • Q5_K_M = ~0.3 avg. measured KL change from original probabilities
  • Q4_K_M = ~1.0 avg. measured KL change from original probabilities
  • Q3_K_M = ~3.7 avg. measured KL change from original probabilities
  • Q2_K = ~8.2 avg. measured KL change from original probabilities
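
A minimal sketch of that per-token KL computation (not the exact code hacked into koboldcpp): it assumes the per-position probability distributions for the same fixed token sequence have already been dumped from the fp16 run and from a quantized run, and the .npy filenames are hypothetical.

```python
import numpy as np

def per_token_kl(p_ref, p_quant, eps=1e-10):
    """p_ref, p_quant: (n_tokens, vocab_size) arrays of softmax'd probabilities."""
    p = np.clip(p_ref, eps, 1.0)
    q = np.clip(p_quant, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)  # KL(fp16 || quant), one value per token

p_fp16 = np.load("probs_fp16.npy")     # hypothetical dump from the fp16 run
p_q4km = np.load("probs_q4_k_m.npy")   # hypothetical dump from the Q4_K_M run

kl = per_token_kl(p_fp16, p_q4km)
print(f"avg KL over {len(kl)} tokens: {kl.mean():.3f}")
```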

“Average difference” obscures the bigger problem with low quantization, though: if many tokens are easily predictable (or effectively predetermined) no matter the quant, they contribute near-zero divergence and pull the average down. So what happens if, out of the 300+ tokens of text I tested on, we pick the single highest reported KL divergence for each respective quantization and graph that?

Now it becomes clear how big the gap can be for ‘difficult’ tokens!

To make the differences less extreme than that single worst case, let’s instead take the top ~5% of tokens most affected by quantization for each quant, and graph that out.

https://preview.redd.it/3baou5l9mv1c1.png?width=1324&format=png&auto=webp&s=afc4ff00c6b4e14cc86f322e9ccae887bd23b91c

So, if we average only over the top 5% of tokens that were ‘most affected’ by quantization (we do that to exclude the ‘obvious’ tokens), the scale is significantly more dramatic than the plain averages suggest.
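
Continuing the sketch above, the three statistics being graphed here (the plain average, the single worst-hit token, and the top-~5% average) could be pulled out of a per-token KL array along these lines:

```python
import numpy as np

def kl_summary(kl, top_frac=0.05):
    """kl: per-token KL divergences for one quant, shape (n_tokens,)."""
    worst_first = np.sort(kl)[::-1]                # largest divergence first
    k = max(1, int(round(top_frac * len(kl))))     # size of the top ~5% slice
    return {
        "avg": float(kl.mean()),                       # the 'average difference'
        "worst_token": float(kl.max()),                # single most affected token
        "top5pct_avg": float(worst_first[:k].mean()),  # average over the worst ~5%
    }
```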

I’ll be updating this post with 13b soon enough. I’d also do it for 70b, but since I’m on 12GB VRAM, measuring would be extremely slow, as every single quant would spill into the pagefile. Is this the part where I should shill a Ko-fi or something?

I hope this helps the sub understand how much quantization really impacts models in a somewhat more objective sense.

  • A_for_Anonymous@alien.topB

    Thanks, this is interesting. All that said, it still looks like parameter count (B) is a much more important factor than quantisation down to Q3, meaning a 20B Q3 is going to write better than a 13B fp16. That’s how it seemed to me personally, but I haven’t done any rigorous testing.

  • JealousAmoeba@alien.topB

    Would I get better results in general by running a 7B model with Q8, or a 13B model with Q4/Q5? My laptop can do either.

    I’m guessing the quantized 13B model will be better, but has anyone ever benchmarked 7B vs 13B at different levels of quantization?

    • LOLatent@alien.topB

      I’m in the exact same boat; if you get an answer, please let us know! 7b q8 or 13b q4?

  • erikqu_@alien.topB

    Reminds me of pruning. Pruning has been shown to have little impact on model performance in other areas, although I haven’t seen it applied much in this space (afaik).

  • kpodkanowicz@alien.topB

    You are on fire, this is yet another great post from you. Btw, I changed the perplexity scripts to only measure the responses after the instruction, using, for example, the Evol dataset, with the preset configured to match the model. I got completely different results than with normal perplexity. Interestingly, when running code instructions on a general model, and for instance roleplay instructions on a coding model, not only is the perplexity around 1 vs. 3, the models also degrade differently.
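
    (A minimal sketch of what this response-only perplexity could look like, assuming the per-token log-probs for the full prompt+response have already been dumped and a boolean mask marks the tokens after the instruction — both hypothetical inputs, not the actual modified script:)

```python
import numpy as np

def response_only_perplexity(token_logprobs, is_response):
    """token_logprobs: log p(token_i | context) for every token of prompt + response.
    is_response: boolean mask that is True only for tokens after the instruction."""
    lp = np.asarray(token_logprobs)[np.asarray(is_response, dtype=bool)]
    return float(np.exp(-lp.mean()))   # perplexity over response tokens only
```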

  • dnsod_si666@alien.topB

    You could also use this to measure different models against each other, right? And just in general, use this as a model benchmark (see the sketch after the numbered steps below).

    1. Get dataset of text.
    2. Tokenize dataset.
    3. Measure true probabilities straight from the dataset.
    4. Train model number 1 on tokenized dataset.
    5. Measure KL divergence of model from true probabilities.
    6. Repeat steps 4,5 for model number 2
    7. Compare KL divergence of model 1 to model 2.
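
    One hedged reading of steps 3–7, if “true probabilities straight from the dataset” is taken to mean empirical next-token frequencies (here from simple bigram counts, which only works for very short contexts; the probs_model1/probs_model2 arrays are assumed to have been dumped separately):

```python
from collections import Counter, defaultdict
import numpy as np

def empirical_next_token(tokens, vocab_size):
    """Empirical next-token distribution for each preceding token (bigram counts)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    dists = {}
    for prev, c in counts.items():
        p = np.zeros(vocab_size)
        for tok, n in c.items():
            p[tok] = n
        dists[prev] = p / p.sum()
    return dists

def avg_kl_from_empirical(model_probs, tokens, empirical, eps=1e-10):
    """model_probs[i]: the model's predicted distribution for the token after tokens[i]."""
    kls = []
    for i, prev in enumerate(tokens[:-1]):
        p = np.clip(empirical[prev], eps, 1.0)
        q = np.clip(model_probs[i], eps, 1.0)
        kls.append(np.sum(p * np.log(p / q)))
    return float(np.mean(kls))

# Lower average KL = closer to the dataset's empirical statistics:
# score_1 = avg_kl_from_empirical(probs_model1, tokens, empirical_next_token(tokens, vocab_size))
# score_2 = avg_kl_from_empirical(probs_model2, tokens, empirical_next_token(tokens, vocab_size))
```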

    Separate idea: also, isn’t getting the true probabilities useful anyway? Because then we could have the training process be:

    1. Get dataset.
    2. Tokenize.
    3. Get true probabilities.
    4. Train on probabilities instead of directly on the tokens.

    Like instead of training twice (sequence to probabilities):

    1. sequence1 -> [1, 0]
    2. sequence1 -> [0, 1]

    you train it once with:

    3. sequence1 -> [0.5, 0.5]

    So you are training on less data which would reduce training costs and whatnot.
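
    (A toy sketch of this hard-label vs. soft-label point, using PyTorch with a dummy two-class output — not anything from the post, just an illustration that averaging the two hard-label losses gives exactly the same loss as the single soft-target example:)

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[0.2, -0.4]], requires_grad=True)  # dummy model output for "sequence1"

# Training "twice" with the hard labels [1, 0] and [0, 1]:
hard_loss = 0.5 * (F.cross_entropy(logits, torch.tensor([0])) +
                   F.cross_entropy(logits, torch.tensor([1])))

# Training "once" on the soft target [0.5, 0.5]:
soft_target = torch.tensor([[0.5, 0.5]])
soft_loss = -(soft_target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

print(hard_loss.item(), soft_loss.item())  # identical up to floating-point error
```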

  • CardAnarchist@alien.topB

    Hi there, you seem like the man to ask on this somewhat related topic to the OP,

    I’ve recently found out that models output different results depending on the number of layers loaded onto the GPU. I’ve been told that more layers loaded in = better output.

    How does the loss associated with layers not on the GPU compare to the loss between, say, quants?

      • CardAnarchist@alien.topB

        I thought it odd myself. So much so that I thought SillyTavern was bugged but that wasn’t the case.

        It’s pretty easy to test yourself: just use Koboldcpp to load in, say, 31 layers and generate some output on seed 1, then restart Koboldcpp with 30 layers.

        Example of 31 layers of a 7B vs 30 layers on the same seed.

        Each seed seems to behave the same as long as the layer counts are close enough: the output starts out exactly the same before branching off.
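
        (A rough way to reproduce that comparison with llama-cpp-python instead of the Koboldcpp UI — an assumed swap-in, with a hypothetical model path: same model, same seed, same prompt, only n_gpu_layers changes.)

```python
from llama_cpp import Llama

PROMPT = "Once upon a time"          # any fixed prompt
MODEL = "mistral-7b.Q4_K_M.gguf"     # hypothetical path to the GGUF file

for layers in (30, 31):
    llm = Llama(model_path=MODEL, n_gpu_layers=layers, seed=1, verbose=False)
    out = llm(PROMPT, max_tokens=64, temperature=0.8)
    print(f"--- {layers} GPU layers ---")
    print(out["choices"][0]["text"])
```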

        It’s worth mentioning that the person who told me the quality was “better” with more layers loaded in only said that was how he recalled it.