The title, pretty much.

I’m wondering whether a 70B model quantized to 4-bit would perform better than a 7B/13B/34B model at fp16. It would be great to get some insights from the community.

  • daHaus@alien.top · 10 months ago

    This seems like something that would be difficult to predict, considering how fundamental the thing you’re changing is. The method you use to quantize the model, and how refined that method is, also matters a great deal.

  • Herr_Drosselmeyer@alien.top · 10 months ago

    It’s a rule of thumb that yes, a higher-parameter model at low quant beats a lower-parameter model at high quant (or no quant), but take it with a grain of salt: you may still prefer a lower-parameter model that’s better tuned for your particular task.

  • Ion_GPT@alien.top · 10 months ago

    It depends on the task. For anything multilingual, like translation, the quant will destroy the model. I suspect this is because the calibration data used during quantization is all English.
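
    To illustrate why English-only calibration could hurt other languages, here’s a minimal, hypothetical sketch in plain NumPy (not any particular quantization library’s pipeline): a scale chosen from English-like activations clips wider-ranged activations badly.

    ```python
    import numpy as np

    # Toy absmax quantization with a calibration set (conceptual only; real
    # quantizers like GPTQ or GGUF k-quants are far more sophisticated).
    def calibrate_scale(acts: np.ndarray, n_bits: int = 4) -> float:
        qmax = 2 ** (n_bits - 1) - 1          # 7 for signed 4-bit
        return np.abs(acts).max() / qmax

    def fake_quantize(x: np.ndarray, scale: float, n_bits: int = 4) -> np.ndarray:
        qmax = 2 ** (n_bits - 1) - 1
        return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

    rng = np.random.default_rng(0)
    english_acts = rng.normal(0.0, 1.0, 10_000)        # stand-in for English calibration data
    multilingual_acts = rng.normal(0.0, 3.0, 10_000)   # hypothetical wider-ranged activations

    scale = calibrate_scale(english_acts)              # scale picked from English only
    for name, acts in [("english", english_acts), ("multilingual", multilingual_acts)]:
        err = np.abs(fake_quantize(acts, scale) - acts).mean()
        print(f"{name}: mean abs error = {err:.3f}")   # multilingual error is much larger
    ```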

  • Sea_Particular_4014@alien.top · 10 months ago

    Adding to Automata’s theoretical info, I can say that, anecdotally, I find 4-bit 70B substantially better than 8-bit 34B or below, but it’ll depend on your task.

    It seems like, right now, the 70B models are really good for storywriting, RP, logic, etc., while if you’re doing programming or data classification or similar, you might be better off with a higher-precision smaller model that’s been fine-tuned for the task at hand.

    I noticed in the 70B circlejerk rant thread I posted a couple of days ago that most of the people saying they didn’t find the 70B that much better (or better at all) were doing programming or data-classification-type stuff.

        • harrro@alien.top · 10 months ago

          Using Q3, you can fit it in 36 GB (I have a weird combo of an RTX 3060 with 12 GB and a P40 with 24 GB, and I can run a 70B at 3-bit fully on GPU).
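
          As a sanity check on those numbers, here’s a rough back-of-the-envelope estimate (weights only, plus an assumed ~25% overhead for KV cache and buffers; real GGUF quants mix bit widths, so treat it as an approximation, not a spec):

          ```python
          def approx_vram_gb(n_params_billion: float, bits_per_weight: float,
                             overhead: float = 1.25) -> float:
              """Very rough VRAM estimate: weight bytes times a flat overhead factor."""
              weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
              return weight_bytes * overhead / 1e9

          for bpw in (3, 4, 8, 16):
              print(f"70B @ {bpw:>2} bpw ≈ {approx_vram_gb(70, bpw):.0f} GB")
          # ≈ 33 / 44 / 88 / 175 GB -- roughly in line with the 36 GB figure above.
          ```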

            • harrro@alien.top · 10 months ago

              Yes, llama.cpp will automatically split the model across GPUs. You can also specify how much of the full model should go on each GPU.

              Not sure about AMD support, but for Nvidia it’s pretty easy to do.
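
              For example, with the llama-cpp-python bindings something like this should work (the model path and split ratios are illustrative; the CLI equivalents are -ngl and --tensor-split):

              ```python
              from llama_cpp import Llama

              llm = Llama(
                  model_path="models/llama-2-70b.Q3_K_M.gguf",  # hypothetical GGUF file
                  n_gpu_layers=-1,        # -1 = offload every layer to GPU
                  tensor_split=[12, 24],  # rough VRAM proportions: 12 GB card, 24 GB card
                  n_ctx=4096,
              )

              out = llm("Q: Does a 4-bit 70B beat an fp16 13B? A:", max_tokens=64)
              print(out["choices"][0]["text"])
              ```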

        • Dry-Vermicelli-682@alien.top · 10 months ago

          44 GB of GPU VRAM? WTH GPU has 44 GB other than stupidly expensive ones? Are average folks running $25K GPUs at home? Or are the people running these working for companies with lots of money, building small GPU servers to run them?

      • Sea_Particular_4014@alien.top · 10 months ago

        Well… none at all if you’re happy with ~1 token per second or less using GGUF CPU inference.

        I have 1 x 3090 24GB and get about 2 tokens per second with partial offload. I find it usable for most stuff but many people find that too slow.

        You’d need 2 x 3090 or an A6000 or something to do it quickly.
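
        If you want to try partial offload yourself, a minimal llama-cpp-python sketch looks like this (the layer count is a guess; raise n_gpu_layers until you run out of VRAM, then back off):

        ```python
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/llama-2-70b.Q4_K_M.gguf",  # hypothetical path
            n_gpu_layers=45,   # offload roughly half of a 70B's 80 layers to the 24 GB card
            n_ctx=2048,
        )
        print(llm("Hello", max_tokens=32)["choices"][0]["text"])
        ```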

      • Dusty_da_Cat@alien.top · 10 months ago

        The gold standard is 2 x 3090/4090 cards, which is 48 GB of VRAM total. You can get by with 2 P40s (they need a cooling solution) and run video off the onboard graphics if you want to save some money. The speeds will be slower, but still better than running in system RAM on typical setups.

  • Secret_Joke_2262@alien.top · 10 months ago

    A friend told me that for a 70B, using Q4 drops performance by about 10%. The larger the model, the less it suffers from weight quantization.

  • tu9jn@alien.top · 10 months ago

    Usually the number of parameters matters more than bits per weight, but I had some problems with really low-bpw models, like a 70B at 2.55 bpw with exllamav2.

    34B Yi could be a good compromise; I’m impressed with it, and it has a long context length as well.

    • AutomataManifold@alien.top · 10 months ago

      Early research suggested that there was an inflection point below 4 bits, where things got markedly worse. In my personal use, I find that accuracy definitely suffers below that, though maybe modern quants are a bit better at it.

      34B Yi does seem like a sweet spot, though I’m starting to suspect that we need some fine-tunes that use longer stories as part of the training data, because it doesn’t seem to be able to maintain the quality for the entire length of the context. Still, being able to include callbacks to events from thousands of tokens earlier is impressively practical. I’ve been alternating between a fast 13B (for specific scenes), 34B Yi (for general writing), and 70B (for when you need it to be smart and varied). And, of course, just switching models can help with the repetition sometimes.

  • semicausal@alien.top · 10 months ago

    In my experience, the lower you go, the more the model:

    - hallucinates (one time I asked Llama 2 what made the sky blue and it freaked out and generated thousands of similar questions, line by line)

    - gives you an inaccurate response when it doesn’t hallucinate

    - behaves unreliably and non-deterministically (seriously, the same prompt can produce different answers!)

    At the bottom of this post, I compare the 2-bit and 8-bit extremes of the Code Llama Instruct model on the same prompt, and you can see how it played out: https://about.xethub.com/blog/comparing-code-llama-models-locally-macbook

    • NachosforDachos@alien.top · 10 months ago

      That was useful and interesting.

      Speaking of hypothetical situations, how much money do you think an individual would need to spend on computing power to give themselves a GPT-4 Turbo-like experience locally?