A few people here tried the Goliath-120B model I released a while back, and it looks like TheBloke has now released the quantized versions. So far, the reception has been largely positive.

https://huggingface.co/TheBloke/goliath-120b-GPTQ

https://huggingface.co/TheBloke/goliath-120b-GGUF

https://huggingface.co/TheBloke/goliath-120b-AWQ
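
If you want to kick the tires on one of the quants, a minimal llama-cpp-python sketch looks something like the following. The GGUF file name below is just a placeholder for whichever quant you actually download, and even at 4-bit a 120B model needs a lot of RAM/VRAM:

```python
# Quick way to try a GGUF quant locally with llama-cpp-python.
# model_path is a placeholder; point it at whichever quant file you download
# from the TheBloke/goliath-120b-GGUF repo. Even at 4-bit this is a ~120B
# model, so expect to need on the order of 70 GB of RAM/VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./goliath-120b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload everything that fits (requires a GPU-enabled build)
)

out = llm("Once upon a time,", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```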

The fact that the model turned out well was completely unexpected. Every LM researcher I’ve spoken to about this in the past few days has been baffled by it. The plan moving forward, in my opinion, is to finetune this model (preferably a full finetune) so that the stitched layers get to know each other better. Hopefully I can find the compute to do that soon :D
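
For anyone wondering what I mean by “stitched layers”: the rough idea is to interleave ranges of decoder layers from two donor models into one deeper stack. The sketch below only illustrates that idea; the donor names, slice boundaries, and the copy-by-reference approach are placeholders, not the actual Goliath-120B recipe:

```python
# Illustration of layer stitching ("frankenmerging") with Transformers.
# Donor names and slice boundaries are placeholders, not the Goliath recipe.
# Note: this loads two 70B-class models in fp16, so it needs a lot of memory.
import torch
from transformers import AutoModelForCausalLM

donor_a = AutoModelForCausalLM.from_pretrained("donor-a-70b", torch_dtype=torch.float16)
donor_b = AutoModelForCausalLM.from_pretrained("donor-b-70b", torch_dtype=torch.float16)

# Alternate overlapping slices of decoder layers from each donor.
slices = [
    (donor_a, 0, 16),
    (donor_b, 8, 24),
    (donor_a, 16, 32),
    (donor_b, 24, 40),
    # ...continue the pattern until both donors' layer stacks are covered
]

stitched_layers = torch.nn.ModuleList()
for donor, start, end in slices:
    for layer in donor.model.layers[start:end]:
        stitched_layers.append(layer)

# Reuse donor A's embeddings and LM head, and swap in the deeper layer stack.
merged = donor_a
merged.model.layers = stitched_layers
merged.config.num_hidden_layers = len(stitched_layers)
merged.save_pretrained("./stitched-120b")
```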

On a related note, I’ve been working on LLM-Shearing lately, which would essentially let us shear a transformer down to much smaller sizes while preserving accuracy. Goliath-120B came out of an experiment in moving in the opposite direction of shearing. I’m now wondering whether we could shear a finetuned Goliath-120B back down to ~70B and end up with a much better 70B model than the existing ones. That would of course be prohibitively expensive, as we’d need to do continued pretraining after the shearing/pruning process. A more likely approach, I believe, is to shear Mistral-7B down to ~1.3B and run continued pretraining on about 100B tokens.
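
To make the Mistral-7B → ~1.3B target a bit more concrete, here is a minimal sketch of what a sheared target architecture could look like. This is not the LLM-Shearing procedure itself (which learns structured pruning masks over the parent’s weights and then continues pretraining); the target dimensions below are my own illustrative guesses:

```python
# Sketch of a ~1.3B "sheared" target architecture derived from Mistral-7B's
# shape (hidden 4096, 32 layers, intermediate 14336, 32 heads / 8 KV heads).
# NOT the LLM-Shearing algorithm itself; the dimensions are illustrative.
from transformers import MistralConfig, MistralForCausalLM

target = MistralConfig(
    vocab_size=32000,
    hidden_size=2048,          # down from 4096
    intermediate_size=5504,    # down from 14336
    num_hidden_layers=24,      # down from 32
    num_attention_heads=16,    # down from 32
    num_key_value_heads=8,     # kept at 8 (grouped-query attention)
)

student = MistralForCausalLM(target)  # randomly initialized here
n_params = sum(p.numel() for p in student.parameters())
print(f"~{n_params / 1e9:.2f}B parameters")  # prints roughly 1.25B

# In LLM-Shearing the student's weights would instead be carved out of the
# parent via learned masks, then trained further on continued-pretraining data.
```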

If anyone has suggestions, please let me know. Cheers!

  • those2badguys@alien.top · 10 months ago

    I’m just a lowly end user and spectator. Can someone ballpark how much it’d cost to shear Goliath-120B to 70B, so I can wake up, sip my coffee, then spray it on my monitor and say “good lord, that’s rather expensive!”?

    Also, how much for a 7B to 1.3B? Has it been done before? How bad is the drop in quality? I mean, older 7B models are not so great to begin with, so the idea of seeing Mistral-7B downsized to 1.3B would be kind of fun and definitely something I want to play with.

    • AlpinDale@alien.top (OP) · 10 months ago

      The shearing process would likely need close to 1 billion tokens of data, so I’d guess a few days on ~24x A100-80G/H100s. And if we get a ~50B model out of it, we’d need to train that on around 100B tokens, which would need at least 10x H100s for a few weeks. Overall, very expensive.

      And yes, princeton-nlp did a few shears of Llama2 7B/13B. They’re up on their HuggingFace.

      • those2badguys@alien.top · 10 months ago

        Thank you kindly for the response.

        a few days on ~24x A100-80G/H100s

        I looked at some pricing, did some two-handed, ten-finger math, and estimated it at 12-15 grand?

        10x H100s for a few weeks

        Again, just looking at some retail cloud GPU renters, 20-25 grand?
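
        In case it helps sanity-check me, here’s the finger math spelled out. The only number I had to guess was the rental rate (I assumed roughly $4-6 per GPU-hour on-demand), so my 12-15 and 20-25 grand guesses land inside these ranges:

```python
# Spelling out the finger math. GPU counts and durations come from the
# estimate above; the $4-6 per GPU-hour rate is my own guess at retail
# on-demand pricing for A100-80G / H100 rentals, so treat it as a placeholder.

def cost_range(num_gpus, days_low, days_high, rate_low, rate_high):
    """Return (low, high) total rental cost in USD."""
    low = num_gpus * days_low * 24 * rate_low
    high = num_gpus * days_high * 24 * rate_high
    return low, high

# "a few days on ~24x A100-80G/H100s" -> call it 3-5 days
print(cost_range(24, 3, 5, 4, 6))    # (6912, 17280)  -> roughly $7k-$17k

# "10x H100s for a few weeks" -> call it 2-4 weeks
print(cost_range(10, 14, 28, 4, 6))  # (13440, 40320) -> roughly $13k-$40k
```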

        I’m sure you have better things to do with your time, so without putting too much effort into it, how far off am I on these guesses?