A few people here tried the Goliath-120B model I released a while back, and it looks like TheBloke has now released the quantized versions. So far, the reception has been largely positive.

https://huggingface.co/TheBloke/goliath-120b-GPTQ

https://huggingface.co/TheBloke/goliath-120b-GGUF

https://huggingface.co/TheBloke/goliath-120b-AWQ

The fact that the model turned out this well is completely unexpected. Every LM researcher I've spoken to about it over the past few days has been completely baffled. The plan moving forward, in my opinion, is to finetune this model (preferably a full finetune) so that the stitched layers get to know each other better. Hopefully I can find the compute to do that soon :D
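
For context on what "stitched layers" means: Goliath-120B is a passthrough-style merge where decoder blocks from two 70B donors are interleaved into one deeper model. Below is a minimal sketch of that idea in plain transformers; the donor repo ids and the layer ranges are illustrative placeholders rather than the exact recipe, and in practice you'd do this shard-by-shard instead of holding three models in memory at once.

```python
# Sketch of passthrough-style layer stitching between two Llama-architecture
# donors. Repo ids and the slicing PLAN are assumptions for illustration only.
import torch
from transformers import AutoConfig, LlamaForCausalLM

DONOR_A = "Xwin-LM/Xwin-LM-70B-V0.1"   # assumed donor repo ids
DONOR_B = "Sao10K/Euryale-1.3-L2-70B"

# Hypothetical interleaving plan: (donor, first_layer, last_layer) slices that
# get concatenated, with overlaps, into the new deeper model.
PLAN = [("a", 0, 20), ("b", 10, 30), ("a", 20, 40), ("b", 30, 50),
        ("a", 40, 60), ("b", 50, 70), ("a", 60, 80)]

def stitch(plan):
    a = LlamaForCausalLM.from_pretrained(DONOR_A, torch_dtype=torch.float16)
    b = LlamaForCausalLM.from_pretrained(DONOR_B, torch_dtype=torch.float16)
    donors = {"a": a, "b": b}

    cfg = AutoConfig.from_pretrained(DONOR_A)
    cfg.num_hidden_layers = sum(end - start for _, start, end in plan)
    merged = LlamaForCausalLM(cfg).to(torch.float16)

    # Embeddings, final norm and lm_head come from donor A.
    merged.model.embed_tokens.load_state_dict(a.model.embed_tokens.state_dict())
    merged.model.norm.load_state_dict(a.model.norm.state_dict())
    merged.lm_head.load_state_dict(a.lm_head.state_dict())

    # Copy decoder blocks slice by slice; these seams are the "stitching" that a
    # later full finetune would smooth over.
    dst = 0
    for donor, start, end in plan:
        for i in range(start, end):
            merged.model.layers[dst].load_state_dict(
                donors[donor].model.layers[i].state_dict()
            )
            dst += 1
    return merged

# stitch(PLAN).save_pretrained("goliath-style-merge")  # needs an enormous amount of RAM
```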

On a related note, I've been working on LLM-Shearing lately, which would essentially let us shear a transformer down to a much smaller size while preserving accuracy. Goliath-120B actually came out of an experiment in moving in the opposite direction of shearing. I'm now wondering if we can shear a finetuned Goliath-120B back down to ~70B and end up with a much better 70B model than the existing ones. That would of course be prohibitively expensive, since we'd need to do continued pretraining after the shearing/pruning process. A more likely approach, I believe, is shearing Mistral-7B down to ~1.3B and performing continued pretraining on about 100B tokens.
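
To make the shearing direction concrete, here is a very crude depth-pruning sketch that just keeps an evenly spaced subset of Mistral-7B's decoder layers and leans on continued pretraining to recover quality afterwards. This is not the LLM-Shearing algorithm itself, which learns pruning masks over layers, heads and hidden dimensions against a targeted data mixture (that width pruning is how it can reach ~1.3B, which pure layer dropping cannot); the repo id and target depth below are assumptions for illustration.

```python
# Crude depth-pruning sketch, NOT LLM-Shearing itself: keep an evenly spaced
# subset of decoder layers and rely on continued pretraining to recover quality.
import torch
from torch import nn
from transformers import AutoModelForCausalLM

SOURCE = "mistralai/Mistral-7B-v0.1"  # assumed source repo id
KEEP = 12                             # hypothetical target depth (source has 32 layers)

def prune_layers(model_id=SOURCE, keep=KEEP):
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    layers = model.model.layers
    total = len(layers)

    # Evenly spaced layer indices to keep.
    keep_idx = sorted({round(i * (total - 1) / (keep - 1)) for i in range(keep)})
    model.model.layers = nn.ModuleList(layers[i] for i in keep_idx)
    model.config.num_hidden_layers = len(keep_idx)
    return model

# prune_layers().save_pretrained("mistral-depth-pruned-sketch")
# The pruned checkpoint is only a starting point; the ~100B tokens of continued
# pretraining is what would actually recover the accuracy.
```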

If anyone has suggestions, please let me know. Cheers!

  • AlpinDale@alien.top (OP) · 1 year ago

    Makes sense that the benchmark results would be surprisingly low for Goliath. After playing around with it for a few days, I've noticed two glaring issues:

    • it tends to make slight spelling mistakes
    • it hallucinates words

    These happen rarely, but frequently enough to throw off benchmarks. I'm very positive this can be solved by a quick full finetune over a hundred or so steps, which would align the layers to work better together. A rough sketch of what such a run could look like is below.
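
    Purely as a sketch of what "quick full finetune" means here (assuming a Hugging Face Trainer setup, a placeholder text corpus, and made-up hyperparameters, not an actual planned run):

    ```python
    # Hypothetical ~100-step full finetune to let the stitched layers adapt to
    # each other; model id, dataset and hyperparameters are all placeholders.
    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    MODEL = "alpindale/goliath-120b"  # assumed repo id

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers often lack a pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

    # Placeholder corpus: any plain-text file, tokenized into capped-length chunks.
    raw = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
    train = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=2048),
                    batched=True, remove_columns=["text"])

    args = TrainingArguments(
        output_dir="goliath-aligned",
        max_steps=100,                   # the "quick" part: ~100 optimizer steps
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=10,
    )

    Trainer(
        model=model,
        args=args,
        train_dataset=train,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
    ```
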
    • noeda@alien.top · 1 year ago

      Not sure if you misread, but it's actually high, i.e. it's better than the Xwin and Euryale models it's made out of (in this particular quick test).

      It beat all the 70B models I tested there, although the gap is not super high.

      • AlpinDale@alien.top (OP) · 1 year ago

        Yes, but it should perform much higher than that. Turboderp ran MMLU at 3.25 bpw and it performed worse than other 70B models. I assume quantization further degrades the spelling consistency.

    • polawiaczperel@alien.top · 1 year ago

      Your model is a bit of a breakthrough in local LLMs. What plans do you have right now? Could you try to merge some big models, for example DeepSeek Coder or Phind? It would be awesome.