The title, pretty much.

I’m wondering whether a 70B model quantized to 4-bit would perform better than a 7B/13B/34B model at fp16. It would be great to get some insights from the community.
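
For concreteness, this is roughly what the two setups being compared look like with the transformers + bitsandbytes stack (a minimal sketch; the checkpoint names are placeholders):

```python
# Loading the two candidates side by side; checkpoint names are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Smaller model at full fp16 precision.
small_fp16 = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # placeholder 13B checkpoint
    torch_dtype=torch.float16,
    device_map="auto",
)

# Larger model quantized to 4-bit NF4 on the fly via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
big_4bit = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",  # placeholder 70B checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```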

  • Ion_GPT@alien.top · 1 year ago

    It depends on the task. For anything multilingual, like translation, quantization will destroy the model's performance. I suspect this is because the calibration data used during quantization is all English.
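
    If English-only calibration is the culprit, one workaround is to quantize with your own multilingual calibration set. A minimal sketch, assuming the AutoGPTQ library (the checkpoint id and sample texts are placeholders):

    ```python
    # Hypothetical GPTQ run with mixed-language calibration samples instead
    # of the usual English-only corpus; model id and texts are placeholders.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

    model_id = "meta-llama/Llama-2-70b-hf"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Calibration texts covering the languages you actually care about.
    calibration_texts = [
        "The quick brown fox jumps over the lazy dog.",
        "Le renard brun rapide saute par-dessus le chien paresseux.",
        "Der schnelle braune Fuchs springt über den faulen Hund.",
        "素早い茶色の狐はのろまな犬を飛び越える。",
    ]
    examples = [tokenizer(t) for t in calibration_texts]

    quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
    model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
    model.quantize(examples)  # these samples drive the per-layer rounding
    model.save_quantized("llama-2-70b-4bit-multilingual")
    ```

    In practice a calibration set would be a few hundred longer passages per language rather than single sentences, but the mechanism is the same: the activations from these samples drive the quantization decisions.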