The question is probably too basic, but how do I load the Llama 2 70B model with 8-bit quantization? I see TheBloke's Llama2_70B_chat_GPTQ repo, but it only offers 3-bit/4-bit quants. I have an 80 GB A100 and want to load Llama 2 70B with 8-bit quantization. Thanks a lot!
I haven't used GPTQ in a while, but I can say that GGUF has 8-bit quantization, which you can use with llama.cpp. Furthermore, if you use the original Hugging Face models, the ones you load with the transformers loader, there are options to load them in either 8-bit or 4-bit. A sketch of the GGUF route is below.
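For the GGUF route, something like the following should work with llama-cpp-python. This is a minimal sketch, not a definitive setup: the model filename is an assumption (use whichever Q8_0 GGUF you actually downloaded), and a Q8_0 quant of a 70B model is roughly 70+ GB, so it should just squeeze onto an 80 GB A100 as long as you keep the context modest.

```python
# Sketch: load an 8-bit (Q8_0) GGUF of Llama 2 70B with llama-cpp-python.
# The model_path is an assumed local filename -- point it at your own Q8_0 file.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-chat.Q8_0.gguf",  # assumed path to a Q8_0 GGUF
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=2048,       # keep the context small so the KV cache fits alongside the weights
)

out = llm("Q: What is 8-bit quantization? A:", max_tokens=128)
print(out["choices"][0]["text"])
```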
Thanks!
Grab the original (fp16) models. They can be quantized to 8-bit on the fly with bitsandbytes when you load them through transformers.
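A minimal sketch of that, assuming you have transformers, accelerate, and bitsandbytes installed and access to the gated meta-llama repo (or a local copy of the fp16 weights); at 8-bit the 70B weights come to roughly 70 GB, which is tight but workable on an 80 GB A100:

```python
# Sketch: load Llama 2 70B in 8-bit on the fly via transformers + bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"  # or a local path to the fp16 weights

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # quantize to 8-bit at load time

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place the quantized weights on the GPU
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```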