Phind-CodeLlama 34B is the best model for general programming and related technical work. It's no joker, though: it sticks to serious tasks. If you don't have access to an A100 80GB or multiple GPUs, try the quantized versions; a 4-bit quantization fits on a 24GB card.
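To see why 4-bit quantization makes the difference, here is a rough back-of-the-envelope VRAM estimate. It counts weight storage only and ignores activation and KV-cache overhead, so treat the numbers as a lower bound, not a measured benchmark:

```python
# Rough VRAM needed just to hold the weights of a 34B-parameter model
# at different precisions. Weights only -- activations and KV cache
# add more on top (assumption: overhead is ignored here).

def weight_memory_gib(n_params_billion: float, bits_per_param: float) -> float:
    """GiB required to store the weights at the given precision."""
    total_bytes = n_params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weight_memory_gib(34, bits):.1f} GiB")
```

At 16-bit the weights alone need over 60 GiB (hence the A100 80GB), while 4-bit brings them to roughly 16 GiB, leaving headroom on a 24GB card for activations and the KV cache.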