AntoItaly@alien.top to LocalLLaMA@poweruser.forum • Yet another 70B Foundation Model: Aquila2-70B-Expr
Source?
Wow, this model seems very good for the Italian language!
Replicate charges $0.000575/sec for an Nvidia A40 (48 GB VRAM).
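For context, here is a rough conversion of that quoted per-second rate into hourly and daily cost (a back-of-the-envelope sketch; Replicate's actual billing granularity and minimums may differ):

```python
# Quick conversion of the quoted A40 rate from per-second to per-hour/per-day.
# The $0.000575/sec figure is the one from the comment above.
rate_per_sec = 0.000575
per_hour = rate_per_sec * 3600   # ~$2.07 per hour
per_day = per_hour * 24          # ~$49.68 per day
print(f"${per_hour:.2f}/hour, ${per_day:.2f}/day")
```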
Also, they used DPO (Direct Preference Optimization).
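For anyone unfamiliar, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) in PyTorch. This is illustrative only, not Aquila2's actual training code; the function name and the precomputed log-prob inputs are assumptions:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratio of policy vs. frozen reference model for each completion
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO widens the margin between the chosen and rejected log-ratios
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```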
I hope GPT-3 becomes open source with Mira Murati as CEO.
Wow, with this quantization method, Llama 70B weighs only 17.5 GB!
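As a sanity check (assuming the 17.5 GB figure covers the weights alone), that works out to roughly 2 bits per parameter for a 70B-parameter model:

```python
# Back-of-the-envelope: 17.5 GB for ~70B weights implies about 2 bits/parameter,
# consistent with an aggressive ~2-bit quantization scheme.
params = 70e9
size_bytes = 17.5e9
bits_per_param = size_bytes * 8 / params
print(f"{bits_per_param:.2f} bits per parameter")  # -> 2.00
```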