ortegaalfredo@alien.top to LocalLLaMA@poweruser.forum • 🐺🐦⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4 • 1 year ago
I'm hosting Goliath 120B with a much better quant (4.5bpw EXL2; needs 3×3090), and it's scary, it feels alive sometimes. Also, with ExLlamaV2 it runs at about the same speed as a 70B model.
Check Panchovix's repo on Hugging Face.
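
For anyone who wants to try a setup like this, here's a minimal sketch of loading an EXL2 quant with the ExLlamaV2 Python API and letting it auto-split across GPUs. The model directory path is a placeholder (point it at whichever EXL2 quant you downloaded), and exact class names can shift between ExLlamaV2 versions:

```python
# Minimal sketch: load an EXL2-quantized model with ExLlamaV2 and
# auto-split its layers across all visible GPUs (e.g. 3x3090).
# The model_dir below is a placeholder path, not the actual repo layout.

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/goliath-120b-exl2-4.5bpw"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so autosplit can size it
model.load_autosplit(cache)               # spread layers over available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, num_tokens=200))
```

With `load_autosplit` you don't have to hand-tune a per-GPU split; the loader fills each card in turn, which is the usual way to fit a 120B EXL2 quant onto three 24 GB cards.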