Perimeter666@alien.top to LocalLLaMA@poweruser.forum • 🐺🐦‍⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4 • 1 year ago

Goliath is a masterpiece so far. I'm running it on 4x4090; speed is OK, but still not the best. For my taste it writes stories better than GPT-4 itself, immersing deeper and avoiding the useless watery poetic shit GPT-4 is full of. Just give the thing 16k context and with a 16x4096 setup it'll be divine lol
Perimeter666@alien.top to LocalLLaMA@poweruser.forum • dolphin-2.2-yi-34b released • 1 year ago

16k context is awesome. Now we need Goliath 120B with 16k context and I'm done with OpenAI.