Sunija_Dev@alien.top to LocalLLaMA@poweruser.forum • "Is the Open LLM Leaderboard a reliable source? yi:34B is at the top, but I get better results with the neural-chat:7B model" • 11 months ago

90% of the time a bigger model is "worse" because…
A) I messed up the prompt format.
B) (For roleplaying) Smaller models seem more creative because they're less consistent. But after a few messages, the lack of consistency makes them really bad.
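On point A: instruct-tuned models are sensitive to their exact prompt template, and feeding a model the wrong one silently degrades output. A minimal sketch of building a ChatML-style prompt by hand (the `format_chatml` helper and the ChatML delimiters are illustrative assumptions; models like yi:34B and neural-chat:7B each have their own template, so always check the model card):

```python
def format_chatml(messages):
    """Build a ChatML-style prompt string from role/content message dicts.

    This is one common template format, not universal: other models expect
    e.g. '### System:' / '### User:' markers or [INST] tags instead.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

If the model was fine-tuned on a different template, the same conversation formatted this way can make a 34B model look worse than a correctly prompted 7B one.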
Sunija_Dev@alien.top to LocalLLaMA@poweruser.forum • "For roleplay purposes, Goliath-120b is absolutely thrilling me" • 1 year ago

Examples? :3