eggandbacon_0056@alien.top to LocalLLaMA@poweruser.forum • Could multiple 7b models outperform 70b models? • 1 year ago
Still waiting for someone to use actual ensemble models: run inference over all of them and pick the max, or similar.
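For context, a minimal sketch of that idea (not from the thread): generate a candidate with each model, score every candidate under every model, and keep the one with the highest average log-likelihood. The checkpoint names "model-a" and "model-b" are placeholders; max average log-likelihood is just one possible aggregation rule alongside majority voting or max single-model confidence.

```python
# Minimal ensemble sketch: generate with each model, score every candidate
# under every model, keep the highest-scoring one.
# "model-a" and "model-b" are hypothetical checkpoint names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAMES = ["model-a", "model-b"]  # hypothetical 7b checkpoints
models = [AutoModelForCausalLM.from_pretrained(n) for n in MODEL_NAMES]
tokenizers = [AutoTokenizer.from_pretrained(n) for n in MODEL_NAMES]

prompt = "Q: Why is the sky blue?\nA:"

@torch.no_grad()
def avg_logprob(model, tokenizer, text):
    # Mean per-token log-likelihood of `text` under `model`:
    # the causal-LM loss is mean cross-entropy, so negate it.
    ids = tokenizer(text, return_tensors="pt").input_ids
    return -model(ids, labels=ids).loss.item()

# One candidate answer per ensemble member (greedy decoding for simplicity).
candidates = []
for model, tokenizer in zip(models, tokenizers):
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    candidates.append(tokenizer.decode(out[0], skip_special_tokens=True))

# Ensemble step: every model scores every candidate; pick the max.
best = max(
    candidates,
    key=lambda c: sum(avg_logprob(m, t, c) for m, t in zip(models, tokenizers))
    / len(models),
)
print(best)
```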
eggandbacon_0056@alien.top to LocalLLaMA@poweruser.forum • Orca 2: Teaching Small Language Models How to Reason • 1 year ago
Come on, stop that BS, smh …
https://preview.redd.it/u02s07bxup1c1.png?width=1567&format=png&auto=webp&s=f64dbc83fcd64fbf33c18007f3b8b45419703179
eggandbacon_0056@alien.top to LocalLLaMA@poweruser.forum • Finetuning LLMs: Does it add new knowledge to model or not? • 1 year ago
Wrong.
eggandbacon_0056@alien.top to LocalLLaMA@poweruser.forum • Finetuning LLMs: Does it add new knowledge to model or not? • 1 year ago
Obviously it is adding knowledge. The training is done the same way as pretraining, just with adjusted hyperparameters. …
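A hedged illustration of that point, assuming a Hugging Face setup: the loss is the same next-token cross-entropy used in pretraining; only hyperparameters such as learning rate and epoch count change. "base-model" and "my_corpus.txt" below are placeholders, not anything named in the thread.

```python
# Sketch only: fine-tuning with the same causal-LM objective as pretraining,
# just with adjusted hyperparameters (e.g. a much lower learning rate).
# "base-model" and "my_corpus.txt" are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model = AutoModelForCausalLM.from_pretrained("base-model")
tokenizer = AutoTokenizer.from_pretrained("base-model")
tokenizer.pad_token = tokenizer.eos_token  # needed for padding in the collator

# New-knowledge corpus, tokenized exactly as pretraining data would be.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned",
        learning_rate=2e-5,            # lower than a typical pretraining LR
        num_train_epochs=3,            # a few passes over the new data
        per_device_train_batch_size=4,
    ),
    train_dataset=dataset,
    # mlm=False -> plain next-token prediction, same loss as pretraining.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```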