We’ve seen pretty amazing performance from Mistral 7B compared with Llama 34B and Llama 2 13B. I’m curious: is it theoretically possible to build an SLM with 7–8B parameters that outperforms GPT-4 on all tasks? If so, what are the potential difficulties / problems to solve? And when do you expect such an SLM to arrive?
PS: sorry for the typo. This is my real question:
Is it possible for an SLM to outperform GPT-4 in all tasks?
“A 34B model beating all 70Bs and achieving the same perfect scores as GPT-4 and Goliath 120B in this series of tests!”
https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/
From a link another commenter posted.