We’ve seen pretty amazing performance from Mistral 7B compared with Llama 34B and Llama 2 13B. I’m curious: theoretically, would it be possible to build an SLM, with 7–8B parameters, that outperforms GPT-4 in all tasks? If so, what are the potential difficulties / problems to solve? And when would you expect such an SLM to arrive?

ps: sorry for the typo. This is my real question.

Is it possible for an SLM to outperform GPT-4 in all tasks?

  • bortlip@alien.top · 1 year ago

    It’s not really about fairness, though; it’s about knowing where things stand.

    I’ve used GPT-4 a lot, so I have a rough idea of what it can do in general, but I have almost no experience with local LLMs. That’s something I’ve only played with a little recently, after seeing the advances of the past year.

    So I don’t see it as a question that disparages local LLMs, and I don’t see fairness as an issue either — it’s not a competition to me.