Yi is a series of LLMs trained from scratch at 01.AI. The models have the same architecture as Llama, making them compatible with all the Llama-based ecosystems. Just in November, they released:

  • Base 6B and 34B models
  • Models with extended context of up to 200k tokens
  • Today, the Chat models

With this release, they are also providing 4-bit quantized models (via AWQ) and 8-bit quantized models (via GPTQ).
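Because the architecture matches Llama, the models should load with the stock Hugging Face transformers classes. A minimal sketch, assuming the 01-ai repo names on the Hub (verify the exact model IDs before running):

```python
# Minimal sketch: "01-ai/Yi-6B" is an assumed Hub repo name; the 34B and
# chat variants would load the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-6B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native precision
    device_map="auto",   # spread across available GPUs / CPU
)

inputs = tokenizer("Yi is a series of LLMs trained", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```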

Things to consider:

  • Llama-compatible format, so you can use it across a bunch of tools
  • The license unfortunately does not allow commercial use, but you can request commercial use and they are quite responsive
  • 34B is an amazing model size for consumer GPUs
  • Yi-34B is at the top of the Open LLM Leaderboard, making it a very strong base model for a chat one
  • hackerllama@alien.top (OP)

    Base models are not trained for conversations, so you cannot use them as chat models. It’s like GPT-4 and ChatGPT: GPT-4 is the base model, which is then fine-tuned to be conversational, and that is what you see in ChatGPT. Same as Llama vs. Llama Chat.
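In practice, the difference shows up in how the prompt is built: a chat fine-tune expects the conversation wrapped in its chat template, while a base model just continues raw text. A minimal sketch, assuming a "01-ai/Yi-34B-Chat" repo on the Hub that ships a chat template:

```python
# Sketch only: "01-ai/Yi-34B-Chat" is an assumed Hub repo name.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B-Chat")

messages = [{"role": "user", "content": "What is the capital of France?"}]

# The template wraps the turns in the special tokens the chat model was
# fine-tuned on; a base model has never seen this structure and would
# simply try to continue the raw text instead of answering.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```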

      • Utoko@alien.top

        Yes, both are the same model (“GPT4TurboChat”); the only difference is that in the WebUI there is a hidden system prompt in front, and it also has fixed parameters (top-p, temperature and so on) which you are not able to change.

        So the output is not exactly the same, but close.

        But the base models of GPT-3.5 and GPT-4 were never open to anyone outside of OpenAI.
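A minimal sketch of the API side of this, where you do control the system prompt and sampling parameters that the WebUI keeps fixed (uses the OpenAI Python client; the model name and values are illustrative assumptions):

```python
# Illustrative only: through the API you supply your own system prompt and
# sampling parameters, unlike the ChatGPT WebUI where they are hidden/fixed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name, substitute what you have access to
    messages=[
        {"role": "system", "content": "You are a terse assistant."},  # your own system prompt
        {"role": "user", "content": "Explain base model vs. chat model in one sentence."},
    ],
    temperature=0.2,  # fixed in the WebUI, adjustable here
    top_p=0.9,
)
print(response.choices[0].message.content)
```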

      • Tacx79@alien.top

        You can also drive a Toyota Corolla on a Formula 1 track and it will be decent too.