So Mistral-7B is a pretty impressive 7B-parameter model … but why is it so capable? Do we have any insights into its dataset? Was it trained far beyond the compute-optimal (Chinchilla) point? Any attempts at open reproductions, or merges to scale up the number of params?

  • Nkingsy@alien.top · 10 months ago

    Trained on a larger number of tokens. All the Llama models appear to be undertrained, especially the 70B.
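
    For a rough sense of what "undertrained" means here, a back-of-envelope sketch using the Chinchilla heuristic (~20 training tokens per parameter; Hoffmann et al., 2022) and Llama 2's publicly reported ~2T-token budget. Mistral-7B's token count hasn't been disclosed, so it's left out:

    ```python
    # Rough back-of-envelope: tokens per parameter vs. the Chinchilla
    # heuristic (~20 training tokens per parameter is compute-optimal;
    # Hoffmann et al., 2022). Llama 2's reported budget was ~2T tokens
    # at every model size.

    models = {
        "Llama-2-7B":  (7e9,  2e12),   # (parameters, training tokens)
        "Llama-2-70B": (70e9, 2e12),
    }

    for name, (params, tokens) in models.items():
        chinchilla_optimal = 20 * params  # compute-optimal token budget
        print(f"{name}: {tokens / params:.0f} tokens/param, "
              f"{tokens / chinchilla_optimal:.1f}x Chinchilla-optimal")

    # Llama-2-7B:  286 tokens/param, 14.3x Chinchilla-optimal
    # Llama-2-70B:  29 tokens/param,  1.4x Chinchilla-optimal
    ```

    By this yardstick the 70B sits closest to the bare compute-optimal budget, which is the usual basis for the "especially the 70B" claim: for inference-optimal models you want to train well past that point.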