If I have multiple 7B models, each trained on one specific topic (e.g. roleplay, math, coding, history, politics…), and an interface that decides, based on the context, which model to use, could this outperform bigger models while being faster?

  • wishtrepreneur@alien.topB
    1 year ago

    You can use a general LLM to “classify” a prompt and then route the entire prompt to a downstream LLM.

    Why can’t you just train the “router” LLM to decide which downstream LLM to use and pass the activations straight to it? Can’t you have “headless” (without an encoding layer) downstream LLMs? Then inference could use a (6.5B + 6.5B)-parameter model with the generalizability of a 70B model.
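
    For reference, here’s a minimal sketch of the “classify a prompt, then route it” setup quoted above, using Hugging Face’s zero-shot-classification pipeline as the router. The topic labels and the example-org/... model names are placeholders I made up, not real checkpoints.

    ```python
    from transformers import pipeline

    TOPIC_TO_MODEL = {            # hypothetical specialist checkpoints
        "coding":   "example-org/coder-7b",
        "math":     "example-org/math-7b",
        "roleplay": "example-org/roleplay-7b",
        "history":  "example-org/history-7b",
    }

    # A small zero-shot classifier acts as the "router".
    router = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

    def route(prompt: str) -> str:
        """Return the name of the specialist model to send the prompt to."""
        result = router(prompt, candidate_labels=list(TOPIC_TO_MODEL))
        best_topic = result["labels"][0]   # labels come back sorted by score
        return TOPIC_TO_MODEL[best_topic]

    def answer(prompt: str) -> str:
        model_name = route(prompt)
        # Loading a 7B model per request is wasteful; in practice you'd keep
        # the specialists resident and dispatch to a running instance.
        generator = pipeline("text-generation", model=model_name)
        return generator(prompt, max_new_tokens=256)[0]["generated_text"]

    print(route("Write a Python function that reverses a linked list."))
    ```

    The whole prompt goes to the chosen downstream model, so no activations ever cross model boundaries.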

    • feynmanatom@alien.topB
      1 year ago

      Hmm, not sure I track what an encoding layer is here? The encoding (prefill) phase involves filling the KV cache across the depth of the model. I don’t think there’s an activation you could just pass across without model surgery plus additional fine-tuning.
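
      For intuition, a quick sketch (using GPT-2 as a small stand-in, since any causal LM exposes the same interface) of what prefill actually produces: a per-layer KV cache whose shapes depend on that model’s depth, head count, and head dimension, which is why it can’t simply be handed to a different downstream model.

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      name = "gpt2"                      # small stand-in; any causal LM works
      tok = AutoTokenizer.from_pretrained(name)
      model = AutoModelForCausalLM.from_pretrained(name)

      inputs = tok("Route this prompt to a specialist model.", return_tensors="pt")
      with torch.no_grad():
          out = model(**inputs, use_cache=True)   # prefill: fills the KV cache

      # One (key, value) pair per transformer layer, each shaped
      # (batch, num_heads, seq_len, head_dim). Layer count, head count and
      # head_dim are all architecture-specific, so this cache only makes
      # sense to the model that produced it.
      cache = out.past_key_values
      for i in range(len(cache)):
          k, v = cache[i]
          print(f"layer {i:2d}: key {tuple(k.shape)}  value {tuple(v.shape)}")
      ```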