If I have multiple 7B models, where each model is trained on one specific topic (e.g. roleplay, math, coding, history, politics…), and I have an interface that decides, based on the context, which model to use — could this outperform bigger models while being faster?
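Roughly what I have in mind is something like this sketch (just to illustrate the idea — the model names and the `generate()` call are placeholders for however the 7B models would actually be served):

```python
# Rough sketch: a lightweight router picks one specialist 7B model per request.
# Model names and generate() are placeholders, assuming each specialist is
# served behind its own endpoint or loaded on demand.

TOPIC_KEYWORDS = {
    "math":     ["integral", "derivative", "equation", "proof", "solve"],
    "coding":   ["python", "function", "compile", "bug", "refactor"],
    "roleplay": ["character", "pretend", "scene", "act as"],
    "history":  ["war", "century", "empire", "revolution"],
    "politics": ["election", "policy", "parliament", "senate"],
}

# Hypothetical mapping from topic to a fine-tuned 7B checkpoint.
SPECIALISTS = {topic: f"my-7b-{topic}" for topic in TOPIC_KEYWORDS}
DEFAULT_MODEL = "my-7b-general"


def route(prompt: str) -> str:
    """Pick the specialist whose keywords best match the prompt."""
    text = prompt.lower()
    scores = {
        topic: sum(word in text for word in words)
        for topic, words in TOPIC_KEYWORDS.items()
    }
    best_topic, best_score = max(scores.items(), key=lambda kv: kv[1])
    return SPECIALISTS[best_topic] if best_score > 0 else DEFAULT_MODEL


def generate(model_name: str, prompt: str) -> str:
    """Placeholder: in practice this would call whatever serves the chosen model."""
    return f"[{model_name}] response to: {prompt}"


if __name__ == "__main__":
    for p in ["Solve this integral for me", "Write a python function to sort a list"]:
        print(generate(route(p), p))
```

In practice the router would probably be a small classifier or embedding model rather than keyword matching, but the overall shape of the system would be the same.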
Very true.
We honestly have no clue what’s going on behind ClosedAI’s doors.
I don’t know enough about MoEs to say one way or the other, so I’ll take your word on it. I’ll have to do more research on them.