sampdoria_supporter@alien.top to LocalLLaMA@poweruser.forum • Could multiple 7b models outperform 70b models?
1 year ago
I can’t believe I hadn’t run into this. Would you indulge me on the implications for agentic systems like AutoGen? I’ve been working on having experts cooperate that way rather than combining them into a single model.
Honestly, Ollama + LiteLLM is fantastic for people in your position (assuming you’re running Linux). Way easier to focus on your application and not have to deal with the complications you’re describing. It just works.
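For anyone following along, here's a minimal sketch of what that wiring looks like. It assumes Ollama is serving a model locally and the LiteLLM proxy has been started in front of it (e.g. `litellm --model ollama/mistral --port 8000` — the model name and port here are illustrative, not prescriptive). AutoGen then just sees an OpenAI-compatible endpoint:

```python
# Sketch: pointing an AutoGen-style config at a local LiteLLM proxy that
# forwards to Ollama. The model name, port, and agent name below are
# assumptions for illustration, not fixed values.
config_list = [
    {
        "model": "ollama/mistral",            # model as exposed by LiteLLM (assumed)
        "base_url": "http://localhost:8000",  # LiteLLM proxy endpoint (assumed port)
        "api_key": "not-needed",              # a local proxy ignores the key
    }
]

# With AutoGen installed, this config plugs straight into an agent, e.g.:
#   from autogen import AssistantAgent
#   assistant = AssistantAgent(name="expert", llm_config={"config_list": config_list})
```

The nice part of this design is that each "expert" agent can point at a different local model just by swapping the `model` entry, while AutoGen itself never needs to know Ollama is behind the curtain.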