Title says it all. Why spend so much effort fine-tuning and serving models locally when a closed-source model will do the same thing for cheaper in the long run? Is it a philosophical argument (as in free as in freedom vs. free as in beer), or are there practical cases where a local model does better?

Where I’m coming from: I need a copilot, primarily for code but maybe for automating personal tasks as well, and I’m wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).

  • kivathewolf@alien.top · 1 year ago

    I like the analogy that Andrej Karpathy posted on X some time back: the LLM OS.

    Think of the LLM as an OS. There are closed-source OSes like Windows and macOS, and then there are open-source OSes based on Linux. Each has its place. For most regular consumers, Windows and macOS are sufficient, but Linux has its place in all kinds of applications (from the Mars rover to your Raspberry Pi home-automation project). LLMs may evolve in a similar fashion. For highly specific use cases, it may be better to use a small LLM fine-tuned for your application. In cases where data sovereignty is important, it’s not possible to use OpenAI’s tools. And if you have an application that needs an AI service where no internet is available, local models are the only way to go.

    It’s also important to understand that when you use GPT-4, you aren’t using just an LLM but a full solution: the LLM, RAG, classic software functions (math), internet browsing, and maybe even other “expert LLMs”. When you download a model from Hugging Face and run it, you are using just one piece of the puzzle, so your results will not be comparable to GPT-4’s. What open source gives you is the ability to build a system like GPT-4, but you need to do the work to get it there.
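    To make that concrete, here is a minimal sketch of assembling a couple of those pieces yourself: a local quantized model served through llama-cpp-python, plus a naive keyword-overlap retrieval step standing in for a real RAG pipeline. The model path and notes directory are placeholder assumptions, not something from the original post; swap in whatever GGUF model and documents you actually use.

    ```python
    # Minimal local "LLM + retrieval" sketch using llama-cpp-python.
    # Assumes a quantized GGUF model on disk and a folder of plain-text notes.
    from pathlib import Path
    from llama_cpp import Llama

    # Load a quantized model locally (uses Metal on Apple Silicon builds).
    llm = Llama(model_path="models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)

    def retrieve(query: str, notes_dir: str = "notes", k: int = 3) -> list[str]:
        """Very naive retrieval: rank text files by keyword overlap with the query."""
        query_words = set(query.lower().split())
        scored = []
        for path in Path(notes_dir).glob("*.txt"):
            text = path.read_text()
            score = len(query_words & set(text.lower().split()))
            scored.append((score, text))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for score, text in scored[:k] if score > 0]

    def answer(query: str) -> str:
        """Stuff the retrieved notes into the prompt and ask the local model."""
        context = "\n\n".join(retrieve(query))
        prompt = (
            "Use the context to answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        )
        out = llm(prompt, max_tokens=256)
        return out["choices"][0]["text"]

    print(answer("What did I decide about the home automation wiring?"))
    ```

    The point is not that this matches GPT-4, but that each capability (retrieval, tool calls, browsing) is a separate component you have to bolt on yourself when you go local.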