Title says it all. Why spend so much effort fine-tuning and serving models locally when any closed-source model will do the same for cheaper in the long run? Is it a philosophical argument (as in free as in freedom vs. free as in beer)? Or are there practical cases where a local model does better?
Where I’m coming from is the need for a copilot, primarily for code but maybe for automating personal tasks as well, and wondering whether to put down the $20/mo for GPT-4 or roll my own personal assistant and run it locally (I have an M2 Max, so compute wouldn’t be a huge issue).
GPT-4 is plagued with outages. I’ve found the API unreliable to use in a production setting. Perhaps this will improve with time :)
Why buy a car when there is uber?
The alternative here isn’t Uber. It’s a fast public transportation system. Local LLMs still don’t hold a candle to GPT-4’s performance in my experience, no matter what the benchmarks say.
I have decent public transportation in my city. It still takes 2 hours to get somewhere. Won’t drop me to the door on my schedule.
Autonomy counts for something. Best case is always “get both”.
Why eat out when you can have a home-cooked meal
I don’t think we’re at a good home cooked meal yet. I think we’re at “Mom: we have AI at home, you don’t need that”
- Local AI belongs to you; GPT-4 doesn’t. You are simply buying permission to use it for a limited time, and the AI company can take it away from you at any time, for any reason. You can only lose your local AI if someone physically removes it from your PC and you can no longer download it.
- GPT-4 is censored and biased. Local AI has uncensored options.
- AI companies can monitor, log, and use your data for training their AI. With local AI, you own your privacy.
- GPT-4 requires an internet connection; local AI doesn’t.
- GPT-4 is subscription-based and costs money to use. Local AI is free to use.
Are there any good tutorials on where to start? I’m a FW engineer with an M1 MacBook; I don’t know much about AI or LLMs.
https://github.com/oobabooga/text-generation-webui
How much ram do you have? It matters a lot.
For a BIG simplification, think of the model size you can run (in billions of parameters; 13B means 13 billion) as roughly 50–60% of your RAM in GB.
If you have 16 GB, you can run a 7B model, for example.
If you have 128 GB, you can run a 70B model.
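The rule of thumb above can be sketched as a quick calculation. This is only a rough estimate for quantized models; the 0.55 fraction and the function name are illustrative choices of mine, not figures from any tool:

```python
# Rough rule of thumb from the comment above: the largest model you can
# comfortably run (in billions of parameters, quantized) is about
# 50-60% of your RAM in GB. 0.55 is an illustrative middle value.

def max_model_size_b(ram_gb, fraction=0.55):
    """Largest model (billions of parameters) that roughly fits in ram_gb."""
    return ram_gb * fraction

for ram in (8, 16, 32, 64, 128):
    print(f"{ram:>3} GB RAM -> ~{max_model_size_b(ram):.0f}B parameters")
```

So 16 GB lands around 8–9B, which is why a 7B model is the comfortable choice there.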
Look up ollama.ai as a starting point…
If you’re cool with just using the command line, ollama is great and easy to use.
Otherwise, you could download the LM Studio app on Mac, then download a model using the search feature, and start chatting. Models from TheBloke are good. You will probably need to try a few models (most likely in GGML format). Mistral 7B or Llama 2 7B is a good starting place IMO.
GPT4All may be the easiest on-ramp for your Mac. 7B models run fine on an 8 GB system, although they take up much of the memory.
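If you go the ollama route, it also exposes a local HTTP API, which is handy for the “automating personal tasks” use case in the original question. A minimal sketch, assuming ollama is running on its default port (11434) and a model such as mistral has been pulled; `build_payload` and `ask_local` are just illustrative names, not part of ollama itself:

```python
# Query a local ollama server over its HTTP API.
# Assumes `ollama serve` is running and `ollama pull mistral` has been done.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="mistral"):
    # stream=False asks the server for one complete JSON response
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt, model="mistral"):
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
#   print(ask_local("Why run an LLM locally?"))
```

Nothing leaves your machine here, which is the whole point of the thread.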
It’s not just philosophical. When you have a technology that holds the power to change the world, it should either be destroyed or put into everyone’s hands so that people can adapt and be at ease with it. Otherwise the person inventing the technology will rule the world. Or, in today’s world, will influence politics, will have support from powerful people, will attract wealth, and will make mistakes that could destroy the world.
So it’s not just about morals, it’s about survival.
because they are run by the borg (microsoft)
never think that ease is the only reason to do something. privacy, security, and overall control of your own domain are very good reasons.
another great reason: local never says no.
Data collection. You’re sending all of your queries to the GPT-4 servers, to people you don’t know. Who knows what they’re doing with it?
“closed-source model”
You gave your own answer:
Not monitored
Not controlled
Uncensored
Private
Anonymous
Flexible
I use it for development. All the things mentioned are nice, but there’s no way I could afford to do development using a paid service. I pass/generate way too many tokens and my company hasn’t really sponsored my work yet.
Having ChatGPT write a pirate poem hardly costs a thing. Getting an LLM to summarize a bunch of search results, or read an email inbox flagging certain scenarios, or parse through a codebase looking for specific features gets very, very expensive fast.
“Those who would give up privacy to purchase a temporarily better large language model interface, deserve neither” - Benjamin Franklin
GPT-4 is much much better for most normal use cases. Hopefully that changes one day, but OpenAI’s lead might just keep getting bigger.
I was long a hold out for ChatGPT because I wasn’t confident about OpenAI’s handling of my personal information. I’ve started using Llama just a couple weeks ago, and whilst I’m happy that it can be run locally, I’m still looking forward to open source LLMs, because Llama isn’t actually open source.
Control. You can have the control, or you can let someone else have the control. Open source LLMs give the masses another option. An option they don’t have to pay for. Your question is like asking why people use OpenOffice instead of Microsoft 365.
Local models aren’t censored lol