Man, those H100s really are on another level. I shudder to think where we'll be in 5 years.
Wrong again, this is a peanut butter and jelly sandwich
For me, it’s just another option in the tool belt. If I need something that I know a local model can do, I will do it on a local model. Normally this is summaries, reviewing papers/essays, and bouncing basic ideas around to see if I’m missing anything. They also excel at creative writing and role playing, which can be fun. If I need something with a concrete answer, or high confidence, I’ll use the GPT-4 API.
Mistral OpenOrca thinks it’s ChatGPT. It was trained on its responses. If ChatGPT has a baked-in response, OpenOrca probably has it too.
Commenting to follow, this is fascinating.
Seconding this. I haven’t tested it for code yet, but it’s very enjoyable to converse with. I find it does summaries quite well; I have asked it about a wide range of topics and it has been ~90% correct on the first response. It can kind of fall apart after going back and forth a few times, but it’s only 7B.
For me it’s privacy and multitasking NLP. Traditional NLP needed a lot of vertical fine-tuning for each task, whereas a capable LLM really just needs a single lateral fine-tune.
And obviously, privacy. Just offering semantic search and querying of a company’s knowledge base is a huge value add. More companies than not don’t want to rely on an API accessing all their sales data etc., but you can set up an air-gapped LLM and give them comparable capabilities.
Also, I do creative audio and video work as a side gig, plus just normal LLM things: helping work through ideas or making sure I’m not missing an obvious creative choice. A lot of the local models have fine-tunes just for creative writing, and they are so much fun to play with as a creative tool.
I agree with the other guy: you just need semantic search of some flavor, unless you have industry-specific language that the LLM won’t understand well enough to work with.