As requested, this is the subreddit’s second megathread for model discussion. This thread will now be hosted at least once a month to keep the discussion updated and help reduce identical posts.
I also saw that we hit 80,000 members recently! Thanks to every member for joining and making this happen.
Welcome to the r/LocalLLaMA Models Megathread
What models are you currently using and why? Do you use 7B, 13B, 33B, 34B, or 70B? Share any and all recommendations you have!
Examples of popular categories:
- Assistant chatting
- Chatting
- Coding
- Language-specific
- Misc. professional use
- Role-playing
- Storytelling
- Visual instruction
Have feedback or suggestions for other discussion topics? All suggestions are appreciated and can be sent to modmail.
P.S. LocalLLaMA is looking for someone who can manage Discord. If you have experience modding Discord servers, your help would be welcome. Send a message if interested.
Has anyone tried out TheBloke’s quants for OpenHermes 2.5 Neural Chat v3.1 7B?
OpenHermes 2.5 7B was really good by itself, but the merge with Neural Chat seems REALLY good so far, based on my limited chats with it.
https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF
This seems like it would be pretty good. Downloading now to try it, thanks!
After seeing your comment I tried the OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF model you mentioned.
Unfortunately, with my setup it didn’t respond very well for me.
Honestly, I don’t think the concept of that merge is very good.
OpenHermes is fantastic. If I had to state its flaws, I’d say its prose is a bit dry and the dialogue tends to speak past you rather than clearly responding to you. Only issues for roleplay, really.
From all I’ve read, NeuralChat is much the same (though I haven’t gotten NeuralChat to work particularly well for me at all), so I’d expect any merge of those two models to be a bit lacking in the roleplay department.
That said, if you want a model for more professional purposes, it might be worth further testing.
For roleplay, Misted-7B is leagues better, at least in my testing on my setup.
Any chance of posting your settings? :D
Mostly I’m still using slightly older models, with a few slightly newer ones now:
- marx-3b-v3.Q4_K_M.gguf for “fast” RAG inference,
- medalpaca-13B.ggmlv3.q4_1.bin for medical research,
- mistral-7b-openorca.Q4_K_M.gguf for creative writing,
- NousResearch-Nous-Capybara-3B-V1.9-Q4_K_M.gguf for creative writing, and probably for giving my IRC bots conversational capabilities (a work in progress),
- puddlejumper-13b-v2.Q4_K_M.gguf for physics research, questions about society and philosophy, “slow” RAG inference, and translating between English and German,
- refact-1_6b-Q4_K_M.gguf as a coding copilot, for fill-in-the-middle (see the sketch after this list),
- rift-coder-v0-7b-gguf.git as a coding copilot when I’m writing Python or trying to figure out my coworkers’ Python,
- scarlett-33b.ggmlv3.q4_1.bin for creative writing, though less than I used to.
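For anyone curious what fill-in-the-middle prompting looks like in practice, here’s a minimal sketch using llama-cpp-python. The StarCoder-style FIM tokens are an assumption on my part (I believe the Refact family uses them, but check the model card for the exact special tokens), and the file name is just my local copy:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(model_path="refact-1_6b-Q4_K_M.gguf", n_ctx=2048)

    prefix = "def median(xs):\n    xs = sorted(xs)\n    "  # code before the cursor
    suffix = "\n    return result"                         # code after the cursor

    # FIM prompt: the model generates the missing middle between prefix and suffix
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    out = llm(prompt, max_tokens=64)
    print(out["choices"][0]["text"])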
I also have several models which I’ve downloaded but not yet had time to evaluate, and I’m downloading more as we speak (though even more slowly than usual; a couple of weeks ago my download rates from HF dropped to roughly a third of what they were, and I don’t know why).
Some which seem particularly promising:
- yi-34b-200k-llamafied.Q4_K_M.gguf
- rocket-3b.Q4_K_M.gguf
- llmware’s “bling” and “dragon” models. I’m downloading them all, though so far GGUFs are only available for three of them. I’m particularly intrigued by llmware-dragon-falcon-7b-v0-gguf, which is tuned specifically for RAG and is supposedly “hallucination-proofed”, and llmware-bling-stable-lm-3b-4e1t-v0-gguf, which might be a better IRC-bot conversational model.
Of all of these, the one I use most frequently is PuddleJumper-13B-v2.
I use SOLAR-v0-70b, one of the best models out there. The main point I like: the creators of the model (Upstage) run inference themselves, so you can just connect to it via API. It’s the best quality for the best price, imo.
They run their inference on together.ai, if you are interested.
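To elaborate on the API bit: Together exposes an OpenAI-compatible endpoint, so a minimal sketch looks like the following (the exact model ID is an assumption on my part; look it up in Together’s model list):

    # pip install openai
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.together.xyz/v1",  # Together's OpenAI-compatible endpoint
        api_key="YOUR_TOGETHER_API_KEY",
    )

    resp = client.chat.completions.create(
        model="upstage/SOLAR-0-70b-16bit",  # assumed model ID; check Together's list
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)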
Using Yi-34b Dolphin right now.
Can’t wait to try Qwen 14B and 72B.
Goliath 120B
- LoneStriker_OpenHermes-2-Mistral-7B-8.0bpw-h6-exl2 - my generic go-to
- LoneStriker_airoboros-l2-70b-3.1-2.4bpw-h6-exl2 - this one (and the whole family) is great for creative and precise tasks. If they don’t work, I jump to WizardLM or Vicuna.
- oobabooga_CodeBooga-34B-v0.1-EXL2-4.250b and phind-codellama-34b-v2.Q4_K_M.gguf are great for coding. I haven’t decided which one is better yet.
On a system-limited machine (2017 i5 iMac, CPU only), I am getting very pleasing results with:
OpenHermes 2 Mistral (7B, 4-bit K_M quant) for general chat, desktop assistant, and some coding assistance - Ollama backend with my own front-end UI and a llama-index implementation. Haven’t tried 2.5 yet but may.
Synatra (a 7B Mistral fine-tune, 4-bit K_M quant), which seems to produce longer, spicier responses with the same system prompt (same use case as above).
Deepseek-coder 6.7B (4-bit quant) as a coding-assistant alternative to GPT-3.5 - just started trying it out in the last week or so, and building a personalized coding-assistant front-end UI for fun.
OrcaMini-3B for chat when I just want something smaller and faster to run on my machine - the 7B quants are about the max for the old iMac. But OrcaMini sometimes doesn’t give me great output.
IIUC, for coding you suggest deepseek-coder-6.7b-instruct.Q4_K_M.gguf, right? Can I run it with 16 GB? I’m on an i5 Windows machine, using LM Studio.
Yes, that’s the one from TheBloke. I imagine you could, but try it! I can run it on an old i5 3.4 GHz chip with 8 GB RAM, and it seems to run as long as I’m not keeping a bunch of other stuff open and using up RAM. I haven’t really used it a lot, so I can’t tell fully yet.
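For a rough sanity check (back-of-envelope numbers, assuming Q4_K_M averages around 4.8 bits per weight): 6.7B parameters × 4.8 bits ÷ 8 bits per byte ≈ 4 GB for the weights, plus a few hundred MB for context and buffers, so 16 GB leaves plenty of headroom for the OS and a browser.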
OpenHermes 2.5 as an assistant
Tiefighter for other use
Openhermes seems pretty capable of “other use”, no?
Llama2-70B for generating the plan, then CodeLlama-34B for coding, or Llama2-13B for executing the instructions from Llama2-70B.
Currently in the process of exploring what other models to add once Llama2-70B generates the plan for what needs to get done.
What do you mean by generating the plan? Can you describe your workflow?
Let’s say you’ve got a task like “write a blog post”. Instead of issuing a single command, have a GPT model plan it out. Something akin to:

    system: You are a planning AI. You will come up with a plan that will assist the user with any task they need help with, as best you can. You will lay out a clear, well-structured plan.
    user: Hello Planner AI, I need your help coming up with a plan for the following task: {user_prompt}

So now Llama2-70B generates a plan with numbered steps. Next, you can regex on the numbers and then pass each step along to the worker model that will execute it. Since LLMs write more than humans do, and add extra details that LLMs can follow, the downstream models will do a better job executing the task than if you just asked a smaller model to “write me a blog post about 3D printing D&D minis”.
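Here’s a minimal sketch of that planner/worker split, using llama-cpp-python purely for illustration (the model files, prompts, and the step-splitting regex are all assumptions; adapt them to your own stack):

    # pip install llama-cpp-python
    import re
    from llama_cpp import Llama

    planner = Llama(model_path="llama-2-70b.Q4_K_M.gguf", n_ctx=4096)
    worker = Llama(model_path="llama-2-13b.Q4_K_M.gguf", n_ctx=4096)

    task = "write a blog post about 3D printing D&D minis"
    plan_prompt = (
        "You are a planning AI. Lay out a clear, numbered plan.\n"
        f"User: I need help with the following task: {task}\n"
        "Plan:\n"
    )
    plan = planner(plan_prompt, max_tokens=512)["choices"][0]["text"]

    # recover the numbered steps: split on leading "1.", "2.", ...
    steps = [s.strip() for s in re.split(r"(?m)^\s*\d+\.\s*", plan) if s.strip()]

    # hand each step to the worker model for execution
    results = []
    for step in steps:
        out = worker(f"Execute this step of the plan: {step}\nResult:", max_tokens=512)
        results.append(out["choices"][0]["text"])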
Now replace the blog-post task with whatever it is you’re doing and you’ll be getting results.
Wow. Thank you so much for this explanation!!! ❤️
WizardLM (WizardLM-70b-v1.0.Q8_0 when quality is needed, WizardLM-30B Q5_K_M when speed is needed).
If you can run Q8_0 but use Q5_K_M for speed, any reason you don’t just run an exl2 at 8bpw?
A few folks mentioning EXL2 here. Is this now the preferred Exllama format over GPTQ?
I won’t use anything else for GPU processing.
The quality bump I’ve seen on my 4090 is very noticeable in speed, coherence, and context.
Wild to me that TheBloke never uses it.
Easy enough to find quants, though, if you just go to Models, search “exl2”, and sort by whatever.
EXL2 runs fast, and the quantization process implements some fancy logic behind the scenes to do something similar to k_m quants for GGUF models. Instead of quantizing every slice of the model to the same bits per weight (bpw), it determines which slices are more important and uses a higher bpw for those slices, and a lower bpw for the less-important slices where the effects of quantization won’t matter as much. The result is that the average bits per weight across all the slices works out to what you specified, say 4.0 bits per weight, but the performance hit to the model is less severe than its level of quantization would suggest, because the important slices are maybe 5.0 or 5.5 bpw, something like that.
In short, EXL2 quants tend to punch above their weight class due to some fancy logic going on behind the scenes.
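To make the averaging concrete, here’s a toy example (slice sizes and bit widths invented for illustration):

    # four equal-sized slices; "important" ones get more bits
    weights_per_slice = [100e6, 100e6, 100e6, 100e6]
    bpw_per_slice = [5.5, 4.5, 3.0, 3.0]

    total_bits = sum(n * b for n, b in zip(weights_per_slice, bpw_per_slice))
    avg_bpw = total_bits / sum(weights_per_slice)
    print(avg_bpw)  # 4.0 -- hits the requested average while protecting important slices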
Thank you! I’m reminded of variable bit rate encoding used in various audio and video formats, this sounds not dissimilar.
EXL2 provides more options and has a smaller quality decrease, as far as I know.
In addition to what others said, exl2 is very sensitive to the quantization dataset, which it uses to choose where to assign those “variable” bits.
Most online quants use wikitext. But I believe if you quantize models yourself on your own chats, you can get better results, especially below 4bpw.
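If you want to try that, the quantizer lives in the exllamav2 repo; from memory the invocation is something like “python convert.py -i <hf-model-dir> -o <working-dir> -cf <output-dir> -b 4.0 -c my_chats.parquet”, where -c points at your own calibration data. Double-check the flags against the repo’s README, though, since they may have changed.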
Yi-34B-Chat
It’s not the most uncensored, and probably not the best, but I really like its prose and coherence.
And the Q4_K_M GGUF runs on my 32 GB RAM laptop.
(and yes it’s slow)
What kind of stuff do you use it for?
Stupid stuff and silly scenarios. My latest:
Jane, Marc’s bratty, over-energetic sister, really wants to borrow Marc’s shiny new convertible. Marc is not so sure…
Write their over-the-top bickering. Jane is relentless and stops at nothing, to the exasperation of Marc.
For all serious stuff I use GPT-4, of course.
Yi 34B
Only using it because I’m in the middle of an upgrade, and so far all I’ve added is an extra stick of RAM, which lets me just barely run Yi 34B. Waiting on another stick of RAM plus a second GPU to run LZLV 70B.
13B and 20B Noromaid for RP/ERP.
I am experimenting with comparing GGUF to EXL2, as well as with stretching context. So far, Noromaid 13B at GGUF Q5_K_M stretches to 12k context on a 3090 without issues. Noromaid 20B at Q3_K_M stretches to 8k without issues and is, in my opinion, superior to the 13B. I have recently stretched Noromaid 20B to 10k using 4bpw EXL2 and it is giving coherent responses, though I haven’t used it enough to assess the quality yet.
All this is to say, if you enjoy roleplay you should be giving Noromaid a look.
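For the GGUF side of the context stretching, here’s a minimal llama-cpp-python sketch (the file name is hypothetical, and the scale factor is my assumption: Noromaid 13B is Llama-2-based with a 4k native window, so 12k needs roughly 3x linear RoPE scaling):

    from llama_cpp import Llama

    llm = Llama(
        model_path="noromaid-13b.Q5_K_M.gguf",  # hypothetical local file name
        n_ctx=12288,            # 12k target context
        rope_freq_scale=1 / 3,  # ~3x linear RoPE stretch over the 4k native window
    )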
TheBloke/mistral-7B-finetuned-orca-dpo-v2-GGUF
Makes most 13B models bite the dust. I use it for a local application, hence CPU-only inference using llama.cpp with CLBlast support compiled in. It generates about 10 tokens/sec on a Dell laptop with an Intel i7.
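In case anyone wants to replicate the setup: as far as I remember, llama.cpp builds with CLBlast via “make LLAMA_CLBLAST=1” (or -DLLAMA_CLBLAST=ON with CMake), but check the repo’s BLAS build docs for your platform before relying on that.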