Hi everyone, I’d like to share something that I’ve been working on for the past few days: https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0
This model is the result of interleaving layers from three different models: Euryale-1.3-L2-70B, Nous-Hermes-Llama2-70b, and SynthIA-70B-v1.5, resulting in a model that is larger than any of the three used for the merge. I have branches on the repo for exl2 quants at 3.0 and 4.85 bpw, which will allow the model to run in 48GB or 80GB of VRAM, respectively.
I love using LLMs for RPs and ERPs, and so my goal was to create something similar to Goliath, which is honestly the best roleplay model I’ve ever used. I’ve done some initial testing with it, and so far the results seem encouraging. I’d love to get some feedback on this from the community! Going forward, my plan is to do more experiments with merging models together, possibly going even larger than 120b parameters to see where the gains stop.
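For anyone curious what “interleaving layers” actually looks like, here’s a rough Python sketch of the idea. The slice ranges and the naive in-memory approach are just for illustration (check the model card for the exact recipe), so treat this as a sketch, not the Venus recipe:

```python
# Rough sketch of a "frankenmerge": build a taller model by interleaving
# contiguous chunks of decoder layers from several same-architecture donors.
# Slice ranges below are made up for illustration, NOT the Venus recipe.
import torch
from transformers import AutoModelForCausalLM

# Donor repo IDs are from memory; check the model card for the exact sources.
donor_ids = {
    "euryale": "Sao10K/Euryale-1.3-L2-70B",
    "hermes": "NousResearch/Nous-Hermes-Llama2-70b",
    "synthia": "migtissera/SynthIA-70B-v1.5",
}
donors = {
    name: AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)
    for name, repo in donor_ids.items()
}  # NB: this naive version holds all donors in RAM; real tools stream shards

# Hypothetical slice plan: (donor, start layer, end layer), end exclusive.
slice_plan = [
    ("euryale", 0, 20),
    ("hermes", 10, 30),
    ("synthia", 20, 40),
    # ... more overlapping chunks ...
    ("euryale", 60, 80),  # keep one donor's original tail
]

merged = donors["euryale"]  # start from one donor, swap in the new stack
new_layers = torch.nn.ModuleList()
for name, start, end in slice_plan:
    for layer in donors[name].model.layers[start:end]:
        new_layers.append(layer)
merged.model.layers = new_layers
merged.config.num_hidden_layers = len(new_layers)
merged.save_pretrained("venus-sketch")
```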
Huh, interesting weave. It did feel like it made fewer spelling and simple errors when I compared it to Goliath.
Once again Euryale’s included. The lack of Xwin makes it better, IMO; Xwin may be smart, but it has repetition issues at long context. That’s just my opinion.
I’d honestly scale it down; there’s really no need to go 120b. From testing a while back, ~90-100b frankenmerges have the same effect.
Goliath makes spelling errors?
I’ve only used a handful of Mistral 7Bs due to constraints, but I’ve never seen them make any spelling errors.
Is that a side effect of merging?
I have noticed, too, that Goliath makes spelling errors somewhat frequently, more often than other models.
It doesn’t seem to affect the “smarts” part as much, though. It otherwise still produces high-quality text.
I will set this to run overnight on HellaSwag 0-shot, like I did here on Goliath when it was new: https://old.reddit.com/r/LocalLLaMA/comments/17rsmox/goliath120b_quants_and_future_plans/k8mjanh/
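(For reference, 0-shot HellaSwag just means scoring each of the four candidate endings by its log-likelihood under the model, with no example prompts, and taking the argmax. The real run uses lm-evaluation-harness, but the core loop is roughly this sketch; note the context/ending token boundary handling is approximate:)

```python
# Rough sketch of 0-shot HellaSwag scoring: pick the ending with the highest
# total log-likelihood. Real runs use lm-evaluation-harness instead.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nsfwthrowitaway69/Venus-120b-v1.0"  # any causal LM works here
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

@torch.no_grad()
def ending_logprob(context: str, ending: str) -> float:
    """Total log-prob of the ending tokens, conditioned on the context."""
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
    full = tok(context + ending, return_tensors="pt").input_ids.to(model.device)
    logprobs = torch.log_softmax(model(full).logits[0, :-1], dim=-1)
    # row i predicts token i+1, so ending tokens start at row n_ctx - 1
    # (splitting at n_ctx is approximate: BPE can merge across the boundary)
    ending_ids = full[0, n_ctx:]
    return logprobs[n_ctx - 1:].gather(1, ending_ids.unsqueeze(1)).sum().item()

ds = load_dataset("hellaswag", split="validation").select(range(100))
hits = 0
for ex in ds:  # small sample; the full validation set takes much longer
    scores = [ending_logprob(ex["ctx"], " " + e) for e in ex["endings"]]
    hits += int(scores.index(max(scores)) == int(ex["label"]))
print(f"0-shot accuracy on the sample: {hits / len(ds):.3f}")
```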
Thanks for the model! I started investigating some approaches to combining models, to see if the result can be better than its individual parts. Just today I finished code that uses a genetic algorithm to pick out parts and frankenstein 7B models together (trying to prove that there is merit to this approach using smaller models… but we’ll see).
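The skeleton of it looks roughly like this (a simplified sketch; the fitness function here is a stand-in, since the real one has to actually build each candidate merge and benchmark it):

```python
# Simplified sketch of a genetic algorithm over layer plans. The fitness
# function is a stand-in: the real one builds the merged 7B candidate and
# scores it on a benchmark, which is the expensive part.
import random

NUM_LAYERS = 32   # decoder layers per 7B donor
NUM_DONORS = 2
GENOME_LEN = 48   # layers in the merged model

def random_genome():
    # each gene picks (donor index, layer index) for one slot in the stack
    return [(random.randrange(NUM_DONORS), random.randrange(NUM_LAYERS))
            for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in: reward contiguous same-donor runs (the "keep it in chunks"
    # folk wisdom). Replace with: build the merge, then eval on a benchmark.
    return sum(1 for a, b in zip(genome, genome[1:])
               if a[0] == b[0] and b[1] == a[1] + 1)

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(genome, rate=0.05):
    return [(random.randrange(NUM_DONORS), random.randrange(NUM_LAYERS))
            if random.random() < rate else gene for gene in genome]

population = [random_genome() for _ in range(40)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # elitism: keep the best plans
    population = survivors + [
        mutate(crossover(*random.sample(survivors, 2))) for _ in range(30)]

best = max(population, key=fitness)
print("best plan (first genes):", best[:8])
```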
I’ll report back with the HellaSwag results on this model.
Thanks! I’m eager to see the results :)
Any tips/attempts on frankensteining two Yi-34B models together to make a ~51B model?
Don’t shuffle the layers; keep them in contiguous chunks.
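E.g. a hypothetical chunk plan for two Yi-34B donors (60 decoder layers each) could look like the sketch below; the ranges are guesses to land near 1.5x, not a tested recipe:

```python
# Hypothetical chunked stacking plan for two Yi-34B donors (60 layers each):
# contiguous, overlapping chunks rather than fine-grained shuffling.
# Ranges are illustrative guesses, not a tested recipe.
plan = [
    ("yi_a",  0, 30),   # donor A, front half
    ("yi_b", 15, 45),   # donor B, overlapping middle
    ("yi_a", 30, 60),   # donor A, back half (keep the original tail)
]
total = sum(end - start for _, start, end in plan)
print(total)  # 90 layers ~= 1.5x the donor depth -> roughly a ~51B model
```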
We need 2 or 3 Yi models stacked together, and then we can face them off against the 70Bs.
Exactly what I was thinking. I just fail miserably each time I merge the layers.
Something close already happened (a straight merge at 34b size), and the result is good: https://huggingface.co/brucethemoose/Capybara-Tess-Yi-34B-200K-DARE-Ties
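(For anyone unfamiliar, DARE merging roughly means: take each fine-tune’s weight delta from the shared base, randomly drop most of it, rescale what survives, and add the deltas back onto the base. A toy sketch on a single tensor below; the drop rate is just a typical value, and the TIES sign-election step is omitted:)

```python
# Toy sketch of DARE-style merging on one weight tensor. Real merges apply
# this per-tensor across the whole model (plus TIES sign election, omitted
# here); p=0.9 is just a commonly used drop rate, not the recipe above.
import torch

def dare_delta(finetuned, base, p=0.9):
    """Drop each delta element with prob p, rescale survivors by 1/(1-p)."""
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, 1.0 - p))
    return mask * delta / (1.0 - p)

def dare_merge(base, finetunes, p=0.9):
    merged = base.clone()
    for ft in finetunes:          # average the sparsified deltas back in
        merged += dare_delta(ft, base, p) / len(finetunes)
    return merged

# stand-in tensors playing the role of a base model + two fine-tunes
base = torch.randn(4096, 4096)
ft_a = base + 0.01 * torch.randn_like(base)
ft_b = base + 0.01 * torch.randn_like(base)
merged = dare_merge(base, [ft_a, ft_b])
```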
Great work! Does anyone happen to have a guide, tutorial, or paper on how to combine or interleave models? I would also love to try frankensteining models myself.
Is this model better at writing stories? I want to compare it with Goliath, which I use on my local computer. Goliath can write stories, but it definitely lacks originality and creativity.
Hard to say. Try it out and let me know!
One thing’s for sure: it handles RoPE scaling much better than Goliath. Goliath starts falling apart at about 10-12k context for me, but Venus didn’t start doing so until like 30k.
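(For reference, in plain transformers that kind of context extension looks something like the snippet below; the factor is illustrative, and in ooba/exllama you’d use its context-extension settings instead:)

```python
# Linear RoPE scaling via transformers: a factor of 8 stretches Llama-2's
# 4k positions to ~32k. The factor is illustrative, not a recommendation.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nsfwthrowitaway69/Venus-120b-v1.0",
    rope_scaling={"type": "linear", "factor": 8.0},
    device_map="auto",
)
```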
What hardware are you guys even using to run something this big?
I…also L ove oliath! I … i RALLY hope you’re is better. A random hallucination walks up and punches trollsalot right in the face. WHY ARENT WE HAVING SEX YET! she screams
Try it out and let me know! I included Nous-Hermes in the merge because I’ve found it to be one of the best roleplaying models that doesn’t hallucinate too much. However, in my experience Nous-Hermes also tends to lack a bit in terms of the prose it writes. I was hoping to get something that’s coherent most of the time while still being creative.
I could not get any of the quants to load; it looks like the config is looking for XX of 25 safetensors:
FileNotFoundError: No such file or directory: "models\Venus-120b-v1.0\model-00001-of-00025.safetensors"
with exl2-3.0bpw having only XX of 06 safetensors
🤔 How are you trying to load it? I tested both quants in text-generation-webui and they worked fine for me. I used ExLlamav2_HF to load it.
It defaulted to Transformers; it loaded right away with ExLlamav2_HF. Thank you, I didn’t know what I didn’t know.
On ooba, models without “exl” in the folder name get routed to the Transformers loader by default, so that may be why he got that behavior.
“possibly going even larger than 120b parameters”
I didn’t know that was possible. Have people made a 1T model yet?
haha damn, I should have taken the NSFW warning seriously before clicking the huggingface link in front of people lol.
Is this model any good for SFW stuff?
Yeah, I wanted a picture to go with the model, and that’s what Stable Diffusion spat out :D
And I haven’t tried it for SFW stuff, but my guess is that it would work fine.
“Is this model any good for SFW stuff?”
Every uncensored LLM I’ve tried worked fine for SFW stuff.
If you’re talking about storytelling, they might be even better than SFW models. And I’ve never seen NSFW/uncensored models write NSFW stuff unless explicitly asked to.
Yeah, okay, I’ll give them a try again. I only ever tried one, and it was completely insane; it always ended up with something sexual, and after a while it started randomly spamming words like ‘sex toy’.
Looks like it was taken down / experimental: https://old.reddit.com/r/LocalLLaMA/comments/16qrdpa/plotbot_13b_finetuned_llama_2_model_for_writing/
That’s great work!
Just a question… Has anyone tried to fine-tune one of these “Frankenstein” models? Some time ago (when the first “Frankenstein” came out; it was a ~20B model) I read here on reddit that lots of users agreed a fine-tune on those merged models would give “better” results, since it would help to “smooth” and adapt the merged layers. Probably I lack the technical knowledge needed to understand, so I’m asking…
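From what I understand, the cheap version of that would be a LoRA pass over the merged weights, something like the peft sketch below (the hyperparameters and setup are placeholders I’m guessing at, not a known-good recipe):

```python
# Sketch of a LoRA pass to "smooth" a frankenmerge's seams, using peft.
# Hyperparameters and target modules are placeholder guesses.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nsfwthrowitaway69/Venus-120b-v1.0",
    torch_dtype=torch.float16,
    device_map="auto",
)

lora = LoraConfig(
    r=16,                 # low-rank adapter dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # a tiny fraction of the 120b weights
# ...then train as usual (e.g. transformers Trainer) on a roleplay dataset
```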
Tess-XL-1.0… so far I didn’t like the results.
Is that a LoRA or a full fine-tune?
Hell yeah! No Xwin. I hate that model. I’m down for the 3-bit. I didn’t like Tess-XL so far, so hopefully you’ve made a David here.
I used this dataset for the quants: https://huggingface.co/datasets/jasonkstevens/pippa-llama2-chat/tree/refs%2Fconvert%2Fparquet/default/train
I still have this feeling in my gut that closedai has been doing this for a while. It seems like a free lunch.
I don’t think so; this is something you do when you’re GPU-poor. closedai would just not undertrain their models in the first place.
It seems like my dual RTX 4090 setup falls just short of the memory needed to load it, whereas Goliath’s 3.0 bpw model loads fine.
Looks promising! I’ll try loading it up on my 2x3090 setup on 3.0bpw
Oh, we definitely need a GGUF variant of this model. I love Goliath-120B (I even think it might be better than Falcon-180B) and would love to run this model.