Hi everyone, I’d like to share something that I’ve been working on for the past few days: https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0

This model is the result of interleaving layers from three different models: Euryale-1.3-L2-70B, Nous-Hermes-Llama2-70b, and SynthIA-70B-v1.5, resulting in a model that is larger than any of the three used for the merge. I have branches on the repo for exl2 quants at 3.0 and 4.85 bpw, which will allow the model to run in 48 GB or 80 GB of VRAM, respectively.
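The interleave itself is just stacking contiguous layer ranges copied from each donor model. Here's a toy sketch of the idea in Python; the slice ranges below are made up for illustration and are not the actual Venus-120b recipe:

```python
# Toy sketch of a "passthrough" frankenmerge schedule. The slice ranges are
# hypothetical, not the ones used for Venus-120b.

def build_layer_schedule(slices):
    """slices: list of (model_name, start, end) half-open layer ranges.
    Returns the flat (model, layer) stack of the merged model."""
    return [(model, i)
            for model, start, end in slices
            for i in range(start, end)]

# Llama-2-70B has 80 transformer layers; stacking overlapping ranges from
# the three donors produces a deeper (~120b-parameter) stack.
slices = [
    ("Euryale-1.3-L2-70B", 0, 30),
    ("Nous-Hermes-Llama2-70b", 20, 50),
    ("SynthIA-70B-v1.5", 40, 80),
]
schedule = build_layer_schedule(slices)
print(len(schedule))  # 100 layers in this toy schedule
```

No weights are averaged in a passthrough merge; each layer in the new stack is a verbatim copy from one donor, which is why the result is bigger than any single source model.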

I love using LLMs for RPs and ERPs, so my goal was to create something similar to Goliath, which is honestly the best roleplay model I’ve ever used. I’ve done some initial testing and so far the results seem encouraging. I’d love to get some feedback on this from the community! Going forward, my plan is to do more experiments with merging models, possibly going even larger than 120b parameters to see where the gains stop.

  • Saofiqlord@alien.top · 10 months ago

Huh, interesting weave. It did feel like it made fewer spelling and simple errors compared to Goliath.

Once again Euryale’s included. The lack of Xwin makes it better imo; Xwin may be smart, but it has repetition issues at long context. That’s just my opinion.

I’d honestly scale it down; there’s really no need to go 120b. From testing a while back, ~90-100b frankenmerges have the same effect.

    • CardAnarchist@alien.top · 10 months ago

      Goliath makes spelling errors?

I’ve only used a handful of Mistral 7Bs due to constraints, but I’ve never seen them make any spelling errors.

      Is that a side effect of merging?

      • noeda@alien.top · 10 months ago

        I have noticed too, that Goliath makes spelling errors somewhat frequently, more often than other models.

It doesn’t seem to affect the “smarts” part as much, though. It otherwise still produces high-quality text.

  • noeda@alien.top · 10 months ago

    I will set this to run overnight on Hellaswag 0-shot like I did here on Goliath when it was new: https://old.reddit.com/r/LocalLLaMA/comments/17rsmox/goliath120b_quants_and_future_plans/k8mjanh/
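For context, 0-shot Hellaswag scoring is typically done by having the model score each candidate ending and picking the one with the highest (often length-normalized) log-likelihood. A toy sketch, where the per-token log-probs are made-up stand-ins for real model output:

```python
# 0-shot HellaSwag scoring sketch: score each of the four candidate endings
# and pick the one with the highest length-normalized log-likelihood.
# The numbers below are fabricated stand-ins for real model log-probs.

def pick_ending(logprobs_per_ending):
    """Each entry is the list of per-token log-probs the model assigned to
    one candidate ending; return the index of the best normalized ending."""
    scores = [sum(lp) / len(lp) for lp in logprobs_per_ending]
    return max(range(len(scores)), key=scores.__getitem__)

candidates = [
    [-2.1, -3.0, -2.5],
    [-1.9, -4.2],
    [-1.0, -1.2, -0.8],  # best average token log-prob
    [-3.3, -2.7],
]
print(pick_ending(candidates))  # 2
```

Accuracy is then just the fraction of examples where the picked ending matches the gold label.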

Thanks for the model! I started investigating some approaches to combine models and see if the result can be better than its individual parts. Just today I finished code that uses a genetic algorithm to pick out parts and frankenstein 7B models together (trying to prove that there is merit to this approach using smaller models… but we’ll see).
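The genetic-algorithm idea can be sketched like this; the fitness function below is a made-up stand-in for a real benchmark score (in practice you’d evaluate each merged 7B candidate):

```python
import random

# Toy genetic algorithm over merge recipes: a genome is a list of per-slice
# layer counts, and fitness is a FAKE objective standing in for a benchmark.
random.seed(0)
N_SLICES, MAX_LAYERS = 4, 32

def fitness(genome):
    # Stand-in objective: pretend quality peaks at 24 layers per slice.
    return -sum((g - 24) ** 2 for g in genome)

def mutate(genome):
    # Nudge one randomly chosen slice size by a small amount, within bounds.
    i = random.randrange(N_SLICES)
    child = list(genome)
    child[i] = min(MAX_LAYERS, max(1, child[i] + random.choice((-2, -1, 1, 2))))
    return child

population = [[random.randint(1, MAX_LAYERS) for _ in range(N_SLICES)]
              for _ in range(20)]
for _ in range(200):  # evolve: keep the fittest half, refill with mutants
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]

best = max(population, key=fitness)
print(best)  # converges toward [24, 24, 24, 24]
```

The expensive part in the real version is of course the fitness call, since every candidate genome means building and benchmarking a merged model.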

    I’ll report back on the Hellaswag results on this model.

  • xadiant@alien.top · 10 months ago

Any tips/attempts on frankensteining two Yi-34B models together to make a ~51B model?

  • xinranli@alien.top · 10 months ago

Great work! Does anyone happen to have a guide, tutorial, or paper on how to combine or interleave models? I’d also love to try frankensteining models myself.

  • Ok_Library5522@alien.top · 10 months ago

Is this model better at writing stories? I want to compare it with Goliath, which I use on my local machine. Goliath can write stories, but it definitely lacks originality and creativity.

      • tenmileswide@alien.top · 10 months ago

        One thing’s for sure: it handles RoPE scaling much better than Goliath. Goliath starts falling apart at about 10-12k context for me, but Venus didn’t start doing so until like 30k.
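For reference, the linear RoPE scaling (position interpolation) usually used to push these Llama-2 merges past their 4k trained context can be sketched as follows; positions are simply compressed by a scale factor before the rotary angles are computed:

```python
# Sketch of linear RoPE scaling (position interpolation). A scale factor of 8
# squeezes ~32k positions into the 0..4k range the model was trained on.
# Parameter values are illustrative defaults, not Venus-120b's exact config.

def rope_angles(position, dim=128, base=10000.0, scale=1.0):
    """Rotary embedding angles for one token position."""
    pos = position / scale  # the only change linear scaling makes
    return [pos / base ** (2 * i / dim) for i in range(dim // 2)]

# Position 16000 with scale 8 gets the same angles as position 2000 unscaled,
# so the model never sees positions outside its trained range.
assert rope_angles(16000, scale=8.0) == rope_angles(2000)
print("ok")
```

How gracefully a model tolerates that compression varies a lot between models, which matches the difference reported above between Goliath and Venus.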

  • trollsalot1234@alien.top · 10 months ago

    I…also L ove oliath! I … i RALLY hope you’re is better. A random hallucination walks up and punches trollsalot right in the face. WHY ARENT WE HAVING SEX YET! she screams

    • nsfw_throwitaway69@alien.top (OP) · 10 months ago

Try it out and let me know! I included Nous-Hermes in the merge because I’ve found it to be one of the best roleplaying models that doesn’t hallucinate too much. However, in my experience Nous-Hermes also tends to lack a bit in terms of prose. I was hoping to get something that’s coherent most of the time while still being creative.

  • th3st0rmtr00p3r@alien.top · 10 months ago

    I could not get any of the quants loaded, looks like the config is looking for XX of 25 safetensors

    FileNotFoundError: No such file or directory: "models\Venus-120b-v1.0\model-00001-of-00025.safetensors"
    

    with exl2-3.0bpw having only XX of 06 safetensors

    • nsfw_throwitaway69@alien.top (OP) · 10 months ago

🤔 How are you trying to load it? I tested both quants in text-generation-webui and they worked fine for me; I used ExLlamav2_HF to load them.

      • th3st0rmtr00p3r@alien.top · 10 months ago

It defaulted to Transformers; loaded right away in ExLlamav2_HF. Thank you, I didn’t know what I didn’t know.

      • panchovix@alien.top · 10 months ago

In ooba, models without “exl” in the folder name are routed to the Transformers loader by default, so that’s probably why he got that.
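That folder-name heuristic can be sketched like this; this is hypothetical code mirroring the behavior reported above, not text-generation-webui’s actual implementation:

```python
# Hypothetical sketch of the loader-guessing heuristic described above; the
# function name and exact rules are illustrative, not ooba's real code.

def guess_loader(folder_name: str) -> str:
    name = folder_name.lower()
    if "exl" in name:          # e.g. "...-exl2-3.0bpw"
        return "ExLlamav2_HF"
    if "gguf" in name:
        return "llama.cpp"
    return "Transformers"      # default, which then looks for full-precision
                               # .safetensors shards that a quant repo lacks

print(guess_loader("Venus-120b-v1.0"))       # Transformers
print(guess_loader("Venus-120b-v1.0-exl2"))  # ExLlamav2_HF
```

Which is why renaming the folder (or picking the loader manually) fixes the `FileNotFoundError` above.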

  • Aaaaaaaaaeeeee@alien.top · 10 months ago

    possibly even going even larger than 120b parameters

    I didn’t know that was possible, have people made a 1T model yet?

  • CheatCodesOfLife@alien.top · 10 months ago

    haha damn, I should have taken the NSFW warning seriously before clicking the huggingface link in front of people lol.

    Is this model any good for SFW stuff?

    • nsfw_throwitaway69@alien.top (OP) · 10 months ago

      Yeah I wanted a picture to go with the model and that’s what stable diffusion spat out :D

      And I haven’t tried it for SFW stuff but my guess is that it would work fine.

    • uti24@alien.top · 10 months ago

      Is this model any good for SFW stuff?

      Every uncensored llm I tried worked fine with SFW stuff.

If you’re talking about storytelling, they might even be better than SFW models. And I’ve never seen NSFW/uncensored models write NSFW content unless explicitly asked to.

  • Distinct-Target7503@alien.top · 10 months ago

That’s great work!

Just a question… Has anyone tried to fine-tune one of those “Frankenstein” models? Some time ago (when the first “Frankenstein” came out, it was a ~20B model) I read here on Reddit that lots of users agreed a fine-tune on those merged models would give “better” results, since it would help to “smooth” and adapt the merged layers. I probably lack the technical knowledge needed to understand, so I’m asking…

  • a_beautiful_rhind@alien.top · 10 months ago

Hell yea! No Xwin. I hate that model. I’m down for the 3 bit. I didn’t like Tess-XL so far, so hopefully you made a David here.

  • ambient_temp_xeno@alien.top · 10 months ago

    I still have this feeling in my gut that closedai have been doing this for a while. It seems like a free lunch.

    • Charuru@alien.top · 10 months ago

I don’t think so. This is something you do when you’re GPU poor; closedai would just not undertrain their models in the first place.

  • Human-Most-6115@alien.top · 10 months ago

It seems like my dual RTX 4090 setup falls just short of the memory needed to load it, whereas Goliath’s 3.0 bpw quant loads fine.
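Rough weight-only arithmetic backs this up; the parameter counts below are approximate, and real usage adds KV cache and runtime overhead on top:

```python
# Back-of-the-envelope VRAM estimate for quantized weights only (no KV cache,
# no activations). Parameter counts are approximate: Venus ~120b, Goliath ~118b.

def weight_gb(params_b, bpw):
    """GB of weights for params_b billion parameters at bpw bits per weight."""
    return params_b * 1e9 * bpw / 8 / 1e9  # bits -> bytes -> GB

venus = weight_gb(120, 3.0)    # 45.0 GB
goliath = weight_gb(118, 3.0)  # 44.25 GB
print(venus, goliath)

# Two RTX 4090s give 48 GB total; once context cache and overhead are added,
# the extra fraction of a GB of Venus weights can be enough to tip it over.
```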

  • uti24@alien.top · 10 months ago

Oh, we definitely need a GGUF variant of this model. I love Goliath-120B (I even think it might be better than Falcon-180B) and would love to run this one.