Looking for any model that can run with 20 GB VRAM. Thanks!

  • flossraptor@alien.top · 1 year ago

    For some people “uncensored” means it hasn’t been lobotomized, but for others it means it can write porn.

  • Sweet_Protection_163@alien.top · 1 year ago

    34B Nous-Capybara was the only model I could use reliably for complicated NLP and JSON output. My go-to for any real work. The first one that could, really.

  • BlueMetaMind@alien.top · 1 year ago

    Best experience I had was with TheBloke/Wizard-Vicuna-30B-Uncensored-GGML.

    Best 30B LLM so far in general. Censorship kills capabilities.

  • FullOf_Bad_Ideas@alien.top · 1 year ago

    Spicyboros based on Yi-34B should be the best one; I'm trying it out soon. I found OpenHermes 2.5 to be censored, so I wouldn't bother with it.

  • Brave-Decision-1944@alien.top · 1 year ago

    One more thing: with LLMs you can use multiple GPUs simultaneously, and also pull in system RAM (and even SSDs as swap, boosted with RAID 0) and the CPU, all at once, splitting the load.

    So if your GPU has 24 GB, you are not limited to that.

    In practice: I used https://github.com/oobabooga/text-generation-webui

    I copied the Augmental-Unholy-13B-GGUF folder into the models folder. In the UI I just selected the model and loaded it; it automatically switched to the llama.cpp loader.

    But the n-gpu-layers setting defaults to 0, which is wrong; for this model I set 45-55. The result was that it loaded onto and used my second GPU (an NVIDIA 1050 Ti) as well, with no SLI; the primary is a 3060, and both ran fully loaded. The n_ctx setting (context size) is load on the CPU side; I had to drop it to ~2300 because my CPU is older. After that it ran pretty fast, up to Q4_K_M. Most of the slowdown came when the SSD hit 100% load, which is why I'm thinking about RAID 0 (ideal, since the model is one big chunk read at top speed), but I haven't bought that second physical drive yet.

    Batch 512, threads 8, threads-batch 8: these settings were pure guesses, but they worked, and I still need to go back and understand them properly. These details may help if you want to try this on an old AMD pretending to be an FX-8370 8-core, with 14 GB of DDR3 RAM effectively acting as 10 GB.
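
    Roughly the same split, sketched with llama-cpp-python instead of the web UI; the path and every number below are placeholders, so tune n_gpu_layers, n_ctx and the thread counts to your own VRAM and CPU:

        # Rough sketch, assuming llama-cpp-python built with GPU (CUDA) support.
        # The model path and all numbers are illustrative, not recommendations.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/Augmental-Unholy-13B.Q4_K_M.gguf",  # hypothetical local file
            n_gpu_layers=50,  # layers pushed to VRAM; 0 means CPU-only
            n_ctx=2304,       # context size; lower it if the CPU/RAM side struggles
            n_batch=512,      # prompt-processing batch size
            n_threads=8,      # CPU threads for the layers left in RAM
        )

        out = llm("Explain RAID 0 in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])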

  • LienniTa@alien.top · 1 year ago

    A GGUF of Goliath will give you the best answers but will be very slow. You can offload around 40 layers to VRAM and your RAM will still be the speed bottleneck, but I think 2 t/s is possible on a 2-bit quant.
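
    A back-of-the-envelope check of why RAM stays the bottleneck; the parameter count, bits per weight and layer count below are rough assumptions for a 2-bit quant of a Goliath-class model, not exact figures:

        # Rough estimate only; all constants are approximations.
        params = 118e9          # ~120B-class frankenmerge
        bits_per_weight = 2.6   # roughly what a 2-bit K-quant averages
        total_gb = params * bits_per_weight / 8 / 1e9

        layers = 137            # approximate; check the GGUF metadata for the real count
        offloaded = 40
        vram_gb = total_gb * offloaded / layers
        ram_gb = total_gb - vram_gb

        print(f"~{total_gb:.0f} GB total, ~{vram_gb:.0f} GB in VRAM, ~{ram_gb:.0f} GB streamed from RAM")
        # With most of the weights read from system RAM on every token,
        # memory bandwidth, not the GPU, sets the tokens/s ceiling.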

  • tronathan@alien.top · 1 year ago

    I’ve been out of the loop for a bit, so despite this thread coming back again and again, I’m finding it useful/relevant/timely.

    What I'm having a hard time figuring out is whether I'm still SOTA running text-generation-webui and exllama_hf. Thus far I ALWAYS use GPTQ and Ubuntu, and like to keep everything in VRAM on 2x3090s. (I also run my own custom chat front-end, so all I really need is an API.)

    I know exllamav2 is out, the exl2 format is a thing, and GGUF has supplanted GGML. I've also noticed a ton of quants from TheBloke in AWQ format (often *only* AWQ, with no GPTQ available), but I'm not clear on which front-ends support AWQ. (I looked at vLLM, but it seems like more of a library/package than a front-end.)

    edit: Just checked, and it looks like text-generation-webui supports AutoAWQ. Guess I should have checked that earlier.
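
    For loading an AWQ quant outside the web UI, a minimal AutoAWQ sketch; the repo id is a placeholder, and the exact kwargs can differ between AutoAWQ releases:

        # Minimal AWQ loading sketch; repo id is illustrative only.
        from awq import AutoAWQForCausalLM
        from transformers import AutoTokenizer

        repo = "TheBloke/Some-Model-AWQ"  # placeholder

        model = AutoAWQForCausalLM.from_quantized(repo, fuse_layers=True)
        tokenizer = AutoTokenizer.from_pretrained(repo)

        inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
        print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))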

    I guess I'm still curious whether others are using something besides text-generation-webui for all-VRAM model loading. My only issue with text-generation-webui (that comes to mind, anyway) is that it's single-threaded; for experimenting with agents, it would be nice to be able to run multi-threaded.

    • zumba75@alien.top · 1 year ago

      What app are you using it in? I tried the 13B in oobabooga and wasn't able to make it work consistently (it starts replying on my behalf after a short while).

      • BriannaBromell@alien.top · 1 year ago

        I just recently wrote my own pure Python/ChromaDB program, but before that I had great success with oobabooga and this model. Maybe there's an overlooked setting I enabled in oobabooga, or maybe it's one of the generation kwargs, but it just seems to work flawlessly. The model has issues keeping itself separate from the user, so take care with your wording in the system message too.

        Having seen the model's tokenizer.default_chat_template, that isn't unbelievable; it's a real mess with impossible conditions.

        My health is keeping me from giving a better response, but if you're dead set on using it, message me and we'll work it out together. I like this model the most.
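
        If the model keeps speaking as the user, inspecting (and if needed overriding) the chat template is one concrete thing to try; a rough transformers sketch, with the repo id as a placeholder:

            # Rough sketch: look at the chat template the model ships with and
            # build the prompt explicitly. The repo id is a placeholder.
            from transformers import AutoTokenizer

            tok = AutoTokenizer.from_pretrained("some-org/some-13b-model")
            print(tok.chat_template)  # may be None, falling back to a default template

            messages = [
                {"role": "system", "content": "You are the assistant. Never write the user's lines."},
                {"role": "user", "content": "Hi there."},
            ]
            prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
            print(prompt)  # check that only an assistant turn is opened at the end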

  • Herr_Drosselmeyer@alien.top · 1 year ago

    What are you looking for?

    With a 3090, you can run any 13B model at 8-bit (group size 128, act order true) at decent speed.
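
    If you'd rather script that than click through a UI, a minimal AutoGPTQ-style sketch; the repo id is a placeholder, and group size / act order are baked into the quantized files you download rather than chosen at load time:

        # Minimal GPTQ loading sketch; repo id is illustrative only.
        from transformers import AutoTokenizer
        from auto_gptq import AutoGPTQForCausalLM

        repo = "TheBloke/Some-13B-GPTQ"  # placeholder

        tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
        model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

        ids = tokenizer("Tell me a short story.", return_tensors="pt").to("cuda:0")
        print(tokenizer.decode(model.generate(**ids, max_new_tokens=64)[0]))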

    Go-tos for the spicier stuff would be MythoMax and Tiefighter.