I have a server with 512 GB of RAM and 2x Intel Xeon 6154. It will have a spare x16 PCIe 3.0 slot once I get rid of my current GPU.

I’d like to add a better GPU so I can generate paper summaries (the responses can take a few minutes to come back) that are significantly better than the quality I get now with 4-bit Llama 2 13B. Does anyone know the minimum GPU I should be looking at with this setup to be able to upgrade to the 70B model? Will hybrid CPU+GPU inference with an RTX 4090 24GB be enough?

  • Sea_Particular_4014@alien.topB · 10 months ago

    Your 512GB of RAM is overkill. Those Xeons are probably pretty mediocre for this sort of thing due to their relatively slow memory bandwidth, unfortunately.

    With a 4090 or 3090, you should get about 2 tokens per second with partially offloaded GGUF Q4_K_M inference. That’s what I do, and I find it tolerable, but it depends on your use case.

    You’d need a 48GB GPU or fast DDR5 RAM to get faster generation than that.
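
    For reference, partial offload with the llama-cpp-python bindings looks roughly like the sketch below; the model path and layer count are placeholders, and you’d tune n_gpu_layers until you’re just under OOM on the 24GB card.

    ```python
    # Hedged sketch: partial GPU offload of a 70B Q4_K_M GGUF with
    # llama-cpp-python. Path and n_gpu_layers are placeholder values.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-2-70b.Q4_K_M.gguf",  # local GGUF file (placeholder path)
        n_gpu_layers=40,  # layers pushed to VRAM; raise until just under OOM
        n_ctx=4096,       # context window; KV cache memory grows with this
    )

    out = llm("Summarize the following paper:\n...", max_tokens=512)
    print(out["choices"][0]["text"])
    ```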

    • Dankmre@alien.topB · 10 months ago

      OP seems to want 5-10 T/s on a budget with 70B… I don’t think that’s going to happen.

  • Ravenpest@alien.topB · 10 months ago

    I’ve got a 4090 and 128 GB of RAM. 70B runs fine at quant 5 and takes about 280 seconds to generate a message with full prompt reprocessing, and around 100 seconds less on a normal message. So I’d say you’d be fine with that.
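
    To put those timings in tokens-per-second terms (the ~300-token message length below is just an assumed figure for illustration):

    ```python
    # Back-of-envelope conversion of the timings above to tokens/s.
    # The 300-token message length is an assumption for illustration.
    tokens_per_message = 300
    full_reprocess_s = 280        # full prompt reprocessing
    normal_message_s = 280 - 100  # "around 100 seconds less on a normal message"

    print(tokens_per_message / full_reprocess_s)   # ~1.1 tok/s worst case
    print(tokens_per_message / normal_message_s)   # ~1.7 tok/s with cached context
    ```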

  • Aaaaaaaaaeeeee@alien.topB · 10 months ago

    Last I checked, 38 t/s was the minimum prompt processing speed with zero layers offloaded on a 3090 for 70B Q4_K_M.

    I’m sure it’s way higher now. When you offload layers, you can do more, but I think you need to know the maximum context length ahead of time so your GPU doesn’t OOM towards the end.

    I think you’re also supposed to adjust the prompt processing batch size setting.

    I highly recommend checking the NVIDIA PRs in llama.cpp for prompt processing speeds and the differences between GPUs. If one card shows double or triple the speed, that will tell you something, and you can calculate how long processing your text would take, roughly as in the estimate below.
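
    As a rough way of turning those speeds into a per-summary time (the prompt and output lengths here are example figures, and the 2 t/s generation speed is the number quoted earlier in the thread):

    ```python
    # Rough per-summary time estimate from prompt-processing and
    # generation speeds. All inputs are example/assumed figures.
    prompt_tokens = 4000   # paper-sized prompt (assumed)
    pp_speed = 38.0        # t/s prompt processing, zero layers offloaded (quoted above)
    gen_tokens = 500       # summary length (assumed)
    gen_speed = 2.0        # t/s generation with partial offload (from earlier comment)

    total_s = prompt_tokens / pp_speed + gen_tokens / gen_speed
    print(f"~{total_s:.0f} s per summary")  # ~355 s with these assumptions
    ```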

  • georgejrjrjr@alien.topB · 10 months ago

    A used first-generation (non-Ada) 48GB A6000 is an option. Kinda slow, but it’s also the only card in its VRAM-per-dollar niche.

  • kdevsharp@alien.topB · 10 months ago

    Well, if you use llama.cpp with the https://huggingface.co/TheBloke/Llama-2-70B-Chat-GGUF model and the Q5_K_M quantisation file, it uses 51.25 GB of memory. So your 24GB card can hold less than half the layers of that file, and if you’re offloading less than half the layers to the graphics card, it will be less than twice as fast as CPU-only. Have you tried a quantised model like that with CPU only?
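
    Back-of-envelope on that split, assuming the usual 80 transformer layers for Llama 2 70B and ignoring KV cache overhead:

    ```python
    # Rough layer split for the 51.25 GB Q5_K_M file on a 24GB card.
    # The 80-layer count is an assumption; KV cache overhead is ignored.
    model_gb = 51.25
    vram_gb = 24
    n_layers = 80

    gb_per_layer = model_gb / n_layers          # ~0.64 GB per layer
    offloadable = int(vram_gb // gb_per_layer)  # ~37 layers fit, before KV cache
    print(offloadable, offloadable / n_layers)  # 37 layers, roughly 46% of the model
    ```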