Sorry for the noob question. I’m building out a new server and, as I love playing with new tech, I thought I would throw in a GPU so I can learn to integrate AI into things like PrivateGPT, document generation, meeting transcription, maybe some integrations with Obsidian, or even Home Assistant for automation. I like the idea of it being able to crawl all my information and offer suggestions, rather than me having to copy and paste snippets as I do now with ChatGPT. I’m a solo IT consultant by trade, so I’m really hoping it will help me augment my work.
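
For context, the kind of thing I have in mind is roughly this (a rough, untested sketch, assuming something like an Ollama server running locally; the model name and note path are placeholders):

```python
import json
import urllib.request

# Untested sketch: ask a locally hosted model to suggest actions from one note.
# Assumes an Ollama server on its default port; model and path are placeholders.
note = open("/path/to/vault/some-note.md", encoding="utf-8").read()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",
        "prompt": "Suggest follow-up actions based on this note:\n\n" + note,
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```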

Budget isn’t super important; it’s more that it’s fit for purpose. But to stop people suggesting a £30,000 GPU, I’ll cap it at ~£1,000!

Thanks!

  • a_beautiful_rhind@alien.top · 1 year ago

    P40 or 3090: those are your “affordable” 24GB GPUs, unless you want to go AMD or have enough to run 3×16GB or something.
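
    Back-of-envelope, 24GB is roughly what a ~33B model needs at 4-bit quantisation (a rough sketch, illustrative numbers only):

    ```python
    # Rough VRAM estimate for a quantised model (illustrative numbers only).
    params = 33e9            # e.g. a 33B-parameter model
    bits_per_weight = 4.65   # roughly what a Q4_K_M-style quant averages out to
    overhead_gb = 2.5        # KV cache, CUDA context, buffers (very rough guess)

    weights_gb = params * bits_per_weight / 8 / 1e9
    print(f"~{weights_gb + overhead_gb:.1f} GB")  # ~21.7 GB: fits in 24GB, not 16GB
    ```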

    • idarryl@alien.top (OP) · 1 year ago

      I have a soft spot for AMD, and for the CPU I’m considering the R9 7900 or the (lesser-known) R9 PRO 7945 (it has AVX-512 support, which I understand is beneficial in the ML world). But I’ve heard AMD GPU support is lacking in some workloads, and I really don’t need that learning curve.

      I can only afford, and justify the space for, one GPU, so 3× anything is out of the question.

      I looked at the P100, but read that the 4060 was the better choice, so I’m a little bit at a loss. A 3U chassis looks like the way to go, and I have the option to water-cool either the CPU or the GPU.

      • a_beautiful_rhind@alien.top · 1 year ago

        I just got a P100 for about $150; I’m going to test it out and see how its FP16 does versus the P40 for SD and exllama overflow.
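
        If it helps, a quick way to compare raw FP16 throughput between the two cards is a simple matmul timing in PyTorch (a rough sketch, assuming a working CUDA build; the matrix size is arbitrary):

        ```python
        import time
        import torch

        # Rough FP16 matmul throughput check; run the same script on each card.
        n = 8192
        a = torch.randn(n, n, dtype=torch.float16, device="cuda")
        b = torch.randn(n, n, dtype=torch.float16, device="cuda")

        for _ in range(3):  # warm-up
            a @ b
        torch.cuda.synchronize()

        iters = 10
        start = time.time()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        elapsed = time.time() - start

        print(f"~{2 * n**3 * iters / elapsed / 1e12:.1f} TFLOPS FP16")
        ```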

        The 4060 is faster, but it’s multiple times as expensive. For your sole GPU you really want 24GB+. AMD cards are becoming somewhat competitive but still come with some hassle and slowness.

        A CPU is going to give you about 3 t/s; it’s not really anywhere close, even with the best processors. Sure, get a good one for other things in the system, but don’t expect it to help much with ML. I guess a newer platform will get you faster RAM, but it’s not enough.
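
        Rough math on why (illustrative assumptions, not measurements): generation is mostly memory-bandwidth bound, so a best-case estimate is just RAM bandwidth divided by the bytes read per token, which is roughly the model size.

        ```python
        # Best-case CPU estimate: tokens/s ~= RAM bandwidth / model size in bytes.
        # Illustrative assumptions, not measurements.
        ram_bandwidth_gb_s = 83   # e.g. dual-channel DDR5-5200, theoretical peak
        model_size_gb = 22        # e.g. a ~33B model at 4-5 bit quantisation

        # ~3.8 t/s in theory; real-world is lower.
        print(f"~{ram_bandwidth_gb_s / model_size_gb:.1f} tokens/s, best case")
        ```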