Curious if anyone got the whole rig and then realized they didn’t really need it, etc.

  • LordAshon@alien.topB
    1 year ago

    No, I’m just glad I asked my build shop for a high-end gaming machine last year, before all this exploded onto the scene. The RTX 3060 is good, but I should have gone a level up, because I do get my fair share of CUDA memory errors when I try to build LoRAs in one step.
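    A back-of-the-envelope sketch of why a 12 GB card like the 3060 hits CUDA out-of-memory during LoRA training. The function name, the 0.5% trainable-parameter fraction, and the overhead constants below are illustrative assumptions, not measured numbers:

```python
def lora_training_vram_gb(n_params_billion, bytes_per_param=2,
                          lora_fraction=0.005, activation_overhead_gb=2.0):
    """Very rough VRAM estimate (GB) for LoRA fine-tuning.

    The base model is frozen, assumed here in fp16 (2 bytes/param); only
    the small LoRA adapters need gradients plus Adam optimizer states,
    assumed at ~18 bytes per trainable parameter in total.
    """
    frozen_weights = n_params_billion * bytes_per_param   # GB for base model
    adapters = n_params_billion * lora_fraction * 18      # GB for adapters + optimizer
    return frozen_weights + adapters + activation_overhead_gb

# A 7B model in fp16 needs ~14 GB for the frozen weights alone, already
# over the 12 GB on an RTX 3060 before activations are counted, so OOM
# is expected unless you quantize, shrink the batch/sequence, or offload.
print(round(lora_training_vram_gb(7), 2))  # -> 16.63
```

    On the same assumptions, a 4-bit quantized base (roughly 0.5 bytes/param, as in QLoRA) drops the estimate to ~6 GB, which is why quantized training is the usual workaround on 12 GB cards.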

  • Spskrk@alien.topB
    1 year ago

    Last time I built one was 3 years ago, and I used it mainly for gaming. I’ve managed to convince myself that this time it’s going to be different, and I’m building a new one, but let’s see.

  • Maykey@alien.topB
    1 year ago

    I moved from a desktop with a GTX 1070 (and a laptop with a 1050) to a laptop with a 3080 Ti, specifically so I can run video games when I’m not running LLMs.

    My only two regrets are the downgrades in RAM (64 GB -> 32 GB) and storage (4 TB HDD -> 2 TB M.2 NVMe), but it’s not critical.

    I thought about upgrading the desktop, but it wouldn’t have been a minor upgrade, so after running the numbers it turned out getting a laptop was better. About a year and a half later, I still think so.

  • sickvisionz@alien.topB
    1 year ago

    Not for me, because the stuff that’s good for AI is also good for video games, and it doesn’t hurt for the creative stuff I use my computer for either.

  • floridamoron@alien.topB
    1 year ago

    The only regret I had after buying a 2nd-hand 3090 this summer is that local models, while still impressive, weren’t there yet. After experimenting with Kobold, ST, and the like, I eventually returned to GPT because, sadly, after tasting the best, every model I tried felt really boring and too simple. I’m still using the card for image generation and some other AI stuff.

    I’m really not a fan of this situation. If one day we’re able to get a local model close to GPT-3.5 for roleplay, I’ll ditch OAI ASAP, but I don’t expect that soon.

  • jubjub07@alien.topB
    1 year ago

    I built my 2x3090 system with parts from eBay: MB (X299 Giga), i9 CPU, 64 GB RAM, and two 3090s. I did spring for a new, heavy-duty PSU and a case with big fans.

    All in, I spent about $2k.

    The system runs 70B models like Llama-2-70B-Orca-200k just fine, at 11 T/s.

    I feel like there’s not a ton of downside - I think the 3090s will be valuable for a while yet, and that’s over half the value of the system.

    Having the hardware right here means I can have things running all the time: when I read about a new model, I can download it and start playing with it in minutes. Spinning up a RunPod feels frustratingly slow to me. I went that route for a while, but found that the friction involved meant I tried fewer things. A system that might be slower but is always available just works for my way of working.

    So no “regerts” here.

    • Infamous_Charge2666@alien.topB
      1 year ago

      lol, I just finished an identical system, just a tad stronger: X299X mobo, i9-10980XE, 2x 3090 Ti, 256 GB RAM, 72 TB HDD (WD Reds) + 4 TB Samsung 990 Pro, for $3.5k.

  • OldPin8654@alien.topB
    1 year ago

    Winter has enveloped us in its chilly embrace. In my quest for warmth, I realized I needed a heater. But then, a memory dawned on me – I had bought one before! It was during those long nights of training a model with a 100k dataset, which made the room toastier. Now, thanks to that, everyone in this house can enjoy a peaceful and warm winter.

  • iwishilistened@alien.topB
    1 year ago

    I was building an app and then realized it was cheaper to just call the Llama inference API on Azure, lol. My local Llama is on hold for now.

  • cringelord000222@alien.topB
    1 year ago

    Depends on what your “whole rig” is. If it’s just a Mac Studio or a 4090, then it’s fine; if it’s a whole server or enterprise build, you’re better off renting it out to someone. Enterprise GPUs are in really low stock right now.

  • count023@alien.topB
    1 year ago

    I used LLMs as an excuse to buy a new high-end rig. You know what it’s been doing for the months since I built it? Playing 2023 games at 4K, 120+ fps.

    I might have a brief regret that it’s not being used for what I bought it for, but I’m still using it.

  • Prince_Noodletocks@alien.top
    1 year ago

    Nope. I have a 2x3090 system and am planning to buy another 3090 system so I can do SD LoRA training while still being able to use Dolphin 70B.