SupplyChainNext@alien.top to LocalLLaMA@poweruser.forum • Tool to quickly iterate when fine-tuning open-source LLMs
1 year ago

This is an amazing sub with amazingly talented individuals. I love it here. This is great.
Figure out the size and speed you need, then buy 20-50 Nvidia pro GPUs (A series) plus the server cluster hardware and network infrastructure needed to make them run efficiently. Think in the several-hundred-thousand-dollar range. I've looked into it.
Meh, I'm running 13B models on a 13900K with 64 GB DDR5 and a 6900 XT via LM Studio, and it's faster than my office workstation's 12900KS with a 3090 Ti. Sometimes RAM and a processor with decent VRAM alongside are enough.
LM Studio (don’t shoot me people)
Download it, download your desired model, and if you have the resources you're good to go.
Fine-tune + RAG seems to be the direction we're expected to go: fine-tune for terminology, and RAG for policy and procedure knowledge.