I am going to build an LLM server very soon, targeting 34B models (specifically phind-codellama-34b-v2, quantized to Q4 as GGUF/GPTQ/AWQ).
I am stuck between these two setups:
- i5-12400 + DDR5-6000 CL30 + 4060 Ti 16GB (GGUF; split the workload between CPU and GPU)
- 3090 (GPTQ/AWQ model fully loaded into the GPU)
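For the first option, the CPU/GPU split just means offloading as many transformer layers as fit in VRAM and running the rest on the CPU. A rough sizing sketch for llama.cpp's `n_gpu_layers` (`-ngl`); the layer count, model size, and overhead here are assumptions for illustration, not measured values:

```python
# Rough sizing of llama.cpp's n_gpu_layers for a partial CPU/GPU split.
# Assumptions (illustrative, not measured): a Q4 34B CodeLlama is ~20 GB
# across 48 transformer layers, and ~2.5 GB of VRAM is reserved for the
# KV cache, context buffers, and CUDA overhead.

def layers_that_fit(vram_gb: float, n_layers: int = 48,
                    model_gb: float = 20.0, overhead_gb: float = 2.5) -> int:
    """How many layers fit in VRAM after reserving overhead_gb."""
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

n = layers_that_fit(16.0)  # 4060 Ti 16GB
print(n)  # pass this as n_gpu_layers (the -ngl flag in llama.cpp)
```

Everything that doesn't fit stays on the CPU, which is why DDR5 speed matters for this setup: the offloaded layers run at GPU speed, the remainder at system-RAM speed.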
Not sure if the speed bump of the 3090 is worth the hefty price increase. Does anyone have benchmarks/data comparing these two setups?
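For a rough sense of the gap: single-stream decoding of a quantized model is mostly memory-bandwidth-bound, since roughly the whole model is read once per generated token. A back-of-envelope ceiling estimate (spec-sheet bandwidth numbers and the ~20 GB model size are assumptions, not benchmarks):

```python
# Upper-bound tokens/sec estimate for memory-bound decoding:
# tok/s <= (memory bandwidth) / (bytes read per token) ~= bandwidth / model size.
# Bandwidth figures are approximate spec-sheet values, not measurements.

MODEL_GB = 20.0  # ~34B at ~4.6 bits/weight (Q4 quant), assumed

bandwidth_gbps = {
    "RTX 3090": 936.0,
    "RTX 4060 Ti 16GB": 288.0,
    "DDR5-6000 dual channel": 96.0,
}

def est_tokens_per_sec(bw_gbps: float, model_gb: float = MODEL_GB) -> float:
    """Theoretical ceiling if decoding is purely bandwidth-bound."""
    return bw_gbps / model_gb

for name, bw in bandwidth_gbps.items():
    print(f"{name}: ~{est_tokens_per_sec(bw):.1f} tok/s ceiling")
```

Real-world numbers land below these ceilings, and a split setup ends up somewhere between the GPU and DDR5 figures depending on how many layers stay on the CPU, but the ratio shows why the 3090's bandwidth matters so much.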
BTW: Alder Lake CPUs run DDR5 in Gear 2 (while AM5 runs DDR5 with the memory controller at a 1:1 ratio, the equivalent of Gear 1; AM4 is DDR4-only). AFAIK Gear 1 offers lower latency. Would this give AM5 a big advantage when it comes to LLMs?
I’d do the 4060 Ti and add a 16GB P100 to the mix to avoid doing any CPU inference. Use exl2. Otherwise I’d go 3090. CPU is slowww.