Was wondering if there's any way to use a bunch of old equipment like this to build an at-home crunch center for running your own LLM, and whether it would be worth it.
I tried it. I got something like 1.2 tokens/s inference on Llama 70B with a mix of cards (including 4 1080s). The process would crash occasionally. Ideally every card would have the same amount of VRAM.
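For anyone wondering whether a rig like that can even hold the weights, here's a rough back-of-envelope sketch. The numbers are assumptions, not measurements: ~3.5 bits/weight for a Q3-ish GGUF quant, ~2 GB of KV cache and buffer overhead, and 8 GB per GTX 1080.

```python
# Back-of-envelope VRAM check for a quantized 70B model on old cards.
# Assumed numbers: ~3.5 bits/weight (Q3-ish quant), ~2 GB overhead
# for KV cache and buffers, GTX 1080 = 8 GB each.

def model_vram_gb(params_b: float, bits_per_weight: float,
                  overhead_gb: float = 2.0) -> float:
    """Rough total VRAM footprint in GB for a quantized model."""
    return params_b * bits_per_weight / 8 + overhead_gb

need = model_vram_gb(70, 3.5)  # ~32.6 GB
have = 4 * 8                   # four 8 GB cards = 32 GB
print(f"need ~{need:.1f} GB, have {have} GB -> "
      f"{'fits' if need <= have else 'spills to CPU'}")
```

With these assumptions it comes out just over what four 8 GB cards provide, which is consistent with llama.cpp-style setups offloading some layers to CPU and ending up at low single-digit tokens/s.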
Going to try it with 1660 Tis. I think they may be the 'sweet spot' for price to power to performance.
Did you use a Q3 GGUF quant for this?