iwishilistened@alien.top to LocalLLaMA@poweruser.forum • Anyone spend a bunch of $$ on a computer for LLM and regret it? • 1 year ago
I was building an app and then realized it was cheaper to just call the inference API for Llama on Azure lol. Put my local Llama setup on hold for now.
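
For anyone curious what that swap looks like in practice, here's a minimal sketch of calling a serverless Llama deployment on Azure instead of running locally. This assumes an OpenAI-compatible chat-completions route; the endpoint URL, API key, and parameter values are all placeholders from a hypothetical deployment, not from the original post:

```python
# Hedged sketch: hosted Llama inference on Azure instead of a local GPU.
# AZURE_ENDPOINT and AZURE_KEY are placeholders -- substitute the values
# from your own deployment. Assumes an OpenAI-compatible /v1/chat/completions route.
import requests

AZURE_ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com"  # placeholder
AZURE_KEY = "<your-api-key>"  # placeholder

def ask_llama(prompt: str) -> str:
    """Send a single chat turn to the hosted endpoint and return the reply text."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/v1/chat/completions",
        headers={"Authorization": f"Bearer {AZURE_KEY}"},
        json={
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask_llama("Summarize why hosted inference can beat a local GPU on cost."))
```

Pay-per-token pricing like this only has to beat the amortized cost of the GPU you'd otherwise buy, which is the trade-off the post is pointing at.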