gpt872323@alien.top to LocalLLaMA@poweruser.forum · English · 11 months ago
What is the major difference between the various frameworks with regard to performance, hardware requirements, and model support? Llama.cpp vs. koboldcpp vs. LocalAI vs. GPT4All vs. Oobabooga
gpt872323@alien.top (OP) to LocalLLaMA@poweruser.forum · re: 3060 Performance with 13b Model · English · 11 months ago
Thanks.
gpt872323@alien.top to LocalLLaMA@poweruser.forum · English · 11 months ago
3060 Performance with 13b Model
gpt872323@alien.top to LocalLLaMA@poweruser.forum · re: Cheapest way to run local LLMs? · English · 1 year ago
I have this same question. I am thinking of a mini PC that is more powerful than both and reasonably priced; not a NUC, but rather one with an AMD or Intel mobile-series processor.