Has anyone tried combining a server with a moderately powerful GPU and a server with a lot of RAM to run inference? Especially with llama.cpp, where you can offload just some of the layers to the GPU?
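For context, partial offload on a single machine is just a setting in llama.cpp: the `-ngl` / `--n-gpu-layers` flag on the CLI, or `n_gpu_layers` in the llama-cpp-python bindings. A minimal sketch (the model path is hypothetical, point it at any local GGUF file):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # hypothetical path to a local GGUF model
    n_gpu_layers=20,  # offload 20 layers to the GPU; the rest stay in system RAM
    n_ctx=2048,
)

out = llm("Q: What is partial GPU offload? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

That splits layers between GPU and RAM within one box, though; splitting across two separate servers is a different problem.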
I've seen something like that in the LoLLMs UI. It's called Petals, and basically it distributes the processing across computers connected to that network. There were also other remote "bindings" from the same maker as the UI, but I haven't tried those.
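For anyone curious, Petals also has standalone Python bindings that mimic the transformers API; a rough sketch (the model name is just an example, check what the public swarm currently serves):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Example model name; actual availability depends on the Petals swarm.
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Layers are served by remote peers; only embeddings run locally.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A quick test:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

The catch is that Petals shards a model across many peers over the internet, which is a different trade-off from pairing your own two servers on a LAN.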