Why is there no analog to Napster/BitTorrent/Bitcoin with LLMs?
Is there a technical reason there is no open-source LLM we could all run on our local machines, contributing compute toward answering prompts, with contributors rewarded by being allowed to submit more prompts of their own?
Obviously, there must be some technical obstacle that prevents distributed LLMs, or one would already have been created by now.
Yes, for actually dividing models across machines, which was the original idea. I’d shifted to a different (and less technically interesting) question of sharing GPUs without dividing the model.
For dividing training, though, see this paper:
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
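To make "dividing a model across machines" concrete, here is a toy sketch of pipeline (model) parallelism, the basic idea behind systems like the one in that paper: the model's layers are partitioned into contiguous stages, each stage imagined to live on a different machine, and only the activations cross stage boundaries. All names here are illustrative, not from SWARM or any real system.

```python
def make_layer(w, b):
    # A "layer" is just an affine map on a scalar activation,
    # standing in for a real transformer block.
    return lambda x: w * x + b

# A 12-layer toy "model".
layers = [make_layer(1.1, 0.01) for _ in range(12)]

def partition(layers, n_stages):
    """Split layers into contiguous stages, one per machine."""
    k = len(layers) // n_stages
    return [layers[i * k:(i + 1) * k] for i in range(n_stages)]

def run_stage(stage, activation):
    """What one machine does: apply its layers, then forward the result."""
    for layer in stage:
        activation = layer(activation)
    return activation

def pipeline_forward(stages, x):
    """Pass the activation from 'machine' to 'machine' in sequence."""
    for stage in stages:
        x = run_stage(stage, x)
    return x

stages = partition(layers, n_stages=4)   # e.g. 4 volunteer machines
result = pipeline_forward(stages, 1.0)

# Reference: running the unpartitioned model gives the same answer.
reference = 1.0
for layer in layers:
    reference = layer(reference)
```

The point the sketch illustrates is why this is hard over the open internet: per step, only a small activation moves between stages, but every stage must respond before the next token can proceed, so latency and churn among volunteer machines dominate. That communication bottleneck is exactly what the SWARM paper addresses.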