Just read about this project on Twitter and it sounds really interesting:
https://github.com/mozilla-Ocho/llamafile
What do you guys think, could it be even simpler than Ollama?
Exciting and worrying… I have gone to great efforts to use safetensors, so I would hate to see every model packaged in executable format. Then again, I have seen comments about llama.cpp behavior changing for the same model and settings (not sure if that is true, but it could be bad).