Couldn’t wait for the great TheBloke to release it, so I’ve uploaded a Q5_K_M GGUF of Intel/neural-chat-7b-v3-1.
From some preliminary tests on PISA sample questions, it seems at least on par with OpenHermes-2.5-Mistral-7B.
Thank you for your work! Is it possible to download this model if I can’t run Ollama? I couldn’t find a download link or an HF repo.