In no particular order! Don’t forget to use each of their specific prompts for the best generations (see the template sketch at the end of this post)!
AWQ and GGUF versions are also available.
https://huggingface.co/NurtureAI/zephyr-7b-beta-16k
https://huggingface.co/NurtureAI/neural-chat-7b-v3-16k
https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k
https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k
Have fun, LocalLLaMA fam <3! Let us know what you find! <3
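If you don’t want to hand-write each model’s prompt format, a minimal sketch is to let the tokenizer’s bundled chat template build the prompt for you. Whether a given repo actually ships a template is an assumption here; if it doesn’t, fall back to the prompt format on its model card.

```python
from transformers import AutoTokenizer

# Sketch: use the tokenizer's chat template (if the repo ships one) instead of
# hand-writing the prompt format. Repo name is one of the links above.
tok = AutoTokenizer.from_pretrained("NurtureAI/zephyr-7b-beta-16k")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the attached 12k-token report."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```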
Is this a scam or what? None of the models above are actually from NurtureAI:
- zephyr-beta was trained by HuggingFace and is 32K by default
- neural-chat is from Intel
- synthia is from migtissera
Original links:
https://huggingface.co/HuggingFaceH4/zephyr-7b-beta
https://huggingface.co/Intel/neural-chat-7b-v3-1
https://huggingface.co/migtissera/SynthIA-7B-v2.0
NurtureAI extended the context size to 16k
the context was already 32K
https://preview.redd.it/5jl7c7a53i0c1.png?width=958&format=png&auto=webp&s=ae51ae2b52717bb5ab14bed76580e7e0a45075ed
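If anyone wants to check for themselves what (if anything) changed, comparing the configs is enough. This is a minimal sketch using the standard transformers/Mistral config fields; the NurtureAI repo name is taken from the links above, and the printed values are not guaranteed until you run it.

```python
from transformers import AutoConfig

# Compare the fields that actually govern context behaviour.
for repo in ["HuggingFaceH4/zephyr-7b-beta", "NurtureAI/zephyr-7b-beta-16k"]:
    cfg = AutoConfig.from_pretrained(repo)
    print(repo)
    print("  max_position_embeddings:", cfg.max_position_embeddings)  # full context length, in tokens
    print("  sliding_window:", getattr(cfg, "sliding_window", None))  # Mistral attention window, in tokens
    print("  hidden_size:", cfg.hidden_size)                          # embedding width, not a token count
```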
So assuming this release does anything at all, the only thing I can think of is that instead of the “hidden size” being 4k (giving a 4k sliding window into the 32k context), it’s now 16k, giving a 16k window into the 32k context.
That’s just speculation on my part, though, because otherwise the release means nothing, which would be weird.
That’s not what hidden size does.
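For reference, this is roughly what a sliding attention window is, independent of hidden size. A toy sketch, not code from any of these repos: each token only attends to the previous `window` tokens inside the full context, while hidden_size (the width of each token’s hidden vector) never enters into it.

```python
import torch

# Build a causal mask restricted to a sliding window of `window` tokens.
def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return (j <= i) & (i - j < window)      # causal AND within the window

print(sliding_window_mask(seq_len=8, window=4).int())
```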