Eric Hartford, author of the Dolphin models, released dolphin-2.2-yi-34b.
This is one of the earliest community finetunes of Yi-34B.
Yi-34B was developed by a Chinese company that claims SOTA performance on par with GPT-3.5.
HF: https://huggingface.co/ehartford/dolphin-2_2-yi-34b
Announcement: https://x.com/erhartford/status/1723940171991663088?s=20
SynthIA-70B-v1.5 seems to have the same 2K context length as SynthIA-70B-v1.2, not the 4K context length of SynthIA-70B-v1.2b.
You’re right about that observation: when I load the GGUF, KoboldCpp says “n_ctx_train: 2048”. Could that be an erroneous display? I’ve always used v1.5 with 4K context, did all my tests that way, and it’s done so well. If the 2K figure is accurate, it might be even better at its native context! Still, 2K just doesn’t cut it anymore.
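As an aside, the n_ctx_train value KoboldCpp prints comes from the model's GGUF metadata (the `<arch>.context_length` key), so you can check it without loading the model at all. Below is a minimal sketch of reading that key by hand; `gguf_context_length` is a hypothetical helper, not part of KoboldCpp or llama.cpp, and it assumes the standard GGUF header layout (magic, version, tensor count, then key/value pairs):

```python
import struct

# Maps GGUF scalar value-type codes to struct format characters.
_SCALARS = {
    0: "B", 1: "b", 2: "H", 3: "h", 4: "I", 5: "i",
    6: "f", 7: "?", 10: "Q", 11: "q", 12: "d",
}

def _read(f, fmt):
    fmt = "<" + fmt  # GGUF is little-endian
    return struct.unpack(fmt, f.read(struct.calcsize(fmt)))[0]

def _read_string(f):
    # GGUF strings: u64 length followed by UTF-8 bytes.
    n = _read(f, "Q")
    return f.read(n).decode("utf-8")

def _read_value(f, vtype):
    if vtype in _SCALARS:
        return _read(f, _SCALARS[vtype])
    if vtype == 8:  # string
        return _read_string(f)
    if vtype == 9:  # array: element type, count, then elements
        etype = _read(f, "I")
        count = _read(f, "Q")
        return [_read_value(f, etype) for _ in range(count)]
    raise ValueError(f"unknown GGUF value type {vtype}")

def gguf_context_length(path):
    """Return <arch>.context_length from a GGUF file's metadata, or None."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        _read(f, "I")            # format version
        _read(f, "Q")            # tensor count (unused here)
        kv_count = _read(f, "Q")
        ctx = None
        for _ in range(kv_count):
            key = _read_string(f)
            value = _read_value(f, _read(f, "I"))
            if key.endswith(".context_length"):
                ctx = value
        return ctx
```

If this reports 2048 for v1.5, the KoboldCpp display is not erroneous; the 4K usage would then be relying on context extension beyond the trained length.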