https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat
https://huggingface.co/deepseek-ai/deepseek-llm-67b-base
Knowledge cutoff May 2023, not bad.
Online demo: https://chat.deepseek.com/ (Google oauth login)
Another Chinese model. The demo is censored by keywords, but it's not that censored when run locally.
I asked it to create a simple chat interface that talks to OpenAI's GPT-3.5 API with the stream = true option. On the first try it didn't know how to handle the stream, so it simply used res.json(). After I told it that streamed text needs to be handled in a special way, it understood and wrote the correct code. Overall, I'm quite impressed. Way to go, DeepSeek Coder!
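For context on why res.json() fails there: with stream = true the API returns Server-Sent Events, so the response body is a sequence of `data:` lines rather than one JSON object. A minimal sketch of the parsing the model eventually got right (the helper name `parseSseChunk` is my own, not from the thread):

```javascript
// Parse one chunk of an OpenAI-style SSE stream and collect the text deltas.
// With stream: true, each event looks like:
//   data: {"choices":[{"delta":{"content":"..."}}]}
// and the stream ends with the sentinel line `data: [DONE]`.
function parseSseChunk(chunk) {
  const tokens = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blank/keep-alive lines
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel, not JSON
    const delta = JSON.parse(payload).choices[0].delta;
    if (delta.content) tokens.push(delta.content); // first delta may only carry a role
  }
  return tokens;
}
```

In a real client you would call this on each chunk read from the response body and append the tokens to the UI as they arrive, instead of waiting for the whole reply.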
Does the base model give refusals? 67B sounds like a full foundation training run.
GGUF via TheBloke:
Wow, this model seems very good for the Italian language!
I wish there were a 13B model that could just fit on my GPU with quantization.
Seems I am doing something wrong with this one.
I got abysmal results with Q4_K_M: silly grammatical errors and typos, and it didn't stick to the prompt either, so I don't know.
I don't know if this helps, but I'm using the GGUF version of that and it's working perfectly.
Just found out that this was released by High-Flyer Quant, one of the largest quantitative funds in China.
The chat model is the first that knows how to compare the weight of bricks and feathers.
The weight of an object is determined by its mass and the gravitational force acting on it. In this case, both objects are being compared under the same gravitational conditions (assuming they’re both on Earth), so we can compare their masses directly to determine which weighs more.
1kg of bricks has a mass of 1 kilogram. 2kg of feathers has a mass of 2 kilograms.
Since 2 is greater than 1, the 2kg of feathers weigh more than the 1kg of bricks.
The coding is pretty damn good based on limited tests. I'll have to experiment more.
I made it write about itself using LocalAI https://sfxworks.net/posts/deepseek/
I will post a how-to on using LocalAI in my free time if anyone is interested.
I'm desensitized at this point. I wonder whether this is yet another "Pretraining on the Test Set Is All You Need" marketing stunt, as most new models lately have been.
I threw my reasoning test questions at the web version and it performed worse than most 70Bs I've tried. About the level of Yi.
Brother u/The-Bloke, can we get a quant of the uncensored base model too? ♥╣[-_-]╠♥
LoneStriker has a 2.4 bpw quant up: https://huggingface.co/LoneStriker/deepseek-llm-67b-chat-2.4bpw-h6-exl2
> not that censored on local.
So… Some censoring?
DeepSeek is one of my favorites; I use it every day for code generation. It's got an extra option for chat now at the link you shared — just general chat about anything — and it's pretty good at it.