I have tried to set up 3 different versions of it: TheBloke's GPTQ and AWQ quantizations, and the original deepseek-coder-6.7b-instruct.

I have tried the 34B as well.

My specs are 64 GB RAM, an RTX 3090 Ti, and an i7-12700K.

With AWQ I just get a bugged response (a run of repeated `"` characters) until max tokens.

GPTQ works much better, but all versions seem to append an unnecessary * at the end of some lines,

and they give worse results than the website (deepseek.com). Say I ask for a snake game in pygame: it usually gives an unusable version, and after 5-6 tries I'll get a somewhat working version, but I'll still need to ask for a lot of changes.

On the official website, meanwhile, I get working code on the first try, without any problems.

I am using the Alpaca template, adjusted to match the DeepSeek format (oobabooga webui).
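For reference, here is a minimal sketch of what a DeepSeek-Coder-style adjustment of the Alpaca template might look like. The system line and exact markers are assumptions based on the model card conventions (Alpaca-style `### Instruction:` / `### Response:` headers), so verify them against the tokenizer config for your exact checkpoint; a template mismatch like this is a common cause of degraded local output.

```python
# Hypothetical single-turn prompt builder in the Alpaca-like layout that
# deepseek-coder-instruct is reported to use. Check the model's own
# tokenizer_config.json / chat template before relying on this wording.
SYSTEM = (
    "You are an AI programming assistant. "
    "You only answer questions related to computer science."
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the assumed instruct template."""
    return (
        f"{SYSTEM}\n"
        f"### Instruction:\n{instruction}\n"
        f"### Response:\n"
    )

print(build_prompt("Write a snake game in pygame."))
```

If the local markers differ from what the model was fine-tuned on (even by whitespace or a missing system line), quantized checkpoints often produce exactly the kind of degraded or artifact-ridden output described above.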

What could cause this? Is the website version different from the Hugging Face model?

  • AfterAte@alien.topB · 1 year ago
    Oh nice! I’ll have to try those settings and compare with the StarChat preset in Oobabooga. I hear ya, I get 1t/s too… it’s unbearable.

    • YearZero@alien.topB · 1 year ago
      I miss the days when high-end GPUs were like $400-500! I'm not made of money, and I also use a laptop, so the most I could buy right now would be 16 GB of VRAM anyway. I'll probably save up, wait for next gen, and see if they make any headway there.