Title, essentially. I'm currently running an RTX 3060 with 12GB of VRAM, 32GB RAM, and an i5-9600K. I've been running 7B and 13B models effortlessly via KoboldCPP (I tend to offload all 35 layers to GPU for 7Bs, and 40 for 13Bs) + SillyTavern for role-playing purposes, but slowdown becomes noticeable at higher context with 13Bs (not too bad, so I deal with it). Is this setup capable of running bigger models like 20B, or potentially even 34B?
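A rough way to sanity-check this yourself: a ~4-bit GGUF quant costs very roughly 0.6 bytes per parameter, plus some overhead for the KV cache and CUDA buffers, and you only pay GPU memory for the layers you offload. A minimal back-of-envelope sketch (all constants and layer counts here are ballpark assumptions, not exact figures for any particular quant or backend):

```python
# Back-of-envelope VRAM estimate for partial GGUF layer offloading.
# All numbers are rough assumptions (roughly Q4_K_M-sized weights), not exact figures.

def vram_estimate_gb(params_b: float, total_layers: int, gpu_layers: int,
                     bytes_per_param: float = 0.6, overhead_gb: float = 1.5) -> float:
    """Approximate GPU memory used when offloading `gpu_layers` of `total_layers`."""
    model_gb = params_b * bytes_per_param           # quantized weights for the whole model
    per_layer_gb = model_gb / total_layers          # assume layers are roughly equal in size
    return gpu_layers * per_layer_gb + overhead_gb  # plus KV cache / scratch buffers

# Fully offloaded 13B (~41 layers) vs. a 20B with only part of its layers on the GPU
print(f"13B, 41/41 layers: ~{vram_estimate_gb(13, 41, 41):.1f} GB")
print(f"20B, 40/63 layers: ~{vram_estimate_gb(20, 63, 40):.1f} GB")
```

Roughly speaking, a 20B at ~4-bit can still work on 12GB if you offload fewer layers and accept slower CPU-side processing for the rest, while a 34B generally won't fit fully in VRAM at that quant.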

  • sampdoria_supporter@alien.topB · 1 year ago

    Honestly, Ollama + LiteLLM is fantastic for people in your position (assuming you’re running Linux). Way easier to focus on your application and not have to deal with the complications you’re describing. It just works.
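    For reference, a minimal sketch of that combo, assuming Ollama is serving on its default port and a model has already been pulled (the model name below is just a placeholder):

    ```python
    # Calling a locally served Ollama model through LiteLLM.
    # Assumes `ollama serve` is running on the default port 11434 and that the
    # model referenced below (a placeholder) has been pulled with `ollama pull`.
    from litellm import completion

    response = completion(
        model="ollama/mistral",             # "ollama/" prefix routes to the Ollama backend
        messages=[{"role": "user", "content": "Write a one-line greeting in character."}],
        api_base="http://localhost:11434",  # Ollama's default local endpoint
    )
    print(response.choices[0].message.content)
    ```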

    • henk717@alien.topB · 1 year ago

      KoboldCPP, which he's already using, is a better fit due to its superior context shifting.
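      For anyone unfamiliar: as I understand it, context shifting means that when the chat outgrows the context window, KoboldCPP drops the oldest chat turns and shifts its existing KV cache instead of reprocessing the whole prompt, so only the genuinely new tokens get evaluated and generation stays fast at high context. A rough conceptual sketch of the trimming side (not KoboldCPP's actual code; the token counter is a stand-in):

      ```python
      # Conceptual illustration only, not KoboldCPP's actual implementation.
      # When the prompt overflows, evict the oldest chat turns while keeping the
      # pinned system prompt, so the backend can shift/reuse its cached state and
      # only process the new tokens.
      from typing import Callable, List

      def shift_context(system: str, turns: List[str], max_tokens: int,
                        count_tokens: Callable[[str], int]) -> List[str]:
          """Drop oldest turns until the prompt fits in `max_tokens`."""
          budget = max_tokens - count_tokens(system)
          kept = list(turns)
          while kept and sum(count_tokens(t) for t in kept) > budget:
              kept.pop(0)  # evict the oldest turn first
          return [system] + kept
      ```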