Title essentially. I'm currently running an RTX 3060 with 12GB of VRAM, 32GB RAM and an i5-9600K. I've been running 7B and 13B models effortlessly via KoboldCPP (I tend to offload all 35 layers to GPU for 7Bs, and 40 for 13Bs) + SillyTavern for role-playing purposes, but slowdown becomes noticeable at higher context with 13Bs (not too bad, so I deal with it). Is this setup capable of running bigger models like 20B or potentially even 34B?
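For context, the kind of launch I mean is something like this (model filename and layer count here are placeholders, a sketch rather than my literal command):

    python koboldcpp.py --model model-13b.Q4_K_M.gguf --usecublas --gpulayers 40 --contextsize 4096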

  • flurbz@alien.topB · 1 year ago

    You’re right, this shouldn’t work. But for some strange reason, using --usecublas loads the hipBLAS library:

    Welcome to KoboldCpp - Version 1.49.yr1-ROCm
    Attempting to use hipBLAS library for faster prompt ingestion. A compatible AMD GPU will be required.
    Initializing dynamic library: koboldcpp_hipblas.so

    I have no idea why this works, but it does, and since the 6700XT took quite a bit of effort to get going, I’m keeping it this way.
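
    For anyone wanting to try the same thing on the ROCm build, the launch looks something like this (model path and layer count are placeholders, and flags may differ between versions):

        python koboldcpp.py --model /path/to/model.gguf --usecublas --gpulayers 35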