As the title says, when combining a P40 and an RTX 3090, a few use cases come to mind, and I wanted to know whether they can actually be done. I'd greatly appreciate your help:
First, could you run larger models where the computation happens on the 3090 and the P40 is only used for VRAM offloading? Would that be faster than offloading to system memory?
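
A quick way to sanity-check the offloading question is to time a round-trip copy to the second GPU versus to system RAM. Here is a minimal PyTorch sketch, assuming two CUDA devices are visible; the tensor size and device indices are illustrative:

```python
# Compare an offload round-trip via the second GPU vs. via system RAM.
import time
import torch

def roundtrip_ms(dst: str) -> float:
    x = torch.randn(4096, 4096, device="cuda:0")  # ~64 MiB in fp32
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    y = x.to(dst)        # offload: cuda:0 -> dst
    _ = y.to("cuda:0")   # restore: dst -> cuda:0
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) * 1000

print("via second GPU:", roundtrip_ms("cuda:1"), "ms")
print("via system RAM:", roundtrip_ms("cpu"), "ms")
```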

Could you compute on both of them in an asymmetric fashion, e.g. putting more layers on the RTX 3090 and fewer on the P40?
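
Most stacks support exactly this kind of uneven split, with the caveat that each card then computes the layers it holds rather than acting as pure storage. A minimal sketch with Hugging Face transformers/accelerate, where the model ID and the memory caps are placeholders, not recommendations:

```python
# Uneven split via accelerate's "auto" device map.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",         # hypothetical model choice
    torch_dtype=torch.float16,
    device_map="auto",
    # Give the 3090 (device 0) most of the layers and the P40 (device 1) fewer.
    max_memory={0: "22GiB", 1: "8GiB"},
)
```

llama.cpp-based loaders expose the same idea as a tensor-split ratio across GPUs.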

Lastly, and this one probably works: you could run two different LLM instances, for example a bigger one on the 3090 and a smaller one on the P40, I assume.
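
That case is straightforward if each model is pinned to one card. A sketch, again with transformers, where both model IDs are placeholders:

```python
# Two independent models, one per GPU.
from transformers import AutoModelForCausalLM

big = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",   # hypothetical larger model -> 3090
    device_map={"": 0},
)
small = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",    # hypothetical smaller model -> P40
    device_map={"": 1},
)
```

In practice it is often cleaner to run two separate server processes, each started with CUDA_VISIBLE_DEVICES restricted to one card.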

  • Hoppss@alien.top · 10 months ago

    This is not true: I have split two separate LLM models partially across a 4090 and a 3080 and run inference on both at the same time.

    This can be done in oobabooga’s repo with just a little tinkering.
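
    For reference, text-generation-webui's Transformers loader sits on top of accelerate, so a rough standalone equivalent of this kind of split might look like the following sketch. The model names are placeholders and the memory budgets simply carve up a 4090/3080 pair (roughly 24 GB + 10 GB):

    ```python
    # Two models, each split across both GPUs, with per-model memory
    # budgets so their layer allocations don't collide.
    from transformers import AutoModelForCausalLM

    model_a = AutoModelForCausalLM.from_pretrained(
        "model-a",                           # placeholder model ID
        device_map="auto",
        max_memory={0: "12GiB", 1: "5GiB"},  # slice of the 4090 / 3080
    )
    model_b = AutoModelForCausalLM.from_pretrained(
        "model-b",                           # placeholder model ID
        device_map="auto",
        max_memory={0: "10GiB", 1: "4GiB"},
    )
    ```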