Hello, this has probably been asked a bazillion times, but I can’t find an example. I have installed Stable Diffusion and LLaMA on my new PC. However, neither appears to be utilising my new RTX 4080 for generation. Generation of text or images is very slow, and the GPU utilisation stays at 0–4% throughout. Any idea how this could be addressed? I am no expert, so I don’t have a clue what I could change here.
It is on a laptop, by the way: an NVIDIA RTX 4080 (Laptop) and a 12th Gen Intel CPU.
Thanks in advance!
If you have installed or use Oobabooga's text-generation-webui, download a model that has been quantized for NVIDIA GPUs. Those are the models with GPTQ or the newer AWQ suffixes.
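Before downloading anything, it's also worth a quick sanity check that the webui's Python environment can even see the GPU; if it was set up with a CPU-only build of PyTorch, no model format will help. Just a sketch, assuming PyTorch is the backend:

```python
import torch

# If this prints False, the webui was installed with a CPU-only build of
# PyTorch and will never touch the GPU, no matter which model you download.
print(torch.cuda.is_available())

# Should report the RTX 4080 Laptop GPU if the CUDA build is working.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```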
On Hugging Face, the user “TheBloke” has aggregated dozens and dozens, maybe hundreds, of quantized models.
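If you want to try one of those outside the webui, a rough sketch of loading a GPTQ model straight through transformers looks something like this (assuming transformers, accelerate, and auto-gptq are installed; the repo name below is just an example from his page):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-Chat-GPTQ"  # example repo; pick any GPTQ model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the quantized weights on the GPU via accelerate
)

prompt = "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If that runs and nvidia-smi shows VRAM usage, the GPU side is fine and it's just a matter of picking the GPU loader in the webui.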
The YouTube channel Aitrepreneur has a couple of good videos on installing Ooba and how to run the GPU-quantized models.