• 0 Posts
  • 5 Comments
Joined 11 months ago
Cake day: November 1st, 2023


  • I am currently running Windows with my AMD card, but only because I prefer Windows. Pretty much nothing works, except Stable Diffusion at very slow speeds via DirectML, and inference via koboldcpp-rocm. I was able to get normal Stable Diffusion working on Ubuntu after ~2 hours of trying. Sadly, it randomly stopped working the next week. I never managed to get Ooba working, but I gave up rather quickly after I found koboldcpp-rocm.


    • Model loaders: If you want to load a GPTQ model, you can use ExLlama or ExLlamav2. AutoGPTQ is outdated. I personally only use GGUF models, loaded via llama.cpp (a minimal loading sketch follows this list).

    • Start-up parameters: I only use auto-launch.

    • Context length: The normal length for Llama 1 based models is 2048, for Llama 2 based models (every model except the new 7B ones) it is 4096, and for Mistral (the new 7B models) it is 8192. You can use the rope alpha and rope base settings to make more context usable, at the cost of more VRAM. If you want to double your context (4k to 8k), you can set rope alpha to 2.5 and rope base to about 25000 (the alpha-to-base relationship is sketched after this list). Do not use compress_pos_emb.

    • Models: With 24 GB of VRAM you can fit any 7B or 13B model. 20B models exist, but they are not that great. Recently a few good 34B models have been released, but you won't be able to run them with a high context window (a rough VRAM estimate is sketched below).
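
    For reference, here is a minimal sketch of loading a GGUF model through llama-cpp-python (the Python bindings for llama.cpp). The model path and settings are placeholders for illustration, not a specific recommendation.

    ```python
    # Minimal sketch: loading a GGUF model with llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/mistral-7b-instruct.Q5_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
        n_ctx=8192,        # Mistral-based 7B models support 8k context
    )

    out = llm("Q: What is ROCm? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```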
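
    On the rope settings: the alpha/base pairing above lines up with the commonly used NTK-aware RoPE scaling formula. This sketch only shows that arithmetic, assuming the standard Llama head dimension of 128; it is an illustration, not the loader's exact internals.

    ```python
    # NTK-aware RoPE scaling: base' = base * alpha ** (d / (d - 2)),
    # with head dimension d = 128 for Llama-family models.
    def rope_base_from_alpha(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
        return base * alpha ** (head_dim / (head_dim - 2))

    # alpha = 2.5 gives a rope base of roughly 25000, as mentioned above.
    print(round(rope_base_from_alpha(2.5)))  # ~25400
    ```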
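
    On VRAM: a rough back-of-the-envelope estimate (quantized weight file plus fp16 KV cache) shows why a 13B model at 4k context fits comfortably in 24 GB. The layer/head counts below are the standard Llama-2 13B values, and the GGUF file size is an approximation.

    ```python
    # Rough VRAM estimate: weights on disk + fp16 KV cache.
    def kv_cache_gib(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem=2):
        # factor of 2 for keys and values
        return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem / 1024**3

    weights_gib = 9.2  # approx. size of a 13B Q5_K_M GGUF
    cache_gib = kv_cache_gib(n_layers=40, n_kv_heads=40, head_dim=128, n_ctx=4096)
    print(f"~{weights_gib + cache_gib:.1f} GiB of 24 GiB")  # ~12.3 GiB, fits comfortably
    ```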