I want to begin by saying my specs are an RTX 4080 with 16GB VRAM + 32GB system RAM.
I've managed to run the chronoboros 33B model pretty smoothly, even if a tad slow.
Yet I've run into what I think are hardware issues trying to run TheBloke/Capybara-Tess-Yi-34B-200K-GPTQ and Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k (tried both AWQ and GPTQ). Is there a reason models with a similar parameter count won't run?
What are you using to run them?
In any case, larger context models require *a lot* more RAM/VRAM.
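To give a rough sense of scale: the KV cache alone grows linearly with context length, and at 200K tokens it can dwarf the model weights. Here's a back-of-the-envelope sketch; the config numbers are assumptions (roughly Yi-34B-like: 60 layers, 8 KV heads via GQA, head dim 128), not exact values for that checkpoint.

```python
# Rough KV-cache size estimate: why a 200K-context model can blow past 16 GB VRAM.
# Formula: 2 (keys + values) x layers x kv_heads x head_dim x context x bytes/elem.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # bytes_per_elem=2 assumes fp16 cache entries
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Assumed, Yi-34B-like config at the full 200K context window
gib = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128, ctx_len=200_000) / 2**30
print(f"~{gib:.0f} GiB just for the KV cache at full context")  # ~46 GiB
```

Even with the weights quantized to 4-bit, that cache won't fit in 16GB VRAM unless you cap the context way below the advertised maximum, which is why the 200K and SuperHOT-8k variants choke where a plain 33B doesn't.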
I'm using ooba; I haven't bothered much with KoboldCPP since I'm not really running GGUF models.
What kind of performance do you get on this rig with a 7B 8-bit model like Mistral?