Not a LoRA expert, but I'm guessing that your model is also on your GPU. The majority of that 8.5GB footprint is probably just the model weights themselves, meaning that LoRA actually is giving you a significant decrease in the *additional* GPU memory used during training.
Try just loading your model (no training) and checking your GPU memory usage. If it's ~8GB, LoRA is cutting your training memory overhead from roughly 0.5GB down to 0.1GB.
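A quick back-of-envelope estimate shows why the *training* overhead is the part LoRA shrinks: gradients and Adam optimizer states are only kept for trainable parameters, so swapping full fine-tuning for low-rank adapters cuts that overhead by orders of magnitude. The hidden size, rank, and layer count below are illustrative assumptions, not your actual model:

```python
def training_overhead_gb(trainable_params, bytes_per_param=4):
    # Gradients + Adam's two moment buffers ~ 3 extra fp32 copies
    # of every trainable parameter (rough estimate).
    return trainable_params * bytes_per_param * 3 / 1e9

d, r, layers = 4096, 8, 32           # hidden size, LoRA rank, layer count (assumed)
full = layers * d * d                # full fine-tune: every weight is trainable
lora = layers * 2 * d * r            # LoRA: two low-rank factors (d x r) per layer

print(f"full fine-tune overhead: {training_overhead_gb(full):.2f} GB")
print(f"LoRA overhead:           {training_overhead_gb(lora):.3f} GB")
```

The model weights themselves (~8GB here) sit on the GPU either way; LoRA only trims the trainable-parameter overhead on top of them, which matches the 0.5GB → 0.1GB difference you're seeing.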