bjergerk1ng@alien.top to Machine Learning@academy.garden · 10 months ago, replying to "[D] What is the motivation for parameter-efficient fine tuning if there's no significant reduction in runtime or GPU memory usage?":
I think the big win comes from combining LoRA with quantization (i.e. QLoRA), which you can't normally do with full fine-tuning.
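To make the intuition concrete, here is a minimal sketch (illustrative arithmetic only, not the actual QLoRA implementation) of why the combination helps: the frozen base weight of a `d x k` linear layer never receives gradient updates, so it can sit in 4-bit storage, while only the small rank-`r` LoRA factors are trained in higher precision. The layer shape and rank below are assumed values for illustration.

```python
def full_trainable_params(d: int, k: int) -> int:
    """Full fine-tuning updates every entry of the d x k weight matrix."""
    return d * k

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """A rank-r LoRA adapter trains two low-rank factors:
    A (d x r) and B (r x k), so r * (d + k) parameters total."""
    return r * (d + k)

def weight_storage_bytes(d: int, k: int, r: int) -> tuple[int, int]:
    """Compare storage for the weights themselves:
    - full fine-tuning: the whole matrix in fp16 (2 bytes/param),
      since it must stay in a trainable dtype;
    - QLoRA-style: frozen base in 4-bit (0.5 bytes/param) plus
      the fp16 LoRA factors."""
    full_fp16 = d * k * 2
    quantized_base_plus_adapter = d * k // 2 + lora_trainable_params(d, k, r) * 2
    return full_fp16, quantized_base_plus_adapter

# Example: a 4096 x 4096 projection (typical of a 7B-scale model) with rank 16.
d = k = 4096
r = 16
print(full_trainable_params(d, k))      # 16777216 trainable params
print(lora_trainable_params(d, k, r))   # 131072 trainable params (~0.8%)
print(weight_storage_bytes(d, k, r))    # (33554432, 8650752) bytes
```

The trainable-parameter reduction alone does not shrink the base weights, which is why LoRA by itself saves less memory than people expect; the quantized frozen base is where most of the footprint reduction comes from.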