seraschka@alien.top to Machine Learning@academy.garden · English · 2 years ago
[P] Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation): Things I Learned From Hundreds of Experiments
magazine.sebastianraschka.com
Cross-posted to: localllama@poweruser.forum
seraschka@alien.top (OP) · English · 2 years ago

> r=256, alpha=64

I think I need glasses 😅. You are right that other ratios worked quite well, too. I amended that section a bit.
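For context, the numbers being discussed are the two main LoRA hyperparameters: the rank r of the low-rank update matrices and the scaling factor alpha. Below is a minimal sketch of how such a configuration might look, assuming the Hugging Face `peft` library; the dropout value and target modules shown here are illustrative, not the exact settings from the article.

```python
from peft import LoraConfig

# Sketch of a LoRA configuration with the rank/alpha values under discussion.
lora_config = LoraConfig(
    r=256,           # LoRA rank: dimensionality of the low-rank update matrices
    lora_alpha=64,   # scaling factor; the weight update is scaled by alpha / r
    lora_dropout=0.05,                              # illustrative value
    target_modules=["q_proj", "k_proj", "v_proj"],  # illustrative attention projections
)
```

Because the update is scaled by alpha / r, it is the ratio of the two values (rather than either value in isolation) that largely determines the effective magnitude of the adaptation, which is why several different rank/alpha combinations can perform similarly.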