ninjasaid13@alien.top to LocalLLaMA@poweruser.forum · English · 1 year ago
LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B (arxiv.org)
t0nychan@alien.top · English · 1 year ago
I think Perplexity AI used the same technique to train their newly released models, pplx-7b-chat and pplx-70b-chat.