“I’m getting tired of all these merges, as if they were the magical solution to everything.”
At a high level, merging lets you get a lot of training done cheaply.
If you have one model finetuned on dataset A and another finetuned on dataset B, merging them gives you, at almost no cost, a model that behaves as if it had been trained on both A and B.
It is the magical solution to “I can’t afford to finetune a model”.
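To make that concrete, here is a minimal sketch of the simplest kind of merge: a linear (weight-averaging) merge of two finetunes that share the same base architecture. The checkpoint names are hypothetical placeholders, and the 50/50 ratio is just the most basic choice; real merges often tune this ratio or use fancier schemes.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint names: two finetunes of the SAME base model.
model_a = AutoModelForCausalLM.from_pretrained("finetune-on-set-A")
model_b = AutoModelForCausalLM.from_pretrained("finetune-on-set-B")

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Average every parameter tensor. This is the whole "training": no data,
# no gradients, just arithmetic on the weights.
merged = {name: 0.5 * state_a[name] + 0.5 * state_b[name] for name in state_a}

# Reuse model_a's architecture to hold the merged weights and save the result.
model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```

Note that this runs on CPU in minutes, which is exactly why merging is so appealing compared to paying for another finetuning run.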