Hopefully I’m not being off-topic here, but a recent paper suggested that repeating a requirement several times within the same instructions leads the model to be more compliant with it.
Should be expected. It’s overfitting.
Overfitting, by definition, happens when your generalization error goes up.
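To make that concrete, here’s a minimal synthetic sketch (assuming numpy and scikit-learn are available; the data is made up and the exact numbers will vary with the seed): as polynomial degree grows, training error keeps falling while held-out error eventually turns back up, and that upturn is the overfitting.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, 40).reshape(-1, 1)
    y = np.sin(3 * X).ravel() + rng.normal(0, 0.2, 40)
    X_val = rng.uniform(-1, 1, 500).reshape(-1, 1)
    y_val = np.sin(3 * X_val).ravel()

    for degree in (1, 3, 9, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X, y)
        train_mse = ((model.predict(X) - y) ** 2).mean()
        val_mse = ((model.predict(X_val) - y_val) ** 2).mean()
        # training error keeps shrinking; validation error eventually rises
        print(f"degree {degree:2d}  train MSE {train_mse:.3f}  val MSE {val_mse:.3f}")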
It’s possible to “overfit” to a subset of the data. Generalization error going up is a symptom of “overfitting” to the entire dataset. Memorization is functionally equivalent to locally overfitting, i.e. generalization error going up in a specific neighborhood of the data. You can have a global reduction in generalization error while also having neighborhoods where generalization gets worse.
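A toy illustration of the local version (a synthetic sketch, assuming numpy and scikit-learn; numbers will vary with the seed): a 1-nearest-neighbour regressor memorizes label noise concentrated in one neighborhood, beats a linear model on global test error, and still does worse inside that neighborhood.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 6, 300).reshape(-1, 1)
    y_train = np.sin(X_train).ravel()
    # label noise concentrated in one neighborhood of the input space
    noisy = (X_train.ravel() > 2.0) & (X_train.ravel() < 2.5)
    y_train[noisy] += rng.normal(0, 1.0, noisy.sum())

    X_test = np.linspace(0, 6, 1000).reshape(-1, 1)
    y_test = np.sin(X_test).ravel()
    local = (X_test.ravel() > 2.0) & (X_test.ravel() < 2.5)

    for model in (LinearRegression(), KNeighborsRegressor(n_neighbors=1)):
        model.fit(X_train, y_train)
        sq_err = (model.predict(X_test) - y_test) ** 2
        # the 1-NN memorizer can win globally and still lose locally
        print(type(model).__name__,
              f"global MSE {sq_err.mean():.3f}",
              f"local MSE {sq_err[local].mean():.3f}")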
Memorization is functionally equivalent to locally overfitting.

Uh, no it is not. Memorization and overfitting are not the same thing. You are certainly capable of memorizing things without degrading your generalization performance (I hope).
On most tasks, memorization would be overfitting, but I think this shows that “overfitting” is task- and generalization-dependent. As long as accurate predictions are being made for new data, it doesn’t matter that the model can also cough up the old.
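A toy case of exactly that (a sketch assuming scikit-learn; the dataset is synthetic): a 1-nearest-neighbour classifier memorizes its training set verbatim, so training accuracy is 1.0 by construction, yet it still predicts new points well because the task is clean.

    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_moons(n_samples=2000, noise=0.05, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # 1-NN stores the training set verbatim, so train accuracy is 1.0:
    # total memorization, yet new points are still classified correctly.
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    print("train accuracy:", clf.score(X_tr, y_tr))
    print("test accuracy: ", clf.score(X_te, y_te))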
Indeed. Just as with training humans to be smart, rote memorization sometimes happens but is generally not the goal. Research like this helps us avoid it better in the future.
That’s not overfitting. That’s just fitting.
Do you know whether it’s true or grounded?
Thanks in advance.