So there's detect-pretrain-data, https://swj0419.github.io/detect-pretrain.github.io/ , where one can test whether a model was pretrained on a given text. So why don't we just test all the models going on the leaderboard and reject those detected as having been pretrained on the benchmark data? It would end the "train on test" issue.
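For context, the linked work scores membership with a "Min-K% Prob" style statistic: average the log-probabilities of the least likely k% of tokens, on the idea that memorized text has few surprising tokens. A minimal sketch of that scoring step, with made-up per-token log-probs standing in for real model output (the numbers and the `k` value are illustrative, not from the paper):

```python
def min_k_prob(token_logprobs, k=0.2):
    """Average log-prob of the k% least likely tokens.
    Scores near 0 suggest the text may have been seen in pretraining;
    strongly negative scores suggest it was not."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]  # the most "surprising" tokens
    return sum(lowest) / n

# Toy per-token log-probs, as a model's forward pass might report them.
seen   = [-0.1, -0.2, -0.05, -0.3, -0.15]  # uniformly high-probability tokens
unseen = [-2.5, -0.4, -3.1, -0.2, -4.0]    # several very surprising tokens

print(min_k_prob(seen))    # -0.3
print(min_k_prob(unseen))  # -4.0
```

In practice you would get `token_logprobs` from the model under test and compare the score against a calibrated threshold; this just shows why the statistic separates the two cases.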
It's all a rabbit hole of time wasting, imho. People judge model x or y on how well it works for their own use cases.
Well, people don't want to be falsely advertised to about a model's capabilities. If it's only good at certain use cases, then just say so.
Ask on the huggingface leaderboard page!
The HF staff do seem to look at it, and have an interest in weeding out “contaminated” models (as they have already marked a few).
I will use this now for some tests.
Noice man