Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say "our new method improves on it by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?

  • yoshiK@alien.top · 10 months ago

    How do you measure "statistical significance" for a benchmark? Run your model 10 times on the same test set, get exactly the same result each time, and conclude that the variance is 0 and the significance is infinitely many sigma?

    So to get reasonable statistics, you would need to split your test set into, say, 10 parts and calculate the mean and variance of your metric across them, but that is only a reasonable thing to do as long as it is cheap to gather data for the test set (and running the test set is fast).
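    As an illustration of that fold-splitting idea (a sketch of mine, not from the comment), here is a minimal Python example: split a fixed test set into 10 folds, compute per-fold accuracy for two models, report mean ± standard error, and run a paired t-test across folds. The data, the model outputs, and the roughly 80%/82% accuracy levels are all synthetic placeholders.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel

    def per_fold_accuracy(y_true, y_pred, n_folds=10, seed=0):
        """Split a fixed test set into n_folds random parts and return
        the accuracy of the given predictions on each part."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y_true))
        return np.array([
            np.mean(y_true[fold] == y_pred[fold])
            for fold in np.array_split(idx, n_folds)
        ])

    # Synthetic stand-in: two models evaluated on the same 5,000-example test set.
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 10, size=5000)
    pred_a = np.where(rng.random(5000) < 0.80, y_true, rng.integers(0, 10, size=5000))
    pred_b = np.where(rng.random(5000) < 0.82, y_true, rng.integers(0, 10, size=5000))

    acc_a = per_fold_accuracy(y_true, pred_a)  # same seed => same folds for both models
    acc_b = per_fold_accuracy(y_true, pred_b)

    print(f"model A: {acc_a.mean():.3f} +/- {acc_a.std(ddof=1) / np.sqrt(len(acc_a)):.3f}")
    print(f"model B: {acc_b.mean():.3f} +/- {acc_b.std(ddof=1) / np.sqrt(len(acc_b)):.3f}")

    # Paired t-test: both models are scored on the same folds, so pair the fold accuracies.
    t_stat, p_value = ttest_rel(acc_a, acc_b)
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
    ```

    With only 10 folds the test is fairly low-powered, which is the commenter's point: you only get useful error bars when the test set is large (or cheap to extend) relative to the effect size you care about.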