Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say “our results improved with our new method by X%”. Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?
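For concreteness, here is a minimal sketch of the kind of test I have in mind: McNemar’s test on the predictions of two classifiers over the same test set, via statsmodels. The data below is synthetic, and `pred_z` / `pred_y` are just placeholder names for the benchmark’s and the new method’s predictions.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Synthetic stand-in data: true labels plus predictions of benchmark Z
# (~80% accurate) and new method Y (~85% accurate) on the same test set.
y_true = rng.integers(0, 2, size=500)
pred_z = np.where(rng.random(500) < 0.80, y_true, 1 - y_true)
pred_y = np.where(rng.random(500) < 0.85, y_true, 1 - y_true)

correct_z = pred_z == y_true
correct_y = pred_y == y_true

# 2x2 table of (dis)agreement; McNemar's test only uses the off-diagonal
# "discordant" cells, i.e. cases where exactly one model is correct.
table = [
    [int(np.sum(correct_z & correct_y)),  int(np.sum(correct_z & ~correct_y))],
    [int(np.sum(~correct_z & correct_y)), int(np.sum(~correct_z & ~correct_y))],
]

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"accuracy Z: {correct_z.mean():.3f}, accuracy Y: {correct_y.mean():.3f}")
print(f"McNemar p-value: {result.pvalue:.4f}")
```

This kind of check is cheap once both models have been evaluated on the same test set, which is why I find its absence surprising.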
I second that. Additionally, I would say that classical statistics is designed for hypothesis testing: you have an idea, make a priori assumptions (e.g. which variables to look at!), collect sample data, and then want to know whether the findings generalize to the entire population (that is, everything/everyone in the world). The underlying idea is to better understand how the world works.
My own experience, as a pretty traditionally trained researcher (i.e. one who knows a bit about statistics) leading a group of data scientists, is that the goals really differ. My data scientists try to build a model that works; that is the primary goal. I am trying to understand why something is the way it is, even if that means my a-priori model doesn’t work.
The border between “traditional stats” and ML is very fluid. ML uses a lot of statistics, and hypothesis-testing research uses a lot of ML these days; only the underlying motivation might be slightly different.