Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say "our new method improved results by X%". Nobody runs a significance test to check whether the new method Y actually outperforms benchmark Z. Is there a reason why? This seems especially important to me when you break your results down, e.g. to the analysis of certain classes in object classification. Or am I overlooking something?
One big reason for this is that there is a difference between prediction and inference. Most machine learning papers are not testing a hypothesis about a population parameter; they are reporting predictive performance on a held-out test set, so the usual machinery of significance testing is not the framing the authors are working in.
That said, ML definitely does get applied to inference as well, but in those cases the lack of p-values is often one of the lesser complaints.
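
If you did want to run such a test for the case in the question (two classifiers evaluated on the same test set), McNemar's test on the paired predictions is one common choice. Below is a minimal sketch with toy placeholder data (the variable names and accuracy figures are made up for illustration, not taken from any paper), using `statsmodels`:

```python
# Minimal sketch: McNemar's test for comparing two classifiers evaluated on the
# SAME test set. y_true, pred_a, pred_b stand in for data you would already have.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Toy stand-ins for real data: binary true labels and two models' predictions.
y_true = rng.integers(0, 2, size=1000)
pred_a = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85% accurate
pred_b = np.where(rng.random(1000) < 0.88, y_true, 1 - y_true)  # ~88% accurate

correct_a = pred_a == y_true
correct_b = pred_b == y_true

# 2x2 contingency table of paired correctness:
#             B correct   B wrong
# A correct      n11        n10
# A wrong        n01        n00
table = np.array([
    [np.sum(correct_a & correct_b),  np.sum(correct_a & ~correct_b)],
    [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
])

# Exact McNemar test on the discordant pairs (the off-diagonal cells).
result = mcnemar(table, exact=True)
print(f"accuracy A = {correct_a.mean():.3f}, accuracy B = {correct_b.mean():.3f}")
print(f"McNemar p-value = {result.pvalue:.4f}")
```

The test asks whether one model fixes more of the other's mistakes than vice versa, which is exactly the paired comparison the "our method beats the benchmark by X%" claim glosses over.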