Often when I read ML papers, the authors compare their results against a benchmark (e.g. using RMSE, accuracy, …) and say “our new method improved results by X%”. Nobody runs a significance test to show that the new method Y actually outperforms benchmark Z. Is there a reason why? Especially when you break your results down, e.g. to the analysis of certain classes in object classification, this seems important to me. Or am I overlooking something?

  • UnusualClimberBear@alien.topB

    Old papers used to do this. Yet now that we have decided that NNs are bricks that should be trained leaving no data out, we don’t care about statistical significance anymore. Anyway, the test set is probably partially included in the training set of the foundation model you downloaded. It started with DeepMind and RL, where experiments were very expensive to run (Joelle Pineau had a nice talk about these issues). Yet since the alpha_whatevers are undeniable successes, researchers pursued this path. Now go compute useless confidence intervals when a single training run is worth 100 million in compute… Nah, better to have a bunch of humans rate the outputs.

  • Ambiwlans@alien.topB

    Statistical significance doesn’t make sense when nothing is stochastic. … They test the entirety of the benchmark.

  • bethebunny@alien.topB

    While it’s not super common in academia, it’s actually really useful in industry. I use statistical bootstrapping – Poisson resampling of the input dataset – to train many runs of financial fraud models and estimate the variance of my experiments as a function of sampling bias.

    Having a measure of the variance of your results is critical when you’re deciding whether to ship models whose decisions have direct financial impact :P
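
    (A minimal sketch of the Poisson-resampling idea, with a hypothetical `train_and_score` function standing in for the actual training pipeline: each replicate reweights every row by an independent Poisson(1) draw, and the spread of the resulting metric estimates the variance due to sampling.)

    ```python
    import numpy as np

    def poisson_bootstrap(X, y, train_and_score, n_replicates=20, seed=0):
        """Estimate metric variance via Poisson bootstrap resampling.

        `train_and_score(X, y, sample_weight)` is a hypothetical callable that
        trains a model on the weighted data and returns a scalar metric.
        """
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(n_replicates):
            # Poisson(1) weights approximate resampling the n rows with replacement.
            weights = rng.poisson(lam=1.0, size=len(y))
            scores.append(train_and_score(X, y, sample_weight=weights))
        scores = np.asarray(scores)
        return scores.mean(), scores.std(ddof=1)
    ```

    The Poisson(1) trick avoids materializing each resampled dataset, which matters when the input is large or streamed.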

    • Lanky_Product4249@alien.topB

      Does it actually work? I.e. if you construct a 95% confidence interval with that variance, are your model predictions within the interval 95% of the time?

  • matt_leming@alien.topB

    Statistical significance is best used for establishing group differences. ML is used for individual datapoint classification. If you have 75% accuracy in a reasonably sized dataset, it’s trivial to include a p-value to establish statistical significance, but it may not be impressive by ML standards (depending on the task).
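
    (To illustrate how trivial that p-value is: a minimal sketch assuming a balanced binary task with a 50% chance level and made-up numbers.)

    ```python
    from scipy.stats import binomtest

    n_test = 2000      # hypothetical test-set size
    n_correct = 1500   # 75% accuracy
    chance = 0.5       # chance level for a balanced binary task

    # One-sided test: is the accuracy significantly above chance?
    result = binomtest(n_correct, n_test, p=chance, alternative="greater")
    print(f"accuracy = {n_correct / n_test:.1%}, p-value = {result.pvalue:.2e}")
    ```

    The p-value against chance is vanishingly small, which is exactly the point: clearing chance is easy, while clearing the previous state of the art is what the field actually cares about.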

  • devl82@alien.topB

    They don’t even report whether ANY kind of statistically sane validation method was used when selecting model parameters (usually just a single number is reported), and you expect rigorous statistical significance testing? That.is.bold.

  • __Maximum__@alien.topB

    They used to do this. I remember papers from around 2015 whose performance analyses were very comprehensive; they even provided useful statistics about the datasets they used. Now it’s “we used COCO and evaluated on 2017val. Here is the final number.” Unless the paper is specifically about being better on certain classes, they will only report the averaged percentage.

  • kazza789@alien.topB

    One big reason for this is that there is a difference between prediction and inference. Most machine learning papers are not testing a hypothesis.

    That said - ML definitely does get applied to inference as well, but in those cases the lack of p-values is often one of the lesser complaints.

  • chief167@alien.topB

    A combination of different factors:

    • it is not taught in most self-study programs.
    • therefore most people don’t know 1) that it exists, 2) how to do it, or 3) how to do power calculations
    • since most don’t know it, there is no demand for it
    • it costs compute and resources, as well as human time, so it gets skipped if nobody asks for it
    • there is no standardized approach for ML models. Do you vary only the training run, or also how you partition your dataset? There is no prebuilt sklearn tooling for it either (one partial workaround is sketched below)
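
    (There is no single blessed recipe, but one partial workaround is repeated cross-validation, which at least varies the data partitioning; a minimal sketch on a toy dataset, using the pieces sklearn does ship:)

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # 5-fold CV repeated 10 times with different partitions -> 50 accuracy estimates.
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")

    print(f"accuracy: {scores.mean():.3f} ± {scores.std(ddof=1):.3f} over {len(scores)} folds")
    ```

    Note that folds overlap across repeats, so the 50 scores are not independent; feeding them naively into a t-test understates the variance, which is part of why no standard has emerged.
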
  • yoshiK@alien.topB

    How do you measure “statistical significance” for a benchmark? Run your model 10 times, get the same result each time, and conclude that the variance is 0 and the significance is infinitely many sigma?

    So to get reasonable statistics you would need to split your test set into, say, 10 parts; then you can calculate a mean and a variance. But that is only a reasonable thing to do as long as it is cheap to gather data for the test set (and running the test set is fast).
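
    (A minimal sketch of that split-the-test-set idea, assuming you already have arrays of labels and predictions; the per-chunk accuracies give a crude mean and standard error.)

    ```python
    import numpy as np

    def chunked_accuracy(y_true, y_pred, n_chunks=10, seed=0):
        """Split the test set into n_chunks and summarize per-chunk accuracy."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y_true))   # shuffle so chunks are comparable
        chunks = np.array_split(idx, n_chunks)
        accs = np.array([(y_true[c] == y_pred[c]).mean() for c in chunks])
        # Mean accuracy and its standard error across chunks.
        return accs.mean(), accs.std(ddof=1) / np.sqrt(n_chunks)
    ```

    For plain accuracy on i.i.d. test examples, a binomial confidence interval gives much the same information without any splitting, but the chunked version also works for metrics that are not computed per example.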

  • isparavanje@alien.topB

    You’re right. This is likely one of the reasons why ML has a reproducibility crisis, together with other effects like data leakage. (see: https://reproducible.cs.princeton.edu/)

    Sometimes results are indeed so different that they are obviously statistically significant even by eye, which is uncommon in the natural sciences. Even then, however, the researchers should state clearly that they believe this to be the case and give some evidence.

  • neo_255_0_0@alien.topB

    You’d be surprised how much of academia is so focused on publishing that rigor is not even a priority, let alone reproducibility. This is in part because repeated experiments would require more time and resources, both of which are constrained.

    That is why most of the good tech that can actually be validated is produced by industry.

  • Seankala@alien.topB

    The reason is that most researchers can’t be bothered, since no one pays attention to it anyway. I’m also doubtful about how many researchers even properly understand statistical testing.

    I’d be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.
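
    (What that might look like, sketched with a hypothetical `train_and_evaluate(seed)` function that returns a single test metric; nothing here comes from any specific paper.)

    ```python
    import numpy as np

    def seed_variance(train_and_evaluate, seeds=range(5)):
        """Run the same experiment under several random seeds and summarize the metric.

        `train_and_evaluate(seed)` is a hypothetical function that trains the model
        with the given seed (init, data order, dropout, ...) and returns a test score.
        """
        scores = np.array([train_and_evaluate(seed) for seed in seeds])
        return scores.mean(), scores.std(ddof=1)

    # Hypothetical usage:
    # mean, std = seed_variance(train_and_evaluate, seeds=range(10))
    # print(f"test accuracy: {mean:.3f} ± {std:.3f}")
    ```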

    • Crimsoneer@alien.topB

      Statistical significance was designed in an age when you had 300 observations to represent a population of hundreds of thousands. It is far less meaningful when you move into the big data space.

      • kniglas@alien.topB

        I second that. Additionally, I would say statistics was designed for hypothesis testing. You have an idea, make a priori assumptions (e.g. which variables to look at!), collect sample data, and then want to know whether the findings generalize to the entire population (or: everything and everyone in the world). The underlying idea is to better understand the workings of the world.

        My own experience, as a pretty traditionally trained researcher (i.e. knowing a bit about statistics) leading a group of data scientists, is that the goals really are different. My data scientists try to build a model that works; that is the primary goal. I am trying to understand why something is the way it is, even if that means my (a priori) model doesn’t work.

        The border between “traditional stats” and ML is very fluid. ML uses a lot of stats, and hypothesis-testing research uses a lot of ML these days. Just the underlying motivation might be slightly different.

    • Jurph@alien.topB

      I’d be grateful if a paper ran experiments using 5-10 different random seeds and provided the mean and variance.

      Unfortunately most papers are generated using stochastic grad student descent, where the seed keeps being re-rolled until a SOTA result is achieved.