Hey everyone. I’m a graduate student currently studying machine learning. I’ve had a decent amount of exposure to the field; I’ve already seen many students publish and many students graduate. This is just to say that I have some experience, so I hope I won’t be discounted when I say, with my whole chest: I hate machine learning conferences.

Everybody puts the conferences on a pedestal. The most popular machine learning conferences are a massive lottery, and everyone knows this and complains about it, right? But for most students, your standing in this field is built on this random system. Professors acknowledge the randomness, but many still hold up the students who get publications. Internships and jobs depend on your publication count. Who remembers that job posting from NVIDIA that asked for a minimum of 8 publications at top conferences?

Yet the reviewing system is completely broken. Reviewers have no incentive to give coherent reviews. If they post an incoherent review, reviewers still have no incentive to respond to a rebuttal of that review. Reviewers have no incentive to update their score. Reviewers often have an incentive to give negative reviews, since many reviewers are submitting papers in the same area they are reviewing. Reviewers have an incentive to collude, because this can actually help their own papers.

The same goes for area chairs (ACs): they have no incentive to do anything beyond simply thresholding scores.

I have had decent reviewers, both positive and negative, but (in my experience) they are the minority. Over and over again I see a paper that is more or less as good as many papers before it, yet whether it squeaks in, gets an oral, or gets rejected seems to depend entirely on luck. I have seen bad papers get in with fabricated data or other genuine flaws because the reviewers were positive and inattentive. I have seen good papers get rejected for poor or even straight-up incorrect reasons that bad, negative reviewers put forth and ACs follow blindly.

Can we keep talking about it? We have all seen these complaints many times. I’m sure that for the vast majority of users in this sub, nothing I said here is new. But I keep seeing the same things happen year after year, and the complaints are always scattered across online spaces and soon forgotten. Can we keep complaining and talking about potential solutions? For example:

  • Should reviewers have public statistics tied to their (anonymous) reviewer identity? (A rough sketch of what such a record could look like follows this list.)
  • Should reviewers have their identities made public after reviewing?
  • Should institutions value reviewer awards more? After all, being able to review a project well should be a useful skill.
  • Should institutions focus less on a small handful of top conferences?
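
To make the first bullet concrete: here is a minimal sketch, in Python, of the kind of public record a review platform could attach to a persistent but anonymous reviewer pseudonym. Every name and number below (the fields, the scoring formula, the 500-word cap) is my own invention for illustration; no existing venue implements this.

```python
from dataclasses import dataclass

@dataclass
class ReviewerRecord:
    # All fields are hypothetical -- one guess at what a conference
    # platform could publish under a stable, anonymous pseudonym.
    pseudonym: str                 # e.g. "Reviewer-7f3a", persistent across years
    reviews_submitted: int         # total reviews written under this pseudonym
    rebuttal_response_rate: float  # fraction of rebuttals the reviewer replied to
    score_update_rate: float       # fraction of reviews with a post-rebuttal score change
    avg_review_length: int         # words per review, a crude proxy for effort

def engagement_score(r: ReviewerRecord) -> float:
    """A deliberately naive summary score: reward replying to rebuttals
    and writing substantive reviews. The weights are arbitrary."""
    length_factor = min(r.avg_review_length / 500, 1.0)  # saturate at 500 words
    return 0.5 * r.rebuttal_response_rate + 0.5 * length_factor

# A reviewer who ignores most rebuttals and writes short reviews scores low.
r = ReviewerRecord("Reviewer-7f3a", reviews_submitted=12,
                   rebuttal_response_rate=0.25, score_update_rate=0.08,
                   avg_review_length=180)
print(f"{r.pseudonym}: engagement {engagement_score(r):.2f}")
```

The specific formula doesn’t matter; the point is that responsiveness and effort could be aggregated and made visible without deanonymizing anyone.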

A quick qualification. This is not to discount people who have done well in this system. Certainly it is possible that good work met good reviewers and was rewarded accordingly. This is a great thing when it happens. My complaint is that whether this happens or not seems completely random. I’m getting repetitive, but we’ve all seen good work meet bad reviewers and bad work meet good reviewers…

All my gratitude to people who have been successful at machine learning conferences but are still willing to entertain the notion that the system is broken. Unfortunately, some people take complaints like this as attacks on their own success. This NeurIPS cycle, I remember reading an area chair complaining unceasingly about reviewer complaints: reviews are almost always fair, rebuttals are practically useless, authors are always whining… They are reasonably active on academic Twitter, so there wasn’t much pushback. I searched their Twitter history and found plenty of author-side complaints about reviewers being dishonest or lazy… go figure.

  • lifesthateasy@alien.topB · 10 months ago

    Reddit posts are problematic.

    I’ve been an active participant in the Machine Learning subreddit for quite some time now, and lately, I’ve noticed a trend that’s been concerning. While the subreddit serves as an incredible hub for knowledge sharing and discussions around ML, there’s a growing issue with the quality and reliability of some posts.

    Numerous submissions lack proper context, thorough explanations, or credible sources, making it challenging for newcomers and even seasoned practitioners to discern accurate information from misinformation. This trend isn’t just about incomplete explanations; it also extends to the validity of claims made in these posts.

    It’s important to acknowledge that not all content falls into this category—there are incredible insights shared regularly. However, the influx of hasty, ill-explained, or unverified information is diluting the overall value the subreddit offers to the community.

    In a field as intricate as machine learning, accuracy and credibility are paramount. Misleading or incomplete information can misguide newcomers and even experts, leading to misconceptions or wasted efforts in pursuit of understanding or implementing certain techniques.

    Thus, after observing this trend over some time, I firmly believe that there is indeed a problematic issue with the quality and reliability of several Reddit posts within the Machine Learning subreddit. It’s a plea to the community to uphold standards of clarity, depth, and substantiation in discussions and submissions to maintain the subreddit’s integrity and credibility.

    • MLConfThrowaway@alien.topOPB · 10 months ago

      These claims are very widespread. You can check previous conference posts in this subreddit to find people saying similar things. Every year there is some drama at some conference… plagiarism that slips past review (CVPR 2022), controversial decisions on papers (the last few ICML Best Paper awards). Complaints about the reviewing process are the reason venues like TMLR exist.

      My point is that there are already years and years of evidence that the reviewing system is broken. How much longer are junior researchers supposed to sit on their hands and act like it isn’t happening?

      • lifesthateasy@alien.topB · 10 months ago

        Absolutely, there have been instances of controversy and concerns regarding the reviewing process at various conferences. However, it’s crucial to note that while these incidents do occur, they might not necessarily represent the entire system. Many conferences continuously strive to improve their review processes and address these issues. While acknowledging these problems is essential, it’s also important to engage constructively in efforts to make the system better, perhaps by actively participating in discussions or proposing reforms, rather than solely highlighting the flaws.

        • MLConfThrowaway@alien.topOPB · 10 months ago

          “Perhaps by actively participating in discussions or proposing reforms”

          I proposed some solutions already; please join in the discussion if you want to contribute. I’m guessing from the slightly nonsensical and overly verbose responses that these comments are LLM-generated.

          • lifesthateasy@alien.topB · 10 months ago

            The comments might differ in style, but they do address the issue. Engaging in thoughtful discourse can enrich conversations, even if the perspectives expressed aren’t in alignment with one’s own.

        • Terrible_Button_1763@alien.topB · 10 months ago

          If you’ve been in the field long enough, you’d recognize that dissenting voices have been marginalized… several times.

          • lifesthateasy@alien.topB · 10 months ago

            While dissenting voices may have faced challenges historically, acknowledging their existence doesn’t discount the progress made in recognizing diverse perspectives over time.