Hey everyone. I’m a graduate student currently studying machine learning. I’ve had a decent amount of exposure to the field; I’ve already seen many students publish and many graduate. This is just to say that I have some experience, so I hope I won’t be discounted when I say, with my whole chest: I hate machine learning conferences.

Everybody puts the conferences on a pedestal. The most popular machine learning conferences are a massive lottery, and everyone knows this and complains about it, right? But for most students, your standing in this field is built on this random system. Professors acknowledge the randomness, but (many) still hold up the students who get publications. Internships and jobs depend on your publication count. Who remembers that job posting from NVIDIA that asked for a minimum of 8 publications at top conferences?

Yet the reviewing system is completely broken. Reviewers have no incentive to give coherent reviews. If they post an incoherent review, they still have no incentive to respond to a rebuttal of it. Reviewers have no incentive to update their scores. Reviewers often have an incentive to give negative reviews, since many of them are submitting papers in the same area they are reviewing. Reviewers have an incentive to collude, because this can actually help their own papers.

The same goes for ACs: they have no incentive to do anything beyond simply thresholding scores.

I have had decent reviewers, both positive and negative, but (in my experience) they are the minority. Over and over again I see a paper that is more or less as good as many papers before it, yet whether it squeaks in, gets an oral, or gets rejected seems to depend entirely on luck. I have seen bad papers get in with faked data or other real faults because the reviewers were positive and inattentive. I have seen good papers get rejected for poor or even straight-up incorrect reasons that bad, negative reviewers put forth and ACs followed blindly.

Can we keep talking about it? We have all seen these complaints many times. I’m sure that to the vast majority of users in this sub, nothing I said here is new. But I keep seeing the same things happen year after year, and the complaints are always scattered across online spaces and soon forgotten. Can we keep complaining and talking about potential solutions? For example:

  • Should reviewers have public statistics tied to their (anonymous) reviewer identity?
  • Should reviewers have their identities made public after reviewing?
  • Should institutions value reviewer awards more? After all, being able to review a project well should be a useful skill.
  • Should institutions focus less on a small handful of top conferences?

A quick qualification: this is not to discount people who have done well in this system. Certainly it is possible that good work met good reviewers and was rewarded accordingly. That is a great thing when it happens. My complaint is that whether it happens or not seems completely random. I’m getting repetitive, but we’ve all seen good work meet bad reviewers and bad work meet good reviewers…

All my gratitude to people who have been successful with machine learning conferences but are still willing to entertain the notion that the system is broken. Unfortunately, some people take complaints like this as if they were attacks on their own success. This NeurIPS cycle, I remember reading an area chair complain unceasingly about authors complaining about reviewers: reviews are almost always fair, rebuttals are practically useless, authors are always whining… They are reasonably active on academic Twitter, so there wasn’t too much pushback. I searched their Twitter history and found plenty of author-side complaints about reviewers being dishonest or lazy… go figure.

  • Terrible_Button_1763@alien.topB · 10 months ago

    The problem, from a theoretical perspective, is that many of the things you recommend might have unintended and, in fact, opposite effects.

    • Should reviewers have public statistics tied to their (anonymous) reviewer identity?
      • We do have public statistics tied to our actual profile (our name). You will run into the same reviewers again and again within the peer review process. You’ll remember the person from a few years ago who was totally unreasonable on the same paper as you. You’ll remember that one reviewer who made a brilliant point that the meta-reviewer overrode, and who turned out to be right. Yes, you’ll also run into your former advisors in the peer review process. Try raking a paper over the coals in review and finding out later that your former advisor was a coauthor. Then, some time later, you and your advisor are reviewing the same paper side by side, and you have to decide whether you agree or disagree with them.
    • Should reviewers have their identities made public after reviewing?
      • Not sure. This might be good for senior reviewers who do take their job very seriously. But junior reviewers without much experience mess up all the time. Imagine having a social media post from when you were 14 be public, forever, and un-deletable. That’s what it would be like for a junior reviewer to mess up and have it be public. Peer review, like much else in academia, is an apprenticeship. You learn by doing, and that process requires an element of psychological safety that anonymity can provide.
    • Should institutions value reviewer awards more? After all, being able to review a project well should be a useful skill.
      • I’d love this. We always need more people willing to review well and dispassionately.
    • Should institutions focus less on a small handful of top conferences?
      • Institutions do, everyone does. Top conference echo chambers are only for those who think the world revolves around ICML/NeurIPS/ICLR. I’ll let you in on a little secret: everyone at COLT laughs at your papers… don’t even get me started on what people in STOC think about your papers.
    • MLConfThrowaway@alien.topOPB · 10 months ago

      I appreciate the response! I’m afraid I don’t understand what you mean when you say we have public statistics tied to our profile. Currently reviews are tied to an anonymized name, and I’m thinking we should be able to link the name to a past history of review scores, meta-reviews, etc.

      I’ll let you in on a little secret: everyone at COLT laughs at your papers… don’t even get me started on what people in STOC think about your papers.

      I’ve heard this before! I never worked on anything that could be submitted to COLT or STOC. Are the review processes different?

      • Terrible_Button_1763@alien.topB · 10 months ago

        we have public statistics tied to our profile

        I meant that if you’re reviewing for a conference, then all the reviewers know each other’s names. This adds up over time, as you (typically) stay in the same or similar areas through many years of your career. That way, your reviewing personality and thinking become something that everyone informally keeps track of. I agree that ideally this should also be public as some sort of statistical measure for all to see… however, this also runs into the complication of keeping the reviewing apprenticeship safe for newer and less experienced reviewers.

        I’ve heard this before! I never worked on anything that could be submitted to COLT or STOC. Are the review processes different?

        Well, it’s hard for any one person to speak on behalf of two entire conferences and two entire sub-communities. What I can say is that the quality, impact, and rigor of papers at COLT or STOC are far higher. They’re also incredibly challenging to publish in. Even towards the final years of your PhD, you’ll still be mostly supervised and mostly learning the relatively low-level details of doing theory work. It simply takes forever to build the mental abstractions that let you start doing theory work. Then the real challenge becomes: which theory problems do you want to solve? Which problems are worth solving? What do people in these communities want to see, and what would they be surprised by?