• 1 Post
  • 10 Comments
Joined 1 year ago
Cake day: November 25th, 2023

  • I appreciate the response! I’m afraid I don’t understand what you mean when you say we have public statistics tied to our profile. Currently reviews are tied to an anonymized name, and I’m thinking we should be able to link that name to a history of review scores, meta-reviews, etc.

    I’ll let you in on a little secret: everyone at COLT laughs at your papers… don’t even get me started on what people in STOC think about your papers.

    I’ve heard this before! I’ve never worked on anything that could be submitted to COLT or STOC; are the review processes different?



  • I agree with many of these points.

    Making the identities of the reviewers public afterwards would be one way, but I think it creates other problems (such as breeding animosity).

    Along these lines, what if we gave reviewers public statistics on OpenReview, while keeping everything else anonymous? We could see whether a reviewer tends to reject or accept papers far more often than average. If we add “good reviewer” or “bad reviewer” badges as you suggested, those would follow their review history too. That could be a way of enforcing accountability while preserving privacy (I think?).





  • That’s really good of you, trying to give thoughtful reviews. I agree: the review process shouldn’t have to depend on PhD students volunteering their time uncomplainingly.

    I think the solution is to get rid of reviewing on a voluntary basis and stop conferences mooching off early-stage researchers.

    Curious to hear more about this. Do you think conferences should have editors instead, like journals? Among other things, I’m concerned about how that would scale; like you mentioned, there are already too many papers and not enough reviewers. (Or do you think it would scale better with a proper incentive, like payment?)



  • These claims are very widespread. You can check previous conference posts in this subreddit to find people saying similar things. Every year there is drama at some conference: plagiarism that slips past review (CVPR 2022), controversial decisions on papers (the last few ICML Best Paper awards). Complaints about the reviewing process are the reason venues like TMLR exist.

    My point is that there are already years and years of evidence that the reviewing system is broken. How much longer are junior researchers supposed to sit on their hands and act like it isn’t happening?


  • I definitely agree with everything you say. It’s unfortunate… I know the reviewers and the people who want to move on from academia are not at fault, although they are often made to carry the extra burden of making the system fairer.

    • Should institutions reward reviewer awards more? After all, being able to review a project well should be a useful skill.

    What do you think of this suggestion, by the way? If industry (and everyone, really) recognized reviewing as a valuable skill, it might slowly create more incentive to write good reviews.