• 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: November 23rd, 2023



  • we have public statistics tied to our profile

    I meant that if you’re reviewing for a conference, then all the reviewers know each other’s names. This adds up over time, since you (typically) stay in the same or similar areas for many years of your career. That way, your reviewing personality and thinking become something that everyone informally keeps track of. I agree that ideally this should also be public as some sort of statistical measure for all to see… however, this raises the complication that it could make the reviewing apprenticeship unsafe for newer and less experienced reviewers.

    I’ve heard this before! I’ve never worked on anything that could be submitted to COLT or STOC; are the review processes different?

    Well, it’s hard for any one person to speak on behalf of two entire conferences and two entire sub-communities. What I can say is that the quality, impact, and rigor of papers at COLT or STOC are far higher. They’re also incredibly challenging to publish in. Even toward the final years of your PhD, you’ll still be mostly supervised and mostly learning relatively low-level details of doing theory work. It simply takes a very long time to build the mental abstractions you need before you can start doing theory work. Then the real challenge becomes: which theory problems do you want to solve? Which problems are worth solving? What do people in these communities want to see, and what would surprise them?


  • The problem, from a theoretical perspective, is that many of the things you recommend might have unintended, and in fact opposite, effects.

    • Should reviewers have public statistics tied to their (anonymous) reviewer identity?
      • We do have public statistics tied to our actual profile (our name). You will run into the same reviewers again and again within the peer review process. You’ll remember the person from a few years ago who was totally unreasonable on the same paper as you. You’ll remember the reviewer who made a brilliant point that the meta-reviewer overrode, and who turned out to be right. Yes, you’ll also run into your former advisors in the peer review process. Try raking a paper over the coals, only to find out later that your former advisor was a coauthor. Then, some time later, you and your advisor are reviewing the same paper side by side, and you have to decide whether you agree or disagree with them.
    • Should reviewers have their identities be made public after reviewing?
      • Not sure. This might be good for senior reviewers who do take their job very seriously. But junior reviewers without much experience mess up all the time. Imagine having a social media post from when you were 14 be public, forever, and undeletable. That’s what it would be like for a junior reviewer to mess up and have it be public. Peer review, like everything else in academia, is an apprenticeship. You learn by doing, and that process requires an element of psychological safety that anonymity can provide.
    • Should institutions reward reviewer awards more? After all, being able to review a project well should be a useful skill.
      • I’d love this. We always need more people willing to review well and dispassionately.
    • Should institutions focus less on a small handful of top conferences?
      • Institutions do; everyone does. Top-conference echo chambers are only for those who think the world revolves around ICML/NeurIPS/ICLR. I’ll let you in on a little secret: everyone at COLT laughs at your papers… don’t even get me started on what people at STOC think about your papers.

  • The interesting challenge is trying to figure out how the solution gets to 1/4 instead of 1/2. In Bayesian thinking, you have the prior and the posterior. The prior (before you see the evidence that a = 1, b = 1, and c = 0) comes from the K column by itself: P(K = 1) is 1/2, since there are 4 ones and 4 zeros.

    Now the posterior is evaluated with respect to the prior. In Naive Bayes, the pieces of evidence are treated as independent of each other (naively), given the class. By Bayes’ rule, P(K = 1 | a = 1 and b = 1 and c = 0) = P(K = 1) P(a = 1 and b = 1 and c = 0 | K = 1) / P(a = 1 and b = 1 and c = 0). The numerator simplifies to 1/2 * P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1) = 1/2 * 1/2 * 1/4 * 1/2 = 1/32.

    The denominator is, again, the challenging part. If you calculate it the way you should (non-naively), it equals the true joint P(a = 1 and b = 1 and c = 0). But the problem is that the posterior probabilities over K only sum to 1 if you compute everything non-naively (i.e., *not* assuming P(a = 1 and b = 1 and c = 0 | K = 1) = P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1)). With naive numerators and the true joint in the denominator, the posteriors no longer add up to 1.

    The way the solution calculates it sidesteps this issue by expressing P(a = 1 and b = 1 and c = 0) in a form that is amenable to Naive Bayes. Think about this further.
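
    A minimal sketch of this calculation in Python, for concreteness. The prior and the K = 1 conditionals are the ones quoted above; the K = 0 conditionals are hypothetical placeholder values (the original table isn’t reproduced here), chosen so the posterior comes out to the 1/4 mentioned earlier. The key point is that the denominator is computed as the sum of the naive numerators over both values of K, which is what makes the posteriors sum to 1.

```python
# Naive Bayes posterior for the example discussed above.
# Priors: P(K = 1) = P(K = 0) = 1/2.
priors = {1: 0.5, 0: 0.5}

# Per-feature conditionals P(a = 1 | K), P(b = 1 | K), P(c = 0 | K).
# K = 1 values come from the thread (1/2, 1/4, 1/2);
# K = 0 values are hypothetical placeholders for illustration.
likelihoods = {
    1: [1 / 2, 1 / 4, 1 / 2],
    0: [1 / 2, 3 / 4, 1 / 2],
}

def naive_numerator(k):
    """P(K = k) times the product of the (naive) per-feature likelihoods."""
    result = priors[k]
    for p in likelihoods[k]:
        result *= p
    return result

numerators = {k: naive_numerator(k) for k in priors}

# The denominator that is "amenable to Naive Bayes": sum the naive
# numerators over K, rather than using the true joint P(a, b, c).
denominator = sum(numerators.values())

posteriors = {k: numerators[k] / denominator for k in priors}
print(posteriors)  # with these placeholder values: {1: 0.25, 0: 0.75}
```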