we have public statistics tied to our profile
I meant that if you’re reviewing for a conference, then all the reviewers know each other’s names. This adds up over time, since you (typically) stay in the same or similar areas through many years of your career. That way, your reviewing personality and thinking are something that everyone informally keeps track of. I agree that ideally this should also be public as some sort of statistical measure for all to see… however, this also runs into the complicated issue of keeping the reviewing apprenticeship safe for newer and less experienced reviewers.
I’ve heard this before! I’ve never worked on anything that could be submitted to COLT or STOC; are the review processes different?
Well, it’s hard for any one person to speak on behalf of two entire conferences and two entire sub-communities. What I can say is that the quality, impact, and rigor of papers at COLT or STOC are far higher. They’re also incredibly challenging to publish in. Even toward the final years of your PhD, you’ll still be mostly supervised and mostly learning the relatively low-level details of doing theory work. It simply takes forever to build the mental abstractions you need before you can start doing theory work on your own. Then the real challenge becomes: which theory problems do you want to solve? Which problems are worth solving? What do people in these communities want to see, and what would they be surprised by?
The problem, from a theoretical perspective, is that many of the things you recommend might have unintended, and in fact opposite, effects.
I agree; that’s why we should recognize the OP’s diverse perspective, no?
If you’ve been in the field long enough, you’d recognize that dissenting voices have been marginalized… several times.
I was wondering when this topic would inevitably show up after the ICLR rebuttal period closed. 48 hours, it seems, is the right amount of time to wait.
Pull up a chair, pour yourself a drink. Let’s commiserate on our collective misery.
I rate your answer 🌶️🌶️🌶️🌶️🌶️ / this dumpster fire.
Save it for your next submission, friend.
That’s not what the question is asking. And that’s not Bayes’ rule. The denominator is not even calculating P(Y) under Naive Bayes.
Hmm, maybe machine learning is not just import tensorflow/pytorch/llm.
At the very least your calculation does not agree with your formula of P(X|Y) = P(X,Y)/P(Y).
How is the numerator a calculation of P(X, Y)? [0.5 * 0.25 * 0.5] is P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1), which under the Naive Bayes assumption is P(X|Y), not P(X, Y).
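To make the distinction concrete, here is a minimal worked step using the numbers quoted above and the same naive factorization:

$$
\begin{aligned}
P(X \mid K{=}1) &= P(a{=}1 \mid K{=}1)\,P(b{=}1 \mid K{=}1)\,P(c{=}0 \mid K{=}1) = 0.5 \times 0.25 \times 0.5 = 0.0625,\\
P(X, K{=}1) &= P(K{=}1)\,P(X \mid K{=}1) = 0.5 \times 0.0625 = 0.03125.
\end{aligned}
$$

The extra factor of P(K = 1) is exactly what separates the joint from the conditional.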
The interesting challenge is trying to figure out how you solved the problem to get 1/4 instead of 1/2. In Bayesian thinking, you have the prior and the posterior. The prior (before you see the evidence that a = 1, b = 1, and c = 0) is the K column by itself. P(K = 1) is 1/2, as there are 4 ones and 4 zeros.
Now the posterior is evaluated with respect to the prior. In Naive Bayes, the pieces of evidence are viewed independently (naively) of each other. So P(K = 1 | a = 1 and b = 1 and c = 0) expands, by Bayes’ rule, into P(K = 1) P(a = 1 and b = 1 and c = 0 | K = 1) / P(a = 1 and b = 1 and c = 0). The numerator then simplifies (naively) to 1/2 * P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1) = 1/2 * 1/2 * 1/4 * 1/2.
The denominator is, again, the tricky part. If you calculate it the way you “should” (non-naively), it equals the true P(a = 1 and b = 1 and c = 0). But the problem is that the posteriors over K will then no longer sum to 1, because the numerator was computed naively (i.e., assuming P(a = 1 and b = 1 and c = 0 | K = 1) = P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1)) while the denominator was not.
The way the solution calculates it sidesteps this issue by expressing P(a = 1 and b = 1 and c = 0) in a way that is amenable to the Naive Bayes assumption. Think about this further.
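If it helps to see that last point mechanically, here is a minimal sketch of the normalization. The K = 1 conditionals and the 1/2 prior are the numbers quoted above; the K = 0 conditionals are made-up placeholders, since the full table isn’t reproduced in this thread:

```python
import math

# Minimal Naive Bayes posterior sketch for the example above.
# K = 1 numbers are quoted in the thread; K = 0 numbers are hypothetical.
priors = {1: 0.5, 0: 0.5}                       # P(K = k)
cond = {                                        # P(evidence | K = k)
    1: {"a=1": 0.5, "b=1": 0.25, "c=0": 0.5},   # from the thread
    0: {"a=1": 0.5, "b=1": 0.5, "c=0": 0.5},    # placeholder values
}
evidence = ["a=1", "b=1", "c=0"]

# Naive joint for each class: P(K = k) * prod_i P(x_i | K = k)
naive_joint = {
    k: priors[k] * math.prod(cond[k][e] for e in evidence) for k in priors
}

# Denominator: P(a = 1, b = 1, c = 0) expanded over K with the *same*
# independence assumption that was used in the numerator.
denominator = sum(naive_joint.values())

posterior = {k: naive_joint[k] / denominator for k in naive_joint}
print(posterior)  # the posteriors over K sum to 1 by construction
```

Because the denominator is built from the same naive factorization as the numerators, the posteriors are guaranteed to normalize; plugging in the non-naive P(a = 1, b = 1, c = 0) instead would not give that guarantee.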
1: Can we have a precise definition of what AI is before we start this again? I’m tired of the working definition of “AI is whatever scared me last night.”
2: I agree we should be concerned about new technologies.
The GCP and Azure UIs and overall usability seem better than AWS’s.
No idea whether the current atmosphere in /r/machinelearning is temporary or permanent.
It could be the case that after a year or so the newer people will trickle out, leaving it to the usual deep learning hype train, with a good post once a month or so.