At the very least your calculation does not agree with your formula of P(X|Y) = P(X,Y)/P(Y).
How is the numerator a calculation of P(X,Y)? [0.5 * 0.25 * 0.5] is P(a=1|K=1) * P(b=1|K=1) * P(c=0|K=1), which under the Naive Bayes assumption is P(X|Y), not P(X,Y).
P(K=1) = 1/2
P(a=1|K=1) = P(a=1,K=1)/P(K=1) = (1/4)/(1/2) = 1/2
P(b=1|K=1) = P(b=1,K=1)/P(K=1) = (1/8)/(1/2) = 1/4
P(c=0|K=1) = P(c=0,K=1)/P(K=1) = (1/4)/(1/2) = 1/2
P(a=1, b=1, c=0, K=1) = 0
P(a=1, b=1, c=0, K=0) = 1/8
[0.5 * 0.25 * 0.5] / (0 + 1/8) = (1/16) / (1/8) = 1/2
For conditionals, convert them into joints and priors first, and THEN use the table to count instances out of N samples.
P(X|Y) = P(X,Y)/P(Y)
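A minimal Python sketch of that recipe, assuming a hypothetical (a, b, c, K) table that I made up to be consistent with the numbers quoted above (the OP's actual table isn't reproduced here):

```python
from fractions import Fraction

# Hypothetical rows (a, b, c, K): made-up data chosen to match the
# probabilities quoted above, NOT the OP's actual table.
samples = [
    (1, 0, 0, 1), (1, 1, 1, 1), (0, 0, 0, 1), (0, 0, 1, 1),
    (1, 1, 0, 0), (0, 0, 1, 0), (1, 0, 1, 0), (0, 1, 1, 0),
]

def p(pred):
    """Estimate P(event) by counting matching rows out of N samples."""
    return Fraction(sum(1 for r in samples if pred(r)), len(samples))

# Conditionals via joints and priors: P(X|Y) = P(X,Y)/P(Y)
prior = p(lambda r: r[3] == 1)                       # P(K=1) = 1/2
print(p(lambda r: r[0] == 1 and r[3] == 1) / prior)  # P(a=1|K=1) = 1/2
print(p(lambda r: r[1] == 1 and r[3] == 1) / prior)  # P(b=1|K=1) = 1/4
print(p(lambda r: r[2] == 0 and r[3] == 1) / prior)  # P(c=0|K=1) = 1/2
```

Fraction keeps everything exact, so the printed conditionals match the workings above.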
:)
Uh, not sure what Fubini’s theorem is; I just use the equivalence P(X|Y)P(Y) = P(Y|X)P(X) = P(X,Y)
That’s not what the question is asking. And that’s not Bayes’ rule. The denominator is not even calculating P(Y) under Naive Bayes.
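To make that concrete, here's a hedged sketch of what a fully factorized denominator would look like, reusing the same hypothetical table as the earlier snippet (the K=0 conditionals come from that made-up data, not from the OP's table, so the final number is illustrative only):

```python
from fractions import Fraction

# Same hypothetical (a, b, c, K) table as the earlier sketch; the K=0
# rows in particular are invented.
samples = [
    (1, 0, 0, 1), (1, 1, 1, 1), (0, 0, 0, 1), (0, 0, 1, 1),
    (1, 1, 0, 0), (0, 0, 1, 0), (1, 0, 1, 0), (0, 1, 1, 0),
]

def p(pred):
    return Fraction(sum(1 for r in samples if pred(r)), len(samples))

# The query a=1, b=1, c=0 as one test per feature.
feats = [lambda r: r[0] == 1, lambda r: r[1] == 1, lambda r: r[2] == 0]

def nb_joint(k):
    """Naive Bayes joint: P(K=k) * prod_i P(x_i | K=k)."""
    prior = p(lambda r: r[3] == k)
    out = prior
    for f in feats:
        out *= p(lambda r, f=f: f(r) and r[3] == k) / prior
    return out

# Under Naive Bayes the evidence P(Y) is the sum of the SAME
# factorized joints over both classes, not the raw empirical joints.
print(nb_joint(1) / (nb_joint(0) + nb_joint(1)))
```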
Hmm, maybe machine learning is not just import tensorflow/pytorch/llm.
“Features are independent when conditioned on the dependent variable” is pretty much all I know about Naive Bayes; I personally don’t care for the semantics.
Also, the last time I used Naive Bayes was in grad school 7 years ago, so things are fuzzy, sorry
Save it for your next submission, friend.
Oh wait, I made a typo. OP, ignore my answer 😅
It should be (1/32)/(2/32)
Seems like you dropped one of the 1/2s from the numerator. Maybe I’m missing something, but the answer looks like 1/4 to me, as your workings show
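A quick exact-arithmetic check of that, plugging in the joints from the workings above (the final 1/2 in the numerator is the prior P(K=1)):

```python
from fractions import Fraction as F

# Numerator: P(a=1|K=1) * P(b=1|K=1) * P(c=0|K=1) * P(K=1)
num = F(1, 2) * F(1, 4) * F(1, 2) * F(1, 2)   # = 1/32
# Denominator: P(a=1,b=1,c=0,K=1) + P(a=1,b=1,c=0,K=0)
den = F(0) + F(1, 8)                          # = 4/32
print(num / den)                              # prints 1/4
```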
I rate your answer 🌶️🌶️🌶️🌶️🌶️ / this dumpster fire.