The interesting challenge is figuring out how the solution gets 1/4 instead of 1/2. In Bayesian thinking, you have a prior and a posterior. The prior (before you see the evidence that a = 1, b = 1, and c = 0) comes from the K column by itself: P(K = 1) is 1/2, since there are 4 ones and 4 zeros.
Now the posterior is evaluated with respect to the prior. In Naive Bayes, the pieces of evidence are treated as independent of each other (naively) given the class. So P(K = 1 | a = 1 and b = 1 and c = 0) is rewritten by Bayes' rule as P(K = 1) P(a = 1 and b = 1 and c = 0 | K = 1) / P(a = 1 and b = 1 and c = 0), and the numerator simplifies to P(K = 1) * P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1) = 1/2 * 1/2 * 1/4 * 1/2 = 1/32.
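Written out as one display (the conditionals here are just the ones quoted in the solution):

$$P(K = 1 \mid a = 1, b = 1, c = 0) = \frac{P(K = 1)\,P(a = 1 \mid K = 1)\,P(b = 1 \mid K = 1)\,P(c = 0 \mid K = 1)}{P(a = 1, b = 1, c = 0)} = \frac{\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{4} \cdot \frac{1}{2}}{P(a = 1, b = 1, c = 0)} = \frac{1/32}{P(a = 1, b = 1, c = 0)}$$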
The denominator is, again, the challenging part. If you calculated it the way you "should" (non-naively), it would be the true joint probability P(a = 1 and b = 1 and c = 0). But the problem is that the true denominator only makes the posteriors over K sum to 1 when the conditional in the numerator is also computed non-naively (i.e., *not* assuming P(a = 1 and b = 1 and c = 0 | K = 1) = P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1)). Since the numerator uses the naive factorization, the denominator has to be built from the same factorization for everything to stay consistent.
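Concretely, the denominator that keeps the posteriors summing to 1 is the sum of the naive numerators over both classes:

$$P(a = 1, b = 1, c = 0) \approx \sum_{k \in \{0,1\}} P(K = k)\,P(a = 1 \mid K = k)\,P(b = 1 \mid K = k)\,P(c = 0 \mid K = k)$$

The table isn't reproduced here, but the stated answer pins down the K = 0 term: if 1/32 divided by the denominator equals 1/4, the denominator is 1/8, so the K = 0 term must be 1/8 - 1/32 = 3/32.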
The way the solution calculates it sidesteps this issue by expressing P(a = 1 and b = 1 and c = 0) in a form that is amenable to Naive Bayes. Think about this further.
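Here is a minimal sketch of that normalization trick. The class-1 conditionals are the ones quoted above; the class-0 conditionals are hypothetical placeholders chosen so their product is 3/16, which is the value the stated answer of 1/4 implies (the actual table would supply these counts):

```python
from math import prod

# Class priors: 4 ones and 4 zeros in the K column.
prior = {1: 0.5, 0: 0.5}

# Per-feature conditionals for the observed evidence a=1, b=1, c=0.
# The K=1 row comes from the solution above; the K=0 row is
# hypothetical, chosen so its product is 3/16.
likelihoods = {
    1: [1/2, 1/4, 1/2],  # P(a=1|K=1), P(b=1|K=1), P(c=0|K=1)
    0: [3/4, 1/2, 1/2],  # hypothetical values, product = 3/16
}

# Naive numerators: prior times the product of the per-feature conditionals.
numerators = {k: prior[k] * prod(likelihoods[k]) for k in prior}

# Normalize by the sum of the naive numerators, so the posteriors
# are guaranteed to sum to 1 over the classes.
evidence = sum(numerators.values())
posterior = {k: numerators[k] / evidence for k in numerators}

print(posterior[1])  # 0.25, i.e. the 1/4 from the solution
print(posterior[0])  # 0.75
```

Dividing by the sum of the naive numerators, rather than by the true joint probability, is exactly what makes posterior[1] + posterior[0] come out to 1, and it is the move the solution is exploiting.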