• Terrible_Button_1763@alien.top

    The interesting challenge is figuring out how the solution gets 1/4 instead of 1/2. In Bayesian thinking you have a prior and a posterior. The prior (before you see the evidence that a = 1, b = 1, and c = 0) is the K column by itself: P(K = 1) = 1/2, since there are 4 ones and 4 zeros.
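    As a quick sanity check of the prior (a minimal sketch; the row order below is made up, only the 4-ones/4-zeros split comes from the table):

    ```python
    from fractions import Fraction

    # Hypothetical row order; the table only tells us K has 4 ones and 4 zeros.
    K = [1, 1, 1, 1, 0, 0, 0, 0]

    prior_k1 = Fraction(sum(K), len(K))  # sum counts the ones
    print(prior_k1)  # 1/2
    ```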

    Now the posterior is evaluated with respect to the prior. In Naive Bayes, the pieces of evidence are treated as independent (naively) of one another, conditioned on the class. By Bayes' rule, P(K = 1 | a = 1 and b = 1 and c = 0) = P(K = 1) P(a = 1 and b = 1 and c = 0 | K = 1) / P(a = 1 and b = 1 and c = 0). The numerator factorizes naively to 1/2 * P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1) = 1/2 * 1/2 * 1/4 * 1/2 = 1/32.
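    In exact arithmetic, the numerator works out like this (a minimal check using only the conditionals quoted in this thread):

    ```python
    from fractions import Fraction

    prior = Fraction(1, 2)   # P(K = 1)
    p_a = Fraction(1, 2)     # P(a = 1 | K = 1)
    p_b = Fraction(1, 4)     # P(b = 1 | K = 1)
    p_c = Fraction(1, 2)     # P(c = 0 | K = 1)

    numerator = prior * p_a * p_b * p_c
    print(numerator)  # 1/32
    ```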

    The denominator is, again, the challenging part. If you calculate it the way you should (non-naively), it equals the exact P(a = 1 and b = 1 and c = 0) read off the joint table. But the posterior over K only sums to 1 if the numerator and denominator are computed consistently: once the numerator is naively factorized (i.e., assuming P(a = 1 and b = 1 and c = 0 | K = 1) = P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1)), the denominator has to be expanded the same way, as a sum of such factorized terms over K = 1 and K = 0, rather than taken from the exact joint.

    The way the solution calculates it sidesteps this issue by expressing P(a = 1 and b = 1 and c = 0) in a form amenable to Naive Bayes, i.e., as that sum of naively factorized terms. Think about this further; see the sketch below for the consistent computation.
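    Here is roughly what that looks like (a minimal sketch, not the original solution; the K = 1 conditionals are the ones quoted in this thread, while the K = 0 conditionals are made-up placeholders since the thread never states them):

    ```python
    from fractions import Fraction

    F = Fraction

    # Per-class prior and per-feature conditionals for the evidence a=1, b=1, c=0.
    # K=1 values come from this thread; K=0 values are hypothetical placeholders.
    priors = {1: F(1, 2), 0: F(1, 2)}
    likelihoods = {
        1: [F(1, 2), F(1, 4), F(1, 2)],  # P(a=1|K=1), P(b=1|K=1), P(c=0|K=1)
        0: [F(1, 2), F(1, 2), F(1, 2)],  # hypothetical, NOT read off the table
    }

    def naive_joint(k):
        # Naively factorized P(evidence, K=k) = P(K=k) * prod_i P(x_i | K=k)
        score = priors[k]
        for p in likelihoods[k]:
            score *= p
        return score

    denominator = naive_joint(1) + naive_joint(0)  # naive P(evidence)
    posterior = {k: naive_joint(k) / denominator for k in (0, 1)}
    assert sum(posterior.values()) == 1            # consistent by construction
    print(posterior[1])
    ```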

  • mofoss@alien.top

    P(K=1) = 1/2

    P(a=1|K=1) = P(a=1,K=1)/P(K=1) = (1/4)/(1/2) = 1/2

    P(b=1|K=1) = P(b=1,K=1)/P(K=1) = (1/8)/(1/2) = 1/4

    P(c=0|K=1) = P(c=0,K=1)/P(K=1) = (1/4)/(1/2) = 1/2

    P(a=1, b=1, c=0, K=1) = 0

    P(a=1, b=1, c=0, K=0) = 1/8

    [0.5 * 0.25 * 0.5] / (0 + 1/8) = (1/16) / (1/8) = 1/2

    For conditionals, convert them into joints and priors first, and THEN use the table to count instances out of the N samples.

    P(X|Y) = P(X,Y)/P(Y)

    :)
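    As code, that arithmetic, exactly as written above (a minimal transcription using exact fractions):

    ```python
    from fractions import Fraction

    F = Fraction

    numerator = F(1, 2) * F(1, 4) * F(1, 2)  # 0.5 * 0.25 * 0.5 = 1/16
    denominator = F(0) + F(1, 8)             # P(a=1,b=1,c=0,K=1) + P(a=1,b=1,c=0,K=0)
    print(numerator / denominator)           # 1/2
    ```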

    • Kruki37@alien.top

      Seems like you dropped one of the 1/2s from the numerator. Maybe I’m missing something, but the answer looks like 1/4 to me, as your own working shows.
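      With the dropped prior factor restored, the same arithmetic gives 1/4 (a minimal check):

      ```python
      from fractions import Fraction

      F = Fraction

      prior = F(1, 2)                                  # P(K = 1), the dropped factor
      numerator = prior * F(1, 2) * F(1, 4) * F(1, 2)  # 1/32
      denominator = F(0) + F(1, 8)                     # 1/8
      print(numerator / denominator)                   # 1/4
      ```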

    • Terrible_Button_1763@alien.top

      At the very least, your calculation does not agree with your own formula P(X|Y) = P(X,Y)/P(Y).

      How is the numerator a calculation of P(X, Y)? [0.5 * 0.25 * 0.5] is P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1), which under Naive Bayes is P(X | Y), not P(X, Y).
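      The distinction in exact arithmetic (a minimal sketch with the numbers from this thread):

      ```python
      from fractions import Fraction

      F = Fraction

      p_x_given_y = F(1, 2) * F(1, 4) * F(1, 2)  # naive P(X | Y) = 1/16
      p_y = F(1, 2)                              # P(Y) = P(K = 1)
      p_xy = p_y * p_x_given_y                   # P(X, Y) = 1/32, the proper numerator
      print(p_x_given_y, p_xy)
      ```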

      • mofoss@alien.top

        Uh, not sure what Fubini’s theorem is; I just use the equivalence P(X|Y) P(Y) = P(Y|X) P(X) = P(X,Y).
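        That identity does check out on the numbers quoted upthread, e.g. for the a feature (a minimal check):

        ```python
        from fractions import Fraction

        F = Fraction

        p_a1_given_k1 = F(1, 2)  # P(a = 1 | K = 1), quoted upthread
        p_k1 = F(1, 2)           # P(K = 1)
        p_a1_k1 = F(1, 4)        # P(a = 1, K = 1), quoted upthread

        assert p_a1_given_k1 * p_k1 == p_a1_k1  # P(X|Y) P(Y) = P(X, Y)
        ```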

        • Terrible_Button_1763@alien.top

          That’s not what the question is asking. And that’s not Bayes’ rule. The denominator is not even calculating P(Y) under Naive Bayes.

          Hmm, maybe machine learning is not just import tensorflow/pytorch/llm.

          • mofoss@alien.top

            “Features are independent when conditioned on the dependent variable” is pretty much all I know about Naive Bayes; I personally don’t care for the semantics.

            Also, the last time I used Naive Bayes was in grad school 7 years ago, so things are fuzzy, sorry.