Hello.

I am taking the GAN Specialization course on Coursera, taught by Sharon Zhou.

In one of the lectures (not sure if you can access it), she says that a disadvantage of GANs is that they cannot do density estimation and thus are not useful for anomaly detection.

In the next video, she says that VAEs don’t have this problem.

I am a little confused about this. Could anyone please explain what she means?

As far as I can understand from the lecture, density estimation means learning how probable/frequent particular features are in a dataset. Like, how probable is it that a dog will have droopy ears? We can then use this info to detect anomalies: samples that do not exhibit these features.
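A toy sketch of that idea (the "ear droopiness" feature and all numbers here are made up for illustration): fit a simple density model to a feature, then flag points the model assigns very low probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D feature, e.g. an "ear droopiness" score for dog images.
# Most dogs cluster around 0.7; we fit a single Gaussian as a crude density model.
train = rng.normal(loc=0.7, scale=0.1, size=1000)
mu, sigma = train.mean(), train.std()

def log_density(x):
    """Log-probability density of x under the fitted Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# A typical point gets a far higher log-density than an outlier,
# so thresholding log_density gives a simple anomaly detector.
print(log_density(0.7) > log_density(3.0))  # True
```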

But isn’t this exactly what GANs learn? Aren’t GANs learning to mimic the distribution of the training data?

Also, how is a VAE different in this particular regard?

Could someone please help explain this?

Thank you.

  • gwern@alien.top · 10 months ago

    GANs learn to generate samples in the same ratios as the original data: if there’s 10% dogs, there will be 10% dogs in the samples. But they can’t work backwards from a dog image to that 10%, you might say: they are ‘likelihood-free’. They just generate plausible images; they don’t know how plausible an existing image is.

    In theory, a VAE can tell you this and look at a dog image and say ‘10% likelihood’ and look at a weird pseudoimage and say ‘wtf this is like, 0.00000001% likely’, and you could use it to eliminate all your pseudoimages. In practice, they don’t always work that well for outlier detection and seem to be fragile. So, the advantage of VAEs there may be less compelling than it sounds on a slide.
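The "a VAE gives you a likelihood" point can be sketched in code: a VAE's ELBO is a lower bound on log p(x), so a very low ELBO flags a candidate anomaly. The weights below are untrained random stand-ins (a real VAE would learn them); the point is only the shape of the scoring computation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Z = 16, 4  # toy data and latent dimensions

# Untrained stand-in parameters; a trained VAE would learn these.
W_mu = rng.normal(size=(Z, D))
W_logvar = rng.normal(size=(Z, D)) * 0.1
W_dec = rng.normal(size=(D, Z))

def elbo(x, n_samples=64, seed=1):
    """Monte-Carlo estimate of the ELBO, a lower bound on log p(x).
    A very low value marks x as a candidate anomaly."""
    mu, logvar = W_mu @ x, W_logvar @ x                      # encoder q(z|x)
    eps = np.random.default_rng(seed).normal(size=(n_samples, Z))
    z = mu + np.exp(0.5 * logvar) * eps                      # reparameterised samples
    recon = z @ W_dec.T                                      # decoder mean
    log_px_z = -0.5 * np.sum((x - recon) ** 2, axis=1)       # Gaussian recon term
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1 - logvar)   # KL(q || N(0, I))
    return log_px_z.mean() - kl

x_typical = rng.normal(size=D) * 0.1
x_weird = rng.normal(size=D) * 10.0
# Even these stand-in weights assign the weird input a far lower score;
# with a trained model, this gap is what you would threshold on.
print(elbo(x_typical) > elbo(x_weird))  # True
```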

    • racc15@alien.top (OP) · 10 months ago

      In theory, can I use the discriminator of the GAN for this?

      Will it look at a weird picture and say: this looks fake?

      • gwern@alien.top · 10 months ago

        Can I use the trained discriminator to detect anomalous images? I guess the discriminator should mark them as “fake” due to not being prevalent in the dataset?

        Generally, no. What a Discriminator learns seems to be weirder than that. It seems to be closer to ‘is this datapoint in the dataset’ (the original dataset, not the distribution). You can look at the ranking of a Discriminator over a dataset and this can be useful for finding datapoints to look at more closely, but it’s weird: https://gwern.net/face#discriminator-ranking
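The ranking workflow described above can be sketched as follows (the discriminator here is a trivial placeholder; substitute your trained D): score every datapoint, sort, and inspect the extremes by hand rather than trusting the score as a likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_score(x):
    """Placeholder for a trained discriminator D(x) -> realness score."""
    return float(x.mean())

# 100 toy 8x8 "images" standing in for the training dataset.
dataset = [rng.normal(size=(8, 8)) for _ in range(100)]

# Rank indices by score, lowest first: these are the datapoints worth
# looking at more closely (mislabeled, corrupted, or just odd),
# not candidates for automatic deletion.
ranked = sorted(range(len(dataset)), key=lambda i: discriminator_score(dataset[i]))
worst_10 = ranked[:10]
print(len(worst_10))  # 10
```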