PAPER: https://arxiv.org/abs/2310.16764

SUMMARY

The paper “ConvNets Match Vision Transformers at Scale” from Google DeepMind aims to debunk the prevalent notion that Vision Transformers (ViTs) are inherently superior to ConvNets for large-scale image classification. Using the NFNet model family as a representative ConvNet architecture, the authors pre-train various models on the extensive JFT-4B dataset under different compute budgets, ranging from 0.4k to 110k TPU-v4 core hours. Through this empirical analysis, they observe a log-log scaling law between held-out loss and compute budget. Importantly, when these NFNets are fine-tuned on ImageNet, they match the performance metrics of ViTs trained under comparable computational constraints. Their most resource-intensive model even achieves a Top-1 ImageNet accuracy of 90.4%.
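The log-log scaling law mentioned above means held-out loss is approximately a power law in compute: fitting a straight line to log(loss) vs. log(compute) recovers the exponent. The sketch below illustrates the idea with made-up (loss, compute) pairs; the numbers are purely hypothetical and are not measurements from the paper.

```python
import numpy as np

# Hypothetical (compute, loss) pairs purely for illustration --
# NOT the paper's actual measurements.
compute = np.array([0.4e3, 1.5e3, 6e3, 25e3, 110e3])  # TPU-v4 core hours
loss = np.array([2.10, 1.85, 1.64, 1.47, 1.33])       # held-out loss

# A log-log scaling law means log(loss) is linear in log(compute),
# i.e. loss ~ a * compute^b; fit a straight line in log space.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), slope
predicted = a * compute ** b
print(f"loss ~ {a:.2f} * compute^{b:.3f}")
```

Because larger compute budgets yield lower loss, the fitted exponent `b` comes out negative; on a log-log plot the data points fall close to the fitted line.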

The crux of the paper’s argument is that the supposed performance gap between ConvNets and ViTs largely vanishes under a fair comparison, which accounts for compute and data scale. In other words, the efficacy of a machine learning model in large-scale image classification is more dependent on the available data and computational resources than on the choice between ConvNet and Vision Transformer architectures. This challenges the community’s leaning towards ViTs and emphasizes the importance of equitable benchmarking when evaluating different neural network architectures.

  • currentscurrents@alien.topB
    1 year ago

    Maybe it’s less about having as many parameters as the human brain, and more about having datasets as rich and diverse as the real world.

    • TheCrazyAcademic@alien.topB
      1 year ago

Well, people with conditions like megalencephaly (an enlarged brain) aren’t any smarter; if anything they tend to be less intelligent, because the condition disrupts neuronal density. So we know brain size alone doesn’t determine intelligence. Animals with bigger brains, and therefore more neurons than humans, aren’t smarter either, at least as far as we can tell; scientists could just be using bad benchmarks.

    • TikiTDO@alien.topB
      1 year ago

People talk a lot about datasets being “rich” and “diverse,” but I wish they would also mention “not full of crap” in the same breath. Whether it’s AI or humans, garbage in, garbage out still applies. You can have a rich and diverse dataset that teaches an AI horrific, terrible ideas and practices.

We know that with humans you get very different results depending on the quality of the teacher and the teaching material, and that a bad teacher teaching bad lessons can be worse than no teacher at all. AI isn’t really that different.

      • shanereid1@alien.topB
        1 year ago

        Was at a big data industry conference yesterday, and one of the big takeaways was that data quality is going to be critical in the age of genAI.