CUDA cores (or shader cores in general) have long been used to compute graphics. Matrix multiplication is a very common operation in computer graphics, just as it is in deep learning. Back in the day (AlexNet era), neural networks were computed on shader cores, but the work has now moved completely to Tensor cores. My questions are:

  1. Why have these workloads been separated? (Yes, obviously the tensor cores are more specialized and leave out a bunch of unnecessary operations, but how, and why not integrate that capability into the CUDA cores to boost MM operations for computer graphics?)

  2. Why isn’t the workload offloaded to the other cores when the mathematical operations are the same?

  3. What makes tensor cores so much more efficient and faster?

  • VirtualHat@alien.topB

    The big difference with Tensor cores is that they use a 16-bit float multiply combined with a 32-bit float accumulate. This makes them much more efficient in terms of transistors required, but it also means they are not a drop-in replacement for CUDA cores.
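    Here's a rough back-of-envelope illustration of why the wider accumulator matters: a NumPy sketch of my own, not a description of the real hardware datapath, comparing a dot product accumulated in fp32 against one accumulated in fp16.

    ```python
    import numpy as np

    # Illustration only: compare keeping the running sum in fp32 vs. fp16.
    rng = np.random.default_rng(0)
    a = rng.standard_normal(4096).astype(np.float16)
    b = rng.standard_normal(4096).astype(np.float16)

    # Reference answer in float64.
    ref = np.dot(a.astype(np.float64), b.astype(np.float64))

    # Tensor-core style: 16-bit inputs, products added into a 32-bit accumulator.
    acc32 = np.float32(0.0)
    for x, y in zip(a, b):
        acc32 += np.float32(x) * np.float32(y)

    # Naive alternative: keep the running sum in float16 as well.
    acc16 = np.float16(0.0)
    for x, y in zip(a, b):
        acc16 = np.float16(acc16 + x * y)

    print("fp32-accumulate error:", abs(float(acc32) - ref))
    print("fp16-accumulate error:", abs(float(acc16) - ref))  # typically much larger
    ```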

    Libraries like PyTorch can do matrix multiplication (MM) on both CUDA cores and Tensor cores (and on the CPU too, if you like). Typically Tensor cores are ~1.5-2x faster (in theory they're much faster; in practice we're often memory-bandwidth limited, so it doesn't matter). The current default in PyTorch is to perform MM on CUDA cores and convolutions on Tensor cores, the reason being that MM sometimes requires extra precision, and in vision models most of the work is in the convolutions anyway.
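    As a minimal sketch of how you would typically steer PyTorch between the two paths (assuming a CUDA-capable GPU; which kernel actually runs is ultimately decided by cuBLAS heuristics):

    ```python
    import torch

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    # Plain fp32 matmul: with TF32 disabled (the default for matmul in recent
    # PyTorch), this stays on the CUDA cores.
    torch.backends.cuda.matmul.allow_tf32 = False
    c_fp32 = a @ b

    # Allowing TF32 makes the same fp32 matmul eligible for Tensor cores,
    # trading a little mantissa precision for throughput.
    torch.backends.cuda.matmul.allow_tf32 = True
    c_tf32 = a @ b

    # Half-precision inputs use Tensor cores (fp16 multiply, fp32 accumulate).
    c_fp16 = (a.half() @ b.half()).float()

    print((c_fp32 - c_tf32).abs().max().item())
    print((c_fp32 - c_fp16).abs().max().item())
    ```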

    • wen_mars@alien.topB

      And recently Tensor cores have started appearing with 8-bit float/int support as well, which gives them a huge advantage in inference throughput. The memory-bandwidth limitation can be mitigated by increasing the batch size.
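      A quick back-of-envelope sketch (illustrative numbers of my own, not measurements) of why a bigger batch helps a bandwidth-bound layer: the weights are read from memory once but reused for every sample in the batch, so FLOPs per byte moved grows with batch size.

      ```python
      # Arithmetic intensity (FLOPs per byte moved) of one linear layer,
      # assuming 1-byte elements (int8/fp8) for weights and activations.
      def arithmetic_intensity(out_features, in_features, batch, bytes_per_elem=1):
          flops = 2 * out_features * in_features * batch      # multiply-accumulates
          bytes_moved = bytes_per_elem * (
              out_features * in_features    # weights, read once
              + in_features * batch         # input activations
              + out_features * batch        # output activations
          )
          return flops / bytes_moved

      for batch in (1, 8, 64, 512):
          print(batch, round(arithmetic_intensity(4096, 4096, batch), 1))
      ```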

      • wen_mars@alien.topB

        If you multiply two 16-bit numbers, the result can overflow the range that can be represented in 16 bits, which is part of why the accumulation is done in 32-bit.
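        A quick NumPy illustration:

        ```python
        import numpy as np

        x = np.float16(300.0)
        y = np.float16(300.0)
        print(x * y)                          # inf: 90000 exceeds fp16's max of 65504
        print(np.float32(x) * np.float32(y))  # 90000.0 once the result is held in fp32
        ```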