CUDA cores (or shader cores in general) have long been used to compute graphics. One of the most common operations in computer graphics is matrix multiplication, just like in deep learning. In the early days (e.g., AlexNet), neural networks were computed on shader cores, but they have now moved almost entirely to Tensor cores. My questions are:
- Why have these workloads been separated? (Yes, obviously Tensor cores are more specialized and leave out a bunch of unnecessary operations, but how? And why not integrate that capability into the CUDA cores to speed up matrix multiplication for computer graphics too?)
- Why isn't the workload offloaded to the other cores when the mathematical operations are the same?
- What makes Tensor cores so much more efficient and faster?
Hardware built to solve exactly the problem you have will always be faster and more efficient than something that is only, say, a 90% fit.
A recent example is Bitcoin ASICs. While I'm not into crypto personally, it was amazing to see just how fast Bitcoin ASICs got rolled out. It's now at the point where no one in their right mind would mine with anything else, and when a faster model is released, people clamor to grab as many as they can afford.
Having dedicated hardware to do the specific math for ML is the logical move.
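To make "the specific math" concrete: what a tensor core hard-wires is a small fused matrix multiply-accumulate, D = A·B + C, executed over a whole tile as a single hardware operation. The sketch below is plain Python with a 4×4 tile and uniform precision purely for illustration (real tensor cores work on fixed-size fragments, typically with FP16 inputs and FP32 accumulation); it shows the operation one tensor-core instruction performs, versus the dozens of separate scalar multiply-adds a CUDA core would have to issue for the same result.

```python
def tensor_core_mma(A, B, C):
    """Illustrative stand-in for one tensor-core op: D = A @ B + C
    on a fixed 4x4 tile, performed as a single fused operation."""
    n = 4
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# A CUDA core, by contrast, would compute this same tile as 64
# independent scalar multiply-adds, each with its own instruction
# issue and operand fetch -- that overhead is what the dedicated
# unit amortizes away.
I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
Z4 = [[0.0] * 4 for _ in range(4)]
print(tensor_core_mma(I4, I4, Z4))  # identity @ identity + 0 = identity
```

The fixed tile size is the point: because the shape, dataflow, and accumulation precision are baked into silicon, the unit needs none of the general-purpose scheduling machinery a CUDA core carries, which is where the efficiency comes from.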