• drplan@alien.top · 1 year ago

    Perfect. Next, please: a chip that can do half the inference speed of an A100 at 15 watts.
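
    As a rough back-of-the-envelope (assuming an A100 SXM TDP of about 400 W; PCIe variants draw roughly 250-300 W), that request works out to something like a 13x perf-per-watt jump:

        # Rough sketch: what "half an A100 at 15 W" implies for efficiency.
        # Assumes an A100 SXM TDP of ~400 W (an assumption for illustration).
        a100_tdp_w = 400.0
        target_power_w = 15.0
        target_speed_ratio = 0.5  # half the A100's inference throughput

        # Perf-per-watt improvement needed relative to the A100:
        efficiency_gain = target_speed_ratio / (target_power_w / a100_tdp_w)
        print(f"Required perf/W improvement: ~{efficiency_gain:.0f}x")  # ~13x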

    • MrTacobeans@alien.top · 1 year ago

      I don’t think that will come from Nvidia. It’s going to take in-memory compute to get anywhere near that level of efficiency, and the first samples of those SoCs are nowhere near the memory capacity needed for even small models. These kinds of accelerators will likely come from Intel/ARM/RISC-V/AMD before Nvidia does it.
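
      For a sense of scale on that memory point, a minimal sketch of the weight footprint alone (parameter counts and bit widths here are illustrative assumptions, not figures from any actual SoC sample; KV cache and activations would add more on top):

          # Memory needed just to hold model weights, ignoring KV cache
          # and activations. Values are illustrative, not vendor specs.
          def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
              return params_billion * 1e9 * bits_per_weight / 8 / 1e9

          for params in (3, 7, 13):
              for bits in (16, 4):
                  gb = weight_memory_gb(params, bits)
                  print(f"{params}B params @ {bits}-bit: ~{gb:.1f} GB")
          # e.g. a 7B model needs ~14 GB at 16-bit, ~3.5 GB even at 4-bit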