With the proof of concept done and users able to get over 180 GB/s on a PC with AMD's 3D V-Cache, it sure would be nice if we could figure out a way to use that bandwidth for CPU-based inference. I think it only worked on Windows, but if that's the case we should be able to come up with a way to do it under Linux too.
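
As a hedged starting point, here is a minimal sketch of the kind of Linux-side check one could run: a plain streaming-read benchmark comparing a buffer that fits inside the ~96 MB of 3D V-Cache against one that spills to DRAM. The buffer sizes, pass counts, and the single-threaded setup are all assumptions; reaching numbers like 180 GB/s would take multiple threads pinned to the V-Cache CCD.

```c
// Minimal single-threaded sketch (assumptions: x86-64 Linux, gcc, buffer sizes
// chosen so one fits inside ~96 MB of 3D V-Cache and one does not).
// Compile: gcc -O2 -o bwtest bwtest.c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

// Sum a buffer repeatedly and report the effective read bandwidth.
static void measure(size_t bytes, int passes) {
    uint64_t *buf = malloc(bytes);
    if (!buf) return;
    memset(buf, 1, bytes);                 // touch pages so they are resident
    volatile uint64_t sink = 0;
    double t0 = now_sec();
    for (int p = 0; p < passes; p++) {
        uint64_t acc = 0;
        for (size_t i = 0; i < bytes / sizeof(uint64_t); i++)
            acc += buf[i];
        sink += acc;                       // keep the loop from being optimized away
    }
    double dt = now_sec() - t0;
    printf("%6zu MB: %.1f GB/s\n", bytes >> 20, bytes * (double)passes / dt / 1e9);
    free(buf);
}

int main(void) {
    measure(64ull << 20, 64);   // ~64 MB: should fit in the 3D V-Cache on an X3D part
    measure(1ull << 30, 4);     // 1 GB: spills to DRAM, shows the RAM-bandwidth floor
    return 0;
}
```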

  • tu9jn@alien.topB

    V-Cache only helps when you want to access lots of tiny chunks of data that fit inside the 96–128 MB cache.

    During inference you have to read the entire multi-GB model for each token generated, so your bottleneck is still the RAM bandwidth.
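
    To make that concrete, a rough back-of-the-envelope bound, where the model size and bandwidth figures are purely illustrative assumptions:

    ```c
    // Back-of-the-envelope token-rate ceiling when every weight is read once per token.
    // All numbers here are illustrative assumptions, not measurements.
    #include <stdio.h>

    int main(void) {
        double bandwidth_gbs = 180.0;   // assumed effective memory read bandwidth in GB/s
        double model_gb      = 40.0;    // assumed size of a quantized 70B-class model in GB
        double cache_gb      = 0.096;   // ~96 MB of 3D V-Cache, far smaller than the model

        // If the whole model streams from memory once per token, this is the ceiling:
        printf("max tokens/s ~ %.1f\n", bandwidth_gbs / model_gb);            // ~4.5
        printf("cache covers %.2f%% of the model\n", 100.0 * cache_gb / model_gb);
        return 0;
    }
    ```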

    • ccbadd@alien.topOPB

      Maybe, but it’s a lot faster than what we can do right now, and it’s only the start.

  • FaustBargain@alien.topB

    So there are CPU intrinsics for prefetching data. If we can get better at anticipating the next pieces of data that need to be calculated, we can sprinkle in those prefetch instructions and pick up some speed.
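
    A rough sketch of what that could look like with the GCC/Clang builtin; the `dot` helper and the prefetch distance are placeholder assumptions that would need tuning per CPU:

    ```c
    // Sketch: a dot-product inner loop with software prefetch of the weight stream.
    // PREFETCH_DISTANCE is an assumption; the right value depends on the CPU and
    // memory latency and would have to be tuned experimentally.
    #include <stddef.h>

    #define PREFETCH_DISTANCE 64   // elements ahead of the current index (assumption)

    float dot(const float *w, const float *x, size_t n) {
        float acc = 0.0f;
        for (size_t i = 0; i < n; i++) {
            // Hint the CPU to start pulling in weights we'll need shortly.
            // Args: address, 0 = prefetch for read, 3 = high temporal locality.
            __builtin_prefetch(&w[i + PREFETCH_DISTANCE], 0, 3);
            acc += w[i] * x[i];
        }
        return acc;
    }
    ```

    For a purely sequential stream like this the hardware prefetcher already does most of the work, so the bigger wins would likely come from prefetching the more irregular access patterns in the inference loop.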

  • mcmoose1900@alien.topB

    There are actually TSVs for 3D Cache on the AMD 7900 series, but AMD doesn’t use them, presumably because it would make the chip run hotter and they’d have to downclock it.

    But I think it would be a great candidate for an ML card. Not for directly accelerating models, but for basically fitting any kind of intermediate calculations in cache to preserve all the RAM bandwidth for model weights.
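
    A hedged illustration of that split in plain CPU terms; the dimensions and the `matvec` helper are assumptions, just to show how the weights stream from RAM once while the activations stay cache-resident:

    ```c
    // Sketch: per-layer matvec where the activations (tens of KB) stay
    // cache-resident while the weight matrix (hundreds of MB to GB across a model)
    // streams from RAM exactly once per token. Sizes are illustrative assumptions.
    #include <stddef.h>

    #define DIM 8192   // hidden size (assumption); 2 * DIM * 4 bytes of activations ~ 64 KB

    // y = W * x.  x and y together are tiny next to W, so a large cache can hold
    // them (plus any temporaries) and all of the memory bandwidth goes to W.
    static void matvec(const float *W, const float *x, float *y, size_t rows, size_t cols) {
        for (size_t r = 0; r < rows; r++) {
            float acc = 0.0f;
            const float *row = &W[r * cols];   // streamed from RAM, touched once
            for (size_t c = 0; c < cols; c++)
                acc += row[c] * x[c];          // x is reused every row -> stays in cache
            y[r] = acc;
        }
    }
    ```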