Sorry if I’m not the first to bring this up. It seems like a simple enough solution.

  • JackGreenEarth@lemm.ee · 26 points · 1 year ago

    What other company besides AMD makes GPUs, and which of those GPUs are supported by machine learning programs?

      • jon@lemmy.tf · 8 points · 1 year ago

        AMD has ROCm, which tries to get close. I’ve been able to get some CUDA applications running on a 6700 XT, although they are noticeably slower than on a comparable Nvidia card. Maybe we’ll see more projects adding native ROCm support now that AMD is trying to cater to the enterprise market.
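
        For reference, the ROCm builds of PyTorch keep the usual torch.cuda API, so framework-level code runs unchanged. A rough sanity check might look like this (assuming a ROCm wheel of PyTorch; for cards like the 6700 XT that aren’t on the official support list, the commonly cited workaround is exporting HSA_OVERRIDE_GFX_VERSION=10.3.0 first):

            import torch

            # ROCm builds of PyTorch answer through the same "cuda" device API,
            # so scripts written for Nvidia usually run as-is.
            print(torch.cuda.is_available())        # True on a working ROCm install
            print(torch.cuda.get_device_name(0))    # reports the Radeon card

            x = torch.rand(4096, 4096, device="cuda")
            print((x @ x).sum().item())             # the matmul executes on the GPU via HIP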

        • Turun@feddit.de · 4 points · 1 year ago

          They kinda have that, yes. But it was not supported on Windows until this year and is in general not officially supported on consumer graphics cards.

          Still hoping it will improve, because AMD ships more VRAM at the same price point, but ROCm feels kinda half-assed when you look at the official support investment by AMD.

        • meteokr@community.adiquaints.moe · 2 points · 1 year ago

          I don’t own any Nvidia hardware out of principle, but ROCm is nowhere even close to CUDA as far as mindshare goes. At this point I’d rather just have a CUDA->ROCm shim I can use, the same way Proton handles DirectX->Vulkan. Trying to fight for mindshare sucks, so trying to get every dev to support it just feels like a massive uphill battle.

    • Dudewitbow@lemmy.ml · 10 points · 1 year ago

      AMD supports ML; it’s just that a lot of smaller projects are built with CUDA backends and don’t have the developers to switch from CUDA to OpenCL or similar.

      Some of the major ML libraries that were built around CUDA, like TensorFlow, have already added non-CUDA backends, but that’s only because TensorFlow is open source, ubiquitous in the scene, and literally has Google behind it.
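
      For example, TensorFlow’s ROCm build (historically shipped as the tensorflow-rocm package) keeps the standard device-discovery API, so the same script runs on either vendor. A rough sketch, assuming that package is installed:

          import tensorflow as tf

          # With a ROCm (or CUDA) build installed, the GPU shows up through the same API.
          gpus = tf.config.list_physical_devices("GPU")
          print(gpus)

          if gpus:
              with tf.device("/GPU:0"):
                  a = tf.random.uniform((2048, 2048))
                  print(tf.reduce_sum(a @ a))   # runs on whichever backend is present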

      ML for more niche uses is basically in a chicken-and-egg situation. People won’t use other GPUs for ML because there are no devs working on non-CUDA backends. No one’s working on non-CUDA backends because the devs end up buying Nvidia, which is basically what Nvidia wants.

      There are a bunch of followers but a lack of leaders to move things toward a more open compute environment.

      • PlatinumSf@pawb.social · 2 points · 1 year ago

        Huh, my bad. I was operating off of old information. They’ve actually already released the SDK and APIs I was referring to.

    • coffeetest@kbin.social · 10 points · 1 year ago

      My Intel Arc A750 works quite well at 1080p and is perfectly sufficient for me. If people need hyper refresh rates and resolutions and all the bells and whistles, then have fun paying for it. But if you need functional, competent gaming at US$200, Arc is nice.

    • PlatinumSf@pawb.social · 6 points · 1 year ago

      No joke, probably Intel. The cards won’t hold a candle to a 4090, but they’re actually pretty decent for both gaming and ML tasks. AMD definitely needs to speed up the timeline on their new ML API, though.
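
      On the ML side, the usual Intel route today is PyTorch’s “xpu” device via the intel-extension-for-pytorch package (newer PyTorch releases are folding this in natively). A rough sketch, with the caveat that package and device naming have been shifting between releases:

          import torch
          import intel_extension_for_pytorch as ipex  # registers the "xpu" device for Arc GPUs

          print(torch.xpu.is_available())           # True once the Arc card and drivers are set up
          x = torch.rand(4096, 4096, device="xpu")
          print((x @ x).sum().item())               # the matmul runs on the Arc GPU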

    • Erdrick@beehaw.org · 3 points · 1 year ago

      I jumped to team red this build.
      I have been very happy with my 7900 XTX.
      4K with max settings and high FPS on every game I’ve thrown at it.
      I don’t play the latest games, so I guess I could hit a wall with recent AAA releases, but most of the time they simply don’t interest me.