Given the news that AMD is currently focusing on the mid tier, it got me thinking about their multi-chiplet plans for RDNA5+: they will have to do a lot of work on high speed interconnects and some form of internal scheduler/balancer to split the work across the chiplets.

So with this in mind, if they could leverage that interconnect and scheduler work at a higher level, as a more cohesive form of Crossfire/SLI, they wouldn't even need to release any high end cards: they could just sell you multiple mid tier cards and you daisy chain them together (within reason). It would let them sell multiple cards to individuals, increasing sales numbers, and also let them focus on fewer models, so simpler/cheaper production.

Historically I think the issue with Crossfire/SLI was that developers had to do a lot of legwork to spread the load to make good use of it, but if that could be handled at a lower level, like they do with chiplets, then maybe it could be abstracted away from developers somewhat, e.g. you designate master/slave GPUs so the OS just treats the main one as a bigger GPU or something.
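To make the idea concrete, here is a toy sketch (purely hypothetical, not any real driver API) of what "the OS just sees one bigger GPU" could look like: a logical device that presents a single submit interface and fans the work out to a pool of physical cards behind the scenes.

```python
# Hypothetical illustration only: a "logical" GPU that hides several
# physical GPUs behind one device, so callers never see the split.

class GPU:
    def __init__(self, name):
        self.name = name
        self.queue = []

    def submit(self, job):
        self.queue.append(job)

class LogicalGPU:
    """What the OS/driver would expose: one device, many backends."""
    def __init__(self, primary, secondaries):
        self.pool = [primary] + list(secondaries)

    def submit(self, jobs):
        # Naive round-robin split; a real scheduler would balance by
        # actual load and data locality, not job count.
        for i, job in enumerate(jobs):
            self.pool[i % len(self.pool)].submit(job)

gpus = [GPU("card0"), GPU("card1")]
logical = LogicalGPU(gpus[0], gpus[1:])
logical.submit([f"drawcall-{n}" for n in range(10)])
print([len(g.queue) for g in gpus])  # each card gets half the work: [5, 5]
```

The hard part, of course, is not the fan-out but keeping the results coherent, which is exactly where the interconnect and scheduler R&D would come in.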

I doubt this is on the cards, but it felt plausible and worth discussing.

  • apt_install_coffee@lemmy.ml · 22 hours ago

    It’s unlikely; convincing people to buy two of your GPUs instead of one of your competitor’s has always been a hard sell, even when Radeon and NVIDIA were neck and neck in market share.

    Combine that with the fact that crossfire is not a solved problem: whether you split the work spatially or temporally, or offload async tasks to the second GPU, you always run into the NUMA problem (shoving all that data down the PCIe bus fast enough to stitch results together well within one frame is a tall order), and the result is terrible tearing and super niche bugs, so it’s just not worth the cost of support.
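    A quick back-of-the-envelope on the bus-bandwidth point (assumed numbers: one 4K RGBA8 framebuffer, PCIe 4.0 x16 at its ~32 GB/s theoretical peak, a 120 fps target):

```python
# Rough bandwidth math for shipping a finished frame between GPUs.
# All figures are assumptions: 4K RGBA8 frame, PCIe 4.0 x16 theoretical
# peak of 32 GB/s (real-world throughput is lower).

frame_bytes = 3840 * 2160 * 4        # one 4K RGBA8 framebuffer, ~33 MB
pcie4_x16_bps = 32e9                 # bytes/sec, theoretical peak

transfer_ms = frame_bytes / pcie4_x16_bps * 1000
frame_budget_ms = 1000 / 120         # per-frame budget at 120 fps

print(f"transfer: {transfer_ms:.2f} ms of a {frame_budget_ms:.2f} ms budget")
```

    Even at the theoretical peak, just moving one finished frame eats roughly an eighth of the frame budget, before any synchronisation overhead, which is why stitching "well within 1 frame" is such a tall order.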

    Developer support could help, but why would they? A lot more work for <1% of their player base?

    • Grofit@lemmy.worldOP · 16 hours ago

      There have been some decent results historically with checkerboard and other split-frame reconstruction techniques. Nvidia was working on some new checkerboard approaches before it killed off SLI.
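      For anyone unfamiliar, a checkerboard split just means alternating pixels (or tiles) are assigned to each GPU. A toy illustration of the spatial partition (real checkerboard rendering also reconstructs the missing pixels, often temporally; that part is omitted here):

```python
# Toy checkerboard split: alternate pixels go to each of two GPUs,
# then the results are stitched into one frame. "render" is a stand-in
# for real shading work.

W, H = 4, 4

def render(gpu_id, x, y):
    return (gpu_id, x, y)  # which GPU shaded this pixel

frame = [[None] * W for _ in range(H)]
for y in range(H):
    for x in range(W):
        gpu = (x + y) % 2          # checkerboard assignment
        frame[y][x] = render(gpu, x, y)

# Each GPU shades exactly half the pixels.
counts = [sum(1 for row in frame for (g, _, _) in row if g == i)
          for i in range(2)]
print(counts)  # [8, 8]
```

      The appeal is that the split is fixed and content-independent, so neither GPU needs to know what the other is drawing; the cost is the reconstruction/stitch step, which is where the bus-bandwidth problem above bites.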

      A decade or two ago most people I knew had dual GPUs; it was quite common for gamers, and while you were not getting 100% scaling it was enough to be noticeable, and the prices were not mega bucks back then.

      On your point of buying 1 card vs many, I get that, but it seems like we are reaching the limits of monolithic dies. Shrinking chips is getting far harder and more costly, so to keep performance moving forward we are now jacking up the power draw instead.

      Anyway, the point I’m trying to make is that it’s going to become very costly to keep producing these more powerful monolithic GPUs, and their power requirements will keep going up. So if it’s 2 mid range GPUs for $500 each vs 1 high end GPU for $1.5k with possibly higher power usage, I’m not sure the single card will be as much of a shoo-in as you say.

      Also, if multi-chiplet designs already have to solve the problem of multiple GPU cores communicating and acting like one big one, maybe some of that R&D could benefit high-level multi-GPU setups too.