Higher supply chain consolidation under Nvidia?

  • partial_accumen@lemmy.world · 6 days ago

    The launch of Nvidia’s Vera Rubin platform for AI and HPC next year could mark significant changes in the AI hardware supply chain. According to J.P. Morgan (via @Jukanlosreve), Nvidia plans to ship its partners fully assembled Level-10 (L10) VR200 compute trays with all compute hardware, cooling systems, and interfaces pre-installed. The move would leave major ODMs with very little design or integration work, making their lives easier, but it would also trim their margins in favor of Nvidia’s. The information remains unofficial at this stage.

    If the only way Nvidia plans to offer these high-end GPUs in the future is in a fully baked, prebuilt server, that is a big mistake for Nvidia.

    For smaller partners this would probably be a benefit, but I would guess the bulk of the volume is being consumed by extremely large hyperscale companies. These companies have optimized their datacenters, supply chains, electrical design, and even thermal management to maximize efficiency for their specific use cases. A one-size-fits-all COTS chassis would likely be unwelcome at scale.

    Further, by providing the chassis, board, power supplies, etc., Nvidia is likely charging retail-grade pricing for these things. Hyperscale customers don’t pay retail for those today: they design and manufacture them in-house, pocketing the margin as well as benefiting from custom designs that work best in their environments.
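
    To make that margin point concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it (parts cost, markup, fleet size) is a hypothetical placeholder, not from the article or J.P. Morgan; the only point is that a vendor markup on components a hyperscaler would otherwise build at cost compounds quickly across a large fleet.

    ```python
    # Back-of-the-envelope comparison: buying non-GPU tray components at a
    # vendor markup vs. building them in-house at cost.
    # All numbers below are hypothetical placeholders, for illustration only.

    IN_HOUSE_PARTS_COST = 12_000   # hypothetical per-tray cost of chassis, PSUs, cooling built in-house
    VENDOR_MARKUP = 0.35           # hypothetical retail-grade margin on those same parts
    TRAYS_PER_FLEET = 50_000       # hypothetical number of trays a hyperscaler deploys

    def extra_cost_of_prebuilt(parts_cost: float, markup: float, trays: int) -> float:
        """Additional spend if the same parts arrive pre-integrated at a marked-up price."""
        return parts_cost * markup * trays

    if __name__ == "__main__":
        delta = extra_cost_of_prebuilt(IN_HOUSE_PARTS_COST, VENDOR_MARKUP, TRAYS_PER_FLEET)
        print(f"Hypothetical extra spend per fleet: ${delta:,.0f}")
        # With these placeholder inputs: $210,000,000 of extra spend on parts the
        # buyer could otherwise manufacture at cost, before counting the lost
        # benefit of custom electrical and thermal designs.
    ```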

    Unless Nvidia plans to sell these significantly less expensively (who are we kidding), this is a fantastic opportunity for a GPU competitor to step up and fill the gap in discrete GPU sales that Nvidia is potentially abandoning.