NVIDIA had earlier described DLSS 5 in a way that suggested a deeper understanding of the scene. The follow-up answers paint a narrower picture. When asked whether the model reads PBR (Physically Based Rendering) properties from the engine, NVIDIA said: “DLSS 5 only takes the rendered frame and motion vectors as inputs. Materials are inferred from the rendered frame.” In other words, the model is not reading metallic, roughness, normal maps, or other underlying material properties directly.
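To make the stated contract concrete, here is a minimal sketch of what "only the rendered frame and motion vectors" means as an input shape. This is purely illustrative; the function name, array layouts, and resolutions are assumptions, not NVIDIA's actual API, and a real upscaler would feed these tensors to a neural network rather than simply stacking them.

```python
import numpy as np

def upscaler_inputs(frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """Combine the only two inputs the article says the model receives.

    frame:          (H, W, 3) RGB render
    motion_vectors: (H, W, 2) per-pixel screen-space motion

    Note what is absent: no metallic, roughness, or normal-map channels.
    Any "material" information must be inferred from the pixels alone.
    """
    if frame.shape[:2] != motion_vectors.shape[:2]:
        raise ValueError("frame and motion vectors must share a resolution")
    # Concatenate channels to show that nothing else enters the pipeline.
    return np.concatenate([frame, motion_vectors], axis=-1)

inputs = upscaler_inputs(np.zeros((1080, 1920, 3)), np.zeros((1080, 1920, 2)))
print(inputs.shape)  # (1080, 1920, 5)
```

The point of the sketch is the five channels: three of color and two of motion, with no G-buffer material data anywhere in the signature.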

That may explain why some preview images raised concerns. In one example, a character appears to gain hair detail in an area where it was not visible before. In another, facial details (like the nose) appear altered enough to raise questions about whether the model is changing the look of the character rather than only improving lighting. NVIDIA’s response was that “the underlying geometry is unchanged.”

  • MyTurtleSwimsUpsideDown@fedia.io · 1 day ago

    NVIDIA’s response was that “the underlying geometry is unchanged,”

    Slow down there, Kenobi. It may technically not change the underlying geometry, but that’s because it ignores the geometry. You never see it.

    • brsrklf@jlai.lu · 7 hours ago

      They’re right, the geometry is just lying under an opaque layer of shit.

      Also everyone is technically naked 100% of the time, you just can’t see it under the clothes they’re wearing.

  • inclementimmigrant@lemmy.worldOP · 1 day ago

    So from fake frames to fake ray tracing with an AI OnlyFans filter.

    Plus, yeah, that nose.

    For leaving the underlying geometry alone, that nostril grew to twice its size.

  • paraphrand@lemmy.world · 1 day ago (edited)

    This was intuitive and obvious to anyone paying attention to AI who also knows video game engines. Nvidia trying to imply anything else is really shitty.

    When it comes to generating lighting, atmospheric, and dynamic surface effects in real time, calculating each mesh or surface in the scene instead of one big collective pass would be even more demanding, and will likely be beyond their capabilities for a very long time.

  • NekoKoneko@lemmy.world · 1 day ago

    I think we’re being too quick to judgment on this. We’re forgetting that this is a vital step in Jensen Huang’s plan to make $1 trillion from selling AI accelerators to new data centers, which I think we can agree is what really matters to most gamers.