- cross-posted to:
- pcgaming@lemmy.ca
NVIDIA had earlier described DLSS 5 in a way that suggested a deeper understanding of the scene. The follow-up answers paint a narrower picture. When asked whether the model reads PBR (Physically Based Rendering) properties from the engine, NVIDIA said: “DLSS 5 only takes the rendered frame and motion vectors as inputs. Materials are inferred from the rendered frame.” In other words, the model is not reading metallic, roughness, normal maps, or other underlying material properties directly.
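To make that input contract concrete, here is a minimal Python sketch of what the model actually sees, going by NVIDIA's statement. The function name, array shapes, and the identity stand-in for the network are illustrative assumptions, not NVIDIA's actual API:

```python
import numpy as np

def dlss5_style_pass(rendered_frame: np.ndarray,
                     motion_vectors: np.ndarray) -> np.ndarray:
    # The model's entire view of the scene, per NVIDIA's statement:
    #   rendered_frame: (H, W, 3) final shaded color
    #   motion_vectors: (H, W, 2) per-pixel screen-space motion
    # Conspicuously absent: metallic, roughness, and normal maps, the
    # G-buffer, the geometry itself. Anything "material-like" in the
    # output has to be inferred from these pixels alone.
    assert rendered_frame.ndim == 3 and rendered_frame.shape[2] == 3
    assert motion_vectors.shape[:2] == rendered_frame.shape[:2]

    # Stand-in for the proprietary neural network: an identity pass,
    # since the real model is a black box.
    return rendered_frame.copy()

# Example: a 1080p frame and its motion vectors are all it gets.
frame = np.zeros((1080, 1920, 3), dtype=np.float32)
mvecs = np.zeros((1080, 1920, 2), dtype=np.float32)
out = dlss5_style_pass(frame, mvecs)
```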
That may explain why some preview images raised concerns. In one example, a character appears to gain hair detail in an area where it was not visible before. In another, facial details (like the nose) appear altered enough to raise questions about whether the model is changing the look of the character rather than only improving lighting. NVIDIA’s response was that “the underlying geometry is unchanged.”
Slow down there, Kenobi. It may technically not change the underlying geometry, but that’s because it ignores the geometry. You never see it.
They’re right, the geometry is just lying under an opaque layer of shit.
Also everyone is technically naked 100% of the time, you just can’t see it under the clothes they’re wearing.
Technically correct is normally the best kind of correct, but here we are…
So from fake frames to fake ray tracing with an AI OnlyFans filter.
Plus, yeah, that nose.

For leaving the underlying geometry alone, that nostril grew to twice its size.
Oh my god I see what happened - DLSS thought the SHADOW of the nose was actually part of the nose. That’s hysterical.
You are forgetting the €5000 graphics card you’ll need to access these coveted features.
You mean the second graphics card you’ll need.
You mean the subscription to Nvidias data centers.
This was intuitive and obvious to anyone paying attention to AI who also knows video game engines. Nvidia trying to imply anything else is really shitty.
When it comes to generating lighting, atmospheric, and dynamic surface effects in real time, calculating each mesh or surface in the scene instead of one big collective pass would be even more demanding, and will likely be beyond their capabilities for a very long time.
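A quick back-of-envelope sketch of that scaling argument. Every number here is a made-up assumption for illustration, not a measurement:

```python
# Screen-space cost is fixed by resolution; per-surface cost grows
# with scene complexity. All figures below are assumed, not measured.
frame_pass_ms = 2.0     # one pass over the whole rendered frame
per_surface_ms = 0.05   # fixed cost to evaluate a single mesh/surface
surfaces = 5_000        # a modest modern scene

screen_space_total = frame_pass_ms             # constant, whatever the scene holds
per_surface_total = per_surface_ms * surfaces  # grows linearly with the scene

print(f"screen-space pass: {screen_space_total:.1f} ms")  # 2.0 ms
print(f"per-surface pass:  {per_surface_total:.1f} ms")   # 250.0 ms, blowing a 16.7 ms (60 fps) budget
```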
So it’s not so different from an Instagram filter, huh?
No, it’s like adding a filter to someone else’s Instagram post, changing what they wanted it to look like.
So it’s worse
It’s a really fast instagram filter.
- Really fast Instagram filter (when run on a second dedicated 5090).
It sure does seem like it.
I think we’re being too quick to judgment on this. We’re forgetting that this is a vital step in Jensen Huang’s plan to make $1 trillion from selling AI accelerators to new data centers, which I think we can agree is what really matters to most gamers.
Daniel Owen’s interview with Nvidia’s Jacob Freeman, which is where this article’s analysis comes from, is even more illuminating: