Can DLSS Render "Better Than Native" Graphics?

"Can DLSS Render "Better Than Native" Graphics?"

Since DLSS is using Deep Learning for it's upscaling technic, which means it is PREDICTING the colour of pixels, than the answer is a flat and blatant NO!
 
"Can DLSS Render "Better Than Native" Graphics?"

Since DLSS is using Deep Learning for it's upscaling technic, which means it is PREDICTING the colour of pixels, than the answer is a flat and blatant NO!
Lool, this is why I like TechSpot chat sense of humor so much.
 
"Can DLSS Render "Better Than Native" Graphics?" Since DLSS is using Deep Learning for it's upscaling technic, which means it is PREDICTING the colour of pixels, than the answer is a flat and blatant NO!
The models used were specifically trained in examining frame buffers for image artifacts caused by temporal aliasing; it then corrects them based on further training and values from the game's motion vectors buffer. So if the frame that's presented in native form contains such artifacts and DLSS accurately fixes them, then yes -- DLSS can be better than native graphics.
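For anyone wondering what "correcting based on motion vectors" looks like mechanically, here's a rough, hypothetical sketch in plain C++ of the reproject-and-blend idea that temporal upscalers build on. DLSS's actual network and heuristics are proprietary, so treat the fixed blend weight and clamp below purely as illustrative stand-ins, not Nvidia's algorithm.

```cpp
// Hypothetical, heavily simplified temporal reconstruction in 1D.
// Illustrates the reproject-and-blend idea only; DLSS replaces the
// fixed blend/clamp heuristics below with a trained network.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Reproject the previous frame's value to the current pixel using a
// motion vector (in pixels), then blend it with the new sample.
float resolve_pixel(const std::vector<float>& history,
                    const std::vector<float>& current,
                    int x, float motion_px)
{
    const int n = static_cast<int>(current.size());
    int src = std::clamp(x - static_cast<int>(std::lround(motion_px)), 0, n - 1);
    float reprojected = history[src];

    // Neighbourhood clamp: reject history that no longer matches the
    // local image content (a crude stand-in for learned rejection).
    float lo = current[x], hi = current[x];
    for (int dx = -1; dx <= 1; ++dx) {
        float v = current[std::clamp(x + dx, 0, n - 1)];
        lo = std::min(lo, v);
        hi = std::max(hi, v);
    }
    reprojected = std::clamp(reprojected, lo, hi);

    // Exponential blend accumulates detail across frames.
    const float alpha = 0.1f; // weight of the new (jittered) sample
    return alpha * current[x] + (1.0f - alpha) * reprojected;
}

int main()
{
    std::vector<float> history = {0.f, 0.f, 1.f, 1.f, 0.f, 0.f};
    std::vector<float> current = {0.f, 0.f, 0.f, 1.f, 1.f, 0.f};
    for (int x = 0; x < 6; ++x)
        std::printf("%.2f ", resolve_pixel(history, current, x, 1.0f)); // 1 px of motion
    std::printf("\n");
}
```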
 
It shows improvement. Improvement = better. It's that simple. No one cares whether a game runs natively or not when the image quality is better.
Patient after years of therapy: Doctor, my arm -- is it getting better?
Doctor: No, it is showing improvement.
Patient: So it is better?
Doctor: No, it is just improving.
 
This world is nuts, promoting fake resolution and fake frames as better than native.
It's like saying fake boobs are better than natural ones and only need a "repair shop" visit every 5-10 years.
When your GPU cannot give you a steady 60 fps, I think some form of upscaling can be useful. Whether it's "better" may be a matter of personal opinion. I would rather have a steady 60 fps, along with fake frames, than stutter along at 30-45 fps with only native frames.
 
I think there is a small flaw in this testing: why not use a lower-end GPU instead of a 4080? Most of the time a 4080 is going to give you 60+ fps in many games; a 4070, on the other hand, may not. I see the use case for DLSS/FSR as being when my GPU isn't sufficient to deliver 60 fps natively and consistently.
 
I think there is a small flaw in this testing: why not use a lower-end GPU instead of a 4080? Most of the time a 4080 is going to give you 60+ fps in many games; a 4070, on the other hand, may not. I see the use case for DLSS/FSR as being when my GPU isn't sufficient to deliver 60 fps natively and consistently.
The performance improvement of upscaling wasn't the point of this article -- it's purely about the visual quality of the output versus native rendering.
 
The performance improvement of upscaling wasn't the point of this article -- it's purely about the visual quality of the output versus native rendering.
But isn't the point of using DLSS/FSR to squeeze out more FPS? And I would ask whether frame generation is equal across all GPU models. In other words, can the 4080 w/DLSS produce better frame quality than, say, a 4070 w/DLSS?
 
But isn't the point of using DLSS/FSR to squeeze out more FPS? And I would ask whether frame generation is equal across all GPU models. In other words, can the 4080 w/DLSS produce better frame quality than, say, a 4070 w/DLSS?
4080/DLSS definitely can produce better frame quality than, say, any GTX 1xxx/DLSS.
This is what I pointed out in my posts about DLSS's almost "universal" superiority :laughing:
 
But isn't the point of using DLSS/FSR to squeeze out more FPS?
Yes, but that's simply not the purpose of this article. Another way of looking at it is how much visual quality is sacrificed, if any, when using upscaling.

In other words, can the 4080 w/DLSS produce better frame quality than say, a 4070 w/DLSS?
Not in the same GPU generation, but as DSirius has pointed out, there's scope for better results with Ada Lovelace chips compared to Turing ones due to better hardware. For example, this document from Nvidia highlights the differences relating to optical flow acceleration between the three architectures. Whether there are any visible differences in the output, though, is another matter, as Nvidia may well just utilize the better capabilities for performance only.

Edit: Having done a brief scan through the DLSS SDK documentation, there's nothing inherently different between the three architectures when it comes to quality -- i.e., a Turing chip can produce the same quality of output as an Ada Lovelace one. The biggest factors that do affect quality are how developers implement it and what the rest of the rendering pipeline is like. For example, DLSS is affected by the precision and resolution of the motion vector buffer.
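To illustrate that last point, here's a small, hypothetical C++ snippet showing how a screen-space motion vector is typically derived and how storing it at reduced precision loses sub-pixel information. The sign/scale convention and the 1/16-pixel quantisation are made up for illustration; they are not taken from the DLSS SDK.

```cpp
// Hypothetical illustration of why motion-vector precision matters.
// A screen-space motion vector is usually derived from the current and
// previous positions of the same surface point (conventions vary per engine).
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// NDC (-1..1) delta converted to a pixel offset for a given resolution.
Vec2 motion_vector_px(Vec2 ndc_curr, Vec2 ndc_prev, float width, float height)
{
    return { (ndc_prev.x - ndc_curr.x) * 0.5f * width,
             (ndc_prev.y - ndc_curr.y) * 0.5f * height };
}

// Crude stand-in for a low-precision buffer: snap to 1/16 of a pixel.
float quantize(float v) { return std::round(v * 16.0f) / 16.0f; }

int main()
{
    Vec2 curr = { 0.100000f, 0.200000f };
    Vec2 prev = { 0.100300f, 0.200150f };   // tiny sub-pixel movement
    Vec2 mv = motion_vector_px(curr, prev, 3840.0f, 2160.0f);
    std::printf("full precision : %.4f, %.4f px\n", mv.x, mv.y);
    std::printf("quantized      : %.4f, %.4f px\n", quantize(mv.x), quantize(mv.y));
    // Any error here accumulates over many frames of history reuse,
    // which is why the buffer's precision and resolution affect quality.
}
```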
 
Yes, but that's simply not the purpose of this article. Another way of looking at it is how much visual quality is sacrificed, if any, when using upscaling.


Not in the same GPU generation, but as DSirius has pointed out, there's scope for better results with Ada Lovelace chips compared to Turing ones due to better hardware. For example, this document from Nvidia highlights the differences relating to optical flow acceleration between the three architectures. Whether there are any visible differences in the output, though, is another matter, as Nvidia may well just utilize the better capabilities for performance only.

Edit: Having done a brief scan through the DLSS SDK documentation, there's nothing inherently different between the three architectures when it comes to quality -- i.e., a Turing chip can produce the same quality of output as an Ada Lovelace one. The biggest factors that do affect quality are how developers implement it and what the rest of the rendering pipeline is like. For example, DLSS is affected by the precision and resolution of the motion vector buffer.

First, I will say, if it's not obvious, that I don't know how the tech works. I was thinking, however, that the GPU must be a factor in some way. Maybe not enough to matter in regards to quality.
 
I was thinking, however, that the GPU must be a factor in some way.
Only in regard to performance, unless developers deliberately go out of their way to use different-resolution motion vectors across different GPUs (which they're never going to do). That said, Nvidia might have something different going on in the actual DLSS algorithm depending on the architecture, but if there is, Nvidia has never said anything about it, nor is it mentioned in any developer document.
 
I've uploaded some screenshots from The Last of Us to Google Drive -- click here. These were taken at 4K with all graphics settings set to maximum, bar texture quality. There are two images at native resolution, using Low and Ultra textures, then two more with DLSS Performance mode for the same two texture settings. You can see for yourself how DLSS affects the impact high-resolution textures have on visual fidelity.

DLSS muddies the corduroy details and color gradient on the right shoulder. That's the problem with DLSS and upscalers: they have trouble with color gradients, which is why sharpening is required to "fix" the gradient issue, aka blurriness.

DLSS is great for anti-aliasing, but let's not kid ourselves about the image quality of the textures. Flat images like the No Truck sign are easy to crisp up with sharpening, but for detail in color it's a laughable no -- it's not better than native.
 
DLSS muddies the corduroy details and color gradient on the right shoulder. That's the problem with DLSS and upscalers: they have trouble with color gradients, which is why sharpening is required to "fix" the gradient issue, aka blurriness.
It's actually about the fact that textures are sampled at the native frame resolution, and not at the lowered resolution used for DLSS/FSR/etc. Any assets that have been created and tuned for native texture sampling bias can potentially show blurring/muddiness when using the upscaling bias.

This can all be resolved by developers through manual fine-tuning of individual materials' biases, but as this is clearly a lot of work, one tends to see it applied only sporadically (e.g. to materials that display a lot of high-frequency contrasting detail) or, typically in wonky PC ports, as a single global bias clamp that everyone just makes do with.
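To make the bias point concrete, here's a minimal sketch assuming the commonly cited log2(render/display) formula that upscaler developer guidance suggests as a baseline; per-material fine-tuning then adjusts on top of this.

```cpp
// Minimal sketch of the usual global texture mip-bias adjustment for
// upscalers. The log2 ratio is the commonly cited baseline; the exact
// recommended offset comes from the DLSS/FSR developer docs.
#include <cmath>
#include <cstdio>

// A negative bias makes texture sampling pick sharper mip levels, so a
// frame rendered at a lower internal resolution still fetches the
// texture detail the native-resolution output expects.
float texture_mip_bias(float render_width, float display_width)
{
    return std::log2(render_width / display_width);
}

int main()
{
    // DLSS Performance at 4K: 1920x1080 internal -> 3840x2160 output.
    std::printf("Performance @4K: %.2f\n", texture_mip_bias(1920.0f, 3840.0f)); // -1.00
    // DLSS Quality at 4K: 2560x1440 internal -> 3840x2160 output.
    std::printf("Quality     @4K: %.2f\n", texture_mip_bias(2560.0f, 3840.0f)); // ~-0.58
}
```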

Additionally, the algorithm is sensitive to the sub-pixel jitter used across consecutive frames and the quality of the motion vector buffer in use. Again, all of this can be fine-tuned accordingly, but it's too much work to do it properly.
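And a minimal sketch of the sub-pixel jitter side, assuming the Halton(2,3) low-discrepancy sequence that TAA-style upscalers commonly use; the exact sequence and phase length vary per title.

```cpp
// Minimal sketch of per-frame sub-pixel jitter using a Halton(2,3) sequence.
#include <cstdio>

// Radical inverse in a given base: the building block of the Halton sequence.
float halton(int index, int base)
{
    float result = 0.0f, f = 1.0f;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

int main()
{
    // Offsets in the [-0.5, 0.5) pixel range, applied to the projection each
    // frame so successive frames sample different sub-pixel positions.
    for (int frame = 1; frame <= 8; ++frame) {
        float jx = halton(frame, 2) - 0.5f;
        float jy = halton(frame, 3) - 0.5f;
        std::printf("frame %d: jitter = (%+.3f, %+.3f)\n", frame, jx, jy);
    }
}
```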

DLSS can (or has the potential to, if you prefer) render better-looking graphics than native rendering that's just using TAA, but that doesn't mean it always does.
 
Emperor's new clothes. DLSS is inherently worse, as it is lower resolution. I will never use it again; what a joke.
 
DLSS is like the fable "The Emperor's New Clothes". It pretends to be something it isn't; it cannot look as good as native by its very implementation. It is not rendering at full native resolution -- they can say "supersampling" all they want, but it is actually rendering with less fidelity than native, hence the blurry awfulness in games.
 
The best use of DLSS for me is using 1.78x resolution via DLDSR, then recouping the performance hit with DLSS ver. 2.5.1.
Native 1440p stands absolutely no chance in quality.
At native vs DLSS alone, it's comparable, but not really better IMO. I'm a 60 fps + controller gamer anyway, though, so I appreciate the DLDSR+DLSS trick a lot more than just the fps gain.
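For anyone trying to follow the numbers in that combo, here's a quick worked example. It assumes DLDSR's 1.78x factor applies to pixel count (i.e. 4/3 per axis) and, since the post doesn't say which DLSS mode, it assumes Quality mode's roughly 2/3-per-axis scale.

```cpp
// Worked example of the DLDSR + DLSS combination described above.
// Assumes DLDSR "1.78x" means (4/3)^2 of the pixel count and that
// DLSS Quality renders at roughly 2/3 of the output per axis.
#include <cstdio>

int main()
{
    const float base_w = 2560.0f, base_h = 1440.0f;   // native monitor resolution
    const float dldsr_axis = 4.0f / 3.0f;             // ~1.333 per axis
    const float dlss_quality = 2.0f / 3.0f;           // ~0.667 per axis

    // DLDSR target: the resolution the game outputs before downscaling.
    float out_w = base_w * dldsr_axis, out_h = base_h * dldsr_axis;
    // DLSS internal render resolution for that target.
    float in_w = out_w * dlss_quality, in_h = out_h * dlss_quality;

    std::printf("DLDSR target : %.0f x %.0f\n", out_w, out_h);   // ~3413 x 1920
    std::printf("DLSS renders : %.0f x %.0f\n", in_w, in_h);     // ~2276 x 1280
    // Net effect: internal render cost slightly below native 1440p, while the
    // image is reconstructed above 1440p and then downsampled back to the
    // display, which is where the quality gain over plain native 1440p comes from.
}
```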
 