Nvidia brings DLSS to VR, starting with No Man's Sky, Wrench, and Into the Radius

No one actually cares about yet another make-image-quality-worse gimmick. Real gamers want better image quality, but DLSS is just going backwards. Nothing else.

Quite soon the competition will be about who can make a game run faster with the worst image quality imaginable. Sounds "good", yeah.
DLSS can improve visual quality beyond native while also increasing performance. There is no downside.
 
Lmao, you can say on here that DLSS 2.0 sacrifices image quality. But that would be a big fat lie. Several tech journalists have showcased and demonstrated that in some titles DLSS 2.0 actually improves image quality, and I can attest to that: Death Stranding looks noticeably better visually with DLSS on than it does with it turned off, as confirmed by many tech outlets (I can link them if you still won't accept these facts).
Go ahead. Rendering at a lower resolution (than native) means data will be missing. That missing data is filled in with the help of AI. And that is supposed to make image quality better :joy:

Let me illustrate. Original "image" is:

-

(yeah, that's just a short line). Now let's use AI to fill in something that is "missing" and make it look "better":

maxresdefault.jpg


Compared to -, that image looks much "better", but -- for example is much closer to the original. Good luck with your "facts".
Your fanboy denial is amusing, however. As is your clear misunderstanding of the industry. AMD do make GPUs for the consoles, but they don't use the same APIs or drivers as Radeon, and these upscaling solutions are integrated into the driver. Radeon's market share is tiny, so if AMD want devs to spend the time and effort implementing their solution, it needs to work on both Nvidia and AMD drivers, otherwise devs won't bother. Currently Control runs better with RT on on a PS5 than it does on a 6800 XT. Clearly the devs couldn't be bothered with optimising Radeon drivers for ray tracing in Control.
Because console chips share almost the same architecture as Radeon chips, it's much easier to develop for consoles AND Radeon than for consoles AND Nvidia. Of course, if you pay enough, developers will develop for anyone.

Perhaps Control just sucks as a game, or Nvidia paid the developers to make it run badly on Radeon? Crysis 2 and tessellation is a good example where Nvidia paid to make a game run badly on Nvidia, but even worse on AMD.
I think you may misunderstand what DLSS is for. If you have a game that only runs at acceptable framerates at 1080p, or at 2160p with DLSS, the latter will *always* look far better. So it provides an increase in image quality, not a decrease.

Turning off DLSS at 2160p to gain image quality would be useless if the result is the game running at 20fps.

But when compared against native 2160p, DLSS looks worse. If a game runs at 20 FPS, then you buy better hardware, use a lower resolution, use lower settings or play another game. That's how it has worked for decades.

DLSS can improve visual quality beyond native while also increasing performance. There is no downside.
Like I said above: when data is missing and it's filled in with AI, the result cannot be better than the original.

Take a picture, any picture. Take 10% of the visual content out of it. Now use AI to fill in that missing 10%. How could that be better than the original? It cannot. At most, it could be equally good if the AI somehow fills the missing parts perfectly.

That's what DLSS is all about: take out information and try to guess what it should be. Unless DLSS guesses everything right, the result is worse than the original.
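For what it's worth, this thought experiment is easy to run yourself. A minimal sketch in Python, assuming NumPy and OpenCV are available; classic inpainting stands in for the "AI" here, and the picture is just random data, so everything in it is illustrative only:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in for "any picture"

mask = (rng.random((256, 256)) < 0.10).astype(np.uint8) * 255   # knock out ~10% of the pixels
damaged = original.copy()
damaged[mask > 0] = 0

# Classic inpainting (Telea) guesses the missing pixels from their surroundings.
restored = cv2.inpaint(damaged, mask, 3, cv2.INPAINT_TELEA)

# Measure how far the reconstruction is from the original.
mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)  # finite unless every guess was perfect
print(f"Reconstruction vs original: {psnr:.1f} dB PSNR")
```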
 
DLSS can improve visual quality beyond native while also increasing performance. There is no downside.
It's easy to make things appear better than they are by blurring the truth. So just to clarify the above statement...

DLSS can improve visual quality beyond its native resolution at a cost to performance. So, 1080p with DLSS can look better than 1080p native, but at say 5% less performance.

DLSS can approach the visual quality of higher resolutions while retaining performance close to its lower resolution. So, 1080p with DLSS can look close but not quite like 1440p native, while having performance more similar to 1080p and less like 1440p.

So in short... That's not the same thing as looking better than native and increasing performance. It increases performance compared to the higher resolution while having better visual quality than the lower resolution. That is the most accurate description.
 
But when compared against 2160p, DLSS look worse. If game runs 20 FPS then you buy better hardware, use lower resolution, use lower settings or play another game. That's how it has worked for decades.
Huh? Hardware is not unlimited. Budgets are not unlimited. If they were, everybody would be playing games at 16K.

If your hardware can handle a particular game at 1080p or 2160p with DLSS, you are going to choose the latter, because it is much higher quality. Period.

If you "buy better hardware", you are going to go with an even higher DLSS resolution for the best quality. So you buy two 3090s for $6K and an 8K display and you run 8K DLSS (not 2160p), because it looks better.
 
It's easy to make things appear better than they are by blurring the truth. So just to clarify the above statement...

DLSS can improve visual quality beyond its native resolution at a cost to performance. So, 1080p with DLSS can look better than 1080p native, but at say 5% less performance.

DLSS can approach the visual quality of higher resolutions while retaining performance close to its lower resolution. So, 1080p with DLSS can look close but not quite like 1440p native, while having performance more similar to 1080p and less like 1440p.

So in short... That's not the same thing as looking better than native and increasing performance. It increases performance compared to the higher resolution while having better visual quality than the lower resolution. That is the most accurate description.
If you look at analysis by Digital Foundry or other tech tubers, they demonstrate how in games like Control and Death Stranding there is more visible fidelity with DLSS Quality compared to native, with no loss in frame rate; in fact the frame rate improved. They zoom in on frozen sections of the game and they do appear to look sharper with the DLSS implementation. To my eyes, on my RTX 2080 equipped machine, Death Stranding does look noticeably better anti-aliased with DLSS Quality than it does at native, and the DLSS mode quite obviously offers a better image than native.

This does appear to be exclusive to some implementations of DLSS 2.0, so for the most part a loss of image quality is still felt when turning DLSS on. But as time goes on, more and more games are using DLSS 2.0 (or later), and we should see more and more games delivering better visual fidelity (on top of better frame rates).

 
Huh? Hardware is not unlimited. Budgets are not unlimited. If they were, everybody would be playing games at 16K.

If your hardware can handle a particular game at 1080p or 2160p with DLSS, you are going to choose the latter, because it is much higher quality. Period.
Because it's not. It contains AI-generated content that may suck.
If you "buy better hardware", you are going to go with an even higher DLSS resolution for the best quality. So you buy two 3090s for $6K and an 8K display and you run 8K DLSS (not 2160p), because it looks better.
Who uses a 6K or 8K display for gaming, really? Very few people, for obvious reasons.

Again, "better" is subjective, and DLSS does not look better in absolute terms.

If you look at analysis by Digital Foundry or other tech tubers, they demonstrate how in games like Control and Death Stranding there is more visible fidelity with DLSS Quality compared to native, with no loss in frame rate; in fact the frame rate improved. They zoom in on frozen sections of the game and they do appear to look sharper with the DLSS implementation. To my eyes, on my RTX 2080 equipped machine, Death Stranding does look noticeably better anti-aliased with DLSS Quality than it does at native, and the DLSS mode quite obviously offers a better image than native.
Sharper = better? If it's that easy, then DLSS is useless, since there are much better ways to create sharper images. For example Radeon Image Sharpening, which only lowers the framerate by around 1-2%, without any of DLSS's AI-created crap. Miles better than the DLSS solution.

Using your logic, Radeon Image Sharpening always creates better quality than native, while working on virtually every DirectX 9-11 and Vulkan game. Too bad it's not available for Nvidia yet.
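To make that comparison concrete: a sharpening pass only re-weights pixels that already exist, it doesn't invent new ones the way an upscaler has to. A minimal sketch in Python, assuming NumPy and SciPy; this is a plain unsharp mask, not AMD's actual CAS / Radeon Image Sharpening code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, amount=0.6, sigma=1.0):
    """Sharpen by adding back the high-frequency detail (img - blurred)."""
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur spatially, not across channels
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

frame = np.random.default_rng(1).random((128, 128, 3))  # stand-in for a rendered frame
sharpened = unsharp_mask(frame)                          # same resolution, no pixels invented
```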
 
Take a picture, any picture. Take 10% of the visual content out of it. Now use AI to fill in that missing 10%. How could that be better than the original? It cannot. At most, it could be equally good if the AI somehow fills the missing parts perfectly.
The DNN isn't trained to aim for 'original', or at the very least, the default DNN isn't - the training target used for creating the DNN in the first place is a 16K anti-aliased image. So with default weights in the network, it's trying to create a high resolution anti-aliased version of an aliased lower resolution image, using a reference that's visually better than both. Thus, there is the potential for an AI-based temporally upscaled frame to be visually better than the original.
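Roughly, the training setup being described looks like this. A minimal sketch in Python, assuming PyTorch; `TinyUpscaler` and `make_training_pair` are made-up stand-ins rather than Nvidia's pipeline, and the point is only that the loss is taken against an anti-aliased downsample of a much higher-resolution reference, not against the native frame:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Toy 2x upscaler standing in for the real DNN."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor worth of channels
            nn.PixelShuffle(2),                   # rearrange channels into a 2x larger image
        )
    def forward(self, x):
        return self.body(x)

def make_training_pair(very_high_res_frame):
    """From one very-high-res render, build (aliased low-res input, anti-aliased target)."""
    # Target: a high-quality anti-aliased image, here an 8x area-downsample of the reference.
    target = F.avg_pool2d(very_high_res_frame, kernel_size=8)
    # Input: a crudely point-sampled frame at half the target resolution (stands in for the
    # aliased image the game would actually render).
    low_res = very_high_res_frame[..., ::16, ::16]
    return low_res, target

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

frame = torch.rand(1, 3, 256, 256)       # stand-in for a 16K reference render
low, tgt = make_training_pair(frame)

opt.zero_grad()
loss = F.l1_loss(model(low), tgt)        # learn to match the AA reference, not "native"
loss.backward()
opt.step()
```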
 
The DNN isn't trained to aim for 'original', or at the very least, the default DNN isn't - the training target used for creating the DNN in the first place is a 16K anti-aliased image. So with default weights in the network, it's trying to create a high resolution anti-aliased version of an aliased lower resolution image, using a reference that's visually better than both. Thus, there is the potential for an AI-based temporally upscaled frame to be visually better than the original.
I don't see it that way. When the resolution is lowered (versus native), data will be missing. When the resolution is higher (versus native), there is more data. Taking data from a higher resolution and using it to fill gaps in lower-resolution data can at most be equal to the native resolution data. It's just impossible to be better than the native image in absolute terms. It's possible to be "better", though.
 
I don't see it that way. When the resolution is lowered (versus native), data will be missing. When the resolution is higher (versus native), there is more data. Taking data from a higher resolution and using it to fill gaps in lower-resolution data can at most be equal to the native resolution data. It's just impossible to be better than the native image in absolute terms. It's possible to be "better", though.
This argument assumes that the data produced by rendering at native resolution is visually 'perfect' - i.e. there is no means of producing a more aesthetically pleasing set of values. However, if one took a larger data set and sampled it down, then it arguably would be better. For example, if one took an 8K image and used a 2x2 rotated grid sampling kernel, where each sample location was a blend of 4 stochastically distributed sub-pixels, to produce a 4K result, then this final image would almost certainly look better than a directly rendered 4K image.
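That kind of downsample is simple to express. A rough sketch in Python, assuming NumPy; the jitter here is plain uniform rather than a true rotated-grid pattern, so treat it as an illustration of blending stochastic sub-pixel samples, not an exact reproduction of the kernel described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def supersampled_downscale(img_hi, factor=2, samples=4):
    """Average `samples` stochastically jittered taps inside each factor x factor block."""
    h, w, c = img_hi.shape
    out = np.zeros((h // factor, w // factor, c))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # jittered sub-pixel positions within the source block
            ys = y * factor + rng.integers(0, factor, samples)
            xs = x * factor + rng.integers(0, factor, samples)
            out[y, x] = img_hi[ys, xs].mean(axis=0)
    return out

hi = rng.random((64, 64, 3))        # stand-in for an "8K" render
lo = supersampled_downscale(hi)     # "4K" result built from blended sub-pixel samples
print(lo.shape)                     # (32, 32, 3)
```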

So if one knew what a 'better looking' 4K set of data should look like, it's possible to replicate this from a smaller set of data. This 'upscaled' data set won't be the same as the 'native' but that's not important - after all, the set produced from sampling the larger data set isn't the same either. This example demonstrates it quite nicely. The following image is the original 'native' one:

original.png

The upscaler is given the following to work with:

input.png

And the result is:

EdsrOutput.png

If one subtracts the original from the DNN-upscaled image, it becomes clear that the data sets are not the same:

difference.png

However, the DNN one is visually better than the original, and that's what the likes of DLSS, or any other DNN-based upscale algorithm, aims to achieve. They're not designed to reproduce a bit-by-bit exact replication of a native data set (although, in theory, the DNN could be trained to do so); instead, the goal is to obtain as visually pleasing a result as possible.
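For anyone who wants to reproduce the comparison, the subtraction itself is trivial. A sketch in Python, assuming Pillow and NumPy and that the two images share the same dimensions; the file names just mirror the attachments above:

```python
import numpy as np
from PIL import Image

original = np.asarray(Image.open("original.png").convert("RGB")).astype(np.int16)
upscaled = np.asarray(Image.open("EdsrOutput.png").convert("RGB")).astype(np.int16)

# Non-zero wherever the DNN output deviates from the native render.
diff = np.abs(original - upscaled).astype(np.uint8)
Image.fromarray(diff).save("difference.png")
print("Mean absolute difference per channel:", diff.mean(axis=(0, 1)))
```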
 
This argument assumes that the data produced by rendering at native resolution is visually 'perfect' - i.e. there is no means of producing a more aesthetically pleasing set of values. However, if one took a larger data set and sampled it down, then it arguably would be better. For example, if one took an 8K image and used a 2x2 rotated grid sampling kernel, where each sample location was a blend of 4 stochastically distributed sub-pixels, to produce a 4K result, then this final image would almost certainly look better than a directly rendered 4K image.
Exactly, because native is objectively the best possible. There is nothing to improve on the best possible. Using things like anti-aliasing may make a picture look "better" than the original, but the same can be applied to any graphical material. Because if an altered image, while being visually "better", were also better in absolute terms, then every image would be altered. This is very evident with modelling pictures. I prefer them as natural as possible.

What you describe above is basically what AMD's Virtual Super Resolution does, except it's missing the lower-resolution rendering part, which is what DLSS does.
So if one knew what a 'better looking' 4K set of data should look like, it's possible to replicate this from a smaller set of data. This 'upscaled' data set won't be the same as the 'native' but that's not important - after all, the set produced from sampling the larger data set isn't the same either. This example demonstrates it quite nicely. The following image is the original 'native' one:

View attachment 87739

The upscaler is given the following to work with:

View attachment 87740

And the result is:

View attachment 87741

If one subtracts the original from the DNN-upscaled image, it becomes clear that the data sets are not the same:

View attachment 87742

However, the DNN one is visually better than the original, and that's what the likes of DLSS, or any other DNN-based upscale algorithm, aims to achieve. They're not designed to reproduce a bit-by-bit exact replication of a native data set (although, in theory, the DNN could be trained to do so); instead, the goal is to obtain as visually pleasing a result as possible.
That altered image is visually better, but it has one big problem: it's quite far from the original. If you look closely at, for example, the white spots at the bottom, they are a very different shape in the upscaled picture. Essentially they are smoothed out in the final result, but the original clearly shows those spots are quite rough.

A perfect example where upscaling provides "better" quality that "looks better" precisely because it's nowhere near the original.
 
I understand fairly adequately, but perhaps you aren't seeing how your wording is not accurate.

To 'lower' the resolution suggests there was a starting point of a high resolution in the render pipeline. To lower describes a relative situation.

'Lowered' suggests a higher resolution has already been rendered and is available. Well it isn't if you don't have the performance.

So DLSS isn't 'lowered' at all. It starts from a fixed resolution, be it 720p, or 1080p, or whatever, and increases that by adding information to each frame. It's a technicality, but there is a distinction here.

It's an important one, because some people here see this as some kind of pollution to a purist 'native' image, as if you are taking something away and thus it is inferior.

That's not the case if you can't render at a 'native' resolution in the first place, due to lack of performance or whatever. People that rally against DLSS are glass half empty people, and view that as such.

People who see its ability to increase resolution, producing near-pristine image quality, have a glass-half-full attitude. I'm getting something I couldn't have in the first place: more pixels. I'm not losing out.


Again, you are using word salad to try and dismiss what DLSS does.

Yes, YOU MUST USE A LOWER RESOLUTION, for you to gain performance from DLSS. <---FACT


You gain NOTHING from DLSS if you game at the same resolution. The technology would be pointless. Again, you know this and keep playing word salad, because you are trying to spread misinformation and propaganda.

You also can't game at 4K on a 1080p monitor, like you suggest.
 
Lmao, you can say on here that DLSS 2.0 sacrifices image quality. But that would be a big fat lie. Several tech journalists have showcased and demonstrated that in some titles DLSS 2.0 actually improves image quality, and I can attest to that: Death Stranding looks noticeably better visually with DLSS on than it does with it turned off, as confirmed by many tech outlets (I can link them if you still won't accept these facts).

Your fanboy denial is amusing, however. As is your clear misunderstanding of the industry. AMD do make GPUs for the consoles, but they don't use the same APIs or drivers as Radeon, and these upscaling solutions are integrated into the driver. Radeon's market share is tiny, so if AMD want devs to spend the time and effort implementing their solution, it needs to work on both Nvidia and AMD drivers, otherwise devs won't bother. Currently Control runs better with RT on on a PS5 than it does on a 6800 XT. Clearly the devs couldn't be bothered with optimising Radeon drivers for ray tracing in Control.

The fact is DLSS is amazing, and die-hard AMD fanboys like yourself are clearly triggered by it and going around making fools of themselves online. Keep it up, it's hilarious!

It seems you are making a case for AMD's solution, FidelityFX Super Resolution (FSR), then.

Seeing that AMD's solution is hardware agnostic, nearly every Game Developer will be using it instead of the proprietary nVidia method of DLSS.

Both Game Consoles use RDNA, and that makes it much easier to have a unified Game Industry... as so many Developers have been begging for. Because it is the Game Makers who use the hardware, not end users. So you make a game once... and distribute it to many platforms.



Dr Su engineered RDNA to work with what Game Devs wanted... and needed. And FidelityFX Super Resolution doesn't put a strain on Game Development, or need nVidia's involvement. It just works.

DLSS is already pushed aside, bro (shadowboxer)... why would a game developer worry about it (or about the measly 5% of RTX 30 owners), when they can just run FSR anyway...?

There will be 40 million people gaming on RDNA by this Holiday.... Game Developers have no time for nVidia's non-gaming hardware.
 
That altered image is visually better, but it has one big problem: it's quite far from the original.
True, it's more visually pleasing to the eye, so it looks better. The original looks like crap in comparison. Any sane person shown these two pictures would say the AI-upscaled one looks better. Because it does, even if it's not "faithful". You are fetishizing the "native image" to the point of absurdity. An image being "native" (whatever that even means in the current state of rendering pipelines, where various buffers can be sub-native resolution by design for performance reasons) does not make it inherently "better" by definition.
"It must be better because it's native, as native is always better than non-native" is a tautology and a fallacy.
If a game was designed to use DLSS and you were not able to disable it, wouldn't you consider the generated image "native"? It's exactly what the developers intended; they just used DLSS to achieve their goal.
 
Again, you are using word salad to try and dismiss what DLSS does.

Yes, YOU MUST USE A LOWER RESOLUTION, for you to gain performance from DLSS. <---FACT


You gain NOTHING from DLSS if you game at the same resolution. The technology would be pointless. Again, you know this and keep playing word salad, because you are trying to spread misinformation and propaganda.

You also can't game at 4K on a 1080p monitor, like you suggest.

I said why the distinction is important. DLSS enables people to play at a level of image quality higher than they otherwise would with their given hardware. Thus you aren't lowering in all those use case scenarios. You're increasing.

If you feel you had ample performance for 'native resolution' then you wouldn't use it, obviously. Only negative nellies here say it takes something away from their experience and try to cast aspersions on it.

Luckily for those with such a poor view of this blatantly amazing and impressive technology it's currently optional.
 
True, it's more visually pleasing to the eye, so it looks better. The original looks like crap in comparison. Any sane person shown these two pictures would say the AI-upscaled one looks better. Because it does, even if it's not "faithful". You are fetishizing the "native image" to the point of absurdity. An image being "native" (whatever that even means in the current state of rendering pipelines, where various buffers can be sub-native resolution by design for performance reasons) does not make it inherently "better" by definition.
"It must be better because it's native, as native is always better than non-native" is a tautology and a fallacy.
The problem is: if those white spots are NOT supposed to be symmetrical, then the AI-enhanced image truly sucks. If there is an asymmetric spot, I really don't want any AI to "enhance" it into a symmetric one.

Simple as that. AI could easily create "better looking" images just by making them more symmetric. But sometimes an image really should contain rough or asymmetric spots, and when AI "fixes" them, the AI also presents something that should not exist. That's the very point of native: it is (at least to some degree) what it should be, while the AI-enhanced image is something else.

If a game was designed to use DLSS and you were not able to disable it, wouldn't you consider the generated image "native"? It's exactly what the developers intended; they just used DLSS to achieve their goal.
In that case, what comparison can we make? If there is only DLSS, how can we compare it against native? Not possible, so there's your answer.
 
I said why the distinction is important. DLSS enables people to play at a level of image quality higher than they otherwise would with their given hardware. Thus you aren't lowering in all those use case scenarios. You're increasing.

If you feel you had ample performance for 'native resolution' then you wouldn't use it, obviously. Only negative nellies here say it takes something away from their experience and try to cast aspersions on it.

Luckily for those with such a poor view of this blatantly amazing and impressive technology it's currently optional.

lol...
So if you have a 1440p Monitor, DLSS allows you to game at 4k..?

And more to my point, as you yourself just said^: if you could play at native resolution, why would you need DLSS (rendering at 720p) and upscale to 1440p for performance...
 
lol...
So if you have a 1440p Monitor, DLSS allows you to game at 4k..?

And more to my point, as you yourself just said^: if you could play at native resolution, why would you need DLSS (rendering at 720p) and upscale to 1440p for performance...

DLSS allows you to game at an image quality higher than you might otherwise be able to with your hardware configuration.

"I want to game at 1440p with full ray tracing effects and a buttery smooth 60FPS."

Your hardware says no ray tracing at 60FPS. Or lower settings. Or a lower resolution with a significant image quality hit.

Your hardware says no.

DLSS says yes. DLSS says you can have higher image quality than you would otherwise be able to achieve.

I probably can't labour the point any more than this for you to understand it, sorry.
 
DLSS allows you to game at an image quality higher than you might otherwise be able to with your hardware configuration.

"I want to game at 1440p with full ray tracing effects and a buttery smooth 60FPS."

Your hardware says no ray tracing at 60FPS. Or lower settings. Or a lower resolution with a significant image quality hit.

Your hardware says no.

DLSS says yes. DLSS says you can have higher image quality than you would otherwise be able to achieve.

I probably can't labour the point any more than this for you to understand it, sorry.

You are moving the goalposts.

Everyone understands the concept of upscaling. And for three posts you have been back-tracking on what you said and suggested earlier. So it is good that you have come to an understanding of what DLSS is...


But do understand that nVidia cannot sustain the business model of paying for each DLSS/RTX game. nVidia is only doing that to get lemmings excited over their "specialized" hardware, which Game Developers don't want to code for unless nVidia pays them.

That is why AMD's FSR is superior in deployment and acceptance.
 
You are moving the goalposts.

Everyone understands the concept of upscaling. And for three posts you have been back-tracking on what you said and suggested earlier. So it is good that you have come to an understanding of what DLSS is...


But do understand that nVidia cannot sustain the business model of paying for each DLSS/RTX game. nVidia is only doing that to get lemmings excited over their "specialized" hardware, which Game Developers don't want to code for unless nVidia pays them.

That is why AMD's FSR is superior in deployment and acceptance.

No goalposts have been moved. I defined the difference multiple times. Painfully, laboriously and politely, so you can comprehend that DLSS as a process is an addition, not a subtraction. It makes the experience better for the have-nots in terms of GPU performance, not worse for the haves....

Which apparently you failed to comprehend. Oh well. I even explained why there is a distinct difference in viewpoints of the technology, with many rather partisan fools blindly denying the incredible aid that it is especially for gamers with limited hardware.

Nvidia have demonstrated they needed multiple software iterations to get DLSS to work as well as it does, requiring strong hardware-accelerated deep learning performance, which AMD does not particularly have in consumer GPUs. It's a multi-part hardware and software combination.

An open source solution would be desirable for the industry but there is no guarantee it'll work as well. The FSR shot that has been released for Godfall looks distinctly poorer than any result DLSS manages.

So much of this work requires a close relationship with the GPU designer willing to invest heavily in projects with developers if need be to advance the core technology of the industry.

Big bad Nvidia throwing billions of dollars at hardware, software and developers to try and improve the experience of everyday gamer joe using their GPUs. Forcing a response from AMD. Or they could just sit on their asses and let AMD never come up with it because they don't feel the need. Big investment must mitigate the risks and then the consumer can win. It seems you only see a negative despite the situation evidently being far more nuanced. Not surprised.
 