Nvidia brings DLSS to VR, starting with No Man's Sky, Wrench, and Into the Radius

jsilva

What just happened? Nvidia's monthly update on DLSS-supported games adds nine new titles to the list, including the first VR ones — No Man's Sky, Wrench, and Into the Radius. Besides VR titles, Nvidia also announced that No Man's Sky and Wrench now support DLSS in desktop mode, alongside other titles such as Amid Evil, Aron's Adventure, Everspace 2, Redout: Space Assault, Scavengers, and Metro Exodus PC Enhanced Edition.

When playing No Man's Sky at 4K resolution with DLSS 'Performance Mode' enabled, players should expect up to a 70% increase in average FPS. In virtual reality mode, Nvidia claims that "DLSS doubles VR performance at the Ultra graphics preset," which is apparently enough to keep the frame rate above 90 FPS on the Oculus Quest 2 headset when using a GeForce RTX 3080.
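To put those numbers in context, here is a rough back-of-the-envelope sketch (ours, not from Nvidia's announcement) of the internal resolutions DLSS typically renders at before upscaling to 4K, using the commonly cited per-axis scale factors for each quality mode; the exact factors can vary by game and DLSS version.

```python
# Approximate internal render resolutions for DLSS quality modes.
# The per-axis scale factors below are the commonly cited defaults;
# actual values can differ per game and DLSS version.
DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

def internal_resolution(out_w, out_h, mode):
    """Return the approximate resolution DLSS renders internally."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in DLSS_SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    share = (w * h) / (3840 * 2160)
    print(f"{mode:>17}: {w}x{h}  (~{share:.0%} of the 4K pixel count)")
```

Performance mode, for instance, works out to roughly 1920x1080 internally, about a quarter of the 4K pixel count, which is why uplifts of this size are plausible.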

The implementation of Nvidia's DLSS, together with the Expeditions update Hello Games launched for No Man's Sky, might be enough to entice some gamers to hop back in for a new space adventure.

It should be noted that these performance claims come directly from Nvidia, but based on our previous tests, DLSS 2.0 does deliver. You can either recover most of the performance lost after enabling ray tracing effects, or get a sizable performance boost by running standard graphics and leveraging DLSS alone.

Also read: Ray Tracing & DLSS with the GeForce RTX 3080

Wrench also gets a significant performance increase with DLSS enabled. Depending on your system's configuration, driver version, and game settings, Nvidia DLSS can improve Wrench's framerate by up to 80% in VR mode and more than double the performance at 4K resolution.

The new Wrench update also adds support for ray-tracing effects, enabling enhanced ambient occlusion, reflections, and shadows that add a new layer of realism to the mechanic simulator.

Nvidia hasn't shared any data regarding Into the Radius performance improvements with DLSS. In the other six titles mentioned in Nvidia's announcement, performance bumps from using DLSS range from 40% in Scavengers to more than 100% in Metro Exodus PC Enhanced Edition.

Now with 50 titles supporting DLSS, including a few VR games, it looks like we're finally seeing the adoption the technology arguably deserves. We can expect even more games with DLSS once the Unity engine receives native support for it, and the technology is also rumored to be coming to a new Nintendo Switch.

Don't forget AMD is also developing its own "DLSS" called FidelityFX Super Resolution. Although it hasn't been confirmed yet, AMD's upscaling technology should be based on Microsoft's DirectML API and is set for a release later this year.


 
Hard to complain about that. It's clearly an awesome piece of tech where it works. It's a slow rollout though. It clearly requires collaboration between developer and Nvidia. Is it sustainable? Is it a good approach for the community in general?
 
No-one actually cares about another make-image-quality-worse thing. Real gamers want better image quality, but DLSS is just going backwards. Nothing else.

Quite soon the competition will be about who can make a game run faster with the worst image quality ever imaginable. Sounds "good", yeah.
 
No-one actually cares about another make-image-quality-worse thing. Real gamers want better image quality, but DLSS is just going backwards. Nothing else.

Quite soon the competition will be about who can make a game run faster with the worst image quality ever imaginable. Sounds "good", yeah.

Well said. What worries me is when it will stop. If we look back, we may see a big difference during these stages as time goes on.
 
DLSS is amazing. Hoping it arrives on the new Nintendo Switch along with ray tracing. Ray-traced Zelda: Breath of the Wild at 4K please, Nvidia. Nintendo games would likely benefit a lot from DLSS as they are kinda bright and chunky and don't need the super fine detail that you notice you lose with DLSS.

Remember when the tech was barely a few months old and some of the less professional tech tubers and journalists were already calling it a fail? Looks like the egg is on their faces now! *cough* Techspot *cough*.

Personally I'm quite interested to see AMD's response. It will likely work on Nvidia cards as I don't think game devs would bother implementing it if it's stuck to the tiny percentage of people who buy Radeons for gaming.
 
No-one actually cares about another make-image-quality-worse thing. Real gamers want better image quality, but DLSS is just going backwards. Nothing else.

Quite soon the competition will be about who can make a game run faster with the worst image quality ever imaginable. Sounds "good", yeah.
Your facts about DLSS 2.0 are incorrect. In several implementations it looks better than native resolution. Examples of this are Control, Death Stranding and Metro Exodus Enhanced Edition. Go check out Digital Foundry showcasing the improved performance with DLSS Quality.

But of course we all knew you wouldn’t like it, it wasn’t made by AMD!
 
"Don't forget AMD is also developing its own "DLSS" called FidelityFX Super Resolution. Although it hasn't been confirmed yet, AMD's upscaling technology should be based on Microsoft's DirectML API and is set for a release later this year."

AMD already said that their solution wouldn't be "machine learning" based.
 
Hard to complain about that. It's clearly an awesome piece of tech where it works. It's a slow rollout though. It clearly requires collaboration between developer and Nvidia. Is it sustainable? Is it a good approach for the community in general?
Not at all.
 
Personally I'm quite interested to see AMD's response. It will likely work on Nvidia cards as I don't think game devs would bother implementing it if it's stuck to the tiny percentage of people who buy Radeons for gaming.
Oh yeah, the tiny percentage of people who own, like, a PS5 or Xbox Series X; all those consoles have a Radeon GPU inside :joy:

Anyway, AMD's solution should be much easier for developers than DLSS, so expect it to gain attention very fast.
Your facts about DLSS 2.0 are incorrect. In several implementations it looks better than native resolution. Examples of this are Control, Death Stranding and Metro Exodus Enhanced Edition. Go check out Digital Foundry showcasing the improved performance with DLSS Quality.

But of course we all knew you wouldn’t like it, it wasn’t made by AMD!
No, it does not. Taking something from the native image and replacing it with something the AI thinks is right is not better. It simply cannot be. Of course you can add some AI to make the image look "better", but in absolute terms the original image is always best and any modifications to it will make it worse.

Like I have said earlier, if (and it probably will) AMD's solution also sacrifices image quality, I won't approve it. I have always opposed technologies that sacrifice image quality to make games run better, and I won't change my opinion. You screwed up there.
 
No, it does not. Taking something from the native image and replacing it with something the AI thinks is right is not better. It simply cannot be. Of course you can add some AI to make the image look "better", but in absolute terms the original image is always best and any modifications to it will make it worse.

Like I have said earlier, if (and it probably will) AMD's solution also sacrifices image quality, I won't approve it. I have always opposed technologies that sacrifice image quality to make games run better, and I won't change my opinion. You screwed up there.
...Do you understand the technology at all?

If you want a demanding game to run at a high FPS and look good, DLSS and equivalent will most certainly look better than "native image" (as you'd have to crank the quality down for that).

It was never just about making an image look better (where did you get that ridiculous idea?), but making a somewhat lower quality image look a lot better with much better performance. And FYI, it delivers in spectacular fashion.
 
...Do you understand the technology at all?

If you want a demanding game to run at a high FPS and look good, DLSS and equivalent will most certainly look better than "native image" (as you'd have to crank the quality down for that).
When talking about image quality, FPS is irrelevant. Also DLSS compared to native quality will always be worse.
It was never just about making an image look better (where did you get that ridiculous idea?), but making a somewhat lower quality image look a lot better with much better performance. And FYI, it delivers in spectacular fashion.
Where did I say that? I said it could not make image quality better.

Like I said, it sacrifices image quality for speed guessing what image "should" look like. Something I would not approve.
 
When talking about image quality, FPS is irrelevant. Also DLSS compared to native quality will always be worse.

Where did I say that? I said it could not make image quality better.

Like I said, it sacrifices image quality for speed guessing what image "should" look like. Something I would not approve.
What are you talking about? No one was talking about pure image quality. You responded to "performance" with "image quality".
If you can't see what the actual use of the technology is (for you to "approve" of), then whatever. You play your demanding games on ultra with minimal FPS, the rest of us will use DLSS for a better experience overall.
 
Remember when the tech was barely a few months old and some of the less professional tech tubers and journalists were already calling it a fail? Looks like the egg is on their faces now! *cough* Techspot *cough*.

They called DLSS a fail because it *was* a fail. They were correct, and it was Nvidia that came away from the reviews with "egg on their face" when a simple sharpening filter outperformed DLSS.

So Nvidia went back to work and came up with a version of DLSS a year later that was actually *good* and the same reviewers lauded them for that. Seems like the reviewing system worked out pretty well for everyone.

Don't try to play "holier than thou," it's embarrassing.
 
Taking something from the native image and replacing it with something the AI thinks is right is not better. It simply cannot be. Of course you can add some AI to make the image look "better", but in absolute terms the original image is always best and any modifications to it will make it worse.

What I would say is that DLSS does not take anything away, it only adds information. It isn't downgrading a high resolution image, it upgrades a lower resolution one. As I am sure you are aware, clarifying this.

These are video games after all, rendered entirely inside a machine. It isn't a photo of the real world or a replication of it where even the best AI up scaling may distort or change what you might class as 'originality.'

No, the 'original' scene for a video game is whatever the machine spits out. Whatever pixel is calculated at that moment through the thousands of lines of shader code is the reality of the image. Adding in simulation or artificiality prior to ultimate output might somehow pollute an original photo or video, but arguably not so much an already entirely artificially created image!

DLSS works so well because of this fact. You can have an algorithm that can perfectly extrapolate and smooth out a render better than a raw unfiltered native image. All based on what the pixel would be if it was followed through the entire pipeline at a higher resolution as intended by the original developer. It's a clever shortcut with little downside.

Virtually every media we consume has this kind of pollution to the image, usually for artistic reasons. Nobody really consumes digital photos or video that aren't manipulated in some way these days. Why DLSS would be considered the devil's work by some people I don't know.
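For anyone wondering what "adding information" looks like in practice, below is a heavily simplified sketch of the kind of temporal upscaler DLSS 2.0 belongs to: a low-resolution frame plus per-pixel motion vectors blended with a reprojected high-resolution history buffer. This is purely illustrative and assumes nothing about Nvidia's internals; the real reconstruction is done by a trained neural network, and the fixed blend weight here is only a stand-in.

```python
import numpy as np

def temporal_upscale(low_res, motion_vectors, history, scale=2):
    """Illustrative temporal upscaling step (not Nvidia's actual algorithm).

    low_res        : (h, w, 3)  current frame rendered at reduced resolution
    motion_vectors : (H, W, 2)  per-pixel motion at output resolution
    history        : (H, W, 3)  previously reconstructed high-res frame
    """
    H, W, _ = history.shape

    # 1. Naively upsample the current low-res frame to output resolution.
    #    (DLSS jitters the camera sub-pixel each frame so successive frames
    #    sample different positions; a real implementation resolves that here.)
    upsampled = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)[:H, :W]

    # 2. Reproject the previous high-res result using motion vectors so the
    #    history lines up with where objects are in the current frame.
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip((ys - motion_vectors[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip((xs - motion_vectors[..., 0]).astype(int), 0, W - 1)
    reprojected = history[src_y, src_x]

    # 3. Blend new samples into the accumulated history. DLSS replaces this
    #    fixed blend with a neural network that decides, per pixel, how much
    #    history to trust (rejecting it on disocclusions, ghosting, etc.).
    alpha = 0.1  # weight given to the new frame's samples
    return alpha * upsampled + (1 - alpha) * reprojected
```

Run over many frames, each output pixel ends up informed by far more samples than a single low-resolution render contains, which is where the "added information" comes from.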
 
What I would say is that DLSS does not take anything away, it only adds information. It isn't downgrading a high resolution image, it upgrades a lower resolution one. As I am sure you are aware, clarifying this.

These are video games after all, rendered entirely inside a machine. It isn't a photo of the real world or a replication of it where even the best AI up scaling may distort or change what you might class as 'originality.'

No, the 'original' scene for a video game is whatever the machine spits out. Whatever pixel is calculated at that moment through the thousands of lines of shader code is the reality of the image. Adding in simulation or artificiality prior to ultimate output might somehow pollute an original photo or video, but arguably not so much an already entirely artificially created image!

DLSS works so well because of this fact. You can have an algorithm that can perfectly extrapolate and smooth out a render better than a raw unfiltered native image. All based on what the pixel would be if it was followed through the entire pipeline at a higher resolution as intended by the original developer. It's a clever shortcut with little downside.

Virtually every media we consume has this kind of pollution to the image, usually for artistic reasons. Nobody really consumes digital photos or video that aren't manipulated in some way these days. Why DLSS would be considered the devil's work by some people I don't know.
If all that is true, there is no point in increasing the power of the graphics cards with every generation. We should simply use AI to play in 8K, 12K and such. Things would be much easier for everybody. Unfortunately, I guess the whole thing is not so simple.
 
50 titles with DLSS support is not an achievement in my opinion. It just paints the picture of a very slow pickup rate considering it's been out for at least 3 years now with the introduction of Turing. Contrary to what Nvidia claims about ease of implementation, the fact that only a handful of games have implemented DLSS proves otherwise. With it baked into a certain version of Unreal Engine 4 and also Unity (by the end of 2021), the take-up rate may trend higher. But a big chunk of games are not built using the Unreal/Unity engines. With AMD's hardware-agnostic FSR coming up some time this year, it may hamper Nvidia's DLSS take-up further.

There are tradeoffs with using DLSS, but I think the general consensus is that the pros far outweigh the cons, i.e. there may be some visual downgrades like flickering and ghosting for moving images, but image quality in most cases is as good if not sharper.
 
If all that is true, there is no point in increasing the power of the graphics cards with every generation. We should simply use AI to play in 8K, 12K and such. Things would be much easier for everybody. Unfortunately, I guess the whole thing is not so simple.

It's not simple obviously but it's true enough that for more advanced AI you also need more advanced hardware.

If you want to extrapolate out with refinement on current titles then starting at a higher resolution can help. My point was that DLSS does not necessarily compromise the 'original' image if the end result is still the developer's vision for the game.

If you end up where DLSS or something like it is heavily integrated into the render pipeline of every game in future that'll just be a part of how the developer intends the game.

It'll just be another technique applied to reach the performance level required to service the vision of developers.
 
It'll just be another technique applied to reach the performance level required to service the vision of developers.
Yup. This is the case for all rendering algorithms that have been created over the years. Phong lighting, LOD, mipmapping, bump mapping, tessellation, ambient occlusion, et al were all methods to simulate a particular visual effect at a reduced performance overhead. Even current ray tracing techniques are full of similar simplifications and tricks. AI-based temporal upscaling will eventually be commonplace but, for now, it's simply an option - one doesn't have to use it, so why complain?
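To that point, here is a minimal sketch of classic Blinn-Phong shading, one of the oldest of those shortcuts: a handful of vector operations that fake the appearance of a lit, glossy surface rather than simulating light transport. The function and inputs are illustrative only.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(normal, light_dir, view_dir, base_color,
                light_color=np.ones(3), shininess=32.0):
    """Blinn-Phong shading: a cheap approximation of a glossy surface,
    not a physical simulation of how light actually bounces around."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(l + v)                      # half-vector shortcut

    diffuse = max(np.dot(n, l), 0.0)          # Lambertian term
    specular = max(np.dot(n, h), 0.0) ** shininess

    return base_color * light_color * diffuse + light_color * specular

# Example: a reddish surface lit from above and slightly behind the viewer.
print(blinn_phong(normal=np.array([0.0, 0.0, 1.0]),
                  light_dir=np.array([0.0, 1.0, 1.0]),
                  view_dir=np.array([0.0, 0.0, 1.0]),
                  base_color=np.array([1.0, 0.2, 0.2])))
```

The same logic applies to DLSS: it is another approximation that buys a visual result for a fraction of the cost of computing it the brute-force way.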
 
It's not simple obviously but it's true enough that for more advanced AI you also need more advanced hardware.

If you want to extrapolate out with refinement on current titles then starting at a higher resolution can help. My point was that DLSS does not necessarily compromise the 'original' image if the end result is still the developer's vision for the game.

If you end up where DLSS or something like it is heavily integrated into the render pipeline of every game in future that'll just be a part of how the developer intends the game.

It'll just be another technique applied to reach the performance level required to service the vision of developers.

It seems you do not understand what DLSS is.
(" It isn't downgrading a high resolution image, it upgrades a lower resolution one. As I am sure you are aware, clarifying this." -vulcanproject )

To gain the performance they are speaking of, you have to lower the game's resolution.... otherwise you gain no performance. I think you know this and are playing d0mb, or games with words. Because having the end-user lower the game's resolution is what DLSS is all about and how it makes its performance gains. (Visual quality aside).

Secondly, ("If you end up where DLSS or something like it is heavily integrated into the render pipeline of every game in future that'll just be a part of how the developer intends the game.")

IF you do that^, you end up with AMD's FidelityFX, which is already supported in over 40 games...! It is the reason Nvidia is trying to make noise and stay relevant.
 
Most likely the faults of DLSS will be a lot more noticeable with VR, simply because everything is so much closer to your eyes. It's good for the ones that want to try VR with less powerful hardware, but ultimately, VR is one of those instances where you really need all the resolution you can get.
 
What are you talking about? No one was talking about pure image quality. You responded to "performance" with "image quality".
If you can't see what the actual use of the technology is (for you to "approve" of), then whatever. You play your demanding games on ultra with minimal FPS, the rest of us will use DLSS for a better experience overall.
I play games with less than ultra settings without any AI guessing crap.
What I would say is that DLSS does not take anything away, it only adds information. It isn't downgrading a high resolution image, it upgrades a lower resolution one. As I am sure you are aware, clarifying this.
Compared to the original high resolution image, it takes quite a lot away and guesses what it should look like.
These are video games after all, rendered entirely inside a machine. It isn't a photo of the real world or a replication of it where even the best AI up scaling may distort or change what you might class as 'originality.'

No, the 'original' scene for a video game is whatever the machine spits out. Whatever pixel is calculated at that moment through the thousands of lines of shader code is the reality of the image. Adding in simulation or artificiality prior to ultimate output might somehow pollute an original photo or video, but arguably not so much an already entirely artificially created image!

DLSS works so well because of this fact. You can have an algorithm that can perfectly extrapolate and smooth out a render better than a raw unfiltered native image. All based on what the pixel would be if it was followed through the entire pipeline at a higher resolution as intended by the original developer. It's a clever shortcut with little downside.

Virtually every media we consume has this kind of pollution to the image, usually for artistic reasons. Nobody really consumes digital photos or video that aren't manipulated in some way these days. Why DLSS would be considered the devil's work by some people I don't know.
Partially agreed, but I still like to see what I'm supposed to see, not what the AI thinks I should see. Also, in many cases DLSS just creates quality that is very far from the "original" (by original, I mean without DLSS); it just doesn't work perfectly and that is why I won't approve it.
 
It seems you do not understand what DLSS is.
(" It isn't downgrading a high resolution image, it upgrades a lower resolution one. As I am sure you are aware, clarifying this." -vulcanproject )

To gain the performance they are speaking of, you have to lower the games resolution....
I understand fairly adequately, but perhaps you aren't seeing how your wording is not accurate.

To 'lower' the resolution suggests there was a starting point of a high resolution in the render pipeline. To lower describes a relative situation.

'Lowered' suggests a higher resolution has already been rendered and is available. Well it isn't if you don't have the performance.

So DLSS isn't 'lowered' at all. It starts from a fixed resolution, be it 720p, or 1080p or whatever, and increases that by adding information to each frame. It's a technicality but there is a distinction here.

It's an important one, because some people here see this as some kind of pollution to a purist 'native' image, as if you are taking something away and thus it is inferior.

That's not the case if you can't render at a 'native' resolution in the first place, due to lack of performance or whatever. People that rally against DLSS are glass half empty people, and view that as such.

People that see its ability to increase resolution, forming near-pristine image quality, have glass-half-full attitudes. I'm getting something I couldn't have in the first place: more pixels. I'm not losing out.
 
When talking about image quality, FPS is irrelevant. Also DLSS compared to native quality will always be worse.

Where did I say that? I said it could not make image quality better.

Like I said, it sacrifices image quality for speed guessing what image "should" look like. Something I would not approve.
Lmao, you can say on here that DLSS 2.0 sacrifices image quality, but that would be a big fat lie. Several tech journalists have showcased and demonstrated that in some titles DLSS 2.0 does actually improve image quality and I can attest to that: Death Stranding looks noticeably better visually with DLSS on than it does with it turned off, as is confirmed by many tech outlets (I can link them if you still won't accept these facts).

Your fanboy denial is amusing, however. As is your clear misunderstanding of the industry. AMD do make GPUs for the consoles, but they don't use the same APIs or drivers as Radeon, and these upscaling solutions are integrated into the driver. Radeon's market share is tiny, so if AMD want devs to spend the time and effort implementing their solution, it needs to work on both Nvidia and AMD drivers, otherwise devs won't bother. Currently Control runs better with RT enabled on a PS5 than it does on a 6800 XT. Clearly the devs couldn't be bothered with optimising Radeon drivers for ray tracing in Control.

The fact is DLSS is amazing and die hard AMD fanboys like yourself are clearly triggered by it and going around making a fool of themselves online. Keep it up, it’s hilarious!
 
I understand fairly adequately, but perhaps you aren't seeing how your wording is not accurate.

To 'lower' the resolution suggests there was a starting point of a high resolution in the render pipeline. To lower describes a relative situation.

'Lowered' suggests a higher resolution has already been rendered and is available. Well it isn't if you don't have the performance.

So DLSS isn't 'lowered' at all. It starts from a fixed resolution, be it 720p, or 1080p or whatever, and increases that by adding information to each frame. It's a technicality but there is a distinction here.

It's an important one, because some people here see this as some kind of pollution to a purist 'native' image, as if you are taking something away and thus it is inferior.

That's not the case if you can't render at a 'native' resolution in the first place, due to lack of performance or whatever. People that rally against DLSS are glass half empty people, and view that as such.

People that see its ability to increase resolution, forming near-pristine image quality, have glass-half-full attitudes. I'm getting something I couldn't have in the first place: more pixels. I'm not losing out.
Completely agreed on your glass half full or half empty analogy. With many implementations DLSS does take something away from the final result (although in some the image quality actually improves).

However, my experience is that turning DLSS on has less of a visual impact than lowering other settings. So for example, playing with RT and DLSS on looks better than playing with both turned off: even though DLSS drops the internal res below native, the loss of other settings is more noticeable than the loss from DLSS.

At the very worst, it gives you more options to fine tune your experience. At the very best it improves both frame rate and image quality. Normal tech enthusiasts have no complaints.

However, all the people on here claiming it's garbage or that it ruins your games etc. are all the same people who will always tell you anything Nvidia makes is awful. These people tend to be emotionally attached to AMD, and AMD are literally the only people negatively affected by DLSS. So that's how you end up with comments from the likes of meta, hard reset etc. They are just upset that their favourite American corporation is losing mindshare, that's all.
 
No-one actually cares about another make-image-quality-worse thing. Real gamers want better image quality, but DLSS is just going backwards. Nothing else.

Quite soon the competition will be about who can make a game run faster with the worst image quality ever imaginable. Sounds "good", yeah.

I think you may misunderstand what DLSS is for. If you have a game that only runs at acceptable framerates at 1080p, or at 2160p with DLSS, the latter will *always* look far better. So it provides an increase in image quality, not a decrease.

Turning off DLSS at 2160p to gain image quality would be useless if the result is the game running at 20fps.
 