Nvidia DLSS in 2020: This Could Be a Game-Changer

Honestly I couldn't care less about DLSS or RTX.

Still quite happy with my good old overclocked GTX 1080 Ti which does the job just fine on my humble 1440p display.
 
Couldn’t they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia the question, and the answer was pretty straightforward: We don't want the competition to have it.
 
Maybe I'm not really understanding this...

That would be an understatement.

From the article:
This will include our usual suite of visual comparisons looking at DLSS compared to native image quality, resolution scaling, and various other post processing techniques. Then, of course, there will be a look at performance across all of Nvidia’s RTX GPUs.
 
Some readers criticized our original features...

From how the tech was described, there was no reason at the time to believe it wouldn't improve. It didn't improve in BFV or Metro, but it eventually did, enough to become a "game changer."
 
Lol, what a gimmick.

DLSS is at a tipping point. The recently released DLSS 2.0 is clearly an excellent technology and a superb revision that fixes many of its initial issues. It will be a genuine selling point for RTX GPUs moving forward, especially if they can get DLSS in a significant number of games. By the time Nvidia’s next generation of GPUs comes around, DLSS should be ready for prime time and AMD might need to respond in a big way.

A gimmick? Sign me up!
 
1) Makes Navi a tough sell at the same non-DLSS price point; even the possibility that DLSS might get wider adoption must be worth something.

2) Wonder how this dovetails into game devs pulling support from GeForce Now. NVIDIA needs dev support to build DLSS and RTX into builds. Gives devs leverage over NVIDIA over whether to allow their games to run on GFN.
 
Why do the last graphs highlight “performance improvements” instead of giving us the difference between the native resolutions?

Example: FPS at native 1440p compared to FPS at 1440p with DLSS set to 4K compared to FPS at native 4K

What users want to know is how many FPS they are sacrificing when using DLSS.

Techspot must have this info as they keep saying DLSS is a “free” performance upgrade... show us the numbers!
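
For what it's worth, the comparison being asked for boils down to two percentages. A minimal sketch with made-up placeholder FPS values (not measured results from the article), just to show the arithmetic:

```python
# Placeholder numbers purely to illustrate the comparison being asked for;
# these are NOT measured results.
fps_native_1440p = 120.0   # hypothetical: native 1440p
fps_dlss_4k = 90.0         # hypothetical: 1440p internal render, DLSS output at 4K
fps_native_4k = 60.0       # hypothetical: native 4K

# FPS given up versus native 1440p by targeting 4K via DLSS
cost_vs_1440p = (fps_native_1440p - fps_dlss_4k) / fps_native_1440p * 100

# FPS gained versus rendering 4K natively
gain_vs_4k = (fps_dlss_4k - fps_native_4k) / fps_native_4k * 100

print(f"DLSS 4K: {cost_vs_1440p:.0f}% slower than native 1440p, "
      f"{gain_vs_4k:.0f}% faster than native 4K")
```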
 
Glad to see nVidia went back to the drawing board with DLSS and improved it drastically with this iteration. I'll admit I was one of those who dismissed the first DLSS version as an ineffective marketing gimmick; glad to be proven wrong.
 
If I understand correctly, DLSS uses the custom Tensor Cores to implement an AI-based image upsampler. If you are not using DLSS, the tensor cores go unused for gaming. They can be used for other AI applications, but without DLSS, gamers are paying for additional silicon they don't need.
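
To make the "AI-based upsampler" idea concrete, here is a toy sketch only; Nvidia's actual network isn't public in this detail, and the ToyUpsampler class, layer sizes and input below are all invented for illustration (real DLSS 2.0 also consumes motion vectors and previous frames, and runs as an optimized inference kernel on the tensor cores):

```python
# Toy illustration of an AI image upsampler, NOT Nvidia's actual model.
# All layer sizes and inputs are placeholders.
import torch
import torch.nn as nn

class ToyUpsampler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a scale-x larger image
        )

    def forward(self, low_res_frame):
        return self.body(low_res_frame)

model = ToyUpsampler(scale=2)
low_res = torch.rand(1, 3, 270, 480)   # placeholder low-resolution frame
high_res = model(low_res)              # -> shape (1, 3, 540, 960)
```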

I think the key to this technology is going to be pricing, assuming games adopt this tech. Will the premium you pay for an RTX card and its tensor cores, so that DLSS can gain you a 15% improvement, be less than the cost of simply buying a 15% more powerful AMD product?

All in all, pretty cool technology. Glad to see they fixed it. If the game supports it (big if), then you can get a 15% boost in frame rate "no strings attached" (though you did pay a premium for that feature).
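
The back-of-envelope version of that pricing question, with every number below being a made-up placeholder rather than a real price or benchmark result:

```python
# Hypothetical cost-per-frame comparison; all numbers are placeholders,
# not real prices or benchmark results.
base_fps = 100.0
rtx_price, rtx_fps = 400.0, base_fps * 1.15   # RTX card, +15% from DLSS (when supported)
amd_price, amd_fps = 380.0, base_fps * 1.15   # AMD card that is 15% faster without DLSS

print(f"RTX: ${rtx_price / rtx_fps:.2f}/fps   AMD: ${amd_price / amd_fps:.2f}/fps")
```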
 
Lol, what a gimmick.
Did you even read the article?
The author wrote an amazing review of the new DLSS, covering how it's superior to SMAA and the performance results, and came to this conclusion:
By the time Nvidia’s next generation of GPUs comes around, DLSS should be ready for prime time and AMD might need to respond in a big way.
And you chime in and say "what a gimmick"...?
 
Well, this is a step backwards in rendering tech.
This is server tech used for desktop so they can get rid of the silicon.
 
No one would ever build a GPU this way for desktop users.
This is a server GPU jerry-rigged to have an application on desktop.
No one seems to see this.
 
"This first batch of results playing Control with the shader version of DLSS are impressive. This begs the question, why did Nvidia feel the need to go back to an AI model running on tensor cores for the latest version of DLSS? Couldn’t they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia the question, and the answer was pretty straightforward: Nvidia’s engineers felt that they had reached the limits with the shader version.

Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found to not work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."

The real question is, how is Nvidia going to deal with conflicting AI training results at scale? Typically, AI training is an iterative process where you feed the AI data, it outputs results, and you adjust parameters according to the results. This is called a training step. You repeat this process until you get desirable results.
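
For anyone unfamiliar with the terminology, a "training step" in that sense looks roughly like the generic sketch below; this is not Nvidia's actual pipeline, and the model, data and loss function are all placeholders:

```python
# Generic sketch of an iterative training loop; not Nvidia's pipeline.
# The model, data and loss function are all placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake data: degraded inputs and the high-quality targets we want reproduced
inputs = torch.rand(256, 16)
targets = torch.rand(256, 16)

for step in range(1000):                  # repeat until results are acceptable
    predictions = model(inputs)           # feed the AI data, it outputs results
    loss = loss_fn(predictions, targets)  # measure how far off the results are
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                      # adjust parameters according to the results
```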

The problem being that during the validation test, I don't see how Nvidia can test the adjusted parameters against all video games. It also begs the question of how Nvidia will deal with conflicting training results. As the author mentioned in the article, flickering is still an issue. Most likely, they could train the AI to fix that issue, but it would likely come at the cost of something else. That's just across 2 games. Imagine for a second that the training model is running on a game and it causes all geometry to flicker. Entirely possible, unless Nvidia is training each iteration of the AI against every game.


1) Makes Navi a tough sell at the same non-DLSS price point; even the possibility that DLSS might get wider adoption must be worth something.

2) Wonder how this dovetails into game devs pulling support from GeForce Now. NVIDIA needs dev support to build DLSS and RTX into builds. Gives devs leverage over NVIDIA over whether to allow their games to run on GFN.

Not really, given that both Nvidia and AMD have sharpening filters with a much lower performance impact.

2 games isn't much of a sample size either, hence why the article states "could be a game changer". Especially when those 2 games have a hand-tuned implementation by Nvidia. They certainly aren't going to spend that kind of time on every game just for DLSS. Reminds me of the Port Royal benchmark with DLSS enabled; the quality of DLSS in that benchmark did not represent the whole.


Why in god's name these graphs are not just in FPS is beyond me.
 
Instead of the long-winded post I had typed up, I'm just going to say this: why are we so focused on a technology that is still inferior to MSAA and SSAA? Image quality, isn't that what all of those who purchase high-end GPUs are after? So why sacrifice it? And at the end of the day, in most any game that is even remotely fast paced you likely won't even notice the difference, especially at the crazy high resolutions we play at here in 2020. Funny that looking back 10-15 years ago, people would buy higher-resolution monitors so they would need less anti-aliasing; now here we are in 2020 still complaining about jaggies at 4K. Just laughable to me.
 
Glad to see nVidia went back to the drawing board with DLSS and improved it drastically with this iteration. I'll admit I was one of those who dismissed the first DLSS version as an ineffective marketing gimmick; glad to be proven wrong.
I'm quite certain that the effort Nvidia has put in to try to make this actually not completely suck is testament that your original sentiment, that it was an ineffective marketing gimmick, was indeed 100% correct. So you were actually proven right. ;)
 
Instead of the long-winded post I had typed up, I'm just going to say this: why are we so focused on a technology that is still inferior to MSAA and SSAA? ...

Because otherwise you can't sell RTX, so lots of marketing effort needs to go into selling stuff that makes things look crappier, so they can sell the features that ruin performance ... by a lesser margin.

I dunno ... from what I'm constantly hearing, most gamers that use Intel CPUs turn all their settings down and game at low resolutions anyway in hopes of bottlenecking their CPU so they can have a 5% advantage over AMD ... so I guess none of this really matters at all to gamers ... No one cares about image quality anymore or fancy visual effects like raytracing - that 5% FPS lead over AMD is the only thing anyone cares about ... right?

Right now all RTX does is trade quality from somewhere in the game, to try to spruce it up in another area. It still needs much more time to mature to be taken seriously.
 