Nvidia DLSS in 2020: This Could Be a Game-Changer

Lol, kinda sad that the best DLSS implementation so far has been wasted on a woke trash of a game :/. Oh well, let's hope that Cyberpunk 2077 will also feature DLSS, even via a later patch; that would certainly help boost RTX sales through the roof.
Anyone who thinks the whole AI image-enhancing thing is a gimmick should check out some clips from 2kliksphilip.


In short, AI can make an image upscaled from a source 1/3 the size of the original look better than the original, which is quite mind-blowing actually. However, it also has some flaws, just like DLSS, such as shimmering...
 
Discuss the DLSS technology only, please, so that no further comments have to be removed and/or edited.
 
Lol, kinda sad that the best DLSS implementation so far has been wasted on a woke trash of a game :/. Oh well, let's hope that Cyberpunk 2077 will also feature DLSS, even via a later patch; that would certainly help boost RTX sales through the roof.
Anyone who thinks the whole AI image-enhancing thing is a gimmick should check out some clips from 2kliksphilip.


In short, AI can make an image upscaled from a source 1/3 the size of the original look better than the original, which is quite mind-blowing actually. However, it also has some flaws, just like DLSS, such as shimmering...

You do realize that, say for the example above, the algorithm didn't necessarily make the blurry image accurate, right? (Well, it did up to a point.) The way this tech works is that the rest of the image is filled in based on a library that most likely doesn't contain that image. So it uses pieces from other images that are similar to make it look acceptably clear, but it won't necessarily be accurate to the original. There's a difference between "pleasing to the eye" and "accurate to the original" in this regard. The more the machine learning has to fill in details, the less accurate to the original the image will become. The "shimmering" is likely a result of these various random "similar" bits being added from the library. Still pictures work best. Sometimes "pleasing to the eye" is exactly what we are after, though, when accuracy is less important. (In other words, this has limited use as a magical forensic tool that can pull data out of an image that doesn't exist there, although it works well enough as a basic image enhancer up to a point.)

If you go to 12:10 you can see a blurry brick chimney... then wait for the AI version: it's an "alligator skin" chimney, not accurate. DLSS carries this flaw at its core, because it is inherent in the technology.
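To make the point about where the "restored" detail comes from concrete, here is a minimal PyTorch sketch of an SRCNN-style upscaler. It is a toy stand-in, not Nvidia's actual DLSS network, and the layer sizes and scale factor are arbitrary assumptions; the only thing it illustrates is that the added detail comes from learned weights, never from the low-resolution frame itself.

```python
# Toy SRCNN-style upscaler -- illustration only, NOT Nvidia's DLSS network.
# Any "restored" detail comes from the learned conv weights, never from
# pixels that the low-resolution input doesn't contain.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySuperRes(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=5, padding=2),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        # 1) Naive upscale to the target size: this adds no new information.
        upscaled = F.interpolate(low_res, scale_factor=self.scale,
                                 mode="bicubic", align_corners=False)
        # 2) Add a residual "correction" predicted from patterns seen during
        #    training. If the training data never contained brick chimneys,
        #    the network fills in whatever texture its weights find plausible.
        return upscaled + self.refine(upscaled)

if __name__ == "__main__":
    model = ToySuperRes(scale=2)
    fake_720p = torch.rand(1, 3, 720, 1280)   # stand-in for a 720p frame
    fake_1440p = model(fake_720p)
    print(fake_1440p.shape)                   # torch.Size([1, 3, 1440, 2560])
```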

-------

My main issue with DLSS is that it was touted as an AA technique, and compared to other AA techniques, when it is not really an AA technique at all. AA, at its inception, was a "render at higher res, then downscale" technique, and it primarily still is, except we now have fancy edge-detect algorithms, etc. that try to lessen the performance hit. DLSS does the opposite: you set your resolution to, say, 1440p, but the game is then rendered at 1080p and upscaled, with selective artificial sharpening applied to certain (sometimes wrong) areas.

If I render at 720p and upscale to 1080p and it looks blurry, can I say that I applied anti-aliasing? lol... no... that's the opposite of what the essence of AA is. So to call DLSS an "AA solution" is a bit of a misnomer; the only "supersampling" happening was done on Nvidia's servers when they built the training libraries. No supersampling is actually performed by DLSS itself. It's a selective sharpener, and that's about it. I hate deceptive marketing. One thing Jensen and Nvidia know how to sell is hype and awe. At least if I apply Radeon sharpening, the resolution I set is the resolution I get, and there's still no performance hit.
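For what it's worth, the pipeline difference described above can be sketched in a few lines of Python with Pillow. The resolutions are just example numbers and the fixed resampling filters are crude stand-ins: a real renderer does the supersampled draw itself, and DLSS uses a neural network plus motion vectors rather than a bicubic filter.

```python
# The two pipelines contrasted above (Pillow filters are crude stand-ins).
from PIL import Image

TARGET = (2560, 1440)  # example output resolution

def ssaa_style(hi_res_render: Image.Image) -> Image.Image:
    # Classic supersampling: render ABOVE the target, then downscale.
    return hi_res_render.resize(TARGET, Image.LANCZOS)

def dlss_style(lo_res_render: Image.Image) -> Image.Image:
    # DLSS-style: render BELOW the target, then upscale/reconstruct.
    # Real DLSS uses a neural network plus motion vectors, not a fixed filter.
    return lo_res_render.resize(TARGET, Image.BICUBIC)

if __name__ == "__main__":
    pretend_2x_render = Image.new("RGB", (5120, 2880), "gray")    # pretend 2x2 supersampled frame
    pretend_1080p_render = Image.new("RGB", (1920, 1080), "gray")  # pretend 1080p internal frame
    print(ssaa_style(pretend_2x_render).size)     # (2560, 1440)
    print(dlss_style(pretend_1080p_render).size)  # (2560, 1440) -- same output size,
                                                  # very different amount of GPU work
```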

I probably would have less of a sour taste in my mouth if Nvidia had been honest about what it really is and what it was really designed for: to try to make RT performance acceptable at a given "resolution" (in air quotes). That's really its only purpose... once RT performance becomes mainstream usable, DLSS probably won't have a whole lot of usefulness.

I can see other selective technologies, along the lines of Radeon Boost and that other Nvidia selective true-AA feature (I can't recall what it is called), actually being more useful tools.

Real-time RT still needs some time before it is mature enough to be a solid feature. One or two more generations will see some significant improvements.
 
What's the point in side-by-side picture comparisons when you're just talking about performance gains? Maybe I'm not really understanding this, but then I was never much into eye candy.
The comparison is to show you that DLSS looks like 4K native with 30-40% better performance.
 
Too many comments are still trying to bash DLSS despite TechSpot's proof that it has majorly improved (sometimes it's even better than native resolution, as shown in the screenshots).

The same people keep calling it "gimmick tech" or "blurry tech" all the time. A guy even attacked the author of the article, calling it a "vaporware article"...

The proof is here: DLSS in 2020 has been improved and looks stunning. It's obvious what kind of people are continuously bashing DLSS all the time; their behaviour is getting predictable. Take your AMD fanaticism away from here!!!
People are stupid, that is all there is to it.
AMD is doing basically nothing in the GPU space, with a cheap card built on 7nm that consumes more power than 12nm parts from Nvidia, with lots of driver issues, black screens, no ray tracing, no deep learning, nothing, and people are still buying these pieces of ****.
 
People are stupid, that is all there is to it.
AMD is doing basically nothing in the GPU space, with a cheap card built on 7nm that consumes more power than 12nm parts from Nvidia, with lots of driver issues, black screens, no ray tracing, no deep learning, nothing, and people are still buying these pieces of ****.

Is there not a mod warning just a couple posts up about sticking to topic? ...
 
The comparison is to show you that DLSS looks like 4K native with 30-40% better performance.

Nobody is going to go out and buy a 4K monitor, power it with a 2070 SUPER and upscale the image. That is why DLSS is a gimmick...

It is a way for Nvidia to sell lower-performance cards as higher end, when in fact native 4K looks better than upscaled images. The problem is, Nvidia doesn't have a card that can push 4K... at gaming speeds, so they are marketing this gimmick so they don't have to put more memory on the card or offer higher-performance cards.

Developers are not going to invest time into this when RDNA 2 is just around the corner.
 
Why not? If it gives you more FPS with similar visual quality, why not use DLSS? That 4K looks better than DLSS is your own personal opinion. To my eyes, it looks the way the editor said: usually the same as native 4K, and in some cases better.
 
Why not? If it gives you more FPS with similar visual quality, why not use DLSS? That 4K looks better than DLSS is your own personal opinion. To my eyes, it looks the way the editor said: usually the same as native 4K, and in some cases better.

It will depend on individual tastes as to the acceptability or the lack thereof. People might be leaving dismissive posts without explaining anything, but that doesn't necessarily make them "stupid" (they might be stupid for other reasons though... ;))

For me, I find myself on a couple of games rendering at a higher resolution and then downscaling, and with YouTube, I downscale 1440p or 4K content to 1080p.

The reason I do this is to improve image quality. I am a big fan of eye candy, and I don't really want to trade away what RT can potentially add in that area for anything, even a "greatly improved" solution. So for me, DLSS is out; in fact, right now RT is out too, because while I am excited about what real-time RT will be able to do, I don't want compromises. I am happy to wait until it becomes a commonplace feature that doesn't carry quite the performance hit.

There are some inherent flaws in the base technology behind DLSS (I pointed one out in my long post above) that prevent me from really seeing a great future for this tech. As I mentioned in that post, once RT can be had without that massive performance hit, DLSS no longer has a purpose. It's a bit of a stopgap solution. For some, it's an acceptable way to get RT performance; for others, it is not.
 
I remember when AA (anti-aliasing) first came out (yes, I'm that old); people called it gimmicky and BS. "Just raise the resolution and you won't need AA," they said! Now AA is an industry standard and in all games. I don't know if this tech will follow the same path as AA, but a lot of the comments here are reminiscent of that.
 
Except for one set of pictures, it was hard for me to tell the difference. But it's clear to me that this kind of technology has great potential. Obviously, if "upscaling" video is easy, needing only limited processing power (and indeed, if it weren't, how could it be built into TV sets?), then you could get a better visual experience on a 4K monitor than you would by playing a game at 1080p, while still getting nearly 1080p frame rates.
In fact, these cards ought to offer a mode where the game is simply told you have a 1080p monitor, so the game doesn't need any support for this at all. But of course proper support, so the game can draw selected critical parts of the scene at full resolution instead of having them upscaled, is even better.
 
I wonder if you could stack DLSS. Like first DLSS 1080p to 1440p, and then DLSS the result to 4K.
 
PCGamesHardware delved deeper into DLSS 2.0 with Control and MechWarrior 5, and the results speak for themselves: a 60% uplift in performance in Control with DLSS 2.0 in Quality mode (which offers better quality than native rendering in some areas and worse in others, but overall equivalent image quality).
Another thing is that you can use Nvidia DSR (Dynamic Super Resolution) with DLSS, so that DLSS uses your native resolution as the internal render before upscaling to the DSR target (which Nvidia was calling DLSS 2x at the beginning), for superior image quality.
Do-it-yourself DLSS 2x.
And I really think DLSS 2.0 performance should be included in TechSpot's future game benchmarks when the game supports it.
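For anyone wanting to reproduce this, here is a small sketch of the resolution math involved. The per-axis scale factors are the commonly reported DLSS 2.0 ratios (roughly 2/3 for Quality and 1/2 for Performance); they are assumptions here, not figures from the article, and individual games may differ.

```python
# Internal render resolutions for DLSS 2.0 modes and a DSR "DIY DLSS 2x" setup.
# Scale factors are commonly reported values, assumed here -- not from the article.
DLSS_SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def dlss_internal_res(out_w: int, out_h: int, mode: str) -> tuple:
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

def dsr_target(native_w: int, native_h: int, dsr_factor: float) -> tuple:
    # The DSR factor multiplies the pixel count, so each axis scales by sqrt(factor).
    s = dsr_factor ** 0.5
    return round(native_w * s), round(native_h * s)

if __name__ == "__main__":
    print(dlss_internal_res(3840, 2160, "Quality"))      # (2560, 1440)
    print(dlss_internal_res(3840, 2160, "Performance"))  # (1920, 1080)
    # "DIY DLSS 2x": DSR 2.25x on ultrawide 1440p, then DLSS Quality on top.
    print(dsr_target(3440, 1440, 2.25))                  # (5160, 2160)
```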
 
Why do the last graphs highlight “performance improvements” instead of giving us the difference between the native resolutions?

Example: FPS at native 1440p compared to FPS at 1440p with DLSS set to 4K compared to FPS at native 4K

What users want to know is how many FPS they are sacrificing when using DLSS.

TechSpot must have this info, as they keep saying DLSS is a “free” performance upgrade... show us the numbers!

So you have a 4K screen.
You can:
A) run the game at 1440p + DLSS to 4K
B) run the game at 4K

And your question is how much slower 1440p + DLSS to 4K will be compared to straight 4K?

Interesting question. Maybe think about it for a second?
.

.

.

It would run faster at 1440p + DLSS to 4K than it would if you just ran native 4K. Otherwise it would be silly to even make the tech.
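A rough way to see why, purely in terms of how many pixels the GPU has to shade per frame (ignoring the fixed cost of the DLSS pass itself, which is an assumption in this sketch):

```python
# Rough pixel-count comparison; ignores the (small) fixed cost of the DLSS pass.
native_4k = 3840 * 2160        # ~8.29 million pixels shaded per frame
internal_1440p = 2560 * 1440   # ~3.69 million pixels shaded per frame
print(native_4k / internal_1440p)  # 2.25 -- option A shades ~2.25x fewer pixels,
                                   # which is why it runs faster than option B
```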
 
So you have a 4K screen.
You can:
A) run the game at 1440p + DLSS to 4K
B) run the game at 4K

And your question is how much slower 1440p + DLSS to 4K will be compared to straight 4K?

Interesting question. Maybe think about it for a second?
.

.

.

It would run faster at 1440p + DLSS to 4K than it would if you just ran native 4K. Otherwise it would be silly to even make the tech.
Well yes... but by how much? No one is saying that it doesn’t make a difference; I want to know by how much.
We benchmark CPUs all the time... we all know that the 9900 is faster than the 8500... the point is: is the extra performance worth the added cost?

I want to know how many FPS I sacrifice by running “true 4K” instead of 1440p + DLSS...
I also want to know how many FPS I sacrifice by running 1440p + DLSS (to 4K) vs. just 1440p...
 
Well yes... but by how much? No one is saying that it doesn’t make a difference; I want to know by how much.
We benchmark CPUs all the time... we all know that the 9900 is faster than the 8500... the point is: is the extra performance worth the added cost?

I want to know how many FPS I sacrifice by running “true 4K” instead of 1440p + DLSS...
I also want to know how many FPS I sacrifice by running 1440p + DLSS (to 4K) vs. just 1440p...

Well, for now, you should assume less?! :)
It should be about a <15% hit compared to running at whatever internal resolution is set (my conclusion from watching some FPS comparisons on YouTube).

edit: originally, your question was framed as if 1440p + DLSS would give fewer FPS than native 4K ;)
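To make both of the requested comparisons concrete, here is the kind of calculation being asked for. The FPS values are hypothetical placeholders for illustration only, not measurements from any benchmark:

```python
# The two comparisons being asked for; all FPS values below are hypothetical
# placeholders for illustration, not measured results.
def pct_change(baseline_fps: float, new_fps: float) -> float:
    return (new_fps / baseline_fps - 1) * 100

native_4k_fps = 40.0      # hypothetical
native_1440p_fps = 60.0   # hypothetical
dlss_4k_fps = 55.0        # hypothetical: 1440p internal render, 4K DLSS output

print(f"1440p + DLSS (4K) vs native 4K:    {pct_change(native_4k_fps, dlss_4k_fps):+.0f}%")
print(f"1440p + DLSS (4K) vs native 1440p: {pct_change(native_1440p_fps, dlss_4k_fps):+.0f}%")
```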
 
Well, for now, you should assume less?! :)
It should be about a <15% hit compared to running at whatever internal resolution is set (my conclusion from watching some FPS comparisons on YouTube).

edit: originally, your question was framed as if 1440p + DLSS would give fewer FPS than native 4K ;)
I don’t think you understood my point... I want TechSpot to provide the FPS numbers so we can see exactly what we gain/lose when using DLSS!
 
It's not DLSS 2.0 and it's only a quick test, but this is Metro Exodus, using the built-in benchmark. The test runs are at 1440p, 1440p + RTX, 1440p + RTX + DLSS, then again at 4K. The benchmark won't run with just DLSS enabled.

[Attached benchmark screenshots: dlsstest01.jpg through dlsstest06.jpg]

Results summary:
[Attached screenshot: dlsstest07.jpg]

System used:
Intel Core i7-9700K (stock)
32 GB DDR4-3000 CL15
Nvidia GeForce RTX 2080 Super

I'm in the process of downloading Shadow of the Tomb Raider and Control to do more DLSS testing, but my net connection is super slow this morning (<10 Mbps), so it's going to be a while before I can post any more results.
 
Well, there is only Control, which isn't really DLSS 2.0 in full, but it's not as bad as DLSS 1.0. That's why there are no benchmarks as of yet.
 
Well, there is only Control, which isn't really DLSS 2.0 in full, but it's not as bad as DLSS 1.0. That's why there are no benchmarks as of yet.

Control was patched to DLSS 2.0 on March 26th. Here are my results with DLSS 2.0 in Control, MechWarrior 5 and Deliver Us the Moon:

[Attached results chart: 2020d2b63407-9948-43e7-995d-6f61c26f1d28.jpg]


As you can see, using DLSS Quality mode improves performance by 70-85% in Control, 30-45% in MechWarrior 5 and 58% in Deliver Us the Moon (I can't use a higher resolution in Deliver Us the Moon).
Note 1: Control and Deliver Us the Moon are tested with RTX.
Note 2: I use DSR to increase my resolution (widescreen 1440p) to 5160x2160, which is higher than 4K.
Note 3: 2160p DLSS in Quality mode uses 1440p as the internal rendering resolution, but is vastly superior to native 1440p in terms of image quality, while losing only 10% performance in Control and 15% in MechWarrior 5.

So with DSR set at 2.25x, I'm able to recreate what Nvidia advertised as DLSS 2x, where DLSS in Quality mode uses the native 1440p resolution to create 2160p images before downsampling them back to 1440p, losing 10-15% performance but achieving way better image quality.
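As a quick sanity check on the numbers in the notes above, assuming the commonly reported ~2/3 per-axis factor for DLSS Quality mode (an assumption, not a figure from this thread):

```python
# Sanity check of the DSR 2.25x + DLSS Quality combo described in the notes above.
native = (3440, 1440)                                # ultrawide 1440p
dsr = tuple(round(d * 2.25 ** 0.5) for d in native)  # DSR 2.25x => 1.5x per axis
print(dsr)                                           # (5160, 2160) -- matches Note 2
internal = tuple(round(d * 2 / 3) for d in dsr)      # assumed Quality-mode ratio
print(internal)                                      # (3440, 1440) -- back at native,
                                                     # which is what "DLSS 2x" amounts to
```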


 
Which card are you using? I'm thinking the results will vary with the number of tensor cores...

I'm using a watercooled 2080 Ti. I think the DLSS results are only affected when you are CPU bottlenecked; otherwise the relative performance gain should be about the same on an RTX 2060.
 
It is very weird that you mentioned that you are water cooling your card, since that has nothing to do with what you were doing or what I asked you :)
But thanks anyway, of course.
 
It is very weird that you mentioned that you are water cooling your card, since that has nothing to do with what you were doing or what I asked you :)
But thanks anyway, of course.

Water cooling my card means the results I get would be slightly better than those from the average air-cooled 2080 Ti. Does it upset you that I used one extra word to answer your question?
 