Nvidia says resolution upscaling like DLSS (and not native resolution) is the future

Daniel Sims

The big picture: As technologies like Nvidia's DLSS and AMD's FSR allow games to improve performance by generating pixels and frames with AI, some wonder if upscaling is a crutch allowing developers to release unoptimized titles. Nvidia recently countered the assertion, saying the company intends to focus on DLSS and AI for the foreseeable future.

In a lengthy discussion over DLSS and ray tracing technology, Nvidia predicted that the industry will eventually move beyond running games at native resolution. The company sees resolution upscaling as a way of working smarter rather than harder.

Solutions like DLSS, FSR, and XeSS significantly increase framerates while minimizing image quality loss, but users have begun to worry that they will become necessary for good performance. Remnant II set what some consider a dangerous precedent by listing system requirements that assume players are using upscaling. Nvidia launched a defense of this future while promoting its AI technologies.

Digital Foundry and PCMR recently held an in-depth interview with Nvidia and CD Projekt Red, covering the recently unveiled DLSS 3.5 ray reconstruction feature and the new path-tracing update for Cyberpunk 2077. The conversation broached topics such as Nvidia's roadmap and how DLSS impacted Cyberpunk's development.

Nvidia's Bryan Catanzaro said that, thanks to DLSS, rendering at native resolution is no longer the best route to image quality. Nvidia and CDPR argue that path tracing fundamentally alters Cyberpunk's presentation and that it would be impossible without DLSS. Earlier tests showed the GPU manufacturer's flagship RTX 4090 struggling to reach 30fps at native 4K with path tracing. Furthermore, prior analyses suggest that in certain situations, DLSS 4K quality mode (which renders internally at 1440p) can outshine native 4K.
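For context, a minimal sketch of the arithmetic behind those modes (the per-axis scale factors are the commonly cited DLSS defaults and are an assumption here, not figures from the interview):

```python
# Approximate internal render resolution for common DLSS modes.
# Per-axis scale factors are the commonly cited defaults (assumed, not from the interview).
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution DLSS roughly renders at before upscaling to the output size."""
    scale = DLSS_SCALE[mode]
    return round(out_w * scale), round(out_h * scale)

for mode in DLSS_SCALE:
    w, h = internal_resolution(3840, 2160, mode)
    saved = 1 - (w * h) / (3840 * 2160)
    print(f"4K {mode}: ~{w}x{h} internal ({saved:.0%} fewer pixels shaded per frame)")
```

At 4K, Quality mode works out to roughly 2560x1440 internally, which is where the "native 1440p" figure above comes from.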

Catanzaro likened native resolution to using brute force. He and CDPR's Jakub Knapik explained that graphics rendering has always been full of "cheats" such as mipmapping and Level of Detail. Catanzaro also reiterated Nvidia CEO Jensen Huang's assertion that Moore's Law is dead and that further significant advances necessitate techniques that conserve horsepower.
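Mipmapping illustrates the point: textures are stored as a chain of pre-filtered copies, each half the size of the previous one, and the renderer samples whichever level best matches an object's on-screen size instead of filtering the full-resolution texture every frame. A quick illustrative sketch (not from the interview):

```python
# Illustrative mip chain: each level halves both dimensions of the previous one.
def mip_chain(width: int, height: int) -> list[tuple[int, int]]:
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

chain = mip_chain(4096, 4096)
base = 4096 * 4096
total = sum(w * h for w, h in chain)
print(f"{len(chain)} mip levels, ~{total / base:.2f}x the base texture's storage")
# A distant object sampled from the 256x256 level touches ~0.4% of the texels
# the full-resolution texture would require, with less aliasing to boot.
```

The extra ~33% of storage buys a large reduction in texture bandwidth and shimmering, which is exactly the kind of trade "working smarter" refers to.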

The company's comments are unsurprising given its heavy investments in the massively profitable AI sector, but game developers' behavior supports the stance. Performance breakdowns show that many major PlayStation 5 and Xbox Series X titles upscale from resolutions below 4K, often using FSR. If they do reach native 4K, it's usually at 30fps.

Such behavior predates the emergence of 4K displays and AI-based upscaling. Games from the prior PlayStation 4 and PlayStation 3 eras typically struggled to reach 1080p, and sub-native resolution games weren't uncommon on earlier consoles. The argument that AI techniques improve developers' ability to compensate for differences in pixel counts is not without merit.


 
I called it: Nvidia plans to just abandon most rasterization advances and make everybody rely on DLSS, because they want everyone to depend on those tensor cores, be free to price gouge, better control planned obsolescence, and just trickle down products meant for ML workloads instead of investing in R&D for a gaming-centric product.

I know my other post on the subject went far into assumption territory (I did mention I'd go from the plausible to the implausible, though), but now that Nvidia has exploded in value over the "AI" craze, they'll not only abandon PC gaming but do their damnedest to kill PC AAA gaming in the process.
 
"The games will continue to have terrible optimization and we will create Blur 4.0 super AI tech plus to save the day, only U$ 2999"
The offset until now to all these bad optimizations and gpu pricing inflating at a constant rate is delaying support by 1 year for software or hardware generational jump. The problem with the later is actually is going against the standard of waiting for graphics cards by 1 generation to get a better value ( some tiers worse than others). With AMD rumored to cancel next gen high end gpus for gaming and instead attempting to improve it's ai footprint, Nvidia will become more brazen than it already is. Hence the push for a software premium model instead of a hardware improvement model. This actually aligns with my theory of the price to performance ratio is going to be similar to the 4090 ( 10/22 launch) to Blackwell rumored q1/25. With a software premium push like we never seen. Smoke and mirrors baby! 😅
 
I kinda get it, but I'm not entirely convinced.
DLSS has never looked worse than native to me at Balanced or higher, so I'm not freaking out - yet.

I wanna see how hard it will be to choose to play at native, what prices look like, and what performance boosts we get gen to gen leading up to this proposed plan.
 
Nvidia, far from learning its lesson with the woeful launch of the garbage-class 4050, erm, 4060 GPUs, is doubling down on gimping specs, ignoring native raster performance, and going full tilt on AI BS. Expect the 5060 Ti to be on a 96-bit bus and to rely on high clock speeds, GDDR7, and DLSS to surpass even the overpriced rubbish served up by the 4060 (Ti). I'm sure it'll be 50% faster with DLSS enabled, but raster will be a wash. They'd better give the next gen a lot more memory, though, to pursue this strategy.

Well, it's up to the sheep to stop supporting these clowns, but that will never happen, and Huang will be laughing all the way to the bank.
 
What else can you say when a 4090 can barely run Cyberpunk with path tracing at 20 FPS at native 2160p...

DLSS will never overtake FSR as long as Sony and Microsoft use AMD hardware in their consoles, which the industry uses as the development platform for its games.
 
What does it mean for small game studios?
Will it be as easy as pressing a button to add upscaling to a game?
If not, they won't be able to use it, or rather won't be able to afford to.
In any case, I would rather avoid it on purpose just to make sure I don't support their evil plans.
 
In that case, stop charging 1,200 USD/euros for a GPU when it needs a "cheating" feature to provide playable fps.
Learn about tech before making such absurd comments.
Every frame you see on screen is a generated frame. It is not real.
Previously, only Shader Cores did the generating. Now, with Tensor Cores, the generating can be done using less power while delivering more performance.

 
Nvidia, far from learning its lesson with the woeful launch of the garbage-class 4050, erm, 4060 GPUs, is doubling down on gimping specs, ignoring native raster performance, and going full tilt on AI BS. Expect the 5060 Ti to be on a 96-bit bus and to rely on high clock speeds, GDDR7, and DLSS to surpass even the overpriced rubbish served up by the 4060 (Ti). I'm sure it'll be 50% faster with DLSS enabled, but raster will be a wash. They'd better give the next gen a lot more memory, though, to pursue this strategy.

Well, it's up to the sheep to stop supporting these clowns, but that will never happen, and Huang will be laughing all the way to the bank.
Show Nvidia how to gain more performance from transistors with just Shader Cores!

Why is it that AMD's 520 mm² 5/6nm 7900 XTX has the same raster performance as the 380 mm² 5nm 4080? By your logic, the 7900 XTX should be much faster in raster.

Transistors have a limit. You cannot get performance out of thin air!

We are hitting native performance limits because of transistors. AMD is leaving the high-end market and has quietly left the laptop GPU market because of this.

Nvidia figured out that Tensor Cores can generate frames more efficiently than Shader Cores.
This is why they are focusing on AI.
 
What else can you say when a 4090 can barely run Cyberpunk with path tracing at 20 FPS at native 2160p...

DLSS will never overtake FSR as long as Sony and Microsoft use AMD hardware in their consoles, which the industry uses as the development platform for its games.
It can barely run Starfield at 4K60. PC gaming is in a horrible rut when it comes to GPU/AAA products, with each release seemingly worse than the last.
 
100% factually correct on a technical level. Take any game and compare 1080p native vs 1440p DLSS Performance: DLSS 2.5.1/3.5.0 looks better and runs faster. Same for 1440p native vs 4K DLSS Performance.
1080p native + TAA is just dead. It never looked good in the first place, and now there's at least a solution for people who don't want to spend on a 1440p monitor.
I haven't used native 1440p for months. 1920p DLDSR + DLSS Balanced is always better than native. Always.
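For reference, the pixel math behind those comparisons looks roughly like this (a minimal sketch; the per-axis scale factors are the commonly cited DLSS/DLDSR defaults and are an assumption here, not figures from the article):

```python
# Rough pixel counts behind the comparison above.
# Per-axis scale factors are the commonly cited DLSS defaults (assumed, not official).
def pixels(w: int, h: int) -> int:
    return w * h

native_1080p = pixels(1920, 1080)        # what 1080p native shades every frame
dlss_perf_1440p = pixels(1280, 720)      # 1440p output, Performance mode renders at ~50% per axis

native_1440p = pixels(2560, 1440)
# "1920p" here means the 1.78x DLDSR target on a 1440p display (~3413x1920),
# with DLSS Balanced rendering internally at ~58% per axis.
dlss_bal_1920p = pixels(round(3413 * 0.58), round(1920 * 0.58))

print(f"1080p native shades ~{native_1080p / dlss_perf_1440p:.1f}x the pixels of 1440p DLSS Performance")
print(f"1440p native shades ~{native_1440p / dlss_bal_1920p:.1f}x the pixels of 1920p DLDSR + DLSS Balanced")
```

In other words, the upscaled configurations shade noticeably fewer pixels per frame even while targeting a higher output resolution, which is where the extra performance comes from; the reconstruction pass claws back part, but not all, of that saving.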

I called it: Nvidia plans to just abandon most rasterization advances and make everybody rely on DLSS, because they want everyone to depend on those tensor cores, be free to price gouge, better control planned obsolescence, and just trickle down products meant for ML workloads instead of investing in R&D for a gaming-centric product.

I know my other post on the subject went far into assumption territory (I did mention I'd go from the plausible to the implausible, though), but now that Nvidia has exploded in value over the "AI" craze, they'll not only abandon PC gaming but do their damnedest to kill PC AAA gaming in the process.
RDNA 3 also has AI hardware, but it's currently doing nothing, since AMD's vision for RDNA 1/2 did not include image reconstruction, and now they'd have to develop two separate techniques, one for RDNA 1/2 and one for RDNA 3. Making FSR free isn't AMD's Christmas gift; it's about their R&D being late to the party and having to battle DLSS with simple upscalers.
 
Learn about tech before making such absurd comments.
Every frame you see on screen is a generated frame. It is not real.
Previously, only Shader Cores did the generating. Now, with Tensor Cores, the generating can be done using less power while delivering more performance.
You are juggling words to try to deceive the inattentive. The truth is that Nvidia tries to create false frames between the real rendered frames. People with already-damaged vision, or with brains taken over by fanaticism, will say this is the future and everything is fine, while reality shows the opposite: games falling in quality and optimization to the point that upscaling is the only way to play.
 
My first response was: Nvidia, go ...... yourself.
After a few more seconds:

Two things will converge. 4K will be good enough for most people going forward - that can still be output to 8K TVs anyway, and TVs can do their own upscaling.
AI and learning tools will get better, so game design will use more advanced Journey-like tools, with human input for the intended look and prompts.
A stone giant ambling through the foliage in leather-hide clothes will also generate sound: snapping branches, leaves scraping leather, raspy breathing with an arrow in a lung.

Fuzzy AI in the GPU will then output the best compromise for whatever monitor/VR screen you have by then - a 4,000-nit 4K HDR display - and the associated 7.1-headphone sound field.

So Nvidia is kind of right - but their DLSS will be overtaken by new tools that Epic/Unity and others will provide.
 
A full-on digital render might need 40 minutes on a 4090 to get a perfect look for one 4K image. Light is complicated, nature is complicated. My issue is that the more the AI aspect of DLSS improves, the more likely it becomes a subscription: "if you want the latest training data, subscribe to the Nvidia AI experience." On the other hand, there is huge room for improvement on the software side of things (game engines, 3D software, etc.), so we might see serious generational improvements on the same hardware.
 
One of the more interesting and intelligent points was that 'a ray-traced image that utilises the full fleet of DLSS tech (2.0, 3.0, 3.5) is, in a way, less fake than a traditionally rendered version of the same scene'. A lot of the fuss and complaints over 'native' vs 'non-native' and 'fake frame' vs 'real frame' seem to come from the false assumption that devs are only now starting to play around with circumventing having to render the scene fully. In reality, all optimisation has been stuff akin to DLSS; it doesn't make sense to say that DLSS is a crutch to avoid optimisation, because it is a form of optimisation.
 
You are juggling words to try to deceive the inattentive. The truth is that Nvidia tries to create false frames between the real rendered frames. People with already-damaged vision, or with brains taken over by fanaticism, will say this is the future and everything is fine, while reality shows the opposite: games falling in quality and optimization to the point that upscaling is the only way to play.
More lies. Go read the research papers.
As if Tensor Cores do not exist.
What is this "fake frames" nonsense!
Of course everything we see on a display is GPU-generated!

Who cares how Nvidia generates it, as long as it looks the same as or better than native?
According to TechSpot's detailed testing, DLSS does look the same as or better than native.
 
More lies. Go read the research papers.
As if Tensor Cores do not exist.
What is this "fake frames" nonsense!
Of course everything we see on a display is GPU-generated!

Who cares how Nvidia generates it, as long as it looks the same as or better than native?
According to TechSpot's detailed testing, DLSS does look the same as or better than native.
If tensor cores and fake RT weren't being pushed on the market, GPUs would either be 30-40% stronger or correspondingly cheaper.

They are not the same; only if you have some damage to your vision might you not notice. Native is better, but sometimes a bad implementation of TAA produces a blurred appearance, and that's where the illusion that upscaling is better comes from. The second point is that games have lost so much optimization that even if I ignore the rendering problems, one thing cancels out the other.

The technology works exactly as I said: it fakes transition frames between the real rendered frames. Do you have any valid arguments other than telling me to read Nvidia's texts?
 
Upscaling tech is great when it's used to reach higher framerates, like 120+. I don't like it when it's required just to hit 30 or 60 FPS, as in Starfield. Unfortunately, developers seem to fall back to 720p and 20 FPS again and again. It's just an uncontrollable urge for them. Don't believe me? Check out recent releases :(
 
The amount of misunderstanding of realtime rendering in the comments is really crazy.
Accelerated realtime 3D rendering has always been about approximation: rasterization, texture filtering, anti-aliasing, light baking, shadow maps, shaders, ambient occlusion. Recent AI-based techniques like DLSS/AA, ray tracing, ray reconstruction, and frame generation are just new ways to approximate. They use the hardware in a new, optimized way because the result would be impossible the old way, exactly as a lot of other rendering techniques slowly became obsolete when newer, faster, or better technologies disrupted the market.
This native vs upscaled debate is nonsense, because a native frame is already a mix of every possible optimization technique to simulate a 3D environment.
Would you say a 4K "native" frame is fake because it has anti-aliasing, ambient occlusion, and shadow maps? No, but they are still cheap, fake techniques that keep the GPU from having to render the image at an unreal resolution with physically accurate lighting.
That's exactly what these new AI-accelerated technologies are about: getting as close as possible to physically accurate rendering with the least possible computing power.
You may or may not like the various artifacts and inaccuracies they have brought, but you still can't deny that all the old rasterization techniques were also compromises we accepted to achieve real-time rendering.
The transition from rasterization to fully ray-traced rendering will probably take 20 years to complete, but there's no going back, and just adding more transistors to GPUs won't be the solution on its own. AI-optimized rendering will get better to the point where you won't even think about it, because your eyes won't be able to tell the difference between a frame generated in real time and a frame that took 60 hours to render a few years ago.
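A back-of-the-envelope sketch of why brute force alone doesn't get there (the sample and bounce counts below are illustrative assumptions, not measured numbers):

```python
# Back-of-the-envelope cost of "honest" path tracing at native 4K / 60 fps.
# Samples-per-pixel and bounce counts are illustrative assumptions; offline
# renderers often use hundreds to thousands of samples for a clean image.
PIXELS_4K = 3840 * 2160
FPS = 60
OFFLINE_SPP = 1000          # assumed: low-noise image without a denoiser
REALTIME_SPP = 2            # assumed: typical real-time budget
BOUNCES = 4                 # assumed average path length

offline_rays = PIXELS_4K * FPS * OFFLINE_SPP * BOUNCES
realtime_rays = PIXELS_4K * FPS * REALTIME_SPP * BOUNCES

print(f"Converged: ~{offline_rays / 1e12:.0f} trillion ray segments per second")
print(f"Real-time budget: ~{realtime_rays / 1e9:.0f} billion ray segments per second")
print(f"Gap the reconstruction has to cover: ~{offline_rays / realtime_rays:.0f}x")
```

The few-hundred-fold gap between those two numbers is what denoising and reconstruction techniques like ray reconstruction are there to paper over.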
 
More lies. Go read the research papers.
As if Tensor Cores do not exist.
What is this "fake frames" nonsense!
Of course everything we see on a display is GPU-generated!

Who cares how Nvidia generates it, as long as it looks the same as or better than native?
According to TechSpot's detailed testing, DLSS does look the same as or better than native.
DLSS Super Resolution + ray reconstruction + RT is better than native + TAA + RT, but frame generation is still hit or miss. I think it's good enough for the trade-off, but it's not part of the generational leap, because a lot of the time AI frames will have more artifacts.

The amount of misunderstanding of realtime rendering in the comments is really crazy.
Accelerated realtime 3D rendering has always been about approximation: rasterization, texture filtering, anti-aliasing, light baking, shadow maps, shaders, ambient occlusion. Recent AI-based techniques like DLSS/AA, ray tracing, ray reconstruction, and frame generation are just new ways to approximate. They use the hardware in a new, optimized way because the result would be impossible the old way, exactly as a lot of other rendering techniques slowly became obsolete when newer, faster, or better technologies disrupted the market.
This native vs upscaled debate is nonsense, because a native frame is already a mix of every possible optimization technique to simulate a 3D environment.
Would you say a 4K "native" frame is fake because it has anti-aliasing, ambient occlusion, and shadow maps? No, but they are still cheap, fake techniques that keep the GPU from having to render the image at an unreal resolution with physically accurate lighting.
That's exactly what these new AI-accelerated technologies are about: getting as close as possible to physically accurate rendering with the least possible computing power.
You may or may not like the various artifacts and inaccuracies they have brought, but you still can't deny that all the old rasterization techniques were also compromises we accepted to achieve real-time rendering.
The transition from rasterization to fully ray-traced rendering will probably take 20 years to complete, but there's no going back, and just adding more transistors to GPUs won't be the solution on its own. AI-optimized rendering will get better to the point where you won't even think about it, because your eyes won't be able to tell the difference between a frame generated in real time and a frame that took 60 hours to render a few years ago.
Yeah, I still don't understand how ray-traced + reconstruction is "fake" while rasterized at native is "real". Rasterized lighting and reflections are as fake as it gets. I think people don't see the bigger picture. I talked to a dude (a 5700 XT owner, btw) who said 4K native is better than 4K DLSS, even when 4K DLSS has more detail, because reconstructed 4K is fake.

If tensor cores and fake RT weren't being pushed on the market, GPUs would either be 30-40% stronger or correspondingly cheaper.
Can you prove it? It seems kinda funny that you accuse everyone of defending DLSS when this random theory is your only evidence. Because IMO you cannot prove it. You know that 40% more GPU speed would require far more die space and power than DLSS does, when tensor cores deliver it with almost no extra die/power cost.
 