"This first batch of results playing Control with the shader version of DLSS are impressive. This begs the question, why did Nvidia feel the need to go back to an AI model running on tensor cores for the latest version of DLSS? Couldn’t they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia the question, and the answer was pretty straightforward: Nvidia’s engineers felt that they had reached the limits with the shader version.
Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found to not work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."
The real question is, how is Nvidia going to deal with conflicting AI training results at scale? Typically, AI training is an iterative process: you feed the AI data, it outputs results, and you adjust the parameters according to those results. This is called a training step. You repeat the process until you get desirable results.
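To make the loop above concrete, here is a minimal sketch of iterative training in Python. Everything is illustrative (a toy linear model and made-up data), not Nvidia's actual pipeline; the point is just the feed-data / measure-error / adjust-parameters cycle repeated until the result is acceptable.

```python
# Toy model: a single weight w, trained so that w*x matches the target y.
def train_step(w, batch, lr=0.01):
    # One training step: run the model on the data, then nudge the
    # parameter against the gradient of the squared error.
    grad = 0.0
    for x, y in batch:
        grad += 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    grad /= len(batch)
    return w - lr * grad

# Repeat the step until the results look desirable.
data = [(x, 3.0 * x) for x in range(1, 6)]  # the "right" weight is 3.0
w = 0.0
for _ in range(200):
    w = train_step(w, data)

print(round(w, 2))  # converges toward 3.0
```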
The problem is that during validation, I don't see how Nvidia can test the adjusted parameters against all video games. It also begs the question of how Nvidia will handle conflicting training results. As the author mentioned in the article, flickering is still an issue. They could most likely train the AI to fix that issue, but it would probably come at the cost of something else. And that's just across two games. Imagine for a second that the trained model runs on some game and causes all of its geometry to flicker. That's entirely possible unless Nvidia is testing each iteration of the AI against every game.
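The kind of check I'm describing could look something like this sketch: compare a candidate model's per-game quality scores against a baseline and flag any title that regressed. The game names, scores, and tolerance are all made up for illustration; nothing here reflects Nvidia's actual validation process.

```python
def find_regressions(baseline, candidate, tolerance=0.02):
    """Return the games whose quality score dropped by more than `tolerance`."""
    return [game for game in baseline
            if candidate[game] < baseline[game] - tolerance]

# Hypothetical scores: the new parameters improve Control
# but make another title worse.
baseline  = {"Control": 0.91, "Wolfenstein": 0.88}
candidate = {"Control": 0.95, "Wolfenstein": 0.83}

print(find_regressions(baseline, candidate))  # -> ['Wolfenstein']
```

The catch, of course, is that this only works if you actually run the candidate model on every game in the suite, which is exactly the part that seems impractical at scale.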
1) Makes Navi a tough sell at the same price point without DLSS. Even the mere possibility that DLSS gets wider adoption must be worth something.
2) I wonder how this dovetails with game devs pulling support from GeForce Now. NVIDIA needs dev support to build DLSS and RTX into their games, which gives devs leverage over NVIDIA on whether to allow their games to run on GFN.
Not really, given that both Nvidia and AMD have sharpening filters with a much lower performance impact.
Two games isn't much of a sample size either, hence why the article says it "could be a game changer". Especially when those two games have implementations hand-tuned by Nvidia. They certainly aren't going to spend that kind of time on every game just for DLSS. It reminds me of the Port Royal benchmark with DLSS enabled: the quality of DLSS in that test did not represent the whole.
Why in god's name these graphs are not just in FPS is beyond me.