People have given up because they have no understanding of Deep Learning. I only had to google for a few minutes to find out how it works. We have countless people working on autonomous cars that use Deep Learning and it's STILL in development, yet it's supposed to work right away when it comes to gaming? Get outta here!
Even before we get to how much latency is improved using AMD's software, you have to ask yourself: why would someone who cares so much about input lag limit their frame rate to 60-90 fps? Thanks but no thanks, AMD.
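To put rough numbers on that, frame time alone sets a floor on input latency, and capping the frame rate raises that floor. A quick sketch (frame time only, ignoring input polling, the render queue, and display lag):

```python
# Quick sketch: frame time alone sets a floor on input latency.
# Real end-to-end latency also includes input polling, the render
# queue, and display lag, which this ignores.
for fps in (60, 90, 144, 240):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
```

A 60-90 fps cap means 11-17 ms per frame before anything else in the chain, while the people who actually obsess over input lag are chasing 4-7 ms frame times at 144-240 fps.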
At the very least, I'd want to see how this app compares to simply lowering graphics detail, because that's what people who really care about reducing input lag currently do. Period.
How deep learning works:
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.
In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.
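If you want to see the "learn by example" part in code, here's a minimal sketch in PyTorch: a stack of layers trained on a made-up, labeled toy dataset. The network size and the data are purely illustrative and have nothing to do with DLSS itself:

```python
# Minimal sketch: a multi-layer ("deep") network learning a toy
# classification task from labeled examples. Illustrative only; real
# models for vision or speech are far larger and train on huge datasets.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy labeled data: 2-D points, label 1 if the point is inside the unit circle.
x = torch.randn(512, 2)
y = (x.pow(2).sum(dim=1) < 1.0).long()

# "Many layers": stacked linear layers with nonlinearities between them.
model = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),  # two output classes
)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Learning by example: repeatedly compare predictions to the labels and
# nudge the weights to reduce the error.
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```

That loop is the whole idea in miniature: show the model lots of labeled examples, measure how wrong it is, adjust, repeat. It also shows why "STILL in development" is the normal state of affairs, since quality depends on how much data you've trained on.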
And here is the answer from NVIDIA about blurry frames:
Q: Some users mentioned blurry frames. Can you explain?
A: DLSS is a new technology and we are working hard to perfect it.
We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.
We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.
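For a sense of what those pixel counts mean: DLSS renders internally at a reduced resolution and upscales to the output resolution. Here's a quick back-of-the-envelope script; the render scales are my own assumptions for illustration, not NVIDIA's published settings:

```python
# Back-of-the-envelope check on NVIDIA's input-pixel figures. The render
# scales below are illustrative assumptions, not official DLSS settings.
def input_pixels(out_w, out_h, scale):
    """Pixels in an internal render target at a linear scale factor."""
    return int(out_w * scale) * int(out_h * scale)

for out_w, out_h, label in ((3840, 2160, "4K"), (1920, 1080, "1080p")):
    for scale in (0.7, 0.75, 0.8):
        px = input_pixels(out_w, out_h, scale)
        print(f"{label} output at {scale:.0%} render scale: "
              f"{px / 1e6:.1f}M input pixels")
```

Those assumed scales land at roughly 4.1-5.3M input pixels for 4K and 1.0-1.3M for 1080p, in line with the ranges in the quote. Either way, the 4K case gives the network three to four times more source pixels to work with, which is exactly NVIDIA's point about why lower resolutions are harder.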
The rest of the Q&A:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/