Testing AMD's new Radeon Anti-Lag Feature

AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.
 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature filled than Shadowplay? Or why AMD drivers consistently provide longer historic gains than Nvidia drivers (Source:https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.
 
AMD has never excelled in software? Is this why Relive is better and far more feature filled than Shadowplay? Or why AMD drivers consistently provide longer historic gains than Nvidia drivers (Source:https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.

I wasn't talking about drivers specifically, but I'll bite.

Here is the conclusion:

When we compared AMD Radeon R9 Fury X and GeForce GTX 980 Ti performance using the latest drivers we found these to be very competitive. Both video cards launched at an MSRP of $649, and today seem to be almost evenly matched in performance.

Almost every reviewer recommended the 980 Ti over the Fury X btw. Techspot has already proven AMD is slow to maximize their driver performance over time, so I'm not sure how you're bragging that their drivers take longer just to match the card they're competing with. I prefer not to wait months for AMD to do their job.

Confusingly, you're also missing a link that shows ReLive as a streaming competitor against OBS, XSplit and Shadowplay. To you, ReLive is more feature rich, yet I have yet to see ReLive ahead of Shadowplay in any Top 5 list of 2019.

Also, assuming everyone with a Freesync monitor has an AMD GPU is like saying everyone with an Intel CPU is using the IGP.

Nvidia is a software company first with a massive portfolio, and you come at me with a driver performance link? You need to step your game up. If AMD's drivers and hardware are so good, then tell me why the 9700K has significantly better frame times than a 3700X? You may want to look at Digital Foundry's video on it.

Show me some links showing AMD on par or better than NVIDIA when it comes to raytracing, AI, Deep Learning and GPU acceleration used for productivity.

AMD is a component company. Nothing more.
 
I would have loved to see some 1440p results with a 5700XT. It should have helped to get closer to the sweet spot in some games. It's nice to see this feature just work.

This is a feature that I can see all gamers enable and use, especially on Polaris and older GCN GPUs (or soon to come Navi 5600/5500 GPUs).

AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.
Nobody considers DLSS to be attractive. The hype is gone and the only thing that remains is a broken feature that people have moved on from. You are much better off using ReShade instead because they ported AMD's CAS (open source FTW). You are not limited to RTX either.
 
One Question and One Comment:

Question: WHAT exactly is AMD doing that reduces the latency? The article isn't really clear; it explains *why* said latency exists, but not what AMD is doing to address it.

Comment: The measured latency figures are meaningless without also providing FPS, as you can't convert them into how many frames of improvement the feature gives without both numbers. At 60 FPS, a 2ms improvement is nothing (given a frame refresh would occur every 16ms). But at 200 FPS? Different story.
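To put rough numbers on that (my own illustration using the 2ms figure above; the FPS values are just examples):

# Convert a latency saving in milliseconds into a fraction of a frame at a given FPS.
# Illustration only: the 2 ms saving and the FPS values are example figures, not article data.
def frames_saved(saving_ms, fps):
    frame_time_ms = 1000.0 / fps
    return saving_ms / frame_time_ms

for fps in (60, 200):
    print(f"{fps} FPS: a 2 ms saving is {frames_saved(2, fps):.2f} of a frame "
          f"(frame time {1000 / fps:.1f} ms)")

# 60 FPS: a 2 ms saving is 0.12 of a frame (frame time 16.7 ms)
# 200 FPS: a 2 ms saving is 0.40 of a frame (frame time 5.0 ms)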
 
One Question and One Comment:

Question: WHAT exactly is AMD doing that reduces the latency? The article isn't really clear; it explains *why* said latency exists, but not what AMD is doing to address it.
Even AMD's press documentation doesn't provide much detail, but what one can essentially make out is that the CPU 'frame time' is being increased, and the time between successive CPU frames is being increased too. This stops the CPU/game engine from running too far ahead of the frame that the GPU is processing for display. The exact mechanism by which this is done isn't clear.
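One way to picture that (purely a conceptual sketch in Python, not AMD's actual implementation; the helper callables and the timing heuristic are my own assumptions):

import time

# Conceptual sketch: delay the start of CPU work for the next frame so the CPU/game
# engine doesn't run far ahead of the GPU. Sampling input later means the frame that
# eventually reaches the screen reflects newer input, which is where the latency win comes from.
def paced_game_loop(simulate_and_record, submit_to_gpu, estimate_gpu_frame_time):
    last_start = time.perf_counter()
    while True:
        target_gap = estimate_gpu_frame_time()   # seconds the GPU needs per frame (estimated)
        elapsed = time.perf_counter() - last_start
        if elapsed < target_gap:
            time.sleep(target_gap - elapsed)     # hold the CPU back instead of letting it queue more frames

        last_start = time.perf_counter()
        frame = simulate_and_record()            # input sampling + game logic + draw-call recording
        submit_to_gpu(frame)                     # GPU renders this while the CPU starts on the next one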

Comment: The measured latency figures are meaningless without also providing FPS, as you can't convert them into how many frames of improvement the feature gives without both numbers. At 60 FPS, a 2ms improvement is nothing (given a frame refresh would occur every 16ms). But at 200 FPS? Different story.
It's mentioned a few times in the body of the text, for example:
Let’s start with Rainbow Six Siege, a game that generally has very low input latency. The game runs well on most hardware and we were pushing above 200 FPS without much sweat.
 
Uh... lag is a delay in network communications. This is broadly known by older gamers from the days when people were on slow connections and delays were immediately noticeable when ping times would spike. Lag could also be caused by a server being overwhelmed, which usually also shows up as high ping times. I hosted my own game servers for several years, and was complimented several times on how well my server ran.

Stuttering is a better term, but it could mean the cause is anything on the computer.

Hitching is the best term, as it relates more specifically to the video card.
 
I would have loved to see some 1440p results with a 5700XT. It should have helped to get closer to the sweet spot in some games. It's nice to see this feature just work.

This is a feature that I can see all gamers enable and use, especially on Polaris and older GCN GPUs (or soon to come Navi 5600/5500 GPUs).

AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.
Nobody considers DLSS to be attractive. The hype is gone and the only thing that remains is a broken feature that people have moved on from. You are much better off using ReShade instead because they ported AMD's CAS (open source FTW). You are not limited to RTX either.

People have given up because they have no understanding of Deep Learning. I only had to google for a few minutes to find out how it works. We have countless people working on autonomous cars that use Deep Learning and it's STILL in development, yet it's supposed to work right away when it comes to gaming? Get outta here!

Even before we get to how much latency is improved using AMD's software, you have to ask yourself why would someone that cares so much about input lag, limit their frame rate to 60-90fps? Thanks but no thanks, AMD.

At the very least, I'd want to see how this app compares to simply lowering graphic detail, because that's what people that really care about reducing input lag currently do. Period.

How does Deep Learning work:
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.
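For anyone wondering what "many layers trained on labeled data" actually looks like, here's a bare-bones sketch (a generic PyTorch example of my own; it has nothing to do with DLSS's actual network, which isn't public):

import torch
import torch.nn as nn

# A tiny multi-layer classifier: the "deep" part is stacking layers, the "learning"
# part is repeatedly adjusting weights to reduce the prediction error on labeled data.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),                   # 10 output classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 64)              # a batch of 32 made-up feature vectors
labels = torch.randint(0, 10, (32,))      # their made-up class labels

for step in range(100):                   # in reality: many passes over a huge labeled dataset
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels) # how wrong the current predictions are
    loss.backward()                       # gradients of the error w.r.t. every weight
    optimizer.step()                      # nudge the weights to reduce the error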

And here is the answer from NVIDIA about blurry frames:

Q: Some users mentioned blurry frames. Can you explain?

A: DLSS is a new technology and we are working hard to perfect it.

We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.

We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.

The rest of the Q&A:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
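As a rough sanity check on those pixel figures (the internal render resolutions below are my own examples, not something NVIDIA specifies):

# Output pixel counts from the resolutions quoted above.
print(3840 * 2160 / 1e6)    # 8.29 M pixels at 4K output
print(1920 * 1080 / 1e6)    # 2.07 M pixels at 1080p output

# Example internal render resolutions that land in the quoted input ranges:
print(2560 * 1440 / 1e6)    # 3.69 M -> inside the 3.5-5.5 M range quoted for 4K DLSS
print(1440 * 810 / 1e6)     # 1.17 M -> inside the 1.0-1.5 M range quoted for 1080p DLSS

Fewer source pixels per output pixel at 1080p is exactly the "less source data" problem the answer describes.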
 
I would have loved to see some 1440p results with a 5700XT. It should have helped to get closer to the sweet spot in some games. It's nice to see this feature just work.

This is a feature that I can see all gamers enable and use, especially on Polaris and older GCN GPUs (or soon to come Navi 5600/5500 GPUs).

AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.
Nobody considers DLSS to be attractive. The hype is gone and the only thing that remains is a broken feature that people have moved on from. You are much better off using ReShade instead because they ported AMD's CAS (open source FTW). You are not limited to RTX either.

People have given up because they have no understanding of Deep Learning. I only had to google for a few minutes to find out how it works. We have countless people working on autonomous cars that use Deep Learning and it's STILL in development, yet it's supposed to work right away when it comes to gaming? Get outta here!

Even before we get to how much latency is improved using AMD's software, you have to ask yourself why would someone that cares so much about input lag, limit their frame rate to 60-90fps? Thanks but no thanks, AMD.

At the very least, I'd want to see how this app compares to simply lowering graphic detail, because that's what people that really care about reducing input lag currently do. Period.

How does Deep Learning work:
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.

And here is the answer from NVIDIA about blurry frames:

Q: Some users mentioned blurry frames. Can you explain?

A: DLSS is a new technology and we are working hard to perfect it.

We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.

We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.

The rest of the Q&A:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
lol, trying too hard to make a point for Nvidia in an AMD-centered article, you must be reeeally desperate to see Nvidia winning these days.
 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature filled than Shadowplay? Or why AMD drivers consistently provide longer historic gains than Nvidia drivers (Source:https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.
I find AMD driver support generally inferior to Nvidia's. There are more bugs and issues with AMD drivers, and they tend to take longer to release than Nvidia's drivers do when it comes to responding to new game launches and to new bugs in new drivers. AMD drivers have been proven to be more stable than Nvidia drivers, however, resulting in fewer crashes.

Also, I'd argue that yes, AMD drivers often reap bigger improvements over time, but I think that's more down to poor optimisation in the first instance.

Also, Nvidia supports their products for far longer.

Given the choice, I'd rather be at the mercy of Nvidia's driver support than AMD's.
 
"The only conclusion we could come to is Radeon Anti-Lag really isn’t designed for true competitive gamers that want super low input latency, because the gains you get in latency-tuned scenarios are minimal."

I have to completely disagree with this summary. I'm a competitive gamer and follow eSports, and can say for a fact that every ms shaved counts. 5-10ms? eSports gamers kill for that kind of reduction. What's not mentioned in the article is that the PC remains responsive with anti-lag on as the FPS fluctuates. It delivers consistency during times in-game when performance may not be the best. I know that for CSGO, FPS on a single machine can fluctuate between 220 - 550 in one match. For Overwatch, between 184 - 218. These are just my observations with my 1080 Ti in these games.

Nvidia should definitely implement this feature as well.
 
lol, trying too hard to make a point for Nvidia in an AMD-centered article, you must be reeeally desperate to see Nvidia winning these days.

*sigh*
That's what you got from my reply?! Wow. And did you even read my original comment? I don't think you did, because your reply didn't show it. How much of my original comment was pro-NVIDIA, huh? How much of it was DLSS focused? Did you miss the part where I said, "aside from RT and DLSS?" Because that's all your boy Puiu saw. Funny how that happens so often, huh? I thought so when I saw it. You need to comprehend what's in front of you, instead of rushing to click that reply button to "go after a fanboy" with nothing technical nor intelligent to add to the discussion. I mean, look at what you wrote? How old are you? lol

If you don't want to educate yourself about Deep Learning before you include yourself in discussions about it, then you should reeeally stay away from anything and everything involving it.

Forgive me for trying to educate people reading from hater scripts about the DL part of DLSS that may think it's just software NVIDIA cooked up in a couple months, because it's not.

BTW, NVIDIA is the software company, not AMD, so I'm actually patient enough to wait a while longer to see how this plays out rather than being a follower and calling DLSS or RT DOA this soon.
 
I have to completely disagree with this summary. I'm a competitive gamer and follow eSports, and can say for a fact that every ms shaved counts. 5-10ms? eSports gamers kill for that kind of reduction.
Typical reaction times to visual and aural stimuli are in the order of a couple hundred milliseconds, so 5 to 10 ms doesn't seem to be much of a gain at all, given that one is reacting to a visual change.
 
"The only conclusion we could come to is Radeon Anti-Lag really isn’t designed for true competitive gamers that want super low input latency, because the gains you get in latency-tuned scenarios are minimal."

I have to completely disagree with this summary. I'm a competitive gamer and follow eSports, and can say for a fact that every ms shaved counts. 5-10ms? eSports gamers kill for that kind of reduction. What's not mentioned in the article is that the PC remains responsive with anti-lag on as the FPS fluctuates. It delivers consistency during times in-game when performance may not be the best. I know that for CSGO, FPS on a single machine can fluctuate between 220 - 550 in one match. For Overwatch, between 84 - 218. These are just my observations with my 1080 Ti in these games.

Nvidia should definitely implement this feature as well.

Well then you're disagreeing with AMD, because the sweet spot for this software to work its magic is 60-90fps @ 4K, and that is def NOT a competitive gamer framerate OR resolution. You yourself are talking about FPS in the triple digits and you're already wanting NVIDIA to implement this, so maybe you have a link to a competitive gamers' discussion rejoicing over this software from AMD. Something to back up your claims?

Your comment lacks any technical jargon and/or source links, so I'm gonna remain skeptical and others should too until you can produce some hard numbers and testimonials from other competitive gamers.

Perhaps you could tell me what the actual difference in latency is with Radeon Anti-lag ON at, let's say, 144fps or higher compared to just lowering graphic detail. That's what I wanna see more than anything before I can say anything positive about this software.

BTW, if your frames per second are going as low as 84 up to triple digits and you're a competitive gamer with a 1080Ti, you're doing something seriously wrong with your setup. That I know for a fact.
 
Well then you're disagreeing with AMD, because the sweet spot for this software to work its magic is 60-90fps @ 4K, and that is def NOT a competitive gamer framerate OR resolution. You yourself are talking about FPS in the triple digits and you're already wanting NVIDIA to implement this, so maybe you have a link to a competitive gamers' discussion rejoicing over this software from AMD. Something to back up your claims?

Test done with CSGO on a 240 Hz monitor. Input lag reduced significantly and the visual difference was noticeable.


FYI, the sweet spot only refers to where you get up to 16ms of savings, so in fact these results do not contradict what AMD has said. Just because AMD specified a sweet spot doesn't mean you won't get anything outside of it; you completely misread that.
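If the saving really is bounded by roughly one frame of CPU run-ahead (my reading of the explanation earlier in the thread, not an official AMD figure), the "up to 16ms" sweet spot and the smaller-but-nonzero gains at high FPS both fall out of simple frame-time math:

# Assumption (mine): the maximum saving is on the order of one frame time, i.e. the
# CPU no longer runs a full frame ahead of the GPU.
for fps in (60, 90, 144, 240):
    frame_time_ms = 1000.0 / fps
    print(f"{fps:>3} FPS: frame time {frame_time_ms:4.1f} ms -> rough ceiling on the saving")

#  60 FPS: frame time 16.7 ms -> rough ceiling on the saving
#  90 FPS: frame time 11.1 ms -> rough ceiling on the saving
# 144 FPS: frame time  6.9 ms -> rough ceiling on the saving
# 240 FPS: frame time  4.2 ms -> rough ceiling on the saving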

Wendel also did a video on this. From every bit of information I've seen, Radeon Anti-lag always reduced input lag regardless of how many FPS the person was getting.

BTW, if your frames per second are going as low as 84 up to triple digits and you're a competitive gamer with a 1080Ti, you're doing something seriously wrong with your setup. That I know for a fact.

typo, fixed.
 
Typical reaction times to visual and aural stimuli are in the order of a couple hundred milliseconds, so 5 to 10 ms doesn't seem to be much of a gain at all, given that one is reacting to a visual change.

This is true, but you need stimuli to react to in the first place. If you have a display and GPU that is 32ms faster than your previous setup, then in fact you are able to react 32ms sooner. Any reduction in the time it takes to display a change will in turn allow you to start reacting before you would have been able to previously.
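In other words, the system latency and the human reaction time simply add up, so the saving carries through no matter how large the reaction-time component is (the numbers below are hypothetical, just to show the additive relationship):

# Hypothetical figures for illustration only.
reaction_time_ms = 200         # typical human visual reaction time (rough ballpark)
system_latency_ms = 60         # made-up event-to-photon latency of a given setup
saving_ms = 10                 # made-up saving from a lower-latency setup

before = system_latency_ms + reaction_time_ms               # 260 ms from in-game event to the player's action
after = (system_latency_ms - saving_ms) + reaction_time_ms  # 250 ms with the faster setup
print(before - after)                                       # 10 -> the full saving shows up in the total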
 
People have given up because they have no understanding of Deep Learning. I only had to google for a few minutes to find out how it works. We have countless people working on autonomous cars that use Deep Learning and it's STILL in development, yet it's supposed to work right away when it comes to gaming? Get outta here!

Even before we get to how much latency is improved using AMD's software, you have to ask yourself why would someone that cares so much about input lag, limit their frame rate to 60-90fps? Thanks but no thanks, AMD.

At the very least, I'd want to see how this app compares to simply lowering graphic detail, because that's what people that really care about reducing input lag currently do. Period.

How does Deep Learning work:
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.

And here is the answer from NVIDIA about blurry frames:

Q: Some users mentioned blurry frames. Can you explain?

A: DLSS is a new technology and we are working hard to perfect it.

We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.

We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.

The rest of the Q&A:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
Deep Learning my a*s. It's been on the market for how long now? How many years should we wait? How many games have it? It's a joke dude and you have zero proper answers to those questions.
Why are you even telling us all of the technical details when the end result is known by everybody? All of that "technology" beaten by a sharpening filter.
FYI DLSS is not the only thing that "might" get better. Sharpening filters have been making big strides too with improved algorithms and efficiency.
 
This is true, but you need stimuli to react to in the first place. If you have a display and GPU that is 32ms faster than your previous setup, then in fact you are able to react 32ms sooner. Any reduction in the time it takes to display a change will in turn allow you to start reacting before you would have been able to previously.
Here's the rub, though: the GPU isn't processing the frames any faster with anti-lag; the procedure is just reducing the time it takes for your inputs to be displayed on screen. By then, you've already reacted to the visual stimulus - you're just waiting for the visual affirmation that this has taken place. Makes me wonder whether the benefit of the anti-lag system is more of a psychological one rather than an actual temporal one.

That said, I have the competitive reaction time of a roadkill raccoon, so even if it is a psychological boost, the likes of me won't gain anything. Real comp players, though, just like in any sport, will want to utilise anything that provides an advantage, regardless of the true nature of the improvement. Sports psychology is hugely important at the elite level in the likes of F1, MotoGP, et al, so it must be true for esports.
 
From AMD's vague description of their new tech, it does seem that it is equivalent to NVIDIA's "Max pre-rendered frames" setting that has been available in their drivers for a long time now. The description of said option basically talks about the same thing as Anti-Lag.

Would be cool to see some results from a GTX/RTX card using different Max pre-rendered frames settings to see what their impact is and how they compare to Anti-Lag.
 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature filled than Shadowplay? Or why AMD drivers consistently provide longer historic gains than Nvidia drivers (Source:https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.
Wtf is Relive?
Shadowplay, while it's not without its issues, at least most people know about it.

Isn't there a ton of 3rd party software that can do what they both do, and better? I'm sure there is.
 
NVIDIA's setting does exactly what it says: limit the number of frames to be CPU processed while the GPU is busy. AMD's anti-lag seems to actively change (a) how long each CPU frame takes to be processed and (b) the time between each successive CPU frame. The end result is to try and do the same thing, but they're chalk and cheese when it comes to comparing what they're doing.
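A conceptual contrast between the two approaches (a pseudocode-style sketch; the helper callables are stand-ins I've made up, since neither driver exposes its internals like this):

# Approach 1: cap how many CPU-prepared frames may queue up ahead of the GPU
# (conceptually what a "max pre-rendered frames" style setting does).
def loop_with_frame_cap(prepare_frame, submit, frames_in_flight, max_prerendered=1):
    while True:
        while frames_in_flight() >= max_prerendered:
            pass                              # CPU stalls here until the GPU drains the queue
        submit(prepare_frame())

# Approach 2: don't cap the queue, but shift *when* the CPU starts each frame so it
# never gets far ahead in time (closer to how the anti-lag behaviour is described
# above); input is sampled later, so the displayed frame reflects newer input.
def loop_with_paced_start(prepare_frame, submit, wait_until_gpu_is_nearly_done):
    while True:
        wait_until_gpu_is_nearly_done()
        submit(prepare_frame())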
 
I would have loved to see some 1440p results with a 5700XT. It should have helped to get closer to the sweet spot in some games. It's nice to see this feature just work.

This is a feature that I can see all gamers enable and use, especially on Polaris and older GCN GPUs (or soon to come Navi 5600/5500 GPUs).

AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.
Nobody considers DLSS to be attractive. The hype is gone and the only thing that remains is a broken feature that people have moved on from. You are much better off using ReShade instead because they ported AMD's CAS (open source FTW). You are not limited to RTX either.

People have given up because they have no understanding of Deep Learning. I only had to google for a few minutes to find out how it works. We have countless people working on autonomous cars that use Deep Learning and it's STILL in development, yet it's supposed to work right away when it comes to gaming? Get outta here!

Even before we get to how much latency is improved using AMD's software, you have to ask yourself why would someone that cares so much about input lag, limit their frame rate to 60-90fps? Thanks but no thanks, AMD.

At the very least, I'd want to see how this app compares to simply lowering graphic detail, because that's what people that really care about reducing input lag currently do. Period.

How does Deep Learning work:
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.

And here is the answer from NVIDIA about blurry frames:

Q: Some users mentioned blurry frames. Can you explain?

A: DLSS is a new technology and we are working hard to perfect it.

We built DLSS to leverage the Turing architecture’s Tensor Cores and to provide the largest benefit when GPU load is high. To this end, we concentrated on high resolutions during development (where GPU load is highest) with 4K (3840x2160) being the most common training target. Running at 4K is beneficial when it comes to image quality as the number of input pixels is high. Typically for 4K DLSS, we have around 3.5-5.5 million pixels from which to generate the final frame, while at 1920x1080 we only have around 1.0-1.5 million pixels. The less source data, the greater the challenge for DLSS to detect features in the input frame and predict the final frame.

We have seen the screenshots and are listening to the community’s feedback about DLSS at lower resolutions, and are focusing on it as a top priority. We are adding more training data and some new techniques to improve quality, and will continue to train the deep neural network so that it improves over time.

The rest of the Q&A:
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
lol, trying too hard to make a point for Nvidia in an AMD-centered article, you must be reeeally desperate to see Nvidia winning these days.
You seem so focused on not listening, like most users.
Nvidia isn't desperate for anything; they won a long time ago. Users defend what they like.
 
Wtf is Relive?
Shadowplay, while it's not without its issues, at least most people know about it.

Isn't there a ton of 3rd party software that can do what they both do, and better? I'm sure there is.

I think everyone that knows streaming can say OBS is used far more than anything else. I've never heard of anyone using ReLive over any other popular option currently available.
 