Testing AMD's new Radeon Anti-Lag Feature

NVIDIA's setting does exactly what it says: limit the number of frames to be CPU-processed while the GPU is busy. AMD's anti-lag seems to actively change (a) how long each CPU frame takes to be processed and (b) the time between each successive CPU frame. Both aim for the same end result, but they're chalk and cheese when it comes to how they go about it.

Doesn't "limit the number of frames to be CPU processed while the GPU is busy" mean basically the same as not letting the CPU to get too far ahead of the GPU ? Sure sounds very similar even though they are worded differently.

I also have no idea where you got the additional info on how Anti-Lag works from but I'd be happy to read through anything you have on the subject :)
 
Doesn't "limit the number of frames to be CPU processed while the GPU is busy" mean basically the same as not letting the CPU to get too far ahead of the GPU ? Sure sounds very similar even though they are worded differently.
Yes, it is similar but the NVIDIA system is essentially "process no more than xxx frames while the GPU is busy"; the AMD system doesn't alter that amount, just alter the timings of the processing.

I also have no idea where you got the additional info on how Anti-Lag works from but I'd be happy to read through anything you have on the subject :)
AMD press release pack from Next Horizon: Gaming (not sure if it's public domain material). I've misinterpreted the presentation though - I think AMD just controls when the CPU-processed frames are released to the GPU, not how long they take to be executed.

In a perfect world, it would take exactly the same amount of time for the CPU/engine to process and prepare a frame for the GPU to render as it does for the GPU to render the previous frame. The game engine will tick along at a set rate, polling for input changes and so on. Every engine cycle, the CPU puts together the relevant changes to the game environment, creates the instruction list for the 3D frame, and then issues it to the GPU.

The GPU then renders that frame and, once done, performs a 'present' command, at which point the frame is ready to be displayed. In our ideal world, this would synchronise exactly with the engine finishing off its next cycle and issuing a new frame to be rendered. In reality, a '4K ultra-graphics-setting frame' will take longer to be GPU-processed than it takes for the engine and CPU to run through a game cycle. That means the CPU/engine will still be polling for input changes and creating new frames that can't be displayed yet; it checks to see if the GPU is available or not (and since it isn't, it just goes ahead onto the next engine cycle).

NVIDIA's system essentially stops the CPU/engine from running too many frames ahead; however, the sequence can still lag behind. The AMD system appears to delay when the frames are sent to the GPU, so that the sequence falls more into line.
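
To put some rough numbers on that imbalance, here's a little Python sketch - the frame times are completely made up for illustration, and it's only a toy model of the pipeline described above, not how any real driver schedules things:

# Toy model: a cheap CPU/engine frame feeding a much more expensive GPU frame.
CPU_MS = 4.0    # hypothetical engine cycle: poll input, build the command list
GPU_MS = 16.0   # hypothetical GPU render + present time for one frame

def run_ahead(frames=6):
    cpu_end = 0.0   # when the CPU finishes preparing the current frame
    gpu_end = 0.0   # when the GPU finishes presenting a frame
    for i in range(frames):
        input_time = cpu_end                    # input polled at the start of the cycle
        cpu_end = input_time + CPU_MS           # engine rolls straight onto the next cycle
        gpu_end = max(gpu_end, cpu_end) + GPU_MS
        print(f"frame {i}: input at {input_time:5.1f} ms, on screen at "
              f"{gpu_end:5.1f} ms, lag {gpu_end - input_time:5.1f} ms")

run_ahead()
# The lag grows every frame because the engine keeps producing frames the GPU
# can't display yet; in practice Direct3D's default cap of around three buffered
# frames is what stops that gap from growing forever.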

Edit: AMD's utterly cheesy video is very similar to the press pack:

 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature-filled than Shadowplay? Or why AMD drivers consistently provide larger performance gains over time than Nvidia drivers (source: https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.

I found this funny.

"Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K."

It was totally not necessary to add this to what was written, but the inner Intel fanboy couldn't be stopped.
You have no idea why AMD chose that, as you don't work for AMD and were not part of the build process, but let's just keep assuming...
 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature-filled than Shadowplay? Or why AMD drivers consistently provide larger performance gains over time than Nvidia drivers (source: https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.
Wtf is Relive?
Shadowplay, while it's not without its issues, at least most know about it.

Isn't there a ton of 3rd party software that can do what they both do, and better? I'm sure there is.


Relive is AMD's counterpart to Shadowplay (obviously, given the context). From HardOCP (https://I.imgur.com/urHeMoY.png) it's the clear better choice.

Sure, OBS works better if you want to fiddle with things. But it's a hell of a lot easier to enable a setting in the drivers page than install and set up OBS. If you just want to record gameplay, stream yourself, or instant replay clips it's an easy choice for a ton of gamers who just aren't doing this for twitch money. It doesn't make sense to go through setting up OBS scenes and inputs and adjusting all of its settings for the right quality/performance ratio.
 
Doesn't "limit the number of frames to be CPU processed while the GPU is busy" mean basically the same as not letting the CPU to get too far ahead of the GPU ? Sure sounds very similar even though they are worded differently.
Yes, it is similar but the NVIDIA system is essentially "process no more than xxx frames while the GPU is busy"; the AMD system doesn't alter that amount, just alter the timings of the processing.

I also have no idea where you got the additional info on how Anti-Lag works from but I'd be happy to read through anything you have on the subject :)
AMD press release pack from Next Horizon: Gaming (not sure if it's public domain material). I've misinterpreted the presentation though - I think AMD just controls when the CPU-processed frames are released to the GPU, not how long they take to be executed.

In a perfect world, it would take exactly the same amount of time for the CPU/engine to process and prepare a frame for the GPU to render as it does for the GPU to render the previous frame. The game engine will tick along at a set rate, polling for input changes and so on. Every engine cycle, the CPU puts together the relevant changes to the game environment, creates the instruction list for the 3D frame, and then issues it to the GPU.

The GPU then renders that frame and, once done, performs a 'present' command, at which point the frame is ready to be displayed. In our ideal world, this would synchronise exactly with the engine finishing off its next cycle and issuing a new frame to be rendered. In reality, a '4K ultra-graphics-setting frame' will take longer to be GPU-processed than it takes for the engine and CPU to run through a game cycle. That means the CPU/engine will still be polling for input changes and creating new frames that can't be displayed yet; it checks to see if the GPU is available or not (and since it isn't, it just goes ahead onto the next engine cycle).

NVIDIA's system essentially stops the CPU/engine from running too many frames ahead; however, the sequence can still lag behind. The AMD system appears to delay when the frames are sent to the GPU, so that the sequence falls more into line.

Edit: AMD's utterly cheesy video is very similar to the press pack:


OK, let's use a fictional example of a frame far, far away:
CPU --------------------------X
GPU -------------------------------------------------------Y

If I understand it correctly, what you're saying is that NVIDIA's implementation halts any input from the user (or stops registering it) at X and waits for the GPU to reach Y. It then sends the frame it was "holding" to the GPU and starts to work on the next one.

CPU -------------------------------------------------------X
GPU -------------------------------------------------------Y
AMD, however, extends the period during which the CPU registers the user's input and is able to apply it to the frame it actually "holds", or possibly registers it just as it's ready to send the frame to the GPU at Y, making the effects/results of the user's input more recent.

Am I getting this right ?
 
It seems to be more a case of:

No anti-lag
CPU:X---1----X~~X---2----X~~X----3---X~~X----4---
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------

becomes:

Anti-lag
CPU:X----1---X~~~~~~~~wait~~~~~X---2-----X~~~~~~~~wait~~~~~X----3---X
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------

Note that AMD are saying that this is really for GPU-limited situations, i.e. the GPU render time is much greater than the CPU frame time and engine tick rate. That means you're not going to notice the waiting between the CPU frames, because of the length of time it takes to get the frame out onto the monitor.

I think! Unusually for AMD's documentation, it's really not very clear.
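
For what it's worth, here's the same sort of toy model with the 'wait' added - the numbers are made up and the rule below is just my guess at the general principle, not AMD's actual algorithm:

CPU_MS = 4.0    # hypothetical engine frame time (GPU-limited case, as AMD describe)
GPU_MS = 16.0   # hypothetical GPU render time per frame

def final_lag(anti_lag, frames=20):
    cpu_end = gpu_end = 0.0
    lag = 0.0
    for _ in range(frames):
        start = cpu_end
        if anti_lag:
            # the 'wait': hold the engine back so its frame is finished just as
            # the GPU frees up, meaning the input it sampled is as fresh as possible
            start = max(start, gpu_end - CPU_MS)
        cpu_end = start + CPU_MS
        gpu_end = max(gpu_end, cpu_end) + GPU_MS
        lag = gpu_end - start       # input sampled at 'start', shown at gpu_end
    return lag

print(f"free-running engine: {final_lag(False):.0f} ms lag by frame 20")
print(f"with the wait:       {final_lag(True):.0f} ms lag, every frame")
# The free-running figure is artificial (the usual cap of ~3 buffered frames would
# bound it in reality); the point is that the wait pulls the lag down towards
# CPU_MS + GPU_MS, which is the minimum the pipeline allows.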
 
Relive is AMD's counterpart to Shadowplay (obviously, given the context). From HardOCP (https://I.imgur.com/urHeMoY.png) it's the clear better choice.

Sure, OBS works better if you want to fiddle with things. But it's a hell of a lot easier to enable a setting in the drivers page than install and set up OBS. If you just want to record gameplay, stream yourself, or instant replay clips it's an easy choice for a ton of gamers who just aren't doing this for twitch money. It doesn't make sense to go through setting up OBS scenes and inputs and adjusting all of its settings for the right quality/performance ratio.

AMD does like trying to compete with NVIDIA with lazy software, which is why AMD sucks at software development. Remember how Raptr came to be? Instead of doing the work themselves, AMD chose to get their data from user input. Raptr no longer exists. Nvidia developed G-Sync in-house, and AMD piggybacked the HDMI spec and left Freesync improvements to monitor manufacturers, while Nvidia was, and still is, hands-on and side by side with manufacturers. Remember AMD's RAMDisk software? Trash. Mantle? Punted to Khronos, and Vulkan is STILL on version 1.0!

AMD sucks at software, and I won't praise AMD until I see consistent, acceptable improvements. Needing three process nodes and still being behind Intel in gaming is not winning me over. I, as well as about 80% of consumers, have no need for the high number of cores Ryzen has. It's definitely not an enthusiast's CPU. AMD sucks at thermals and therefore clock speeds suffer. Thanks, but no thanks.
 
It seems to be more a case of:

No anti-lag
CPU:X---1----X~~X---2----X~~X----3---X~~X----4---
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------

becomes:

Anti-lag
CPU:X----1---X~~~~~~~~wait~~~~~X---2-----X~~~~~~~~wait~~~~~X----3---X
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------

Note that AMD are saying that this is really for GPU-limited situations, i.e. the GPU render time is much greater than the CPU frame time and engine tick rate. That means you're not going to notice the waiting between the CPU frames, because of the length of time it takes to get the frame out onto the monitor.

I think! Unusually for AMD's documentation, it's really not very clear.

My comparison was actually NVIDIA's vs AMD's implementation - the first being a visual representation of NVIDIA's max 1 setting, the second being AMD's AL ON. It focused on just a single frame. It should be noted that the frames the CPU and GPU are working on in my visualisation are not the same. Your take is clearer in that regard.
The point of my comparison was to make sure my understanding of what both companies do is correct. I also wanted to confirm whether I'm correct in my understanding of why AMD's implementation can yield more recent input data than NVIDIA's. So, am I ? :)

Do we have a concrete description of what NVIDIA is doing with their max pre-ren setting ? Did some light searching but couldn't find anything besides some old forum/reddit posts.

I'm also interested if the extended time the CPU has to hold on to each frame actually is relevant. That probably goes too deep into game engines and such but it's interesting. If a game engine sends frame data to the CPU in "bundles" and those "bundles" also include input data, then that would make the prolonged period of CPU time a given frame has irrelevant as the input data wouldn't change. If on the other hand input data is sent/updated independently (separate thread perhaps ?) it would make sense for the CPU to receive input updates while still holding onto the frame.
 
AMD has NEVER excelled in software development so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.

Input lag sensitive gamers like myself and countless others usually lower graphic details to reduce the lag, so I would have liked to have seen those results included alongside the anti-lag software ON and OFF results.

Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

AMD has never excelled in software? Is this why Relive is better and far more feature-filled than Shadowplay? Or why AMD drivers consistently provide larger performance gains over time than Nvidia drivers (source: https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why Freesync is the universal standard and GSync isn't?

Just admit you don't know what you're talking about.
I've owned both AMD and Nvidia within a year's time. AMD has more features... that don't mean anything. Nvidia's game filter destroys AMD's color stuff. The fact that AMD can record in x265 format is absolutely pointless considering it's not a default format for YouTube, and you can just convert your Nvidia recordings with MediaCoder. The gallery for finding your videos in AMD's software vs Nvidia's is garbage. AMD's YouTube video stuff like denoise etc. is garbage compared to Nvidia's, and the YouTube videos look worse, especially lower quality ones. AMD enables their Enhanced Sync by default in the software, which sucks compared to Nvidia's Fast Sync. I could go on. AMD also needs this anti-lag stuff considering their frame times feel worse than Nvidia's by default. AMD is terrible.
 
Here's the rub, though: the GPU isn't processing the frames any faster with anti-lag; the procedure is just reducing the time it takes for your inputs to be displayed on screen. By then, you've already reacted to the visual stimulus - you're just waiting for the visual affirmation that this has taken place. Makes me wonder whether the benefit of the anti-lag system is more of a psychological one than an actual temporal one.

That said, I have the competitive reaction time of a roadkill raccoon, so even if it is a psychological boost, the likes of me won't gain anything. Real comp players, though, just like in any sport, will want to utilise anything that provides an advantage, regardless of the true nature of the improvement. Sports psychology is hugely important at the elite level of the likes of F1, MotoGP, et al, so it must be true for esports.

Seeing the results of your actions as quickly as possible is important in any game, especially eSports titles. The difference between two players' reaction times matters, as does latency to the server and any input lag added by their own system. If all other values are equal, a player with lower input lag can peek a corner and get their shot off first simply because their input lag is a few ms lower. This is because when they press "A" to go left, their screen will show what's around that corner before the other player's will. In this case every ms counts, as it is the difference between getting a kill and getting killed.

Mind you, that is just a single example; you have to extrapolate that to every action in the game. This is especially important the more buttons you are pressing at once. If you are able to see the result of your actions more quickly, you can perform your next set of actions faster than those with higher input lag. The human brain can naturally filter out a good amount of latency, which is likely why people who are used to regular gaming don't feel a difference. The lower the latency, the more drawn in I feel when gaming, similar to the effect that VR has.
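
As a crude back-of-the-envelope (every number below is made up purely for illustration):

# Made-up latency budget for two otherwise identical players; values in ms.
shared = {
    "mouse + USB polling":   2,
    "network to the server": 25,
    "monitor processing":    10,
    "human reaction time":   180,
}
render_lag_a = 30   # e.g. GPU-bound with a deep frame queue
render_lag_b = 18   # same rig with the queue kept short

total_a = sum(shared.values()) + render_lag_a
total_b = sum(shared.values()) + render_lag_b
print(f"player A: ~{total_a} ms from event to reaction, player B: ~{total_b} ms")
print(f"player B gets their shot off ~{total_a - total_b} ms earlier, all else equal")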

I do agree on the psychological aspect.
 
I've owned both AMD and Nvidia within a year's time. AMD has more features... that don't mean anything. Nvidia's game filter destroys AMD's color stuff. The fact that AMD can record in x265 format is absolutely pointless considering it's not a default format for YouTube, and you can just convert your Nvidia recordings with MediaCoder. The gallery for finding your videos in AMD's software vs Nvidia's is garbage. AMD's YouTube video stuff like denoise etc. is garbage compared to Nvidia's, and the YouTube videos look worse, especially lower quality ones. AMD enables their Enhanced Sync by default in the software, which sucks compared to Nvidia's Fast Sync. I could go on. AMD also needs this anti-lag stuff considering their frame times feel worse than Nvidia's by default. AMD is terrible.

I don't doubt that. AMD is a hardware company first while NVIDIA is software first, and with AMD's software failures in the past, of course I was skeptical when I saw this app. I remember buying two 6950s and I returned them the next day for massive stutter in BFBC2, which came out 9 MONTHS before the 6950 did.

Digital Foundry tested 3700X vs 9700K and Ryzen consistently had the higher frame time, so I could believe this anti-lag app is possibly cancelling some of that out, which would make this app even more disappointing.

What I'm truly waiting for is Battle(non)sense to test this new app. There are so many variables to consider, it will take a lot of testing and experience to find AMD's "secret sauce", and I think he or Digital Foundry are the ones to do it. Reddit seems to be torn, so it's def not a slam dunk for AMD right now.

One dude here says he's a competitive CSGO gamer with a 1080Ti, but says his frames dip to sub 100fps, and that isn't right at all. He says this app helps though, so I'll need to see further testing from the above.
 
One dude here says he's a competitive CSGO gamer with a 1080Ti, but says his frames dip to sub 100fps, and that isn't right at all. He says this app helps though, so I'll need to see further testing from the above.

That was a typo. I forgot the 1. I said as such above in my reply to you. I also never specified which game I play competitively in, I just gave two examples of games I play.
 
Not the best marketing material, as it seems to me. To my ear, "lag" sounds like some issue, while "anti-lag" is what you need to do to work around it.

Adding "Radeon" doesn't make "anti-lag" sound better for Radeons. Now it feels like one of those Radeon-specific issues that has to be addressed in drivers (that's why we discuss it here, on the forums).
 
Here's the rub, though: the GPU isn't processing the frames any faster with anti-lag; the procedure is just reducing the time it takes for your inputs to be displayed on screen. By then, you've already reacted to the visual stimulus - you're just waiting for the visual affirmation that this has taken place. Makes me wonder whether the benefit of the anti-lag system is more of a psychological one than an actual temporal one.

That said, I have the competitive reaction time of a roadkill raccoon, so even if it is a psychological boost, the likes of me won't gain anything. Real comp players, though, just like in any sport, will want to utilise anything that provides an advantage, regardless of the true nature of the improvement. Sports psychology is hugely important at the elite level of the likes of F1, MotoGP, et al, so it must be true for esports.
The benefit is definitely real. Having the latest input added to the updated frame makes it feel much more responsive. 1-2ms at 200-400FPS may not be noticeable by anybody, but you'll feel it when you have FPS drops (GPU bound) or in more graphics intensive games. It's like when you've moved from a regular mouse to a gaming mouse with low input lag (not taking the sensor into account, just the clicks).

Not the best marketing material, as it seems to me. To my ear, "lag" sounds like some issue, while "anti-lag" is what you need to do to work around it.

Adding "Radeon" doesn't make "anti-lag" sound better for Radeons. Now it feels like one of those Radeon-specific issues that has to be addressed in drivers (that's why we discuss it here, on the forums).
It can definitely be read like that, but fortunately for AMD users it is not a workaround for an issue, just an improvement they've implemented to how the CPU handles frame requests.
 
Not the best marketing material, as it seems to me. To my ear, "lag" sounds like some issue, while "anti-lag" is what you need to do to work around it.

Adding "Radeon" doesn't make "anti-lag" sound better for Radeons. Now it feels like one of those Radeon-specific issues that has to be addressed in drivers (that's why we discuss it here, on the forums).

Very impressive. I actually never thought about that.
 
Even AMD's press documentation doesn't provide much detail, but what one can essentially make out is that the CPU 'frame time' is being increased and so is the time between successive CPU frames. This stops the CPU/game engine from running too far ahead of the frame that the GPU is processing to be displayed. The exact mechanism by which this is being done isn't clear.

Well, that would explain why you see FPS drops when the feature is enabled.

This is why I'm hoping HDMI 2.1 makes VRR go mainstream so we don't have to deal with this nonsense anymore; we really need to get away from fixed refresh windows.
 
My comparison was actually NVIDIA's vs AMD's implementation - the first being a visual representation of NVIDIA's max 1 setting, the second being AMD's AL ON. It focused on just a single frame. It should be noted that the frames the CPU and GPU are working on in my visualisation are not the same. Your take is clearer in that regard.
The point of my comparison was to make sure my understanding of what both companies do is correct. I also wanted to confirm whether I'm correct in my understanding of why AMD's implementation can yield more recent input data than NVIDIA's. So, am I ? :)

Do we have a concrete description of what NVIDIA is doing with their max pre-ren setting ? Did some light searching but couldn't find anything besides some old forum/reddit posts.
NVIDIA's setting controls the number of frame entries in what is called the context queue or flip queue; it overrides the default value that Windows/Direct3D uses (which is 3, I think). Left to its own management, Direct3D will let an application fill up a buffer with frames ready for the GPU to render but given that the D3D runtime interacts with the driver, the latter can make the former allow more or fewer frames to be buffered. So the use of NVIDIA's MPRF would be something like this:

MPRF 0
CPU:X----1----X~~~~~~~~~~~~~~X----2----X~~~~~~~~~~~~~X----3----X
GPU:~~~~~~~X-----------1---------X~~~~~~~X----------2----------X~~~~~~X------ etc

MPRF 1
CPU:X----1----X~X----2----X~~~~~~~~~~~~~~~~~~~~X----3----X~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~~~~~~X----------3----------X

MPRF 2
CPU:X----1----X~X----2----X~X----3----X~X----4----X~~~~~~~~~~~~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~X----------3----------X

As you can see, increasing the MPRF value allows the GPU to flip onto a new frame almost all of the time, as the buffer is packed with multiple frames ready to be rendered; however, it also means the displayed frame can potentially trail behind what the game engine has generated as a 'current state' frame, aka input lag. Decreasing the MPRF value counters this, but will make the rendered frame rate somewhat janky.
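
To see what the queue depth does to the numbers, here's a toy simulation - again with made-up frame times for a heavily GPU-bound game, and a simplified queue rule rather than NVIDIA's actual driver logic:

CPU_MS = 4.0
GPU_MS = 16.0

def steady_state_lag(queue_depth, frames=30):
    cpu_end = gpu_end = 0.0
    gpu_start_times = []        # when each queued frame was picked up by the GPU
    lag = 0.0
    for i in range(frames):
        start = cpu_end
        if i >= queue_depth:
            # the engine has to wait for a free slot, i.e. until frame
            # (i - queue_depth) has been handed over to the GPU
            start = max(start, gpu_start_times[i - queue_depth])
        cpu_end = start + CPU_MS
        gpu_start = max(gpu_end, cpu_end)
        gpu_start_times.append(gpu_start)
        gpu_end = gpu_start + GPU_MS
        lag = gpu_end - start   # input sampled at 'start', displayed at gpu_end
    return lag

for depth in (1, 2, 3):
    print(f"queue depth {depth}: ~{steady_state_lag(depth):.0f} ms input-to-display lag")
# Each extra slot in the queue keeps the GPU fed (smoother pacing) but adds
# roughly one GPU frame time of input lag when the game is GPU-bound.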

I'm also interested if the extended time the CPU has to hold on to each frame actually is relevant. That probably goes too deep into game engines and such but it's interesting. If a game engine sends frame data to the CPU in "bundles" and those "bundles" also include input data, then that would make the prolonged period of CPU time a given frame has irrelevant as the input data wouldn't change. If on the other hand input data is sent/updated independently (separate thread perhaps ?) it would make sense for the CPU to receive input updates while still holding onto the frame.
Accessing memory is slow, even if it's super fast DRAM, and once the context buffer is full, it will almost certainly be locked against further writes (then again, it might not be, given that altering the contents of the context queue after you've generated it isn't how any game that I know of operates).
 
This is why I'm hoping HDMI 2.1 makes VRR go mainstream so we don't have to deal with this nonsense anymore; we really need to get away from fixed refresh windows.
In the case of AMD's AL system, buffer swaps don't really come into consideration, as game engines can generate new frames for rendering whether or not the GPU has performed a buffer swap; it's this that's the source of the input lag in question.

Of course, systems such as vsync, Gsync, and Freesync can also induce input lag via a different mechanism, and VRR would go a long way to help fix this. Mind you, so would a continuous direct neural input from the GPU into our brains :D
 
The benefit is definitely real. Having the latest input added to the updated frame makes it feel much more responsive.
I completely agree, especially with the use of the word 'feel', hence my earlier remarks about the psychological aspect of it. It would make for a fascinating experiment, in the form of a double blind study of casual, serious, and professional gamers where they play multiple games (different genres, etc) with and without anti-lag running, and with one half of the study group being given a 'placebo' anti-lag.

It's worth noting that there will always be input lag - it's physically impossible to remove, as the GPU cannot render and present a frame until the engine has polled for input changes and issued it. So the point to consider is: at what point, in terms of the time between input poll and frame present, does it stop being noticeable? This too would make for a great study.
 
Less than 1% of people own a monitor over 144 Hz. The average gamer will see a higher lag reduction, as mentioned in the article. When you are only using 40% of the GPU, don't expect much... /waiting for 2K and 4K reviews
 
Less than 1% of people own a monitor over 144 Hz. The average gamer will see a higher lag reduction, as mentioned in the article. When you are only using 40% of the GPU, don't expect much... /waiting for 2K and 4K reviews
The obligatory "2K is the cinema version of 1080p/FHD" comment. :D
 
Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not so subtle way of AMD admitting the 9700K is the second best gaming CPU. First being the 9900K.

I found this funny.

It was totally not necessary to add this to what was written, but the inner Intel fanboy couldn't be stopped.
You have no idea why AMD chose that, as you don't work for AMD and were not part of the build process, but let's just keep assuming...
I found it funny too. I'll assume they were getting such terrible results while testing with a Ryzen that they had to go to an Intel chip to make their graphs look good.
 
NVIDIA's setting controls the number of frame entries in what is called the context queue or flip queue; it overrides the default value that Windows/Direct3D uses (which is 3, I think). Left to its own management, Direct3D will let an application fill up a buffer with frames ready for the GPU to render but given that the D3D runtime interacts with the driver, the latter can make the former allow more or fewer frames to be buffered. So the use of NVIDIA's MPRF would be something like this:

MPRF 0
CPU:X----1----X~~~~~~~~~~~~~~X----2----X~~~~~~~~~~~~~X----3----X
GPU:~~~~~~~X-----------1---------X~~~~~~~X----------2----------X~~~~~~X------ etc

MPRF 1
CPU:X----1----X~X----2----X~~~~~~~~~~~~~~~~~~~~X----3----X~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~~~~~~X----------3----------X

MPRF 2
CPU:X----1----X~X----2----X~X----3----X~X----4----X~~~~~~~~~~~~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~X----------3----------X

As you can see, increasing the MPRF value allows the GPU to flip onto a new frame almost all of the time, as the buffer is packed with multiple frames ready to be rendered; however, it also means the displayed frame can potentially trail behind what the game engine has generated as a 'current state' frame, aka input lag. Decreasing the MPRF value counters this, but will make the rendered frame rate somewhat janky.


Accessing memory is slow, even if it's super fast DRAM, and once the context buffer is full, it will almost certainly be locked against further writes (then again, it might not be, given that altering the contents of the context queue after you've generated it isn't how any game that I know of operates).

Yeah, I get it now. Looking back on yesterday's convo, I have no idea where I got the whole "extending frame hold time" concept from o_O Your example and AMD's own presentation were quite clear on what is going on. I have to have a word with my brain :p Either way, thank you, sir, for explaining the whole concept behind both techniques.

Looks like AMD actually gave some thought to their tech. It's not as crude as NVIDIA's implementation. It's technically superior, no doubt. As to its practical benefit, especially in an eSports environment, I'm not really sold on it.

On a more general note, I fail to see why AMD chose to market their AL tech as something big for eSports. Sure, they chose to test some popular eSports games, but the settings they used are not something a serious competitive player (let alone a pro) would even consider. I understand it's only natural they used those settings to show off their tech in the best light possible, but why not market it to those that would see the most benefit ? Weird choice.
 
I found it funny too. I'll assume they were getting such terrible results while testing with a Ryzen that they had to go to an Intel chip to make their graphs look good.

AMD has done it countless times in the past, but it was especially funny this time, because of all the hype surrounding AMD's comeback with 3rd gen Ryzen and being first to 7nm.
 