So many salty Nvidia users trying to disguise their envy.
It's rather entertaining.
*sigh*
Yet another non-technical response. Why even comment if you're not even going to try? That's what I'm saying.
NVIDIA's setting does exactly what it says: limit the number of frames to be CPU processed while the GPU is busy. AMD's anti-lag seems to actively change (a) how long each CPU frame takes to be processed and (b) the time between each successive CPU frame. The end result is to try and do the same thing, but they're chalk and cheese when it comes to comparing what they're doing.
Doesn't "limit the number of frames to be CPU processed while the GPU is busy" mean basically the same as not letting the CPU get too far ahead of the GPU? Sure sounds very similar, even though they're worded differently.
Yes, it is similar, but the NVIDIA system is essentially "process no more than xxx frames while the GPU is busy"; the AMD system doesn't alter that amount, it just alters the timing of the processing.
I also have no idea where you got the additional info on how Anti-Lag works from, but I'd be happy to read through anything you have on the subject!
AMD's press release pack from Next Horizon: Gaming (not sure if it's public domain material). I've misinterpreted the presentation, though - I think AMD just controls when the CPU-processed frames are released to the GPU, not how long they take to be executed.
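To make that distinction concrete, here's a rough sketch of the two knobs as I understand them from this thread - purely illustrative pseudologic in Python, not either vendor's actual driver code:

# Illustrative only: neither function reflects real driver internals.

MAX_PRE_RENDERED_FRAMES = 1  # NVIDIA-style cap on queued CPU frames

def cpu_may_prepare_next_frame(frames_queued: int) -> bool:
    # NVIDIA-style: simply refuse to queue more than N prepared frames
    # while the GPU is still busy.
    return frames_queued < MAX_PRE_RENDERED_FRAMES

def anti_lag_start_delay_ms(gpu_frame_ms: float, cpu_frame_ms: float) -> float:
    # AMD-style (one plausible reading of the press material): don't cap
    # the count, but delay when the CPU starts its work so the frame - and
    # the input sampled for it - is as fresh as possible when the GPU is free.
    return max(0.0, gpu_frame_ms - cpu_frame_ms)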
AMD has NEVER excelled in software development, so the results do not surprise me. They must just be trying to tick feature boxes to look as attractive as the features Turing offers, aside from RT and DLSS of course.
Input-lag-sensitive gamers like myself and countless others usually lower graphics details to reduce lag, so I would have liked to see those results included alongside the anti-lag ON and OFF results.
Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not-so-subtle way of AMD admitting the 9700K is the second-best gaming CPU. First being the 9900K.
AMD has never excelled in software? Is this why ReLive is better and far more feature-filled than Shadowplay? Or why AMD drivers historically provide performance gains for longer than Nvidia drivers (source: https://www.hardocp.com/article/2017/02/08/nvidia_video_card_driver_performance_review/13)? Or why FreeSync is the universal standard and G-Sync isn't?
Just admit you don't know what you're talking about.
Wtf is ReLive?
Shadowplay, while it's not without its issues, at least most people know about it.
Isn't there a ton of third-party software that can do what they both do, and better? I'm sure there is.
In a perfect world, it would take exactly the same amount of time for the CPU/engine to process and prepare a frame for the GPU to render as it does for the GPU to render the previous frame. The game engine will tick along at a set rate, polling for input changes and so on. Every engine cycle, the CPU puts together the relevant changes to the game environment, creates the instruction list for the 3D frame, and then issues it to the GPU.
The GPU then renders that frame and, once done, performs a 'present' command, at which point the frame is ready to be displayed. In our ideal world, this would synchronise exactly with the engine finishing its next cycle and issuing a new frame to be rendered. In reality, a '4K ultra-graphics-settings frame' will take longer for the GPU to process than it takes for the engine and CPU to run through a game cycle. That means the CPU will still be polling for input changes and creating new frames that can't be displayed yet, as it checks whether the GPU is available or not (and since it isn't, it just goes ahead onto the next engine cycle).
NVIDIA's system essentially stops the CPU/engine from running too many frames ahead; however, the displayed sequence can still lag behind. The AMD system appears to stall when frames are sent to the GPU, so that the sequence falls more in line.
Edit: AMD's utterly cheesy promo video covers much the same ground as the press pack.
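To put some toy numbers on that mismatch (mine, not AMD's or NVIDIA's): suppose the CPU/engine needs 5 ms per frame and the GPU needs 15 ms. With nothing throttling it, the CPU just keeps pulling further ahead of what's on screen:

# Toy model of an unthrottled pipeline: the CPU prepares a frame every 5 ms,
# the GPU needs 15 ms per frame, and nothing limits how far ahead the CPU runs.
CPU_FRAME_MS = 5.0     # assumed engine/CPU time per frame
GPU_FRAME_MS = 15.0    # assumed GPU render time (GPU-bound case)

gpu_free_at = 0.0
for frame in range(1, 11):
    ready_at = frame * CPU_FRAME_MS          # CPU finishes preparing frame N
    start_at = max(ready_at, gpu_free_at)    # GPU picks it up when it's free
    done_at = start_at + GPU_FRAME_MS        # frame finally reaches the screen
    gpu_free_at = done_at
    frames_prepared_by_then = int(done_at // CPU_FRAME_MS)
    print(f"frame {frame:2d} shown at {done_at:5.1f} ms; "
          f"CPU is {frames_prepared_by_then - frame} frames ahead by then")

In a real game the Direct3D runtime would block the CPU once its context/flip queue (a default of around three frames, as discussed further down) is full; the point of the sketch is just that something has to impose that limit, and where it's imposed is what differs between the two vendors' settings.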
ReLive is AMD's counterpart to Shadowplay (obviously, given the context). From HardOCP's comparison (https://I.imgur.com/urHeMoY.png), it's the clear better choice.
Sure, OBS works better if you want to fiddle with things. But it's a hell of a lot easier to enable a setting in the drivers page than to install and set up OBS. If you just want to record gameplay, stream yourself, or instant-replay clips, it's an easy choice for a ton of gamers who just aren't doing this for Twitch money. It doesn't make sense to go through setting up OBS scenes and inputs and adjusting all of its settings for the right quality/performance ratio.
It seems to be more a case of:
No anti-lag
CPU:X---1----X~~X---2----X~~X----3---X~~X----4---
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------
becomes:
Anti-lag
CPU:X----1---X~~~~~~~~wait~~~~~X---2-----X~~~~~~~~wait~~~~~X----3---X
GPU:~~~~~~X-------------------1----------------X~~X---------------2---------------X~~X---------3------
Note that AMD are saying that this is really for GPU-limited situations, i.e. the GPU render time is much greater than the CPU frame time and engine tick rate. That means you're not going to notice the waiting between the CPU frames because of the length of time it takes to get the frame out onto the monitor.
I think! Unusually for AMD's documentation, it's really not very clear.
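Taking that reading of the diagram literally, here's a single-frame sketch with invented numbers (4 ms of CPU work, 16 ms of GPU work) showing what the waiting buys you - the frame doesn't appear any sooner, but the input baked into it is sampled later:

# Single-frame sketch of the 'wait' idea above (numbers are made up).
CPU_MS = 4.0    # assumed CPU/engine time to build a frame
GPU_MS = 16.0   # assumed GPU render time (heavily GPU-bound)

# Without anti-lag: the CPU starts straight away, samples input at t=0,
# then the finished frame sits around until the GPU is free at t=16.
input_at = 0.0
shown_at = GPU_MS + GPU_MS   # GPU finishes the previous frame, then this one
print(f"no anti-lag: input-to-display = {shown_at - input_at:.0f} ms")

# With anti-lag (my reading of the diagram): the CPU waits so its work
# finishes just as the GPU frees up, so the input it samples is fresher.
wait = GPU_MS - CPU_MS
input_at = wait
shown_at = GPU_MS + GPU_MS   # the frame still appears at exactly the same time
print(f"anti-lag   : input-to-display = {shown_at - input_at:.0f} ms")

Whether the real driver delays the start of the CPU's work or just the hand-off to the GPU isn't clear from AMD's material, but either way the saving comes from sampling input closer to when the frame is actually shown.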
I've owned both AMD and Nvidia within a year's time. AMD has more features... that don't mean anything. Nvidia's game filter destroys AMD's color stuff. The fact that AMD can record in x265 format is absolutely pointless considering it's not a default format for YouTube, and you can just convert your Nvidia recordings with MediaCoder. The gallery for finding your videos in AMD's software is garbage compared to Nvidia's. AMD's YouTube video stuff like denoise is garbage compared to Nvidia's, and the videos look worse, especially lower-quality ones. AMD enables their Enhanced Sync by default in the software, which is worse than Nvidia's Fast Sync. I could go on. AMD also needs this anti-lag stuff considering their frame times feel worse than Nvidia's by default. AMD is terrible.
Here's the rub, though: the GPU isn't processing the frames any faster with anti-lag; the procedure is just reducing the time it takes for your inputs to be displayed on screen. By then, you've already reacted to the visual stimulus - you're just waiting for the visual affirmation that this has taken place. Makes me wonder whether the benefit of the anti-lag system is more of a psychological one, rather than an actual temporal one.
That said, I have the competitive reaction time of a roadkill raccoon, so even if it is a psychological boost, the likes of me won't gain anything. Real comp players, though, just like in any sport, will want to utilise anything that provides an advantage, regardless of the true nature of the improvement. Sports psychology is hugely important at the elite level of the likes of F1, MotoGP, et al, so it must be true for esports.
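For a sense of scale (my own back-of-the-envelope numbers, not from the article): even saving a full queued frame at a GPU-bound 60 FPS is only about 17 ms, against visual reaction times that are typically somewhere around 200 ms, which is why the 'psychological versus temporal' question is a fair one.

# Rough scale of the saving versus human reaction time (illustrative only).
fps = 60                      # assume a GPU-bound 60 FPS scenario
frame_time_ms = 1000 / fps    # ~16.7 ms per displayed frame
frames_saved = 1              # assume anti-lag shaves one queued frame's worth
typical_reaction_ms = 200     # ballpark reaction time to a visual stimulus

saving_ms = frames_saved * frame_time_ms
print(f"latency saved: {saving_ms:.1f} ms "
      f"({saving_ms / typical_reaction_ms:.0%} of a ~{typical_reaction_ms} ms reaction)")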
One dude here says he's a competitive CSGO gamer with a 1080Ti, but says his frames dip to sub 100fps, and that isn't right at all. He says this app helps though, so I'll need to see further testing from the above.
The benefit is definitely real. Having the latest input added to the updated frame makes it feel much more responsive. 1-2 ms at 200-400 FPS may not be noticeable by anybody, but you'll feel it when you have FPS drops (GPU-bound) or in more graphics-intensive games. It's like moving from a regular mouse to a gaming mouse with low input lag (not taking the sensor into account, just the clicks).
Not the best marketing material, as it seems to me. To my ear, "lag" sounds like an issue, while "anti-lag" is what you need to do to work around it.
Adding "Radeon" doesn't make "anti-lag" sound any better for Radeons. Now it feels like one of those Radeon-specific issues that has to be addressed in drivers (that's why we discuss it here, on the forums).
It can definitely be read like that, but fortunately for AMD users it isn't a workaround for an issue, just an improvement they've implemented to how the CPU handles requests.
Even AMD's press documentation doesn't provide much detail, but what one can essentially make out is that the CPU 'frame time' is being increased and the time between successive CPU frames is being increased too. This stops the CPU/game engine from running too far ahead of the frame that the GPU is processing for display. The exact mechanism by which this is done isn't clear.
My comparison was actually of NVIDIA's vs AMD's implementation - the first being a visual representation of NVIDIA's max 1 setting, the second being AMD's AL ON. It focused on just a single frame. It should be noted that the frames the CPU and GPU are working on in my visualisation are not the same. Your take is clearer in that regard.
The point of my comparison was to make sure my understanding of what both companies do is correct. I also wanted to confirm whether I'm right about why AMD's implementation can yield more recent input data than NVIDIA's. So, am I?
Do we have a concrete description of what NVIDIA is doing with their max pre-rendered frames setting? I did some light searching but couldn't find anything besides some old forum/Reddit posts.
I'm also interested in whether the extended time the CPU has to hold on to each frame is actually relevant. That probably goes too deep into game engines and such, but it's interesting. If a game engine sends frame data to the CPU in 'bundles' and those bundles also include input data, then the prolonged period the CPU holds a frame would be irrelevant, as the input data wouldn't change. If, on the other hand, input data is sent/updated independently (a separate thread, perhaps?), it would make sense for the CPU to receive input updates while still holding onto the frame.
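To illustrate why that distinction would matter (a purely hypothetical engine layout, not how any particular game works): if input is snapshotted when frame building starts, holding the finished frame longer gains nothing; if input lives on its own thread and is re-read just before submission, the held frame can carry newer input.

import threading
import time

# Hypothetical: input state updated on its own thread vs a snapshot taken
# when the CPU starts building the frame.
latest_input = {"sampled_at": time.perf_counter()}

def poll_input():
    # Stand-in for an engine thread polling mouse/keyboard on its own.
    for _ in range(50):
        latest_input["sampled_at"] = time.perf_counter()
        time.sleep(0.002)

threading.Thread(target=poll_input, daemon=True).start()

snapshot = dict(latest_input)   # "bundled" input: fixed at frame-build time
time.sleep(0.020)               # pretend the frame is then held for 20 ms

# Re-reading input just before handing the frame to the GPU picks up
# roughly 20 ms of newer input than the snapshot would have.
age_gap_ms = (latest_input["sampled_at"] - snapshot["sampled_at"]) * 1000
print(f"fresh read is ~{age_gap_ms:.0f} ms newer than the frame-start snapshot")

Which of those two models a given engine uses is engine-specific, so the answer probably varies from game to game.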
In the case of AMD's AL system, buffer swaps don't really come into consideration, as game engines can generate new frames for rendering whether or not the GPU has performed a buffer swap; it's this that's the source of the input lag in question.
This is why I'm hoping HDMI 2.1 makes VRR go mainstream so we don't have to deal with this nonsense anymore; we really need to get away from fixed refresh windows.
The benefit is definitely real. Having the latest input added to the updated frame makes it feel much more responsive.
I completely agree, especially with the use of the word 'feel', hence my earlier remarks about the psychological aspect of it. It would make for a fascinating experiment, in the form of a double-blind study of casual, serious, and professional gamers where they play multiple games (different genres, etc.) with and without anti-lag running, and with one half of the study group being given a 'placebo' anti-lag.
Less than 1% of people own a monitor over 144 Hz. The average gamer will see a higher lag reduction, as mentioned in the article. When you're only using 40% of the GPU, don't expect much... /waiting for 2K and 4K reviews
The obligatory "2K is the cinema version of 1080p/FHD" comment.
Oh, and who else was surprised AMD used a 9700K? I'm wondering if this was a not-so-subtle way of AMD admitting the 9700K is the second-best gaming CPU. First being the 9900K.
I found it funny too. I'll assume they were getting such terrible results while testing with a Ryzen that they had to go to an Intel chip to make their graphs look good.
I found this funny.
It was totally unnecessary to add this to what was written, but the inner Intel fanboy couldn't be stopped.
You have no idea why AMD chose that, as you don't work for AMD and weren't part of the build process, but let's just keep assuming...
NVIDIA's setting controls the number of frame entries in what is called the context queue or flip queue; it overrides the default value that Windows/Direct3D uses (which is 3, I think). Left to its own management, Direct3D will let an application fill up a buffer with frames ready for the GPU to render but given that the D3D runtime interacts with the driver, the latter can make the former allow more or fewer frames to be buffered. So the use of NVIDIA's MPRF would be something like this:
MPRF 0
CPU:X----1----X~~~~~~~~~~~~~~X----2----X~~~~~~~~~~~~~X----3----X
GPU:~~~~~~~X-----------1---------X~~~~~~~X----------2----------X~~~~~~X------ etc
MPRF 1
CPU:X----1----X~X----2----X~~~~~~~~~~~~~~~~~~~~X----3----X~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~~~~~~X----------3----------X
MPRF 2
CPU:X----1----X~X----2----X~X----3----X~X----4----X~~~~~~~~~~~~~~ etc
GPU:~~~~~~~X-----------1---------X~X----------2----------X~X----------3----------X
As you can see, increasing the MPRF value allows the GPU to flip onto a new frame almost all of the time, as the buffer is packed with multiple frames ready to be rendered; however, it also means the displayed frame can potentially trail behind what the game engine has generated as the 'current state' frame, aka input lag. Decreasing the MPRF value counters this, but will make the rendered frame rate somewhat janky.
Accessing memory is slow, even if it's super-fast DRAM, and once the context buffer is full, it will almost certainly be locked against further writes (there again, it might not be, given that altering the contents of the context queue after you've generated it isn't how any game that I know of operates).
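Here's a toy version of that queue in Python - my own sketch with invented 5 ms CPU / 15 ms GPU frame times, not NVIDIA's actual driver logic - just to show how each extra slot in the flip queue adds roughly one GPU frame time of input-to-display latency in a GPU-bound game:

# Toy flip-queue model (my own sketch, not NVIDIA's actual implementation).
# Input for a frame is sampled when the CPU starts preparing it; the frame
# is "seen" when the GPU finishes rendering it.
def steady_state_lag(queue_limit, cpu_ms=5.0, gpu_ms=15.0, frames=100):
    cpu_start, cpu_done, gpu_start, gpu_done = [], [], [], []
    for i in range(frames):
        start = cpu_done[i - 1] if i else 0.0
        if i >= queue_limit:
            # the CPU has to wait until an older frame leaves the queue
            start = max(start, gpu_start[i - queue_limit])
        cpu_start.append(start)
        cpu_done.append(start + cpu_ms)
        g = max(cpu_done[i], gpu_done[i - 1] if i else 0.0)
        gpu_start.append(g)
        gpu_done.append(g + gpu_ms)
    # input for the last frame was sampled at cpu_start; it is seen at gpu_done
    return gpu_done[-1] - cpu_start[-1]

for q in (1, 2, 3):
    print(f"max pre-rendered frames = {q}: ~{steady_state_lag(q):.0f} ms input-to-display")

With these numbers it settles at roughly 30/45/60 ms for queue limits of 1/2/3, which lines up with the diagrams above: a smaller cap trades some frame-delivery smoothness for a displayed frame that trails the engine's latest state by fewer frames.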