Tackling the subject of GPU bottlenecking and CPU gaming benchmarks, using Ryzen as an...


Steve

Staff member

Today I’m going to try and tackle a topic that seems to be causing a great deal of confusion, especially after our initial review of Ryzen and the gaming performance observed when testing at 1080p. There is a huge mix of results regarding Ryzen’s gaming performance and a big part of that has to do with GPU bottlenecks.

Since our Ryzen review, we've followed up with a gaming-specific feature testing 16 games at 1080p and 1440p, and yes, I’m also testing with SMT enabled and disabled. Results are very interesting, but this article is about addressing GPU bottlenecks and explaining when and where you are seeing them and why.

So first, why is avoiding a GPU bottleneck so important for testing and understanding CPU performance?

You could test at 4K like some have and show that Ryzen CPUs can indeed match high-end Skylake and Kaby Lake processors when coupled with a GTX 1080 or even a Titan XP. It is true, the 1800X can match the 7700K at 4K in the latest games using high-end GPUs.

Does that mean the 1800X is as fast or possibly faster than the 7700K? No!

We know this isn't true because when we test at a lower resolution, the 7700K is quite a bit faster in most cases. Now you might say, "Steve, I don’t care about 1080p gaming, I have an ultrawide 1440p display so I only care how Ryzen performs here." That’s fine, although you are sticking your head in the sand, and that can come back to bite you.

"Steve, I don’t care about 1080p gaming, I have an ultrawide 1440p display so I only care how Ryzen performs here."

Before I explain why, let me just touch on why we test CPU gaming performance at 1080p and why 4K results, at least on their own, are completely useless. If you test at 1080p, technically you don’t need to show 4K results, whereas if you test at 4K you very much need to show 1080p performance.

Here’s a hypothetical example:

Let’s say the Titan XP is capable of rendering a minimum of 40 fps in Battlefield 1 at 4K. Now if we benchmark half a dozen CPUs including some Core i3, i5, i7 and Ryzen models, all of which allow the Titan XP to deliver its optimal 4K performance, does that mean they are all equal in terms of gaming performance?

It certainly appears so when looking exclusively at 4K performance; however, that simply means all of them are capable of keeping frame rates above 40 fps at all times.

By lowering the resolution and/or the visual quality settings, which reduces GPU load, we start to expose the CPU as the weakest link. You can do this by testing at extremely low resolutions such as 720p with low quality settings, though that'd be going a bit too far the other way. I have found that 1080p with high to ultra-high quality settings on a Pascal Titan X is a realistic configuration for measuring CPU gaming performance.
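
To make the reasoning concrete, here is a minimal sketch in Python (with invented numbers; none of these figures come from our charts) of the model being applied: the frame rate you measure is roughly the lower of what the CPU can prepare and what the GPU can render, so a heavy GPU limit hides any difference between CPUs.

def observed_fps(cpu_fps, gpu_fps):
    # The slower of the two stages caps the measured result.
    return min(cpu_fps, gpu_fps)

# Hypothetical CPU-limited frame rates (what each chip could sustain with an
# infinitely fast GPU) and hypothetical GPU-limited frame rates for one game.
cpu_limits = {"budget dual-core": 98, "Ryzen 8-core": 136, "Core i7": 150}
gpu_limit_1080p = 160   # a high-end GPU barely limits anything at 1080p
gpu_limit_4k = 40       # the same GPU is the clear limit at 4K

for name, cpu_fps in cpu_limits.items():
    print(name,
          "| 1080p:", observed_fps(cpu_fps, gpu_limit_1080p), "fps",
          "| 4K:", observed_fps(cpu_fps, gpu_limit_4k), "fps")
# At 4K every CPU reports 40 fps; only the 1080p numbers separate them.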

So other than showing that there may actually be a difference between certain CPUs in games, why else should you ensure that your results aren't shaped by a GPU bottleneck? Or why is it a bad idea to stick your head in the sand by saying you only care about how they compare at ultrawide or 4K resolutions?

For the longest time, other respected tech reviewers and I claimed that all PC gamers need is a Core i5 processor, as you reach a point of diminishing returns with a Core i7 when gaming. This was true about a year ago and there wasn't much evidence suggesting otherwise. Of course, we often noted that things would no doubt change in the future; we just didn't know when that change would happen.

A year later, things have changed. A few games are indeed more demanding, but the biggest change is on the GPU side. The GTX 980 and Fury graphics cards were considered real weapons a year ago, but today they can be considered mid-range, with graphics chips such as the Titan XP and the soon-to-be-released GTX 1080 Ti delivering over twice as much performance in many cases.

If we compare the Core i5-7600K and Core i7-7700K in CPU-demanding games using the GTX 980 at 1080p, the eight-thread i7 processor isn't much faster. Given those margins, I would say buy a Core i5, as the Core i7 isn't worth the investment.

I recently compared the Pentium G4560 and Core i3-7350K along with the i5-7600K and i7-6700K processors using the GTX 1050 Ti, GTX 1060 and GTX 1080. If we look at a game such as Hitman, we see that the i5 and i7 processors are on par when testing with the GTX 1060, which is equivalent to the GTX 980 in terms of performance. Look at those results and the Core i7 is clearly a bad investment. However, if we test with the GTX 1080, the Core i7 processor delivers over 20% more performance.

Increasing the resolution to 1440p, we see that when testing with the GTX 1060 the dual-core Pentium delivers the same performance as the i7-6700K. The 6700K is obviously a much more powerful CPU, but the GPU bottleneck simply hides the fact. That said, with a more powerful GPU in the GTX 1080, the 6700K is now 43% faster than the G4560.

At 1440p we find that the 6700K is just 6% faster than the 7600K, whereas it was 23% faster at 1080p. I found similar margins in games such as Mafia III, Overwatch, Total War: Warhammer, and many others.
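
That compression of margins is exactly what the bottleneck arithmetic predicts. A small hypothetical sketch (the fps values below are invented; only the pattern matters): two CPUs that are 23% apart when uncapped converge as the GPU-imposed ceiling drops.

def measured_margin(fast_cpu_fps, slow_cpu_fps, gpu_cap_fps):
    # Each result is capped by the GPU; the reported margin is whatever survives the cap.
    fast = min(fast_cpu_fps, gpu_cap_fps)
    slow = min(slow_cpu_fps, gpu_cap_fps)
    return fast / slow - 1

# Hypothetical: the faster CPU could do 123 fps, the slower one 100 fps (a 23% gap).
for gpu_cap in (200, 110, 90):   # stronger GPU / mid-tier cap / weaker GPU or higher resolution
    print(f"GPU cap {gpu_cap} fps -> measured gap {measured_margin(123, 100, gpu_cap):.0%}")
# 200 fps cap: the 23% gap is visible; 110 fps cap: 10%; 90 fps cap: 0% -- "the CPUs look equal".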

Keeping those results in mind, the Titan XP and the soon-to-be-released GTX 1080 Ti are again much faster than the GTX 1080. So based on the Ryzen gaming performance you saw in my review, and in other well-researched reviews from outlets that tested correctly such as Gamers Nexus and Tom’s Hardware, we can assume a few things:

First, if future games aren't able to utilize the Ryzen processors better than they do presently -- in other words, if optimization doesn't occur -- then the misleading 4K results become an even bigger issue. In a couple of years, when the GTX 1080 Ti or AMD Vega GPUs go from high-end to mid-range contenders, how will Ryzen look against the current Skylake and Kaby Lake processors? Presumably the 4K performance would start to look like what we are seeing at 1080p.

The opposite outcome would be that games are optimized for Ryzen, which is of course my hope; then we'll start to see the 8-core, 16-thread AMD processors laying waste to Intel’s 4-core, 8-thread Kaby Lake and Skylake Core i7 CPUs.

The obvious problem is that we don’t know how this is going to play out. I remember AMD fans giving me a hard time back in 2011 when I said the FX-8150 didn't deliver, especially in games. The argument was that games back then used one or two cores, and by the time they were using four or more the FX series would prevail. Well, no need to dredge that one up all over again, but we know how it played out. That said, I have much more hope for Ryzen and I honestly do believe there is more performance to be seen yet. I said the opposite about the FX processors many years ago, so that’s something.

I hope this helps those of you who were confused by the variance in results, and that you now have a better understanding of where and why a GPU bottleneck occurs and, more importantly, why it should be avoided when showing CPU performance.


 
Can someone please explain some stuff to me here because I'm confused? How is the CPU a bottleneck? Isn't accessing the video card done using Direct Memory Access which bypasses the CPU? Shouldn't using DMA be able to bypass the CPU since you're writing directly to the video RAM?
 
"First, if future games aren't able to better utilize the Ryzen processors than they are presently -- in other words, optimization doesn't occur -- then the misleading 4K results become an even bigger issue. In a couple of years when the GTX 1080 Ti or AMD Vega GPUs go from high-end to mid-range contenders, how will Ryzen look in regards to the current Skylake and Kaby Lake processors, presumably the 4K performance would start to look like what we are seeing at 1080p."

While using 1080p to benchmark a CPU removes the GPU bottleneck, I would most certainly not call benchmarking at 4K "misleading" as you have here. When you benchmark at 1080p you are giving us high FPS results with the assumption that those results will translate into future performance.

As AdoredTV pointed out in his video, do reviewers actually have any proof that benchmarking at lower resolutions actually translates to better performance in the future?
 
"First, if future games aren't able to better utilize the Ryzen processors than they are presently -- in other words, optimization doesn't occur -- then the misleading 4K results become an even bigger issue. In a couple of years when the GTX 1080 Ti or AMD Vega GPUs go from high-end to mid-range contenders, how will Ryzen look in regards to the current Skylake and Kaby Lake processors, presumably the 4K performance would start to look like what we are seeing at 1080p."

While using 1080p to benchmark a CPU removes the GPU bottleneck, I would most certainly not call benchmarking at 4K "misleading" as you have here. When you benchmark at 1080p you are giving us high FPS results with the assumption that those results will translate into future performance.

As AdoredTV pointed out in his video, do reviewers actually have any proof that benchmarking at lower resolutions actually translates to better performance in the future?

I respectfully disagree with Jim on this topic, I think his findings might be flawed and I don't think he took the early teething issues of the FX-series into account.

What is the alternative anyway? Testing with a heavy GPU bottleneck? That's downright stupid. You might as well not test at all. If you look at the Ryzen review that LinusTechTips did, you would conclude that Ryzen is really no faster than the FX-series for gaming; yeah, that's not misleading at all.

For testing you want to remove any GPU bottlenecks, that much should be obvious. Normally Jim is pretty well on point, that was one of the few videos he has done that really had me scratching my head.
 
Thanks for this article; while informative, I have a few requests, please.

1. Why no 1% - 0.1% and processor utilisation numbers to back up your conclusions? If all these processors are good for 1440p - 4K game playing, can you prove this in reference to your 1080p ultra conclusions?

2. As has been shown by Adored TV's analysis (extreme example, but valid nonetheless!), when GPU power increases, the burden upon the CPU to feed all the increased calls to the GPU increases, more so when resolution increases. The bottleneck is not just on the GPU.

3. Many times the i7-7700K @ 5GHz has been shown to have very high CPU utilisation while gaming (90-95%); any other game or system request is going to introduce that stutter fcuk, as many gamer videos show.

It would really help this discussion if utilisation on these CPUs were shown at 1080p ultra, 1440p ultra and 4K high-ultra. If said CPU with really high utilisation at 1080p ultra is already stutter fcuk, can you conclusively show that this is not the case at 1440p - 4K?

Your conclusion that there 'may be' optimisations to come for Ryzen in the future (not talking 5+ years here, are we?) is bad journalism, which reduces, almost to nothing, the quality of the information included in this article.

Always liked your site, never before felt the need to comment; but you and many others seem determined to knock down what is essentially a workstation / 'prosumer' (hate that word, but it is a thing!) CPU with seriously magnificent gaming chops, up there with the current 'VERY BEST' that Intel has to offer.

Given the trend to higher thread utilisation in all modern games (as we move forward through 2017 and onwards), where exactly will that i7-7700K @ 5GHz be, when already it is starting to stutter fcuk?

Please can you produce frame time analysis and CPU utilisation numbers to back up your future-proof claims.

Thanks.
 
"Now if you play on 1080p where the CPU becomes the bottleneck, a faster CPU is obviously better, but who would pair a $300+ processor with a mid range $200 GPU? It doesn't make any sense. Ryzen processors are 100% great when it comes to gaming, especially if you are gaming at 1440p and higher."

I couldn't agree more...

One more thing I was thinking about... What would happen if you pair Ryzen 8 core / 16 thread CPU with multi GPU setup and test it in 1080p? Wouldn't it change the performance results? I mean more cores to feed GPUs... you know.
 
Can someone please explain some stuff to me here because I'm confused? How is the CPU a bottleneck? Isn't accessing the video card done using Direct Memory Access which bypasses the CPU? Shouldn't using DMA be able to bypass the CPU since you're writing directly to the video RAM?

A game does not only use the GPU, because a game is not just a video benchmark. A (well-written, code-wise) game will let the CPU handle the AI (you know, the intelligence that governs the likes of Lydia in Skyrim, or the behavior of an enemy reacting to a stone thrown nearby in Far Cry...). The CPU also has to calculate the path, speed and amount of damage your units are doing when pitted against an enemy army in StarCraft or Total War: Rome, or the hits and drops in an RPG like Grim Dawn or Diablo. Sure, games like Crysis or Resident Evil are easier on the CPU, because there isn't much to calculate, more so when the game makes heavy use of scripting, like Broken Sword or any adventure game, but the CPU still has to handle the logic of deciding what data to feed to the GPU/RAM, and when, from the moment you start the game.
More to the point, the slower the CPU calculates speed, paths, hit/miss ratios and AI behavior, the lower your fps will be at a given resolution and detail level. And the higher the resolution and the graphical settings, the more the GPU will have to work, while the CPU is always going to process the same quantity of data for the AI and everything else listed above. So when the GPU has its hands full with geometry, physics and rendering, the CPU will just wait for it before feeding it further data.
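
As a rough illustration of that hand-off (a simplified mental model with made-up costs, not how any particular engine is written): the per-frame CPU work described above stays roughly constant as resolution rises, while GPU work scales with pixel count, so whichever side is slower dictates the frame time.

cpu_ms = 7.0  # hypothetical per-frame CPU cost: AI, pathfinding, game logic, draw-call submission

def gpu_ms(pixels, ms_per_megapixel=3.0):
    # Hypothetical GPU cost that grows with the number of pixels rendered.
    return (pixels / 1e6) * ms_per_megapixel

for label, pixels in [("1080p", 1920 * 1080), ("1440p", 2560 * 1440), ("4K", 3840 * 2160)]:
    frame_ms = max(cpu_ms, gpu_ms(pixels))  # the slower stage sets the pace
    bound = "CPU-bound" if cpu_ms >= gpu_ms(pixels) else "GPU-bound"
    print(label, round(1000 / frame_ms), "fps", f"({bound})")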
 

1. Why no 1% - 0.1% and processor utilisation numbers to back up your conclusions? If all these processors are good for 1440p - 4K game playing, can you prove this in reference to your 1080p ultra conclusions?

Processor utilization doesn’t really have anything to do with what we are discussing here as we aren’t looking at individual games, we are discussing testing methods. The second part of your question seems to illustrate that you might not understand the subject correctly.

2. As has been shown by Adored TV's analysis (extreme example, but valid nonetheless!), when GPU power increases, the burden upon the CPU to feed all the increased calls to the GPU increases, more so when resolution increases. The bottleneck is not just on the GPU.

I respectfully disagree with Jim in regards to how you test CPU gaming performance. I also don’t know what the alternative is other than showing GPU-bound results. Also no, the second part of your comment/question is wrong; that is not how it works. When you increase load on the GPU you bog it down with work, which frees up the CPU.

3. Many times the i7-7700K @ 5GHz has been shown to have very high CPU utilisation while gaming (90-95%); any other game or system request is going to introduce that stutter fcuk, as many gamer videos show.

If a game uses 4 cores efficiently then you will see high utilization on a 4-core processor. If it only uses 4 cores then naturally you will see 50% utilization on an 8-core processor. An overclocked Core i7-7700K does not stutter heavily in any game; the stutter, while not completely nonexistent, is very rare even in the most demanding modern games.

“Your conclusion that there 'may be' optimisations to come for Ryzen in the future (not talking 5+ years here, are we?) is bad journalism, which reduces, almost to nothing, the quality of the information included in this article.”

So maybes are bad journalism here, but Jim at Adored TV can make a highly speculative video and somehow that's valid :S Yep, gotcha.

“Always liked your site, never before felt the need to comment; but you and many others seem determined to knock down what is essentially a workstation / 'prosumer' (hate that word, but it is a thing!) CPU with seriously magnificent gaming chops, up there with the current 'VERY BEST' that Intel has to offer.”

You're taking this like a fanboy and completely glossing over the positives, so that’s on you.

“Please can you produce frame time analysis and CPU utilisation numbers to back up your future-proof claims.”

Neither of those things would back up this topic; you clearly don’t understand the subject.

"Now if you play on 1080p where the CPU becomes the bottleneck, a faster CPU is obviously better, but who would pair a $300+ processor with a mid range $200 GPU? It doesn't make any sense. Ryzen processors are 100% great when it comes to gaming, especially if you are gaming at 1440p and higher."

I couldn't agree more...

One more thing I was thinking about... What would happen if you pair Ryzen 8 core / 16 thread CPU with multi GPU setup and test it in 1080p? Wouldn't it change the performance results? I mean more cores to feed GPUs... you know.

No, this would likely play further into the 7700K's hands, unfortunately, due to the way modern games are designed. I will look into this soon with a pair of Titan X (Pascal) graphics cards in SLI.
 
This guy is a total noob. If you game at 1440p or above, the CPU speed doesn't matter. Why would anyone care how it performs at 1440p? He is literally spewing garbage. Hey, if you have a GTX 1070 or 1080 and game at 1440p, you should care about 1080p performance? Why? It doesn't make any sense.

Plus, 4-core processors do tend to have better STP because they have a smaller cache and the cores are closer to each other and can talk faster, thus reduced latency, and 4 cores can run at higher clock speeds, as they consume less power and you need less wattage to feed 4 cores as opposed to 8 cores. Overall, Ryzen is very similar in STP to the 6900K, which does have a bit more STP, but it's just about 5% more, and again most games and applications are optimized for Intel, because they have almost a market monopoly. Why care about AMD when no one buys their processors?

Now if you play on 1080p where the CPU becomes the bottleneck, a faster CPU is obviously better, but who would pair a $300+ processor with a mid range $200 GPU? It doesn't make any sense. Ryzen processors are 100% great when it comes to gaming, especially if you are gaming at 1440p and higher.

Also to note, Ryzen processors are about 5-10 fps slower at 1080p in some games, which is not that big of a deal. First we've got to remember that games have purely been optimized for Intel, because they've had 90% market share for gaming. Now that AMD has a true competitor we will see developers optimize their code for Ryzen as well, and we might even get patches to improve Ryzen performance, but certainly moving forward they are going to be taking Ryzen into account as well.
While Ryzen will likely raise AMD from the all-but-forgotten era lol, Ryzen itself has a long way to go. Mainstream customers know Intel is the best and has been for a long time. Will AMD make things more competitive? Maybe, only time will tell.
Even if Ryzen does take off and does everything that we all want it to, that doesn't mean Intel is going to sit by and watch everything they built go away. For all we know Intel may surprise everyone and have one hell of a chip later this year. Then bam, we are all back to "AMD who?" lol

In the end all we want as consumers is for AMD to be competitive so that Intel's prices come down. No one, and yes I mean no one, really believes AMD will ever topple Intel.
Hopefully things get better for AMD, but we have all been down this road before and Intel has crushed them every time. Maybe things have finally changed; we all know it's time for cheaper prices. Let's all be honest, no one cares what processor we use, we care about how much we have to pay for it. After all, if I can spend $300 and get the same or close performance to a $500+ proc, do I give a **** whose name is on it? Nope. I am paying for performance-to-dollar ratio. Why spend more when you're getting little to no gain? Spend the money on better PC hardware, say a video card or an SSD.
 
I completely disagree with Jim on this whole topic, I think his findings are flawed and I don't think he took the early teething issues of the FX-series into account.

What is the alternative anyway? Testing with a heavy GPU bottleneck? That's downright stupid. You might as well not test at all. If you look at the Ryzen review that LinusTechTips did, you would conclude that Ryzen is really no faster than the FX-series for gaming; yeah, that's not misleading at all.

For testing you want to remove any GPU bottlenecks, that much should be obvious. Normally Jim is pretty well on point, that was one of the few videos he has done that really had me scratching my head.

I'm not asking you to test an alternate way; I'm asking if there is proof that using these testing methods actually translates to the real world. Sure, testing at 1080p will give you a good indicator of the performance of a CPU now; you can tell by how the performance scales as you go to 1440p and 4K. That doesn't say much about the future though, and I'd be interested to see if those low-resolution tests do translate to future performance on a wider scale than what AdoredTV provided.
 
While Ryzen will likely raise AMD from the all-but-forgotten era lol, Ryzen itself has a long way to go. Mainstream customers know Intel is the best and has been for a long time. Will AMD make things more competitive? Maybe, only time will tell.
Even if Ryzen does take off and does everything that we all want it to, that doesn't mean Intel is going to sit by and watch everything they built go away. For all we know Intel may surprise everyone and have one hell of a chip later this year. Then bam, we are all back to "AMD who?" lol

In the end all we want as consumers is for AMD to be competitive so that Intel's prices come down. No one, and yes I mean no one, really believes AMD will ever topple Intel.
Hopefully things get better for AMD, but we have all been down this road before and Intel has crushed them every time. Maybe things have finally changed; we all know it's time for cheaper prices. Let's all be honest, no one cares what processor we use, we care about how much we have to pay for it. After all, if I can spend $300 and get the same or close performance to a $500+ proc, do I give a **** whose name is on it? Nope. I am paying for performance-to-dollar ratio. Why spend more when you're getting little to no gain? Spend the money on better PC hardware, say a video card or an SSD.

Ryzen only really lags in gaming performance, which isn't a big deal for a CPU targeted at professionals. Ryzen's launch has gone better than LGA 2011's as well. I don't exactly know where "Ryzen itself has a long way to go" comes from, because it's already as good as or beating its Intel LGA 2011-v3 counterparts at a fraction of the cost. There has been zero reason to buy those CPUs at their current prices since Ryzen launched.

People need to stop acting like gaming is the end-all, be-all for every CPU in the world.
 
Good article Steve. Several questions keep popping up and several of us have attempted to explain why tech sites use top-end CPUs to test top-end GPUs (and lower resolutions to test CPUs), but it's nice to see you nail down exactly why with charts. A lot of "Why can you not test only at 4K? Why, why, why, cos it's the future!" is little more than confirmation bias. Someone just bought themselves a new 4K toy, then has a habit of declaring it "every man's new baseline, 1080p is so obsolete, man" regardless of observable reality (Steam HW Survey: 0.69% of gamers have 4K monitors or TVs / 1.81% have 1440p / 0.73% have 2.35:1 / 95.0% have 1080p or below). It's pretty obvious why they test at 1080p outside of the vocal minority bubble when 19 out of 20 gamers are using resolutions no higher.

As for those who still don't understand bottlenecks, the CPU "feeds" the GPU frames. If the GPU is being bogged down (either because it's a lower-end model or from using too high resolutions/settings in extremely demanding games on even the top-end GPU), then all the CPUs being tested will just sit there idling to varying degrees, which throws all benchmark data out the window. It's why on the 4K Battlefield chart even a Pentium G4560 can keep up with Ryzen (both 40 fps). If the G4560 can hit 98 fps (1080p) but is limited to 40 fps (4K), it'll be sitting there with a 40-45% load and 55-60% idle (waiting for the GPU, which takes more than twice as long rendering each frame as the CPU does to prepare the next one). With a Ryzen (136 fps at 1080p limited to 40 fps at 4K), it'll spend 25-30% load preparing a frame, and then idle 70-75% waiting for the GPU, which is 3x slower due to all those pixels.

All you'd end up doing when benchmarking CPUs under 4K GPU bottlenecks is testing the "idle time" of what the CPUs aren't doing (because they're all sitting around waiting) instead of what they would be doing if each were pushed to the max.
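
The arithmetic behind those load estimates is easy to reproduce. A back-of-the-envelope sketch (using the approximate fps figures quoted above; real per-frame costs vary):

# CPU-limited fps tells you how quickly a CPU could prepare frames on its own;
# the GPU-limited fps at 4K tells you how often it actually needs to.
def cpu_busy_fraction(cpu_limited_fps, gpu_limited_fps):
    cpu_ms_per_frame = 1000 / cpu_limited_fps   # time the CPU spends working per frame
    frame_ms = 1000 / gpu_limited_fps           # time until the GPU is ready for another frame
    return cpu_ms_per_frame / frame_ms

for name, cpu_fps in [("G4560", 98), ("Ryzen", 136)]:
    busy = cpu_busy_fraction(cpu_fps, 40)       # both capped at 40 fps by the GPU at 4K
    print(f"{name}: ~{busy:.0%} busy, ~{1 - busy:.0%} idle")
# G4560: ~41% busy / ~59% idle; Ryzen: ~29% busy / ~71% idle -- yet both read "40 fps" on the chart.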


It's also why TechSpot's previous reviews of lower-end stuff (e.g., benchmarking i3s not just on GTX 980s/1080s but also GTX 960s/1060s and 1050 Tis) are to be praised, as they give readers far more useful information on where the bottleneck "sweet spots" are for any given class of hardware. No one's going to match a Titan X with a Celeron, but at the same time, for the same money, an i3/G4560 + GTX 1060 has already proven far better overall than an i5-7600K + RX 460/1050 (because all that extra CPU horsepower does is spend longer idling waiting for the GPU):
https://www.techspot.com/review/1325-intel-pentium-g4560/page4.html
 
I'm not asking you to test an alternate way; I'm asking if there is proof that using these testing methods actually translates to the real world. Sure, testing at 1080p will give you a good indicator of the performance of a CPU now; you can tell by how the performance scales as you go to 1440p and 4K. That doesn't say much about the future though, and I'd be interested to see if those low-resolution tests do translate to future performance on a wider scale than what AdoredTV provided.

Why are we debating this all of a sudden? Ohh right Ryzen launched.

Why are we forgetting that for the past 4 years or so pretty much everyone, including myself, said gamers reach a point of diminishing returns with the Core i7 processors: “just get a Core i5, they are better value and aren’t really any slower”. Now GPUs are more powerful and games are more demanding, and we are seeing the Core i7s pull well ahead in popular titles.

People are getting hung up on the resolution thing; it's bloody insane. This is how you show the actual difference the CPU makes in games when placed under load. If nothing changes with the way games are designed in 5 years’ time and Ryzen doesn’t improve from what we are seeing at 1080p, then with much more powerful GPUs it will fall well behind.

Good article Steve. Several questions keep popping up and several of us have attempted to explain why tech sites use top-end CPUs to test top-end GPUs (and lower resolutions to test CPUs), but it's nice to see you nail down exactly why with charts. A lot of "Why can you not test only at 4K? Why, why, why, cos it's the future!" is little more than confirmation bias. Someone just bought themselves a new 4K toy, then has a habit of declaring it "every man's new baseline, 1080p is so obsolete, man" regardless of observable reality (Steam HW Survey: 0.69% of gamers have 4K monitors or TVs / 1.81% have 1440p / 0.73% have 2.35:1 / 95.0% have 1080p or below). It's pretty obvious why they test at 1080p outside of the vocal minority bubble when 19 out of 20 gamers are using resolutions no higher.

As for those who still don't understand bottlenecks, the CPU "feeds" the GPU frames. If the GPU is being held back (either because it's a lower-end model or from using too high resolutions/settings in extremely demanding games on even the top-end GPU), then all the CPUs being tested will just sit there idling to varying degrees, which throws all benchmark data out the window. It's why on the 4K Battlefield chart even a Pentium G4560 can keep up with Ryzen (both 40 fps). If the G4560 can hit 98 fps (1080p) but is limited to 40 fps (4K), it'll be sitting there with a 40-45% load and 55-60% idle (waiting for the GPU, which takes more than twice as long rendering each frame as the CPU does to prepare the next one). With a Ryzen (136 fps at 1080p limited to 40 fps at 4K), it'll spend 25-30% load preparing a frame, and then idle 70-75% waiting for the GPU, which is 3x slower due to all those pixels.

All you'd end up doing when benchmarking CPUs under 4K GPU bottlenecks is testing the "idle time" of what the CPUs aren't doing (because they're all sitting around waiting) instead of what they would be doing if each were pushed to the max.


It's also why TechSpot's previous reviews of lower-end stuff (e.g., benchmarking i3s not just on GTX 980s/1080s but also GTX 960s/1060s and 1050 Tis) are to be praised, as they give readers far more useful information on where the bottleneck "sweet spots" are for any given class of hardware. No one's going to match a Titan X with a Celeron, but at the same time, for the same money, an i3/G4560 + GTX 1060 has already proven far better overall than an i5-7600K + RX 460/1050 (because all that extra CPU horsepower does is spend longer idling waiting for the GPU).

Thanks for getting it mate, saved me from going insane lol.

Like I said before, look at the LinusTechTips Ryzen video where they tested at 4K. Is it accurate to show the FX-8350 just a few frames behind Ryzen and high-end Core i7 processors? Freaking LOL.
 
If you agree that this is how the FX-8350 series compares to a Kaby Lake Core i7 and the new Ryzen 7 series in games then cool, you have worked it out :) Check video link for 4K results...

Video
 
Ryzen only really lags in gaming performance, which isn't a big deal for a CPU targeted at professionals. Ryzen's launch has gone better than LGA 2011's as well. I don't exactly know where "Ryzen itself has a long way to go" comes from, because it's already as good as or beating its Intel LGA 2011-v3 counterparts at a fraction of the cost. There has been zero reason to buy those CPUs at their current prices since Ryzen launched.

People need to stop acting like gaming is the end-all, be-all for every CPU in the world.
Gaming is what has a long way to go, as that's what gamers care about and what this article is about. Your average consumer doesn't care what PC they have as long as it works. Gamers, on the other hand, well that's very different.
 

@Steve TechSpot Editor
Thanks for your quick response.
1, " Processor utilization doesn't really have anything to do with what we are discussing here as we aren't looking at individual games, we are discussing testing methods. "
If the testing method does not show all relevant information then surely; it is flawed no? Ultimate fps numbers ( for me! ), only have value in context to the actual gameplay experience; ergo; if said processor is already maxed out at 1080p ultra ( all other things being equal ) and starting to stutter fcuk, then im certain that at same quality settings but higher resolution things will only get worse ( would like somebody to prove me wrong? )?

2, " Adored TV analysis doesn't appear valid to me. I also don't know what the alternative is other than showing GPU bound results. Also no, the second part of your comment/question is wrong, that is now how it works. When you increase load on the GPU you bog it down with work which frees up the CPU. "
Passing by mine or your interpretation of Adored TV's analysis; can you conclusively show cpu utilisation numbers falling as resolution increases; so cpu would be maxed out at 640x480, but then fall to 50% at 4K, correct?

3, " So maybes are bad journalism here but Jim at Adored TV can make a highly speculative video and somehow that valid :S Yep gotcha. "
Please Steve; I have asked for more information to back up your claims in this article; I have not attacked you personally. @ number 2, our opinions of Adored tv;s analysis are are own.

4, " Your taking this like a fan boy and completely glossing over the positives, so that's on you. "
Still have P4 ( HT ) , Q6600 , E 5860 ( or is that 40, on a shelf somewhere ) and x4 860k. As you can see, im ready for upgrade and looking for best information to ( future proof; as much as possible ) my next purchase. YOU, coming across as a paid shill is equivalent to me being a " fan boy " !!!!!

5, " Neither of those things would backup this topic, you clearly don't understand the subject. "

I'm 47 and have been a computer geek ( gamer ) for 35 years. You have made some conclusions in this article about future performance; please can you prove that the information I requested of you is NOT relevant to this discussion?


One last thing : " "Now if you play on 1080p where the CPU becomes the bottleneck, a faster CPU is obviously better, but who would pair a $300+ processor with a mid range $200 GPU? It doesn't make any sense. Ryzen processors are 100% great when it comes to gaming, especially if you are gaming at 1440p and higher."

I couldn't agree more...

One more thing I was thinking about... What would happen if you pair Ryzen 8 core / 16 thread CPU with multi GPU setup and test it in 1080p? Wouldn't it change the performance results? I mean more cores to feed GPUs... you know.

No, this would likely play further into the 7700K's hands, unfortunately, due to the way modern games are designed. I will look into this soon with a pair of Titan X (Pascal) graphics cards in SLI. "

Not my comment; let's keep our discussion away from other people's comments, OK?
 
Why are we debating this all of a sudden? Ohh right Ryzen launched.

Why are we forgetting that for the past 4 years or so pretty much everyone, including myself, said gamers reach a point of diminishing returns with the Core i7 processors: “just get a Core i5, they are better value and aren’t really any slower”. Now GPUs are more powerful and games are more demanding, and we are seeing the Core i7s pull well ahead in popular titles.

People are getting hung up on the resolution thing; it's bloody insane. This is how you show the actual difference the CPU makes in games when placed under load. If nothing changes with the way games are designed in 5 years’ time and Ryzen doesn’t improve from what we are seeing at 1080p, then with much more powerful GPUs it will fall well behind.



Thanks for getting it mate, saved me from going insane lol.

Like I said before, look at the LinusTechTips Ryzen video where they tested at 4K. Is it accurate to show the FX-8350 just a few frames behind Ryzen and high-end Core i7 processors? Freaking LOL.

Yeah, I've heard it's really bad for Tech reviewers right now. I don't do any social media, so I didn't notice until recent articles covering the topic.

On topic, isn't it rather unlikely that nothing changes in regards to Ryzen in the next 5 years? Whether that be AMD-optimized compilers, a Windows scheduler update, or just devs using more cores, it seems to me that it would be hard for devs not to end up benefiting Ryzen, especially given 4-core CPUs are kind of reaching their limits in games like BF1, as you alluded to with i7s becoming more popular.

As a reviewer though, you likely can't comment based on just assumption. I guess I kind of answered my own question.
 
Dude, I think you missed the boat on this topic. You've been given answers, you just didn't like them.
 

@ texasrattler
Thanks for the extra information; obviously I can use this to further solidify my purchasing decisions!
 
People need to stop acting like gaming is the end-all, be-all for every CPU in the world.

I'm asking if there is proof that using these testing methods actually translates to the real world.

I agree, but this IS a gaming article (and many hyping Ryzen were gamers). Is Ryzen a bad CPU? No. It's a BIG step forward from the FX chips, and an amazing-value productivity chip. BUT are the demands, coming from some overly enthusiastic advocates, to suddenly start testing primarily at 4K, thereby "skewing" the scores by introducing a GPU bottleneck which disproportionately nerfs any faster chips, valid or honest? Equally, no. Especially with no similar pre-Ryzen outcry over the plethora of Intel i7 vs Intel i5 gaming benchmarks. Brand warfare aside, bottleneck-elimination scaling isn't new and goes all the way back to 1990s testing at 640x480 resolutions for CPU benchmarks when many had 1024x768 monitors.

It wasn't because that's what they believed "the future" would be, but to highlight how much "headroom" the CPUs have for future cards once the GPU bottleneck was removed. And yes, it does scale pretty well, as anyone who enjoys replaying older games a couple of years down the line with the same CPU but a newer GPU has long figured out. Look at how many people have kept the same i5-2500K for 5+ years but gone through 2-3 GPUs, as an example. How many fps they've gained 3-4 years down the line by upping the resolution on much more powerful cards is indeed roughly in line with benchmarking at lower resolutions.

I'll repeat just for clarity: testing CPUs with a heavy 4K dGPU bottleneck is like benchmarking SSDs by timing how long it takes to install a program from a USB 2.0 external HDD capped at 30-40 MB/s, then arguing over "future scaling" semantics. It's not only unfair to do that, it's completely, totally and utterly pointless when even the slowest SSDs will be 50%+ idle.
 
This is a very good article, @Steve.

Someone made a really good point about stuttering if the CPU utilisation is high despite the system being GPU bottlenecked at high resolutions. Another way of thinking about it is that if all CPUs let the GPU hit 40 fps, then in the near future more demanding games are going to differentiate the CPUs, especially those struggling to hit 40 fps and hobbling GPU performance.

I only found one thing unusual in the article, and that is the quote in the centre in bold. Usually I have seen such a quote come before the point is made in the main body, which is better because it maintains interest in the upcoming text rather than just adding emphasis.
 
'Recent discovery revealed a bug that’s significantly detrimental to Ryzen’s performance in Microsoft’s Windows 10 scheduler. The scheduler does not appropriately recognize Ryzen’s cache size and cannot distinguish physical cores from SMT threads. This in turn is causing it to often incorrectly schedule tasks in the much slower — approximately 4 times slower — SMT threads rather than primary physical core threads.'

'This clearly demonstrated in games optimized for heavy hyperthreading usage like the Total War series where disabling SMT improves performance by as much as 17%'

REF;
http://wccftech.com/amd-ryzen-launc...ponse/?utm_source=wccftech&utm_medium=related
 
Here's a question for everyone:

If games are not fully optimized (and optimization might take longer than we'd like), won't the highest R5 SKU match the gaming performance of the R7 1800X? They are supposed to be clocked the same, but the R5 will be almost half the price.
 