Tackling the subject of GPU bottlenecking and CPU gaming benchmarks, using Ryzen as an...

Here's a question for everyone:

If games are not fully optimized (and optimization might take longer than we'd like), won't the highest R5 SKU match the gaming performance of the R7 1800X? They are supposed to be clocked the same, but the R5 will be almost half the price.

Yes, that will quite possibly be true for a lot of games.
 
Sounds like a religious person wrapping an argument around specific angles to get the outcome/message they want. This is called confirmation bias.
 
Here's a question for everyone:

If games are not fully optimized (and optimization might take longer than we'd like), won't the highest R5 SKU match the gaming performance of the R7 1800X? They are supposed to be clocked the same, but the R5 will be almost half the price.

Exactly, just like in 90% of games the i7-7700K is faster than the 3x more expensive 6900K, and even faster than those 22-core Xeon monsters.
 
Can someone please explain some stuff to me here, because I'm confused? How is the CPU a bottleneck? Isn't accessing the video card done using Direct Memory Access, which bypasses the CPU? Shouldn't DMA let you bypass the CPU entirely, since you're writing directly to video RAM?

A game does not only use the GPU, because a game is not just a video benchmark. A (well written, code-wise) game will let the CPU handle the AI (you know, the intelligence that governs the likes of Lydia in Skyrim, or the behavior of an enemy reacting to a stone thrown nearby in Far Cry...). The CPU also has to calculate the paths, speed and amount of damage your units are doing when pitted against an enemy army in Starcraft or Total War: Rome, or the hits and drops in an RPG like Grim Dawn or Diablo. Sure, games like Crysis or Resident Evil are easier on the CPU, because there isn't much to calculate, and even more so when the game makes heavy use of scripting, like Broken Sword or any adventure game, but the CPU still has to handle the logic of deciding what data to feed to the GPU/RAM, and when, from the moment you start the game.
More to the point, the slower the CPU calculates speed, paths, hit/miss ratios and AI behavior, the lower your fps will be at a given resolution + detail level. And the higher the resolution and graphical settings, the more the GPU has to work, while the CPU always processes the same quantity of data for the AI and everything else listed above. So when the GPU has its hands full with geometry, physics and rendering, the CPU just waits for it before feeding it further data.
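To make that feeding relationship concrete, here is a deliberately oversimplified sketch (my own illustration with made-up timings, not how any particular engine is written): the CPU prepares AI, pathfinding and draw calls each frame, the GPU renders, and with the two pipelined, the slower side sets the frame rate while the other waits.

```python
# Oversimplified frame-pacing model; all timings are invented for illustration.
# Each frame: the CPU prepares game logic + draw calls, the GPU renders the result.
# With the two pipelined, whichever side is slower dictates the frame rate.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

print(fps(4, 8))    # GPU-bound: 125 fps, a faster CPU would change nothing
print(fps(14, 8))   # CPU-bound: ~71 fps, the GPU now waits on game logic
print(fps(14, 20))  # heavy 4K settings: 50 fps, GPU-bound again
```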
Thanks for this. TIL.
 
These benchmarks just can't tell us how the future will unfold in game development.
Wouldn't it be better to actually ask game developers about these kinds of things?
It would be interesting to know what actual developers think of Ryzen, and what they think of cores vs clock speed.
 
Steve, you and I don't always agree on everything, but you've done a great job covering this controversial launch. Thank you.

To everyone else out there, here's the short, short version! Steve can only report on what's out there today, not on the future. Intel's 4c/8t CPUs have higher IPC and clock speeds compared to Ryzen. This means yesterday's games, today's games, and the games over the next 1-2 years will run fastest on those CPUs.

After a year or two is anyone's guess. I believe 6-8 core CPUs will become the standard, as they are already the standard in consoles, but there's no 100% way to prove it. Right now is an odd time, as there literally is no clearly superior CPU. If you upgrade all the time, stick with Intel. If not, you may want the Ryzen CPU as a longer-term solution (plus if you do anything other than gaming on your PC). It's up to YOU to decide what's best for YOUR situation. No review site can tell you definitively which option is better right now. All they can do is provide you with today's numbers using today's programs.

-Logic Out
 
As an avid multiplayer gamer I am still disappointed with the testing methodology. Single player benchmarks don't mean anything to multiplayer gamers. Multiplayer gaming requires a high minimum FPS, and playing multiplayer games uses a lot more CPU resources than SP games.

Large scale multiplayer games are way more demanding on the CPU than singleplayer games.

For example, when I play Overwatch, it doesn't matter if I look at the ground or have 10 enemies on my screen; the FPS is mostly the same. Resolution and graphics settings don't matter either.

In fact, in Overwatch MULTIPLAYER and Battlefield 1 MULTIPLAYER, my minimum FPS is the SAME on LOW or ULTRA.

Playing on an i5-2500K & a GeForce GTX 1070.
 
Game engines have always been coded to run great on subpar hardware so their target market is greater, or in other words, so more people can use their product with better results.
All the AMD fans rambled on about the FX processors being the way to go for the future, but it turns out threading and architectural quality were the most important things.
So, many years later here we are again.
And BTW, Intel has released an 8-core/16-thread CPU in the 6900K Broadwell-E, but even that loses to the 7700K, as Steve has shown (the 7700K's ability to hit 4.5GHz in stock form is pretty impressive). Games have taken years to jump from using 4 logical cores to 8; how long will it be before we see them actually use 16 cores? And properly?
You AMD guys are trying to dice up these results, use varied testing results as an excuse and act like this is some complicated thing involving compute ability, threading and resolution, but it's really quite simple. For gaming and gaming alone, who knows what hardware developers will be targeting; I am sure it will gradually climb like it is now, but it's not going to change overnight.
The 7700K will remain a great gaming CPU for 3-5 more years, and I don't see the power of a 6900K or 1800X being used properly for at least another 3-5 years for gaming purposes. Certain benches and programs? Yes. Gaming? We know the answer there, and Ryzen is doing quite well, so be happy and end this!

Edit:
I was thinking about doing a 6800K build myself; I love that 6-core and have seen some people hitting 5.0GHz with it. But after seeing these gaming results, I am holding onto my overclocked i7 for now, since I am back to gaming at 1080p @ 100+Hz.
 
I think we all agree that most people appreciate the work you reviewers do; it serves as the reference point most people use to decide which piece of hardware to get. The fact that reviews serve as a reference is pretty damn important: it means your work has a direct influence over what people choose, which means seriousness and dedication are not a luxury but a need. Of course, virtually all reviewers have tons of both, but sadly, due to time and resource constraints, most reviewers limit themselves to the most practical factors and often ignore factors that not only could, but should always be noted, because, as mentioned earlier, they directly influence which product a viewer will get.

So let's check out why ignoring factors is bad.

First let's consider the scenario: a CPU test. What do we want to know? How the CPU performs and how it compares to other CPUs. On synthetic benchmarks and most production software this isn't hard; even though there are complex cases, it's mostly all easily comparable. On games, though, things take a turn for the worse. Even though they are software just like other production tools, they are in fact one of the most complex kinds of software: they include an immense variety of tasks and very often push hardware to its limits.

Now we are testing games and we want to compare CPUs, so what do we want to know? *How fast is it*. Here is the essential mistake: video games are not about "how fast", they are about how people experience them. Why is the experience important? Because if you had to choose between 500 fps with constant drops to 1 fps or a steady 80 fps, any sane person would pick the CPU that gives a steady 80 fps with no drops.
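As a rough illustration of how that experience can be quantified rather than just averaged (a sketch of my own with invented frame times, not any reviewer's actual tooling), average fps and a "1% low" figure can tell two very different stories:

```python
# Sketch: average fps vs "1% low" fps from per-frame render times (in seconds).
# The frame-time lists below are invented purely for illustration.

def fps_summary(frame_times):
    frame_times = sorted(frame_times)
    avg_fps = len(frame_times) / sum(frame_times)
    # 1% low: average fps across the slowest 1% of frames (one common definition).
    worst = frame_times[-max(1, len(frame_times) // 100):]
    low_1pct_fps = len(worst) / sum(worst)
    return round(avg_fps), round(low_1pct_fps)

spiky  = [0.002] * 99 + [1.0]   # ~500 fps frames plus one huge one-second hitch
steady = [0.0125] * 100         # a constant 80 fps

print(fps_summary(spiky))   # (83, 1): great-looking average, awful experience
print(fps_summary(steady))  # (80, 80): lower peak, far better to actually play
```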

So the truth comes forward: fps comparisons in games are silly if you are misrepresenting the user experience, and this means you can no longer just focus on FPS; you have to consider a much wider spectrum of variables:
- Am I testing the CPU-intensive settings that stress the CPU's capabilities? (Like the number of players.)
- Am I getting any kind of "pop-in"? Does the game keep the same graphical fidelity? (Models/textures loading slowly, and so on.)
- Does the number of cores/threads have an impact on what I'm testing?
- Does the clock speed impact game performance heavily?
- How is the CPU being utilised? How does this change between older and newer titles? How much spare room do I get for background applications?
- Am I considering the most demanding parts of the game? (Flying in the sky vs walking through a city full of NPCs.)
- Can I make a good projection of the processor's capabilities vs future games? (Here AdoredTV's point is strong.)
- Is it strong enough for my GPU? Which GPU do I need to benefit from its extra power? (This has been a serious concern for GTX 1070 owners, as no one tested it.)

Now, these might not be all the points, but there are enough there to raise the question for reviewers: "Are reviews giving good advice to viewers with only this data?" And I know, we all know, this **** takes tons of time, but let's be serious: with reviews you are, like it or not, giving advice to viewers on what to buy. So if there is a case like this one, where TONS of data are needed for good advice, why are you so fixated on considering only a small number of variables and calling it "a good amount of data", when your own discrepancies among reviewers prove it is not?

I'm not going to ask you to do a super long review, just asking you to keep in mind that simple methodologies on complex subjects are full of holes, and to mention that when you are influencing/advising people on what they should get.
 
@Steve This resembles a Call of Cthulhu game: each time there is a new reply, sanity decays... and it will certainly never be recovered. It's amazing how, after everything has been pointed out really clearly, people still question the most unimaginable things.

Good work!!

If you still have questions, please read again, it's extremely clear. If you still have questions, please read again, it's extremely clear. And, if you still have questions, please read again, it's extremely clear (Rinse and repeat).

@ texasrattler
Thanks for the extra information; obviously I can use this to further solidify my purchasing decisions!
No, you are not trying to solidify your purchasing decision; you are trying to convince yourself the sky is bright yellow and the ocean pink.
 
These benchmarks just can't tell us how the future will unfold in game development.
Wouldn't it be better to actually ask game developers about these kinds of things?
It would be interesting to know what actual developers think of Ryzen, and what they think of cores vs clock speed.
Single player benchmarks don't mean anything to multiplayer gamers.
...like it or not, giving advice to viewers on what to buy...
Bros, do you even read?
Tackling the subject of GPU bottlenecking and CPU gaming benchmarks
 
As an avid multiplayer gamer I am still disappointed with the testing methodology. Single player benchmarks don't mean anything to multiplayer gamers. Multiplayer gaming requires a high minimum FPS, and playing multiplayer games uses a lot more CPU resources than SP games.

Large scale multiplayer games are way more demanding on the CPU than singleplayer games.

For example, when I play Overwatch, it doesn't matter if I look at the ground or have 10 enemies on my screen; the FPS is mostly the same. Resolution and graphics settings don't matter either.

In fact, in Overwatch MULTIPLAYER and Battlefield 1 MULTIPLAYER, my minimum FPS is the SAME on LOW or ULTRA.

Playing on an i5-2500K & a GeForce GTX 1070.
That means your CPU is the bottleneck. If changing quality settings does nothing, then your GPU has more headroom than your CPU. I've tested the 1070 personally and proved that anything lower than a Sandy Bridge i7 would be a bottleneck (including an overclocked 4690K). If pure fps is all you care about, then you need a 7700K AND to overclock it hard. The reason MP games aren't used in benchmarks is that they can't be replicated, so they're not a reproducible metric.
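A rough sketch of that "change the settings" check (my own illustration; the fps numbers and the 5% tolerance are arbitrary): if average fps barely moves between a low and an ultra preset, the GPU still has headroom and the CPU is the limiting factor.

```python
# Sketch of the settings-based bottleneck check described above.
# The measured fps values are placeholders; substitute your own numbers.

def likely_cpu_bound(fps_low_preset, fps_ultra_preset, tolerance=0.05):
    """If fps barely changes when the GPU workload changes, the CPU is the limit."""
    return abs(fps_low_preset - fps_ultra_preset) / fps_low_preset <= tolerance

print(likely_cpu_bound(112, 109))  # True: settings don't matter -> CPU-bound
print(likely_cpu_bound(140, 85))   # False: fps scales with settings -> GPU-bound
```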
 
It's also why Techspot's previous reviews of lower-end stuff (e.g., benchmarking i3s not just on GTX 980s / 1080s but also GTX 960s / 1060s and 1050 Tis) are to be praised, as it gives readers far more useful information about where the bottleneck "sweet spots" are for any given class of hardware.

You. Nailed it.
 
Good article Steve. Several questions keep popping up and several of us have attempted to explain why tech sites use top-end CPUs to test top-end GPUs (and lower resolutions to test CPUs), but it's nice to see you nail down exactly why with charts. A lot of "Why can you not test only at 4K? Why, why, why, cos it's the future!" is little more than confirmation bias. Someone just bought themselves a new 4K toy, then has a habit of declaring it "every man's new baseline, 1080p is so obsolete, man" regardless of observable reality (Steam HW Survey: 0.69% of gamers have 4K monitors or TVs / 1.81% have 1440p / 0.73% have 2.35:1 / 95.0% have 1080p or below). It's pretty obvious why they test at 1080p outside of the vocal minority bubble when 19 out of 20 gamers are using resolutions no higher.

As for those who still don't understand bottlenecks, the CPU "feeds" the GPU frames. If the GPU is bogged down (either because it's a lower-end model or from using very high resolutions / settings in extremely demanding games on even the top-end GPU), then all the CPUs being tested will just sit there idling to varying degrees, which throws all the benchmark data out the window. It's why, on the 4K Battlefield chart, even a Pentium G4560 can keep up with Ryzen (both 40fps). If the G4560 can hit 98fps (1080p) but is limited to 40fps (4K), it'll be sitting there at 40-45% load and 55-60% idle (waiting for the GPU, which takes more than twice as long rendering each frame as the CPU does to prepare the next one). With a Ryzen (136fps at 1080p, limited to 40fps at 4K), it'll spend 25-30% of each frame preparing the next one, then idle 70-75% waiting for the GPU, which is 3x slower due to all those pixels.

All you'd end up doing by benchmarking CPUs under a 4K GPU bottleneck is testing the "idle time" of what the CPUs aren't doing (because they're all sitting around waiting) instead of what they could do if each were pushed to the max.
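Here is that idle-time arithmetic as a tiny sketch, using the frame rates quoted above (the helper function is my own illustration, not anything from the review itself):

```python
# If the CPU could prepare frames at cpu_fps but the GPU caps delivery at
# gpu_capped_fps, the CPU only works for part of each frame and idles the rest.

def cpu_utilisation_when_gpu_bound(cpu_fps, gpu_capped_fps):
    cpu_time_per_frame = 1.0 / cpu_fps      # time the CPU needs per frame
    frame_time = 1.0 / gpu_capped_fps       # time the GPU actually takes at 4K
    return cpu_time_per_frame / frame_time  # fraction of each frame the CPU is busy

print(cpu_utilisation_when_gpu_bound(98, 40))   # G4560: ~0.41 busy, ~0.59 idle
print(cpu_utilisation_when_gpu_bound(136, 40))  # Ryzen: ~0.29 busy, ~0.71 idle
```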


It's also why Techspot's previous reviews of lower-end stuff (e.g., benchmarking i3s not just on GTX 980s / 1080s but also GTX 960s / 1060s and 1050 Tis) are to be praised, as it gives readers far more useful information about where the bottleneck "sweet spots" are for any given class of hardware. No one's going to match a Titan X with a Celeron, but at the same time, for the same money, an i3 / G4560 + GTX 1060 has already proven far better overall than an i5-7600K + RX 460 / 1050 (because all that extra CPU horsepower does is spend longer idling while waiting for the GPU):
https://www.techspot.com/review/1325-intel-pentium-g4560/page4.html

For your information, since you are using the Steam survey as your argument, please also check what GPUs are linked to those monitors.
Of Steam users' GPUs, 26.27% are useless GPUs which will bottleneck even at 720p. And only 1.41% of the 73.73% of DX12 GPUs, which is about 1% of the total, are a GeForce GTX 1080.
So more people have 1440p or 4K monitors (1.81% + 0.69% = 2.5%) than have high-end GPUs (1.03%).
You can also add the GeForce GTX 980 Ti (0.76% of total GPUs) to this equation and there will still be more users with 1440p or 4K monitors.

Testing high-end cards at 1080p low quality, or even high, to show a CPU bottleneck is misleading for REAL LIFE scenarios.
You will always be GPU bound no matter what. We don't buy high-end GPUs to play Overwatch on low quality.
If, let's say, a GTX 1080 runs a given game at 1440p at 180fps, we will crank up the anti-aliasing in order to get better picture quality.

As for the G4560, there is no way it is going to keep up with a 7700K, even at 4K.
 
Not sure what the gripe is, but Ryzen will get Windows updates with better core parking/scheduling, and major motherboard BIOS updates should be coming as well. Check back in 3 months and it should be a whole new ball game for a new CPU that is already very good out of the gate. Win-win for everybody.
 
If a game uses 4 cores efficiently then you will see high utilization on a 4-core processor. If it only uses 4 cores then naturally you will see 50% utilization on an 8-core processor. An overclocked Core i7-7700K does not stutter heavily in any game; the stutter, while not completely non-existent, is very rare even in the most demanding modern games.

OK, what happens if a background task puts one core under 100% load? Then the 4-core CPU would have 125% load (five cores' worth of work on four cores) and the 8-core CPU 62.5% load.

I expect the 4-core CPU will stutter and the 8-core CPU will not.
 
By now I'm only reading to get a good laugh; it's like talking to a brick wall... keep on saying the sky is bright yellow and the ocean pink...
If you still have questions, please read again, it's extremely clear. If you still have questions, please read again, it's extremely clear. And, if you still have questions, please read again, it's extremely clear (Rinse and repeat).
 
If you watch AdoredTV, he says that 720p resolution is, quote, "a crock of ****", and shows you why, as always.
 
Great clarification of the previous reviews. It's just sad that people still need to justify their own insanity and pursue some of the most random claims despite the information being laid out right in front of them. But they love to speculate about the future, ignore the past, and pretend like you're doing this all wrong. At this point these people should just go out and write their own reviews, test the hardware the way they want to test it, and obtain the results they want to obtain, as biased as they might be. Then let the world try as it will to discredit them and call their results fake, and get told that's not how it should be done...

But wait, that would mean they would have to have all the hardware, the time, and the patience to do so. Damn, I guess it'll just be easier for them to criticize every review they disagree with until the end of time. Hey, it was worth a shot.
 
Can someone please explain some stuff to me here, because I'm confused? How is the CPU a bottleneck? Isn't accessing the video card done using Direct Memory Access, which bypasses the CPU? Shouldn't DMA let you bypass the CPU entirely, since you're writing directly to video RAM?

A game does not only use the GPU, because a game is not just a video benchmark. A (well written, code-wise) game will let the CPU handle the AI (you know, the intelligence that governs the likes of Lydia in Skyrim, or the behavior of an enemy reacting to a stone thrown nearby in Far Cry...). The CPU also has to calculate the paths, speed and amount of damage your units are doing when pitted against an enemy army in Starcraft or Total War: Rome, or the hits and drops in an RPG like Grim Dawn or Diablo. Sure, games like Crysis or Resident Evil are easier on the CPU, because there isn't much to calculate, and even more so when the game makes heavy use of scripting, like Broken Sword or any adventure game, but the CPU still has to handle the logic of deciding what data to feed to the GPU/RAM, and when, from the moment you start the game.
More to the point, the slower the CPU calculates speed, paths, hit/miss ratios and AI behavior, the lower your fps will be at a given resolution + detail level. And the higher the resolution and graphical settings, the more the GPU has to work, while the CPU always processes the same quantity of data for the AI and everything else listed above. So when the GPU has its hands full with geometry, physics and rendering, the CPU just waits for it before feeding it further data.

It's also worth noting that the CPU handles the initial setup and management of the 3D environment. While the GPU does the majority of the rendering workload, the CPU does play a part. In addition, the GPU driver itself runs on the CPU and accounts for a good 30-40% of the total program workload.

I'd go so far as to say that in most non-RTS games, 70-80% of the total CPU workload is managing data the GPU acts upon in some form. The actual game logic in most titles isn't terribly large, all things considered, which is why there are hard limits to how many cores games can use. Aside from the GPU driver and its processing, there isn't a lot of work to offload.
 
@ Steve Walton

Another view: http://techreport.com/review/31546/where-minimum-fps-figures-mislead-frame-time-analysis-shines

Apparently the sky is yellow and the ocean pink; unlike the stuff which is floating around here (what is that colour when you mix blue and green?)
It seems you don't read the things you quote as, apparently, the bible. It backs the results shown here (that the new-ish i7s beat Ryzen), but they add another graph showing how many times the card drops to the minimum frame rate within a given period of time. The sad part is that they are not the same games, which means there can't be a toe-to-toe comparison; only that, again, the i7 wins.
 
If a game uses 4 cores efficiently then you will see high utilization on a 4-core processor. If it only uses 4 cores then naturally you will see 50% utilization on an 8-core processor. An overclocked Core i7-7700K does not stutter heavily in any game; the stutter, while not completely non-existent, is very rare even in the most demanding modern games.

OK, what happens if a background task puts one core under 100% load? Then the 4-core CPU would have 125% load (five cores' worth of work on four cores) and the 8-core CPU 62.5% load.

I expect the 4-core CPU will stutter and the 8-core CPU will not.

If both of these theoretical CPUs had cores with the same exact performance, then you would be correct.

If, however, your four-core chip had cores with double the individual performance of your eight-core CPU's, then the eight-core CPU would be expected to stutter more, as any single INDIVIDUAL core would have a higher chance of becoming the bottleneck.
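To put rough numbers on both scenarios (a back-of-the-envelope sketch; the "workload units" are invented purely for illustration):

```python
# Back-of-the-envelope sketch of the two scenarios above. Workload is measured
# in slow-core units: the game needs 4.0 units, the background task adds 1.0.

def load(core_count, per_core_speed, demand=5.0):
    """Total demand as a fraction of total CPU capacity."""
    return demand / (core_count * per_core_speed)

# Equal per-core performance: the quad is oversubscribed, the octa-core is not.
print(load(4, 1.0))  # 1.25  -> 125% of capacity, expect stutter
print(load(8, 1.0))  # 0.625 -> 62.5% of capacity

# Quad core with 2x faster cores: the same total demand now fits comfortably,
# though a single thread pinned to one slow core can still hold the 8-core back.
print(load(4, 2.0))  # 0.625 -> same 62.5% headroom as the 8-core above
```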
 
For your information, since you are using the Steam survey as your argument
My "argument" was simply an observation of how amusing it is for some of the 2% of 4K monitor owners to question why 1080p benchmarks "still exist" for the 98% of mainstream gamers (regardless of what GPU's they own) but only when testing AMD CPU's. Techspot has included 1080p benchmarks on high end Intel CPU's for years (including last month's i7-7700K review), and has also done i3 vs i5 vs i7 reviews mixing up mid-range and high-end cards. And yet only now is it an "issue" for some AMD fans to not want to see ANY 1080p benchmarks despite Ryzen actually being treated equally...

Some of you are acting like everyone's calling for ONLY 1080p benchmarks or trying to "ban 4K". Even Techspot's headline dispels this nonsense: "AMD Ryzen Gaming Performance: 16 Games Played at 1080p & 1440p", with 4K being demonstrated in this article, which promptly shows why 40fps on $500 high-end CPUs at 4K, whilst $65 budget chips get 90-100fps at 1080p, isn't even benchmarking the CPU (in a CPU review) when the GPU bottleneck is such a high 8:1 factor (i.e., a chip with 1/4 of the cores gets double the frame rate once the bottleneck is removed).

Nor does "People are only allowed to buy Ryzen if they play at 4K resolution" make any sense at all. Ryzen's strength is productivity and plenty of people may buy one for work whilst playing at 1080p / 60Hz games in the evening. Again, resolution increases scale up on the GPU, not the CPU. There is zero "law" that ties what CPU you buy to what resolution you use. Many buy beefy CPU's for 1080p @ 120-144Hz.

Testing high-end cards at 1080p low quality, or even high, to show a CPU bottleneck is misleading for REAL LIFE scenarios.
No one said anything about "low quality" presets. And as I commented, decent benchmark sites test both 1080p and higher resolutions to let people see both the "likely match" (performance today) and GPU bottlenecks at 4K (i.e., future growth potential). Yet that still seems to irritate the "I don't want to see ANY 1080p benchmarks" crowd that suspiciously sprang up immediately post-Ryzen launch, yet simultaneously had zero complaints about testing 1080p on older AMD chips (or Intel's 7700K last month). Not to mention some feedback reviewers got directly from AMD themselves:

"When we approached AMD with these results pre-publication, the company defended its product by suggesting that intentionally creating a GPU bottleneck (read: no longer benchmarking the CPU’s performance) would serve as a great equalizer."
http://www.gamersnexus.net/hwreview...review-premiere-blender-fps-benchmarks/page-7

It's obvious what "the game" is with aggressively trying to hide 1080p mainstream benchmarking (but only for Ryzen not literally any of the dozens of other CPU's reviewed over the past few years) regardless of how some try and play dumb as to the underlying intention. ;)
 