Tackling the subject of GPU bottlenecking and CPU gaming benchmarks, using Ryzen as an...

I... I'm amazed at some of the replies here. I did not think trying to explain the concept could create a discussion like this.

I was going to go through some of the more recent comments, and heck, I even wrote half a reply out, but it's honestly beyond the scope of my time and effort to add to (or muddy) the conversation.

If anything, you're all arguing about parts of Steve's methodology that you don't agree with (which measurements and data points are used, and how they're compared) rather than the main point being presented and defended in this article: minimizing the effect of other variables, specifically the GPU's performance, so that the test isolates the CPU. I'm not saying the points raised are without merit (though most are grasping at straws, or describe situations so specific that they're impossible to test reasonably), but they're beyond the scope of this particular explanation of why Steve made the GPU choice he did. Likewise, testing at different resolutions is his way of showing the relationship between CPU power and GPU power, and where one becomes the limiting resource over the other.

All the hubbub aside, I still think Ryzen is starting out quite strong (thank god, honestly; I don't think AMD could weather another dud), regardless of how other users seem to be interpreting the results.
 
I agree, but this IS a gaming article (and many of those hyping Ryzen were gamers). Is Ryzen a bad CPU? No. It's a BIG step forward from the FX chips and an amazing value productivity chip. BUT, are the demands from some overly enthusiastic advocates to suddenly start testing primarily at 4K, thereby "skewing" the scores with a GPU bottleneck that disproportionately nerfs the faster chips, valid or honest? Equally no. Especially since there was no similar outcry over the plethora of Intel i7 vs. Intel i5 gaming benchmarks before Ryzen. Brand warfare aside, bottleneck-elimination scaling isn't new; it goes all the way back to 1990s CPU benchmarks run at 640x480 when many people had 1024x768 monitors.

It wasn't because that's what they believed "the future" would be, but to highlight how much headroom the CPUs had for future cards once the GPU bottleneck was removed. And yes, it does scale pretty well, as anyone who enjoys replaying older games a couple of years down the line on the same CPU but a newer GPU has long figured out. Look at how many people have kept the same i5-2500K for 5+ years but gone through 2-3 GPUs. The fps they've gained 3-4 years down the line, even while upping the resolution on much more powerful cards, is indeed roughly in line with what low-resolution benchmarks showed.
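To make the "headroom" argument concrete, here's a minimal sketch of the usual simplification that delivered fps is roughly min(CPU-bound fps, GPU-bound fps). Every number in it is an illustrative assumption, not real benchmark data:

```python
# Simplified bottleneck model: delivered fps ~ min(CPU ceiling, GPU throughput).
# All figures below are made-up round numbers purely for illustration.

CPU_FPS_CEILING = {"fast_cpu": 160, "slow_cpu": 100}   # fps each CPU can feed the GPU
GPU_FPS = {                                            # fps each GPU can render per resolution
    "2017_gpu": {"720p": 220, "1080p": 140, "4K": 45},
    "2020_gpu": {"720p": 400, "1080p": 260, "4K": 110},
}

for gpu, per_res in GPU_FPS.items():
    for res, gpu_fps in per_res.items():
        # Whichever side is slower sets the framerate you actually see.
        results = {cpu: min(ceiling, gpu_fps) for cpu, ceiling in CPU_FPS_CEILING.items()}
        print(gpu, res, results)
```

Under these assumed numbers, the hypothetical 2017 GPU at 4K pins both CPUs to the same 45 fps and hides the gap entirely, while 720p exposes the real 160 vs. 100 difference, and that same gap resurfaces at 1080p and even 4K once the faster 2020 GPU removes the bottleneck.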

I'll repeat just for clarity: testing CPUs under a heavy 4K dGPU bottleneck is like benchmarking SSDs by timing how long it takes to install a program from a USB 2.0 external HDD capped at 30-40 MB/s, then arguing over "future scaling" semantics. It's not just unfair; it's completely, totally, and utterly pointless when even the slowest SSDs will be idle 50%+ of the time.
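The same min() logic makes the SSD analogy easy to check with assumed round numbers (the install size and SSD write speeds here are hypothetical; the ~35 MB/s cap is from the post above):

```python
# The installer can only move data as fast as the slowest link in the chain.
# Throughput and size figures are assumed round numbers for illustration.
INSTALL_SIZE_MB = 20_000          # hypothetical 20 GB install
SOURCE_MBPS = 35                  # USB 2.0 external HDD cap

SSD_WRITE_MBPS = {"slow_ssd": 150, "fast_ssd": 2000}

for ssd, write_mbps in SSD_WRITE_MBPS.items():
    effective = min(SOURCE_MBPS, write_mbps)           # source caps the transfer
    seconds = INSTALL_SIZE_MB / effective
    idle = 1 - effective / write_mbps                  # fraction of time the SSD waits
    print(f"{ssd}: {seconds:.0f} s install, SSD idle {idle:.0%} of the time")
```

With those numbers, both drives post the same ~570 s install, and even the slow SSD sits idle roughly 77% of the time. The benchmark measures the cap, not the SSD.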

Hopefully the R3 and R5 CPUs have more overclocking headroom. AMD is going to need it to dethrone Intel with the gaming crowd.
 