Is the Ryzen 9800X3D Truly Faster for Real-World 4K Gaming?

Now, imagine the same race but without that pace car (game is now @ 1080p), meaning the CPUs are not held back waiting for the pace car but are free to rip down the track as fast as fluffing possible.

Yeah... except that the typical gamer, who owns a 4060 or equivalent, always has a pace car present. So the only way for them to know the value of a CPU upgrade is to see test results with that pace car in place.
 
It's true that native 4K benchmarks aren't standard, but at least show a 1440p benchmark.

Performance difference between 14600K ($237) and 9800X3D ($480) in Cyberpunk 2077 and Hogwarts Legacy at 1440p with RTX 4090 GPU:

Cyberpunk 2077 (1440p):
9800X3D + RTX 4090 = 155 FPS
14600K + RTX 4090 = 152 FPS
Only +3 FPS, but twice the price!

Hogwarts Legacy (1440p):
9800X3D + RTX 4090 = 207 FPS
14600K + RTX 4090 = 203 FPS
Only +4 FPS, but twice the price!

Source: TechPowerUp
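
For anyone who wants to check the value math, here is a minimal back-of-the-envelope sketch in Python. The only inputs are the prices and FPS figures quoted above; nothing here comes from additional testing.

```python
# Minimal sketch: relative FPS uplift vs. relative price increase,
# using only the prices and FPS figures quoted above.
configs = {
    "14600K":  {"price": 237, "Cyberpunk 2077": 152, "Hogwarts Legacy": 203},
    "9800X3D": {"price": 480, "Cyberpunk 2077": 155, "Hogwarts Legacy": 207},
}

cheap, fast = configs["14600K"], configs["9800X3D"]
price_delta = fast["price"] / cheap["price"] - 1  # relative price increase

for game in ("Cyberpunk 2077", "Hogwarts Legacy"):
    fps_delta = fast[game] / cheap[game] - 1      # relative FPS uplift
    print(f"{game}: +{fps_delta:.1%} FPS for +{price_delta:.1%} price "
          f"({cheap[game] / cheap['price']:.2f} vs {fast[game] / fast['price']:.2f} FPS per dollar)")
```

By this measure the 14600K delivers roughly twice the frames per dollar at 1440p with this GPU.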
 
Why stop at an expensive and higher-wattage 14600K? The Ryzen 7600X is the same 3-4 FPS behind the 9800X3D in those games for only:

$195!

And you're not stuck on a dead platform.
 
If your use of the CPU is only gaming, then you are absolutely right.

But for someone like me, an Autodesk Maya user, the 14600K offers much more performance for the price.
 
Frankly, at this point the only people who still argue against 1080p CPU testing are Intel shills who will change their tune the moment Intel produces a better gaming CPU. It is the only explanation for the willful misunderstanding and mischaracterisation of the facts presented. This is getting into flat-earth territory; at some point you just need to walk away and let people keep their delusions, particularly when they aren't based in ignorance but in ideology.
Yet here I am, trying to find out whether I can play Star Citizen at higher frame rates with the new AMD chip versus my 14900KF. SC doesn't care much about the GPU unless you push specific options, but it is CPU bound at a level most games are not. And there is one thing my fellow reviewers never get: there are always games which are borderline unplayable at a given resolution, and while it doesn't really matter whether you play at 120 or 144 FPS, it matters a lot if you can gain 5 FPS when you are around 60 FPS (see the frame-time math at the end of this comment).

Another issue with 1080p testing, present even in the current article, is that it amplifies the differences well beyond what can reasonably be expected, and this won't matter in real life. (Sure, "in 3-4 years", blah, blah... except that 3+ years from now these top CPUs will be obsolete tech, surpassed in games by mid-level CPUs.)

Say it's 30+% here, but in real life, at anything but the lowest resolution, you get half of that. Two years from now it will be the same, and even half of the benchmarks will be the same. Sure, you could apply some common sense and realize the difference is not that big and won't be, but do people really do that, or do they just remember that X was 30% faster than Y (more like 15% in reality, which they won't even know because the reviewers are too busy to care)?

I have 15 years as a reviewer at PCW, Computer Bild, hardwareBG and various magazines, and I've been on the other side too (PR at a distributor). IMO it's laziness not to simply include the information that would render the whole argument moot.
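
That 60 FPS point is easier to see in frame times than in frame rates. A minimal sketch in Python, with arbitrary example baselines rather than benchmark results:

```python
# Frame-time view of "+5 FPS": the same gain is worth far more milliseconds
# per frame near 60 FPS than near 120-144 FPS. Baselines are arbitrary examples.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for base in (60, 120, 144):
    saved = frame_time_ms(base) - frame_time_ms(base + 5)
    print(f"{base} -> {base + 5} FPS shaves {saved:.2f} ms off every frame")
```

Gaining 5 FPS around 60 FPS shaves about 1.3 ms off every frame, while the same gain around 144 FPS is worth roughly 0.2 ms.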
 
I genuinely still can't understand why people don't get the reason for testing like this.

Imagine a 16-lane drag strip with all the CPUs lined up for the race. 3. 2. 1. Go!

All the CPUs are off and blitz down the track, but there is a pace car out (game @ 4K) preventing them from passing because they're waiting on the pace car (GPU bound/bottlenecked). The CPUs are jockeying back and forth but are still held back by the pace car and essentially arrive at the finish line together.

Now, imagine the same race but without that pace car (game is now @ 1080p), meaning the CPUs are not held back waiting for the pace car but are free to rip down the track as fast as fluffing possible.

You've now effectively tested the CPU and removed any constraints on it (the bottleneck sketch at the bottom of the thread puts rough numbers on this).

The end!

It's still bullshit! By not including 4K benchmarks, there is no real-world picture of the performance gains to be expected for someone who games at 4K. This is especially true when pairing a CPU with a top-tier Nvidia GPU, where that GPU was ALWAYS chosen because of its 4K performance. Either provide a complete picture of the performance (at multiple resolutions) or admit that you don't really care about the actual user experience... and in that case no one should read your review.
 
They aren't testing GPU performance; they're testing CPU performance. Why do you refuse to understand this?
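
The pace car analogy, and the compressed 1440p numbers quoted higher up, both fall out of the same simple bottleneck model: a frame is finished only when the slower of the CPU and the GPU is done with it. The sketch below uses made-up frame times chosen to give roughly a 30% CPU-limited gap; it illustrates the shape of the effect and is not measured data.

```python
# Toy bottleneck model of the pace car analogy: a frame is ready only when
# both the CPU and the GPU have finished, so FPS ~ 1000 / max(cpu_ms, gpu_ms).
# All per-frame timings below are hypothetical, chosen only for illustration.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """FPS when each frame must wait for the slower of CPU and GPU."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpus = {"fast CPU": 5.0, "slow CPU": 6.5}  # hypothetical CPU frame times, ~30% apart
scenarios = {"1080p (light GPU load)": 4.0, "4K (heavy GPU load)": 9.0}  # hypothetical GPU frame times

for label, gpu_ms in scenarios.items():
    results = {name: fps(cpu_ms, gpu_ms) for name, cpu_ms in cpus.items()}
    gap = results["fast CPU"] / results["slow CPU"] - 1
    readout = ", ".join(f"{name} {value:.0f} FPS" for name, value in results.items())
    print(f"{label}: {readout} -> gap {gap:.0%}")
```

With a heavy GPU load acting as the pace car, both chips land on the same number; remove the pace car and the CPU-limited gap reappears, which is exactly what low-resolution CPU testing is designed to expose.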
 