Ryzen 5 5600X vs. Ryzen 7 5800X vs. Ryzen 9 5900X vs. 5950X: GPU Scaling Benchmark

Although the results are valid, the value of this test is precisely zero.

Playing on a PC differs from playing on a console - you can have 50 open tabs in your web browser, music playing through Spotify or Tidal, the Discord app running, maybe something downloading for later. For example, if I die in a multiplayer match I may jump back to Windows, browse the internet, and move between multiple apps. How will the smoothness of the system be affected with all of these things running simultaneously? Nobody knows.

The above is exactly why somebody would consider a 5800X or 5900X over a 5600X "for gaming". Who just turns on the PC, starts up a game, and does nothing else? It's not that we know an 8-core CPU will help, but it should mitigate the potential problems tremendously.

Right, given we know most games will be GPU-bound in a best-case scenario for a given CPU, why are we still testing best-case scenarios?

Find different ways to tax the CPU while running a game. How about a few tabs open in Chrome and Steam updating as the basic entry-level test, then escalate things with streaming, voice comms, multiple resource-hogging startup apps, spyware, etc.
 
Why the hell did you use old AMD GPUs? We could have gotten a lot more from this if the 6000 series had been used.
I get your point, but at the same time, I would wager that the number of people who actually own 6000 series GPUs is quite small proportionally, and therefore that scenario would make for a very small segment of readers.
 
There could well be some fringe heavy background use that justifies it, but a lot of that stuff is wildly exaggerated, usually by people who've bought themselves a shiny new 16C/32T CPU, found that most games still won't come close to using all the cores, and then started looking for ways to fill up the unused ones with cr*p to justify potentially overspending for their needs.

...or the people who have 16C/32T CPUs use them for tasks other than gaming, and therefore absolutely benefit from more cores. I agree with you, though, that for games, at the present time, 16C/32T is overkill.

Now for the TL;DR.
That won't be the case for long, however. The way parallelism is handled at the programmatic and language level has been evolving fundamentally, particularly since C++11 introduced std::async. Future games won't have to do nearly as much work to optimize for more cores, as game engines will evolve to automatically use relatively recent additions to the C++ language such as task-based constructs (tasks, futures, etc.) and coroutines. These newer constructs are much less reliant on the programmer performing individual thread management and are more about letting the runtime create a task or work item and run it on whatever resources are available. It does take time for programmers to learn and adopt such game-changing features, though, and then there's the adaptation of existing code to use the new functionality. Before any of that happens, the compiler vendors have to support those features in the first place.
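As a minimal illustration of that task-based style (my own sketch, not taken from any particular engine): independent jobs are handed to std::async as work items and joined through futures, with no hand-rolled thread management by the programmer.

```cpp
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Stand-ins for independent engine jobs (e.g. AI, physics, audio mixing).
// Each is expressed as a task; the runtime decides where it executes.
long long sum_range(const std::vector<int>& v, std::size_t first, std::size_t last) {
    return std::accumulate(v.begin() + first, v.begin() + last, 0LL);
}

int main() {
    std::vector<int> data(3000000, 1);

    // Hand three work items to the runtime instead of managing threads by hand.
    auto a = std::async(std::launch::async, sum_range, std::cref(data), 0, 1000000);
    auto b = std::async(std::launch::async, sum_range, std::cref(data), 1000000, 2000000);
    auto c = std::async(std::launch::async, sum_range, std::cref(data), 2000000, 3000000);

    // Futures deliver the results; no explicit thread creation, joining, or locking in user code.
    long long total = a.get() + b.get() + c.get();
    return total == 3000000 ? 0 : 1;
}
```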
 
Although the results are valid, the value of this test is precisely zero.

Playing on a PC differs from playing on a console - you can have 50 open tabs in your web browser, music playing through Spotify or Tidal, the Discord app running, maybe something downloading for later. For example, if I die in a multiplayer match I may jump back to Windows, browse the internet, and move between multiple apps. How will the smoothness of the system be affected with all of these things running simultaneously? Nobody knows.

The above is exactly why somebody would consider a 5800X or 5900X over a 5600X "for gaming". Who just turns on the PC, starts up a game, and does nothing else? It's not that we know an 8-core CPU will help, but it should mitigate the potential problems tremendously.

Fair enough. I could just close my apps and save 120 quid :D
 
Although the results are valid, the value of this test is precisely zero.

Playing on a PC differs from playing on a console - you can have 50 open tabs in your web browser, music playing through Spotify or Tidal, the Discord app running, maybe something downloading for later. For example, if I die in a multiplayer match I may jump back to Windows, browse the internet, and move between multiple apps. How will the smoothness of the system be affected with all of these things running simultaneously? Nobody knows.

The above is exactly why somebody would consider a 5800X or 5900X over a 5600X "for gaming". Who just turns on the PC, starts up a game, and does nothing else? It's not that we know an 8-core CPU will help, but it should mitigate the potential problems tremendously.

I remember when playing music and a chat app used to bring my PC to its knees as well. I think it was when I was learning to drive, back in 2001.
 
"We guess the lesson here is that sometimes you need to read a little further into the results to get the answers you’re after."
Too true. Reading between the lines always helps to form a judgement when buying new hardware.
 
I remember when playing music and a chat app used to bring my PC to its knees as well. I think it was when I was learning to drive, back in 2001.

Beside the point. Testing with a clean installation of Windows, with nothing running in the background... a 6-core and a 12-core at roughly the same frequencies are putting out almost the same numbers. I never would have expected it.
 
While I'm not playing the latest AAA titles, I only play at 1440p, and even my ancient i5 3570K + 1070 can handle most games I throw at it on high settings. I can only imagine how much things will improve with the 3700X + 2080 Super system I'm about to start building. Good to see it doesn't hurt to have 8-core+ CPUs, where they once had lower clocks. So I get great gaming and better non-gaming performance than the 6-core 5600X, and more future-proofing as games support more cores/threads. I will now skip Zen 3 and wait for Zen 4, maybe Zen 5, or even the successor to Alder Lake.
 
In regard to your comment on users getting mad when you don’t benchmark competitive settings:

While a good point, it’s interesting (and useful) to see when exactly the CPU becomes the bottleneck. We see here that the Ryzen 5 5600X is powerful enough that it never does.

If it was a 1st gen Ryzen 3, for example, it would be bad to pair that with the 3080. You’d be better off upgrading the CPU and spending the remaining money on a 3070. It is useful to see where that cutoff is. Benchmarking weaker CPUs will show that, I’m sure.

Not to mention that this was done with medium settings, not low settings.

Still, great job with the benchmarks. I just wanted to show why benchmarking competitive settings is still useful, rather than have the reader scale the results down in a linear fashion. What’s the best CPU+GPU that $300 can buy? $400? $500? Etc. Once you get to the weaker systems, it’ll show why it is useful.
 
Hm, since no one likes the 5800X, it seems to be the only Zen 3 chip that's available right now, lol. Just ordered one from Amazon.

Pre-tariffs, that was going to be the exact reason I would buy a 3090. The Microcenter an hour from me would have them well into the afternoon on delivery days, which meant I would pay the "convenience tax" and wouldn't be wasting my time on a dice roll.

The 5800X is a fine chip, and availability > performance per dollar. A $600 chip you can get now is better than a $400 chip you might be able to find in a couple of months.
 
Wow, the endless succession of mostly near-identical charts really drives home that when it comes to gaming, you will probably be just fine with any of a large range of modern processors. A great result given current low supply and high prices.

If you were looking for justifications for the more powerful CPUs, I wonder if there's anything measurable that shows up when it comes to load times, app switching, maybe even downloads (for those with fiber class connections), etc. My intuition is it's probably subtle if it's there at all, meaning it mostly comes down to what else you use your rig for.
 
While you make some valid points, some might also say, "Who games without closing excess browser tabs?"
Me for one. I suspect it comes down to what types of games and what type of gamer you are. Especially when I'm playing MMOs, I'm likely also chatting, reading, downloading, watching vids, checking on background work processing jobs I have running, etc. I am definitely not shutting everything else down to get into and out of a game - it's more like everything is up all the time and liable to be switched into at a moment's notice.

The great thing is I have one old 4 core system that can usually handle most of that just fine, give or take certain work tasks.
 
Beside the point. Testing with a clean installation of Windows, with nothing running in the background... a 6-core and a 12-core at roughly the same frequencies are putting out almost the same numbers. I never would have expected it.

It's no cleaner than my daily driver. It has all the same stuff installed, so no idea what you're on about. Also, again, since when did not having a clean install mean you needed more cores? :S
 
Pre-tariffs, that was going to be the exact reason I would buy a 3090. The Microcenter an hour from me would have them well into the afternoon on delivery days, which meant I would pay the "convenience tax" and wouldn't be wasting my time on a dice roll.

The 5800X is a fine chip, and availability > performance per dollar. A $600 chip you can get now is better than a $400 chip you might be able to find in a couple of months.

Oh, I got an RTX 3090 too because it was the only RTX 3000 card available last month, lol. I got a Ryzen 3100 to go with it at first because I couldn't find any of the Zen 3 CPUs back then. Surprisingly, the puny Ryzen 3100 handled Cyberpunk at 4K pretty well; I had a lot of fun with it. No regrets at all for the $1,600 RTX 3090; Cyberpunk with RTX really is something unforgettable.
 
Wth, you guys took the work of Hardware Unboxed and are showing it as yours. How are we supposed to trust you after this? The background of the pics is from the Hardware Unboxed studio, and so is the thumbnail. He even did the same test yesterday. It's just copying; the scores are also the same. His video:
 
Hey Steve, I'm sorry, I did not see that you were the editor. I thought you were being ripped off, just like some websites that use videos without crediting. I could not find the remove-comment button.
 
Hey Steve, I'm sorry, I did not see that you were the editor. I thought you were being ripped off, just like some websites that use videos without crediting. I could not find the remove-comment button.

Yeah... yer not the first to make that mistake. TechSpot and Hardware Unboxed are related, LOL! I've seen people make that accusation before, during the 3080 launch. It was hilarious then, now not so much, so no worries, nobody's laughing at ya. It happens.
 
This article is about GPU scaling, and in that specific instance we can see the results. I will continue to stand up for more cores, because I get frustrated that everyone acts like it's a waste of money to go up a trim level.

- this is unfortunately the only video that covers this, and they show that while the 5600X is better in gaming, once you turn on x264 encoding it's a different world.

There are many games where the 5600X struggled to run the game @ 1080p with 720p encoding at high kbps, while the 3700X ran it.

In COD:MW, the 5600X can't run 8000/6000/4000 kbps x264 @ 720p and can only do 2000. The 3700X, by comparison, can run up to 6000 kbps.

In Doom Eternal, which can run on a toaster, the 5600X (331 fps @ High) again has to go down to 2000 kbps, at 264 fps, while the 3700X (298 fps @ High) can still hit 4000 kbps @ 238 fps.

In RDR2, both CPUs start at 116 fps @ High, but the 5600X can only encode at 2000/3000 kbps (for 99 fps), while the 3700X goes all the way up to 8000 kbps.

Especially with how streaming has exploded during these COVID times, saying "not everyone is running a million programs", or "modern browsers are sophisticated enough to not suck resources anymore", or "this isn't 2001" is disingenuous and misses the point that more people than ever are picking up streaming, and that more cores can be beneficial for people outside of those doing workstation-level tasks.
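To put the core-count point in code form, here is a toy sketch (an illustration only, not OBS's actual pipeline or the video's methodology): a CPU-heavy software encode runs on its own thread while the game loop tries to hold its frame time. With spare physical cores the two barely interfere; on a core-starved chip they fight over the same execution resources.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Toy model only: a busy "software encoder" thread competing with a paced
// "game loop". On a CPU with spare cores, the encoder lands on otherwise
// idle cores; on a smaller part, it steals time from the game's threads.
std::atomic<bool> running{true};

void software_encode() {
    volatile unsigned long long work = 0;
    while (running.load()) {
        for (int i = 0; i < 1000000; ++i) work = work + i;  // stand-in for x264 CPU work
    }
}

void game_loop() {
    using namespace std::chrono;
    for (int frame = 0; frame < 600; ++frame) {               // ~10 seconds at 60 fps
        auto deadline = steady_clock::now() + milliseconds(16);
        // ... simulate and render the frame here ...
        std::this_thread::sleep_until(deadline);              // hold a ~60 fps pace
    }
    running.store(false);
}

int main() {
    std::thread encoder(software_encode);  // the software-encode stand-in
    game_loop();                           // the game's main thread
    encoder.join();
    std::puts("done");
    return 0;
}
```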
 
What is disingenuous is your assertions about the performance drop during encoding, and I call bullshit on you, OR you are using crap software. Most encoding is done on the GPU now, and most games on a 3090 have TONS of headroom to spare some cycles for an encode.

Why the HELL are you leaning on your CPU for encoding? That is lame. Streaming on a Core i7-7700K + 3090 runs better than your description, which tells me you ain't doing it right.
 
What is disingenuous is your assertions about the performance drop during encoding, and I call bullshit on you, OR you are using crap software. Most encoding is done on the GPU now, and most games on a 3090 have TONS of headroom to spare some cycles for an encode.

Why the HELL are you leaning on your CPU for encoding? That is lame. Streaming on a Core i7-7700K + 3090 runs better than your description, which tells me you ain't doing it right.

You obviously didn't watch the video. If you want more info, the Stream Doctor himself, EposVox, does another video comparing the 3700X to the 3900X for streaming, running at 1440p with x264 slow/medium/fast.

Would it surprise you to learn that not everyone uses NVENC? Would it surprise you to learn that not everyone has a 3000 series GPU, let alone a 3090?

Do you even know how much processing headroom OBS uses? Don't take my word for it - it's well documented by people more knowledgeable than me.

NVENC isn't "free"; it has a GPU cost. There are optimizations for all sorts of streaming setups, but since we're usually GPU-bottlenecked in games these days anyway, it makes sense to throw a beefier CPU at it. If the Stream Doctor himself doesn't use NVENC on his 2080 Ti, I'm going to take his word for it.


EposVox even talks about frame-capping games, as well as disabling the OBS preview for some games, and a whole gamut of settings and GPU optimizations.

 
An extra couple of cores wouldn't hurt for the <1% of gamers out there who stream. For the 99%ers, Steve's numbers are relevant.

As another anecdote to add to the few posts above this one: my kid wanted to record some gameplay and upload it to YT (not stream, just record), and you know what? A crappy low-end, slot-power-only GTX 1050 Ti with NVENC actually did a damn good job in Fortnite at 900p, 60 fps locked, once you adjusted the settings (an undervolt and a small underclock) so the card wasn't trying to draw too much power from the slot. I swapped in a GTX 1060 6GB, as we were asking quite a bit from that little guy, and now everything just works, no adjustments needed.

So those extra cores must only be for the few streamers out there.
 
Crikey Steve! Do you ever sleep? It's kind of important you know! :D

A great piece as always. Sometimes I swear that you must be a machine. :D
 
Totally expected. If you don't need the higher-core-count CPUs for other workloads, a 5600X is quite capable, but there are times when the 1% lows take a hit on the 6-core part, and maybe the 5800X is worth it for you, especially if you plan on keeping the build for 5-6 years, like me. Different game companies have said they're working on their game engines to make better use of CPUs with more cores; I would expect that to be the case with UE5, but we'll see.
 