Ryzen 9 3900X vs. Core i9-9900K: 36 Game Benchmark

There is no stock cooling for the 9900K. You will *always* pay for aftermarket cooling, as none is included with the CPU or the motherboard.

Yeah, that's why I want them to explain their "stock" 9900 setup... That's why you should *always* report all relevant information, or your results are fairly worthless.
 
They did. From the article:

The stock or out of the box configuration uses DDR4-3200 CL14 memory with the Corsair H115i on the Gigabyte Z390 Ultra.
 
Couple things. While it's implied, there is no actual statement as to what cooling was used in either of the i9 tests. Stock cooling? Explain.

How many times was each benchmark run? Was this an average of results? Was this using the same parts except what was stated? Was it an open test case? Were they performed at the same time?

Is 1080p considered a benchmark anymore? Why use a 2080ti for 1080p? Can we get cpu and gpu (gpu especially) usage percentage scores (high/low/average)?

Jesus, people act like it's tough to properly report findings. This just comes off as sloppy, I expected better, honestly.

Your first question is answered in the article text. The rest of your questions are answered in the day one reviews which are the first links in the article text. This is a follow-up article so I guess they figure you read the original article in order to have the proper perspective on these tests, but it's not like it would have wasted much space by including that.

The rest are all opinions and Jesus is over at GamersNexus. You got the wrong Steve.
 
Yeah, you're right. Like them or not, Gamers Nexus knows how to report testing conditions and results properly on a per-article basis.
 
It doesn't matter how many cores the CPU has when the crappy games can't use them. They are still being coded the same way as in the 1990s. We need better programmers and programming languages, not hardware. Nowadays programming languages suck.

Programmers suck too, because most of them are in it for the money, not because they love programming, which means they suck at what they do. From the game benchmarks, it's pretty evident their products are crap, since most of the AMD CPU is totally unused. It would be interesting to display the CPU usage along with the average FPS. Because with AMD you can obviously do video encoding in the background, considering how much computing power is left unused by the "modern" games...
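On the point about showing CPU usage next to average FPS, here's a minimal sketch of how per-core utilization could be logged during a benchmark run, assuming Python 3 with the third-party psutil package; the duration and sampling interval are arbitrary placeholder values.

```python
# Minimal sketch: log per-core and average CPU utilization while a
# benchmark runs. Requires Python 3 and the third-party psutil package.
# The 30-second duration and 1-second interval are placeholder choices.
import psutil

def log_cpu_usage(duration_s=30, interval_s=1.0):
    averages = []
    for _ in range(int(duration_s / interval_s)):
        per_core = psutil.cpu_percent(interval=interval_s, percpu=True)
        avg = sum(per_core) / len(per_core)
        averages.append(avg)
        print(f"avg {avg:5.1f}% | per-core {per_core}")
    print(f"overall average: {sum(averages) / len(averages):.1f}%")

if __name__ == "__main__":
    log_cpu_usage()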
 
I think there is still some wonky behavior going on with the Windows scheduler as well. Linus discovered this with a few games, and once they assigned affinity to a couple of cores, the scheduler stopped ping-ponging the CPU cores and performance increased quite a bit.
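For anyone who wants to try that same workaround, here's a minimal sketch of pinning a running game to a fixed set of cores, again assuming Python 3 with psutil; "game.exe" and the core list are placeholder values, and changing another process's affinity may require running elevated.

```python
# Minimal sketch: pin a running process to a fixed set of cores so the
# scheduler stops ping-ponging it around. Requires Python 3 and psutil.
# "game.exe" and the core list (0-3) are placeholder values.
import psutil

def pin_process(name="game.exe", cores=(0, 1, 2, 3)):
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == name:
            try:
                proc.cpu_affinity(list(cores))  # restrict to these cores only
                print(f"pinned PID {proc.pid} to cores {list(cores)}")
            except psutil.AccessDenied:
                print(f"no permission to change PID {proc.pid}")

if __name__ == "__main__":
    pin_process()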

At any rate, even if AMD were 10% slower at 1080p than Intel, I'd still have more cores for production work, and since I'm not playing games on a 144Hz monitor, I'm not really missing out on any of that FPS advantage Intel has. That's literally all they have, and sure, for people pushing 200Hz panels with 2080 Tis, good on them, buy your Intel CPUs and have a nice day.

Not that I'd buy a 3900X or 9900K anyway. I'm looking at the 3700X or 3600. But since I do a decent amount of video and sound production on my PC, the 3700X is looking pretty good and seems like a good match for my RTX 2060.
 
In the words of the great philosopher band Midnight Oil and WoW military leader "Dives":

a fact's a fact, handle it!

If your CPU is not ranked how you like on the gaming charts below, deal with it! Tossing a hissy fit on the internet (however comical) does not make it any faster. If you fail to understand the charts and why it's important to test at 720p and 1080p, that is a you problem and a lack of understanding on your part. Get educated, then come back and make a post rather than embarrassing yourself.

If you need 7% more frames when gaming at 720p using a $1200 graphics card, then by all means buy the 9900K.

For the rest of the people out there gaming with a measly <$1000 graphics card at 1440p or similar, feel free to buy an AMD or Intel CPU as there's no difference.

Yeah, it seems some will fight tooth and nail over that 7% margin. AMD has caught up in IPC; yes, their architecture still has some teething issues with Infinity Fabric latencies, and Intel will have them too when they ditch their monolithic designs and go chiplets as well. I don't think some people understand that Intel has to radically change the way they've been building CPUs to compete with AMD. If they do not, they will get left behind.

It has become abhorrently expensive to create huge monolithic dies. AMD innovated while Intel was happy to keep giving 5% IPC gains every generation and keep core counts down to 4! Now we have some real competition, and AMD is taking full advantage of the time Intel spent in cruise mode.

All you Intel fans should be pissed they sat on their butts and didn't progress. AMD did this with a smidgen of the R&D money Intel has available.

Now we have some competition driving prices down, and everyone should be happy. If you game and those 7% extra frames are that important to you, knock yourself out and go Intel. I'll go with AMD, because they are actually innovating, and I'd rather have my money go to a tech company that's trying to push CPU technology forward than to one that got fat and lazy like Intel did.
 
While I support AMD and their efforts, they are barely competing on 7nm when Intel is still on 12nm. When Intel moves to 10nm, the IPC gap will widen again, with Intel maintaining its lead. AMD needs to go back to the drawing board on IPC. On the bright side, AMD does have a good handle on multi-core performance. If game developers were able to write better for multi-core usage, then it may all be moot and it won't matter who you go with.


Even with the 2000 series, AMD was already pulling ahead of Intel in the DIY market, where people know what they are doing and what they want. The 3000 series is going to demolish Intel here, despite Intel's tiny gaming advantage in extreme, atypical scenarios. AMD already wins in DIY, and the 'Intel for gaming' folks will slowly quiet down over the next 2 years. They may have a resurgence if Intel releases something great in 2021 or 2022... we will see.
It's mobile and OEMs where Intel has so much leverage, and there the best product won't necessarily win.
 
Well, I went with the 3900X over the 9900K even though my rig is mainly for gaming. I play with a 2080 Ti at 3440x1440 120Hz, and at that resolution I should be GPU bound, so the difference would be minimal. I just couldn't buy an 8-core over a 12-core for 5% in gaming. Don't get me wrong, the 9900K is a damn good CPU; I also have an 8700K delidded at 5.1GHz and a 2700X. I'm still debating what CPU to put with what GPU. I have an 8700K, 3900X, 2700X, 5820K, and 3770K, and for GPUs a 2080 Ti, 2080, Radeon VII, 5700 XT, and 1070, plus 6 or 7 other GPUs. I think the 3900X is going with the 2080 Ti, the 8700K with the 2080, the 2700X with the Radeon VII, the 5820K with the 5700 XT, and the 3770K with the 1070. I'm not sure which is faster in gaming, the 2700X or the 5820K; my 5820K will do 4.7, but I like 4.6 better for voltages, and it might still be faster in games than the 2700X at 4.2. I'm still impressed with Navi; I just wish they would've had a cooler of the same quality as the Founders cooler. I undervolt to 972mV to keep temps and noise down.
 
I guess, if "atypical" means your AMD flagship constantly loses to mid-level Intel in gaming. But hey, whatever; AMD fanboys need something to stop the hurt of Intel's 80% market share, even in the Steam hardware survey. Remember, it's OK to cry.
 
Lol, you've only got two arms; how are you using 5 different PCs, and why?
 
I accept your insult to us, because you basically said we suck and we only want the money. I will take that as just a small rant and not be offended. But you have to read what I will tell you too.

I have no idea if you are a programmer, probably not (judging by your post), and there's nothing wrong with that, but let me explain.

Coding for many threads is 100 times harder than coding for isolated pipelines and raw per-core speed when dealing with draw calls. Plus, it can give you unexpected results.

The easiest and least error-prone way of programming 3D engines (games or not) is to try to fully saturate each core and only then proceed to the next one, so there isn't an equal distribution across every thread. If we did distribute evenly, not only would it take 100 times longer, but we would end up in a situation where one person has a 10-thread CPU, another has 20 threads, and someone else has 8. How could we take all of that into account? We would have to saturate the cores more on lower thread count CPUs, then take the CPU speed into account, and then, to make it even harder, logical threads, which are not like physical cores (they are much weaker and can basically only handle allocation and minimal processing). Then, to make it even worse, we would now be dealing with CCXs, inter-CCX latencies, and communication between what are basically different CPUs (the Ryzen design).

This would be a disaster and a stutter fest. The right way to do things is to try to saturate what is at your disposal, not to balance the usage across every thread. This way we ensure the engine will work on future architectures/CPUs too, and we ensure a steady frame time across the board.
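To illustrate the "saturate what's available" idea rather than hand-balancing work across every hardware thread, here's a toy sketch in Python (my own illustration, not any real engine's job system). A fixed worker pool pulls jobs from a shared queue, so whatever cores exist stay busy and the code never needs to know whether the machine exposes 8, 16 or 24 threads; note that Python's GIL means this only demonstrates the dispatch pattern, not real parallel speed-up.

```python
# Toy job system: a fixed pool of workers pulls frame jobs from a shared
# queue, so busy workers stay saturated and nothing is pre-split evenly
# across every hardware thread. Pool size and workload are placeholders,
# and Python's GIL means this shows the pattern, not real parallelism.
import os
from concurrent.futures import ThreadPoolExecutor

def frame_job(job_id):
    """Stand-in for one chunk of per-frame work (culling, animation, etc.)."""
    return sum(i * i for i in range(50_000)) + job_id

def run_frame(jobs, workers):
    # Each worker grabs the next job as soon as it finishes the previous one,
    # so the pool saturates naturally instead of balancing work up front.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(frame_job, range(jobs)))

if __name__ == "__main__":
    workers = os.cpu_count() or 4          # size the pool to the hardware
    results = run_frame(jobs=64, workers=workers)
    print(f"completed {len(results)} jobs on {workers} workers")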

Intel's design is way more programmer-friendly than the Ryzen design. This is also the reason why next-gen consoles will most likely pack 8-physical-core CPUs without SMT (according to the rumours). This will only make it worse for CPUs with a lot of logical threads on PC. This is why you see a lot of games gaining performance when you disable SMT/HT (as long as it isn't a low core count CPU with 2 or 4 cores, where HT actually helps a lot).

AMD made the decision to go down this path of CCXs, chiplets, a lot of cores, and a lot of threads. That gave them a selling point: value for a lot of tasks. But you can't have everything; it has its drawbacks, and as a programmer let me assure you 99.9%: you won't see any specific Ryzen optimizations at all.

We are not the ones who need to take risks and spend 100 times longer programming. AMD is the one that needs to, and will, find a way to put more cores on a single CCX, and more cores on a CPU in total, eliminating latencies and maybe SMT.

This is the one and only reason Intel still beats AMD in games. Their ring bus tech (which was lifted from AMD, by the way) is the greatest thing ever: easy to program for, great performance, low latency.

Now if you read all of this, thank you. And please think twice before you call us lazy next time.
 
Nope, it's still Intel for games.

Unless gaming is literally the only thing you do on your PC, the AMD chip would be better now and into the future. FYI, look at the 7700K vs. the 1800X: the 1800X now destroys the Intel chip, but when it was released, the Intel chip was regarded as the superior gaming chip. The same thing is going to happen to the 9900K; it'll just take some time for developers to properly thread their workloads.

If they have already optimized for threading on the 1800x what more will they do for the new Zen chips?
 
When comparing the 8-core/16-thread parts, there are plenty of titles where they struggle to push over 100 FPS, or run between 70-120 FPS. If you're gaming on a 1440p 120Hz/144Hz/165Hz monitor, you will need everything you can get, and a 15-25 FPS difference is significant. Even though being over 60 FPS doesn't mean it's going to be butter smooth, there is a difference, and that's why they make 120Hz/144Hz/165Hz gaming monitors with 1ms response times and G-Sync; it's not a gimmick.
Yes, if you are maxing out a G-Sync monitor, then G-Sync will be worse; the point of G-Sync is to kill tearing, it's basically "dynamic V-Sync".
If you can keep 120 FPS at all times on a 120Hz monitor (and if you can do a bit more, it will be butter smooth), G-Sync is not needed (actually it will make things worse); V-Sync will give a much smoother feel. There are many tests, and you can try it yourself: lock a game at 60 FPS and lock the monitor to 60 and you will feel something is wrong; then unlock the FPS and try again, and it will feel smoother because you have spare frames in case some are lost due to issues with the game.
If you can't keep a steady 120 FPS, then (if you can) lock it to 110 or 115 so G-Sync stays active at all times and gives you a butter-smooth experience.
Before you write something silly again, read some tips & tricks on the best way to utilise G-Sync/FreeSync at https://www.blurbusters.com
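To make the frame-cap advice concrete, here's a toy sketch of capping a few FPS under the refresh rate so adaptive sync stays engaged. In practice you'd use the in-game limiter or RTSS; the numbers here are placeholders and the "render" work is faked.

```python
# Toy sketch of a frame cap a few FPS under the refresh rate, the idea
# behind locking to 110-115 on a 120Hz panel so adaptive sync stays engaged.
# REFRESH_HZ, CAP_FPS and the fake 4 ms render time are placeholder values.
import time

REFRESH_HZ = 120
CAP_FPS = 115                      # stay a few frames under the refresh rate
FRAME_BUDGET = 1.0 / CAP_FPS       # ~8.7 ms per frame

def render_frame():
    """Stand-in for the game's per-frame work."""
    time.sleep(0.004)              # pretend the frame took 4 ms

def main(frames=300):
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        # Sleep away whatever is left of the frame budget so the frame rate
        # never exceeds the cap and drops out of the adaptive sync range.
        leftover = FRAME_BUDGET - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)

if __name__ == "__main__":
    main()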

Some games are heavily unoptimised, like Insurgency: Sandstorm; if I'm playing at 1440p in 21:9, all ultra, with an 8700K + 1080 Ti, the average FPS is between 50-60, just because some maps are better optimised than others.
Gamers tend to match a high-end CPU with a high-end GPU, so using this config at 1080p is really a waste of money; testing at 1440p in 16:9 and 21:9, and at 4K, would be a better representation for gaming benchmarks.
Streaming a game like Insurgency at 60 FPS with an Intel CPU is absolutely painful for the CPU, but with a 2700X there are no FPS drops at all.
For me, I'd be happy to see a "gaming while streaming" test at 60 FPS at full resolution with both NVENC and CPU (H.264) encoding. For me the 3900X is the winner: more cores, a more future-proof motherboard, and no impact on the game's FPS when recording/streaming.
If I were a pure gamer like I was before, then the 9900K, but in Intel's case every new gen means a new motherboard :(
 
Couple things. While it's implied, there is no actual statement as to what cooling was used in either of the i9 tests. Stock cooling? Explain.

They did, at the very beginning of the article, & even indicated that the air-quoted "stock" for both the 3900X & 9900K more correctly meant "out of the box" (i.e. only the XMP profile needed to run the RAM at its rated speed, the stock cooler on the 3900X, & the Corsair H115i cooler on the 9900K, but without any manual changes to clock speed).

How many times was each benchmark run? Was this an average of results? Was this using the same parts except what was stated? Was it an open test case? Were they performed at the same time?

Although not mentioned in this particular article, they have mentioned in enough articles over the (many) years of their reviews that they do multiple passes through the same benchmark on each game so that they get an average result. Guess they assumed by now that people would remember that.

Is 1080p considered a benchmark anymore? Why use a 2080ti for 1080p? Can we get cpu and gpu (gpu especially) usage percentage scores (high/low/average)?

Jesus, people act like it's tough to properly report findings. This just comes off as sloppy, I expected better, honestly.

They go into it in much more depth in an article from last November (https://www.techspot.com/article/1637-how-we-test-cpu-gaming-benchmarks/). However, it's a very long article with a lot of different examples, so I'll summarize:
-- There are 3 ways that a game/system combination can be bottlenecked: CPU, GPU, & a combination of the two.
-- The higher the resolution, the higher the chance of a GPU bottleneck occurring...& even with the mighty RTX 2080 Ti, there are still games at 4K resolution that will be GPU bottlenecked. It's simply that the GPU takes so much time to process & display the data it gets from the CPU that the CPU spends more & more time idling while it waits for the GPU to catch up. That's also why there are a number of games where, at 4K resolutions, pairing almost any CPU with the "wrong" type of GPU will show zero difference in performance.
-- The lower the resolution, the higher the chance of a CPU bottleneck occurring. In this case, even a mid-range GPU (let alone a high-performance one) easily blows through the data it's received, so now the GPU is waiting for the CPU to send it some new data.
-- They used to do CPU testing at 720p or even lower resolutions, because only high-end GPUs could provide 1080p (or even 1440p) output. Nowadays, 1080p is considered the "normal" or "standard" resolution, & a good mid-range GPU can still be bottlenecked by a weak CPU. (Note that in both the recent May & June Steam hardware surveys, 1080p was the most common resolution used by gamers. Of the top 26 GPUs from May, the top one was the GTX 1060, a 1080p GPU, & only 6 of those 26 can be considered 1440p GPUs, let alone 4K; the other 20 GPUs -- essentially "1080p or lower", including the integrated Intel options -- made up 52.95% of the GPUs in the survey. The article on the June survey only included the top 19 GPUs, but again only 6 can be considered 1440p/4K cards, with the other 13 still providing 48.93% of the hardware out there.)
-- The last thing you want when testing is to use a system that's bottlenecked on both the CPU & the GPU. So, in order to maximize your ability to measure differences in the performance of various GPUs, you minimize any effects due to CPU performance by using the most powerful CPU available (or at least close enough for the majority of users), & test at the available resolutions to show how performance scales; conversely, to measure differences in performance between different CPUs, you use the most powerful gaming GPU available (in this case, the RTX 2080 Ti) at a fixed resolution, & then optionally test to see how overclocking affects the performance (the small sketch after this list illustrates why the low-resolution choice matters).
-- Expecting them to test all conceivable combinations of CPU, GPU, & resolution across all of the different motherboards, RAM speeds, & RAM brands is a complete pipe-dream. But hey, if you have unlimited cash & time on your hands to do it yourself, more power to you.
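To make that concrete, here's a toy model (my own illustration, not TechSpot's methodology): treat each frame as taking roughly the longer of the CPU's and the GPU's share of the work, so whichever side is slower sets the frame rate. All the millisecond figures are invented purely for illustration.

```python
# Toy bottleneck model: the slower of the CPU and GPU sets the frame time.
# All millisecond figures below are invented for illustration only.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpus = {"faster CPU": 5.0, "slower CPU": 7.0}      # ms of CPU work per frame
gpus = {"4K": 16.0, "1440p": 9.0, "1080p": 5.5}    # ms of GPU work per frame

for res, gpu_ms in gpus.items():
    line = ", ".join(f"{name}: {fps(cpu_ms, gpu_ms):5.1f} FPS"
                     for name, cpu_ms in cpus.items())
    print(f"{res:>6}: {line}")

# At 4K both CPUs land at ~62 FPS (GPU bound); only at 1080p does the gap
# between the two CPUs show up, which is why CPU reviews test at low res.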

TL;DR: It's been explained for years & years, not just here but on all of the reputable benchmarking sites, why CPU testing is done this way. It's done that way because that's how things work. Otherwise, any "results" from the testing are essentially meaningless.
 
But, but, but I get 932.8 frames per second in my Quake3Demo with my one year old Ryzen 7 2700 system. What is left to say......
 
Good explanation. I would much rather have 8 raw cores running at 5-5.5 GHz than 12 to 16 running at 4.5-4.7 GHz. I'd imagine that two 4-core, non-SMT CPUs could handle the higher clocks and heat very well. As for the production crowd, I don't see any reason why an 8-core, 8-thread CPU couldn't still handle production, publishing, and creator needs. But I digress. It's entirely possible that AMD is sandbagging a bit.
 
https://www.techspot.com/article/1876-4ghz-ryzen-3rd-gen-vs-core-i9/
Clock for clock, AMD is now ahead of Intel, even in single-threaded workloads, except for games, which favor low latency.
That's interesting. No doubt games represent real-time processing more than any other type of program, so now the question is whether AMD can solve the latency issue. AMD did unveil some processing tricks to help improve detail at lower resolutions in their latest Navi GPUs; that's one way to try to overcome it, by moving to a lower resolution.
 
Ryzen is flourishing little by little, considering how immature its BIOS updates still are. I would choose a Ryzen for many reasons, especially because I wouldn't like to change my motherboard every year the way Intel makes you do.
 