Intel Core i9-10900K Review: Can It Beat Ryzen 9?

For gaming at 1440p or better, right now it seems academic whether you have a $500 10900K or a stock $175 Ryzen 3600; there's just 10 percent between them. Granted, if you're at 1080p with a high-end GPU like the 2080 Ti it's all about the frames, but we all know that configuration is a small niche among PC users. 1440p or better is where it's at now, and it's the bottom end of where we'll all be in another two years.

For productivity you're surely going to go with the 3900X for circa $400, not least because you know you can just drop a 3950X in as an upgrade later, or something even faster from Zen 3.

That's it, really. If my budget were at this level I would plump for a 3900X on a quality board, and then see what 16-core Zen 3 goodness I can drop in there in a year or so for a simple, tangible upgrade.

For gaming there has been little reason to upgrade in over a decade; even my old 920/960/980X systems are plenty for 1080p/1440p gaming; heck, even my i5 750 can still handle modern games at 1080p with a 1050 Ti.
 
Yes, I agree. I think of it this way:

If someone's going to spend $1200 on a GPU, IMO it's foolish not to buy the best CPU to extract the most frames from that hefty purchase. A ~$520 CPU and a ~$200 AIO cooler is a very reasonable cost in comparison, to make sure you get the most out of that 2080 Ti.

Even so, I'm *really* curious how well the i5 and even the i3 do, seeing as the R3 3300X was quite a cheap yet reasonable gaming powerhouse for even a $400-500 GPU.
It's a very small subset of a subset of gamers who spend $1200 on a graphics card to run at 1080p at high refresh rates, because those are the only conditions under which the Intel 10900K will make a difference.
 
Even though everyone is praising AMD here, the truth is lots of people jumped on buying these and they are going out of stock in record time.
Intel has built a user base that will be hard for AMD to win over. Neither prices nor better multithreaded performance will make them go AMD. Gaming performance is of utmost importance to many users, and some people don't really care, or don't believe, that AMD could be the better deal. So as long as Intel keeps the gaming crown, people will still buy Intel, even if what they're buying is basically a rebadged 6950X.
Which, tbh, is not a bad CPU.
Funnily enough, I have a pal who has bought Intel for the past 20 years. He just finished (today) building a new rig for himself based on an AMD 3950X and an Nvidia 2080 Ti.
I was gobsmacked... but he admitted that 16 cores and decent gaming couldn't be overlooked.
Intel lost a loyal customer.
 
Close to losing another. But I'm not ready to jump ship yet. Maybe I should start thinking about it. I've been anti-AMD for so long it has taken a few years to lose my hatred (not completely gone yet).
Why hate? I've always gone Intel, but just because I'm a morbid creature of habit; Intel was in the first few machines I ever used and in the first one I ever built. At this point I am just dumbfounded by this move. Beyond what I said in my other post in this thread, it's a flex, nothing more: literally trolling their rival with a business move and probably profiting considerably. There is no way to justify the existence of 10th gen at all, full stop.
 
Close to losing another. But I'm not ready to jump ship yet. Maybe I should start thinking about it. I've been anti-AMD for so long it has taken a few years to lose my hatred (not completely gone yet).
I've never understood the emotional attachment to a brand. I ran a Motorola 68k in my first PC, Intel next for a few, then AMD for a while, and now Threadripper for the last two. I couldn't care less who makes it; it's about the most bang for the buck, and 2-5% either way in any category is irrelevant. Disclaimer: not a pro gamer.
 
Thanks for your work! Good job!

But there is one question about the Shadow of the Tomb Raider tests: why does the 2700X lose so much performance when switching from 1080p to 1440p, while the 2600X remains at the same level?
 
Thanks for your work! Good job!
But there is one question about the Shadow of the Tomb Raider tests: why does the 2700X lose so much performance when switching from 1080p to 1440p, while the 2600X remains at the same level?
And RDR2 too.
 
Thanks for your work! Good job!

But there is one question about the Shadow of the Tomb Raider tests: why does the 2700X lose so much performance when switching from 1080p to 1440p, while the 2600X remains at the same level?
The 2600 has the same amount of L3 cache as the 2700X, but with fewer cores it’s under less pressure when the demand on the cache increases. The L3 cache also works differently to the L1/L2 cache as it just stores data booted out of the lower level caches, rather than storing prefetched data.

So it’s possible that this is the reason why the 2600 barely changes with the resolution increase - some of the rendering processes require a fair bit of CPU work before issuing the frame to the GPU and if this hits the caches hard, then the 2600 has more L3 cache room for all its cores compared to the 2700X, when their L1/L2 caches have to clear space.
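Just to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python (my own illustration, not anything from the review) comparing how much L3 room each busy core gets on the two chips, using the standard Zen+ figures of 16 MB total L3 for both:

```python
# Rough illustration only: how much L3 each active core can claim when every
# core is busy. Figures are the usual Zen+ specs (16 MB total L3 for both
# parts); this is not a model of the real cache behaviour, just the headroom idea.

chips = {
    "Ryzen 7 2700X": {"cores": 8, "l3_mb": 16},
    "Ryzen 5 2600":  {"cores": 6, "l3_mb": 16},
}

for name, spec in chips.items():
    per_core = spec["l3_mb"] / spec["cores"]
    print(f"{name}: {spec['l3_mb']} MB L3 / {spec['cores']} cores "
          f"= {per_core:.2f} MB of victim-cache room per busy core")
```

So with every core evicting L1/L2 lines at once, the 2600 has roughly a third more victim-cache room per core (about 2.67 MB vs 2 MB), which fits the theory above.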
 
The 2600 has the same amount of L3 cache as the 2700X, but with fewer cores it’s under less pressure when the demand on the cache increases. The L3 cache also works differently to the L1/L2 cache as it just stores data booted out of the lower level caches, rather than storing prefetched data.

So it’s possible that this is the reason why the 2600 barely changes with the resolution increase - some of the rendering processes require a fair bit of CPU work before issuing the frame to the GPU and if this hits the caches hard, then the 2600 has more L3 cache room for all its cores compared to the 2700X, when their L1/L2 caches have to clear space.
OK, it's possible, but why doesn't this happen between the 3600 and the 3700X?
 
What a lot of PC enthusiasts forget is that the gaming market is tiny compared to the enterprise market, and that is where the money is made. While I think Zen 3 will finally make AMD even with Intel on that front, I believe their time is better spent in the enterprise sector and working with more OEMs to get into laptops etc. Multi-year contracts are something that happens in the business space; consumers are fickle at best and have less money to spend.

Both markets go hand in hand; it's actually the large-volume OEM/business market that enables the high-end gaming CPU.
The reason is binning. Let's say that only the top 1% (the number is just a guess) of dies is good enough to reach top clock speeds at acceptable power levels. So if you make ten million CPUs, that means you have a hundred thousand high-end gaming CPUs (i9). Let's say an additional 15% are good enough for the i7 tier, and then you can sell the rest for very low prices to OEMs as i3 or i5 office PCs.

If you only make a million, that's ten thousand, and probably not enough to warrant a separate CPU line.

So high volumes, IMHO, are what allows a company to have enough top-binned parts to sell as a high-end model.
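Just to make the arithmetic explicit, here's a tiny Python sketch using the same guessed percentages (1% top bin, a further 15% for the i7 tier) and those hypothetical volumes:

```python
# Quick yield arithmetic with the guessed percentages above; the volumes are
# hypothetical round numbers, not real production figures.

def bin_counts(dies_made, top_bin=0.01, mid_bin=0.15):
    top = int(dies_made * top_bin)    # i9-class parts
    mid = int(dies_made * mid_bin)    # i7-class parts
    rest = dies_made - top - mid      # i3/i5 office parts sold cheaply to OEMs
    return top, mid, rest

for volume in (10_000_000, 1_000_000):
    top, mid, rest = bin_counts(volume)
    print(f"{volume:>10,} dies -> {top:,} top-bin, {mid:,} i7-tier, {rest:,} OEM")
```

Ten million dies gives you 100,000 potential i9s; one million gives you only 10,000, which is exactly the volume point.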

Note: due to both the process and architecture maturity Intel has with their current CPUs on 14nm, I would assume that the top-bin percentage may be even higher, which makes the move to new architectures and process nodes even more difficult.
 
OK, it's possible, but why doesn't this happen between the 3600 and the 3700X?
In the Zen and Zen+ architectures, each core has 4 blocks of 0.5 MB L3 cache, giving a total of 2 MB per core and 8 MB per CCX. Any core, in any CCX, can access any L3 block. In Zen 2, the blocks are 1 MB each, so each CCX has 16 MB, giving a lot more room for L2 data ejects.

And this cache is no longer global to any core in any CCX, so when data is hoofed out of a core's L2, it only goes into the blocks associated with that core/CCX. Transfers between CCXs are also handled by the I/O chip, rather than letting one CCX poke about in the other's cache.

But this is just guessing. I wonder if one can possibly pick out the impact of the L3 cache in our 3300X/3100 review:

[Charts: Shadow of the Tomb Raider results at 1080p and 1440p from the 3300X/3100 review]


So the performance drops (1% Low/Avg) for some of the Ryzen chips are:

Zen 2
3700X = 16.2% / 20.2%
3600 = 15.6% / 18.3%
3300X = 12.0% / 7.8%
3100 = 6.3% / 2.5%

Zen/Zen+
2700X = 17.2% / 24.8%
2600 = 2.6% / 1.1%
1600 AF = 2.6% / 2.2%
1600 = 5.6% / 3.5%

The issue possibly involves CCX-to-CCX accesses, as even though the 3300X takes a hit, it's nowhere near as bad as the 3700X and 3600. But the 3100, the 2600 (and its carbon copy, the 1600 AF), and the original 1600 all have much smaller performance drops going from 1080p to 1440p.

The CCX/L3 cache structures are:

3700X = 2 CCXs, 4 cores each, 16 MB L3 cache per CCX
3600 = 2 CCXs, 3 cores each, 16 MB L3 cache per CCX
3300X = 1 CCX, 4 cores, 16 MB L3 cache per CCX
3100 = 2 CCXs, 2 cores each, 8 MB L3 cache per CCX

2700X = 2 CCXs, 4 cores each, 8 MB L3 cache per CCX
2600 = 2 CCXs, 3 cores each, 8 MB L3 cache per CCX
1600 AF = 2 CCXs, 3 cores each, 8 MB L3 cache per CCX
1600 = 2 CCXs, 3 cores each, 8 MB L3 cache per CCX

It's perhaps just the oddities of the respective architectures. In the original Zen, it would seem that the L3 cache doesn't like having 8 cores all trying to access each other's cache; in Zen 2, the I/O chip should remove this issue, which would explain why the 3600 and 3700X drops are similar. The 3300X only has one CCX, so it doesn't have CCX-to-CCX concerns. But what about the 3100? It's possible that its low performance drops are actually a sign that the chip's layout is just a lot less efficient than the 3300X's and it's already struggling.

It's all very peculiar!
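For anyone who wants to eyeball the pattern, here's a small Python snippet that just re-tabulates the figures above (CCX count, L3 per core within a CCX, and the average-FPS drop from 1080p to 1440p). It's only a restatement of the numbers already quoted, not proof of the cross-CCX theory:

```python
# Re-tabulating the quoted figures: CCX layout, L3 share per core within a CCX,
# whether cross-CCX traffic is possible, and the measured average-FPS drop.

chips = [
    # name,    CCXs, cores/CCX, L3 per CCX (MB), avg-fps drop 1080p->1440p (%)
    ("3700X",   2, 4, 16, 20.2),
    ("3600",    2, 3, 16, 18.3),
    ("3300X",   1, 4, 16,  7.8),
    ("3100",    2, 2,  8,  2.5),
    ("2700X",   2, 4,  8, 24.8),
    ("2600",    2, 3,  8,  1.1),
    ("1600 AF", 2, 3,  8,  2.2),
    ("1600",    2, 3,  8,  3.5),
]

print(f"{'Chip':<8} {'CCXs':>4} {'L3/core (MB)':>13} {'Cross-CCX?':>10} {'Drop %':>7}")
for name, ccxs, cores, l3, drop in chips:
    per_core = l3 / cores                   # L3 share per core within one CCX
    cross = "yes" if ccxs > 1 else "no"     # single-CCX chips avoid CCX-to-CCX hops
    print(f"{name:<8} {ccxs:>4} {per_core:>13.1f} {cross:>10} {drop:>7.1f}")
```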
 
"which is a pointless 10-core Cascade Lake-X part for the LGA2066 socket".....

(weeps quietly alone with an i7 7820X).....

- if anyone has one of these pointless chips I will give it a good home - honest...

Back on topic - an extra 3 fps in games (over the 9900K) that you need to buy a new motherboard and a decent cooler for, without anything particularly compelling on the chipset. Seems Intel are big fans of Metallica's Frantic.......

 
If money were no object I still wouldn't buy the 10900K. Got $1000 to burn (10900K + motherboard + cooler)? Anyone who currently owns an 8th or 9th gen Intel CPU would be better off spending it on a custom liquid loop and high-performance RAM.

Now I'm just itching for R9 4950X...
 
Considering that the 10900K is still essentially the same microarchitecture as a 7700K, the product itself is actually not that bad, especially from an engineering angle. 10 cores at 4.3 GHz, produced on 14nm, using an aging architecture, and still fitting into a 125W power envelope: I'd say it is actually impressive (I for one expected worse, to be honest).
However, I agree with the article's conclusion that in the end it is hard to recommend. The cooler is also a factor: 125W definitely demands a decent cooler, and a 'budget champ' 212 Evo variant won't cut it (no matter how much I actually love that cooler). With a decent cooler taken into consideration, the price gap gets even larger against a 3900X.
I also agree that for many, the small gaming difference is not going to justify the noticeable price premium (even if they don't care about productivity). And then there is the upgrade path, which is a different matter entirely...

Sounds very, very similar to my own thoughts.

Except a 125W cooling requirement isn't new in PC CPU cooling history. I believe even a small tower (92mm fan @ 2000rpm) with 3 heat pipes can handle 125W. It won't be quiet, though. A 212 Evo should be enough, and there's also a fan upgrade path.
 
Considering that the 10900K is still essentially the same microarchitecture as a 7700K, the product itself is actually not that bad, especially from an engineering angle. 10 cores at 4.3 GHz, produced on 14nm, using an aging architecture, and still fitting into a 125W power envelope: I'd say it is actually impressive (I for one expected worse, to be honest).
It's actually the same microarchitecture as the 6700K (Skylake) - the only changes, outside of the manufacturing process refinements, have been tweaks to Speed Shift, faster memory support in the controller, and (obviously) more cores and L3 cache. There's clearly nothing wrong with the fundamental design, especially in light of the fact that it's nearly 5 years old, but only when one compares it to itself. Hold it up against the likes of a 3900X, which offers 12 cores in a lower power window, and it's like a Scooby-Doo episode where the pesky kids solve the mystery by removing the mask and showing everyone that it's Mr Netburst all over again.
 
I personally won't be upgrading any hardware till DDR5 comes to market...by then, the CPU and GPU game will be completely different.

From what I've seen of the 10900K (the only processor I'd be building a computer with now), it wins in gaming, something I actually do, but loses in "benchmarks", something I absolutely don't.

Microcenter shows motherboards for these CPUs at about $250, so that means when I upgrade I'd need to spend around $250-$300 more and transfer over my SSDs.
 
I've never understood the emotional attachment to a brand. I ran a Motorola 68k in my first PC, Intel next for a few, then AMD for a while, and now Threadripper for the last two. I couldn't care less who makes it; it's about the most bang for the buck, and 2-5% either way in any category is irrelevant. Disclaimer: not a pro gamer.

Agreed boss.

Unless you are a shareholder in said brand, why be a cheerleader?

Then take it to the level where you are fighting with random people online over their choice of CPU, as if the cost were coming out of your own pocket.

The logic escapes me.

For gaming there has been little reason to upgrade in over a decade; even my old 920/960/980X systems are plenty for 1080p/1440p gaming; heck, even my i5 750 can still handle modern games at 1080p with a 1050 Ti.

This will depend on the type of games you play and their age, as well as the monitor resolution you are pushing.

I went from an i7-970 to my current Ryzen 3800X system, and yes, the majority of my games were playable on the older system with my current GPU. However, there have been some games where I've seen an improvement and been able to crank a few more settings up because of the much faster CPU. After all, there's about a 10-year difference between the systems.

The primary reason for the upgrade was the other platform features.

X58 only has SATA 2, has no NVMe support, and uses a lot more power. The CPU doesn't have all the modern instruction sets, and the platform is limited to DDR3 and PCIe 2.0.
 
It's a very small subset of a subset of gamers who spend $1200 on a graphics card to run at 1080p at high refresh rates, because those are the only conditions under which the Intel 10900K will make a difference.
That's not true at all.
The Intel chips are better gamers across the board, from 720p to 1080p to 1440p, at various refresh rates and settings.

I also agree that for many, the small gaming difference is not going to justify the noticeable price premium (even if they don't care about productivity). And then there is the upgrade path, which is a different matter entirely...
Some of those benchmarks show the Intel chips 15-30 FPS faster.
That's a massive difference.
Also, some of Intel's basic i7s and older i7s are roasting Ryzen for gaming. If you're only wanting to game, your best price-to-performance ratio is basically Intel across the board, except for the Ryzen 3600.
 
That's not true at all.
The Intel chips are better gamers across the board, from 720p to 1080p to 1440p, at various refresh rates and settings.


Some of those benchmarks show the Intel chips 15-30 FPS faster.
That's a massive difference.
Also, some of Intel's basic i7s and older i7s are roasting Ryzen for gaming. If you're only wanting to game, your best price-to-performance ratio is basically Intel across the board, except for the Ryzen 3600.

I will have to disagree with you at 1440p and greater since you are GPU bottlenecked.

15-30 fps (which is more like 10-20 :p) doesn't matter when you are averaging 105 fps with lows in the 80s. Playable is playable; the only people who can make this argument and have a leg to stand on are those with very high refresh rate monitors, which is a small percentage of the market.

The differences you point out make no difference in real-world play.

You are going to have to detail this older gen i7 "roasting" current gen Ryzen in games, with some proof, and if the examples are just both processors averaging triple-digit fps, you will have to do better.

And I will even start it off for you, based on TechSpot's numbers.

BFV
1080p

10900K
Avg 168 / low 135
3900X
Avg 156 / low 114

Difference in avg 12 / low 21.
Playable difference: zero; noticeable only for someone on a 144Hz high refresh rate monitor.

1440p

10900K
Avg 137 / low 117
3900X
Avg 130 / low 107

Difference in avg 7 / low 10. Playable difference: zero; both too low for a 144Hz high refresh rate monitor.

And going through the rest of the games will provide similar results. I'm trying to keep my post somewhat short and I'm obviously not going to do the homework for you. But I think you're ignoring the truth to stick to your point.
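For the record, here's the same arithmetic in a few lines of Python, just working out the relative gaps from the Battlefield V numbers quoted above (TechSpot's figures as posted; nothing new added):

```python
# Relative gaps for the quoted Battlefield V results: 10900K vs 3900X,
# (average, 1% low) FPS at each resolution. Pure percentage arithmetic.

results = {
    "1080p": {"10900K": (168, 135), "3900X": (156, 114)},
    "1440p": {"10900K": (137, 117), "3900X": (130, 107)},
}

for res, chips in results.items():
    i_avg, i_low = chips["10900K"]
    a_avg, a_low = chips["3900X"]
    avg_gap = 100 * (i_avg - a_avg) / a_avg
    low_gap = 100 * (i_low - a_low) / a_low
    print(f"{res}: avg +{i_avg - a_avg} fps ({avg_gap:.1f}%), "
          f"1% low +{i_low - a_low} fps ({low_gap:.1f}%)")
```

That works out to roughly 5-8% on the averages and 9-18% on the 1% lows, which is the "playable difference: zero" point in percentage form.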
 
But I think you're ignoring the truth to stick to your point.
7-15 FPS at 1440p in certain titles is a massive difference, especially if your LCD is locked in at 120/144Hz.
The data is inarguable.
And more importantly, at those resolutions, 1440p and higher, the GPU is doing the work, covering Ryzen's shortcomings.
I think Ryzen is right there, but 15-30 FPS is getting spanked, and 7-15 FPS is getting beaten, pretty decisively.

 
Steve, the title of this article is sooo weak. Better change it to:
i9-10900K: can it at least be on par with the Ryzen 9 3900X?
(Spoiler: it can NOT.)
 