Intel Core i5-9400F vs. AMD Ryzen 5 2600X

You can't stream on a Pentium G unless maybe you use GPU encoding, which looks ugly as hell compared to even low settings on CPU streaming. Your game may be fine with little FPS drop in-game, but the stream will be a slideshow.
We were talking about watching a stream in the background. I would never endorse DOING streaming on the Pentium G :yum But like I said, people DOING streaming are in the minority. And for THEM I'd recommend the Ryzen hands down.

I did a quick check on the prices for full PC builds and I ended up saving about 25 euros with the Intel system (the 9400F CPU is 15 euros cheaper than the 2600X), but that's going with the lowest-end mobos and the cheapest 2666MHz RAM I could find. I don't know if performance will be as good as what's seen here, and as you've pointed out, 1-2% matters to gamers.
I would not go that route. I would go with a Z390 UD and 3000MHz RAM. When all is said and done, the 9400F usually works out to about $10 cheaper if you're getting quality boards for both sides. But that's $10 cheaper for equal 1% lows and better average FPS, even if it is by a small amount. I still don't think there are many cases where multithreading will be of use outside of Windows updates for your average gaming user.

Even at that, bear in mind that the real-world gain from multithreading does not exceed 30% at the MOST. It's still a 6-core part, and those extra threads just let the cores do work during the "waiting" time when one thread stalls between clock cycles.
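For anyone who wants to sanity-check that ~30% ceiling themselves, here's a minimal sketch (my own illustration, not from any review): time a CPU-bound job queue with one worker per physical core, then one per logical thread. The 6/12 worker counts assume a 6C/12T chip like the 2600X; adjust for your CPU.

```python
# Minimal SMT-uplift sketch: the gap between the two runs approximates
# what the extra threads actually buy you on a CPU-bound workload.
import multiprocessing as mp
import time

def burn(n):
    # Integer-heavy busy loop; keeps a core's ALUs occupied.
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

def run(workers, jobs=24, size=2_000_000):
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(burn, [size] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    t6 = run(6)    # one worker per physical core (assumed 6C/12T CPU)
    t12 = run(12)  # one worker per logical thread
    print(f"6 workers: {t6:.2f}s, 12 workers: {t12:.2f}s")
    print(f"uplift from the extra threads: {t6 / t12 - 1:.0%}")
```

Pure Python won't reproduce review numbers exactly, but the shape of the result (well under 2x from doubling the workers) is the point.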
 
Finally someone who can make a solid argument :D

My "bias" aside, you will never know when you'll use the extra threads and it's not wrong to want them to be there especially when you don't really gain anything from forgoing them.

Common sense dictates that someone buying a 9400F won't buy a GTX 1080 Ti or better GPU and game at 1080p. And if they say that they'll upgrade later, then it makes even less sense to go for the Intel build (unless maybe they also got a good Z mobo to be able to pair it with a 9700K/9900K a year or two down the line? O_o).
 
I tend to agree with this point.

Until someone we can trust really shows the numbers, I'm going to treat that kind of discussion as speculation. Yeah, just like that: playing a game vs. the same thing with a light background task, which people actually do, like having a couple of browser tabs open, or foobar/VLC playing your favorite tracks in the background.

But they only give us the pure gaming or, at worst, the "gaming + streaming" scenario. How many of us stream games, and how often? We surely need to talk about what lies between these scenarios before rushing to praise moar SMT/HT threads.

PS Software updates and AV scanning are not light tasks, I'm afraid. But the problem with these cases is that they are hard to test, I believe. Tests are reliable and trusted when they are at least repeatable.

PPS It seems 6 cores is enough for today. Anyone doubt it? OK, let's ask TS to show us some numbers! That's what we readers need to see here -- relevant content for thoughtful discussion. Not just misleading diagrams and AMD praise, but if that floats the boat... who cares.
 
The data in the 18-game result shows the 1% lows being an exact match, so I'm not sure why people are saying the 2600X is more consistent. Consistency is literally the same on average, and the average FPS is higher on the 9400F.
As you can probably read (or watch in the video), when using 2666MHz RAM the 9400F had noticeable issues which were fixed by using the faster RAM. This kinda puts a damper on the argument that it's cheaper, since you need a Z-series mobo to run the RAM at those speeds.
Consistency is an important aspect of a platform, and AMD's B-series boards are better suited to provide it.
You can blame Intel for reserving some of the "premium" features for the high-end chipset. The ability to OC the CPU and RAM is not the only restriction on their mid-range chipsets, unfortunately.
 
As I've already stated, you should be using the Gigabyte Z390 UD anyway. It's only $15 more, it allows higher RAM speeds, and it provides a real upgrade path to the 9700K and the 9900K, if not also Intel's next chips (we'll see there). The 2666MHz result should be compared to the stock 2600X result (which again had the SAME EXACT 1% low number) and the 3400MHz result should be compared to the OCed result, where they are, again, the same.
 
Don't know what the huge fuss is about, when it's pretty clear Intel is uncompetitive below the $200 mark after the 2600 non-X/X release, and this is coming from an 8700K owner.

The i3 8100 is competitive with Ryzen 3/Ryzen 5 up to the 2600.
The 9400F is competitive with the 2600 (beating it by a wider margin than it beats the 2600X, for a few $ more), competitive with the 2600X for less as seen above, and even competitive with the 2700.
The 9600K is competitive with the 2700X for less.
And then the 9700K/9900K is unrivaled, unfortunately.
 
As I've already stated, you should be using the Gigabyte Z390 UD anyway. It's only $15 more, it allows higher RAM speeds, and it provides a real upgrade path to the 9700K and the 9900K, if not also Intel's next chips (we'll see there). The 2666MHz result should be compared to the stock 2600X result (which again had the SAME EXACT 1% low number) and the 3400MHz result should be compared to the OCed result, where they are, again, the same.
unfortunately that mobo isn't that cheap where I live ;(
 
So basically:

Buy a cheap B360 board + cheap 2666MHz RAM + i5 9400F. No need for overclocks or BIOS tweaks apart from activating XMP. Low power consumption. No need for a great cooler or PSU.

OR

Buy a decent B450 board with decent VRMs + expensive Samsung B-die 3200/3400 RAM (200€ in Europe) + 2600X. Need to overclock it in the BIOS and tweak settings to find optimal performance. Need a decent cooler for 4.2GHz on most chips, and it will use more power.

In the end you get the same performance as Intel, slightly better on some games, slightly worse on others.

I would stick to Intel for budget builds for GAMING. Simple. Great performance out of the box for a good price.

Intel CPUs are more power hungry; they also have security concerns like Meltdown, Spoiler, and Spectre; the socket does not have longevity; plus you are sacrificing over 50% of multithreaded performance in non-gaming scenarios. So if you decide to game and stream, you will be out of luck with the Intel solution. I can't justify the Intel platform.
 
Intel CPUs are more power hungry
What?? The 9400F is a 65W TDP chip outperforming a 95W 2600X. The 2600X draws more power and runs hotter, while the 9400F pretty much stays within its TDP.

they also have security concerns like Meltdown, Spoiler and Spectre
Ryzen is susceptible to Spectre, which is arguably the bigger security concern of the three. Meltdown is mostly patched.

the socket does not have longevity
Says who? First of all, Z390 might support the next Intel chips; we don't know yet. There are rumors, but of course take them with a grain of salt. Setting that aside, you still have an upgrade path to the 9700K or 9900K, both of which will retain excellent performance that will satisfy most people for years to come.

plus you are sacrificing over 50% of multithreaded performance in non-gaming scenarios. So if you decide to game and stream, you will be out of luck with the Intel solution.
SMT/HT only provides a real-world gain of 30% at the MOST, and really only in productivity workloads. Your average user won't ever tap into that extra performance. Most people aren't going to stream. Everyone brings up streaming as if it's something 90% of the population is doing. It's not. It's more like maybe 5%, at the most. In reality someone buying a gaming rig will have better performance in everything they do on the 9400F as opposed to the 2600X, right up until they are ready for a new CPU upgrade.

Since when aren't we recommending the better-performing part at a lower price, with lower power draw and less heat, for the given workload?

If a streamer was building a machine, I'd point them to Ryzen. If a 3D modeler wanted a machine I'd point them to Ryzen. If a gamer wants a machine I'd point them to Intel.

For a mix of the above, Ryzen as well most likely.

There are also productivity loads that favor Intel: basically Adobe. So if you're doing gaming and Photoshop/After Effects/Premiere Pro, you'll be doing better on the 9400F vs the 2600X as well.

So as usual it depends what you are doing.
 
Don't know what the huge fuss is about, when it's pretty clear Intel is uncompetitive below the $200 mark after the 2600 non-X/X release, and this is coming from an 8700K owner.

The i3 8100 is competitive with Ryzen 3/Ryzen 5 up to the 2600.
The 9400F is competitive with the 2600 (beating it by a wider margin than it beats the 2600X, for a few $ more), competitive with the 2600X for less as seen above, and even competitive with the 2700.
The 9600K is competitive with the 2700X for less.
And then the 9700K/9900K is unrivaled, unfortunately.

Nobody really cares about new CPUs cheaper than the 2600 non-X for any serious modern-day gaming, because the severe downgrade in thread count doesn't justify the minimal cost savings. Cash-strapped people are better off buying used.

And the fact you have to mention $200+ CPUs here only proves just how much of a troll you are.
 

No one is trolling... The data is the data. I was showing every price point. These claims are backed up by numerous review sites including Tom's, AnandTech, TechSpot, TechPowerUp, GamersNexus, etc. There are still plenty of people buying CPUs like the i3 8100 or the Ryzen 3 2200g or the Ryzen 5 2400g. The point of the above was that at every price point there is competition, period. Regardless of which one you think wins it's still competition. The very fact that it's debatable, such as what we are doing right now, proves that fact.

Again, people aren't understanding how threads work. People just keep saying "BUT you lose the threads!!" Yes, you lose threads. Firstly, it's still a 6-core chip. The extra threads provide UP TO an extra 30% performance in real-world scenarios, but nowhere near that in gaming (those gains are for productivity only, such as 3D modeling and rendering). In gaming, the extra threads are SHOWN to not help much (or rather, to not give the 2600X an advantage over the 9400F, to be more accurate), as seen in Assassin's Creed Origins and Far Cry 5. AC:O makes use of 12 threads and yet the 9400F still beats the 2600X with a 6-thread deficit (source: TechPowerUp). The same goes for Far Cry 5: it uses 8 threads, and yet the 2600X is beaten by the 9400F with a 2-thread deficit (source: this very article; New Dawn is the same engine as the base game). That means that even in future games making use of more threads, the 9400F will still either be tied or ahead for less money.
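If anyone wants to verify thread counts like that themselves rather than take a review's word for it, here's a rough sketch using psutil; the executable name is just a placeholder, and note it counts every OS thread, including mostly idle helper threads, so it's an upper bound on threads doing real work.

```python
# Rough thread-count check for a running game, using psutil
# (pip install psutil). Counts ALL OS threads, including idle helpers,
# so treat the number as an upper bound.
import psutil

GAME_EXE = "ACOrigins.exe"  # placeholder; substitute your game's executable

for proc in psutil.process_iter(["name", "num_threads"]):
    if proc.info["name"] == GAME_EXE:
        print(f"{GAME_EXE}: {proc.info['num_threads']} threads")
```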

Outside of gaming, those extra threads won't do anything for an average user. Having a browser open while gaming, watching a video, checking email, etc. aren't cases in which multithreading helps by any significant margin. Maybe you'd see an improvement in Windows Update install times. Outside of that, where do you propose anyone would see an improvement?

Streaming, yes. Streamers should absolutely get a 2700x. Your average gamer? NO.

I likewise would not recommend an 8700 over a 9400F. The price difference is too high (at least here in the US) with too little of a performance difference to justify it. They are functionally very close.
 
Your trolling attempts are weak. AMD is not affected by Spectre to the same degree as Intel, and does not lose performance when it's patched, because AMD CPUs don't allow non-faulting pages to be shared.
 
The data is the data.

"It is what it is" way isn't showing anything else than that we are not wise enough, mate. Sometimes I feel like we are dumb as f, when we are arguing against what is obviously ridiculously incorrectly represented.

Just imagine a proper 9400F review. It is what it is, exactly: nothing new, nothing to hype except the price. Except it's a bit late, because the competitor's next gen is just around the corner.

And here comes the troll (neither you nor your buddy is the real one). His ingenuity is unrivaled; he can produce a mass fuss (strikethrough) masterpiece from nothing, should he tell the story other than the way it is.
 
Your trolling attempts are weak. AMD is not affected by Spectre to the same degree as Intel, and does not lose performance when it's patched, because AMD CPUs don't allow non-faulting pages to be shared.

Again, if you mean me, I'm not trolling. These are my true beliefs based on the data I am seeing. The word "troll" is being constantly misused. It does not mean "Someone with a varying opinion that doesn't match yours."

As for Spectre, sure AMD might be affected to a lesser extent, but it is still affected. People were implying that AMD had no vulnerabilities, which is just not true. Anyway, you are implying that Intel is losing performance when patched. In gaming that is not true, as seen here:

https://www.tomshardware.com/reviews/gaming-performance-meltdown-spectre-intel-amd,5457.html

They did extensive testing with both the OS and microcode update. As you can see there is basically no performance loss in gaming. There probably is in productivity, but in the spirit of the article we're commenting on, we're talking mostly about gaming and gaming rigs, and people using those rigs for that purpose, not for users doing gaming AND something like 3D modeling (in that case I'd recommend Ryzen).
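As an aside, you don't have to take anyone's word on patch status either. On Linux the kernel reports per-vulnerability mitigation state under sysfs, which this little sketch just prints; on Windows the rough equivalent is Microsoft's Get-SpeculationControlSettings PowerShell module.

```python
# Print the kernel's own mitigation report (Linux 4.15+): one file per
# known CPU vulnerability, e.g. meltdown, spectre_v1, spectre_v2.
from pathlib import Path

for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
    print(f"{f.name:20} {f.read_text().strip()}")
```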
 
The "it is what it is" approach shows nothing except that we are not wise enough, mate. Sometimes I feel like we are dumb as f, when we are arguing against something that is obviously, ridiculously misrepresented.

Just imagine a proper 9400F review. It is what it is, exactly: nothing new, nothing to hype except the price. Except it's a bit late, because the competitor's next gen is just around the corner.

And here comes the troll (neither you nor your buddy is the real one). His ingenuity is unrivaled; he can produce a mass fuss (strikethrough) masterpiece from nothing, should he tell the story other than the way it is.

Yes, Ryzen 3000 is right around the corner, and Intel will release their response a few weeks later, just like usual. So what exactly does that point mean? You're pointing to the fact that the 9400F is nothing new. That's correct, but does that matter? All most people care about at the end of the day is how their performance will look, be it on some new architecture or something old. If Intel's response is yet another refresh but it performs on par with the new Ryzens, who cares? As long as the numbers are bigger, it largely doesn't matter.
 
Again, if you mean me, I'm not trolling. These are my true beliefs based on the data I am seeing. The word "troll" is being constantly misused. It does not mean "Someone with a varying opinion that doesn't match yours."

As for Spectre, sure AMD might be affected to a lesser extent, but it is still affected. People were implying that AMD had no vulnerabilities, which is just not true. Anyway, you are implying that Intel is losing performance when patched. In gaming that is not true, as seen here:

https://www.tomshardware.com/reviews/gaming-performance-meltdown-spectre-intel-amd,5457.html

They did extensive testing with both the OS and microcode update. As you can see there is basically no performance loss in gaming. There probably is in productivity, but in the spirit of the article we're commenting on, we're talking mostly about gaming and gaming rigs, and people using those rigs for that purpose, not for users doing gaming AND something like 3D modeling (in that case I'd recommend Ryzen).

The 30% gain is on Intel's implementation of SMT, because they use dynamic distribution of resources across threads, which is known to force threads to fight for resources and causes cache thrashing and bandwidth issues. AMD made some of the logic statically partitioned to avoid dirty caches and thread collisions; that is why the 2600X is around 50% faster with SMT compared to the 8400, which happens to be clocked higher but lacks SMT. So it still does not make sense to lose a lot of multithreaded performance for less than 10% more gaming performance.
 
Do you have any data sources that actually show a 50% gain in a real world application? I'd be curious to see that. I did some googling but can't seem to find anything myself. If so that is interesting indeed.

However, the point does still stand that the extra performance gain wouldn't be seen by a typical user playing games, browsing the web, checking email, and watching videos (even if they're doing some or all of the above at once). Those gains would only be seen in something truly productivity related like streaming, modeling, rendering, folding, constant file compression, etc.

There's also still the fact that the 9400F won even in a game making use of 12 threads.
 
This article shows the superior SMT scaling compared to Intel

hardwarecanucks.com/forum/hardware-canucks-reviews/74880-amd-ryzen-7-1700x-review-testing-smt-16.html

Other findings like this


Intel only has higher IPC because of specific optimizations, mostly related to memory access. Ryzen can extract higher ILP when the data is already available or when more instructions are in flight.

Intel's unified scheduler allows somewhat more flexible pipeline issuing for when one pipeline is flushed. AMD uses that hole in the schedulers to sort jobs with a few cycles delay, whereas Intel's design allows cascade scheduling with less delay.

The net result is that Skylake has a whopping 5-7% higher IPC. Skylake-X suffers in certain workloads, such as gaming, as Intel has nearly reached the limits of their basic execution resources. How they address that remains to be seen.

Ryzen, meanwhile, has a relatively easy path to a 15% IPC increase, though this will diminish SMT scaling, as it has with Intel's design.

I thought up a little project after Ian Cutress of AnandTech mentioned seeing an 80% speedup from SMT. This got my attention, as my own experience with Intel CPUs up to that point had only shown up to 50% improvements. The software was something he wrote himself in the past, and I got a copy to try out. I've only had limited time to run it, but on two different Intel systems I got under 50% speedup with HT, and on a 1st-gen Ryzen system it was 70-ish%. I kinda parked it at that point since it was during the heatwave where I am, and benching isn't fun under those conditions.

This forum has a very interesting chat regarding why.

linustechtips.com/main/topic/946917-intel-ht-vs-amd-smt-scaling/

All of this is in non-gaming scenarios. In gaming scenarios, even non-HT Intel CPUs can be faster than their HT enabled counterparts...
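An experiment like the one described above can be approximated with core pinning. A sketch under stated assumptions: Linux only (os.sched_setaffinity), and a 6C/12T CPU where CPUs 0-5 are distinct physical cores; verify your actual layout in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list.

```python
# SMT/HT scaling test via core pinning (Linux only): run the same job
# queue confined to one logical CPU per core, then across all logical
# CPUs, and compare throughput.
import multiprocessing as mp
import os
import time

def worker(args):
    cpus, n = args
    os.sched_setaffinity(0, cpus)  # confine this worker to the CPU set
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

def timed(cpus, jobs=24, size=2_000_000):
    start = time.perf_counter()
    with mp.Pool(len(cpus)) as pool:
        pool.map(worker, [(cpus, size)] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    t_cores = timed({0, 1, 2, 3, 4, 5})  # one logical CPU per core (assumed layout)
    t_all = timed(set(range(12)))        # both SMT siblings of every core
    print(f"SMT/HT speedup: {t_cores / t_all - 1:.0%}")
```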
 
VERY interesting reads. I usually recommend Ryzen for most people doing productivity (outside of Adobe), so this is yet another reason to add to the list to get it.

Your last two sentences were basically what I was trying to get at this whole time. In gaming loads we have a win here, and most gamers aren't doing productivity :)
 
If you drop that RAM to 3000, which most people will buy as it costs 100-120€, you will automatically lose 10% to 20% performance, especially on the 1% lows.

Dropping from 3200 MHz RAM to 3000 MHz RAM does not cause a 10 to 20 % decrease in performance in most tasks. Ryzen definitely benefits from faster memory, but 3000 MHz RAM is plenty fast, and Anandtech even identified it as the sweet spot for Ryzen builds: https://www.anandtech.com/show/11857/memory-scaling-on-ryzen-7-with-team-groups-night-hawk-rgb/7
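To put the frequency gap itself in numbers, here's a back-of-the-envelope peak-bandwidth calculation for dual-channel DDR4 (theoretical peaks only; real-world gaming deltas also depend heavily on latency and subtimings):

```python
# Theoretical peak bandwidth for dual-channel DDR4: each 64-bit channel
# moves 8 bytes per transfer. 3000 -> 3200 is only a ~6% raw difference.
CHANNELS = 2
BYTES_PER_TRANSFER = 8

for mts in (2666, 3000, 3200, 3400):
    gbs = mts * 1e6 * BYTES_PER_TRANSFER * CHANNELS / 1e9
    print(f"DDR4-{mts}: {gbs:5.1f} GB/s peak")  # 3200 -> 51.2, 3000 -> 48.0
```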
 
So basically:

Buy a cheap B360 board + cheap 2666MHz RAM + i5 9400F. No need for overclocks or BIOS tweaks apart from activating XMP. Low power consumption. No need for a great cooler or PSU.

OR

Buy a decent B450 board with decent VRMs + expensive Samsung B-die 3200/3400 RAM (200€ in Europe) + 2600X. Need to overclock it in the BIOS and tweak settings to find optimal performance. Need a decent cooler for 4.2GHz on most chips, and it will use more power.

In the end you get the same performance as Intel, slightly better on some games, slightly worse on others.

I would stick to Intel for budget builds for GAMING. Simple. Great performance out of the box for a good price.
You copy-pasted the comment from YouTube, where it got destroyed because of its awful arguments that break down once you actually look at the numbers and even the prices (and yes, I live in the EU and I know the prices here).
Out of the box the 2600X is just better, hands down. Nobody in their right mind will sacrifice double the threads and a more future-proof platform just to get 4 extra FPS on average and 0 extra FPS at the 1% lows in games. You'll also need the most expensive GPU on the market to even see those 4 extra FPS.

FYI you can easily find 3200MHz RAM for 110-115 euros in most of Europe. Tighten the timings on those, or OC them, and you'll get within 1-2% of the 3400MHz RAM used here. You don't need to OC the 2600X.

On a side note, unless they have a crappy system, all gamers generally have software running in the background, which will benefit the 2600X more with its six extra threads. I personally never close Chrome, and I even game while listening to YouTube or Twitch.

Next time please don't "copy-pasta" and actually write something worthwhile.


Very few people overclock their CPU. Even fewer will know that 3400MHz performs better in [specific] games than 2933MHz. Fewer still will overclock their memory beyond using XMP. I've talked to people, and read forum posts from people, with Intel K chips that were never overclocked!

Techies know this stuff, but don't assume every system builder is a techie. As for playing YouTube videos in the background while gaming, I can do that with a 2500K...
 
If you drop that RAM to 3000, which most people will buy as it costs 100-120€, you will automatically lose 10% to 20% performance, especially on the 1% lows.

Dropping from 3200 MHz RAM to 3000 MHz RAM does not cause a 10 to 20 % decrease in performance in most tasks. Ryzen definitely benefits from faster memory, but 3000 MHz RAM is plenty fast, and Anandtech even identified it as the sweet spot for Ryzen builds: https://www.anandtech.com/show/11857/memory-scaling-on-ryzen-7-with-team-groups-night-hawk-rgb/7

Why do you speak of 3200 when there's 3400 CL15 with tighter sub-timings in the article? Regular 3000 is nowhere near that OC'd 3400.

Look here for Steve's direct comparison between 2666 and 3200 on the R7 1700:
https://static.techspot.com/articles-info/1457/bench/Civ.png

It's probably the worst-case scenario for Ryzen, but it correlates to a 10-20% drop.
 
Very few people overclock their CPU. Even fewer will know that 3400MHz performs better in [specific] games than 2933MHz. Fewer still will overclock their memory beyond using XMP. I've talked to people, and read forum posts from people, with Intel K chips that were never overclocked!

I personally know people who don't OC their CPUs despite having bought unlocked K-parts. But that doesn't imply there's no reason to buy them at all. I'm sure you know that K-parts usually have higher clocks and boost higher, so they are simply the best CPUs one can get in terms of performance. Many people buy the best and don't upgrade at all, like they do with flagship phones.
 
I never said there was NO reason. I said manual memory overclocking is even rarer than manually overclocking a CPU. Know what's even rarer? Undervolting.
 