Intel Core i9-14900K, i7-14700K and i5-14600K Review

And what about short workloads? Those don't count when it comes to power usage?

And the gaming workloads are 100% long-term workloads. The system hit ~650 W in Hitman 3, for crying out loud. That's almost 160 W more than the 7950X and 200 W more than the 7950X3D. I don't think you understand just how huge that is.

Repeat after me: it used 200 more watts in a game.
It would be more fruitful if we (well, mainly you) stopped jumping around points.

I specifically said that measuring efficiency on workloads that are going to run for hours in professional use cases is absolutely useless at 500 W with no power limits, because nobody who cares about efficiency runs like that. How does that have anything to do with gaming?

We can talk about gaming if you want, but let's first address the above point.
 
I'm talking about gaming to show that it is very inefficient even there, because you are literally dismissing a whole part of the market.

Yes, there are people who will run the CPU at full throttle for long periods of time. I've done it for renders where adding the CPU to the GPU cuts render times. It's also needed for renders that can't be done on a GPU at all (yes, you may not know it, but such workloads exist).
 
Again, I'm not saying that there aren't people who will run Blender and the like for many hours at a time. I'm saying these people either don't care about efficiency OR they are running these workloads with power limits in place. In both cases, a Blender efficiency graph with CPUs running at 400 W is useless.

And I know because I have multiple friends who do these kinds of workloads, with both 7950Xs and 13900Ks. All of them run these workloads with power limits in place. Even the systems sold by Puget for these specific workloads are configured at much lower wattages.

Comparing efficiency at different power limits is fundamentally flawed, and you can tell if you spend 10 seconds looking at the graphs. Check the 7950X vs the 7950X3D: the former uses 66% more power for 3% more performance. That's not because the 7950X silicon is less efficient; it's because it has a higher power limit.
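To put numbers on that point, here is a quick back-of-the-envelope sketch. Only the "66% more power for 3% more performance" relationship comes from the graphs being discussed; the absolute score and wattage below are made-up placeholders.

```python
# Rough illustration of the 7950X vs 7950X3D comparison above.
# Only the relative "66% more power for 3% more performance" figures come
# from the thread; the absolute numbers are placeholders.

def perf_per_watt(score: float, watts: float) -> float:
    """Simple efficiency metric: benchmark score per watt of package power."""
    return score / watts

x3d_score, x3d_watts = 100.0, 140.0      # 7950X3D: tighter stock power limit
x_score, x_watts = 103.0, 140.0 * 1.66   # 7950X: +3% performance, +66% power

print(f"7950X3D: {perf_per_watt(x3d_score, x3d_watts):.3f} points/W")
print(f"7950X:   {perf_per_watt(x_score, x_watts):.3f} points/W")

# The 7950X comes out roughly 38% worse in points per watt here, yet the
# silicon is essentially the same; the gap is the stock power limit, which is
# the argument for comparing chips at equal power limits.
```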
 
The main reason OEMs do this is to skimp on components and improve their margins. It allows them to use cheaper motherboards and power supplies.

The main reason a customer would set limits is improved stability. Not everybody has good airflow at home, and things can get really HOT during summer.

But I seriously doubt that your friends even know how to set these limits, let alone that the limits exist. And I doubt even more seriously that you talked to them specifically for this thread.
 
I haven't talked to them specifically for this thread. And yes, they know how to set power limits, tune RAM, etc. How is that relevant?

Anyways, I already gave you an example of why testing with different power limits is flawed: 7950X vs 7950X3D paints a clear picture.
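For context, applying such a limit doesn't have to happen in the BIOS. On Linux, for example, the RAPL powercap interface exposes the package limits as writable sysfs files. A minimal sketch follows; root is required, and the path and constraint indices are assumptions that can differ between systems, so verify them first.

```python
# Minimal sketch: cap the CPU package power via the Linux RAPL powercap
# interface. Needs root; the sysfs path and constraint indices below are
# assumptions that can vary between systems.
from pathlib import Path

PACKAGE = Path("/sys/class/powercap/intel-rapl:0")   # package-0 domain (assumed)

def set_power_limit(watts: int, constraint: int) -> None:
    """Write a package power limit in microwatts (constraint 0 is usually PL1, 1 is PL2)."""
    (PACKAGE / f"constraint_{constraint}_power_limit_uw").write_text(str(watts * 1_000_000))

if __name__ == "__main__":
    # Example values only: clamp both the long-term and short-term limits to 125 W.
    set_power_limit(125, constraint=0)
    set_power_limit(125, constraint=1)
    print((PACKAGE / "constraint_0_power_limit_uw").read_text().strip(), "uW now set as PL1")
```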
 
Running Blender and Cinebench on a loop with no power limits is something nobody does; very informative data there.
Most people who run Cinema 4D or Blender do it with default settings.

- Work computers almost always run on default settings
- So do OEM machines that do not offer a huge amount of BIOS tweaks
- The majority of users have not even heard of "power limits" and such

That is the problem here. Most of these hot CPUs run at default settings. While an advanced user like you can tweak power limits to get an efficient CPU, the majority of people just don't. Because the 14900K is a very inefficient CPU at default settings, it is a very inefficient CPU for the majority of users. Nobody forced Intel to factory-overclock it into inefficiency, but hey, they wanted to match AMD's performance with a worse process and architecture, and the power consumption is a direct result of that.
 
Not really, most OEM PCs do not actually run at default settings.
 

When a CPU has a configurable TDP, there is not necessarily just one default TDP. In this case, though, I consider the 14900K to have only one default power limit.

On the other hand, OEMs may limit the TDP to allow lower-cost cooling and PSUs. But then they risk lawsuits over the lower performance.

But yeah, you might be right on that one. Statistics would be nice to see, but they are probably impossible to obtain.
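Fleet-wide statistics are indeed hard to come by, but checking what limits one particular machine actually runs with is straightforward. Here is a small sketch under the assumption of a Linux box exposing the RAPL powercap interface; paths, available domains and permissions vary.

```python
# Sketch: print the power limits a machine is actually configured with,
# via the Linux RAPL powercap interface (availability and paths assumed).
from pathlib import Path

for domain in sorted(Path("/sys/class/powercap").glob("intel-rapl:*")):
    name = (domain / "name").read_text().strip()
    print(f"{name} ({domain.name}):")
    for limit in sorted(domain.glob("constraint_*_power_limit_uw")):
        idx = limit.name.split("_")[1]
        cname = (domain / f"constraint_{idx}_name").read_text().strip()
        watts = int(limit.read_text()) / 1_000_000
        print(f"  {cname}: {watts:.0f} W")
```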
 
Have you tried undervolting and overclocking? I'm using the same Z790 board; I could only get a stable 5 GHz on all cores with an i7-12700K at 1.35 V, or a lot more voltage for 5.1 GHz.

With the i5-14600KF I can get 6 GHz with adaptive voltage capped at 1.27 V and a -0.05 V offset, with the ring at 4.6 and the E-cores at 4.6. That gives a Cinebench 2024 score of 27500 with a maximum of 160 W reported in HWiNFO64.

Seems to me the 14th gen, in my case, uses less power and provides a decent uplift. Intel has just underclocked and overvolted it.
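The direction of that result is at least plausible on paper: dynamic CPU power scales roughly with frequency times voltage squared, so shaving voltage at the same clocks pays off quadratically. A rough sketch of that rule of thumb, with purely illustrative baseline numbers rather than measurements from any specific chip:

```python
# Rule-of-thumb estimate: dynamic power scales ~ f * V^2, so an undervolt at
# the same clocks cuts power roughly with the square of the voltage ratio.
# Baseline numbers below are illustrative assumptions, not measurements.

def scaled_dynamic_power(base_power_w: float, f_ratio: float, v_ratio: float) -> float:
    """Scale a baseline power figure by frequency and voltage ratios (P ~ f * V^2)."""
    return base_power_w * f_ratio * v_ratio ** 2

base_w = 180.0                  # assumed stock package power under load
stock_v, tuned_v = 1.32, 1.27   # example stock vs undervolted core voltage

tuned_w = scaled_dynamic_power(base_w, f_ratio=1.0, v_ratio=tuned_v / stock_v)
print(f"Estimated load power after undervolt: {tuned_w:.0f} W "
      f"({100 * (1 - tuned_w / base_w):.1f}% lower)")

# Leakage and uncore power don't follow this scaling, so treat the result as
# a ballpark, not a prediction for any particular sample.
```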
 
The price of flagship CPUs these days is beyond belief. And their motherboards. For me, I really just game, so the CPU is just a host for a graphics card. Personally I'd pick a 12700KF, which I keep seeing drop below $200 on offer. You really don't need a $300+ CPU if you're a gamer. The X3D stuff seems better than this Intel stuff, but I'm not paying that much just to play games. I thought AMD was supposed to be the value option!

 
Yeah, but marketing will always try to sell you something you don't really need. Just the fact that my 4790K can run Starfield smoothly at an average of 45 fps at 1440p high proves your point. Sure, I removed all bloatware from my PC and don't have 20 tabs open while playing, but game engines don't really utilise more than those 2-4 main cores anyway. Starfield has supposedly been in development for several years, the devs knew it was primarily targeting an AMD CPU in the Xbox Series X and still failed to optimise it, and funnily enough it runs better on Intel.

If they added at least 2 E-cores to the 14100 and kept the same price, I think it would sell, because that's the bare minimum the majority needs. But they are overcharging for extra cores anyway, and the diminishing returns in performance per extra core are in most cases crazy.
If I do buy a new Intel CPU, it will be more for my hobby of comparing CPUs; my daily driver is a 2019 i9-10920X on X299 that I upgraded to from a 2017 i7-7800X. It is practically indistinguishable from an i7-10700KF running all 8 cores @ 5.139 GHz in any application or game, unless the application can utilize 4 more cores.
 
I just tested a stock 14900K (I only fixed the DC loadline so it reports accurate power draw) and tried Hogwarts at the lowest possible resolution with everything on ultra. CPU power draw was between 110 and 140 W. In your power draw chart you have the 14900K drawing 140 W MORE than the 7800X3D, which leads me to believe that the 7800X3D is generating power.

Something is very iffy
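One way to sanity-check a software readout like HWiNFO's (which depends on settings such as the DC loadline mentioned above) is to average the CPU's own energy counter over a window. A minimal sketch, assuming a Linux system exposing the RAPL powercap interface; reading the counter may need root, and the quick version below ignores counter wraparound.

```python
# Sketch: estimate average CPU package power by sampling the RAPL energy
# counter over a fixed window (Linux powercap interface assumed).
import time
from pathlib import Path

ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")  # package-0 (assumed)

def average_package_power(seconds: float = 5.0) -> float:
    """Average package power in watts over the given window."""
    start = int(ENERGY.read_text())
    time.sleep(seconds)
    end = int(ENERGY.read_text())
    return (end - start) / 1_000_000 / seconds   # microjoules -> watts

if __name__ == "__main__":
    # Run this while the game is loading the CPU to cross-check other readouts.
    print(f"Average package power: {average_package_power():.1f} W")
```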
 
More like 13.5th gen; it's the i7-7700K all over again :D

(Why is the 11900K bad at Spider-Man?)
If I had to take a wild guess, it's because Spider-Man was built around the cache-heavy Zen 2 chips in the consoles. Rocket Lake is built upon the Tiger Lake mobile cores, but with half the cache due to being backported to 14 nm. That halving of the cache really kneecaps it in certain scenarios where cache use is high.

11th gen really was a turd of a launch.
 