CPU Cache vs. Cores: AMD Ryzen Edition

There will probably be a few games that actually gain some performance from E-cores. The majority of games will probably ignore them unless Intel gives the E-cores more cache and/or better memory performance, which hardly makes any sense.

However, even considering a CPU with both E- and P-cores is simply a waste. The only reason this kind of anomaly exists is that Intel had to lower power consumption at any cost to be competitive against AMD. Too bad a hybrid CPU is not good for anything useful, not even for multithreading. Those who disagree are free to show an Intel server CPU with a hybrid architecture. If servers are not for multithreading, then what is? Well, at least the hybrid crap looks good on benchmarks if you ignore power consumption.

No, it goes like this: if a process is considered a background process by Intel Thread Director, its threads go to the E-cores. That makes absolutely no sense, of course.
As you mentioned, the purpose of it was to lower power consumption, and that's exactly what a hybrid CPU is useful for, especially on power-constrained platforms. They're also great for packing as many cores as possible into a limited space and thermal envelope, which helps in highly threaded but low-compute-intensity scenarios.

You mentioned servers, which don't really lend themselves well to the hybrid idea, but that doesn't affect how useful hybrid CPUs are in the laptop and mobile space (the desktop space is more of a mixed bag and depends on what it is being used for). If I read the press releases right, Intel is shipping entirely P-core and entirely E-core Xeons, not a hybrid architecture, because you would want one or the other based on what the server would be doing. Both make sense, depending on what you want to do with them.
 
As you mentioned, the purpose of it was to lower power consumption, and that's exactly what a hybrid CPU is useful for, especially on power-constrained platforms. They're also great for packing as many cores as possible into a limited space and thermal envelope, which helps in highly threaded but low-compute-intensity scenarios.

You mentioned servers, which don't really lend themselves well to the hybrid idea, but that doesn't affect how useful hybrid CPUs are in the laptop and mobile space (the desktop space is more of a mixed bag and depends on what it is being used for). If I read the press releases right, Intel is shipping entirely P-core and entirely E-core Xeons, not a hybrid architecture, because you would want one or the other based on what the server would be doing. Both make sense, depending on what you want to do with them.
No, I didn't say the hybrid core design is useful for lower power consumption, because Intel CPUs still have much higher power consumption than AMD chips. That is, there is basically nothing the hybrid core thing is good for. The only thing that comes to mind is putting in one ultra-low-power core for ultra-low-power purposes, which Intel is probably doing with Lunar Lake. But that's a low-compute scenario, not a highly threaded one, since there is probably just one such core.

Servers do not have a hybrid architecture because it's a total mess, and even Intel doesn't believe server customers would be OK with that.

Why does Intel's hybrid stuff just plain and simple suck? Real-world scenario: I have a CPU with (say) 6 totally unused P-cores and also (say) 8 E-cores. OK, I start compressing a large file. Of course I don't want to sit and watch the compression progress, so I open another program, which is now the "foreground" one and goes to the P-cores. What happens to the compression? It is a "background" process, so it goes to the E-cores regardless of whether P-cores are available or not. That makes absolutely zero sense and is alone a good enough reason to say every Intel hybrid CPU is crap for real-life multitasking.
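For what it's worth, the demotion can be overridden by hand. Below is a minimal sketch that pins an already-running process (say, the compressor) to the P-cores only, assuming Windows and the hypothetical 6P (with HT) + 8E chip above; the PID argument and the 0x0FFF affinity mask covering logical processors 0-11 are purely illustrative and would have to be looked up on a real system.

```c
/* Minimal sketch: restrict a running process (e.g. the compressor) to the
 * P-cores so the "background" demotion to E-cores no longer applies.
 * Assumes Windows and a hypothetical 6P (HT) + 8E layout where logical
 * processors 0-11 are the P-core threads; mask and PID are illustrative. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 0; /* target PID from the command line */
    DWORD_PTR pcore_mask = 0x0FFF;                      /* bits 0-11 = assumed P-core logical CPUs */

    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!h) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }
    if (!SetProcessAffinityMask(h, pcore_mask)) {       /* keep its threads on P-cores only */
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }
    CloseHandle(h);
    return 0;
}
```

Of course, having to do that by hand is exactly the complaint.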

Another reason why the hybrid architecture sucks is that there are two different microarchitectures present, and therefore two different codepaths are needed. Switching between two codepaths on the fly is slow and prone to problems. Yet another reason why Intel's hybrid architecture makes no sense at all.
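To be fair, the usual way this is handled is to pick a codepath once at startup rather than switch on the fly. A minimal sketch, assuming GCC/Clang builtins; compress_avx2() and compress_generic() are made-up placeholder names:

```c
/* Minimal sketch of one-time runtime dispatch between two codepaths.
 * compress_avx2() and compress_generic() are hypothetical placeholders.
 * Assumes GCC/Clang for __builtin_cpu_init/__builtin_cpu_supports. */
#include <stdio.h>

static void compress_generic(void) { puts("generic path"); }
static void compress_avx2(void)    { puts("AVX2 path"); }

typedef void (*compress_fn)(void);

int main(void)
{
    __builtin_cpu_init();                       /* populate CPU feature info */
    compress_fn compress = __builtin_cpu_supports("avx2") ? compress_avx2
                                                           : compress_generic;
    compress();                                 /* every later call uses the path chosen once, above */
    return 0;
}
```

The dispatch itself is cheap; the real trouble starts when the two core types don't expose the same instruction set, which is exactly why AVX-512 ended up disabled on Alder Lake.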

As for AMD's hybrid approach, at least they avoided the latter problem, because AMD's C-cores have exactly the same architecture as the "normal" cores.
 
Agreed, looking forward to a new article, something like "CPU threads vs. L3 cache": if Intel drops HT in its new Arrow Lake cores, the effect on gaming seems to be very... nontrivial. A deeper investigation is needed to cover this complicated topic: how many threads are required for gaming, 12 or maybe 8, HT (SMT) or not? And there are also P (LP) and E cores now, which complicates things even more... Are E-cores completely useless for gaming? (They seem to only help lower CPU power/thermals if the performance of the P-cores is not fully utilized.)
HT is actually harmful in gaming unless you run out of cores. E-cores boost performance a lot, so the best configuration for gaming on Intel is E-cores on, HT off. This boosts performance while dropping power consumption by a ton.
 
No, I didn't say the hybrid core design is useful for lower power consumption, because Intel CPUs still have much higher power consumption than AMD chips.
Your whole post is just pure nonsense and Intel hatred. What the heck are you talking about? First of all, Intel CPUs have the lower power consumption on the desktop. I have a 13900T running at 35 W; nothing AMD has comes anywhere near that thing in efficiency. AMD CPUs need to start using a hybrid approach, because even their low-core-count CPUs (like the 7600X / 7700X) draw as much as 135 W, while Intel's 24 cores are running at 35. AMD needs to seriously work on their power draw, at both full and light load, ASAP. Browsing the web on a 7950X, that thing needs 60 W just to scroll YouTube. My 13900T is pulling just 6 to 8. Jesus lord.
 
Why does Intel's hybrid stuff just plain and simple suck? Real-world scenario: I have a CPU with (say) 6 totally unused P-cores and also (say) 8 E-cores. OK, I start compressing a large file. Of course I don't want to sit and watch the compression progress, so I open another program, which is now the "foreground" one and goes to the P-cores. What happens to the compression? It is a "background" process, so it goes to the E-cores regardless of whether P-cores are available or not. That makes absolutely zero sense and is alone a good enough reason to say every Intel hybrid CPU is crap for real-life multitasking.
You have no idea what you are talking about. That's not how it works, lol.

Will you ever deal with facts? You are basically AMD's UserBenchmark on steroids. Just stop it already.
 
I'm not a gamer, but "tearing and blur" isn't due to a low frame rate, but rather to a mismatch between the update rates of your graphics card and monitor.

I won't speak for the population at large, but when I moved from 60 to 75 Hz I saw a noticeable improvement in smoothness... but 75 to 144 Hz was essentially imperceptible.
I can 100% relate to that. I used to play CS 1.6 in a clan waaay back in my student days. Nowadays I don't game that much anymore because of work and family. That being said, now I have money for my old hobby, so I tested out my KFA2 4090 at 4K 60 Hz, WQHD 75 Hz, WQHD 165 Hz and FHD 144 Hz. Turns out I can see the enhanced smoothness, but high refresh rates do nothing for my kill/death ratio. This is of course personal, but as an old-school gamer I prefer 60/75 Hz. Obviously this is very subjective, but the best feeling is gaming on an old 32" HP monitor at 75 Hz and 1440p. The 165 Hz monitor went to my wife, who uses it for home office. Call me crazy, I don't care :) I started competitive gaming with CRT monitors and QuakeWorld, so maybe that's an explanation...
 
No, I didn't say the hybrid core design is useful for lower power consumption, because Intel CPUs still have much higher power consumption than AMD chips.
First, I wasn't trying to put words in your mouth. I read this in your OP ("The only reason this kind of anomaly exists is that Intel had to lower power consumption at any cost to be competitive against AMD.") and interpreted it to mean that you were saying that the hybrid architecture was good for reducing power consumption.

Second, the scheduling issue seems like a bug that will get worked on over time and steadily improve. Smartphones have had hybrid CPUs for years. The use patterns are different, but I would expect desktops and laptops to be able to handle it effectively.

Third, while the different architectures can be a problem for some limited set of instructions that are not supported, heterogeneous computing in general is getting more and more popular. It isn't just GPUs, CPUs, and SSE/AVX on CPUs anymore, either. I suspect that in a few years we will have a lot more variation in the compute engines our devices have, if nothing else the NPU (which remains to be seen in terms of how useful it will be). Restricting the conversation back to P-cores and E-cores, if a binary contains instructions that can't be executed on an E-core, I'm pretty sure the schedulers won't put it on the E-core. I don't know what kind of overhead or bugs there are with that, but I have not heard of many incidents there. Perhaps it just isn't widely reported, or I am living under a rock.

I do agree with you that a hybrid CPU in the server space is a tougher sell, at least to all customers. I could see it being useful for specific workloads or types of workloads. But, in general, IT departments need to know the thermal and performance envelope of what they are getting, and they tend to have specific applications or use patterns in mind when they make big purchases, so they can tailor their purchase to a specific workload, lowering the need for hybrid.

You have no idea what you are talking about. That's not how it works, lol.
It is true that in the early days the thread scheduler had issues, and it sometimes still does depending on the application. Whether or not compression is the issue is beside the point. I can see why your username is Strawman.
 
It is true that in the early days the thread scheduler had issues, and it sometimes still does depending on the application. Whether or not compression is the issue is beside the point. I can see why your username is Strawman.
No, it is not true at all. What he is describing there is completely made up. You can in fact choose within Windows whether you want background or foreground tasks to take priority. It's a choice that you have BECAUSE of Thread Director; he is trying to present it as a con when, in fact, I wish I were given that choice with AMD CPUs as well.
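To put something concrete behind that: a process can also classify itself as efficiency ("background") work, or opt back out, through the Windows power-throttling hint that Thread Director honors. A minimal sketch, assuming Windows 10 1709+ and a recent SDK:

```c
/* Minimal sketch: mark the current process as efficiency ("EcoQoS") work,
 * which nudges the scheduler/Thread Director toward the E-cores.
 * Setting StateMask to 0 instead opts the process back out (high QoS).
 * Assumes Windows 10 1709+ and a recent Windows SDK. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    PROCESS_POWER_THROTTLING_STATE state = {0};
    state.Version     = PROCESS_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = PROCESS_POWER_THROTTLING_EXECUTION_SPEED;   /* opt in to EcoQoS */

    if (!SetProcessInformation(GetCurrentProcess(), ProcessPowerThrottling,
                               &state, sizeof(state))) {
        fprintf(stderr, "SetProcessInformation failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}
```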

Also, E-cores are terrible for reducing power draw; they are in fact worse than P-cores in efficiency. They are there for performance per die space.
 
No, it is not true at all. What he is describing there is completely made up. You can in fact choose within Windows whether you want background or foreground tasks to take priority. It's a choice that you have BECAUSE of Thread Director; he is trying to present it as a con when, in fact, I wish I were given that choice with AMD CPUs as well.

Also, E-cores are terrible for reducing power draw; they are in fact worse than P-cores in efficiency. They are there for performance per die space.
There are still cases where applications are putting in patches to deal with the scheduler, for example Cyberpunk 2077. https://www.tomshardware.com/pc-com...uld-be-great-if-it-didnt-cause-other-problems
 
Your whole post is just pure nonsense and Intel hatred. What the heck are you talking about? First of all, Intel CPUs have the lower power consumption on the desktop. I have a 13900T running at 35 W; nothing AMD has comes anywhere near that thing in efficiency. AMD CPUs need to start using a hybrid approach, because even their low-core-count CPUs (like the 7600X / 7700X) draw as much as 135 W, while Intel's 24 cores are running at 35. AMD needs to seriously work on their power draw, at both full and light load, ASAP. Browsing the web on a 7950X, that thing needs 60 W just to scroll YouTube. My 13900T is pulling just 6 to 8. Jesus lord.

Lower power consumption at low loads, not high loads. Also, AMD does not need hybrid cores for lower power consumption; they already have monolithic designs. The high power draw is because of the chiplet design, something I criticized even before the Zen 2 launch. But it helps keep costs down, so...

The 13900T happens to be a low-power model; take a low-power model from AMD too for comparison.

You have no idea what you are talking about. That's not how it works, lol.

Will you ever deal with facts? You are basically AMD's UserBenchmark on steroids. Just stop it already.

That's exactly how it works. Intel's own documentation agrees.
 
First, I wasn't trying to put words in your mouth. I read this in your OP ("The only reason this kind of anomaly exists is that Intel had to lower power consumption at any cost to be competitive against AMD.") and interpreted it to mean that you were saying that the hybrid architecture was good for reducing power consumption.

Second, the scheduling issue seems like a bug that will get worked on over time and steadily improve. Smartphones have had hybrid CPUs for years. The use patterns are different, but I would expect desktops and laptops to be able to handle it effectively.

The hybrid architecture exists only because Intel was unable to put 16 big cores up against AMD's 16 cores, as 16 "big" cores would have meant huge power consumption. We also have definite proof that the hybrid architecture was not originally planned. Intel thought they could keep AVX-512 support on the P-cores but ditched that at the last moment, because two different instruction sets on the same CPU are very problematic. Big "surprise". For some reason, next-gen E-cores happen to support AVX-512 too. So yes, Alder Lake was never supposed to exist.

Not a bug, but by design. Smartphones have had the big.LITTLE thing since the beginning of the multi-core era, and there it was not a problem to build all the software from scratch. On x86 we have had the non-hybrid approach for decades, and old software is still in use. It will never be fixed.

Third, while the different architectures can be a problem for some limited set of instructions that are not supported, heterogeneous computing in general is getting more and more popular. It isn't just GPUs, CPUs, and SSE/AVX on CPUs anymore, either. I suspect that in a few years we will have a lot more variation in the compute engines our devices have, if nothing else the NPU (which remains to be seen in terms of how useful it will be). Restricting the conversation back to P-cores and E-cores, if a binary contains instructions that can't be executed on an E-core, I'm pretty sure the schedulers won't put it on the E-core. I don't know what kind of overhead or bugs there are with that, but I have not heard of many incidents there. Perhaps it just isn't widely reported, or I am living under a rock.

I do agree with you that a hybrid CPU in the server space is a tougher sell, at least to all customers. I could see it being useful for specific workloads or types of workloads. But, in general, IT departments need to know the thermal and performance envelope of what they are getting, and they tend to have specific applications or use patterns in mind when they make big purchases, so they can tailor their purchase to a specific workload, lowering the need for hybrid.

FYI, there is a ton of software that does its own scheduling tricks and crashes on a hybrid architecture. Denuvo is a good example. It got fixed, but there is still software that won't be. Also, a hybrid CPU is supposed to run all old code too, whereas HSA can simply be restricted to software that actually supports it. Same for the NPU: you just won't expect to use an NPU with a CPU that does not contain the proper hardware.
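That said, software that wants to do its own placement doesn't have to guess: Windows reports which logical processors are P- or E-cores. A minimal sketch, assuming Windows 10 or later, where a higher EfficiencyClass value means a more performant core on hybrid parts:

```c
/* Minimal sketch: list each logical processor's EfficiencyClass so an
 * application doing its own scheduling can tell P-cores from E-cores.
 * Assumes Windows 10+; on hybrid parts the P-cores report a higher
 * EfficiencyClass value than the E-cores. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    ULONG len = 0;
    GetSystemCpuSetInformation(NULL, 0, &len, GetCurrentProcess(), 0);   /* query required size */
    SYSTEM_CPU_SET_INFORMATION *buf = malloc(len);
    if (!buf || !GetSystemCpuSetInformation(buf, len, &len, GetCurrentProcess(), 0))
        return 1;

    for (char *p = (char *)buf; p < (char *)buf + len; ) {
        SYSTEM_CPU_SET_INFORMATION *e = (SYSTEM_CPU_SET_INFORMATION *)p;
        printf("LP %u: EfficiencyClass %u\n",
               (unsigned)e->CpuSet.LogicalProcessorIndex,
               (unsigned)e->CpuSet.EfficiencyClass);
        p += e->Size;
    }
    free(buf);
    return 0;
}
```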

For servers, it's simple: no one wants to optimize server software for two different CPU architectures on the same server. Also, since servers are supposed to run at full power almost all the time, why waste any resources on lower power consumption that won't actually make any sense?
 
HT is actually harmful in gaming unless you run out of cores. E-cores boost performance a lot, so the best configuration for gaming on Intel is E-cores on, HT off. This boosts performance while dropping power consumption by a ton.

I am pretty sure that HT is great in gaming for my 4770K CPU, and I am absolutely sure that HT is required for gaming on 2C/4T CPUs such as my Skylake laptop's i5-6200U.

Turning off HT on a 4C/8T Haswell has been documented to hurt gaming performance, and I'd guess it's the same for other 4C/8T CPUs.
 
It's a difficult thing to do, because the cache sits directly on top of the CCD, so the heat generated by the cores has to travel through the 3D V-Cache. And with the thick IHS kept for cooler compatibility, it's even harder.
Yes, but they did increase it from the 5800X3D to the 7800X3D, and hopefully they will do it again when the 8800X3D/9800X3D launches.
 
Lower power consumption at low loads, not high loads. Also, AMD does not need hybrid cores for lower power consumption; they already have monolithic designs. The high power draw is because of the chiplet design, something I criticized even before the Zen 2 launch. But it helps keep costs down, so...

The 13900T happens to be a low-power model; take a low-power model from AMD too for comparison.
There is no AMD model with power as low as the 13900T's. AMD CPUs need huge power draw: 230 W for their 16-core part vs. 35 W for Intel's 24-core part.
 
E-cores boost performance a lot, so the best configuration for gaming on Intel is E-cores on, HT off.
Hm, interesting point, do you have any proof? I've never seen a benchmark/comparison where HT off and E-cores on are used for gaming... I remember some publications where turning E-cores completely off helped boost FPS in some games, but that may have been on Windows 10. For example, see this TPU article.
 
Hm, interesting point, do you have any proof? I've never seen a benchmark/comparison where HT off and E-cores on are used for gaming... I remember some publications where turning E-cores completely off helped boost FPS in some games, but that may have been on Windows 10. For example, see this TPU article.
Those are averages; 1% lows skyrocket in some games with E-cores on vs. off. Warzone 2 literally goes from 120 FPS 1% lows all the way up to 190.
 
There is no AMD model with power as low as the 13900T's. AMD CPUs need huge power draw: 230 W for their 16-core part vs. 35 W for Intel's 24-core part.

That CPU has a P-core base frequency of a whopping 1.1 GHz. Makes sense to put in 24 cores that run at molasses clock speeds :D

Also, its turbo power is over 100 watts.

This is the kind of stuff you keep making up. That is OBVIOUSLY wrong. 16 P-cores would be both faster and more efficient than 8P + 8E cores.

That depends on clock speeds. Underclocked E-cores vs. overclocked P-cores, then no. The other way around, yes. That also includes factory overclocks; AMD's top CPUs are heavily overclocked by default.
 
That CPU has a P-core base frequency of a whopping 1.1 GHz. Makes sense to put in 24 cores that run at molasses clock speeds :D

Also, its turbo power is over 100 watts.
Clocks are irrelevant. The 13900T is the most efficient desktop CPU. AMD is miles behind; they need to fix their insane power draw.
That depends on clock speeds. Underclocked E-cores vs. overclocked P-cores, then no. The other way around, yes. That also includes factory overclocks; AMD's top CPUs are heavily overclocked by default.

At the same power, 16 P-cores will be faster and more efficient than 8+8 regardless. Are you suggesting otherwise? Say, with a power limit of 150 watts, are you actually saying that 16 P-cores wouldn't be faster than 8+8? Because that would be laughably wrong.
 
Clocks are irrelevant. The 13900T is the most efficient desktop CPU. AMD is miles behind; they need to fix their insane power draw.
It's not even Intel's most efficient. Also, AMD has no interest in niche CPUs like that.
At the same power, 16 P-cores will be faster and more efficient than 8+8 regardless. Are you suggesting otherwise? Say, with a power limit of 150 watts, are you actually saying that 16 P-cores wouldn't be faster than 8+8? Because that would be laughably wrong.
Who said anything about the same power? At certain clock speeds, 16P is surely less efficient than 8+8.
 
Who said anything about the same power? At certain clock speeds, 16P is surely less efficient than 8+8.
But who cares about clock speeds? Performance is what matters. A 16 P-core configuration is both faster and more efficient than an 8+8, therefore power wasn't the reason Intel decided against it. It was strictly for performance-per-die-area reasons.

When it comes to efficiency, Intel is leading by a huge margin. A review just came out that tested different CPUs. Thankfully it had a 13700 non-K in there, and as expected it was the most efficient CPU on their entire chart. Their idle power numbers are also very interesting. Measured from the 12 V cables, Intel drops as low as 1 watt vs. 18 for the AMD parts. From the wall, Intel idles at 44 to 60 watts depending on the CPU; all Zen 4 parts need 95. And here you are claiming how incredibly efficient AMD is. Lol

 
But who cares about clock speeds? Performance is what matters. A 16 P-core configuration is both faster and more efficient than an 8+8, therefore power wasn't the reason Intel decided against it. It was strictly for performance-per-die-area reasons.

When it comes to efficiency, Intel is leading by a huge margin. A review just came out that tested different CPUs. Thankfully it had a 13700 non-K in there, and as expected it was the most efficient CPU on their entire chart. Their idle power numbers are also very interesting. Measured from the 12 V cables, Intel drops as low as 1 watt vs. 18 for the AMD parts. From the wall, Intel idles at 44 to 60 watts depending on the CPU; all Zen 4 parts need 95. And here you are claiming how incredibly efficient AMD is. Lol

Efficiency is almost all about clock speeds (on the same architecture/process). Every CPU has an efficiency-versus-clock-speed curve, and at some clock speed that efficiency is highest.
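(To first order, dynamic power goes as P ≈ C · V² · f, and since voltage has to rise along with frequency, power climbs much faster than linearly with clock speed while throughput scales at best linearly with it. So performance per watt keeps falling as clocks go up, and the sweet spot sits well below factory boost clocks; that's the rough math behind the point above.)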

Oh, all Zen 4 parts need 95? Then what are the 8500G, 8600G and 8700G? Also remember that AMD has much better connectivity, which also consumes power. AMD is also not interested in idle consumption; no one cares about it. If you are not using the CPU for anything, it's better to put the whole system to sleep.
 
Efficiency is almost all about clock speeds (on the same architecture/process). Every CPU has an efficiency-versus-clock-speed curve, and at some clock speed that efficiency is highest.

Oh, all Zen 4 parts need 95? Then what are the 8500G, 8600G and 8700G? Also remember that AMD has much better connectivity, which also consumes power. AMD is also not interested in idle consumption; no one cares about it. If you are not using the CPU for anything, it's better to put the whole system to sleep.
The G lineup are the parts without the separate IO die, the ones used in laptops. Those are actually great in terms of power draw. Desktop Zen isn't. Browsing the web or watching YouTube videos etc. largely falls into the idle category, so of course people care about it, because that's what they use their PCs for most of the time.

But regardless, even at full load, check the review: the 13700 is the most efficient CPU they tested, measured from the wall, for heavy workloads. Now imagine if they had included the other non-K models as well: 13900T, 14700T, 14900T, etc. Not a single AMD CPU would be in the top 10 for efficiency. Yet here you are trying to convince people that they have efficient CPUs, lol.
 
The G lineup are the parts without the separate IO die, the ones used in laptops. Those are actually great in terms of power draw. Desktop Zen isn't. Browsing the web or watching YouTube videos etc. largely falls into the idle category, so of course people care about it, because that's what they use their PCs for most of the time.

But regardless, even at full load, check the review: the 13700 is the most efficient CPU they tested, measured from the wall, for heavy workloads. Now imagine if they had included the other non-K models as well: 13900T, 14700T, 14900T, etc. Not a single AMD CPU would be in the top 10 for efficiency. Yet here you are trying to convince people that they have efficient CPUs, lol.
Those who are interested in low idle power consumption can go for Zen 4 without the IO die. Those who are not (like me) can go with Ryzen + IO die. There are alternatives.

The i7-13700 consumes 83 watts from the +12V line.

The i7-13700K consumes 257 watts from the +12V line.

Now, the maximum turbo power ratings for both are:

i7-13700: 219 W
i7-13700K: 253 W

So yeah, I totally agree "(y) (Y)"

(source: https://www.intel.com/content/www/us/en/products/compare.html?productIds=230500,230490 )
 