Intel 12th-Gen Core Alder Lake Architectural Benchmark

So E cores clearly aren't for gamers. But these Alder Lake CPUs are clearly the best choice for gamers right now. We won't see games using more than 8 P cores at full load for a long, long time. BF2042 uses 45% of my 5800X in 128-player mode, so it's got ample headroom, and that's by far the most demanding gaming load I've put on it.
That's not exactly how it works. 45% means it's already pegging the majority of the 8 cores, since the other 50% is just the hyperthreading. So 45%, if the scheduler works right, means it's basically using 75% of your CPU's total performance.
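For what it's worth, here's that back-of-envelope math spelled out as a minimal Python sketch. The ~30% SMT throughput uplift and the best-case scheduling are assumptions for illustration, not measurements of a 5800X:

```python
# Back-of-envelope for the "45% reported = ~75% real" claim on an 8C/16T CPU.
# Assumption: an HT sibling adds roughly 30% extra throughput (a common rule
# of thumb, not a measured figure for the 5800X).
logical_threads = 16
ht_uplift = 0.30                       # assumed throughput gain from SMT
busy = 0.45 * logical_threads          # 7.2 logical threads busy
full_throughput = 8 * (1 + ht_uplift)  # 10.4 "core equivalents" total
# Best case: the scheduler spreads those 7.2 busy threads over distinct
# physical cores before doubling any of them up.
effective = min(busy, 8) / full_throughput
print(f"~{effective:.0%} of total throughput")  # ~69%, close to the claim
```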
 
The FX was an 8-core CPU. You can run 8 tasks all assigned to different cores, something you cannot do with a 4-core CPU, not even with HT. That's definite proof that the FX-8350 is an 8-core CPU.

On low loads, yes, but tbh quad cores are quite useless for even low-level multitasking.

What about that settlement? AMD just decided to pay a small amount of money to avoid more expensive court costs. It doesn't prove anything.

Of course, there were no 8-core Excavator models ever released.
Yeah, the FX-8350 was not a true 8-core CPU. I mean, it performed like the Intel quad cores of the day; actually, it was often slower than Intel's quad cores. Multitasking was better on Intel's quad cores. There were very few applications that could actually use an FX-8350 as an 8-core. Most of them were stuck with the 4 compute modules.

We were just victims of AMD's marketing; if they hadn't written "8 cores" on the box, we would never have guessed it had 8 cores. Apparently you're still believing their lies.

Oh, and FYI, AMD settling the court case implies they were in the wrong. If they believed they would have won the case, they would have let it go to a judge. If they had won, it would have cost them nothing.
 
That's not exactly how it works. 45% means it's already pegging the majority of the 8 cores, since the other 50% is just the hyperthreading. So 45%, if the scheduler works right, means it's basically using 75% of your CPU's total performance.
Lol, I don't think you've got a good grasp on how it works, mate. If it's reporting 45%, it's probably using less than 45% of your CPU. The usage percentage is based on thread load and clock speed, so unless all core clocks are maxed out, and I can assure you they weren't when playing BF, you won't be using your CPU to the max. A good way to confirm this is to look at the temps and voltages: my 5800X doesn't get very hot in any game, even in BF. If you were using 75% of all that silicon, your voltages and temps would be shooting up.

If you want to learn more, DM me; I'm more than happy to talk you through it.
 
Also, E cores have no HT, so a 1.5x-2x perf drop can come from this alone; even the i7-2600K has HT, since you didn't switch SMT off.

Hyperthreading does not make a significant impact on modern processors (3000-series Ryzen and beyond, 9th-gen Intel and beyond) when talking about gaming. In CPU-only productivity tasks, you see an uplift of 33-70%, because the two threads don't work with completely separate resources. It's like having two teenagers help you haul wood from a cut-down tree to a trailer: when the loads are light enough for each to work independently, the task is done twice as fast; however, if a log is too heavy for one of them to carry alone, the other has to lend some of their own strength. It's not a perfect analogy (in a CPU, it's less one kid at 100% and the other donating 50%, and more both contributing equally), but it starts to explain benchmarks that don't scale with thread count. 6C/6T will almost always beat out 4C/8T if the two processors are on the same architecture.
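If you want to sanity-check that uplift range on your own machine, here's a minimal sketch: run a CPU-bound job with one worker per physical core, then one per logical thread, and compare. Pure Python is a crude stand-in for real workloads, and halving os.cpu_count() to get the physical count assumes SMT is enabled:

```python
# Minimal SMT-scaling probe: compare throughput with physical-core worker
# count vs logical-thread worker count on a compute-bound task.
import os
import time
from multiprocessing import Pool

def burn(_):
    # Integer-heavy busy loop standing in for a compute-bound job.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

def throughput(workers, jobs=32):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, range(jobs))
    return jobs / (time.perf_counter() - start)

if __name__ == "__main__":
    logical = os.cpu_count()
    physical = logical // 2  # assumes SMT is on; verify for your CPU
    uplift = throughput(logical) / throughput(physical)
    print(f"SMT uplift: {uplift:.2f}x")  # typically well under 2x
```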
 
If you look at 9th-gen i3 and 10th-gen i3 gaming benchmarks, you will see a massive difference in gaming, because the 9th-gen Core i3 did not support HT. 4 cores / 4 threads is not enough for gaming anymore.

Hyperthreading does have some play here, but the more significant difference would be not only that 10th-gen has a higher clock speed, but that 10th-gen Intel brought an 18% IPC improvement to its processors. It's very difficult to take two different-generation processors and compare one single point between them. If you want to determine what role hyperthreading plays, you have to disable it on the 10100 and compare against it not being disabled, which provides mixed results (see the embedded video). Cores always beat out threads, because cores don't share resources like threads do. 8C/8T almost always beats out 6C/12T and will certainly always beat out 4C/8T.
 
Hyperthreading does have some play here, but the more significant difference would be not only that 10th-gen has a higher clock speed, but that 10th-gen Intel brought an 18% IPC improvement to its processors. It's very difficult to take two different-generation processors and compare one single point between them. If you want to determine what role hyperthreading plays, you have to disable it on the 10100 and compare against it not being disabled, which provides mixed results (see the embedded video). Cores always beat out threads, because cores don't share resources like threads do. 8C/8T almost always beats out 6C/12T and will certainly always beat out 4C/8T.
I read somewhere that on Alder Lake the new thread scheduler tries to keep the P cores from having to hyperthread by offloading lighter threads to the E cores, helping to improve the overall performance of the P cores.
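You can approximate that steering by hand with CPU affinity. A minimal sketch, assuming Linux (os.sched_setaffinity is Linux-only) and a hypothetical topology where logical CPUs 16-23 are the E cores; check lscpu or /sys/devices/system/cpu/ for your actual layout:

```python
# Manually pinning a background task onto (assumed) E cores, roughly what
# Intel's Thread Director does automatically for low-priority threads.
import os
from multiprocessing import Process

E_CORES = set(range(16, 24))  # hypothetical E-core logical CPU IDs

def background_task():
    os.sched_setaffinity(0, E_CORES)  # keep this process off the P cores
    # Light busywork that shouldn't steal P-core time from the game:
    sum(i * i for i in range(10_000_000))

if __name__ == "__main__":
    p = Process(target=background_task)
    p.start()
    p.join()
```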
 
https://regmedia.co.uk/2019/08/27/amd-eight-core-settlement.pdf

//shrug// That looks like an admission of fault to me .. "here is money, now go away, because you are right that the shared decoders + shared scheduler and only four classic FPUs prevent sustained true 8x8 execution except in specific circumstances."
Did you even read that document? It contains a large number of false claims, and the only real point that made sense was about AMD's advertising.

One could also file a class-action lawsuit about Intel claiming the E cores are at Skylake performance level.
You are absolutely right, this hybrid architecture sucks so much in gaming that it actually tops the charts in most games, winning by a huge margin in some of them as well. You hit the nail on the goddamn head, my man :p
"Most games won't even start" is much worse than a small advantage in some.
 
One could also file a class-action lawsuit about Intel claiming the E cores are at Skylake performance level.

But it is on par, or better:
https://www.anandtech.com/show/1704...ybrid-performance-brings-hybrid-complexity/10

"Most games won't even start" is much worse than a small advantage in some.

Only a select few right now. You also have to remember that new architectures come with growing pains when they deviate from the norm. Remember the issues that came about when Ryzen first released? Finding compatible RAM stunk, and even then people had a hard time running higher than the 2000s in MT/s. The list of games that have trouble with 12th-gen has significantly decreased, down to only three: https://www.intel.com/content/www/u...0088261/processors/intel-core-processors.html
 
Great article. You took incredible pains to present your case.

While it would not have helped your case, I would have liked to see the charts presented in a hierarchical format, just for readability. Possibly as an 'also' format? Nevertheless, still a great presentation.

I would hope that you do a similar article on 'real-world' benchmarks, like Office, Blender, and Cinebench R20 and R23... an overall look at the 'E' cores in the real (non-gaming) world. In other words, are the 'E' cores just a waste of silicon?
 
Yeah, the FX-8350 was not a true 8-core CPU. I mean, it performed like the Intel quad cores of the day; actually, it was often slower than Intel's quad cores. Multitasking was better on Intel's quad cores. There were very few applications that could actually use an FX-8350 as an 8-core. Most of them were stuck with the 4 compute modules.

We were just victims of AMD's marketing; if they hadn't written "8 cores" on the box, we would never have guessed it had 8 cores. Apparently you're still believing their lies.

Oh, and FYI, AMD settling the court case implies they were in the wrong. If they believed they would have won the case, they would have let it go to a judge. If they had won, it would have cost them nothing.
The question was about the number of cores, not performance. But of course, you would expect a $300 CPU to be equal to a $2000 CPU just because both have 8 cores?

Again, no; someone just expected that number of cores = performance, despite the huge price difference.

Oh yeah, a judge never makes any errors, and you always win every lawsuit if you're "right" :p

According to the TechSpot article we are discussing, not at all. In some scenarios, yes. But generally, no.
Only a select few right now. You also have to remember that new architectures come with growing pains when they deviate from the norm. Remember the issues that came about when Ryzen first released? Finding compatible RAM stunk, and even then people had a hard time running higher than the 2000s in MT/s. The list of games that have trouble with 12th-gen has significantly decreased, down to only three: https://www.intel.com/content/www/u...0088261/processors/intel-core-processors.html
I had no problems with Ryzen RAM. Also, those problems didn't matter to the normal user. Not being able to run games at all is a worse problem.

You do realize that list does not cover all games? It covers just the few games Intel has tested. Also, this hybrid problem is something that's not easy to fix. Basically, you'll need either new software or a trick like Scroll Lock to disable cores.
 
You do realize that list does not cover all games?

This list used to be expansive, but almost all of them have been fixed. So, what games are still having problems that aren't on it? Keep in mind that the latest patch, which fixed a couple dozen games, was released just about a week ago, so that has to be taken into account. Also, everything I'm seeing is that the issues that have now mostly been patched had to do with Denuvo, the issuer of a widely disliked anti-piracy technology.

And I realized I skimmed the AnandTech article and overlooked the fact that they used all eight E cores.

However, hybrid CPUs aren't a big problem to deal with; they just require firmware/OS support. Although you didn't have RAM issues, a lot of people did. That's why Ryzen-compatible RAM became a large selling point a few years ago. I was debating going with Ryzen when it first came out, but didn't, because even now you can just Google "1st-gen Ryzen RAM," without even mentioning an issue, and get many more stories about problems with the new architecture than not. The Ryzen 3000-series also wouldn't run Destiny 2 when it first came out.
 
Hyperthreading does have some play here, but the more significant difference would be not only that 10th-gen has a higher clock speed, but that 10th-gen Intel brought an 18% IPC improvement to its processors.

WRONG

10th gen back to 6th gen are all based on the exact same architecture (Skylake). ZERO IPC difference. Only 11th gen and 12th gen have higher IPC.

The Core i3-10100 and i3-9100 have the same base clock speed and only a 100 MHz difference in turbo boost.

The only real difference is that the 10th-gen i3 has HT enabled.

I'm surprised that no one corrected you, and your comment even got liked by one user. LOL

If you want to determine what role hyperthreading plays, you have to disable it on the 10100 and compare against it not being disabled, which provides mixed results (see the embedded video). Cores always beat out threads, because cores don't share resources like threads do. 8C/8T almost always beats out 6C/12T and will certainly always beat out 4C/8T.

Your video shows an i5-10400 (not a Core i3).

Comparing 6C/12T to 6C/6T is not the same as comparing 4C/8T to 4C/4T.

I was talking about turning off HT on a quad core having a huge impact on fps (not on 6 cores or 8 cores).

Here is a video showing the i3-10100 with SMT on and off...

HT impact varies from game to game, but in some games you get a huge loss in fps when you turn off HT on a quad-core CPU.

It's the right conclusion, but for different reasons. Core latency isn't the only killer; L3 cache access latency is as well. The E cores have no dedicated L3; they share the P cores' L3 cache, so it's an effective "off-core" traversal each time.
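A pointer chase is the usual way to expose that kind of latency: each step depends on the previous one, so the CPU can't hide the cache miss. A rough sketch, with the caveat that the interpreter adds far more overhead per hop than the latencies the article measured, so only relative comparisons between working-set sizes mean anything:

```python
# Crude pointer-chase latency probe: walk one big random cycle so every
# load depends on the previous load. Shrink or grow N to move the working
# set in and out of L3 and watch the per-hop time change.
import random
import time

N = 1 << 20  # ~1M entries; adjust to cross cache-size boundaries
order = list(range(N))
random.shuffle(order)
nxt = [0] * N
for a, b in zip(order, order[1:]):
    nxt[a] = b
nxt[order[-1]] = order[0]  # close the cycle

hops = 2_000_000
i = 0
t0 = time.perf_counter()
for _ in range(hops):
    i = nxt[i]
dt = time.perf_counter() - t0
print(f"{dt / hops * 1e9:.1f} ns per hop (interpreter overhead included)")
```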

Also, lack of HT on a quad core kills fps in some titles (especially minimum fps). The biggest killer is the lack of HT.

At least two of the games the review tested (Battlefield V and Shadow of the Tomb Raider) showed that they need HT on a quad core for a smooth experience (the i3-10100 and 9100F are the same CPU, both Skylake architecture with the same IPC, but the 9100F has no HT).

If they had tested 8 E cores, things would have been very different. 4C/4T won't run some games smoothly even if they are big cores (especially if you look at minimum fps or 1% lows).
 
This list used to be expansive, but almost all of them have been fixed. So, what games are still having problems that aren't on it? Keep in mind that the latest patch, which fixed a couple dozen games, was released just about a week ago, so that has to be taken into account. Also, everything I'm seeing is that the issues that have now mostly been patched had to do with Denuvo, the issuer of a widely disliked anti-piracy technology.
Denuvo is the one to blame; I never liked Denuvo anyway. However, Denuvo is something you cannot fix with OS/driver updates. That's why I expect there are many older Denuvo games that have huge problems with Alder Lake.

However, hybrid CPUs aren't a big problem to deal with; they just require firmware/OS support. Although you didn't have RAM issues, a lot of people did. That's why Ryzen-compatible RAM became a large selling point a few years ago. I was debating going with Ryzen when it first came out, but didn't, because even now you can just Google "1st-gen Ryzen RAM," without even mentioning an issue, and get many more stories about problems with the new architecture than not. The Ryzen 3000-series also wouldn't run Destiny 2 when it first came out.
From here, it's mostly an OS/driver problem. But for older software, it's pretty much impossible to fix.

Ryzen memory problems were mostly with high-speed memory. Not surprisingly, a new architecture is optimized for stability, not for high-speed memory or other niches. Just like fast DDR5 kits are pretty rare right now.

Destiny 2 is a pretty small game, and the bug was fixed quickly.
 
Lol, I don't think you've got a good grasp on how it works, mate. If it's reporting 45%, it's probably using less than 45% of your CPU. The usage percentage is based on thread load and clock speed, so unless all core clocks are maxed out, and I can assure you they weren't when playing BF, you won't be using your CPU to the max. A good way to confirm this is to look at the temps and voltages: my 5800X doesn't get very hot in any game, even in BF. If you were using 75% of all that silicon, your voltages and temps would be shooting up.

If you want to learn more, DM me; I'm more than happy to talk you through it.
It doesn't get very hot because it's probably not using HT. HT increases temperatures by a shitload, even at the same voltages. Run Cinebench R23 with and without SMT and see the difference. It seems you don't understand how it works; if you have any questions, DM me.
 
Thanks a ton for this analysis, it must have been a LOT of work. Really cool to see old but relevant parts like the 2600K included; it really brings some perspective to the results. :)
And the conclusion was spot on!
 
You'd think applying Moore's law would mean current processors would be far beyond processors from 10 years ago, but the i7-2600K from 2011 seems to do quite well in these tests. I still use an i5-3570K with a 1060 (6GB) GPU. I know I should upgrade, but there's no point upgrading the CPU without upgrading the GPU, and I point-blank refuse to pay double MSRP for a GPU.
 
You'd think applying Moore's law would mean current processors would be far beyond processors from 10 years ago, but the i7-2600K from 2011 seems to do quite well in these tests. I still use an i5-3570K with a 1060 (6GB) GPU. I know I should upgrade, but there's no point upgrading the CPU without upgrading the GPU, and I point-blank refuse to pay double MSRP for a GPU.
And they are far beyond CPUs from 10 years ago. The 12900K with 8 E cores and 4 P cores disabled has more than double the performance of the 2600K. Basically, one third of the 12900K is twice as fast as the whole 2600K. That means the 12900K has roughly 6 times the performance.
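Spelling out that math, under the poster's own assumption that the 8 E + 4 P slice is about a third of the chip's total throughput:

```python
# The back-of-envelope scaling: if one third of the 12900K ~= 2x a 2600K,
# the whole chip ~= 6x. The "one third" share is an assumption, not a
# measured split.
third_vs_2600k = 2.0
whole_vs_2600k = 3 * third_vs_2600k
print(f"~{whole_vs_2600k:.0f}x a 2600K")  # ~6x
```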
 
Hyperthreading does not make a significant impact on modern processors (3000-series Ryzen and beyond, 9th-gen Intel and beyond) when talking about gaming.
True, if the total number of threads is already roughly enough for the game, which is usually 8-12. If you have only 4 cores = 4 threads, that's too few for almost any modern game, so even +4 HT threads can bring a significant boost (of course not the same as +4 physical cores, but still noticeable). Going from 8 threads up to 12-16 HT threads scales less well, and so on... The more threads you already have, the less you gain from adding even more, down to +0% once the total is above 16.

HT usually makes a significant impact if you have the right mix of FP/INT instructions executed by different threads: say threads 1-4 execute mostly INT instructions (loading mostly the INT execution units of each core) while threads 5-8 execute mostly FP (loading mostly the dedicated FP execution units); then you may observe almost a 2x perf boost by fully utilizing the instruction-level parallelism of each core. This can be clearly seen when your load is pure FP/AVX (like LinX/Linpack), for example: HT has zero effect and can be turned off without affecting performance.
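If you want to poke at that INT/FP port-sharing claim yourself, a rough sketch: pin two workers onto SMT siblings and compare an INT+INT pairing with an INT+FP pairing. Linux-only affinity calls, and treating logical CPUs 0 and 1 as siblings is an assumption; check /sys/devices/system/cpu/cpu0/topology/thread_siblings_list first. Pure Python exercises the execution units only loosely, so expect a muted version of the effect:

```python
# Compare how SMT siblings share a core under INT+INT vs INT+FP load.
import os
import time
from multiprocessing import Process

LOOPS = 20_000_000

def int_work(cpu):
    os.sched_setaffinity(0, {cpu})      # pin to one SMT sibling
    x = 1
    for i in range(1, LOOPS):
        x = (x * 31 + i) % 1_000_003    # integer multiply/mod loop

def fp_work(cpu):
    os.sched_setaffinity(0, {cpu})      # pin to the other sibling
    x = 1.0
    for _ in range(LOOPS):
        x = x * 1.0000001 + 0.5         # floating-point multiply/add loop

def run_pair(a, b, cpus=(0, 1)):        # assumed sibling pair
    procs = [Process(target=f, args=(c,)) for f, c in zip((a, b), cpus)]
    t0 = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"INT+INT on siblings: {run_pair(int_work, int_work):.2f}s")
    print(f"INT+FP  on siblings: {run_pair(int_work, fp_work):.2f}s")
```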
 
tldr
"Typically, P-cores take 37ns to communicate with one another whereas the E-cores take 57ns and this cripples performance in games and for any other workload that relies heavily on core crosstalk."
I.e., it's supposed to be an architectural answer to AMD's fairly seamless chiplet/MCM/Infinity Fabric expandability, and it isn't even close when put to the test.

When cores at the extremity of the (limited) bus have to talk to each other, the latency is crap.
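Figures like those 37/57 ns numbers are typically measured with a cache-line ping-pong between two pinned threads. Here's a very rough Python rendition of the idea; the interpreter inflates the absolute numbers enormously, and the CPU IDs are assumptions (pick a P-to-P pair vs a P-to-E pair from your own topology):

```python
# Core-to-core "ping-pong": two processes pinned to different logical CPUs
# bounce a flag through shared memory; the round-trip time tracks how far
# apart the cores sit on the interconnect.
import os
import time
from multiprocessing import Process, Value

ROUNDS = 100_000

def ponger(flag, cpu):
    os.sched_setaffinity(0, {cpu})
    for _ in range(ROUNDS):
        while flag.value != 1:   # spin until pinged
            pass
        flag.value = 0           # pong back

def pinger(flag, cpu):
    os.sched_setaffinity(0, {cpu})
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        flag.value = 1           # ping
        while flag.value != 0:   # spin until ponged
            pass
    dt = time.perf_counter() - t0
    print(f"{dt / ROUNDS * 1e9:.0f} ns per round trip (interpreter-inflated)")

if __name__ == "__main__":
    flag = Value("i", 0, lock=False)
    other = Process(target=ponger, args=(flag, 2))  # assumed second core
    other.start()
    pinger(flag, 0)                                 # assumed first core
    other.join()
```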
 
Ahhh, I remember when AdoredTV essentially said that the FX series would age better and the gap would close against old 2nd-gen i5 and i7 parts, here the 2600K. The lolz.

The incredible thing is the 2600K is obviously very slow but still (kind of) viable at 1080p nearly eleven years later, especially seeing as you can dump another 20 percent of clock speed on it quite easily.
Tbf, Jim was actually basing the comparison on the FX-8350 vs the i5-2500K. The issue is that Jim didn't understand the architecture of the FX series; he admitted he was wrong in later videos. The issue at the time was that Intel upped their prices massively; even i3s cost as much as the FX-8350.
 
Remember when Intel was saying that the E cores were on par with Skylake... well, that was a lie.
 
You are absolutely right, this hybrid architecture sucks so much in gaming that it actually tops the charts in most games, winning by a huge margin in some of them as well. You hit the nail on the goddamn head, my man :p
Did you read the article? In the 12900K, the E cores are crap for gaming; only the 8 P cores are important. So, it's an 8-core CPU in practice. As games don't use more than 8 cores, you see those benchmarks. The architecture as a whole, anyway, is not desirable for gamers. Problematic.
 
That's not exactly how it works. 45% means it's already pegging the majority of the 8 cores, since the other 50% is just the hyperthreading. So 45%, if the scheduler works right, means it's basically using 75% of your CPU's total performance.
No idea what you are talking about. Maybe you are bringing new knowledge about CPUs, who knows 😏
 
Did you read the article? In the 12900K, the E cores are crap for gaming; only the 8 P cores are important. So, it's an 8-core CPU in practice. As games don't use more than 8 cores, you see those benchmarks. The architecture as a whole, anyway, is not desirable for gamers. Problematic.
I'm a gamer, and the architecture is desirable for me; therefore, you are wrong. It's the fastest gaming CPU. I don't know what the heck you are talking about.
 