Intel 12th-gen Core CPUs are official: Performance preview, Alder Lake models and specs

Scorpus

Forward-looking: After much anticipation and tons of leaks over the last few months, Intel is finally ready to unveil their first "Alder Lake" 12th-gen Core desktop processors, and they're giving us a performance preview based on benchmark tests run in-house. Come next week, you'll also have TechSpot's independent review of the CPUs as we start exploring all the elements related to Alder Lake, like DDR5 vs. DDR4 performance, which motherboards are worth buying, and so on. Lots of fun times ahead.

Intel had already given us a whole range of details about the Alder Lake architecture including the new hybrid design with P-cores and E-cores, but today that all comes together into the actual CPUs that will become available starting November 4.

In previous generations, Intel had launched their mobile processors first, but that's set to change with the 12th-generation lineup. The first CPUs to be released next week are enthusiast desktop K-series models, with everything else scheduled to launch early next year. Given Intel’s desktop parts are finally moving to a new process node with a brand new architecture, it seems they want to lead with their highest performing models before filling out the rest of the series.

As a quick refresher on Alder Lake, Intel is moving to a new hybrid design that features both performance cores (P-cores) and efficient cores (E-cores). The P-cores are an overhaul of Intel's existing high-performance cores, bringing a bigger, wider, deeper design with more cache and improved features. Intel are claiming a 19% IPC improvement for these P-cores versus the Cypress Cove cores seen in Rocket Lake.

Meanwhile, the E-cores are a big overhaul of Intel's old Atom cores, improving performance into the range of Skylake while being much smaller in terms of die space and more efficient in terms of power consumption. Intel says that these cores are mostly designed for background applications, but they are no slouch and should significantly help multi-threaded performance in some applications.

Joining all of this together is Intel’s new cache architecture. Each P-core has 1.25 MB of L2 cache, and each group of four E-cores has 2 MB of L2 cache. Then, accessible across both P and E cores, Intel are providing up to 30 MB of shared L3 cache. Rocket Lake topped out at 16 MB of L3 cache for an 8 core design, so the jump up to 16 cores with Alder Lake sees that cache almost double.
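
For those keeping count, a quick back-of-the-envelope tally of those cache figures (simple arithmetic in Python, using only the numbers quoted above) comes out like this:

```python
# Back-of-the-envelope cache tally for the top die (i9-12900K), using only
# the per-core/per-cluster figures quoted above. Illustrative arithmetic only.
p_cores, e_cores = 8, 8
l2_per_p_core = 1.25      # MB, private to each P-core
l2_per_e_cluster = 2.0    # MB, shared by each group of four E-cores
e_clusters = e_cores // 4

total_l2 = p_cores * l2_per_p_core + e_clusters * l2_per_e_cluster
shared_l3 = 30.0          # MB, accessible by both P-cores and E-cores

print(f"Total L2: {total_l2} MB, shared L3: {shared_l3} MB")
# Total L2: 14.0 MB, shared L3: 30.0 MB
```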

Also part of Alder Lake is the Thread Director, a hardware scheduling feature that assists Windows 11 in allocating tasks to the appropriate cores. When you have a hybrid design, it’s of utmost importance that applications are run on the right cores – foreground, high performance apps on the P-cores, and background tasks on the E-cores. Thread Director provides feedback to Windows 11 that assists with that process.
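
Thread Director isn't something you program against directly (the hardware feeds telemetry to the Windows 11 scheduler), but a rough sketch of the underlying idea, steering a background task onto the E-cores via an affinity mask, might look like the following. The E-core logical CPU numbering here is an assumption for illustration only.

```python
# Hypothetical illustration only: Thread Director and the Windows 11 scheduler
# place threads automatically, but an application could still steer a
# background worker onto the E-cores with an affinity mask. The logical CPU
# numbering below (P-core threads 0-15, E-cores 16-23 on a 12900K) is an
# assumption for the sake of the example and may differ on a real system.
import psutil

ASSUMED_E_CORE_CPUS = list(range(16, 24))

def pin_to_e_cores(pid: int) -> None:
    """Restrict a (background) process to the assumed E-core logical CPUs."""
    psutil.Process(pid).cpu_affinity(ASSUMED_E_CORE_CPUS)

# Example: push the current process onto the E-cores.
pin_to_e_cores(psutil.Process().pid)
```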

The CPUs

Intel are launching six 12th-gen Core models, three in the K series and three in the KF series. So essentially we're getting a Core i9 design, a Core i7 design and a Core i5 design, each available with and without integrated graphics.

The Core i9 model, the i9-12900K, is the fully unlocked Alder Lake die. It brings with it 8 P-cores and 8 E-cores for a total of 16 CPU cores and 24 threads. Why not 32 threads? Well, the E-cores do not have hyperthreading, so the P-core cluster is providing 8 cores and 16 threads, while the E-core cluster has 8 cores and 8 threads. There’s also 30 MB of L3 cache.

Because there are two types of cores in this CPU, clock speeds are more complex than before. The P-cores run between a 3.2 GHz base and a 5.2 GHz boost, while the E-cores sit at 2.4 to 3.9 GHz. So while the top frequency the P-cores can hit is similar to previous generations, the E-cores are clocked a bit lower in addition to having lower IPC.

The Core i7-12700K brings 8 P-cores and 4 E-cores for a total of 12 cores and 20 threads, with 25 MB of L3 cache. Boost clock speeds are slightly lower than the Core i9 model, though base frequency is higher: 3.6 to 5.0 GHz for the P-cores, and 2.7 to 3.8 GHz for the E-cores.

Then we have the Core i5-12600K, which has 6 P-cores and 4 E-cores for a total of 10 cores and 16 threads, plus 20 MB of L3 cache. The P-cores are clocked between 3.7 and 4.9 GHz, while the E-cores sit at 2.8 to 3.6 GHz.
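
Putting the three configurations side by side (a small Python summary built only from the figures above; thread counts follow from the P-cores being hyperthreaded and the E-cores not):

```python
# The three K-series configurations side by side, using only the figures
# above. Thread counts follow from P-cores supporting Hyper-Threading (two
# threads each) while E-cores run a single thread.
skus = {
    "Core i9-12900K": {"p": 8, "e": 8, "l3_mb": 30},
    "Core i7-12700K": {"p": 8, "e": 4, "l3_mb": 25},
    "Core i5-12600K": {"p": 6, "e": 4, "l3_mb": 20},
}

for name, s in skus.items():
    cores = s["p"] + s["e"]
    threads = s["p"] * 2 + s["e"]
    print(f"{name}: {cores} cores / {threads} threads, {s['l3_mb']} MB L3")
# Core i9-12900K: 16 cores / 24 threads, 30 MB L3
# Core i7-12700K: 12 cores / 20 threads, 25 MB L3
# Core i5-12600K: 10 cores / 16 threads, 20 MB L3
```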

All models support overclocking, and the K-SKUs with integrated graphics have Xe-based UHD Graphics 770, though Intel hasn't gone into much detail on what that brings. All models also support DDR5-4800 and DDR4-3200 memory, though you'll have to choose which technology to use; you can't use both at the same time.

Intel is going straight for the throat by pricing the 16-core Core i9-12900K at $589

Intel are not messing around with pricing. Even though AMD's competing Ryzen 9 5950X CPU with 16 cores has an MSRP of $800, Intel is going straight for the throat by pricing the 16-core Core i9-12900K at $589. You'll then be able to save around $25 by opting for the Core i9 KF model. This is looking very competitive, although we haven't seen how it performs yet.

The Core i7 and Core i5 models are even more aggressive. The 12700KF is going for a $384 tray price, roughly the same as the current price for AMD’s Ryzen 7 5800X. However, Intel is offering not just 8 performance cores, like the 5800X, but four efficient cores as well, which could deliver a decent boost to multi-threaded performance. Intel is clearly looking to regain enthusiast market share with this pricing.

Then we have the Core i5-12600KF at just $264, lower than the Ryzen 5 5600X’s $300 price tag, but with 10 cores instead of 6. Personally, I’m pretty excited to see this sort of price war brought back to the desktop market as AMD hasn’t exactly been offering the best value parts with their Ryzen 5000 lineup, even though performance has been impressive. If Intel can deliver great performance with these CPUs at a lower price than AMD, that’s a big win.

Another interesting thing to note about the lineup is that Intel has changed the way they report power, deprecating the "TDP" in favor of two metrics: processor base power and maximum turbo power. Processor base power is essentially the same as the "PL1" power limit of previous generations, and maximum turbo power is the same as PL2 – it's just that both of these values are now exposed and listed on Intel's spec sheet, making it much easier for buyers to see what power level these chips will run at. No more "125W" CPUs that run in excess of 200W: Intel are showing you the full turbo value, which ranges from 150W for the Core i5s to 241W for the Core i9s.
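
If you're curious what limits your own system is actually running, these values are exposed on Linux through the intel_rapl powercap interface as the package's long-term and short-term constraints, which roughly correspond to PL1 and PL2. Here's a hedged sketch, since sysfs paths and constraint ordering can vary by kernel and platform:

```python
# Sketch: reading the package power limits (roughly PL1/PL2, i.e. processor
# base power and maximum turbo power) on a Linux system via the intel_rapl
# powercap interface. The sysfs path and constraint ordering are assumptions
# that can vary by kernel and platform.
from pathlib import Path

RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0")  # package power domain

def read_limit_watts(constraint: int) -> float:
    """Return a RAPL power-limit constraint in watts (sysfs reports microwatts)."""
    uw = int((RAPL_PKG / f"constraint_{constraint}_power_limit_uw").read_text())
    return uw / 1_000_000

if RAPL_PKG.exists():
    print(f"PL1 / base power : {read_limit_watts(0):.0f} W")  # long-term limit
    print(f"PL2 / turbo power: {read_limit_watts(1):.0f} W")  # short-term limit
```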

But there’s a bigger change here as well...

Intel is finally updating their "default" power configuration to fall in line with how motherboard makers have been running these CPUs for years now. For all K-SKU processors, the default power mode will be to run the CPU at its maximum turbo power indefinitely, for the best performance. Previously, the default configuration was technically to fall back to base power after a limited turbo period, except almost all motherboards overrode this by default and ran the CPU at maximum turbo power indefinitely. Both configurations were always in-spec, but it caused confusion as to which configuration was "correct" or "default" and which mode reviewers should test with.

Intel is clearing that up this generation, so there's no confusion and no argument over which way these CPUs should be run. The Intel default spec is now to run the CPU at the maximum turbo power indefinitely, which matches the out-of-the-box configuration most motherboards were already running these CPUs in. Running at base power is still in-spec as well, but it's now being clarified as an optional configuration that you'll have to enable.

Platform Features

Each Alder Lake CPU announced today has 20 PCIe lanes direct from the CPU. This is split into 16 lanes of PCIe 5.0, and 4 lanes of PCIe 4.0. In addition to this, the new Z690 chipset is providing up to 12 PCIe 4.0 lanes and up to 16 PCIe 3.0 lanes, plus various USB port configurations. This is a substantial improvement to the PCIe connectivity compared to prior generations, with faster lanes direct from the CPU, and the new addition of PCIe 4.0 lanes from the chipset.
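
For a rough sense of what that lane split is worth, here's a quick bandwidth estimate using the standard PCIe line rates and 128b/130b encoding (per direction, before protocol overhead):

```python
# Rough per-direction bandwidth for the CPU lane split described above, using
# the standard PCIe line rates and 128b/130b encoding. Real-world throughput
# is lower once protocol overhead is accounted for.
def pcie_bandwidth_gbs(line_rate_gt_s: float, lanes: int) -> float:
    """Approximate usable bandwidth (GB/s, one direction) for a PCIe link."""
    return line_rate_gt_s * (128 / 130) / 8 * lanes

print(f"PCIe 5.0 x16: ~{pcie_bandwidth_gbs(32, 16):.0f} GB/s")  # graphics slot
print(f"PCIe 4.0 x4 : ~{pcie_bandwidth_gbs(16, 4):.0f} GB/s")   # CPU-attached SSD
# PCIe 5.0 x16: ~63 GB/s
# PCIe 4.0 x4 : ~8 GB/s
```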

The package for the CPUs is different, too, requiring the LGA 1700 socket which is larger and more rectangular than previous Intel sockets. The CPUs include what Intel is calling a “thick IHS + thin STIM” design: the die is 25% thinner, the solder thermal interface material is 15% thinner, and the IHS is now thicker.

For overclockers, there are lots of features provided with Alder Lake. You'll have full control over the core ratios for both the P-cores and E-cores, as well as control over BCLK and ring/cache frequency. Intel lets you dig down even further if you want, with things like per-core enabling, full voltage controls, AVX offsets and so on. All of this will be available in a new version of the Extreme Tuning Utility, including a one-click overclocking tool.
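
The arithmetic behind those ratio controls is straightforward: the core clock is just BCLK times the per-core multiplier. Assuming the stock 100 MHz base clock, a quick sketch looks like this:

```python
# Core clock is simply BCLK multiplied by the per-core ratio. Assuming the
# stock 100 MHz base clock, the quoted 5.2 GHz P-core boost on the 12900K
# corresponds to a 52x ratio; example numbers only.
def core_clock_ghz(ratio: int, bclk_mhz: float = 100.0) -> float:
    return ratio * bclk_mhz / 1000

print(core_clock_ghz(52))          # 5.2    -> stock 12900K P-core boost
print(core_clock_ghz(39))          # 3.9    -> stock 12900K E-core boost
print(core_clock_ghz(51, 102.5))   # 5.2275 -> a mild BCLK tweak
```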

Then for memory, Intel is introducing XMP 3.0 with DDR5. We already know Alder Lake will be the first desktop platform to support DDR5 memory – and again, you'll have to choose between DDR4 and DDR5; they can't be used at the same time, and you'll likely need to buy either a DDR4 or a DDR5 motherboard. But if you do choose DDR5, you'll gain access to XMP 3.0, which introduces new features.

The big one is the increase in stored profiles from two to five: three are vendor profiles, and two are rewritable profiles that are user configurable. There are also descriptive profile names, which will make it much easier to know which profile does what, along with a few other improvements.

So that’s most of the platform features: P-cores and E-cores, new cache layout, DDR5 support with XMP 3.0, PCIe 5.0 support and improved PCIe from the chipset, new overclocking features and of course the SKU list.

Performance Preview

First up, we have gaming performance. Intel claims their new 12th-gen processors are the world's best for gaming. Across 31 games tested, Intel is showing the Core i9-12900K to be 13% faster on average than the 11900K. These benchmarks were run at 1080p with High settings, using a GeForce RTX 3090. The 12th-gen platform is using DDR5-4400 memory, which Intel incorrectly lists as 4400 "MHz". Intel do say that DDR5 should be faster for gaming than DDR4 based on their testing, although mature DDR4 still performs well according to them.

Intel's own numbers show the 12900K being anywhere from slightly slower than the Ryzen 9 5950X to 30 percent faster. However, Intel does admit that these benchmarks were captured on Windows 11 before the performance patch for AMD CPUs was available, so the results aren't as meaningful as they would have been had they tested the 5950X in its best performing mode, such as by using Windows 10 or waiting for the patch to be available. Intel also claims that the 12900K would still be the "world's fastest gaming CPU" if they used DDR4 memory, though they didn't show the data to back this up.

It should also be noted that Intel tested with Windows 11’s virtualization-based security feature enabled. We recently showed in our Windows 11 vs Windows 10 benchmark test that VBS does hurt performance on the Core i9-11900K in games, though the degree to which performance is impacted depends on the game. If the 12900K is superior at managing or accelerating VBS, then the gains shown versus 11th-gen may be greater than they would have been if both platforms were tested with VBS disabled. That’s something we’ll have to explore in our review.

Intel spent some time talking about the software challenges of working with a hybrid CPU architecture and some of the best practices for developing applications. It sounds like while Alder Lake should work fine for a large number of applications, there will be some initial teething issues. Of specific interest for gamers is that Denuvo DRM wasn’t initially compatible with Alder Lake, although Intel worked with Denuvo to correct this, which will need to be applied as a patch to affected games. Intel said 91 games were impacted, and 32 have yet to be fixed, though 16 of those should be fixed before Alder Lake launches.

For productivity performance, Intel is talking about significant performance gains over previous parts. For example, Intel is showing gains in excess of 30% comparing the 12900K to the 11900K in Adobe applications, and similar gains in apps like Autodesk Revit. In more single-threaded productivity apps, Intel expect gains of at least 15%, such as in UL Procyon.

However, at no point during Intel's presentation did they compare 12th-gen CPUs to AMD's competitors in productivity workloads, nor did they make any claims about being the world's fastest chip for productivity like they did for gaming. This suggests that Intel are unlikely to beat AMD in productivity, and it'll also be interesting to see how the 12900K compares to the 10900K in some of these apps, given the 10900K was faster than the 11900K at times.

Intel also showed an IPC benchmark comparing their various core architectures at the same frequency. 12th-gen P-cores ended up 28% faster than 10th-gen and 14% faster than 11th-gen, while the E-cores effectively matched 10th-gen in IPC despite being significantly more efficient. In fact, Intel believes the 12900K is 50% faster than the 11900K at the same peak power level, 30% faster when limited to 125W, and on-par when using just 65W compared to the 11900K's 250W. In other words, Intel claims this new architecture is significantly more efficient, especially at lower power levels.

Availability

Intel is opening up pre-orders for their 12th-gen Core parts right now, with availability scheduled for November 4th. What Intel has shown here is pretty decent and certainly exciting, especially from a value perspective, but it’ll still have to go through our full benchmark suite in the next week or so.

There are other questions that remain unanswered for now, such as how the 12900K compares to the Ryzen 9 5950X in productivity benchmarks. We don't have a clear idea about total platform cost either. While these 12th-gen CPUs seem cheaper than AMD’s counterparts, this isn’t factoring in the cost of Z690 motherboards and DDR5 memory, which we expect to be quite expensive. Most of these questions will be answered when we review the CPUs next week, so check back soon for all of that testing and beyond.


 
RIP Ryzen 5000 lol. I’ve had my 4790K for 7 years and I genuinely thought my next CPU would be a Ryzen part with the way Intel were going. But by the looks of this I will almost certainly be getting a 12600K.

I look forward to all the AMD fans praising Intel for forcing AMD to lower their prices as they undoubtedly will after this launch.
 
My Youtube streaming PC is a Core i7 5960x with 32GB DDR4 and a 3090FTW3.
It runs everything: Cyberpunk, Far Cry 6, etc in max settings.
I definitely want to upgrade to a 12th, 13th or 14th generation - although I want an all new DDR5 motherboard, but considering my 5960X is doing its job so well, my thought is that games really aren't very demanding right now. An 8-core CPU - even a Ryzen mobile 8 core - seems to be able to run games fine. The GPU seems to be the limiting factor.

RAM, not so much a limiting factor. Once you have 16GB it seems most games don't demand more.
 
OK Sausagemeat, here's my praise and thanks for Intel getting back in the game and hopefully turning price competition mode back on. Genuinely happy to see it although I'm not planning on throwing out my 5950x just yet. Even if they are just another source of supply of a desirable part I'm grateful just for that too.

Also, here's my hat tip to the tin foil hat crowd who noted the Windows 11 AMD performance bug could coincide nicely with Intel Alder Lake reviews. I thought that was a stretch, but sure enough here's Intel publishing graphs taking advantage of it and apparently not even acknowledging the short-lived nature of the issue or that it is now fixed. Even if it's an intentional gambit I still don't think it will matter -- any good reviewer will do their own investigation not based on the bug and the target market for this chip will see those reviews -- but it does make me think less of Intel's integrity to publish these graphs knowing full well they are not reflective of actual silicon performance differential. And if I were an AMD lawyer I might want to send a letter to Microsoft & Intel asking them to confirm the AMD bug was not created on purpose for this stunt and to update the materials.
 
RIP Ryzen 5000 lol. I’ve had my 4790K for 7 years and I genuinely thought my next CPU would be a Ryzen part with the way Intel were going. But by the looks of this I will almost certainly be getting a 12600K.

I look forward to all the AMD fans praising Intel for forcing AMD to lower their prices as they undoubtedly will after this launch.

If you don't mind waiting till 2022 Q1 the new AMD processor is going to be faster. I'd also take these benchmarks with a grain of salt as they were done comparing it to a nerfed AMD cpu running 15% slower.

"Intel shows showing the 12900K being anywhere from slightly slower than the Ryzen 9 5950X, to being 30 percent faster. However, Intel does admit that these benchmarks were captured on Windows 11 before the performance patch for AMD CPUs was available, so the results aren’t as meaningful as they would have been had they tested the 5950X in its best performing mode, such as using Windows 10 or waiting for the patch to be available. "

https://www.guru3d.com/news-story/a...n-4-later-that-year-with-pcie-5-and-ddr5.html
 
This is really a comical introduction, with Intel pulling out all the stops:

However, Intel does admit that these benchmarks were captured on Windows 11 before the performance patch for AMD CPUs was available, so the results aren’t as meaningful as they would have been had they tested the 5950X in its best performing mode, such as using Windows 10 or waiting for the patch to be available...
It should also be noted that Intel tested with Windows 11’s virtualization-based security feature enabled. We recently showed in our Windows 11 vs Windows 10 benchmark test that VBS does hurt performance on the Core i9-11900K in games

Anything to try and look good. Well the Intel fanboys have another week of happy dreams until actual independent, level-playing field tests come out next week. I still expect these CPUs to be very good but not quite as great as these cherry-picked situations make them out to be.

Did I mention that I love marketing?
 
OK Sausagemeat, here's my praise and thanks for Intel getting back in the game and hopefully turning price competition mode back on. Genuinely happy to see it although I'm not planning on throwing out my 5950x just yet. Even if they are just another source of supply of a desirable part I'm grateful just for that too.

Also, here's my hat tip to the tin foil hat crowd who noted the Windows 11 AMD performance bug could coincide nicely with Intel Alder Lake reviews. I thought that was a stretch, but sure enough here's Intel publishing graphs taking advantage of it and apparently not even acknowledging the short-lived nature of the issue or that it is now fixed. Even if it's an intentional gambit I still don't think it will matter -- any good reviewer will do their own investigation not based on the bug and the target market for this chip will see those reviews -- but it does make me think less of Intel's integrity to publish these graphs knowing full well they are not reflective of actual silicon performance differential. And if I were an AMD lawyer I might want to send a letter to Microsoft & Intel asking them to confirm the AMD bug was not created on purpose for this stunt and to update the materials.
Sure, why would you throw out a 5950X, youl be good for many many years to come. Well unless you happen to need PCIe5 support, which I find unlikely but I said the same about PCIe4 last year and I was wrong, future PCIe4 GPU upgrades will be gimped in PCIe3 slots.

I’m sorry but implying that MS and Intel are working together, breaking the law to tarnish AMD, one of Microsofts biggest and most crucial business partners is absurd and people who believe that should be put in the same category as flat earthers. Of course Intel released graphs when AMD were at a disadvantage, what do you expect from Intels corporate marketing team? Also are you aware how often windows updates change the performance of CPUs? You are only outraging because the tech press told you about this one. But I challenge you, read the notes for every windows update that comes out from now on. Youl think there are conspiracies everywhere.

If anything it looks to me that MS rushed this fix out in time for the Alder lake reviews. There was a much shorter fix time than I would expect for such an issue.

If Intel are actually back on top (we still haven’t seen Tim and Steves numbers to verify Intels claims) then I’m sure there will be all sorts of stories from the AMD fans about how Intel are using dirty practises, fixing benchmarks and stabbing babies etc. They pretty much always won’t be true.
 
I hope they will be available in new macs too

I really do, too so that the Hackintosh community receives x86 compatible macOSes for years to come. But Apple already has comparable core count CPUs to these so there's no chance that these Alder Lakes will show in Macs. However if 16-24 core Xeons based on these cores become available and Apple can't scale the M1 series to higher core counts, then mayyyybe we could see a Mac Pro or iMac Pro with those Xeons.

Unfortunately I doubt it but there's always hope.
 
Look at that +15% average gaming performance over Zen3 and not +50% as the whole tech press and YT channels were shouting from the rooftops with clickbait titles of leaks up until now.

Do you know what also has a +15% average gaming performance increase over Zen3? Yup, Zen 3D. :cool:

On another note I do like the prices, should make Zen3 drop lower, so that's good.

Edit: Also that TDP, rofl.
Edit2: HUB just confirmed that the benchmarks intel did were on Win11 without the Zen3 patch ON, rofl again.
 
I want to know what intel is thinking putting E cores on the K series parts but NOT on the lower end i5 parts. The lower end are the ones going into office PCs and media PCs that could benefit more from the power savings.
Sure, why would you throw out a 5950X, youl be good for many many years to come. Well unless you happen to need PCIe5 support, which I find unlikely but I said the same about PCIe4 last year and I was wrong, future PCIe4 GPU upgrades will be gimped in PCIe3 slots.
Only if you buy AMD midrange GPUs with a x8 interface, which Quantum will never do because he has to spend a used cars worth of money on each GPU he owns.
My Youtube streaming PC is a Core i7 5960x with 32GB DDR4 and a 3090FTW3.
It runs everything: Cyberpunk, Far Cry 6, etc in max settings.
I definitely want to upgrade to a 12th, 13th or 14th generation - although I want an all new DDR5 motherboard, but considering my 5960X is doing its job so well, my thought is that games really aren't very demanding right now. An 8-core CPU - even a Ryzen mobile 8 core - seems to be able to run games fine. The GPU seems to be the limiting factor.

RAM, not so much a limiting factor. Once you have 16GB it seems most games don't demand more.
Even quad core i5s from the ivy bridge era, when OCed, can still maintain 60-70 FPS 1% mins in every game out today. The GPU has been the limiting factor for the last 10 years, unless you play NMS, supreme commander, or AotS, the former of which needs 10+ GB of free memory to truly run smoothly and the latter two cant get enough bandwidth even with modern DDR4 systems.
 
I hope they will be available in new macs too
I doubt it since it seems like Intel and Apple are competitors now with the release of the M1 and now the M1 Pro and M1 Max.
Before, I would brush off buying apple, but I've really been looking at those M1 minis.

These prices for the 12th gen are very competitive, because not only do they have to compete with AMD, but also Apple now, so Intel had no choice but to try to step it up with these CPUs.
 
If you don't mind waiting till 2022 Q1 the new AMD processor is going to be faster. I'd also take these benchmarks with a grain of salt as they were done comparing it to a nerfed AMD cpu running 15% slower.

"Intel shows showing the 12900K being anywhere from slightly slower than the Ryzen 9 5950X, to being 30 percent faster. However, Intel does admit that these benchmarks were captured on Windows 11 before the performance patch for AMD CPUs was available, so the results aren’t as meaningful as they would have been had they tested the 5950X in its best performing mode, such as using Windows 10 or waiting for the patch to be available. "

https://www.guru3d.com/news-story/a...n-4-later-that-year-with-pcie-5-and-ddr5.html
I’m waiting until I get a GPU. No point upgrading a CPU when the GPU it powers is an RX480. So, if the new Ryzen stuff is better value than the new Intel stuff when I can eventually get a GPU then I’ll get Ryzen.

However, the new Ryzen stuff is DDR4. I’m not too keen on that, I think I’d rather get into DDR5. I’m currently using DDR3 and it doesn’t seem to make much sense to upgrade to a memory spec that was released not long after I bought my current CPU.

Currently, $300 for a gaming CPU feels a bit much and that’s the cheapest AMD go with Ryzen 5000, so I would need to see a price cut and a performance boost from the next gen of Ryzen.


However I am currently bidding on several RTX cards on eBay. If I actually get one then no way am I waiting until next year to upgrade my CPU lol.
 
Pricing is good specially for i5 and i7

AMD fanboys now suddenly care about gaming only ??? Even though most of these CPU limited gaming test (using best GPU and running it a low resolution) don't have any actual benefit for gamers in real word.

What about 3d rendering, photoshop, compression.... ?? Nobody cares about that anymore ?? i5 and i7 will easily crush 5600X and 5800X in these


Do you know what also has a +15% average gaming performance increase over Zen3? Yup, Zen 3D. :cool:

15% average gaming performance on AMD tests = probably mean like 8-10% average when tested by independent review

6700XT was faster than RTX 3070 on average in AMD own tests at 1440p
https://www.dsogaming.com/wp-content/uploads/2021/03/4.jpg

Techspot review shows that 6700XT is 8% slower than 3070 at 1440p
https://static.techspot.com/articles-info/2227/bench/1440p.png
 
Not buying Intel ever again, not supporting the Corp that brought us tech stagnation for 10 years, the company that would gladly still sell mainstream customers monocore and dual core CPUs if they could get away with it and the company that put in my Haswell CPU the absolute worse TIM they could find on the market just so it won't OC well as a planned obsolescence feature, so the end user had had to delid.

No matter what they do, I think Intel and their shady business practices and mafia like manipulation of their partners (so they only buy Intel CPUs for their laptops and prebuilts) must be appropriately punished by the end customer so they will learn a lesson.
 
Only if you buy AMD midrange GPUs with a x8 interface, which Quantum will never do because he has to spend a used cars worth of money on each GPU he owns.

Today yes only if you buy the 6600XT you are gimped on PCIe3 as it only has an 8X interface.

But what about in 4 years time, say you buy a hypothetical RTX 5070 or an RTX 5080? If you’ve only just bought a CPU it’s not unreasonable to keep it that long at all and a bit annoying to have to upgrade what could be something like a 10900K after just 3-4 years.
 
What about 3d rendering, photoshop, compression.... ?? Nobody cares about that anymore ?? i5 and i7 will easily crush 5600X and 5800X in these

15% average gaming performance on AMD tests = probably mean like 8-10% average when tested by independent review

Seems odd to lead a complaint about exaggerated benchmarks with a completely unfounded opinion about the exact same thing. Especially when this was noted in the analysis:

"...at no point during Intel’s presentation did they compare 12th-gen CPUs to AMD’s competitors in productivity workloads, and made no claims about being the world’s fastest chip for productivity, like they did for gaming. This suggests that Intel are unlikely to beat AMD in productivity."

Well, it's really not odd when both contradictory arguments are in favor of the same brand...
 
Lol, AMD were second place to Intel in gaming for 15 years until last year and now just one year later it looks like Intel will snatch the crown straight back. Hopefully AMD wont let us down for another 15 years again this time. Lets hope Zen 4 punches back harder.

As for me, im on a 5800X, im very happy with it and I wont be looking for a new CPU for a few years yet. I actually keep forgetting to close the previous game and run a new game and dont even notice that my machine is running 2 games. Modern CPUs are more than capable of gaming very well. Really, gamers should be getting the lower core count variants of these parts and whichever company offer the cheaper option will probably be the better buy. I def should have saved my money and got a 5600X but here we are.

I find the comments amusing, lots of damage control from people emotionally attached to AMD it seems.
 
Not buying Intel ever again, not supporting the Corp that brought us tech stagnation for 10 years, the company that would gladly still sell mainstream customers monocore and dual core CPUs if they could get away with it and the company that put in my Haswell CPU the absolute worse TIM they could find on the market just so it won't OC well as a planned obsolescence feature, so the end user had had to delid.

No matter what they do, I think Intel and their shady business practices and mafia like manipulation of their partners (so they only buy Intel CPUs for their laptops and prebuilts) must be appropriately punished by the end customer so they will learn a lesson.
Intel, like MS, is a company with a bad soul.
 
Also, here's my hat tip to the tin foil hat crowd who noted the Windows 11 AMD performance bug could coincide nicely with Intel Alder Lake reviews. I thought that was a stretch, but sure enough here's Intel publishing graphs taking advantage of it and apparently not even acknowledging the short-lived nature of the issue or that it is now fixed. Even if it's an intentional gambit I still don't think it will matter -- any good reviewer will do their own investigation not based on the bug and the target market for this chip will see those reviews -- but it does make me think less of Intel's integrity to publish these graphs knowing full well they are not reflective of actual silicon performance differential. And if I were an AMD lawyer I might want to send a letter to Microsoft & Intel asking them to confirm the AMD bug was not created on purpose for this stunt and to update the materials.
OK, just so we'er clear "advertising" and "integrity", are antonyms, NOT, synonyms.(And most times it doesn't even matter who's doing the talking).
 
Today yes only if you buy the 6600XT you are gimped on PCIe3 as it only has an 8X interface.

But what about in 4 years time, say you buy a hypothetical RTX 5070 or an RTX 5080? If you’ve only just bought a CPU it’s not unreasonable to keep it that long at all and a bit annoying to have to upgrade what could be something like a 10900K after just 3-4 years.
Well the 3080 was the first flagship that showed a major difference between 2.0x16 and 3.0x16 (5% difference), and the last 2.0 platform was sandybridge in 2011. So I think you'll be fine.
 