Intel Diamond Rapids server CPU with up to 192 cores to debut in 2026, will go up against AMD's Epyc Venice

DragonSlayer101

Posts: 693   +4
Staff
Something to look forward to: Intel is expected to launch its next-generation Xeon CPUs next year, aiming to finally challenge AMD more effectively in the data center market after trailing for several generations. Codenamed Diamond Rapids, the lineup is rumored to include a 192-core flagship model, matching AMD's top-tier Epyc 9965 in terms of core count.

According to a leaked slide posted on X, each CPU will feature up to four compute tiles, with the 192 cores distributed evenly across four 48-core tiles. Other leaked specifications include a 500W TDP for a single-socket configuration and support for between eight and 16 DDR5 memory channels per SoC. The slide also indicates that the CPUs will be manufactured using Intel's advanced 18A process node.

The new lineup will reportedly utilize high-performance Panther Cove cores with support for Intel's APX (Advanced Performance Extensions), which significantly improves the efficiency of AMX (Advanced Matrix Extensions) and introduces native support for additional floating-point formats, including TF32 and FP8.

While the latest leak doesn't shed any more light on the next-gen Xeon chips, earlier reports revealed they will feature a 9324-pin LGA socket.

Diamond Rapids is also expected to support second-generation MRDIMM memory, enabling data transfer rates of up to 12,800 MT/s – significantly higher than the 8,800 MT/s supported by the current Granite Rapids processors. If accurate, the upgraded memory subsystem could deliver up to 1.6 TB/s of peak bandwidth, a substantial improvement over Granite Rapids' 844 GB/s.
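The bandwidth figures above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes 12 memory channels for Granite Rapids (consistent with its 844 GB/s figure) and the rumored 16 channels for Diamond Rapids, with standard 64-bit (8-byte) DDR5 channels; the MT/s numbers are the leaked values from the article, not confirmed specifications.

```python
def peak_bandwidth_gbs(channels: int, mts: int, bytes_per_transfer: int = 8) -> float:
    """Peak memory bandwidth in GB/s: channels x megatransfers/s x bytes per transfer.

    Assumes a standard 64-bit (8-byte) channel; 1 GB = 1e9 bytes.
    """
    return channels * mts * 1e6 * bytes_per_transfer / 1e9

# Current Granite Rapids: 12 channels of DDR5/MRDIMM at 8,800 MT/s
granite = peak_bandwidth_gbs(channels=12, mts=8800)   # ~844.8 GB/s

# Rumored Diamond Rapids: 16 channels of gen-2 MRDIMM at 12,800 MT/s
diamond = peak_bandwidth_gbs(channels=16, mts=12800)  # ~1638.4 GB/s, i.e. ~1.6 TB/s

print(f"Granite Rapids: {granite:.1f} GB/s")
print(f"Diamond Rapids: {diamond / 1000:.2f} TB/s")
```

Both results line up with the article's quoted 844 GB/s and roughly 1.6 TB/s peak figures.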

Intel's upcoming data center CPUs will be part of the Oak Stream platform, offering support for up to four sockets and PCIe Gen 6 connectivity. Notably, while Diamond Rapids will be built on Intel's 18A process node, the company is reportedly steering external foundry customers toward the newer, more efficient 14A node instead.

Diamond Rapids will succeed Granite Rapids as Intel's next-generation server processor lineup and is expected to debut sometime in 2026. It will compete directly with AMD's Epyc Venice lineup, confirmed for launch in 2026. Venice is set to feature up to 256 Zen 6 cores, representing a significant jump from the current Epyc 'Turin' 9965, which offers up to 192 Zen 5c cores, 384MB of L3 cache, and a 2.25 GHz base clock.


 
I swear, I'm not trying to stir the pot, but I don't think Intel would lay off tons of people if they didn't have a competitive product ready for production
 
Intel finally copied AMD's chiplet approach. It only took seven years, congratulations Intel! Clock speeds are unknown; I expect very low ones, especially if AVX-512 is supported. At least Intel gets moar cores, but once again AMD will take the lead right when this is released.

The funny thing is that Intel uses four "IO dies" whereas AMD still probably has only one. AMD's approach makes it much easier to unify memory latencies; Intel's solution makes that very hard to achieve.
 
I swear, I'm not trying to stir the pot, but I don't think Intel would lay off tons of people if they didn't have a competitive product ready for production
That sentence is too complicated.

Also, it's not just about being "competitive". Only total domination will save Intel in its current form. You need to dominate and collect hefty margins in order to justify retaining such an army of employees.
 
That sentence is too complicated.

Also, it's not just about being "competitive". Only total domination will save Intel in its current form. You need to dominate and collect hefty margins in order to justify retaining such an army of employees.
The only thing making it complicated was my typo. I often post from my phone, and it switched "wouldn't" to "would".
 
Intel finally copied AMD's chiplet approach. It only took seven years, congratulations Intel! Clock speeds are unknown; I expect very low ones, especially if AVX-512 is supported. At least Intel gets moar cores, but once again AMD will take the lead right when this is released.

The funny thing is that Intel uses four "IO dies" whereas AMD still probably has only one. AMD's approach makes it much easier to unify memory latencies; Intel's solution makes that very hard to achieve.
Not sure where you've been, but Intel has been using Tiles for a while now as well as 3D stacking.
And where did you get 4 IO dies from?
 
Not sure where you've been, but Intel has been using Tiles for a while now as well as 3D stacking.
And where did you get 4 IO dies from?
You didn't see many tiles on server CPUs. This approach is much like AMD's chiplets (separate dies: one for IO, plus CPU clusters).

From the diagram, it seems a single CPU cluster has 4 DDR5 channels, so 4 CPU clusters with 16 DDR5 channels in total would require 4 "IO dies" connected together. AMD has a single IO die that handles all memory access.
 
You didn't see many tiles on server CPUs. This approach is much like AMD's chiplets (separate dies: one for IO, plus CPU clusters).

From the diagram, it seems a single CPU cluster has 4 DDR5 channels, so 4 CPU clusters with 16 DDR5 channels in total would require 4 "IO dies" connected together. AMD has a single IO die that handles all memory access.
I wish anandtech was still around, so we could get some die shots and a processor breakdown like the olden days.
 
You didn't see many tiles on server CPUs. This approach is much like AMD's chiplets (separate dies: one for IO, plus CPU clusters).

From the diagram, it seems a single CPU cluster has 4 DDR5 channels, so 4 CPU clusters with 16 DDR5 channels in total would require 4 "IO dies" connected together. AMD has a single IO die that handles all memory access.
This is a joke, right?
Tiles have been used in the majority of Intel's Xeon CPUs starting with Sapphire Rapids, and the first consumer part was MTL, followed by LNL.

"If the slide accurately depicts Diamond Rapids, then the CPU will use up to six chiplets: up to four compute tiles (produced using Intel 18A fabrication process and packing up to 48 cores) and two I/O tiles that will control memory interfaces..."
 