AMD's top-end Zen server chip will reportedly pack 32 cores / 64 threads and 64MB of L3 cache

Shawn Knight


AMD’s next-generation CPUs likely won’t arrive until sometime next year (CES in January seems like as good an unveiling venue as any). That leaves plenty of time for speculation and rumor, which is exactly what’s on offer today.

Sources reportedly familiar with the matter tell Fudzilla that AMD’s top-end server chip, codenamed Naples, will sport a 32-core / 64-thread configuration with a mind-numbing amount of cache on tap. It’ll be built on a 14-nanometer FinFET manufacturing process by GlobalFoundries with support for the x86 instruction set.

The publication says each Zen core will have its own dedicated 512KB cache and that each “cluster” will have 8MB of L3 cache, bringing the total shared L3 to 64MB. There are also said to be eight independent memory channels and 128 PCIe Gen 3 lanes, with support for up to 32 SATA or NVMe drives.
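For what it’s worth, the rumored figures add up. Here is a quick back-of-the-envelope sketch in Python (assuming, purely for illustration, that a “cluster” means four cores; the report doesn’t say):

# Sanity check of the rumored Naples cache topology.
# ASSUMPTION (not stated in the report): a "cluster" is four cores,
# i.e. 32 / 4 = 8 clusters of 8MB L3 each.
CORES = 32
L2_PER_CORE_KB = 512        # dedicated cache per core, per the rumor
CORES_PER_CLUSTER = 4       # assumed cluster size
L3_PER_CLUSTER_MB = 8       # shared L3 per cluster, per the rumor

clusters = CORES // CORES_PER_CLUSTER
total_l3_mb = clusters * L3_PER_CLUSTER_MB
total_l2_mb = CORES * L2_PER_CORE_KB // 1024

print(f"{clusters} clusters x {L3_PER_CLUSTER_MB}MB = {total_l3_mb}MB L3")  # 64MB
print(f"plus {total_l2_mb}MB of dedicated per-core cache")                  # 16MB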

Naples will also reportedly support sixteen 10GbE ports via an integrated controller that will likely reside in the chipset.

Keep in mind that Naples is the high-end chip of the family. There will also reportedly be dual-, quad- and sixteen-core server versions with TDPs between 35W and 180W.

AMD CEO Lisa Su previously told the publication that the desktop version will arrive first, followed by server, notebook and embedded solutions. Fudzilla says the server chips will be spread out over a staggered release that could begin as early as late 2016.


 
Oooh, so exciting!! Even if they finish a bit behind Intel in IPC, the feature set is impressive, on par with or superior to what Xeon E5 v4 offers. As far as I know the E5 v4 has eight memory channels too, but Zen would have more PCIe lanes and 10GbE ports.
 
I hope AMD doesn't make the same mistake they did with their previous GPU generations. Tahiti, for example, had DX12 support and async shaders for years, yet there are still very few DX12 titles, and they barely take advantage of GCN's features. AMD implements so many features on their chips, and those extra features take up die space. On GCN's account, I still think those cards never showed their full potential outside of DX12 apps, whereas users' main need was in DX11 apps.

I see Bulldozer's failure the same way: at release, those chips actually provided good multi-core performance (remember, Bulldozer was almost on par with high-end i7s in heavily threaded apps and benchmarks), but the market simply wasn't ready to move to efficient SMT programming. And now we see a similar pattern with Zen? A chip with lots of futuristic features and "all-around" performance? Please, AMD, we've been there, seen that. We (at least the mainstream market) don't need futuristic, semi-useful features; we need decent, practical features and high performance, mainly for gaming, at good value. Please don't make the same mistake....
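To put the "heavily threaded" point in concrete terms, Bulldozer's strength only showed up in workloads shaped roughly like this (a minimal, illustrative Python sketch; the names and numbers are made up, and most consumer software of the era simply wasn't written this way):

# Illustrative sketch of a "heavily threaded" workload: the same
# CPU-bound job fanned out across every available core.
import time
from multiprocessing import Pool, cpu_count

def busy_work(n):
    # CPU-bound stand-in: sum of squares, just to burn cycles
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * cpu_count()

    start = time.perf_counter()
    for n in jobs:
        busy_work(n)                  # serial: one core does everything
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:
        pool.map(busy_work, jobs)     # parallel: one worker per core
    parallel = time.perf_counter() - start

    print(f"speedup: {serial / parallel:.1f}x on {cpu_count()} cores")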
 

GCN isn't a very good example of a mistake. It's a very good architecture, and yes, the cards are underutilized in DX11. I think their being DX12-compatible is just thanks to the inherent design of the card; AMD didn't go out of their way to add DX12 to their cards.
 
I saw the die diagrams of GCN (pre-Polaris) cards and compared them to the Maxwell ones. Now, I'm not tech-savvy, but it looked like the hardware async compute units take up a lot of space that could perhaps have been used to boost raw performance, or perhaps left out entirely to cut power consumption. Nvidia, on the other hand, got away with a DX11-centric approach, spending minimal die area on DX12 resources and optimizing their architecture for DX11 app performance (obviously). So while Nvidia provided a single-edged sharp knife, AMD chose to provide a Swiss Army knife. I remember back in the first GCN days AMD cards provided much superior compute performance too. I favored AMD for providing so much on a single chip, but obviously things don't work that way in actual user space. Gamers favor gaming performance, and most of them have no idea of the specifics of their hardware (and they don't have to).
 

The async compute units are part of the card's scheduling, and it would not function without them. They allow the GPU to queue up compute tasks easily. In addition, they take up very little space, less than 3% of the die. That seems like a very good use of resources. FYI, this didn't only help gamers: the 7000 series chips were hard to find for some time because they were beasts at compute. You had to have one if you wanted to mine bitcoin.
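If it helps, here's a loose software analogy for what those queues buy you (purely conceptual Python; the real ACEs are hardware schedulers, and nothing below reflects AMD's actual implementation):

# The point of the analogy: compute work sits in its own queue and is
# drained alongside graphics work instead of waiting behind it.
import threading
import queue
import time

def dispatch(name, q):
    while True:
        task = q.get()
        if task is None:      # sentinel: queue is drained
            break
        time.sleep(0.01)      # stand-in for real GPU work
        print(f"{name} executed {task}")

graphics_q = queue.Queue()
compute_q = queue.Queue()
for i in range(3):
    graphics_q.put(f"draw-{i}")
    compute_q.put(f"kernel-{i}")
graphics_q.put(None)
compute_q.put(None)

workers = [threading.Thread(target=dispatch, args=("gfx", graphics_q)),
           threading.Thread(target=dispatch, args=("ace", compute_q))]
for t in workers:
    t.start()
for t in workers:
    t.join()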
 
Ohhh lol, I was thinking the async compute units took up much space because the image I saw indicated that. Maybe it was a faulty representation.
 
You had to have one if you wanted to mine bitcoin.

You mean litecoins. And the ones doing the mining were running the cards 24/7, meaning they weren't being used to play games. So how did mining help gamers again? Are you talking about all the used cards that went up for sale after miners switched to ASICs? Because if you are, sure, that got gamers cheap cards, but it also meant fewer sales for AMD.
 

I don't know how you missed this part of my post: "FYI, this didn't only help gamers." Nowhere did I say that mining helped gamers, and you don't even have the context right. Please don't attempt another strawman argument.

No, it was bitcoin. How do I know? Because I mined bitcoins. The 7970 was the card to have just as bitcoin pricing started exploding from sub-dollar to the $20 range.
 
By the way, I realised that there's no mention anywhere of an iGPU on Zen. Actually, this is a key point: AMD has split APUs and CPUs into two different segments. As I see it, even Skylake has a large part of its die area occupied by iGPU resources (this I double-checked :) ), so this is good news for Zen?
 
Haven't AMD APUs and CPUs always been separate? There's no GPU on an FX CPU, is there? My last AMD CPU was, and still is, an old 3500+ on Socket 940; I still have it running with an ATI card.
 
Yeah, but Intel changed that convention long ago; nowadays even Pentiums have an iGPU stuck on them.
 