Our take on AMD Zen 2 CPU and Navi GPU rumors

First, I give Steve and Tim credit for having the patience of saints.

I don't think Jim is an actual shill for AMD--he's not that professional. I do think he's an aggressive* megalomaniac whose "passion" results in a negative, partisan message contrived through confirmation bias, one that damages the PC hardware enthusiast community.

Perhaps it's my own personal filter and selective memory, but I feel like his influence and his followers have made the conversation so much more toxic. It's simply not as fun. There has always been bad blood between the confused fans of red, blue, and green, but since he really came on the scene with his "Tech Press Loses the Plot" video, the enthusiast community seems so much more polarized and wrapped up in this nonsense about the companies instead of focusing on the products.

Jim is right that lawyers do be like lawyers and VPs of Marketing do be like VPs of Marketing sometimes. That's how they roll, and it removes innocence from a hobby that we adore (ha). However, those actions aren't unique to the PC hardware industry; they certainly aren't unique to Intel and Nvidia, and I sure as hell don't want them to be the focus of something that is supposed to be a fun distraction.

I’m not trying to bury my head in the sand about bad business practices; I’m trying to keep this in bloody perspective. It’s the difference between cheering loudly at a pro sporting event and being an angry loudmouth swearing obscenities at the referees in front of the 7-year-old next to you. For example, Nvidia coming out with cards that don’t offer huge uplifts in performance for the price should result in a reaction of, “kthanks, won’t be buying that”--the same way people wouldn’t buy a lawn mower or fishing kayak that got poor reviews.

The reaction from the community to poor PC gaming products shouldn’t be seeing red (ha), genuinely hating company X and its patrons, or feeling the need to burn this mutha down with negativity in videos or comments. However, that seems to be a common theme now, and a lot of the rhetoric seems to be generated by Jim's message.



*He absolutely has posted under his videos that certain dissenting commenters should kill themselves. I saw them. They are now deleted. He has also aggressively gone after people who dare to question him. For example, his original post under Steve’s 'cherry picked Intel CPUs - tinfoil hat' video was super aggressive and tantrum-like. Also now deleted. Believe me about the deleted posts or don’t. I saw them.

Can't deny that Jim is pretty hot-headed sometimes. He's getting big enough that he needs to cut that out. There are plenty of people who are going to harass you as a YouTuber, and dealing with it is just part of the job.

To be honest, though, I don't read YouTube comments, as they tend to be pretty bad most of the time, so I've only seen one or two instances myself of him stepping out of line. But it certainly does happen.

His analysis is pretty good and I would like it to continue, but I do not want those kinds of comments to be allowed.
 
1. Losing the integrated memory controller will mean huge latency penalties when accessing RAM.
2. A two-chip solution is quite expensive for desktop, even for a $300+ part.

Far as I remember, AMD said at the Rome briefing that latency is down.

As for a two chip solution, why would it be significantly more expensive? I'm sure there's an overhead, but why do you think it's significant enough to offset the savings of using a single small chiplet for everything and 14nm for the I/O?
 
It does not matter if AdoredTV is right or not.

I think there's a bigger picture here than meets the eye, folks. Jim@AdoredTV seems to have industry insiders who got him all this info.

This is not a question of whether he's right or not; it is a question of whether his leaks are right or not.

We will see when the time comes at CES in Vegas. Personally, I do hope his leaks are true. That would bring much-needed competition to the CPU and GPU space. And Huang needs a good dose of modesty while we are at it.
 
First was a comparison between the RX480 and the GTX1060 from 2016, harping on about how AMD's arch would hold up way better--as would soon become apparent with driver updates and the arrival of DX12--and how Nvidia was just clinging on.

I just happened to check some reviews today, and while the 1060 beat the 480 soundly at release, with recent games they're much closer, with AMD usually taking the top spot. So I think that he was pretty spot on with this.
 
I just happened to check some reviews today, and while the 1060 beat the 480 soundly at release, with recent games they're much closer, with AMD usually taking the top spot. So I think that he was pretty spot on with this.

No, he wasn't. At launch they were fairly closely matched, and the assertion was that the RX480 would run away with it within a short period of time. I disagreed, saying the GTX1060 would probably remain better due to its high overclocking potential and Nvidia's large driver teams, plus the lack of projected DX12 games.

Over two years later, across a wide variety of games tested today, the GTX1060 still sits right between the 480 and the 580. A year ago Techspot tests came up with 2 percent either way. Recently, against the RX580, another outlet came up with the same 2 percent figure. Basically, the GTX1060 is currently easily as fast as the RX480. Actually, it's still faster according to two separate in-depth tests from respected sites; either way, that is still contrary to what he claimed 2 years ago.

What's more, I pointed out that Nvidia is significantly faster in DX11 (still the majority of games even today) when the CPU used is not a monster. This, if anything, is one of the most important yet overlooked factors for people buying mid-range cards, because they are likely to be pairing them with less than the best CPU on the market.

Just look at the recent Hitman 2 benchmarks for this, after DX12 support was ditched.
 
Far as I remember, AMD said at the Rome briefing that latency is down.

As for a two chip solution, why would it be significantly more expensive? I'm sure there's an overhead, but why do you think it's significant enough to offset the savings of using a single small chiplet for everything and 14nm for the I/O?

Core-to-core communication latency is down; that's easy to believe. However, low memory latency without an integrated memory controller is pretty much impossible.

A two-chip solution needs more testing, requires more connections, makes binning more difficult, and adds one more thing that may break. Designing that I/O chip to be suitable for Ryzen also takes some resources.

For a simple cost comparison: a Ryzen 7 2700X made on 7nm would be around 80mm2 by my estimate, and a "Zen2-powered 2700X" probably around 100mm2. Judging from the picture, the chiplet is around 70mm2 and a quarter of the I/O die is around 110mm2. If a 7nm wafer costs around $10K and a 14nm wafer around $5K, then:

Chiplets per wafer: ~1009, cost per unit ~$9.91
I/O dies per wafer: ~642, cost per unit ~$7.79
"Zen 2 powered 2700X's" per wafer: ~706, cost per unit ~$14.16

So, judging by wafer costs alone, a monolithic two-CCX Zen 2 die would need to be at least around 125mm2 for the chiplet design to be cheaper. Smaller dies yield better, but the chiplet design is more expensive to package, etc.; neither makes a major difference in either direction.
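
To make that arithmetic easy to poke at, here's a minimal sketch of the same wafer-cost comparison in Python. The wafer prices and die sizes are the rough assumptions from this post, not confirmed foundry numbers, and the model deliberately ignores edge loss, defect yield, and packaging costs:

```python
import math

# Naive wafer-cost comparison: monolithic 7nm Zen 2 die vs.
# 7nm CPU chiplet + 14nm I/O die, on a 300mm wafer.
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # ~70,686 mm^2

def cost_per_die(wafer_cost_usd, die_area_mm2):
    """Wafer cost split over whole dies by area; no edge loss or defects."""
    dies_per_wafer = WAFER_AREA_MM2 // die_area_mm2
    return wafer_cost_usd / dies_per_wafer

chiplet = cost_per_die(10_000, 70)   # 7nm CPU chiplet      -> ~$9.91
io_die  = cost_per_die(5_000, 110)   # 14nm Ryzen I/O die   -> ~$7.79
mono    = cost_per_die(10_000, 100)  # monolithic 7nm Zen 2 -> ~$14.16

print(f"chiplet + I/O: ~${chiplet + io_die:.2f} vs. monolithic: ~${mono:.2f}")

# Break-even: the monolithic die size at which the chiplet route gets cheaper.
cost_per_mm2_7nm = 10_000 / WAFER_AREA_MM2
print(f"break-even: ~{(chiplet + io_die) / cost_per_mm2_7nm:.0f} mm^2")
```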
 
Actually, it's still faster according to two separate in-depth tests from respected sites

It would be nice if you could state which sites those are.

Just looking at recent gaming benchmarks at TechSpot and Anandtech, the RX 480/580 look better.

The only thing I find really interesting is that the 1060 now beats the RX crowd at Ashes of the Singularity.
 
Designing that I/O chip to be suitable for Ryzen also takes some resources.

But probably a lot less than designing another Zen 2 CPU, including new 7nm I/O.

For a simple cost comparison: a Ryzen 7 2700X made on 7nm would be around 80mm2 by my estimate

Far as I understand, I/O doesn't scale as well as logic and SRAM, so I think your estimates are optimistic.
 
But probably a lot less than designing another Zen 2 CPU, including new 7nm I/O.

Quite the opposite. Everything other than the Zen2 CPU cores and the PCIe 4.0 controller can be recycled from the Zeppelin design with very little effort.

Far as I understand, I/O doesn't scale as well as logic and SRAM, so I think your estimates are optimistic.

Probably, yes. But even if my estimates are 30% too optimistic, it still doesn't make the chiplet design a clear winner from a cost perspective.
 
One thing hit pretty heavily in the TechSpot article here is whether or not AMD will announce the supposed new products at CES.

Can't say for sure but I don't think AdoredTV made a strong prediction of that - and in any case it's one of the least important aspects of his speculations. Many commenters recently are focused on finding instances where he was wrong, rather than looking at the big picture. Isn't it much more important to know what AMD can and probably will produce in 2019, than when they'll choose to announce it? The former was set in motion years ago whereas the latter could change weekly.

Pricing too is very hard to predict. AdoredTV has stated what he thinks AMD could charge - he hasn't guaranteed that they will. But forum warriors everywhere seem to miss that point, and go off ranting about how it's all BS and will never happen etc.

I think we'll find that AdoredTV's predictions of the technical situation will be pretty close. And BTW who else is putting in this level of effort at trying to tell us the future - for free? Anybody?

Finally, to any who criticize his analysis based on what others report of it: you're working with incomplete info. Watch the video yourselves and listen to what he says. You'll learn something.
 
Quite the opposite. Everything other than the Zen2 CPU cores and the PCIe 4.0 controller can be recycled from the Zeppelin design with very little effort.

I don't see how it's the opposite. Rather, it strengthens the I/O chiplet argument. The Zeppelin design is possibly what made the I/O die easy in the first place: just take its I/O blocks and stick them in an I/O die, then connect it to a processing die. That tech AMD already has working.

Unless you're claiming that 14nm Zeppelins can be stuck as-is into a 7nm die. I would find such a claim strange.

Isn't it much more important to know what AMD can and probably will produce in 2019

We all more or less knew what AMD can and will produce. It was even speculated long ago that Ryzen 3000 would have more than 8 cores. Sure, some doubted that, but most people disbelieve the pricing and availability, not the specs.

And that's the part you yourself say is most speculative. It's also the part that most affects buying plans.
 
I don't see how it's the opposite. Rather, it strengthens the I/O chiplet argument. The Zeppelin design is possibly what made the I/O die easy in the first place: just take its I/O blocks and stick them in an I/O die, then connect it to a processing die. That tech AMD already has working.

Unless you're claiming that 14nm Zeppelins can be stuck as-is into a 7nm die. I would find such a claim strange.

The chiplet design's I/O die requires 8 memory channels and 128 PCIe lanes. There are only 2 memory channels and 32 PCIe lanes on a Zeppelin die. Also, while module-to-module communication latency and memory latency differ between Zeppelin dies, with an I/O chip solution the module-to-module and memory latencies are equal. Both things mean the I/O chip cannot just be copy-pasted from the Zeppelin design; it must be an (at least mostly) new design.

The Zeppelin die's I/O, however, is nothing more than a slightly boosted version of Bristol Ridge's I/O (more PCIe, more flexible SATA Express). So basically AMD took the design from 28nm Bristol Ridge and put it on 14nm Zen. AMD could just as easily take the I/O design from 14nm Zen, boost it a little (PCIe 4.0), and put it on 7nm Zen2.
 
Sadly, even AdoredTV had to call out our TechSpot author (Tim Schiesser) for his own errors in objective thinking...!

I am no AMD fanboy, but I do keep up on the tech industry and chip technology... not just AMD or Intel, but Qualcomm, Samsung, IBM, etc. (ie: the industry!) So, it happens that I have also watched Dr. Su's keynote speeches and have followed AMD's pursuit of heterogeneous computing since they bought ATi. And it became obvious to me that TechSpot shouldn't have unqualified people writing for them like this. If it was a personal blog... then sure. But not as TechSpot's take on it. Because if that is how TechSpot thinks, then you have lost me as a supporter and follower.

AMD clearly had slides, Dr. Su had already pointed out their uplift, and dismissing Jim's whole article amounts to calling a CEO a liar. It is basic mathematics, as indicated within A-Tv's second video. Nothing in A-Tv's video seemed implausible… yet Schiesser was miffed?
 
Sadly, even AdoredTV had to call out our TechSpot author (Tim Schiesser) for his own errors in objective thinking...!

I am no AMD fanboy, but I do keep up on the tech industry and chip technology... not just AMD or Intel, but Qualcomm, Samsung, IBM, etc. (ie: the industry!) So, it happens that I have also watched Dr. Su's keynote speeches and have followed AMD's pursuit of heterogeneous computing since they bought ATi. And it became obvious to me that TechSpot shouldn't have unqualified people writing for them like this. If it was a personal blog... then sure. But not as TechSpot's take on it. Because if that is how TechSpot thinks, then you have lost me as a supporter and follower.

AMD clearly had slides, Dr. Su had already pointed out their uplift, and dismissing Jim's whole article amounts to calling a CEO a liar. It is basic mathematics, as indicated within A-Tv's second video. Nothing in A-Tv's video seemed implausible… yet Schiesser was miffed?

I'm sure Tim is qualified to post this article; however, it just comes down to a lack of research on his part. It is good to have articles like this that give a counter-argument. This allows those who made the primary argument to bring to light the facts they've gathered, much like AdoredTV did in the video I embedded. Also, AMD has a negative connotation due to its history with the FX series. It's hard to break misconceptions about a brand after it has been tarnished for so long. Hopefully AdoredTV hits these predictions right on the money so that Intel is forced to wake up and realize that their offerings are heavily overpriced and they do not sit on the throne they once did. We will all win in the end.
 
The chiplet design's I/O die requires 8 memory channels and 128 PCIe lanes.

The more you post, the more it seems to me like you don't really understand what you're talking about. Why would there be any requirement for an I/O die not designed for EPYC to have 8 memory channels or 128 PCIe lanes? A Ryzen I/O die would only need to pack the normal AM4 I/O and pass it to the connected CPU chiplet.
 
The more you post, the more it seems to me like you don't really understand what you're talking about. Why would there be any requirement for an I/O die not designed for EPYC to have 8 memory channels or 128 PCIe lanes? A Ryzen I/O die would only need to pack the normal AM4 I/O and pass it to the connected CPU chiplet.

I talked about design costs. In that scenario there must be an I/O chip designed for EPYC and another one designed for Ryzen, as the EPYC I/O die is way too expensive for Ryzen.

So you mean like this: the chiplet is connected to an I/O die that handles all traffic out of the CPU? As previously stated, that will cause a big memory latency penalty, something nobody wants in desktop parts.
 
Isn't it much more important to know what AMD can and probably will produce in 2019

We all more or less knew what AMD can and will produce. It was even speculated long ago that Ryzen 3000 would have more than 8 cores. Sure, some doubted that, but most people disbelieve the pricing and availability, not the specs. And that's the part you yourself say is most speculative. It's also the part that most affects buying plans.

How dismissive.

So we all knew about the AMD CPU & APU 2019 lineup, with detailed info on cores / threads / clocks? Guess I missed that reveal. As did AdoredTV, since he went to the trouble of re-figuring it all out. But OK, you knew.

And the costs / pricing he presents are extremely important even if they're not the opening MSRPs. They show what AMD *can* do and still make money. That's what will drive the market. Even Lisa Su couldn't tell you now what you pretend to need for "buying plans".

Get real. These AdoredTV shows have made such a stir precisely because - right or wrong - they're alone in presenting a reasoned, detailed and coherent picture of what will hit the desktop market next year. It's telling that everyone ranting about it being shill BS either (1) presents zero technical rebuttal or (2) gets it wrong when they do. We'll see.
 
So we all knew about the AMD CPU & APU 2019 lineup, with detailed info on cores / threads / clocks?

No, but we don't know now, either. Speculation that I've read before, or posted myself, wasn't all that different. Sure, it wasn't as specific, but it wasn't fundamentally different. AdoredTV's predictions were audacious in their pricing; that's why they caused a stir. If everything had been priced higher, there wouldn't have been one. There wasn't anything particularly more reasoned or coherent about them than other speculation, just lower prices.

I talked about design costs. In that scenario there must be an I/O chip designed for EPYC and another one designed for Ryzen, as the EPYC I/O die is way too expensive for Ryzen.

We're going in circles. Again, designing an I/O die would be less costly than designing another CPU variant. Creating masks at 7nm would cost more than creating masks at 14nm.

Yes, I agree that latency will be higher. A hardware guy on Reddit said it would add 5-10ns. However, AMD has done reasonably well with higher latency, and a larger L3 cache could offset some of the penalty. Still, in my mind, it's not clear cut that a single die CPU is the best solution, and I haven't yet seen a really convincing argument that the chiplet solution is significantly inferior. Sure, it could go both ways, but I think that the chiplet solution is the better one, due to flexibility and reuse.
 
We're going in circles. Again, designing an I/O die would be less costly than designing another CPU variant. Creating masks at 7nm would cost more than creating masks at 14nm.

Yes, I agree that latency will be higher. A hardware guy on Reddit said it would add 5-10ns. However, AMD has done reasonably well with higher latency, and a larger L3 cache could offset some of the penalty. Still, in my mind, it's not clear cut that a single die CPU is the best solution, and I haven't yet seen a really convincing argument that the chiplet solution is significantly inferior. Sure, it could go both ways, but I think that the chiplet solution is the better one, due to flexibility and reuse.

An I/O die must be designed, but all the I/O for Zen2 could just be taken from current Ryzen designs. So while an I/O die would save manufacturing costs, it adds design costs.

That 10ns latency penalty the Reddit guy was talking about applies to Threadripper and EPYC. Both have a hard time dealing with it because the currently used Infinity Fabric is not built for fast memory access. But as the same guy points out:

If AMD did not reduce the latencies by half, then Zen 2 based Ryzen will either be a high latency nightmare or offer no tangible latency improvement without an L4 cache... but ThreadRipper and EPYC will still benefit considerably, though IO IFOP to IMC latencies will grow due to the additional IMCs involved, creating a ~10ns or so penalty... which is nothing considering the improvement they will be seeing.

Completely agreed. AMD's memory latency on current Ryzens is only barely "good enough". Higher latency would be quite catastrophic.
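
For a rough sense of scale, here's a minimal sketch of what that penalty would mean, assuming a hypothetical ~70ns baseline DRAM latency for current Ryzen (roughly the ballpark memory-latency benchmarks report); the 5-10ns range is the Reddit figure quoted above, not a confirmed AMD number:

```python
# How much a 5-10ns I/O-die hop would move overall DRAM latency.
# BASELINE_NS is a hypothetical ~70ns figure assumed for illustration;
# the penalty range comes from the Reddit comment quoted above.
BASELINE_NS = 70.0

for penalty_ns in (5.0, 10.0):
    total = BASELINE_NS + penalty_ns
    print(f"+{penalty_ns:.0f}ns -> {total:.0f}ns total "
          f"({penalty_ns / BASELINE_NS:+.1%})")
# +5ns  -> 75ns total (+7.1%)
# +10ns -> 80ns total (+14.3%)
```

A double-digit percentage regression on a metric where current Ryzens are already only barely "good enough" is exactly why the worry above isn't unreasonable.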
 