Three Ryzen 7000 X3D CPUs are reportedly on the way, no 6-core variant planned

Sweet. Was thinking of holding off upgrading to AM5 but now looking to upgrade my 5900X to 7900X3D or 7950X3D.
Why would you do that? The X3D cache does nothing for productivity. In fact, it has been shown to actually hurt it. The only instance in which I saw the X3D cache have a positive effect on productivity (that I can remember anyway) was in file compression and decompression. In everything else, the best-case scenario was no effect at all and the worst-case scenario was a reduction in performance because of its inability to clock higher. Save your money because the R9-5900X is one of the most efficient productivity CPUs out there.
Hopefully they will also have RDNA 3 cores instead of RDNA 2. I can't understand why they put in RDNA 2 when they had the newer and better RDNA 3.
Because the IGPs in the standard CPU line aren't meant for gaming; they're meant to do the same job that Intel's IGPs do. For that, even Vega would be good enough. I have no problem with them cutting costs on the IGP if it means the CPU costs less overall, because the IGP is going to be pitifully weak no matter which architecture they use. It's just a display adapter and that doesn't take much.
Perhaps not - after all, the 5800X3D was no better or worse than the 5800X in productivity benchmarks. If AMD can retain the original clock speeds of the models they plan to add additional cache to, then maybe all will be well. Either way, Steve's got a lot more testing coming his way :)
This is exactly why I think that the decision to put this X3D cache in productivity CPUs and not the 6-core gaming CPU is galactically stupid. The 6-core variant would've been the one to benefit most from this X3D cache and fewer cores means higher possible boost clocks.

I can't believe that AMD shot themselves in the foot like this. They were in a position to completely own the PC gaming market from the CPU (and therefore, platform) standpoint yet they managed to snatch defeat from the jaws of certain victory!

Even worse, the X3D cache, at best, offers no advantage in productivity (as you rightly pointed out) so nobody who is in the market for a 12 or 16-core CPU will be willing to pay extra for it. Those 12 and 16-core X3D CPUs' main talent will be gathering dust on store shelves. To anyone with ½ a brain, all of this should have been obvious. Lisa Su really screwed up by allowing this course of action to take place.
Great.

Waiting on these new 3D CPUs before I finally make the jump from AM4 to AM5. Hopefully by then, 32GB of DDR5 can be had for around 150 US or so.
I wouldn't bother yet if I were you. The difference in performance isn't anywhere near worth the difference in cost. If you're on AM4, just get an R7-5800X3D (if you can) or an R7-5700X. Those CPUs will be viable for years to come and by the time you're really ready to jump to AM5, it will be ½ as expensive as it is now, DDR5 and all.
Total checkmate against Intel if AMD delivers a 7950X3D at the same clock speeds as the 7950X: fewer watts, the same or higher productivity performance, and much higher gaming performance. It will be a beast.
I don't know if you've read any of the articles or if you've seen any of the videos in which the X3D cache is tested but it has NO positive effect on productivity at best and a negative impact at worst. Putting X3D cache into a 12 or 16-core productivity CPU would be absolutely useless. The only one who will benefit from AMD squandering this opportunity is Intel.
3D V-Cache seems to mostly benefit realtime processes. So offline stuff (most productivity software) doesn't typically benefit. But some does (real time audio, video editing previews, etc.).
That's not enough of a reason to put the X3D cache on a 12 or 16-core CPU while NOT putting it on a 6-core gaming CPU where the X3D cache has shown to have by far the most benefit. Nope, AMD screwed the pooch with this decision.
LOL, last week only an 8-core and a 6-core variant were coming. Now we are back to 16, 12 and 8 cores, but no 6-core. You tech sites really should get your facts straight before posting stories.
Actually, I hadn't heard anything about a 6-core variant, just the other three (I maybe just missed that info). I remember thinking "Why would you NOT make a 6-core X3D CPU for gamers who benefit most from X3D?".
Anyway IMO AMD are morons for not doing a 6 core version at $329 max.
Yep, I call decisions like this "Galactically Stupid".
If they release a 7950X3D, I am sold.
Why? The R9-7950X is a productivity CPU and productivity doesn't benefit from the X3D cache from the tests that we've seen. If I were you, I'd just get the 7950X and not pay extra money for nothing.
So you don't just game, clearly. Are you aware of what productivity software will benefit from V-Cache? I have seen server tests of Milan-X showing some huge uplifts in some software, but most of it is server-related, naturally. The only one I saw of interest to me was the OpenFOAM fluid sim. Currently the 13900K kills the 7950X in a lot of important software I use, so V-Cache would need to bring big improvements outside of gaming. Zen 4 is already excellent in gaming and hardly needs a boost, but those Intel core counts are killing it for multi-threaded apps. I'm leaning towards a 13700K rather than a 7900X. AMD will no doubt add another $100 to the 7900X3D, meaning in Australia it would be about $400-500 dearer, enough to pay for a motherboard and a stick of memory.
I agree. I can't see a good use case for the R9-7900X3D or the R9-7950X3D. The best use case would have been for the R5-7600X3D but for some *****ic reason, AMD decided to NOT make that one. Man, if they did release an R5-7600X3D, no gamer would look at Intel for a very long time. Talk about $hitting their own bed!
That's a good question; they certainly don't seem to want to cater to the traditionally largest consumer base. Have they really found a sustainable business model of just selling these higher-margin, mostly useless-for-gaming hardware components? Financial reports indicate lowered guidance and revenues, so it seems like no, but they seem to be set on it.
They need more market share to gain parity with Intel and an R5-7600X3D could have possibly done that in a single generation. AMD's stupidity here is boggling my mind.
 
That's not enough of a reason to put the X3D cache on a 12 or 16-core CPU while NOT putting it on a 6-core gaming CPU where the X3D cache has shown to have by far the most benefit. Nope, AMD screwed the pooch with this decision.

?
3D Cache has never been offered on a 6-core CPU. It has also never been offered on a 12 or 16 core CPU. So we have not been shown anything about where it offers the most benefit... because there is nothing to compare when there is only one example.
 
Why would you do that? The X3D cache does nothing for productivity. In fact, it has been shown to actually hurt it. The only instance in which I saw the X3D cache have a positive effect on productivity (that I can remember anyway) was in file compression and decompression. In everything else, the best-case scenario was no effect at all and the worst-case scenario was a reduction in performance because of its inability to clock higher. Save your money because the R9-5900X is one of the most efficient productivity CPUs out there.
I have said this for a long time and will keep saying it: there is a big difference between productivity benchmarks and real-life multitasking.

I have no doubt I will see a big difference between the 3D cache model and the "standard" one. Heck, I have seen a real-life speedup of at least 3x just from doubling the CPU's L2 cache and nothing else. Looking at benchmarks, the difference should have been like 5-10%. But having done that simple task thousands of times (Windows was reinstalled multiple times, no problem there), I immediately spotted the difference. And everything I did was just a CPU swap. No question where that speedup came from.

As for benchmarks, you said that the worst-case scenario was because of the inability to clock higher. OK, so IF the 3D cache model has the same frequencies as the "normal" model, then the 3D cache model would be at least equally fast? And again, I have no doubt that real-life multitasking puts more stress on the cache than benchmarking a single program does.

I'm also waiting for Ryzen 7000 series 3D models since I'll switch to DDR5. My current system is already "sold".
 
It's worth remembering that the L3 cache in Zen 3/4 is a victim cache -- it only stores L1 and L2 entries that have been evicted, rather than anything that is being prefetched. So in applications where L2 miss rates significantly impact performance, a spacious L3 cache will always help out. However, the L2 cache per core in Zen 4 is double that of Zen 3, and only with a small increase in latencies (offset by the higher clocks), so the increased L3 cache might not bring as large an increase in performance as seen with Zen 3.
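If anyone wants to see that spill-over behaviour for themselves, here's a rough toy probe (my own sketch, not part of any review methodology): it times random gathers over progressively larger arrays, and throughput typically steps down each time the working set no longer fits in a cache level -- which is exactly the situation where a larger victim L3 earns its keep. The sizes are illustrative only and absolute numbers will vary wildly between systems.

```python
# Toy working-set probe: random gathers over arrays of increasing size.
# Throughput usually drops in steps as the working set spills out of L2,
# then L3, then into DRAM. Sizes and rep counts are arbitrary.
import time
import numpy as np

def gather_gbps(size_bytes, reps=20):
    n = size_bytes // 8                                   # float64 elements
    data = np.random.rand(n)
    idx = np.random.randint(0, n, size=min(n, 1 << 20))   # random access pattern
    data[idx].sum()                                       # warm-up pass
    start = time.perf_counter()
    for _ in range(reps):
        data[idx].sum()                                   # memory-bound random gather
    elapsed = time.perf_counter() - start
    return reps * idx.size * 8 / elapsed / 1e9            # GB/s of gathered data

for mb in (0.5, 1, 4, 16, 32, 64, 128, 256):              # spans typical L2/L3/DRAM sizes
    print(f"{mb:6.1f} MB working set: {gather_gbps(int(mb * 1024 * 1024)):6.2f} GB/s")
```

On something like a 5800X vs a 5800X3D you would expect the curves to diverge mainly in the 32 MB to 96 MB region, i.e. between the two chips' L3 capacities.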

If one compares the 7700X to the 5800X3D, with its 32% higher base clock and 20% higher maximum boost clock, there are situations where the Zen 4 chip notably outperforms the Zen 3 one, more so than the outright clock differences alone would suggest:

[Attached chart: 7700X vs 5800X3D.png]

Now that could be down to the larger L2 cache (along with the larger register files, translation lookaside buffers, and lower latencies) but the differences in platform performance could also be affecting matters. It would require further investigation, removing those variables from the testing, to be certain as to the exact reasons for the performance difference.

That all said, there are just as many cases where the 7700X's advantage over the 5800X3D, L3 cache size aside, is clearly more muted which suggests the use of V-cache is helping somewhat. And then we have the gaming benchmarks, which mostly point to the fact that the Zen 3 architecture really does benefit from having a large victim cache.
 

It's also worth noting that their CPU division is much larger than their GPU one -- it generates far more revenue, for a start -- so design choices for CPUs tend to lead those in GPUs. That's why they used the GCN architecture in their APUs for so long; RDNA 2 wasn't introduced into them until Jan 2022.

There's been very little focus on the new AMD APUs on this site but an APU with RDNA 3 graphics sounds pretty exciting to me. I understand that most of your readers will be interested in the latest CPUs and graphics cards but please don't forget your APU-using readers.

My current PC uses an A8-7600 APU and it's been very good. It was originally built as a cheap short-term stopgap. I intended to lock the APU down to 45w and repurpose it for a silent living room PC after a few months but it basically does everything I want in a desktop so I just kept using it. It struggles a bit with games at 1080p so I was going to build a new PC... then the graphics card drought happened and here I am still using it two years later. For someone like me who isn't a content creator and just does a bit of casual gaming, mainly older titles, an APU is just the job.
 
I understand that most of your readers will be interested in the latest CPUs and graphics cards but please don't forget your APU-using readers.
We don't!

 
I really don't get why AMD is doing this. The 3D cache has shown that it has little to no positive effect on productivity but a profoundly positive effect on gaming latency. I can't fathom a reason why AMD would EVER put the 3D cache in a 12-core CPU, let alone a 16-core while not putting it into a 6-core CPU.

This is worse than useless because AMD is investing money in producing chips that won't sell. Productivity users won't pay the extra for the 3D cache because they'll see no benefit from it. Gamers don't need more than six cores so many of them won't bother paying extra for the 12 or 16 cores. Sure, many of them will buy the 8-core variant but I expect NOBODY to buy the 12 or 16-core variants. The 8-core should be the upper limit for core counts when it comes to X3D CPUs.

If AMD focused on 6-core 3D CPUs, they could make so many more of them and they'd sell them as fast as TSMC could produce them. Also, with only six cores, clock speeds would be less affected by the 3D cache making them even more attractive to gamers.

The only X3D CPU that will sell will be the 8-core but even that won't sell nearly as well as a 6-core X3D CPU would have. This annoys the hell out of me because the only way to get this market balanced again is for AMD to outsell Intel by a large margin and AMD is just snatching defeat from the jaws of victory here.

This is one of the worst marketing and production decisions that I've ever seen a semiconductor company make. AMD is just being plain stupid here.

I'm a software engineer who games on my workstation. And others might as well, like content creators, streamers, etc. I know many friends and co-workers who have PCs for both gaming and productivity. What's wrong with doing that?

And then there is also the crowd who just want the "fastest CPU" without qualification, whether for future-proofing or bragging rights.

The 7950X3D is exactly the chip we need, especially since it's rumored to have similar clock speeds to the 7950X. It will sell.
 
?
3D Cache has never been offered on a 6-core CPU. It has also never been offered on a 12 or 16 core CPU. So we have not been shown anything about where it offers the most benefit... because there is nothing to compare when there is only one example.
When the 3D cache has been shown to have no positive effect on productivity with 8 cores, that's not going to change just because there are 12 or 16. Maybe you didn't see the review of the 5800X3D:
[Attached charts: CB23_Multi.png, CB23_Single.png, Corona.png, PS.png, Code.png, Factorio.png, Blender.png]

IIRC, it also does well in 7-Zip but for the overwhelming majority of productivity tasks, the 5800X3D does worse than the 5800X. This is because of the lower clock speeds but even if the 7000-series X3D chips have the same clocks as their non-X3D versions, it will be the same productivity performance at a higher price. Since few, if any, people are going to buy a 12 or 16-core CPU for gaming, putting the 3D cache on them is a complete waste of time. OTOH, putting it on the 6-core, which a lot of gamers would go for, would make AMD own the gaming CPU market.

This is why I say that AMD really screwed up with this decision that they made.
 
I have said this for a long time and will keep saying it: there is a big difference between productivity benchmarks and real-life multitasking.
Sure, but if the benchmarks say that productivity isn't improved, to automatically assume that it will be is nothing more than wishful thinking. Now, you're right that benchmarks and real productivity can be different but it's not a BIG difference like you say because if it were, nobody would bother with benchmarks at all. Steve and Tim are extremely knowledgeable about tech and they're always more than willing to put in the time to run the benchmarks for us. If the benchmarks were as irrelevant as you seem to think, they wouldn't waste their extremely valuable time on them.
I have no doubt I will see a big difference between the 3D cache model and the "standard" one. Heck, I have seen a real-life speedup of at least 3x just from doubling the CPU's L2 cache and nothing else. Looking at benchmarks, the difference should have been like 5-10%. But having done that simple task thousands of times (Windows was reinstalled multiple times, no problem there), I immediately spotted the difference. And everything I did was just a CPU swap. No question where that speedup came from.

As for benchmarks, you said that the worst-case scenario was because of the inability to clock higher.
What I said was that at best, it had a positive effect (in like 2 situations) and at worst, the slower clocks hindered performance. If the clocks in the 7000-series aren't affected (as AMD claimed), then the (let's call it the) R9-7900X3D will have exactly the same productivity performance as the R9-7900X. The problem is, it will still cost a good deal more and nobody will buy it because why pay extra money for no extra benefit?
OK, so IF the 3D cache model has the same frequencies as the "normal" model, then the 3D cache model would be at least equally fast?
Sure, equally fast but not equally expensive. It will still cost significantly more and nobody will buy it.
And again, I have no doubt that real-life multitasking puts more stress on the cache than benchmarking a single program does.
And yet, not a single expert reviewer agrees with your assessment, quite the contrary actually. No offence to you but I have more respect for the word of Steve Walton and Steve Burke with their testing methods than I do for the word of someone who says that "It feels faster".
I'm also waiting for Ryzen 7000 series 3D models since I'll switch to DDR5. My current system is already "sold".
Well, all I'll say is that I hope that you're right. I don't want for you to get fleeced. I really do hope that the 3D cache makes a positive difference for your uses.
 
I'm a software engineer who games on my workstation. And others might as well, like content creators, streamers, etc. I know many friends and co-workers who have PCs for both gaming and productivity. What's wrong with doing that?

And then there is also the crowd who just want the "fastest CPU" without qualification, whether for future-proofing or bragging rights.

The 7950X3D is exactly the chip we need, especially since it's rumored to have similar clock speeds to the 7950X. It will sell.
Right, but your uses are very atypical. Even after every one of you buys one of these, thousands will gather dust.
 
Right, but your uses are very atypical. Even after every one of you buys one of these, thousands will gather dust.
Do you have anything to back up the claim that it's "very atypical"? Don't make stuff up. Just take a look at how many "gaming" builds on PCPartPicker feature CPUs with presumably "too many cores" (hundreds of pages; many say "gaming" in the title or description).

And why would you think this market segment is not entitled to a proper chip, or that AMD can't estimate the stock needed? They might have been overoptimistic about the non-X3D 7000 series due to the surprisingly good Raptor Lake drop, but that won't be the case now that both lineups are out.
 
Do you have anything to back up the claim that it's "very atypical"? Don't make stuff up. Just take a look at how many "gaming" builds on PCPartPicker feature CPUs with presumably "too many cores" (hundreds of pages; many say "gaming" in the title or description).
Sure, just look at sales history. There's a reason why lower core-count CPUs are referred to as "gaming CPUs". I'm not making anything up. Gamers generally don't buy 12 and 16-core CPUs because there's no reason for them to incur that cost.

I'm not going to argue something as elementary as this.
 
Sure, just look at sales history. There's a reason why lower core-count CPUs are referred to as "gaming CPUs". I'm not making anything up. Gamers generally don't buy 12 and 16-core CPUs because there's no reason for them to incur that cost.

I'm not going to argue something as elementary as this.
What sales history?

The Steam survey shows CPUs with 10+ cores at 7.23% of all systems. Judging by the GPU and SHA instruction set figures, likely well under 50% of surveyed systems are builds recent enough for 10+ cores to have been a viable option. That means the share of 10+ core CPUs in new gaming builds is at least 15-20%. Like I said, plenty of people want the best of both worlds for whatever reason.
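To spell out the arithmetic (the 35-50% range for "systems recent enough that 10+ cores were even an option" is my own guesstimate, not a number Steam publishes):

```python
# Back-of-envelope estimate: if 7.23% of ALL surveyed systems have 10+ cores,
# and only a fraction of surveyed systems are recent enough for 10+ cores to
# have been an option, the share among recent builds must be correspondingly
# higher. The 35-50% fractions below are assumptions, not survey data.
ten_plus_core_share = 0.0723
for recent_share in (0.35, 0.40, 0.50):
    implied = ten_plus_core_share / recent_share
    print(f"if {recent_share:.0%} of surveyed systems are recent builds, "
          f"~{implied:.0%} of recent builds have 10+ cores")
```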

You are telling AMD not to make a chip for 15-20% of the gaming market, and a segment with potentially much higher margin than the rest. Thankfully you don't work for the AMD product design department, do you?
 
Sure, but if the benchmarks say that productivity isn't improved, to automatically assume that it will be is nothing more than wishful thinking. Now, you're right that benchmarks and real productivity can be different but it's not a BIG difference like you say because if it were, nobody would bother with benchmarks at all. Steve and Tim are extremely knowledgeable about tech and they're always more than willing to put in the time to run the benchmarks for us. If the benchmarks were as irrelevant as you seem to think, they wouldn't waste their extremely valuable time on them.
Benchmarks and real-life situations are completely different things. Benchmarks are only used because there is basically no way to simulate real-life situations multiple times on different hardware. If there really were, then nobody would bother with benchmarks.

Basically, if Steve and Tim want to compare productivity performance between different CPUs, benchmarks are about the only way to do it. You see, sometimes a non-perfect solution is used because there is either nothing better available or there is no other way. If I wanted to compare productivity performance between multiple CPUs (at least a dozen), I would also use benchmarks. However, I wouldn't say benchmarks tell us everything about real-life situations. Do they?
What I said was that at best, it had a positive effect (in like 2 situations) and at worst, the slower clocks hindered performance. If the clocks in the 7000-series aren't affected (as AMD claimed), then the (let's call it the) R9-7900X3D will have exactly the same productivity performance as the R9-7900X. The problem is, it will still cost a good deal more and nobody will buy it because why pay extra money for no extra benefit?
Because real-life benefit and benchmark benefit are not the same thing? I have been saying that same thing for decades.
Sure, equally fast but not equally expensive. It will still cost significantly more and nobody will buy it.
There are already two people in this thread who will buy it.
And yet, not a single expert reviewer agrees with your assessment, quite the contrary actually. No offence to you but I have more respect for the word of Steve Walton and Steve Burke with their testing methods than I do for the word of someone who says that "It feels faster".
Who actually disagrees? I haven't seen a single person who disagrees with me. Do the two Steves you mention say that benchmarks reflect real-life situations well? I don't see that.

You see, when you run benchmarks, totally bad results are discarded and runs are repeated until there are many results around the same value. In real life, if something runs slowly, it runs slowly and you just cannot turn back time. This can easily happen when a program does not work as intended.

To give a very easy example of benchmarks vs. real life: benchmark rigs usually have only one SSD, on very rare occasions two (and they might be in RAID 0). I have six SSDs on my rig. Why? Because I want the OS and every heavy program on its own SSD. Running two heavy programs on the same SSD causes a noticeable slowdown. That's something you won't see in benchmarks because they only use one heavy program at a time.

An easy example of extra cache in benchmarks vs. real life: I occasionally run 4-6 simultaneous file encryptions that take ages because the encryption is much stronger than AES (no HW acceleration) and the file sizes are huge. Multitasking at the same time, I have no doubt the 3D cache will help a lot.
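For anyone who wants to approximate that kind of load, here is a toy sketch (the buffer sizes and the stand-in workload are made up; it is not my actual encryption tool, it just mimics the "several large working sets at once" pattern that single-program benchmarks never exercise):

```python
# Stand-in for running several heavy jobs at once: each worker repeatedly walks
# its own multi-MB buffer, so the combined working set with 4-6 workers is far
# larger than what a single benchmarked program would touch. All sizes here are
# illustrative; only measuring on real hardware settles how much extra L3 helps.
import multiprocessing as mp
import time
import numpy as np

BUFFER_MB = 48     # per-job working set, purely illustrative
PASSES = 50

def heavy_job(job_id):
    buf = np.random.randint(0, 256, size=BUFFER_MB * 1024 * 1024 // 8, dtype=np.int64)
    acc = 0
    for _ in range(PASSES):
        acc ^= int(buf.sum())           # forces a full pass over the buffer
        np.random.shuffle(buf[:4096])   # perturb data so passes aren't identical
    return job_id, acc

if __name__ == "__main__":
    for workers in (1, 4, 6):
        with mp.Pool(workers) as pool:
            start = time.perf_counter()
            pool.map(heavy_job, range(workers))
            elapsed = time.perf_counter() - start
        print(f"{workers} concurrent job(s): {elapsed:.1f} s wall time")
```

With a big enough combined footprint, a CPU with more L3 would at least have a chance to keep more of those buffers close to the cores, which a one-program-at-a-time benchmark run never tests.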
Well, all I'll say is that I hope that you're right. I don't want for you to get fleeced. I really do hope that the 3D cache makes a positive difference for your uses.
I have absolutely no doubt it does. So far I have always been right on my predictions on real life speedups, no disappointments :)
 
Benchmarks and real-life situations are completely different things. Benchmarks are only used because there is basically no way to simulate real-life situations multiple times on different hardware. If there really were, then nobody would bother with benchmarks.
Sure, but when there is a strong consistency among different benchmarks from different testers, you can make a pretty good prediction. These guys tend to know what programs will benefit from what aspects of a CPU like number of cores, cache size and clock speed.
Basically, if Steve and Tim want to compare productivity performance between different CPUs, benchmarks are about the only way to do it. You see, sometimes a non-perfect solution is used because there is either nothing better available or there is no other way. If I wanted to compare productivity performance between multiple CPUs (at least a dozen), I would also use benchmarks. However, I wouldn't say benchmarks tell us everything about real-life situations. Do they?
I agree that they don't say everything but they do say something.
Because real-life benefit and benchmark benefit are not the same thing? I have been saying that same thing for decades.
I agree, they're not the same thing but I've never seen them be polar opposites either.
There are already two people in this thread who will buy it.
And I hope that it works out for them, I really do.
Who actually disagrees? I haven't seen a single person who disagrees with me. Do the two Steves you mention say that benchmarks reflect real-life situations well? I don't see that.
Here's what I do see:
"It's worth noting that this is a gaming focused CPU. We're sure there will be some productivity workloads that can benefit from the extra L3 cache, but AMD has refrained from giving any examples if they exist. Instead, AMD is 100% pushing this as a gaming CPU, and once you see the data it will make sense why there isn't a "5950X3D."
- Steve Walton: April 14, 2022
If it makes sense to not have an R9-5950X3D, then it would also make sense to not have an R9-7950X3D. They're both 16-core productivity-first CPUs.
You see, when you run benchmarks, totally bad results are discarded and runs are repeated until there are many results around the same value. In real life, if something runs slowly, it runs slowly and you just cannot turn back time. This can easily happen when a program does not work as intended.

To give a very easy example of benchmarks vs. real life: benchmark rigs usually have only one SSD, on very rare occasions two (and they might be in RAID 0). I have six SSDs on my rig. Why? Because I want the OS and every heavy program on its own SSD. Running two heavy programs on the same SSD causes a noticeable slowdown. That's something you won't see in benchmarks because they only use one heavy program at a time.
Oh believe me, you don't have to explain benchmarks to me. I've been building PCs since 1988. I know that they're not 100% accurate but I do know that they tend to be way more than 50% accurate. They don't show absolutes but they do show trends.
An easy example of extra cache in benchmarks vs. real life: I occasionally run 4-6 simultaneous file encryptions that take ages because the encryption is much stronger than AES (no HW acceleration) and the file sizes are huge. Multitasking at the same time, I have no doubt the 3D cache will help a lot.
I have no doubt that it will. I never said that it would be useless for everybody but it sure does seem like it would be useless in most productivity applications based on what we've seen. Of course some people will benefit but the number of people who would stand to benefit will be far smaller than the gamers who buy 6-core CPUs. There is no question about the benefits that they could glean from the 3D cache.

Now sure, a good number of gamers buy 8-core CPUs (I did) and that's why I didn't attack the fact that there is an 8-core model. The thing is, there are a lot of gamers for whom maybe the 8-core is out of their budgetary reach and a lot of those gamers are new. Those gamers might choose Intel instead and that's bad for AMD who needs all the market share that they can get. This is because if someone starts with an Intel CPU and likes it (and why wouldn't they?), they're far more likely to stick with Intel. An R5-7600X3D would get into the builds of many new gamers who would then be more likely to stick with AMD. Even if you're right (and you could very well be) and making the 12-core and 16-core CPUs wasn't a mistake, not releasing a 6-core definitely was.

I think that this is a case of AMD wanting to force consumers to pay more for the 8-core. This really pisses me off because even if some people find it understandable, it will still result in more Intel platforms being purchased. New gamers who are making a new build for the first time are facing a more financially daunting task than ever before. Buying a new motherboard, CPU, RAM and video card all at once can easily stretch a new gamer's budget to the limit.

Now, currently the R5-5600 is the cheapest CPU that's decent for gaming but it's out of production and I can guarantee you that the i5-12400F is in much greater supply because Intel can just make more of them in their own fabs if they want. OTOH, AMD only has a specific TSMC allocation. Once the R5-5600 is gone, gamers will flock to the i5-12400/F instead of the R5-7600X because motherboards can be had as low as $80 and every cent they save on the platform can be used to improve the most important piece, the video card. People are creatures of habit and once they're used to having one brand, it can be like pulling teeth to get them to try something else. Just look at nVidia's stranglehold on the GPU market. It's because once someone has an nVidia GPU, they tend to stick with the brand that they know.
I have absolutely no doubt it does. So far I have always been right on my predictions on real life speedups, no disappointments :)
I'm glad to hear it. I only hope that, for AMD's sake, there are enough people who will benefit from that cache in productivity that it doesn't end up being a failed product.
 
Sure, but when there is a strong consistency among different benchmarks from different testers, you can make a pretty good prediction. These guys tend to know what programs will benefit from what aspects of a CPU like number of cores, cache size and clock speed.
Partially agreed. Pretty often these benefits are derived straight from the results. I remember very well when there was a lot of discussion about what aspect of a CPU makes Quake 3 run faster. Suggestions included SSE, memory bandwidth, FPU power, etc. But when newer CPUs came out that were much stronger in those areas, we could conclude that none of the things discussed actually held true.
I agree that they don't say everything but they do say something.
I agree too.
I agree, they're not the same thing but I've never seen them be polar opposites either.
Not exact opposites, but the difference may be huge. Just look at single core vs. dual core. Benchmarks many times favoured the single core. In real life, who would actually go back from a dual core to a single core after using a dual core?
And I hope that it works out for them, I really do.
Thanks.
Here's what I do see:
"It's worth noting that this is a gaming focused CPU. We're sure there will be some productivity workloads that can benefit from the extra L3 cache, but AMD has refrained from giving any examples if they exist. Instead, AMD is 100% pushing this as a gaming CPU, and once you see the data it will make sense why there isn't a "5950X3D."
- Steve Walton: April 14, 2022
If it makes sense to not have an R9-5950X3D, then it would also make sense to not have an R9-7950X3D. They're both 16-core productivity-first CPUs.
Too bad Steve was unable to test a dual-chiplet CPU with 3D cache, since it does not exist. Also, AMD probably had some production issues and that's why they released only one model. And if that holds true, why is AMD (probably) releasing 12+ core CPUs with 3D cache?
Oh believe me, you don't have to explain benchmarks to me. I've been building PCs since 1988. I know that they're not 100% accurate but I do know that they tend to be way more than 50% accurate. They don't show absolutes but they do show trends.
Good. Then you also know they show something, but even slightly different computer usage may render benchmarks almost useless.
I have no doubt that it will. I never said that it would be useless for everybody but it sure does seem like it would be useless in most productivity applications based on what we've seen. Of course some people will benefit but the number of people who would stand to benefit will be far smaller than the gamers who buy 6-core CPUs. There is no question about the benefits that they could glean from the 3D cache.

Now sure, a good number of gamers buy 8-core CPUs (I did) and that's why I didn't attack the fact that there is an 8-core model. The thing is, there are a lot of gamers for whom maybe the 8-core is out of their budgetary reach and a lot of those gamers are new. Those gamers might choose Intel instead and that's bad for AMD who needs all the market share that they can get. This is because if someone starts with an Intel CPU and likes it (and why wouldn't they?), they're far more likely to stick with Intel. An R5-7600X3D would get into the builds of many new gamers who would then be more likely to stick with AMD. Even if you're right (and you could very well be) and making the 12-core and 16-core CPUs wasn't a mistake, not releasing a 6-core definitely was.

I think that this is a case of AMD wanting to force consumers to pay more for the 8-core. This really pisses me off because even if some people find it understandable, it will still result in more Intel platforms being purchased. New gamers who are making a new build for the first time are facing a more financially daunting task than ever before. Buying a new motherboard, CPU, RAM and video card all at once can easily stretch a new gamer's budget to the limit.

Now, currently the R5-5600 is the cheapest CPU that's decent for gaming but it's out of production and I can guarantee you that the i5-12400F is in much greater supply because Intel can just make more of them in their own fabs if they want. OTOH, AMD only has a specific TSMC allocation. Once the R5-5600 is gone, gamers will flock to the i5-12400/F instead of the R5-7600X because motherboards can be had as low as $80 and every cent they save on the platform can be used to improve the most important piece, the video card. People are creatures of habit and once they're used to having one brand, it can be like pulling teeth to get them to try something else. Just look at nVidia's stranglehold on the GPU market. It's because once someone has an nVidia GPU, they tend to stick with the brand that they know.
Problem with "most". There are gazillion productivity software out there, testing only few without any multitasking is far from most. On other hand I would say most users never touch any productivity software benchmarks use.

What would the pricing of a 6-core V-Cache model be? It doesn't make much sense to put a premium feature on a lower-end CPU. Also, if there is any sort of shortage of the extra cache, it makes no sense to put it into lower-profit products. Later, if there is no shortage of cache, then AMD could release 6- or even 4-core V-Cache versions.

This "Intel could make more chips because they have own fabs" sounds very funny when you talk about CPU that uses 10nm manufacturing tech, one that was late around 4 years. Additionally Intel wants to use TSMC to make some of their own chips. 5600 is out of production just because AMD wants to concentrate on 5nm chips. Not because there is no 7nm capacity available in, fact there is more than ever 7nm capacity available for AMD because AMD is switching to 5nm. AMD just wants to concentrate on newer stuff. Another reason is to get rid of GlobalFoundries, since Ryzen 7000 series does not require anything from them but 5000 series does.

Not too long ago, AMD and Nvidia had around the same market share in discrete cards. Then why didn't people stick with AMD? Right. Most people do not stick with a certain manufacturer. Fanboys are a pretty small group.
I'm glad to hear it. I only hope that, for AMD's sake, there are enough people who will benefit from that cache in productivity that it doesn't end up being a failed product.
I have no worries about that one.
 
Partially agreed. Pretty often these benefits are derived straight from the results. I remember very well when there was a lot of discussion about what aspect of a CPU makes Quake 3 run faster. Suggestions included SSE, memory bandwidth, FPU power, etc. But when newer CPUs came out that were much stronger in those areas, we could conclude that none of the things discussed actually held true.

I agree too.

Not exact opposites, but the difference may be huge. Just look at single core vs. dual core. Benchmarks many times favoured the single core. In real life, who would actually go back from a dual core to a single core after using a dual core?

Thanks.

Too bad Steve was unable to test a dual-chiplet CPU with 3D cache, since it does not exist. Also, AMD probably had some production issues and that's why they released only one model. And if that holds true, why is AMD (probably) releasing 12+ core CPUs with 3D cache?
I think that it's because they'll make more profit, like you said.
Good. Then you also know they show something, but even slightly different computer usage may render benchmarks almost useless.
Yes, I have seen that. It's not a completely exact science.
Problem with "most". There are gazillion productivity software out there, testing only few without any multitasking is far from most. On other hand I would say most users never touch any productivity software benchmarks use.
That's true. I think that productivity users (at least with regard to their home systems) are relatively rare. For most people, gaming is the hardest thing that their PCs will ever do.
What would the pricing of a 6-core V-Cache model be? It doesn't make much sense to put a premium feature on a lower-end CPU.
Well, they were bringing the 3D cache CPUs in at the MSRP that the X models were at originally. The gaming results are pretty much guaranteed to top the charts of every game that benefits from them. To be fair, not every game does benefit but from what I've seen, I'd say that at least ¾ of them do.
Also, if there is any sort of shortage of the extra cache, it makes no sense to put it into lower-profit products. Later, if there is no shortage of cache, then AMD could release 6- or even 4-core V-Cache versions.
Sure, but that is, as the Spartans would say "IF". I haven't heard of any shortages of any kind. I think that the silicon shortage that hit us before was a perfect storm of new CPUs, new GPUs (desktop and mobile versions) and new APUs (especially the consoles). Fortunately, that's not happening now and I've even heard of oversupply causing the price of RAM to drop.
This "Intel could make more chips because they have own fabs" sounds very funny when you talk about CPU that uses 10nm manufacturing tech, one that was late around 4 years.
Well yeah, for a long time, Intel was hampered but new gamers don't care about lithography or process nodes, they just want something that works.
Additionally, Intel wants to use TSMC to make some of their own chips. The 5600 is out of production just because AMD wants to concentrate on 5nm chips, not because there is no 7nm capacity available; in fact, there is more 7nm capacity available for AMD than ever because AMD is switching to 5nm. AMD just wants to concentrate on newer stuff.
Can't say I blame them for that but their 7nm parts could be so cheap that all of the OEMs would use them if they were still in production.
Another reason is to get rid of GlobalFoundries, since the Ryzen 7000 series does not require anything from them but the 5000 series does.
IIRC (and I could be wrong), doesn't GloFo make the I/O for the Ryzen and Radeon chips?
Not too long ago, AMD and Nvidia had around the same market share in discrete cards. Then why didn't people stick with AMD? Right. Most people do not stick with a certain manufacturer. Fanboys are a pretty small group.
Honestly, the last time I can remember when they were similar that way was before the GeForce 8800 GTX came out.
 
I think that it's because they'll make more profit, like you said.
That's what matters most, no surprise if they do it.
Yes, I have seen that. It's not a completely exact science.

That's true. I think that productivity users (at least with regard to their home systems) are relatively rare. For most people, gaming is the hardest thing that their PCs will ever do.
Most games only utilize a single core, so it's not hard to do something heavier. Granted, my use is somewhat heavier than what most people do, but still.
Well, they were bringing the 3D cache CPUs in at the MSRP that the X models were at originally. The gaming results are pretty much guaranteed to top the charts of every game that benefits from them. To be fair, not every game does benefit but from what I've seen, I'd say that at least ¾ of them do.
A six-core Ryzen only makes sense if there is a huge amount of partially defective chips. Considering the die size is very small, I doubt there is. It's not very economical to disable 2 cores on lots of fully working Ryzen chiplets.
Sure, but that is, as the Spartans would say "IF". I haven't heard of any shortages of any kind. I think that the silicon shortage that hit us before was a perfect storm of new CPUs, new GPUs (desktop and mobile versions) and new APUs (especially the consoles). Fortunately, that's not happening now and I've even heard of oversupply causing the price of RAM to drop.
AMD is putting a lot of 3D cache onto Epyc chips, so there might be a shortage of it. As predicted, one Epyc chip easily equals 8 Ryzens when it comes to V-Cache consumption.
Well yeah, for a long time, Intel was hampered but new gamers don't care about lithography or process nodes, they just want something that works.
Talking about production capacity, I doubt Intel has much spare capacity even though they have their own fabs. Otherwise Intel wouldn't be using TSMC to make some of their own chips.
Can't say I blame them for that but their 7nm parts could be so cheap that all of the OEMs would use them if they were still in production.
7nm chips are not cheap to make. TSMC's 7nm is quite an expensive node.
IIRC (and I could be wrong), doesn't GloFo make the I/O for the Ryzen and Radeon chips?
Ryzen 1000 and 2000 series are GF 14nm or GF 12nm; the APUs are GF 14nm or GF 12nm.
The IO chip for the Ryzen 3000 series and above is GF 12nm; Zen 2 APUs and later are mostly TSMC, with some exceptions.
The X570 chipset is GF 12nm.
Epyc IO chips are GF 14nm until Zen 4 Epyc.
The Ryzen 7000 series is TSMC 6nm + TSMC 5nm.
Radeons were GF 14nm or GF 12nm until the Radeon 5000 series; after that, TSMC.
The 600-series chipsets are ASMedia.

With the Zen 4 Epycs and the Ryzen 7000 series, AMD gets rid of GF, since there is nothing from GF in either product. The WSA ends on December 31, 2024, so until then AMD can produce IO chips for older Epycs and X570 chipsets. But with the newest products, AMD no longer has any need for GF.
Honestly, the last time I can remember when they were similar that way was before the GeForce 8800 GTX came out.
Exactly. But then some Radeon owners switched to Nvidia despite owning AMD. The same could happen the other way around, too.
 
I wouldn't bother yet if I were you. The difference in performance isn't anywhere near worth the difference in cost. If you're on AM4, just get an R7-5800X3D (if you can) or an R7-5700X. Those CPUs will be viable for years to come and by the time you're really ready to jump to AM5, it will be ½ as expensive as it is now, DDR5 and all.
Just got my hands on one (5800X3D) to be honest. Totally satisfied with the performance.
Any 7000-series 3D CPU would have to really knock my socks off to consider upgrading.
Certainly cannot afford the more expensive 79xx variants, so any new CPU would have to be in the 350-450 US (77xx tier) range anyway for me to possibly jump ship.

We shall see.
 
Just got my hands on one (5800X3D) to be honest. Totally satisfied with the performance.
Any 7000-series 3D CPU would have to really knock my socks off to consider upgrading.
Certainly cannot afford the more expensive 79xx variants, so any new CPU would have to be in the 350-450 US (77xx tier) range anyway for me to possibly jump ship.

We shall see.
That's not going to happen because the R7-5800X3D is easily one of the fastest AMD gaming CPUs on the planet:

Ryzen 7 5800X3D vs. Core i9-12900K in 40 Games:
"In our day-one review we only featured 8 games (with various memory configurations) and from that sample the 12900K paired with DDR5-6400 memory was 2.5% faster. Today we have 40 games and the margin has narrowed to a single percent delta in AMD's favor, which for all practical purposes means these flagship parts deliver comparable gaming performance."

The Best Value Gaming CPU: 13600K vs 12600K vs 7600X vs 5800X3D vs 5600X:
"Moreover, if going DDR4, the Ryzen 7 5800X3D is a better gaming CPU, offering greater performance while costing less. Not only that, but AM4 offers a wide range of motherboards and B550 is typically better than B660 in terms of features and build quality at the same price point. That is to say, if we were trying to get the most gaming performance possible, while spending as little as possible, the 5800X3D is your answer. Even if you want premium performance, the 5800X3D is a viable option as it matched the 13600K using low latency DDR4, and that was when pairing the Core i5 with DDR5-6400."

In other words, it will be YEARS before it would be within the realm of sanity to even consider upgrading from an R7-5800X3D for gaming performance. I'm going to take a shot in the dark and say that the R7-5800X3D will be a viable gaming CPU for over 5 years, just like the FX-8350 was.
 
I am at pains to find fault with your logic. 😅
 