AMD Ryzen 7 5700G APU leaks alongside 5900 and 5800 (non-X) CPUs

mongeese

Staff
Why it matters: A smattering of new AMD processors will soon release, completing the deployment of the Zen 3 architecture. Included are the Ryzen 7 5800 and Ryzen 9 5900 CPUs, low power models based on existing hardware, and the Ryzen 7 5700G, an octa-core APU.

A reliable leaker has posted the key details of the Ryzen 7 5800 and Ryzen 9 5900. The two CPUs will target the same 65 W TDP that previous non-X and non-XT models did, down roughly two-fifths from the 105 W granted to the 5800X and 5900X. At 4.6 GHz and 4.7 GHz, respectively, their boost clocks are 100 MHz lower than the 5800X's and 5900X's.
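As a quick sanity check on the numbers above (a sketch: the 65 W / 105 W TDPs and the new chips' 4.6 / 4.7 GHz boosts are from the leak; the 5800X and 5900X boost clocks are AMD's published specs):

```python
# Quick check of the spec deltas above. The 65 W / 105 W TDPs and the
# 4.6 / 4.7 GHz boost clocks are from the leak; the 5800X (4.7 GHz) and
# 5900X (4.8 GHz) boost clocks are AMD's official specs.
tdp_x, tdp_non_x = 105, 65
reduction = (tdp_x - tdp_non_x) / tdp_x
print(f"TDP reduction: {reduction:.1%}")  # TDP reduction: 38.1%

boost = {"5800": 4.6, "5800X": 4.7, "5900": 4.7, "5900X": 4.8}  # GHz
print(round(boost["5800X"] - boost["5800"], 1))  # 0.1
```

The 38.1% cut is what the article rounds to "two-fifths."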

It's also expected that the vanilla models will have lower base clocks, and will sustain maximum clocks for a shorter period of time. The downgrade is usually counterbalanced by a sturdy discount, though that may be irrelevant to the 5800 and 5900; rumor has it they'll be exclusive to OEMs, at least at first.

Meanwhile, in China, a leaker's capture of a Ryzen 7 5700G has exposed the APU's specifications and performance bracket. In short, it's a slightly crippled 5800X octa-core processor paired with Vega graphics.

CPU-Z shows that the processor belongs to the Cezanne family, which also includes some of the upcoming mobile APUs. The 5700G uses a fully-equipped Cezanne chip, containing 8 cores and 16 threads built on TSMC's 7nm node. Software registers a 4.4 GHz clock speed.

In the CPU-Z benchmark, the 5700G scored 613.6 points in the single-threaded test and 6292.2 points in the multi-threaded test, making it about 5% slower than the 5800X. Part of the difference can be explained by the lower clock speed, but it's partly architectural differences, too. While the 5800X contains a pricey 32 MB L3 cache, the 5700G has just 16 MB of L3 to work with.
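The "about 5% slower" figure implies reference scores for the 5800X; a small back-of-the-envelope sketch (the implied 5800X scores below are derived from the article's claim, not measured):

```python
# The 5700G's CPU-Z scores are from the leak; the 5800X reference values
# below are *inferred* from the article's "about 5% slower" claim.
st_5700g, mt_5700g = 613.6, 6292.2
gap = 0.05

st_5800x_implied = st_5700g / (1 - gap)
mt_5800x_implied = mt_5700g / (1 - gap)
print(round(st_5800x_implied, 1))  # 645.9
print(round(mt_5800x_implied, 1))  # 6623.4
```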

Nevertheless, that's an incredible showing for an APU. It's only a shame that the leaker chose not to test the integrated GPU, too. They did, however, experiment with overclocking, and reportedly achieved a stable 4.7 GHz that put performance almost on par with the 5800X.

Like the 5800 and 5900, there's a good chance that the 5700G will be exclusive to OEMs. Given how unconventional it is, though, that would be a disappointing outcome. If any of the above are coming to shelves, they'll probably be announced on Tuesday during AMD's CES 2021 keynote.


 
I get that AMD is very afraid of cannibalizing their higher-end CPUs, but I find it worrisome that they take a "We know better than you" attitude towards consumers, not unlike Apple: yes, most gamers will never actually get any use out of the integrated graphics, since almost all of them will have dedicated GPUs.

However, it's just simple convenience: even if it's barely ever used, the fact that it is there would be extremely convenient for troubleshooting purposes or very advanced uses like GPU passthrough and such. They're obviously making them, so they know all about these uses and the ones corporate office users require, so why not sell them to regular consumers?

I would like to be the one to decide if I want to trade the extra performance for that peace of mind of having a working, integrated graphics module there. I, the consumer, would like to make that choice instead of AMD making it for me arbitrarily.
 
It has been a while since the 5900 release, and I still cannot get my hands on one, unless I agree to pay about twice the price. I mean, WTF is going on? This never happened before. The CPU market has devolved, it seems.

This SKU iirc started with the 3900x, and even then the release was far more staggered. The way their production works, they probably just haven't had enough time to build inventory of perfect or near-perfect die yields. This is why the 5600x is comparatively easy to find: it can be a 5950x with more than half of the cores ending up as failed yields.

I think it was a mistake to try to release the 5900x and 5950x simultaneously with the rest of the CPUs: they could have easily stopped with the 7 series. It would have made the 5800x way more desirable, and they could have had something ready to counter an eventual Intel launch several months down the line. It seems like it was more important for them to just attempt to bury Intel by taking the performance lead on paper and have all these review outlets give them tons of free publicity, carelessly ignoring the fact that the product was simply not a real launch at all.
 
It has been a while since the 5900 release, and I still cannot get my hands on one, unless I agree to pay about twice the price. I mean, WTF is going on? This never happened before. The CPU market has devolved, it seems.

My understanding is that all chip makers are having a hard time getting certain materials, alongside console SoCs using up a lot of the wafer orders right now, which means we might see shortages all the way through 2021. Not good if true.
 
I would love to get one for my wife's PC. She occasionally games with my daughter and me, but doesn't really need a discrete GPU like us. This would be perfect for her.
 
It has been a while since the 5900 release, and I still cannot get my hands on one, unless I agree to pay about twice the price. I mean, WTF is going on? This never happened before. The CPU market has devolved, it seems.
Unprecedented demand, simple as that. We've seen Black Friday-level sales for over half a year now. The foundries do not have the capacity to keep up with such high demand. And Ryzen 5000 is already in high demand in the DIY market because there is nothing Intel wins at anymore, and as Mindfactory showed, tens of thousands of Ryzen 5000 CPUs have been sold in Germany alone. There's just not enough supply to keep up with the entire DIY market right now.

It's not just high-end products. As others have mentioned, all the console SoCs are made on the same node as the RX 6000 GPUs and Ryzen 5000 CPUs, AND Ryzen 4000 and 5000 APUs, AND any remaining production of Ryzen 3000 CPUs. There's simply not enough room on the node for everything AMD is making. Imagine how much worse it would be if Nvidia were also using it.

The same factories in China churn out multiple different products. Normally when a product is a huge hit the factory will switch production to prioritize that product. But when everything from keyboards to chairs, desktops to laptops, graphics cards to CPU coolers, are ALL surging in demand, there is no prioritization you can do. On top of that there is the shipping issue, international shipping is running at reduced capacity and demand is surging, leading to vastly increased prices AND reduced shipments, leading to massive shortages and price hikes.

None of this will improve for a while yet. It will take 6+ months for vaccination efforts to produce noticeable results, then supply lines have to be brought back up to full speed, and then there will be backordered demand which will take months to sort out. Welcome to a pandemic economy.
 
I get that AMD is very afraid of cannibalizing their higher-end CPUs, but I find it worrisome that they take a "We know better than you" attitude towards consumers, not unlike Apple: yes, most gamers will never actually get any use out of the integrated graphics, since almost all of them will have dedicated GPUs.

However, it's just simple convenience: even if it's barely ever used, the fact that it is there would be extremely convenient for troubleshooting purposes or very advanced uses like GPU passthrough and such. They're obviously making them, so they know all about these uses and the ones corporate office users require, so why not sell them to regular consumers?

I would like to be the one to decide if I want to trade the extra performance for that peace of mind of having a working, integrated graphics module there. I, the consumer, would like to make that choice instead of AMD making it for me arbitrarily.

Wasting silicon on an integrated GPU means fewer CPUs per wafer. A Zen 2 chiplet is around 75 mm2 and a Zen 3 chiplet around 80 mm2, while Renoir (the 8-core Zen 2 APU) is over 150 mm2. Basically, yields considered, AMD could make over two Ryzen chiplets for the same 7nm wafer consumption as one APU.

No wonder AMD really doesn't want those APUs to become mainstream just now. Of course, Ryzens also need an IO die, but there shouldn't be any shortage of 12/14nm capacity.
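The wafer math in the comment above can be sketched with a naive dies-per-wafer estimate. This is a gross simplification that ignores edge dies, scribe lines, and defect density, and the 300 mm wafer size is an assumption; only the die areas come from the comment:

```python
import math

# Naive dies-per-wafer estimate using the die areas quoted above
# (80 mm^2 Zen 3 chiplet vs ~150 mm^2 monolithic 8-core APU).
# Ignores edge losses, scribe lines, and defect density.
WAFER_DIAMETER_MM = 300  # standard wafer size; an assumption here

def dies_per_wafer(die_area_mm2: float) -> int:
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

chiplets = dies_per_wafer(80)   # Zen 3 chiplet
apus = dies_per_wafer(150)      # 8-core APU
print(chiplets, apus, round(chiplets / apus, 2))  # 883 471 1.87
```

Even before yield is factored in, the area ratio alone gives nearly two chiplets per APU, consistent with the comment's "over two, yields considered."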
 
This SKU iirc started with the 3900x and even then, the release was far more staggered. The way their production works they probably just haven't got enough time to build inventory of perfect or near perfect die yields. This is why the 5600x is comparatively easy to find because it can be a 5950x with more than half of the cores ending up as failed yields.

I think it was a mistake to try to release the 5900x and 5950x simultaneously with the rest of the cpus: they could have easily stopped with the 7 series. It would have made the 5800x way more desirable and they could have had something ready to counter an eventual intel launch several months down the line. It seems like it was more important for them to just attempt to bury intel by taking the performance lead on paper and have all these review outlets give them tons of free publicity carelessly, ignoring the fact that the product was simply not a real launch at all.

5600x is easy to find (even relatively)? Can you post a link to one?
 
It has been a while since the 5900 release, and I still cannot get my hands on one, unless I agree to pay about twice the price. I mean, WTF is going on? This never happened before. The CPU market has devolved, it seems.

I remember when the Ryzen 5000 series launched and thought there would be enough stock to drive down the prices of the 3000 series. I thought I'd be clever and buy a 3900X/XT for a dealio as a drop in replacement for my 2700X. After all, as a professional IT person, I'm a huge advocate of buying the previous generation to save massive amounts of money when equipping an entire facility with new hardware.

Boy oh boy was I wrong. If anything, I'm grateful I got my 2700X when I did.
 
The ruling class elites have no issues with availability. They have all the new tech sent to them for free.
 
I remember when the Ryzen 5000 series launched and thought there would be enough stock to drive down the prices of the 3000 series. I thought I'd be clever and buy a 3900X/XT for a dealio as a drop in replacement for my 2700X. After all, as a professional IT person, I'm a huge advocate of buying the previous generation to save massive amounts of money when equipping an entire facility with new hardware.

Boy oh boy was I wrong. If anything, I'm grateful I got my 2700X when I did.

I was thinking about doing the same thing but then decided to skip one gen to make the upgrade worth it, so right now the plan is to go 2700X -> 5900(X) eventually, but that'll be once prices drop a good bit below MSRP.

A new Xbox Series X and GPU come first, but I guess those will have to wait as well.
 
5600x is easy to find (even relatively)? Can you post a link to one?
Depends which market you're in. Checking the larger US retailers it seems like you're SOL, but here are links to German and UK retailers:

https://www.scan.co.uk/shop/computer-hardware/cpu-amd-desktop/3196/3197/3198
https://www.mindfactory.de/Highlights/AMD_Ryzen_Game_Bundle_FarCry6
https://m.alternate.de/AMD-Prozessoren

Yep. To clarify, this is exactly what I mean: the US market is terrible right now, but abroad the 5600x has been in stock. For me, the prices are always about 30% higher because of import tariffs, but there's no shortage here since these units were already imported. It's not like US retailers can import them back, since that would mean paying the 30% tariff and killing all of their margins and then some.
 
This SKU iirc started with the 3900x, and even then the release was far more staggered. The way their production works, they probably just haven't had enough time to build inventory of perfect or near-perfect die yields. This is why the 5600x is comparatively easy to find: it can be a 5950x with more than half of the cores ending up as failed yields.

I think it was a mistake to try to release the 5900x and 5950x simultaneously with the rest of the CPUs: they could have easily stopped with the 7 series. It would have made the 5800x way more desirable, and they could have had something ready to counter an eventual Intel launch several months down the line. It seems like it was more important for them to just attempt to bury Intel by taking the performance lead on paper and have all these review outlets give them tons of free publicity, carelessly ignoring the fact that the product was simply not a real launch at all.

“Performance lead on paper”

Yeah, Intel fanboy confirmed. They have the performance lead, full stop. The issue isn't a paper launch - it's incredibly high demand due to the pandemic, combined with the TSMC node competing with Big Navi and the consoles.
 
This SKU iirc started with the 3900x, and even then the release was far more staggered. The way their production works, they probably just haven't had enough time to build inventory of perfect or near-perfect die yields. This is why the 5600x is comparatively easy to find: it can be a 5950x with more than half of the cores ending up as failed yields.

You misunderstand how AMD puts their CPUs together. The 5600x is a single CCX with 2 of the 8 cores unused. That's only 1/4 of the cores with failed yields. AMD doesn't have a 16-core die, which is the only way you could end up with "more than half of the cores ending up as failed yields." The 5900X and 5950X use 2 CCXs to get 12 and 16 cores, respectively.
 
AMD's juggling act at the moment is "how can we make the most margin from the least silicon?"

It wouldn't surprise me if that's via price-hiked APUs: top-binned 8-core parts on the desktop, and leaner-running lesser bins with fewer cores for mobile.
 
I get that AMD is very afraid of cannibalizing their higher-end CPUs, but I find it worrisome that they take a "We know better than you" attitude towards consumers, not unlike Apple: yes, most gamers will never actually get any use out of the integrated graphics, since almost all of them will have dedicated GPUs.

However, it's just simple convenience: even if it's barely ever used, the fact that it is there would be extremely convenient for troubleshooting purposes or very advanced uses like GPU passthrough and such. They're obviously making them, so they know all about these uses and the ones corporate office users require, so why not sell them to regular consumers?

I would like to be the one to decide if I want to trade the extra performance for that peace of mind of having a working, integrated graphics module there. I, the consumer, would like to make that choice instead of AMD making it for me arbitrarily.

There is more than one category of gamer. Hardcore gamers will probably always have a dedicated GPU. I'm a casual gamer and the AMD APUs suit my needs just fine. I have a 2200G with 8 CUs, which has proven sufficient for most of my games. Now, I'm not saying I couldn't use more power for Unreal Engine games, but that's why a 5700G would be very enticing for me if they doubled the CUs on the APU. I would definitely sell my 2200G and jump to the 5700G for 16 CUs.
 
There is more than one category of gamer. Hardcore gamers will probably always have a dedicated GPU. I'm a casual gamer and the AMD APUs suit my needs just fine. I have a 2200G with 8 CUs, which has proven sufficient for most of my games. Now, I'm not saying I couldn't use more power for Unreal Engine games, but that's why a 5700G would be very enticing for me if they doubled the CUs on the APU. I would definitely sell my 2200G and jump to the 5700G for 16 CUs.
The problem with THAT is that you could double the CUs of the 2200G and you'd still only get marginally better performance, unless you add another level of on-die cache or figure out some other way to get much higher memory bandwidth.
 
The problem with THAT is that you could double the CUs of the 2200G and you'd still only get marginally better performance, unless you add another level of on-die cache or figure out some other way to get much higher memory bandwidth.

Right. But I'm not talking about doubling the CUs on a 2200G. I'm talking about AMD doubling them up on a 5700 APU. They already give you 11 CUs on a Ryzen 2400G. Going up to 16 CUs on Zen 3 should most certainly offer performance gains. But if you can go into a bit more detail about why you think this would offer no performance improvements, I'm all for learning something new. The max supported memory speed on the 2200G is 2933MHz. The 3000 series went up to a max of 3200MHz. I'm guessing the 5000 series will see a memory speed bump as well.
 
Going up to 16 CUs on Zen 3 should most certainly offer performance gains. But if you can go into a bit more detail about why you think this would offer no performance improvements, I'm all for learning something new.
Check out our review of the Ryzen 7 4800U and look carefully at the gaming tests. Here's one such example:

[Gaming benchmark chart from the Ryzen 7 4800U review]

Now compare the 4800U (15W) results to those for the 3700U (15W) - see how it's more than twice as fast?

The GPU in the 3700U is the Radeon Vega 10 Mobile: it has 640 SPs, 40 TMUs, and 8 ROPs, using the GCN 5.0 architecture. The 4800U uses a Radeon Graphics 512SP GPU and as the name suggests, it has 512 SPs, 38 TMUs, and 8 ROPs. It uses GCN 5.1, which is as near-as-makes-no-difference the same as GCN 5.

The 4800U's GPU does have a 25% higher boost clock than the 3700U's and that helps to offset the 20% fewer SPs, but it's certainly not enough to account for the 4800U being twice as fast as the 3700U. So where is that performance coming from?

Firstly, the 4800U tested in that review is using DDR4-4266, whereas the 3700U only sports DDR4-2400. That's a 78% increase in peak memory bandwidth, which goes a long way to helping matters - but only up to a certain point. After that, the rest of the gains are coming from the fact that the 4800U is using the Zen 2 architecture, has more cores, and a higher boost clock.

In other words, when it comes to mobile CPUs and APUs, there are enormous gains to be found improving system and internal bandwidth, as well as the overall IPC of the CPU. Integrated GPUs are so limited by their number of shader units, TMUs, and ROPs that simply going from 11 CUs to 16 CUs, with no other changes, won't make much of a difference (if any).
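The 78% bandwidth figure above falls straight out of the memory transfer rates. A minimal sketch, assuming dual-channel, 64-bit DDR4 in both machines (a reasonable default, but an assumption here):

```python
# Peak DDR4 bandwidth scales linearly with the transfer rate, so the ~78%
# figure above falls straight out of the memory speeds. Dual-channel,
# 64-bit DDR4 is assumed for both machines.
def ddr4_peak_gbs(mts: int, channels: int = 2, bus_width_bits: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

bw_4800u = ddr4_peak_gbs(4266)  # DDR4-4266, as tested in the review
bw_3700u = ddr4_peak_gbs(2400)  # DDR4-2400
print(f"{bw_4800u:.1f} GB/s vs {bw_3700u:.1f} GB/s")  # 68.3 GB/s vs 38.4 GB/s
print(f"+{(bw_4800u / bw_3700u - 1) * 100:.2f}%")     # +77.75%
```

Since both bandwidth figures use the same channel count and bus width, the ratio reduces to 4266/2400, which is where the 78% comes from.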
 
Check out our review of the Ryzen 7 4800U and look carefully at the gaming tests. Here's one such example:

[Gaming benchmark chart from the Ryzen 7 4800U review]

Now compare the 4800U (15W) results to those for the 3700U (15W) - see how it's more than twice as fast?

The GPU in the 3700U is the Radeon Vega 10 Mobile: it has 640 SPs, 40 TMUs, and 8 ROPs, using the GCN 5.0 architecture. The 4800U uses a Radeon Graphics 512SP GPU and as the name suggests, it has 512 SPs, 38 TMUs, and 8 ROPs. It uses GCN 5.1, which is as near-as-makes-no-difference the same as GCN 5.

The 4800U's GPU does have a 25% higher boost clock than the 3700U's and that helps to offset the 20% fewer SPs, but it's certainly not enough to account for the 4800U being twice as fast as the 3700U. So where is that performance coming from?

Firstly, the 4800U tested in that review is using DDR4-4266, whereas the 3700U only sports DDR4-2400. That's a 78% increase in peak memory bandwidth, which goes a long way to helping matters - but only up to a certain point. After that, the rest of the gains are coming from the fact that the 4800U is using the Zen 2 architecture, has more cores, and a higher boost clock.

In other words, when it comes to mobile CPUs and APUs, there are enormous gains to be found improving system and internal bandwidth, as well as the overall IPC of the CPU. Integrated GPUs are so limited by their number of shader units, TMUs, and ROPs that simply going from 11 CUs to 16 CUs, with no other changes, won't make much of a difference (if any).

Thank you for the very detailed response. My understanding is that a CU is composed of shaders, TMUs and ROPs. I'm still not sure what to make of these responses. I don't know if you're trying to say that die space is limited so much that AMD cannot actually increase the number of CUs on the 5000 series of chips, or if you're saying that they could produce an APU with 16 CUs but it would not help because... 16 x 64 SPs is 1024 shaders.
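For reference, the SP arithmetic in this exchange rests on GCN's fixed ratio of 64 stream processors per compute unit:

```python
# GCN compute-unit arithmetic: each GCN CU contains 64 stream processors.
# (TMUs and ROPs scale separately from the SP count.)
SPS_PER_CU = 64

for cus in (8, 11, 16):  # 2200G, 2400G, and the hypothetical 16-CU APU
    print(f"{cus} CUs -> {cus * SPS_PER_CU} SPs")
```

This shows why 16 CUs works out to 1024 shaders, and why the 2200G's 8 CUs correspond to 512 SPs.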
 