AMD Radeon RX 6000 GPUs revealed in macOS Big Sur code: up to 5120 cores, 2.5 GHz

mongeese

Staff
Highly anticipated: One brave Redditor who trawled through the deep mines of macOS Big Sur code has uncovered preliminary specifications for AMD’s upcoming Navi 21, Navi 22, and Navi 23 GPUs. Although the information isn’t entirely precise, it’s sufficient to conclude that these will be very, very powerful GPUs.

Listed plainly and clearly within the macOS code is the number of compute units each GPU will have: Navi 21 will have 80, Navi 22 will have 40, and Navi 23 will have 32. Assuming that each compute unit corresponds to 64 shaders, the GPUs will have 5120, 2560, and 2048 shaders, respectively. The clock speeds for each GPU are a little less well defined, and we’ll explain why in a second, but the driver lists speeds from 2 GHz to 2.5 GHz.
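As a quick sanity check, here is that compute-unit arithmetic sketched out – the 64-shaders-per-CU figure is the assumption stated above, not something the driver spells out:

```python
# Shader counts implied by the CU figures in the Big Sur driver,
# assuming RDNA's usual 64 shaders per compute unit.
SHADERS_PER_CU = 64

for name, cus in [("Navi 21", 80), ("Navi 22", 40), ("Navi 23", 32)]:
    print(f"{name}: {cus} CUs x {SHADERS_PER_CU} = {cus * SHADERS_PER_CU} shaders")
# Navi 21: 80 CUs x 64 = 5120 shaders
# Navi 22: 40 CUs x 64 = 2560 shaders
# Navi 23: 32 CUs x 64 = 2048 shaders
```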

There’s also a mention of a Navi 31 GPU with 80 compute units and 5120 shaders, but there’s no other info on that chip yet – that one’s still a year from release.

Radeon RX 6000

           Navi 21               Navi 22          Navi 23
Codename   Sienna Cichlid        Navy Flounder    Dimgrey Cavefish
Shaders    5120                  2560             2048
Clock      2050 MHz → 2200 MHz   2500 MHz         n/a
TDP        200 W → 238 W         170 W            n/a

Our heroic Redditor extracted all this information from the “AmdRadeonX6000HwServices” file in the newest beta of macOS 11 Big Sur. The beta is publicly available, so this data is easily verifiable – the only caveats are the possibility that AMD provided prototype information to Apple, or that the architecture has changed so significantly that the terms we’re used to now mean something else, but those are unlikely scenarios.

Clock speeds and TDPs usually aren’t finalized until just before release, so it’s best to treat these values as rough indications of what the processors are capable of. The table below shows that the driver’s specs for Navi 10 disagree slightly with the specifications of the RX 5700 XT, the current flagship Navi 10 GPU: the retail card’s clocks and TDP are both higher.

Radeon RX 5000

          Navi 10 / RX 5700 XT   Navi 14 / RX 5500 XT
Shaders   2560                   1536
Clock     1400 MHz / 1605 MHz    1900 MHz / 1607 MHz
TDP       180 W / 225 W          110 W / 130 W

Although 5120 shaders is a lot fewer than the 10,000 or so CUDA cores on Nvidia’s newest GPUs, AMD and Nvidia take markedly different approaches to their unified shader units, even though a lot of the terminology seems to be the same.

Nvidia's execution units (CUDA cores) are scalar in nature, meaning one unit carries out one math operation on one data component; by contrast, AMD's units (Stream Processors) work on vectors, applying one operation to multiple data components, and keep a single dedicated unit for scalar operations. Further explanation is beyond the scope of this article, but you can check out our deep dive, Navi vs. Turing: An Architecture Comparison, for more.
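As a loose illustration of the distinction (a NumPy analogy, not actual GPU code), a scalar unit steps through one element per operation, while a vector unit applies a single operation across many elements at once:

```python
import numpy as np

a = np.arange(8, dtype=np.float32)
b = np.ones(8, dtype=np.float32)

# Scalar style: one operation on one data element at a time.
out_scalar = np.empty_like(a)
for i in range(len(a)):
    out_scalar[i] = a[i] + b[i]

# Vector (SIMD) style: one operation applied to many elements at once.
out_vector = a + b

assert (out_scalar == out_vector).all()
```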

Obviously, there’s no telling exactly how fast Big Navi is going to be compared to Nvidia's RTX 30 series, for now at least. AMD is set to launch Radeon RX 6000 GPUs on October 28.


 
Surely it'll be twice as fast as a 5700 XT with these shader counts.

It's impossible to know for sure with the changes AMD will have made for RDNA 2 and the sustainable boost clocks. However, it's gotta be a safe bet.

That would put it well over the top of a 2080 Ti but probably short of a 3080.
 

Going with that logic, the 5500, with half the shaders of the 5700, should only be half as fast, right? smh...
 
It says CUDA cores specifically. What's your point?
Perhaps I'm being overly picky, but the sentence "Nvidia's execution units (CUDA cores) are scalar" seems to imply that the only execution units which exist are the scalar CUDA cores.
 
So the article’s rider is “Nvidia, be scared” yet it admits at the end that there is no way to compare it to the 3000 series yet...
Could this be clickbait?!?
It says "there’s no telling exactly how fast Big Navi is going to be compared to Nvidia's RTX 30 series" (my emphasis in both quotes). AMD is unlikely to change the CU structure in the shader engines, with the new chips (for example, the PS5, XBSX, and XBSS all use 64 shader core CUs) so one can make some comparisons: at 2.2 GHz, an 80 CU RDNA 2 chip would hit 22.5 FP32 TFLOPS and 45.1 FP16 TFLOPS, as theoretical peaks.

That's roughly 25% less and 51.4% more than a 3080, respectively, and with a TDP of less than 240 W (the RX 5700 XT is 225 W), this suggests that an article pertaining to Navi 21 can justifiably carry the lighthearted tag of 'Nvidia, be scared.'
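For anyone who wants to check the math, here it is spelled out – a sketch under the same assumptions as above (64 shaders per CU, two FP32 ops per shader per clock, double-rate FP16), with the 3080's published ~29.8 FP32 TFLOPS as the comparison point:

```python
# Theoretical peaks for an 80 CU RDNA 2 part at 2.2 GHz, assuming
# 64 shaders per CU, 2 FP32 ops per shader per clock (FMA), and
# double-rate FP16 carried over from RDNA 1.
shaders = 80 * 64                              # 5120
clock_ghz = 2.2

fp32_tflops = shaders * 2 * clock_ghz / 1000   # ~22.5
fp16_tflops = 2 * fp32_tflops                  # ~45.1

rtx3080_fp32 = 29.77   # 8704 cores x 2 ops x 1.71 GHz boost
print(f"FP32: {fp32_tflops:.1f} TFLOPS ({fp32_tflops / rtx3080_fp32 - 1:+.1%} vs 3080)")
print(f"FP16: {fp16_tflops:.1f} TFLOPS ({fp16_tflops / rtx3080_fp32 - 1:+.1%} vs 3080)")
```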

Naturally, the details in Big Sur could be complete junk, but it would be a very odd thing to put totally fake/misleading data into an OS database.
 
So the article’s rider is “Nvidia, be scared” yet it admits at the end that there is no way to compare it to the 3000 series yet...
Could this be clickbait?!?

Well... Budget gaming will be quite happy!!!

Hopefully it gives my AMD shares a boost.

You can use existing Navi data and the numbers provided to get a rough estimate:

(assuming 0 architectural improvements over RDNA1)

The 5700 XT has 2560 shaders whereas this new card has 5120, exactly double the number of shaders.

The clock speeds are increased from 1905 MHz to 2205 / 2500 MHz. That's a 15.7% increase to the clocks (using 2205 MHz), assuming the clocks given are boost and not base. If those are base clocks, boost clocks could be 200 MHz+ higher on the new Navi 2 chips. I'm trying to get as conservative a number here as possible, though.

So with the 100% increase in shaders, let's say performance only increases 90%. Relative to the RX 5700 XT, that puts performance at 190%. Now factor the clock increase into that and you get 219.9% relative to the RX 5700 XT.

The 5700 XT sits at 65% relative to the 3080; if these rumored specs are correct, that would put the new card significantly above the 3080, and at much lower power consumption. Mind you, that's assuming zero architectural improvements over RDNA 1 as well.

https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/34.html
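Spelled out as a calculation – every input here is the poster's stated assumption or the TechPowerUp figure, not a confirmed spec:

```python
# Back-of-the-envelope scaling under the assumptions above:
# - doubling shaders yields only a 90% gain (imperfect scaling)
# - clocks rise from 1905 MHz (5700 XT boost) to 2205 MHz
# - the RX 5700 XT sits at 65% of an RTX 3080 (TechPowerUp chart)
shader_gain = 1.90
clock_gain = 2205 / 1905              # ~1.157, a 15.7% increase

vs_5700xt = shader_gain * clock_gain  # ~2.199, i.e. 219.9% of a 5700 XT
vs_3080 = vs_5700xt * 0.65            # ~1.43, i.e. roughly 43% ahead of a 3080
print(f"{vs_5700xt:.0%} of a 5700 XT, {vs_3080:.0%} of a 3080")
```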

Of course, the increase in the number of shaders is nothing surprising; the 5700 XT always had a very small die, and the option for a larger one has always been there for AMD. People don't give the original Navi enough credit: it was very competitive with Nvidia, AMD just didn't release any high-end options. Clock speeds up to 2.2 GHz might as well be confirmed, given the PS5 runs at that value and it isn't the best-binned silicon.

All I have to say is there's a reason Nvidia launched the 3080 at $700. Nvidia doesn't lower prices unless it thinks it has to.

Going with that logic, the 5500, with half the shaders of the 5700, should only be half as fast, right? smh...

Follow the link above; the 5500 XT is indeed close to half the performance of the 5700 XT.
 
Not much need to have an answer for a $1500 video card.

Trust me, I agree and think the price is crazy, but it will still hold the title of fastest gaming GPU, so there is nothing to be scared of.

No need, the 3090 is a niche card, not the mainstream gaming one. They are targeting the 3080, 3070, and 2080 Ti levels. Nice fanboy try though...

Lmao, my system is a Ryzen 3800X and my GPU an RX 580, I'm upgrading to RDNA 2, and I'm an Nvidia fanboy now... do tell?

It is indeed a niche card. That, however, doesn't change the fact that it's at the top.

Try looking at the facts instead of being so quick to call people names.
 
How do supposed tech sites get basic things so wrong?

Each Navi shader core works on a scalar value, just like GCN and every Nvidia card since the G80. It does not work on multiple vector values at once. Where did the author get that nonsense?
 
In the past decade AMD has never delivered on pre-release hype for GPUs. I doubt this changes anything.
 
The 5700 XT was very close to the 2080 (the 2070 Super, for benchmark purists) and proved the better value card, coming out of nowhere with a much lower price tag.

I wouldn't call a card with a few frames' lead "high performance" while denouncing others as "budget".

As mentioned, the 3080's price dropped a lot for a reason (despite it trouncing the 2080 Ti both price-wise and performance-wise).

And recent benchmarks also showed us how absolutely terrible the value of the 3090 is compared to the 3080 itself. If gamers buy it thinking it will be the best card just because it's the most expensive card, no one can be more clueless than them.
 
Well, all I can say is that I hope AMD delivers some good competition at the high end. They've done a decent job in the budget-to-mainstream segments, but they haven't been able to take the performance crown for years.
 
Going with that logic, the 5500, with half the shaders of the 5700, should only be half as fast, right? smh...

Don't shake your head when your post doesn't make much sense.

Whatever your point was, a 5500 XT with a little more than half the shading and memory bandwidth performance of a 5700 XT is, in fact, a little more than half as fast.

Here we have a vague leak that claims to show a part with double the shaders, unknown overall shading or memory bandwidth performance, and unknown sustainable boost clocks.

It's probably not likely to hit extremely high speeds like 2.5 GHz, though, if the die is as big as it should be with that number of transistors at the claimed power consumption.

That would be some feat – an enormous step beyond what we've seen to be possible, including what we know of the architecture in the consoles. I think a realistic figure is more like 2 GHz.

All will no doubt be revealed in one month.
 