AMD Radeon RX 6000 GPUs revealed in macOS Big Sur code: up to 5120 cores, 2.5 GHz

You can use existing Navi data and the numbers provided to get a rough estimate:

(assuming 0 architectural improvements over RDNA1)

The 5700 XT has 2560 shaders, whereas this new card has 5120, exactly double the number of shaders.

The clock speeds are increased from 1,905 MHz to 2,205 / 2,500 MHz. Taking the lower figure, that's roughly a 15.7% increase to the clocks, assuming the clocks given are boost and not base. If those are base clocks, boost clocks on the new Navi 2 chips could be 200 MHz+ higher. I'm trying to get as conservative a number here as possible, though.

So with the 100% increase in shaders, let's say performance only increases 90%. Relative to the RX 5700 XT, that puts performance at 190%. Factor the clock increase into that and you get roughly 220% of the RX 5700 XT.

The 5700 XT sits at about 65% of the 3080, so if these rumored specs are correct, that would put this card significantly above the 3080, and at much lower power consumption. Mind you, that's assuming zero architectural improvements over RDNA1 as well.

https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/34.html
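If it helps, here's that arithmetic in one place as a quick Python sketch. The 90% shader-scaling guess, the leaked clocks, and the 65% TechPowerUp figure are assumptions, not measurements:

```python
# Rough estimate from the reasoning above; every input is an assumption.
shaders_old, shaders_new = 2560, 5120      # RX 5700 XT vs the leaked part
clock_old, clock_new = 1905, 2205          # MHz; treating both as boost clocks

shader_scaling = 0.90                      # assume only 90% of the doubled shaders translates to performance
shader_factor = 1 + (shaders_new / shaders_old - 1) * shader_scaling   # 1.90
clock_factor = clock_new / clock_old       # ~1.157

relative_to_5700xt = shader_factor * clock_factor
print(f"~{relative_to_5700xt:.0%} of an RX 5700 XT")   # ~220%

rel_5700xt_vs_3080 = 0.65                  # TechPowerUp 4K relative-performance chart
print(f"~{relative_to_5700xt * rel_5700xt_vs_3080:.0%} of an RTX 3080")   # ~143%
```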

Of course, the increase in the number of shaders is nothing surprising; the 5700 XT always had a very small die, and the option for a larger die has always been there for AMD. People do not give the original Navi enough credit: it was very competitive with Nvidia, AMD just didn't release any high-end options. Clock speeds up to 2.2 GHz might as well be confirmed, given the PS5 is running at that value and it isn't the best-binned silicon.

All I have to say is there's a reason Nvidia launched the 3080 at $700. Nvidia doesn't lower prices unless it thinks it has to.



Follow the link above; the 5500 XT is indeed close to half the performance of the 5700 XT.

I hope you are right with these calculations, competition is good.
 
Trust me, I agree and think the price is crazy, but it will still hold the title of fastest gaming GPU, so there is nothing to be scared of.



Lmao, my system is a Ryzen 3800X, my GPU is an RX 580, and I'm upgrading to RDNA2, yet I'm an Nvidia fanboy now... do tell?

It is indeed a niche card; that, however, doesn't change the fact it is at the top.

Try looking at the facts instead of being so quick to call people names.

You are missing the point that AMD is not aiming for a $1500 consumer card... how hard is that to comprehend? Those are the facts, and the 3090 is not Nvidia's bread and butter, so if AMD does well it could take market share. I'm pretty sure NV is more worried about that than about the performance crown. Does that make you LOL again like a fanboy?
 
All AMD has to do is make another RX 480/580 replacement for the £250/$300 price range. I always thought the 5700 XT was overpriced, so if we can get that level of performance for that price, I'd be happy.
 
5700XT was very close to 2080 (2070S to those benchmark purists) and proved a better value card, coming out of nowhere with a much lower price tag.

I wouldn't call a card with a few frames' lead a high-performance card while denouncing others as "budget".

As mentioned, the 3080's price was dropped a lot for a reason (despite it trouncing the 2080 Ti both price-wise and performance-wise).

And recent benchmarks also showed us how absolutely terrible the value of the 3090 is compared to the 3080 itself. If gamers buy it thinking it must be the best card just because it's the most expensive one, no one is more clueless than they are.

It was not close to the 2080... The 2070S is the direct competitor, and the 5700 XT is still a few percent slower than that, and about 9% slower than the 2080. If you acknowledge that, why even mention the 2080? Quite desperate...
https://www.techspot.com/review/1870-amd-radeon-rx-5700/

Besides, the 2080 was always the worst value; you either got a 2070, a 2070 Super later, a 1080 Ti, or a 2080 Ti. Not to mention the 2080 arrived nearly a year earlier than the 5700 XT, which still couldn't match it.
And to crown it all, the drivers were a disaster. I updated my rig from a 4790K/970 to a 3900X/5700 XT. The 5700 XT died within 24 hours, I had to RMA it, and the system crashed a lot more than my old one.
And the 5700 XT was the best AMD could offer...
3090 should have been named Titan.
 
I mean, I'm completely prepared to buy the fastest card out there, the 3090 included, but I don't plan to buy anything until April next year. I wonder if AMD can actually surpass the 3080? If they bring the pricing down a bit, or if it can be overclocked to 3090 levels, I'd buy one.
 
My concern is the predicted power consumption: the 5700 XT was already at ~240 W, so if we double the shader count AND boost the clocks, they must pull something out of the hat efficiency-wise, or the card will become a 500 W radiator (don't laugh, I have two of those mini-heaters in the house, for the missus).

Eagerly waiting for the launch and to see what's what :)
 
@Vulcanproject
My point was, you can't deduce performance estimates based on shader (compute unit) count alone. With the rest I cannot help; you are mostly wrong, but that's, again, your point of view, unfounded and gibberish.

PS. My English is fine.
 
The 5700 XT was already at ~240 W, so if we double the shader count AND boost the clocks, they must pull something out of the hat efficiency-wise, or the card will become a 500 W radiator
Well, pull 15% off for the N7+ node, but that still only takes off 75 W or so. I remember some speculation a few months back that it might be done on TSMC's N5 process instead. Has anyone heard an update on that?
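For what it's worth, here's the back-of-envelope version of that worry as a quick Python sketch; every input (typical board power, leaked clocks, a guessed ~15% node saving) is an assumption rather than a spec:

```python
# Back-of-envelope power estimate; all inputs are assumptions, not specs.
# Assumes power scales linearly with shader count and with clock speed
# (real silicon scales worse than linearly with frequency because of voltage).
rx5700xt_power_w = 240            # typical board power of the RX 5700 XT
shader_scale = 5120 / 2560        # doubled shader count
clock_scale = 2205 / 1905         # leaked clock vs the 5700 XT's listed clock
node_saving = 0.15                # assumed saving from the N7+ node

naive_power = rx5700xt_power_w * shader_scale * clock_scale
with_node_saving = naive_power * (1 - node_saving)

print(f"naive scaling: {naive_power:.0f} W")                # ~556 W
print(f"with ~15% node saving: {with_node_saving:.0f} W")   # ~472 W
```

So yes, unless RDNA2 brings a genuine perf-per-watt improvement on top of the node, the numbers only pencil out to something silly.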
 
From AMD themselves:


If one has issues with the terminology used, take it up with them.

AMD’s terminology isn’t the issue. It’s actually quite clear. You have a responsibility to not misinform your readers. If you choose to continue to do so that’s your call.

I’ll just leave this here for those it would benefit.

Scalar SIMD architectures:
GCN: 64-wide SIMD
RDNA: 32-wide SIMD (or 64-wide in GCN compatibility mode)
Fermi, Maxwell, Pascal, Turing, Ampere: 32-wide SIMD

Vector SIMD architectures also known as VLIW (what the author claims RDNA to be):
All AMD GPUs before GCN and all Nvidia GPUs before G80.
 
No need, the 3090 is a niche card, not the mainstream gaming one. They are targeting the 3080, 3070, 2080 Ti levels. Nice fanboy try though...
This is the EXACT SAME ARGUMENT people used to excuse AMD only going up to 1060 performance with the 480 and leaving the 1070/1080 alone. "Well, they're a niche; the majority of the market is 480 level."

Nvidia then promptly sold more 1070s than AMD did Polaris GPUs, made a boatload of money, and eventually wound up with 80% dGPU market share.

Believe it or not, enthusiast communities matter. The 3090 will sit at the top of the charts and Nvidia will hold the title of "fastest GPU", and that will influence people's purchasing decisions. If the 6900 XT is below the 3080, then Nvidia will hold the #1 and #2 spots. That will have an even greater influence.

The "leave the high end to Nvidia" plan has so far been a total failure. Hopefully AMD has something up its sleeve to get the 6900 XT to the 3080's performance level.
 
No need, the 3090 is a niche card, not the mainstream gaming one. They are targeting the 3080, 3070, 2080 Ti levels. Nice fanboy try though...
If they get close to 2080 Ti performance, then I will take notice. I'm still skeptical they will even get that far.
 
This is the EXACT SAME ARGUMENT people used to excuse AMD only going up to 1060 performance with the 480 and leaving the 1070/1080 alone. "Well, they're a niche; the majority of the market is 480 level."

Nvidia then promptly sold more 1070s than AMD did Polaris GPUs, made a boatload of money, and eventually wound up with 80% dGPU market share.

Believe it or not, enthusiast communities matter. The 3090 will sit at the top of the charts and Nvidia will hold the title of "fastest GPU", and that will influence people's purchasing decisions. If the 6900 XT is below the 3080, then Nvidia will hold the #1 and #2 spots. That will have an even greater influence.

The "leave the high end to Nvidia" plan has so far been a total failure. Hopefully AMD has something up its sleeve to get the 6900 XT to the 3080's performance level.

Agreed.

The clickbait title is irrelevant when they most likely won't have a card to challenge the top spot. I'm hoping the 6900 XT will equal 3080 levels of performance; however, if the rumored launch price is $549, they are expecting it to be below that level.
 
This is the EXACT SAME ARGUMENT people used to excuse AMD only going up to 1060 performance with the 480 and leaving the 1070/1080 alone. "Well, they're a niche; the majority of the market is 480 level."

Nvidia then promptly sold more 1070s than AMD did Polaris GPUs, made a boatload of money, and eventually wound up with 80% dGPU market share.

Believe it or not, enthusiast communities matter. The 3090 will sit at the top of the charts and Nvidia will hold the title of "fastest GPU", and that will influence people's purchasing decisions. If the 6900 XT is below the 3080, then Nvidia will hold the #1 and #2 spots. That will have an even greater influence.

The "leave the high end to Nvidia" plan has so far been a total failure. Hopefully AMD has something up its sleeve to get the 6900 XT to the 3080's performance level.

I was not disagreeing about the history of AMD vs. Nvidia. If you think the performance crown of a $1500 3090 will sway gamers away from the 6K series if it is competitive with the 3070-3080, etc., then I don't know what to tell you. Price is king, and if it is competitive, cheaper, and cooler, they will take market share away. Enthusiasts may matter, but they are not the bread and butter of the bottom line. Will they? Who knows. Saying "NV should be scared" is hyperbole, but so is laughing and brushing off AMD's 6K series (at this time).
 
Come on, man! The reason the RX 5700 XT clocks are lower in the macOS drivers is that the soldered version of that card in current iMacs is clocked lower than the normal desktop cards! This is easy stuff! It also means that desktop clocks for RDNA 2 could be even higher than what's listed here.
 
This is the EXACT SAME ARGUMENT people used to excuse AMD only going up to 1060 performance with the 480 and leaving the 1070/1080 alone. "Well, they're a niche; the majority of the market is 480 level."

Nvidia then promptly sold more 1070s than AMD did Polaris GPUs, made a boatload of money, and eventually wound up with 80% dGPU market share.

Believe it or not, enthusiast communities matter. The 3090 will sit at the top of the charts and Nvidia will hold the title of "fastest GPU", and that will influence people's purchasing decisions. If the 6900 XT is below the 3080, then Nvidia will hold the #1 and #2 spots. That will have an even greater influence.

The "leave the high end to Nvidia" plan has so far been a total failure. Hopefully AMD has something up its sleeve to get the 6900 XT to the 3080's performance level.
This is delusional. The insane success of the Radeon HD 4000 & 5000 series (which didn't beat Nvidia on flagship performance, but CRUSHED them everywhere else [price, value, power efficiency, etc...] using small, efficient dies) says you don't know what the hell you're talking about. The reason Nvidia sold more 1070's than AMD's entire Polaris stack has more to do with Nvidia's pre-existing market dominance than anything else. Most especially in regards to OEM's/ODM's. But Nvidia still sold way, WAAAAAAAAAAAY more GTX 1050/Ti's & 1060's than they ever did 1070's (let ALONE 1080's).

Also, Polaris was a major financial success for AMD. They were widely acclaimed, selling literally every single GPU they could make from 2016-2017 & GAINED market share with the launch, not lost it. They wouldn't start REALLY bleeding market share until deep in the Vega days, when those cards ended up being about a year late & they started beating the dead Polaris horse far longer than they'd ever hoped to have to.

And any gamer with half a brain doesn't give a crap about the RTX 3090. The performance gains over the 3080 are embarrassing, and the pricing is an insult. The RTX 3080 is the only card that really matters when it comes to GAMING. Hence why Nvidia's calling it their gaming flagship. Nobody will give a crap if AMD is ≈10-15% behind the RTX 3090 if it costs ONE THIRD the price. That kind of performance margin is essentially impossible to notice while actually gaming (hence why the RTX 3090 is largely pointless to talk about in regards to the gaming market).
 
This is the EXACT SAME ARGUMENT people used to excuse AMD only going up to 1060 performance with the 480 and leaving the 1070/1080 alone. "Well, they're a niche; the majority of the market is 480 level."

Nvidia then promptly sold more 1070s than AMD did Polaris GPUs, made a boatload of money, and eventually wound up with 80% dGPU market share.

Believe it or not, enthusiast communities matter. The 3090 will sit at the top of the charts and Nvidia will hold the title of "fastest GPU", and that will influence people's purchasing decisions. If the 6900 XT is below the 3080, then Nvidia will hold the #1 and #2 spots. That will have an even greater influence.

The "leave the high end to Nvidia" plan has so far been a total failure. Hopefully AMD has something up its sleeve to get the 6900 XT to the 3080's performance level.

Don't waste your time. My original post was about the clickbait title, and with NV having the top card, the statement is invalid. The people arguing with you are trying to change the topic into a price/performance one to support their views.
 
If they get close to 2080 Ti performance, then I will take notice. I'm still skeptical they will even get that far.

I think it will be faster.

My guess is in between the 3080 and the 3070, and that's why we are going to see the $549 MSRP.

Once they release these and NV drops the 20 GB 3080, we will have a better idea of what the pecking order will be. If you can wait, Christmas time will probably be the best time to buy.
 
AMD’s terminology isn’t the issue. It’s actually quite clear. You have a responsibility to not misinform your readers. If you choose to continue to do so that’s your call.

I’ll just leave this here for those it would benefit.

Scalar SIMD architectures:
GCN: 64-wide SIMD
RDNA: 32-wide SIMD (or 64-wide in GCN compatibility mode)
Fermi, Maxwell, Pascal, Turing, Ampere: 32-wide SIMD

Vector SIMD architectures also known as VLIW (what the author claims RDNA to be):
All AMD GPUs before GCN and all Nvidia GPUs before G80.
Ampere's (and the others) SMs contain four execution blocks that are 32 threads wide, and these are dispatched to the 16 FP32 and 16 FP32+INT32 ALUs - what Nvidia calls CUDA cores. These are counted as individual shader units in their terminology, and they are indeed scalar in nature, because each one processes one instruction with one or two pieces of data, at most.

What AMD calls their shader units/ALUs is a combined bank of 32 sub-units: individually these are scalar too (because they process one element of a vertex, pixel, etc), but the unit as a whole is treated as a vector processor, because it will always work on 32 pieces of data.

AMD repeatedly states this, in all of their documentation, and has done since GCN first appeared - just as with the use of a dedicated scalar 'shader unit'/ALU. So once again, if you have an issue with this (and you clearly do), then take it up with AMD and Nvidia, as they're the ones saying all of this. If any misinformation is taking place, then they are the ones responsible for it.
 
Ampere's (and the others) SMs contain four execution blocks that are 32 threads wide, and these are dispatched to the 16 FP32 and 16 FP32+INT32 ALUs - what Nvidia calls CUDA cores. These are counted as individual shader units in their terminology, and they are indeed scalar in nature, because each one processes one instruction with one or two pieces of data, at most.

What AMD calls their shader units/ALUs is a combined bank of 32 sub-units: individually these are scalar too (because they process one element of a vertex, pixel, etc), but the unit as a whole is treated as a vector processor, because it will always work on 32 pieces of data.

AMD repeatedly states this, in all of their documentation, and has done since GCN first appeared - just as with the use of a dedicated scalar 'shader unit'/ALU. So once again, if you have an issue with this (and you clearly do), then take it up with AMD and Nvidia, as they're the ones saying all of this. If any misinformation is taking place, then they are the ones responsible for it.

It seems we agree then that both Nvidia's and AMD's SIMDs operate on scalar values in each SIMD "lane" and that this comment in the article is inaccurate.

"Nvidia's execution units (CUDA cores) are scalar in nature -- that means one unit carries out one math operation on one data component; by contrast, AMD's units (Stream Processors) work on vectors -- one operation on multiple data components."

The article implies that we should multiply AMD's shader count by some value because each shader executes more than one operation. This is incorrect.
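To make the counting concrete, here's a small Python sketch using publicly known shader counts; the per-unit groupings come from the two vendors' architecture documentation, not from the article:

```python
# Both vendors ultimately count individual FP32 lanes as their "shaders".
# Navi 10 (RX 5700 XT): 40 CUs, each containing two 32-wide SIMD units.
navi10_stream_processors = 40 * 2 * 32   # = 2560
# GA102 as cut down for the RTX 3080: 68 SMs, each with 128 FP32 lanes.
ga102_cuda_cores = 68 * 128              # = 8704

print(navi10_stream_processors, ga102_cuda_cores)
# Neither figure needs an extra "vector width" multiplier: the 32-wide
# grouping is already baked into how the lanes are tallied.
```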
 
The article implies that we should multiply AMD's shader count by some value because each shader executes more than one operation. This is incorrect.
First of all, the article's primary point was that you cannot directly compare CUDA cores to stream processors. This is correct. Secondly, AMD's terminology may be a bit loose, but it is not incorrect to say that one operation affects multiple data components. From the RDNA instruction set reference:

Vector-memory (VM) operations transfer data between the VGPRs and buffer objects in memory through the texture cache (TC). Vector means that one or more piece of data is transferred uniquely for every thread in the wavefront, in contrast to scalar memory reads, which transfer only one value that is shared by all threads in the wavefront.

There are also vector ALU operations:

Vector ALU instructions (VALU) perform an arithmetic or logical operation on data for each of 32 or 64 threads....

Also, by using the packed-math format, you can treat each of those input values as a pair of values, breaking each input scalar into a two-part vector (see the VOP3P encoding for details).
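As a purely conceptual illustration of that packed-math point (NumPy standing in for what a single lane does; this is not ISA code, just two fp16 values sharing one 32-bit register):

```python
import numpy as np

# Two fp16 values packed into one 32-bit register slot, VOP3P-style.
a = np.array([1.5, 2.25], dtype=np.float16)
b = np.array([0.5, 4.00], dtype=np.float16)

packed_a = a.view(np.uint32)[0]   # both halves live in a single dword
packed_b = b.view(np.uint32)[0]

# A packed-math instruction applies one operation to both halves at once;
# here the elementwise add stands in for that behaviour.
result = a + b
print(hex(packed_a), hex(packed_b), result)   # -> [2.0, 6.25]
```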
 
First of all, the article's primary point was that you cannot directly compare CUDA cores to stream processors. This is correct. Secondly, AMD's terminology may be a bit loose, but it is not incorrect to say that one operation affects multiple data components. From the RDNA instruction set reference:

Vector-memory (VM) operations transfer data between the VGPRs and buffer objects in memory through the texture cache (TC). Vector means that one or more piece of data is transferred uniquely for every thread in the wavefront, in contrast to scalar memory reads, which transfer only one value that is shared by all threads in the wavefront.

There are also vector ALU operations:

Vector ALU instructions (VALU) perform an arithmetic or logical operation on data for each of 32 or 64 threads....

Also, by using the packed-math format, you can treat each of those input values as a pair of values, breaking each input scalar into a two-part vector (see the VOP3P encoding for details).

If the primary point was to say that a CUDA core != a stream processor, there are better ways to say that than to imply that a stream processor does more than a CUDA core and is subject to some multiplier effect (grossly inaccurate).

I like this site and think it has the potential to be a great resource, as so many of the good tech sites have died off. Providing accurate info on complex topics is one of the things that separates good sites from the riff-raff. If you think it's beating a dead horse to address such a fundamentally incorrect statement about GPU performance, then you can feel free not to reply.

Btw, your comments on vector instructions and packed math are equally applicable to Nvidia's SIMD architectures and are irrelevant to the comment in the article.
 
This is delusional.
pot, meet kettle

The insane success of the Radeon HD 4000 & 5000 series (which didn't beat Nvidia on flagship performance, but CRUSHED them everywhere else [price, value, power efficiency, etc...] using small, efficient dies) says you don't know what the hell you're talking about.
Whataboutism at its finest.
The reason Nvidia sold more 1070's than AMD's entire Polaris stack has more to do with Nvidia's pre-existing market dominance than anything else.
So you agree with me? Because that was kinda my point here: having the top one or two GPUs on benchmarking sites is great marketing that sways consumer decisions.
Most especially in regards to OEM's/ODM's. But Nvidia still sold way, WAAAAAAAAAAAY more GTX 1050/Ti's & 1060's than they ever did 1070's (let ALONE 1080's).

I didn't say that Nvidia didn't sell lots of 1060s, so I'm not sure why you're bringing that up.

Also, Polaris was a major financial success for AMD. They were widely acclaimed, selling literally every single GPU they could make from 2016-2017 & GAINED market share with the launch, not lost it. They wouldn't start REALLY bleeding market share until deep in the Vega days, when those cards ended up being about a year late & they started beating the dead Polaris horse far longer than they'd ever hoped to have to.
Polaris sold well; again, I never said it didn't. But the 1070 had more users on Steam than the entire Polaris stack thrown together, and each of those 1070s had higher margins than any AMD card. Those margins could only be kept because AMD didn't bother competing with the card, and handed all that cash to Nvidia. AMD could have made some decent $$$ there and appealed to the enthusiast market at the same time if they had made a larger Polaris chip instead of pulling a 3dfx and pouring money into the failure that was Vega.

And any gamer with half a brain doesn't give a crap about the RTX 3090. The performance gains over the 3080 are embarrassing, and the pricing is an insult. The RTX 3080 is the only card that really matters when it comes to GAMING.
Translation: "WAH I CANT AFFORD THE 3090 THEREFORE IT IS LITERALLY HORRIBLE AND NOBODY DECENT CARES ABOUT IT EITHER"

Nobody will give a crap if AMD is ≈10-15% behind the RTX 3090 if it costs ONE THIRD the price. That kind of performance margin is essentially impossible to notice while actually gaming (hence why the RTX 3090 is largely pointless to talk about in regards to the gaming market).
And yet, it seems having that tiny lead has allowed Nvidia to outsell AMD consistently, by large margins, even when they have a failure of a product like Thermi.

Almost like performance enthusiasts care about performance above all else, and consumers will see the top card(s) belonging to Nvidia and thus assume that Nvidia is faster, which is a pretty simple concept called "mindshare". You'd be able to see this if you cleared the spittle and foam from your monitor and read the comments instead of flying into a furious, red-herring-fueled typing rage.
 