AMD Radeon RX 6000 GPUs revealed in macOS Big Sur code: up to 5120 cores, 2.5 GHz

Avro Arrow

Posts: 369   +388
Surely it'll be twice as fast as a 5700XT with these shader counts.

It's impossible to know for sure, given the changes AMD will have made for RDNA2 and whatever the sustainable boost clocks turn out to be. Still, it's got to be a safe bet.

That would put it well over the top of a 2080Ti but probably short of a 3080.
The top-tier card MIGHT be, but it's not that simple. Compute Units, like shaders, don't scale linearly unless you can keep them all working all the time. It's possible that the RX 6900 XT(X) is twice as fast as the RX 5700 XT, but that can't be confirmed just by looking at the number of CUs. Things like IPC, clock speed, cache size, cache latency, VRAM speed and VRAM bandwidth also have to be taken into account.
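To see why doubling CUs doesn't simply double performance, here's a crude roofline-style sketch. All the numbers are illustrative assumptions (the 5700 XT's 40 CUs and ~448 GB/s are public specs, but the `bytes_per_flop` intensity figure is made up for the example), not leaked RDNA2 specs:

```python
# Toy estimate: doubling Compute Units does not double real-world speed
# unless memory bandwidth (and everything else) scales with it.

def relative_throughput(cus, clock_ghz, bandwidth_gbps, bytes_per_flop=0.04):
    """Performance is capped by whichever of compute or memory runs out first.
    bytes_per_flop is an ASSUMED workload intensity, purely illustrative."""
    compute = cus * 64 * 2 * clock_ghz            # FP32 GFLOPS: 64 shaders/CU, 2 ops/clock (FMA)
    memory_cap = bandwidth_gbps / bytes_per_flop  # GFLOPS sustainable from memory alone
    return min(compute, memory_cap)

base    = relative_throughput(40, 1.9, 448)  # RX 5700 XT-ish: 40 CUs, ~448 GB/s
doubled = relative_throughput(80, 1.9, 448)  # 2x the CUs, same bandwidth
print(round(doubled / base, 2))              # ~1.15, nowhere near 2x
```

Under these toy assumptions the doubled-CU card hits the memory-bandwidth ceiling, which is exactly why the other variables (cache, VRAM speed) matter as much as the CU count.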

Even then, you still won't really know because when actually using the card, the efficiency of the drivers and the optimization of the actual game in question also come into play. That's why, despite all of the leaked misinformation that we have, we still won't know until we see actual testing from people like Steve Walton, Steve Burke and Jarred Walton.

Until then, all we can do is guess. We might be close or we might be way off. Remember, on paper, the Radeon VII should have been faster than the RTX 2080 Super, but it wasn't, because these things aren't so cut-and-dry. There are far more variables than constants and that's kinda what makes it exciting.
 

Vulcanproject

Posts: 1,271   +2,121
Remember, on paper, the Radeon VII should have been faster than the RTX 2080 Super but it wasn't because these things aren't so cut-and-dry.
I don't know who thought a warmed-up Vega 64 should be faster than an RTX 2080 Super, but it wasn't me. Partly because the 2080 Super wasn't around at the time. Even then, it wasn't likely it would be faster than an RTX 2080.

Linear scaling is never assumed, but an acceptable margin of error for an estimate does exist.

If you have a card that is literally just twice a 5700 XT across the board with similar clocks, it'll end up close enough to twice as fast once you take the CPU out of the equation: high resolution and a heavily GPU-bound load, then.

The main constraint will be memory bandwidth in those hypothetical circumstances, which will not be double. The main unknown will be architectural changes to RDNA, and what that means with accelerated ray tracing performance.

It seems likely, given the leaks and the console architectures we have seen, that RDNA2 will close the gap to Nvidia's best on raw raster performance but fall behind with ray tracing enabled.

The dimensions for testing are increasing so someone at Techspot has their work cut out to compare across more variables!
 

Avro Arrow

Posts: 369   +388
All of this leaked misinformation and we still don't know how good it will be. It's enough to drive a person mad! I don't live too far away from ATi, maybe I should just go ask someone there and let the look on their face be my answer. :p:laughing:
 

Avro Arrow

Posts: 369   +388
I don't know who thought a warmed up Vega 64 should be faster than an RTX2080 Super but it wasn't me.
Let me get this straight, you don't think that a card with 240 texture mapping units and 13.44 teraflops looks better on paper than a card with only 192 texture mapping units and 11.15 teraflops? Well, I can't help you with that, because it does look much better on paper, especially when you also consider the 16GB of HBM2 vs. 8GB of GDDR6. Nobody said that the card was actually better, just that it looked better on paper.
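For what it's worth, those headline teraflop numbers fall straight out of shader count × 2 FMA ops per clock × boost clock. A quick sanity check using the public spec-sheet figures (3840 stream processors at 1750 MHz peak for the Radeon VII, 3072 CUDA cores at 1815 MHz boost for the 2080 Super):

```python
# Reproduce the "on paper" FP32 teraflop figures from the spec sheets.
def fp32_tflops(shaders, boost_ghz):
    return shaders * 2 * boost_ghz / 1000  # 2 ops/clock from fused multiply-add

radeon_vii = fp32_tflops(3840, 1.75)   # Radeon VII: 3840 SPs, 1750 MHz peak
rtx_2080s  = fp32_tflops(3072, 1.815)  # RTX 2080 Super: 3072 cores, 1815 MHz
print(round(radeon_vii, 2), round(rtx_2080s, 2))  # 13.44 11.15
```

Which is exactly the point: the arithmetic checks out on paper, it just doesn't translate into delivered frame rates.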

It nevertheless proves the point I was trying to make: all of the "on paper" specifications that have been leaked don't tell us whether the card will be a cast-iron monster like the HD 5870 or a paper tiger like the Radeon VII.

Don't misunderstand me. I WANT the RX 6900 XT(X) to be at RTX 3090 levels because that will make ALL video cards MUCH cheaper. I think that it's very likely that RDNA2 is going to be a cast-iron monster and not a paper tiger but that's just a hunch based partially on leaked info, partially on how confident AMD seems to be in ATi's latest creation and partially on how much nVidia rushed the RTX 30 series launch.

However, I could be 100% wrong and we could have another Radeon VII on our hands. I don't think it likely, because RDNA2 isn't a jack-of-all-trades arch like GCN was. Now that AMD has money, ATi was given enough of a budget to build the two architectures that they wanted from the beginning. That's how RDNA2 and CDNA2 both branched off from GCN: RDNA2 is optimised for gaming consumers and CDNA2 is optimised for prosumers and workstations. Think of RDNA2 as Radeon/GeForce and CDNA2 as FirePro/Quadro.

That should make a massive difference, because it's something that was always holding GCN back. But human beings are incredibly adept at screwing up foolproof plans and snatching defeat from the jaws of victory, so I'll just sit and hope.
 

Vulcanproject

Posts: 1,271   +2,121
Let me get this straight, you don't think that a card with 240 texture mapping units and 13.44 teraflops looks better on paper than a card with only 192 texture mapping units and 11.15 teraflops? Well, I can't help you with that, because it does look much better on paper, especially when you also consider the 16GB of HBM2 vs. 8GB of GDDR6. Nobody said that the card was actually better, just that it looked better on paper.
Yup, that's right, we didn't think that. Vega 64 existed and we knew exactly how it performed.

When the Radeon VII was announced and it became clear it was a near-straight 7nm shrink of Vega 64 with boosted clocks and memory bandwidth, then sure: at that point nobody thought it should be easily beating up an RTX 2080.

Nobody up to date and informed on the market anyway.

In this case, it'll be something else if an RDNA2 card has lower IPC than its predecessor, which is what you're subtly implying. If it's twice the card, it'll surely be twice as fast, or thereabouts. Give or take. It's not a leap to say that.

Of course, we can't know critical details like the effect of the architecture changes we know have been made, nor are we certain of the current memory configuration or boost clocks. So fuzzy estimates based on leaks, which may themselves be unfounded, are all we can hope for. Everyone understands that straightforward concept, I would imagine. Ray tracing acceleration is the example I gave that is mostly unknown, and speculative at best.

I'll wait and see the real deal in another month.