A mysterious new AMD Radeon GPU has beaten the RTX 2080 Ti

For the dissatisfied customers, let’s re-analyze this with ZERO speculation. (Which is a reasonable request.)

Navi’s cores are floating-point Stream Processors (SPs), sixty-four of which make a Compute Unit (CU). Two CUs make a Workgroup Processor (WGP), and four or five WGPs make a Shader Array (SA), depending on the GPU’s configuration. Then there are two SAs per Shader Engine (SE) and a varying number of SEs per GPU.

The biggest Navi GPU so far, the Radeon RX 5700 XT, has two SEs and five WGPs per SA. This makes for a total of forty CUs, or 2560 cores.

Each Navi core in a 5700 XT is more or less equivalent to an Nvidia CUDA core: in TechSpot’s testing, the 5700 XT is only about 2% slower than the RTX 2070 Super, which has the same core count. This implies that for an AMD GPU to be ~30% faster than an RTX 2080 Ti, it would need at least 30% more cores than the Ti’s 4352. Potentially even more, if it is running at reduced engineering-sample clock speeds.

The Navi architecture is clearly designed to be scalable, so let’s scale it up. To get to roughly 6,000 cores, there are a couple of possible configurations. With six SEs and four WGPs per SA, you get 6144 cores. Or with five SEs (an unusual number) and five WGPs per SA, you get 6400 cores.
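To make that arithmetic easy to verify, here is a minimal Python sketch; the navi_cores helper is just for illustration, and the two large configurations are this post's hypotheticals, not anything AMD has announced:

```python
# Total SPs = SEs x (SAs per SE) x (WGPs per SA) x (CUs per WGP) x (SPs per CU)
def navi_cores(ses, wgps_per_sa, sas_per_se=2, cus_per_wgp=2, sps_per_cu=64):
    """Stream-processor count for an RDNA-style configuration."""
    return ses * sas_per_se * wgps_per_sa * cus_per_wgp * sps_per_cu

print(navi_cores(2, 5))    # Radeon RX 5700 XT: 2560
print(navi_cores(6, 4))    # hypothetical big Navi: 6144
print(navi_cores(5, 5))    # hypothetical big Navi: 6400

# ~30% faster than a 2080 Ti implies roughly 30% more cores:
print(round(4352 * 1.3))   # 5658 -- both configurations above clear this bar
```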

That’s roughly two and a half times the silicon the 5700 XT has. How likely is it that AMD would scale up its GPUs that much in the six months since the 5700 XT was released?

We can take the same approach with Nvidia. Turing puts sixty-four CUDA cores into a Streaming Multiprocessor (SM), and two SMs into a Texture Processing Cluster (TPC). There are either four or six TPCs in a Graphics Processing Cluster (GPC), and a varying number of GPCs per GPU.

The RTX 2080 Ti has six GPCs and six TPCs per GPC, with sixty-eight of the resulting seventy-two SMs enabled, for 4352 cores. If Ampere has similar per-core performance to Turing (but cheaper prices, better RTRT, whatever makes it marketable), then again we need ~30% more cores. Well, just add two more GPCs and boom: 6144 cores.
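The same quick check works for the Turing hierarchy; again a sketch, with a made-up turing_cores helper, and the eight-GPC part is purely hypothetical:

```python
# CUDA cores = enabled SMs x (cores per SM); a full die has GPCs x TPCs x 2 SMs
def turing_cores(gpcs, tpcs_per_gpc, enabled_sms=None, sms_per_tpc=2, cores_per_sm=64):
    """CUDA-core count for a Turing-style configuration.

    enabled_sms accounts for harvested dies like the 2080 Ti (68 of 72 SMs active).
    """
    sms = enabled_sms if enabled_sms is not None else gpcs * tpcs_per_gpc * sms_per_tpc
    return sms * cores_per_sm

print(turing_cores(6, 6, enabled_sms=68))  # RTX 2080 Ti: 4352
print(turing_cores(8, 6))                  # hypothetical eight-GPC part: 6144
```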

That’s just going from six to eight GPCs, which is not an outrageous leap, particularly since Nvidia has had fifteen months to develop new GPUs. A new architecture coupled with the 7nm node could also improve per-core performance and reduce the number of cores required to reach this performance level.

So, without doing any guesswork at all, compare how likely AMD and Nvidia each are to have a GPU this powerful. Then factor that into your personal probability assessment of who’s more likely to be testing with an unreleased AMD APU.
 
Or maybe this is their iGPU using the SmartShift they were talking about? Maybe no one really knows, as it's all speculation, but it doesn't surprise me either.
 
"A mysterious new AMD Radeon GPU has beaten the RTX 2080 Ti"
"But what if we told you, the mysterious GPU may not be a Radeon, but a next-gen Nvidia part?"

A mysterious website beats TechSpot in the clickbait game by 50%!!

But wait, what if I told you that website is actually TechSpot!!!
 
No GPU can be branded with a rival company's name due to trademark claims and possible infringement. AMD and its AIB partners have leaked whitepaper documents touting the RX 5800 and 5900 series, including the RX 5900, 5900 XT, and RX 5950 XT variants, which were due out by mid-2020.

If it is anything, it's probably the RX 5950 XT flagship GPU being readied for AIB partners.

Nvidia's Ampere isn't even out of alpha testing yet and won't be out of beta testing until Q4 2020 or Q1 2021.
 
For the dissatisfied customers, let’s re-analyze this with ZERO speculation. […]

I really thought you were going to make the better argument, which is perf/watt, i.e. power efficiency. The reason the amount of silicon isn't an issue is twofold. First, the Navi 5700 XT die is much, much smaller than the 2070's (the GPU the 5700 XT was intended to compete with; see AMD's slides). Second, 7nm will be less expensive in 2020 than in 2019, as TSMC is moving on to 5nm and 7nm will have been in production for more than a year. So I can see them using 3x the amount of silicon to compete with Nvidia's top consumer GPU.

The real issue is power efficiency, and this will be the real tell. The problem is that Navi is still a generation behind in power efficiency while being a generation ahead on the node! Nvidia is moving to 7nm with Ampere, so what does AMD have then? Just look at the power consumption of the 5700 XT. I estimate that in order to compete with Ampere, Navi 2 has to be at least 2x more power efficient than Navi. That is a big leap. Remember, Navi is 1.5x more power efficient than Vega, and I consider that an accomplishment.
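To put rough numbers on that constraint, here is a naive linear extrapolation in Python; the 225 W figure is the 5700 XT's rated total board power, and the 6144-core configuration is the hypothetical big Navi discussed earlier in the thread:

```python
# Naive linear scaling: if perf/watt stays flat, board power grows with core count.
navi10_cores = 2560
navi10_tbp_watts = 225          # RX 5700 XT rated total board power
big_navi_cores = 6144           # hypothetical configuration from earlier in the thread

power_at_same_efficiency = navi10_tbp_watts * big_navi_cores / navi10_cores
print(round(power_at_same_efficiency))           # ~540 W -- untenable for a consumer card

# To land near a 300 W board, perf/watt would need to improve by roughly:
print(round(power_at_same_efficiency / 300, 2))  # 1.8x, close to the 2x claimed above
```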
 
16 months too late.
We've had a few 2080 Tis since September 2018, and we use them daily in our indie VFX studio. We've made a lot of money with these cards since we bought them. It's just laughable that AMD has a prototype card that is barely faster 16 months later.
The real difference is that AMD's GPU performance per dollar in professional creative software is just bad. How long do we have to wait for AMD to understand that high-end GPUs are mainly for creators, not gamers?
NB: all our workstations use AMD Threadripper CPUs, so please don't call me a fanboy.
 
I really hope it's AMD, as it's definitely time for AMD to hold the crown, but then again Nvidia hasn't made anything high-end for a while, so there's a good chance it's them. I highly doubt it's Intel's effort.


I'm going to scare this guy....watch. Boo!!!

How about that GTX 1080 Ti.
 
It's the APU's integrated GPU and the new external GPU working together; it's not that difficult to work out once you've done GPU driver development.
 
16 months too late. […]

A majority of VFX work is done on the CPU. Adobe After Effects (which is used by many Hollywood studios) doesn't even support tensor-core acceleration yet.


https://www.pugetsystems.com/recomm...be-After-Effects-144/Hardware-Recommendations

More important (depending on complexity) is a lot of system RAM. Otherwise even a 2060 Super performs within the margin of error for most VFX work, unless your studio has some kind of custom software that somehow supports tensor acceleration before even Adobe does (incredibly unlikely).

It seems to me you came here to brag about Nvidia cards when in reality you should be bragging about the AMD CPUs that are in fact doing a majority of the work.
 
A majority of VFX work is done on the CPU. Adobe After Effects (which is used by many Hollywood studios) doesn't even support tensor-core acceleration yet.



Interesting - read up on this, huh?

"What CPU is best for After Effects?
Currently, the CPUs we most often recommend for After Effects is the Intel Core i9 9900K 8 Core, followed closely by the 3rd generation AMD Ryzen 9 3900X or Ryzen 7 3800X processors. There are more expensive options that can give you a few percent better performance, but in terms of overall system performance you would be better off purchasing more RAM or faster storage than spending your budget on one of those higher-end processors."

From the same page cited: Puget's CPU ranking chart.


"Do more CPU cores make After Effects faster?"
"To a certain extent, more cores should improve performance. However, After Effects doesn't scale particularly well since version 2015.3 so the number of cores tends to be less important than the speed of each individual core."
"The exception to this is if you use the Cinema 4D renderer which can be slightly faster with a high core count CPU like the AMD Ryzen 9 3950X or the Intel Core i9 10980XE."

...you should be bragging about the AMD CPUs that are in fact doing a majority of the work.

Except they're not. Opinions about "what's the best gear for VFX" are as varied as for a general-purpose PC.
 
For the dissatisfied customers, let’s re-analyze this with ZERO speculation. […]
In TechSpot's updated benchmarks the RX 5700 XT is 9% slower than the RTX 2070 Super, and at most outlets it is over 10% slower.
The RX 5700 XT gains 1-3% from overclocking versus 8-10% for the RTX 2070 Super.
Navi also does not scale well past the clocks AMD ships it at.
An RX 5700 XT at 1950 MHz consumes 290+ watts, which is more than a stock 2080 Ti.
Extrapolating, an AMD GPU as fast as the RTX 2080 Ti on 7nm+ should eat 500+ watts (a back-of-the-envelope version is sketched below).
In other words, AMD could release a GPU as fast as the RTX 2080 Ti or even faster, but the power consumption and heat would make it a laughing stock in this day and age.
So they are doing what is right: releasing a 7nm+ Navi part with architectural improvements and 3000+ cores, which will put it in RTX 2080 Super territory while keeping power under 300 watts.
AMD usually takes 1-2 years to beat Nvidia's high end, so a card matching or beating the RTX 2080 Ti should arrive in 2022, when 5nm becomes available for GPUs.
Now, AMD could pull off a Ryzen-like miracle. The problem is that Nvidia will not go to sleep for 5+ years like Intel did.
AMD still does not have a GPU faster than the GTX 1080 Ti or Titan Pascal. If AMD releases a faster card this August, it will have taken four years, which is crazy.
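One way to arrive at a number in that 500 W range is the common rule of thumb that dynamic power grows roughly with the cube of clock frequency; a minimal sketch, assuming the ~30% performance gap and the 5700 XT's 225 W stock board power:

```python
# Rule of thumb: dynamic power scales roughly with frequency x voltage^2, and
# voltage must rise with clocks, so power grows roughly with the cube of frequency.
stock_tbp_watts = 225    # RX 5700 XT rated board power
perf_gap = 1.30          # assumed ~30% deficit versus an RTX 2080 Ti

print(round(stock_tbp_watts * perf_gap ** 3))   # ~494 W if clocks alone closed the gap
```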
 
Let's hope it's AMD. Prices of the 2080 Ti have remained ludicrously high simply because there's no competition.
And if AMD has a very powerful GPU, it should force the release price of the 30 series lower.
No competition equals high prices.
I've got the 1080 Ti and will be buying the 3080 Ti this year. I've skipped the 2080 Ti as it offers little over what I've got, and the price is silly.
However, if the 2080 Ti had been priced lower, then I may have bought one.
Many 1080 Ti owners haven't bothered with the 2080 Ti for the same reasons.
In reality, high prices have cost Nvidia dearly.
So let's hope that AMD has finally got something.
That said, if Nvidia has any sense, they will realise that pricing the 3080 Ti above what we already have will severely damage sales.
Or are they going to adopt Intel's style of arrogance?
Look where that's got Intel.
It's time for Nvidia to take stock of their whole business approach, because AMD will eventually strike back.
And when that happens, could Nvidia take Intel's deserved tumble?
 
It doesn't make sense for it to be Intel due to the CPU, nor does it make sense for it to be Nvidia due to it being listed as a Radeon.

From what it looks like to me, the only way this makes sense is if it's a GPGPU compute device (probably custom silicon for the Department of Energy) that appears to be running from a notebook because it's a discrete rack-mounted piece of hardware that was benchmarked over a connection to a notebook.

I know the NNSA (not the NSA/CSS; the National Nuclear Security Administration), Sandia's New Mexico campus, and Los Alamos were all working on something with AMD's Radeon division recently, and this may very well be the result. I would not expect it to be a mainline workstation or gaming product myself.
 
I really thought you were going to make the better argument, which is perf/watt, i.e. power efficiency. […]
Very good point. A tripled 5700 XT could produce 700 W, which would be interesting to cool...
 
Interesting - read up on this, huh? […]

At no point did I say anything about strictly high-core-count CPUs, which is what you assumed. You spent your entire comment under that premise, wasting words, when you could have simply read, processed, then replied.
 
At no point did I say anything about strictly high-core-count CPUs, which is what you assumed. You spent your entire comment under that premise, wasting words, when you could have simply read, processed, then replied.

I understand the need to back out a jejune comment, and congratulate your economy with language.
 
I understand the need to back out a jejune comment, and congratulate your economy with language.

I never backed out of anything. You might want to re-read that comment. I also don't see the value of your replies here. You seem to be trying to bait off-topic conversation.
 
Let's hope it's AMD. Prices of the 2080 Ti have remained ludicrously high simply because there's no competition.

If AMD builds a GPU that can match or beat the 2080 Ti, it will likely be priced similarly to the 2080 Ti, unfortunately.

We've seen with the 5700 XT and 5600 XT that AMD is not undercutting prices on GPUs the way it did on CPUs. The 5600 XT matches a 1660 Ti? Cool, $280, just like the 1660 Ti.

Anyone thinking that AMD is going to build a 2080 Ti killer and sell it for $600-$700 is dreaming; that ship sailed long ago, and that era is sadly over.
 