Report: Nvidia's next-gen Ampere cards 50% faster than Turing with half the power draw

TU106 (RTX 2070) is 10.8bn transistors, 5700XT is 10.3bn. There isn't some huge disparity.

Even if we go by a conservative logic-density scaling for TSMC 7nm over 12nm, a TU106 would be comfortably sub-300mm² and totally comparable to Navi 10. Vega 64 was 495mm² on GlobalFoundries 14nm; Radeon VII added a small number of transistors and still came out 164mm² smaller.

Extrapolating TU106 down to 7nm is an estimation game at best. Long story short, it's close enough to that same 250mm² ballpark to not matter.

It would still be faster from a modest clock bump and use less power, while still having a whole bunch of RT hardware that Navi 10 entirely lacks!

You are making the error of thinking that Turing's performance does not rely on the Tensor and RT transistors. It does. By the way, the 2070S hosts 13.6 billion transistors compared to 10.3 billion for the 5700 XT.

When you scale the die down to 7nm by that same ratio (331/495), the 545mm² TU104 becomes roughly 364mm², which is way above the 251mm² of the 5700 XT. The 2070S is 45% bigger while offering barely 5-10% better performance, a deficit of roughly 35% in area efficiency.
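
For anyone who wants to sanity-check the shrink arithmetic both posts are leaning on, here is a minimal Python sketch. The 495mm²/164mm² Vega-to-Radeon VII figures and the 545mm²/251mm² die sizes come from the posts above; the 445mm² TU106 die size is an assumed, commonly cited figure, and linear area scaling is obviously a simplification.

```python
# Minimal sketch of the die-shrink arithmetic above. Linear scaling is a
# simplification; real shrinks depend on SRAM/analog/IO ratios and libraries.

VEGA64_14NM = 495.0            # mm^2, Vega 64 on GloFo 14nm (from the post)
RADEON7_7NM = 495.0 - 164.0    # mm^2, Radeon VII came out 164mm^2 smaller
SHRINK = RADEON7_7NM / VEGA64_14NM   # ~0.67x area for a near-identical design

TU106_12NM = 445.0   # mm^2, RTX 2070 die (assumed, commonly cited figure)
TU104_12NM = 545.0   # mm^2, RTX 2070 Super die (from the post)
NAVI10_7NM = 251.0   # mm^2, RX 5700 XT (from the post)

print(f"TU106 scaled to 7nm: ~{TU106_12NM * SHRINK:.0f} mm^2")   # ~298, i.e. sub-300
print(f"TU104 scaled to 7nm: ~{TU104_12NM * SHRINK:.0f} mm^2")   # ~364
print(f"Scaled TU104 vs Navi 10: +{(TU104_12NM * SHRINK / NAVI10_7NM - 1):.0%} area")  # ~45%
```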
 
You are making the error of thinking that Turing's performance does not rely on the Tensor and RT transistors. It does. By the way, the 2070S hosts 13.6 billion transistors compared to 10.3 billion for the 5700 XT.

When you scale the die down to 7nm by that same ratio (331/495), the 545mm² TU104 becomes roughly 364mm², which is way above the 251mm² of the 5700 XT. The 2070S is 45% bigger while offering barely 5-10% better performance, a deficit of roughly 35% in area efficiency.

I didn't claim that the Tensor cores were unused all the time, only that Navi 10 has no such equivalent on die. Nvidia has a lot of extra hardware functionality in the same package!

I was asked about shrinking a 2070, presumably because it is very close in transistor count to the 5700 XT and both have all their silicon utilized. I specifically stated the TU106 core to confirm that comparison. Multiple times.

You would have known this had you read any of those posts with a little care.

Not the 2070 Super that you talk about, with its larger TU104 die. That has a lot more transistors, but a large portion of them are disabled - it's a heavily binned GPU. Over 500 CUDA cores (shaders) are disabled on the 2070S, along with a corresponding reduction in TMUs, RT and Tensor cores. A big chunk of dead silicon.

Thus your comparison is badly flawed.
 
I doubt that they'll pull off something like this and also reduce the power draw by that much. Even the same performance at half the power draw would be a stretch.

The more obvious answer would be that he wasn't referring to general graphics performance, but to something like improved Tensor/RT cores, and even then only for a very specific workload.

We have never had such a huge perf/W increase in the history of GPUs, and it 100% will not happen on today's ever-smaller nodes, where it becomes more and more difficult to achieve efficiency improvements because of increased leakage.

That is why UVL is such a big deal: double/triple/quad patterning is the largest source of leakage because of how imprecise it is. While I'll say this claim is unrealistic at this magnitude, realistically they will probably pull off a Maxwell-to-Pascal-sized performance increase: half the node size, an improved arch, and UVL. I would fully expect a 3060 to have slightly better performance than a 2080. Now, if they are going MXM format, then yes, I could see these figures.
 
On another note, the 3080 Ti might be so bandwidth-constrained that it might not be much faster than 2080 Ti + 50% even at full power (250W TDP), so why not spin the news a little and say that it outperforms the 2080 Ti by 50% at 50% power consumption.

A good example: I lower the power limit of my 2080 Ti to 50% (130W) and it still performs the same as a stock 1080 Ti, but cranking the PL up to 100% (260W) only provides 40% more performance.
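
To put numbers on that anecdote: if 50% of the power limit gives roughly stock 1080 Ti performance and 100% gives only 40% more, the card is about 1.4x more efficient per watt at the lower limit. A quick sketch, using only the figures from the post:

```python
# Perf/W at the two power limits quoted above (performance normalised to the
# 50% power-limit result, i.e. roughly stock 1080 Ti level).
pl50_w, pl100_w = 130, 260
perf_50, perf_100 = 1.0, 1.4          # "only 40% more performance" at full power

eff_50 = perf_50 / pl50_w
eff_100 = perf_100 / pl100_w
print(f"perf/W advantage at 50% PL: {eff_50 / eff_100:.2f}x")   # ~1.43x
```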
 
That is why UVL is such a big deal: double/triple/quad patterning is the largest source of leakage because of how imprecise it is. While I'll say this claim is unrealistic at this magnitude, realistically they will probably pull off a Maxwell-to-Pascal-sized performance increase: half the node size, an improved arch, and UVL. I would fully expect a 3060 to have slightly better performance than a 2080. Now, if they are going MXM format, then yes, I could see these figures.
I don't expect miracles; realistically speaking, we should see at most a 30-40% improvement when comparing the 2080 Ti with the next 3080 Ti.

Maybe we'll see a 50% improvement when we compare FPS with ray tracing enabled; I fully expect the second-generation RTX cards to improve on that front (it's also why I never recommended 2000-series RTX cards just for this feature).

I also don't expect AMD to compete at the ultra high end even with RDNA2. I hope we'll get something close to the 3080 so that we still see some price reductions.
 
IMO, this whole pricing thing is very annoying. I am certainly not saying that AMD, Intel, and Nvidia do not have a right to make money. What I am saying is that a lack of competition has pushed prices insanely high, IMO, in both the GPU and CPU marketplaces, and that has enabled the competitor, AMD, to price their current products at a similar level.

What? Where is the lack of competition? AMD is everywhere, each day, making products, marketing products, doing product announcements AND losing ground due to a plethora of production/delivery shortfalls. Intel has problems of its own; it's rather difficult to be a competitor when a company doesn't deliver products in sufficient quantities, or do it in a timely manner.

The AMD line always seems to be "wait minit $thing malfuncTion" - and if it's not one thing with AMD, it's the other. Where did megahurtz go? Why isn't Intel sending commands on the edges? How does µarch recycling and gluing chips together qualify as progress? Why has AMD released and bet the bank on TSMC and RX5700XT? Mediocre and middling come to mind, impressive does not. Nvidia has spanked Intel at every turn while managing to keep AMD kayoed on Video Corner. These days when Intel does or says something it's a lie, or just wrong b'cause Intel -is rich- therefore evul. But, when-if, AMD does "some thing" a cultivated fancult sends Praise and Prayers to Savior of the Master PC Race, Lisa, who only interrupts said cult to dump another bundle of AMD Freedom Stock through "insider trading"; "rich-evul - nothing found here".

Lifted from Led Zep - such a pretty song!
"Oh father of the four winds, fill my sails, across the sea of years
With no provision but an open face, along the straits of fear"


AMD came to the battle of hyperthreaded CPUs with brave talk, colorful fanfare, and enough bravado to fuel expectations exceeding those insinuated by the quoted lyric. Intel has easily held the lead for years; monopoly is such a strong and loaded word to describe a market situation, regardless of what forced it into being.

Maybe, just maybe, with added competition, the prices will equalize at some point. I will be hoping for some sort of price correction event, but I am certainly not counting on it.

I see dead AMD chips. /s
 
I seriously doubt that this would be the case. It would make sense if it were either/or, but both? Nah. Too good to be true. Do you really expect a 1660 Ti successor to perform like an RTX 2070 Super at 60W? Or an RTX 2070 Super successor to perform 25% faster than an RTX 2080 Ti at 100W? Get real.
The most likely thing is that they are talking specifically about ray tracing performance, if this is true. And that's a big if.
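
For what it's worth, those figures are just the rumour applied literally to each tier. A rough sketch; the 120W and 215W board TDPs, and the assumption that a 2080 Ti is roughly 20% faster than a 2070 Super, are my own reference points, not from this thread:

```python
# Applying "same tier: +50% performance at half the power" literally.
# TDP figures (120W GTX 1660 Ti, 215W RTX 2070 Super) are assumed, not from the thread.
tdp_w = {"GTX 1660 Ti": 120, "RTX 2070 Super": 215}
for card, watts in tdp_w.items():
    print(f"{card} successor: +50% perf at ~{watts // 2} W")
# -> ~60 W and ~107 W, i.e. the 60W / "100W" figures above.

# If a 2080 Ti is taken as ~20% faster than a 2070 Super (assumption),
# a 2070 Super successor at +50% would land at:
print(f"~{1.5 / 1.2 - 1:.0%} faster than a 2080 Ti")   # ~25%
```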
 
That is why UVL is such a big deal: double/triple/quad patterning is the largest source of leakage because of how imprecise it is. While I'll say this claim is unrealistic at this magnitude, realistically they will probably pull off a Maxwell-to-Pascal-sized performance increase: half the node size, an improved arch, and UVL. I would fully expect a 3060 to have slightly better performance than a 2080. Now, if they are going MXM format, then yes, I could see these figures.

Mmm, EUV? They (ASML/TRUMPF/TSMC) do a lot of work to mitigate problems with pellicles and scattering. The repair process for shot patterns is amazing. trumpf.com has a video on their front page showing the path of a one-second shot; there's a lot of protection for that energetic, tiny beam.
 
I don't expect miracles; realistically speaking, we should see at most a 30-40% improvement when comparing the 2080 Ti with the next 3080 Ti.

Maybe we'll see a 50% improvement when we compare FPS with ray tracing enabled; I fully expect the second-generation RTX cards to improve on that front (it's also why I never recommended 2000-series RTX cards just for this feature).

I also don't expect AMD to compete at the ultra high end even with RDNA2. I hope we'll get something close to the 3080 so that we still see some price reductions.

Nvidia could jam-pack the 3080 Ti with twice the transistor count of the 2080 Ti; that is the beauty of node shrinking. Don't let the numbers fool you: the transistor density gain from 28nm to 16nm is 2x, while from 12nm to 7nm+ EUV it is 3x (Source). 7nm manufacturing has reached a mature stage now, so producing a big chip is a viable option for Nvidia atm. I would say that ONLY a 50% improvement is because the chip might get held back by its bandwidth, since there is no new GDDR tech this time around.
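
To make the density argument concrete, here is a small sketch using the 3x figure quoted above and an assumed 18.6bn-transistor TU102 (a commonly cited number, not something verified in this thread):

```python
# The quoted 12nm -> 7nm+ EUV density gain, applied to a 2080 Ti-sized budget.
density_gain = 3.0                 # quoted figure from the post above
tu102_transistors_bn = 18.6        # RTX 2080 Ti (TU102); assumed, commonly cited

print(f"Same-area transistor budget at 7nm+ EUV: ~{tu102_transistors_bn * density_gain:.0f} bn")
print(f"Area needed for 2x the 2080 Ti's transistors: ~{2 / density_gain:.0%} of the original die")
```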
 
I just hope they don't make the same mistakes they did with the 2000 series.

The 2060 should have been more powerful and more efficient than the 1080 Ti... which it wasn't. We are looking for solid 4K 60FPS gaming (at least).
The 2060 was pathetic. Although it was a better choice than the last-gen GPUs or the 1660 Ti... it was so underpowered that it didn't deserve to wear the RTX badge.
The 2070 wasn't much better, and until you were spending 2080 money - or took the dip into a 2080 Ti - you weren't seeing the real advantages of upgrading to the RTX models.

I have an EVGA 2080Ti FTW3 liquid cooled and I run my games with every setting at maximum and Direct X Ray Tracing on.

I expect the 3000 models to be: less expensive (comparatively), more efficient and more powerful.

The lowest end 3000 should be more powerful than the 2080/super.

I personally don't see myself upgrading till the 4000 series because, thus far, there hasn't been a single game out there to make me feel that my 2080 Ti was "necessary" - although I bought it expecting to have a few years' worth of future-proofing.

There's no game made - or that will be made - which will demand 11GB of VRAM. They won't do it, because they want to ensure people with low-end AMD cards and low-end RTX models can buy their software.
 
While this does sound too good to be true, it's not within the realm of the impossible - the RTX 2080 Ti was up to 55% faster in our testing:

[Image: 4K benchmark chart (4K_2080Ti_2080Ti.png)]


Of course, there were more games that improved by 20% to 30% than by over 50%, so any vendor performance claims are always going to be a best-case scenario.

Except the 2080 Ti not only didn't reduce power usage, but actually increased it. There are models out there with 330W+ peak...
 
In case anyone's actually legitimately wondering if this might be true, let me tell you: No. Chip makers don't achieve a THREE TIMES perf/watt improvement within 2 years. Particularly when NV cards are already often TWICE as efficient as their AMD counterparts (e.g. GTX 1650 vs RX 570, 75W vs 150W respectively).
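
The arithmetic behind that, for the record, using only the ratios and wattages quoted in the post:

```python
# 1.5x the performance at 0.5x the power is a 3x perf/W jump.
perf_ratio, power_ratio = 1.5, 0.5
print(f"Implied perf/W gain: {perf_ratio / power_ratio:.1f}x")               # 3.0x

# The efficiency gap quoted in the post (similar performance tiers):
gtx1650_w, rx570_w = 75, 150
print(f"GTX 1650 vs RX 570: ~{rx570_w / gtx1650_w:.0f}x perf/W")             # ~2x
```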
 
Except the 2080 Ti not only didn't reduce power usage, but actually increased it. There are models out there with 330W+ peak...
Indeed, which is why anyone making a 50% power reduction claim needs to be handled with extreme caution! :)
 