Report: Nvidia's next-gen Ampere cards 50% faster than Turing with half the power draw

Turing can't keep up with RDNA, because RDNA is a game-specific uArch. Turing and Ampere are not!

AMD will win every time when it comes down to die size and price. Nvidia just cannot compete.

You talked about straightforward gaming performance right?

A 5700XT 'Navi 10' has over 40 percent more transistors than a GTX1080 GP104 die.

Those transistors run at 10 percent higher boost clocks, thanks to a newer-generation process node.

It also has 50 percent more memory bandwidth.

Yet the 5700XT is no more than 20 percent faster, even at 4K.

Bearing all this in mind, the GTX1080 is three whole years older. Nvidia's designs over the past few years have, if nothing else, been super efficient architecturally for gaming compared to AMD's efforts. The fact that they can toss on billions of transistors for hardware-accelerated ray tracing and still be competitive on an aging node is pretty berserk.

Nvidia are the ones with the most flexibility (i.e. pieces of gold) to design different products for different markets.
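Treating gaming throughput as roughly proportional to transistor count times clock speed, the mismatch described above can be sketched numerically. This is a crude back-of-the-envelope model using only the percentages quoted in this post, not real benchmark data:

```python
# Crude efficiency comparison: Navi 10 (5700XT) vs GP104 (GTX1080),
# using only the relative figures quoted above (GP104 = 1.0 baseline).
navi10_transistors = 1.40   # ~40% more transistors
navi10_clock       = 1.10   # ~10% higher boost clock
navi10_perf        = 1.20   # ~20% faster in games

# If performance simply tracked transistors * clock, we'd expect:
expected_perf = navi10_transistors * navi10_clock   # 1.54x

# What Navi 10 actually delivers per unit of "raw resources":
efficiency = navi10_perf / expected_perf            # ~0.78

print(f"Expected from raw resources: {expected_perf:.2f}x")
print(f"Per-transistor, per-clock efficiency vs Pascal: {efficiency:.0%}")
```

By this admittedly simplistic yardstick, Navi 10 extracts only about 78% of the gaming performance per transistor-clock that Pascal does, which is the point being made.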
 
You talked about straightforward gaming performance right?

A 5700XT 'Navi 10' has over 40 percent more transistors than a GTX1080 GP104 die.

Those transistors run at 10 percent higher boost clocks, thanks to a newer-generation process node.

It also has 50 percent more memory bandwidth.

Yet the 5700XT is no more than 20 percent faster, even at 4K.

Bearing all this in mind, the GTX1080 is three whole years older. Nvidia's designs over the past few years have, if nothing else, been super efficient architecturally for gaming compared to AMD's efforts. The fact that they can toss on billions of transistors for hardware-accelerated ray tracing and still be competitive on an aging node is pretty berserk.

Nvidia are the ones with the most flexibility (i.e. pieces of gold) to design different products for different markets.


You are talking Pascal.
Take the RTX2070 die (TU106) and normalize it down to 7nm. See what you get.
 
I mean... if "Big Navi" outperforms a 2080 Ti by around 30 percent or even more, Nvidia may have to price accordingly, which could be around the $900 mark, but that's wishful thinking on both AMD's and Nvidia's side.
If their next gen is as good as claimed, they could easily drop prices on the 2080 series, since they are on a mature (cheap) node and the R&D is already amortized. That would give AMD a hard time without devaluing Ampere.

50% higher performance while using half the power does sound a bit too good to be true, given that Nvidia's design is already very efficient, but even if it's "only" an either/or situation, that would be more than nice.

So: 50% higher performance at the same power, or half the power at the same performance.
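The distinction matters for performance per watt. A quick sanity check of what each reading of the rumour implies (simple arithmetic on the claimed ratios, nothing more):

```python
# Perf/W implied by each interpretation of the rumour,
# with Turing normalized to 1.0 performance at 1.0 power.
same_power = 1.5 / 1.0   # 50% faster at the same power  -> 1.5x perf/W
same_perf  = 1.0 / 0.5   # same speed at half the power  -> 2.0x perf/W
headline   = 1.5 / 0.5   # both claims at once           -> 3.0x perf/W

print(same_power, same_perf, headline)
```

A literal reading of the headline would mean tripling Turing's efficiency in one generation, which is why the either/or interpretation is the more plausible one.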
 
A 5700XT 'Navi 10' has over 40 percent more transistors than a GTX1080 GP104 die.

Those transistors run at 10 percent higher boost clocks, thanks to a newer-generation process node.

It also has 50 percent more memory bandwidth.

Yet the 5700XT is no more than 20 percent faster, even at 4K.
If one uses those metrics, then the 1080 Ti is fair game for comparison too, as it has 15% more transistors than the 5700 XT, 10% lower (game) boost clocks, and the same bandwidth, but still outperforms the Navi-based card by an average of 9% over 39 tested games. Pascal really was something else.
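The same transistors-times-clock yardstick applied to the 1080 Ti figures in this post (again, just arithmetic on the quoted percentages, not benchmark data):

```python
# 1080 Ti vs 5700 XT, relative figures as quoted above (5700 XT = 1.0).
ti_transistors = 1.15   # ~15% more transistors
ti_clock       = 0.90   # ~10% lower game boost clock
ti_perf        = 1.09   # ~9% faster over the 39-game average

efficiency = ti_perf / (ti_transistors * ti_clock)   # ~1.05

print(f"1080 Ti per-transistor, per-clock advantage: ~{efficiency - 1:.0%}")
```

So despite the clock deficit, Pascal still comes out roughly 5% ahead per transistor-clock by this crude measure, which supports the point about the architecture.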
 
If even remotely accurate, Ampere 75W cards should provide around 2070-level performance, provided they don't hamstring the memory bus width. That would be something impressive indeed.
 
You are talking Pascal.
Take the RTX2070 die and normalize it down to 7nm. See what you get.

TU106?

You'll end up with something much faster than a 5700XT, at pretty much the same die size. Oh, and it'll have far lower power consumption!

Because you can ramp the clocks that much on 7nm.

It'll also still have the hardware-accelerated ray tracing components.

At least on TSMC 7nm, where you'll get 25-30 percent more performance from clocks alone over 12nm. Did you not see the Radeon 7? It was a shrunken Vega 64 die: bumped clocks, improved boost stability, some CUs disabled. It still ended up 25-30 percent faster.

Nvidia might be on the Samsung equivalent process, or maybe even a little better. TSMC 7nm+ equivalence.
 
If one uses those metrics, then the 1080 Ti is fair game for comparison to, as it has 15% more transistors then the 5700 XT, 10% lower (game) boost clocks, and the same bandwidth but still outperforms the Navi based card by an average of 9% over 39 tested games. Pascal really was something else.

Was...
because Pascal doesn't do async compute, etc. (it was also $599 and tested on old games)


Navi's RDNA architecture has not been fully realized and will become inherently more efficient under RDNA2, when the GCN remnants are gone, freeing up more space for new game features such as 3D sound, ray tracing, etc.

Again, take the RTX2070 die (TU106) and normalize it down to 7nm. See what you get...?
 
TU106?

You'll end up with something much faster than a 5700XT, at pretty much the same die size. Oh, and it'll have far lower power consumption!

Because you can ramp the clocks that much on 7nm.

It'll also still have the hardware-accelerated ray tracing components.

At least on TSMC 7nm, where you'll get 25-30 percent more performance from clocks alone over 12nm. Did you not see the Radeon 7? It was a shrunken Vega 64 die: bumped clocks, improved boost stability, some CUs disabled. It still ended up 25-30 percent faster.

Nvidia might be on the Samsung equivalent process, or maybe even a little better. TSMC 7nm+ equivalence.

See, you cannot do it. You will not give me your calculations of die size, because the answer is glaring. Let me help you.

TU106 is 445mm^2 @ 12nm TSMC. Now go ahead and normalize that to 7nm TSMC. I know it hurts, but you should learn to face the facts.

edit: And mind you, the RTX 2070 Super (TU104) is 545mm^2 @ 12nm TSMC. And Navi 10 flirts with that in many games.
 
Sure, Pascal lacks modern features, as one would expect for something that came out over 3 years ago. My remarks weren't about Navi being poor (it's not) but about how good Pascal was.

Not sure what elements of GCN you think are still in RDNA though.
 
...
Not sure what elements of GCN you think are still in RDNA though.

AMD said this themselves at E3 or one of their presentations: that current Navi (the 5700/5700 XT) is a hybrid between GCN and RDNA, and that full RDNA cores won't be coming until 2020.

Exactly what parts GCN has carried over into current Navi, I do not know.
 
I agree, neeyik. I have a 1080 & a 1080 Ti. I just hated paying the RTX & Jensen tax on an RTX2080 @ $850... only to have a SUPER come out.

My 1080ti was 40% cheaper.
But:

The other problem that I see with current Turing (in addition to the non-Super "tax") is that by the time any ray-tracing games come out in earnest, half the lineup will be too slow for ray tracing in those games at high quality settings.

It's already a huge performance hit (in a gamer's world where 1000fps seems to be the only metric that matters these days), so Ampere will likely deliver the true ray-tracing cards we can actually use, and anyone with a 2070 or slower RTX card will have paid extra for ray tracing and been unable to use it because no games existed.

When the games finally do come out, their demands will be better suited to Ampere, leaving Turing at the very low end of the spectrum for a feature you paid extra for and could never really use properly. Nvidia didn't give us the choice not to pay extra.

But that sounds about like an Nvidia strategy... who cares, as long as people are motivated to keep constantly buying new cards, right?
 
See, you cannot do it. You will not give me your calculations of die size, because the answer is glaring. Let me help you.

TU106 is 445mm^2 @ 12nm TSMC. Now go ahead and normalize that to 7nm TSMC. I know it hurts, but you should learn to face the facts.

edit: And mind you, the RTX 2070 Super (TU104) is 545mm^2 @ 12nm TSMC. And Navi 10 flirts with that in many games.

TU106 (RTX 2070) is 10.8bn transistors; the 5700XT is 10.3bn. There isn't some huge disparity.

Even if we go by a conservative logic-density gain for TSMC 7nm over 12nm, a TU106 would be comfortably sub-300mm² and totally comparable to Navi 10. Vega 64 was 495mm² on GlobalFoundries 14nm; Radeon 7 added a small number of transistors and still came out 164mm² smaller.

Extrapolating TU106 down to 7nm is only an estimation game at best. Long story short, it's close enough to that same 250mm² ballpark to not matter.

It would still be faster from a modest clock bump and use less power, while still having a whole bunch of RT hardware that Navi 10 entirely lacks!
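One way to make that extrapolation concrete is to scale TU106 by transistor density rather than by a raw process factor, using Navi 10's die as the 7nm reference point. (The ~251 mm² figure for Navi 10 is a published spec, not from this thread; this is an estimation game, as the post says.)

```python
# Density-based estimate of TU106's footprint on TSMC 7nm.
tu106_transistors  = 10.8e9   # RTX 2070 die, 445 mm^2 on TSMC 12nm
navi10_transistors = 10.3e9   # 5700 XT
navi10_area_mm2    = 251      # 5700 XT die size on TSMC 7nm (published spec)

# Transistor density Navi 10 actually achieves on 7nm:
density = navi10_transistors / navi10_area_mm2      # ~41 MTr/mm^2

# TU106 at that same density:
tu106_7nm_area = tu106_transistors / density        # ~263 mm^2

print(f"TU106 normalized to 7nm: ~{tu106_7nm_area:.0f} mm^2")
```

That lands in the same ~250-300 mm² ballpark as Navi 10, consistent with the "comfortably sub-300mm²" estimate above.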
 
AMD said this themselves at E3 or one of their presentations: that current Navi (the 5700/5700 XT) is a hybrid between GCN and RDNA, and that full RDNA cores won't be coming until 2020.
This comes entirely from one source, Sweclockers, and it's not something AMD have explicitly stated in other documents. This isn't to say that RDNA isn't a GCN hybrid, but I rather suspect it's a misunderstanding of statements made during a presentation. For example, the RDNA instruction set is essentially the same as GCN's, but AMD would be mad to have two fully separate instruction sets for game developers to deal with.

Architecturally, elements of RDNA are indeed evolutions of those found in GCN (e.g. TMUs, primitive setup), but there wasn't much wrong with them in the first place. The main issues with GCN (instruction issue rate, amount of cache and thus internal bandwidth, and thread-loading efficiency) have been heavily addressed. I suspect that when AMD say RDNA2 will be "fully" RDNA, they will continue to address these areas (as well as add new hardware features).
 
The other problem that I see with current Turing (in addition to the non-Super "tax") is that by the time any ray-tracing games come out in earnest, half the lineup will be too slow for ray tracing in those games at high quality settings.

It's already a huge performance hit (in a gamer's world where 1000fps seems to be the only metric that matters these days), so Ampere will likely deliver the true ray-tracing cards we can actually use, and anyone with a 2070 or slower RTX card will have paid extra for ray tracing and been unable to use it because no games existed.

When the games finally do come out, their demands will be better suited to Ampere, leaving Turing at the very low end of the spectrum for a feature you paid extra for and could never really use properly. Nvidia didn't give us the choice not to pay extra.

But that sounds about like an Nvidia strategy... who cares, as long as people are motivated to keep constantly buying new cards, right?

The GeForce RTX's ray tracing is an afterthought, a needed justification for why these hand-me-down data-center Turing chips are worth their $800 to $1,500+ prices.

The TU102 has to be explained away somehow: much unwanted and wasted space for gaming. Everyone in the entire gaming industry is shooting for the standard, while Nvidia is trying to market an enterprise GPU at gamers.

AMD followed suit and did the same with Vega 20 for gaming. But at Navi's release, Dr. Su made a statement to gamers that I took notice of: that she is a gamer herself. It was sincere, about how much it means to her, as a gamer, to release AMD's kept secret, a brand new GPU architecture.

And that RDNA is for the gamers.

That this architecture is not derived from another industry's needs and repurposed for gaming. RDNA is 100% what the industry is asking for and what gamers demand. And it is scalable.

RDNA2 is only going to be more refined, on a refined backbone (Infinity Fabric).


I have not heard a single thing about what Nvidia is targeting. They can't make too many game-specific hardware changes, because those high-end 3080 Ti chips are derived from enterprise hardware.

I suspect that if Nvidia made a game-only chip, the RT cores would be changed to a design better suited for gaming. But again, that will only come at the low end, where mass sales would offset the cost. The high-end Nvidia products are locked into enterprise architecture.
 
AMD simply didn't release larger chips, because RDNA is a complete failure: despite being on 7nm, it is such a power hog that anything over 500mm² would be worse than a GTX 480.

Slightly OCed 5700XT editions consume over 300W.

A 5900XT on RDNA would consume 350W+ without a huge underclock.

If AMD don't pull off some kind of magic with RDNA2, their GPUs are going to be a laughing stock.
 
This comes entirely from one source, Sweclockers, and it’s not something AMD have explicitly stated in other documents.
...

I don't know who Sweclockers is... I just heard the AMD video card guy (can't recall his name) say it at the presentation (well, I wasn't there... I watched the live stream). I'm 99% sure it was E3 2019, as they announced the 5700 series, or at the 5700 series launch event.

It's not something I have seen said a whole lot elsewhere, nor do I know where the other guy you were conversing with got his info, but it corroborates my first-hand knowledge of the words from one of the AMD presenters.

EDIT: it may have been a PCWorld "Full Nerd" episode interview with the AMD GPU guy... anyway, I believe it was Scott Herkelman who said it. I'm taking a quick look now, but don't really want to spend a bunch of time looking... will let you know if I find it.

Edit 2: Didn't find it, but now it's bugging me...
 
The GeForce RTX's ray tracing is an afterthought, a needed justification for why these hand-me-down data-center Turing chips are worth their $800 to $1,500+ prices.

The TU102 has to be explained away somehow: much unwanted and wasted space for gaming. Everyone in the entire gaming industry is shooting for the standard, while Nvidia is trying to market an enterprise GPU at gamers.

AMD followed suit and did the same with Vega 20 for gaming. But at Navi's release, Dr. Su made a statement to gamers that I took notice of: that she is a gamer herself. It was sincere, about how much it means to her, as a gamer, to release AMD's kept secret, a brand new GPU architecture.

And that RDNA is for the gamers.

That this architecture is not derived from another industry's needs and repurposed for gaming. RDNA is 100% what the industry is asking for and what gamers demand. And it is scalable.

RDNA2 is only going to be more refined, on a refined backbone (Infinity Fabric).


I have not heard a single thing about what Nvidia is targeting. They can't make too many game-specific hardware changes, because those high-end 3080 Ti chips are derived from enterprise hardware.

I suspect that if Nvidia made a game-only chip, the RT cores would be changed to a design better suited for gaming. But again, that will only come at the low end, where mass sales would offset the cost. The high-end Nvidia products are locked into enterprise architecture.


Nvidia doesn't do a horrible job of bridging the gap between enterprise and gaming, and they make distinct, true compute cards with their Tesla series. The Radeon 7 was a bit like a Tesla card that they said was for gaming.

That said, it is a bit more obvious that AMD has made a much clearer dichotomy between enterprise and gaming with Navi. Rumour is that Vega is and will continue to be developed for enterprise, where it does well, while Navi obviously has a different path and likely a different dev team. Which is great.

At the end of the day, though, it's all about performance per dollar (or performance at any cost, for a few); nothing else really matters much.

I don't need Navi to be 2x faster than a 2080 Ti, because I will never spend over a grand on a GPU; I couldn't care less... I just need good bang for buck at around the $350-$400 mark. I lean toward AMD because they almost always have better bang for buck, and I don't like Nvidia as a company that much. If they become the bang-for-buck champion, though... I might be persuaded to go back to them. So, probably never.
 
EDIT: it may have been a PCWorld "Full Nerd" episode interview with the AMD GPU guy... anyway, I believe it was Scott Herkelman who said it. I'm taking a quick look now, but don't really want to spend a bunch of time looking... will let you know if I find it.
Appreciate the time and effort you've spent looking for it; I'd be grateful if you do find a fresh source.
 
Appreciate the time and effort you've spent looking for it; I'd be grateful if you do find a fresh source.
Will do. My memory isn't spectacular, but the comment was something along the lines that the 5700 series is an evolution toward the full "Navi" design, like a bridging point between GCN and a full Navi realization.

I'm really starting to think it was in a Full Nerd episode where they had Herkelman on as a guest. If it bugs me enough, I'll dig up the episode and re-watch it (their episodes are usually over an hour). I'll have to get fairly bored at the same time to actually follow through on that, though. :)
 
Something doesn't add up: Nvidia has no 7nm business with TSMC, yet this news indicates Nvidia launching in the second half of 2020. This news is also coming from Yuanta Securities Investment Consulting Co, which has a potential stock-related incentive. I don't believe any of it, especially the claim that Turing is scaling poorly.
 
It's a great time to be a PC builder. So many great choices available now and in the near future.
 