Report: Nvidia's next-gen Ampere cards 50% faster than Turing with half the power draw

midian182

Something to look forward to: It looks like this year could be an exciting one for the graphics card market. Not only is AMD’s next-generation “Big Navi” GPU rumored to arrive, but a new report claims Nvidia’s Ampere cards are also launching—and they’ll bring a sizable performance boost yet half the power consumption of Turing.

As reported by the Taipei Times (via Tom's Hardware), Yuanta Securities Investment Consulting Co said in a client note that Nvidia would launch its Ampere-based GPU in the second half of the year. Having revealed the Turing architecture at SIGGRAPH 2018, the company will likely unveil Ampere at the same event in July.

AMD has been on the 7nm process for a while now, so it’s no surprise that its rival will be following in Team Red's footsteps, moving from Turing’s 12nm FinFET manufacturing process to 7nm and the benefits it brings. And unlike AMD, Nvidia said its next-gen GPUs would be made by both TSMC and Samsung.

Speaking of benefits, Yuanta claims that Ampere’s 7nm process will lead to a 50 percent increase in graphics performance while halving power consumption. That does sound almost too good to be true, so we’ll have to wait and see how accurate the note proves.
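To put that claim in perspective, here's a quick back-of-the-envelope check (not from the report, just simple arithmetic on the two numbers it quotes) of what 1.5x the performance at half the power would imply for efficiency:

```python
# Simple arithmetic on the figures in Yuanta's note, assuming both apply to the same workload.
turing_perf = 1.0      # normalized Turing performance
turing_power = 1.0     # normalized Turing power draw

ampere_perf = turing_perf * 1.5    # claimed 50 percent faster
ampere_power = turing_power * 0.5  # claimed half the power consumption

gain = (ampere_perf / ampere_power) / (turing_perf / turing_power)
print(f"Implied performance-per-watt improvement: {gain:.1f}x")  # 3.0x
```

Taken at face value, that works out to a 3x generational jump in performance per watt.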

As one might expect, the launch of a new Nvidia GPU is expected to see increased demand for graphics cards and notebooks, which suffered from weaker-than-expected sales last year due to Nvidia digesting inventory, writes Yuanta. MSI, which sees 60 percent of its sales come from the gaming sector, will benefit most, with Gigabyte and Asus also reaping the rewards of a new architecture.

Ampere could find itself going up against AMD’s Big Navi graphics cards. The Navi 21 GPU is rumored to be twice the speed and physical size of the Navi 10 found inside the RX 5700 XT.


 
Both title and body mention a 50% performance increase, yet the first paragraph says:
"they’ll have twice the performance yet half the power consumption of Turing."
Since the source claims a 1.5x increase, not 2x, that's probably a mistake.
 
Speaking of benefits, Yuanta claims that Ampere’s 7nm process will lead to a 50 percent increase in graphics performance while halving power consumption. That does sound almost too good to be true, so we’ll have to wait and see how accurate the note proves.
While this does sound too good to be true, it's not beyond the realm of possibility - the RTX 2080 Ti was up to 55% faster in our testing:

[4K benchmark chart]


Of course, more games improved by 20 to 30 percent than by over 50 percent, so any vendor performance claims are always going to be a best-case scenario.
 
While this does sound too good to be true, it's not beyond the realm of possibility - the RTX 2080 Ti was up to 55% faster in our testing:

[4K benchmark chart]


Of course, more games improved by 20 to 30 percent than by over 50 percent, so any vendor performance claims are always going to be a best-case scenario.
I doubt that they'll pull off something like this and also reduce the power draw by that much. Even the same performance for half the power draw is a stretch too.

The more obvious answer would be that he wasn't referring to general graphics performance, but to something like improved Tensor/RTX cores, and even then only in a very specific workload.

We have never had such a huge perf/W increase in the history of GPUs, and it will 100% not happen on today's ever-shrinking nodes, where efficiency improvements become harder and harder to achieve because of increased leakage.
 
And that might not be such good news for us, with the new parts being so much better than what is currently on offer. The law of supply and demand suggests Nvidia would start with its most powerful product, so as not to release the one product that would exceed demand (for example, a mainstream card twice as powerful as the current ones) without asking more for it, since in this ecosystem such a product is rare. For example, the trade-off of Apple having twice the GPU power of any other smartphone, it seemed to me, was that they traded performance for a bit of efficiency.

Simply put, Nvidia, like any business operating in this market, would not offer its next mainstream card with the power of a 2070 Super or 5700 XT at the price of a 1060 or 2060, since that performance would be in the class of 2K-to-4K monitors. In the same way, TV manufacturers don't offer new TVs (whether the technology is very different or not) at sizes below 65 inches. Holding the ace means charging a premium. So I am guessing Nvidia will respond to whatever AMD also offers on the market, be it in the form of consoles or something else.
 
I doubt that they'll pull off something like this and also reduce the power draw by that much. Even the same performance for half the power draw is a stretch too.
Turing and Pascal are both, in essence, on TSMC's 16nm process node (yes, Turing is on "12nm," but that's just a revision of the 16nm node to improve density slightly). We don't know what process Ampere will be on - it could be TSMC's N7, N7P, or N7+ (and it's a similar story for Samsung). For the same operational parameters, N7 is roughly 50% lower in terms of power consumption, and N7+ is expected to offer a further 10% reduction. So there's scope for some serious power reductions.

Unfortunately we can't see exactly how well this works in practice, because the only 7nm GPUs around are from AMD, and they switched fabs with Navi, going from GlobalFoundries' 14nm process to TSMC's N7. That said, if one compares the Radeon Instinct MI60 to the Radeon RX Vega 64, both have the same 300W TDP, yet the former has four times the amount of HBM2 and a boost clock 264 MHz higher. Beyond those main differences, the chip designs are pretty much the same.

I fully agree that statements saying "double this, half that" are marketing hyperbole, though, and that we're not going to see a new Nvidia card that's 50% faster in general terms than the TU102 with a TDP of just 125W.
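As a rough illustration of where a figure like that 125W comes from, here's the same node arithmetic in code (the 50 percent and 10 percent factors are the ballpark numbers quoted above, applied to an approximate 250W board power for the RTX 2080 Ti - assumptions, not measured data):

```python
# Hypothetical power scaling from 12nm to 7nm-class nodes, using the rough
# factors quoted above (~50% lower power on N7, a further ~10% on N7+).
def scaled_power(power_12nm_watts: float, node: str) -> float:
    """Estimate power for the same design at the same clocks on a 7nm-class node."""
    factors = {"N7": 0.50, "N7+": 0.50 * 0.90}  # assumed power relative to 12nm
    return power_12nm_watts * factors[node]

tu102_board_power = 250  # RTX 2080 Ti board power in watts, approximate
print(f"N7:  ~{scaled_power(tu102_board_power, 'N7'):.0f} W")   # ~125 W
print(f"N7+: ~{scaled_power(tu102_board_power, 'N7+'):.0f} W")  # ~112 W
```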
 
I mean... if "big navi" outperforms a 2080 Ti by around 30 percent or even more, Nvidia may have to price accordingly, which could be around the $900 mark, but that's wishful thinking on both AMD's and Nvidia's side.
 
I mean... if "big navi" outperforms a 2080 Ti by around 30 percent or even more, Nvidia may have to price accordingly, which could be around the $900 mark, but that's wishful thinking on both AMD's and Nvidia's side.
I agree it would be nice to see such a pricing difference from nVidia and to see "big navi" at a price consistent with, or less than, the 2080 ti.

However, much like sIntel did with HEDT procs, nVidia has set the pricing bar relatively high, and since we have not yet seen 2080 ti price drops, this paves the way for AMD to price something ("big navi") with that kind of performance increase over the 2080 ti at a price comparable to, or more than, current 2080 ti prices, much like what AMD has done with the current and last generations of TR procs.

If that happens, we may not see much of a drop in 2080 ti prices.

IMO, this whole pricing thing is very annoying. I am certainly not saying that AMD, sIntel, and nVidia do not have a right to make money. What I am saying is that a lack of competition has pushed prices somewhat insanely high, IMO, in both the GPU and CPU marketplaces, and that has enabled the competitor, AMD, to price their current products at a similar level.

Maybe, just maybe, with added competition, the prices will equalize at some point. I tend toward HEDT procs myself and currently have an Ivy Bridge-E Xeon in my "workhorse" PC. While I would like to upgrade to a TR proc, current prices have me asking myself if it is worth it even at TR's entry-level point. When I decide to do a new build, I may look closely at all the issues, or simply wait until the next gen and pick up a used previous-gen TR proc, which will probably be priced at a level I consider reasonable.

I will be hoping for some sort of price correction event, but I am certainly not counting on it.
 
While this does sound too good to be true, it's not beyond the realm of possibility - the RTX 2080 Ti was up to 55% faster in our testing:

[4K benchmark chart]


Of course, more games improved by 20 to 30 percent than by over 50 percent, so any vendor performance claims are always going to be a best-case scenario.


Something doesn't make sense.

I am looking now at the overall results for the 2080 Ti and the 2080 Super, to see Turing's actual uplift and the power draw during gaming for both. I understand that going from TSMC 12nm to TSMC 7nm EUV could potentially cut the power usage in half, but I do not find "twice the performance" credible.

50% faster means half again as fast, not twice as fast (2x).
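To spell out the difference with a hypothetical 60 fps baseline (a made-up number, purely to show the arithmetic):

```python
# "50% faster" vs "twice as fast", using a hypothetical 60 fps baseline.
baseline_fps = 60
print(baseline_fps * 1.5)  # 50% faster    -> 90.0 fps
print(baseline_fps * 2.0)  # twice as fast -> 120.0 fps
```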
 
I doubt that they'll pull off something like this and also reduce the power draw by that much. Even the same performance for half the power draw is a stretch too.

The more obvious answer would be that he wasn't referring to general graphics performance, but to something like improved Tensor/RTX cores, and even then only in a very specific workload.

We have never had such a huge perf/W increase in the history of GPUs, and it will 100% not happen on today's ever-shrinking nodes, where efficiency improvements become harder and harder to achieve because of increased leakage.
We have had close. Look at the 780 Ti vs the GTX 580 - that was a ~90% jump. Granted, the 700 series came a year later than the 600s, but those weren't even refreshes; same arch.

Going from 12nm to 7nm with a major arch change could see a major bump in performance. After all, Turing's gaming cores are still heavily refined Kepler, which is heavily refined Fermi. Ampere could be a departure from that design, and could bring with it major improvements in performance.
 
What do you know about Ampere that we don't?
Who said Ampere was anything other than Turing shrunk, with some RT overhead..? Nobody has suggested that Ampere is some new, from-the-ground-up gaming architecture...

Ampere is just a new Volta for Data Center, AI, etc., that is lopped off for Gamers. Ampere is not a game-specific GPU, nothing of the sort.
 
50% faster at 50% lower power consumption is not that far fetched when we look at 1080 Ti vs 980 Ti
[relative performance chart, 3840x2160]

Anyone with a 1080 Ti can test this by decreasing the power limit to 50%: it would still deliver about 70% of its stock performance, which is 40% faster than a 980 Ti. This could be a prototype Ampere at this point, so they haven't cranked up the power yet. But we are looking at the same leap in performance as Maxwell to Pascal here.
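Working backwards from those figures (a rough sketch using the post's own assumptions, not new measurements):

```python
# Implied arithmetic from the figures above: if 70% of a stock 1080 Ti's performance
# is still 40% faster than a 980 Ti, the stock card must be roughly 2x the 980 Ti.
retained_fraction = 0.70       # performance kept at a 50% power limit (assumption above)
vs_980ti_at_half_power = 1.40  # claimed advantage over a 980 Ti at that power limit

stock_ratio = vs_980ti_at_half_power / retained_fraction
print(f"Implied stock 1080 Ti vs 980 Ti: {stock_ratio:.1f}x")          # 2.0x
print(f"At ~50% board power: {vs_980ti_at_half_power:.1f}x a 980 Ti")  # 1.4x
```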

The 3080 Ti's die size could be as big as the 2080 Ti's even when built on 7nm+. This is where Nvidia gets its improved efficiency from, by making very big GPUs, and they are very experienced at it too.

So yeah, Ampere will be expensive, but is it worth it? Hell yeah :).
 
If Nvidia's more recent graphics architectures have only been one thing, that thing is amazingly power efficient for their performance. We have to remember currently we have last gen Nvidia products effectively facing down AMD's best shot on a newer node, and still coming out on top for performance.

Despite being on an entirely next-gen node, AMD and Navi still struggle to get ahead of Turing even on the basis of performance per watt! The 5700 XT on 7nm only uses maybe a dozen watts less at load than the 2070 Super on 12nm, which is that little bit faster anyway.

If Nvidia did literally nothing to Turing but shrink it down, they would massively reduce the power consumption and gain a good 30 percent more performance from ramping clock speeds alone.
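As a rough, on-paper sketch of that straight-shrink scenario (the ~50% node power saving and ~30% clock headroom are assumptions from this thread, the 215W figure is the 2070 Super's approximate board power, and dynamic power is assumed to scale roughly linearly with clock speed):

```python
# Hypothetical "Turing shrunk to 7nm" estimate built on this thread's assumptions:
# ~50% lower power at the same clocks, then ~30% higher clocks, with dynamic power
# assumed to scale roughly linearly with frequency (voltage changes ignored).
def shrink_estimate(board_power_12nm: float, perf_12nm: float = 1.0):
    power_7nm_same_clocks = board_power_12nm * 0.50
    clock_gain = 1.30
    return perf_12nm * clock_gain, power_7nm_same_clocks * clock_gain

perf, power = shrink_estimate(board_power_12nm=215)  # RTX 2070 Super board power, approx.
print(f"~{perf:.2f}x the performance at ~{power:.0f} W")  # ~1.30x at ~140 W
```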

AMD will still need better cards in 2020 to combat Nvidia's next generation. I suspect they will transition to 7nm+ quickly this year. Probably refresh everything and we should have a nice GPU fight once again. Good for the consumer.
 
NICE, very good news. My i9-9900K and two NVLinked RTX 2080 Supers have served me well since launch. I can't wait to replace them with an i9-10900K and a pair of RTX 3080/Tis as soon as they're released. More power draw on the CPU while getting less power draw on the GPU side will be a good trade-off.
 
Not too surprised. AMD's 7nm is good; Nvidia's 7nm (and probably Intel's 10nm for CPUs) will be a lot better.
 
If Nvidia's more recent graphics architectures have only been one thing, that thing is amazingly power efficient for their performance. We have to remember currently we have last gen Nvidia products effectively facing down AMD's best shot on a newer node, and still coming out on top for performance.

Despite being on an entirely next-gen node, AMD and Navi still struggle to get ahead of Turing even on the basis of performance per watt! The 5700 XT on 7nm only uses maybe a dozen watts less at load than the 2070 Super on 12nm, which is that little bit faster anyway.

If Nvidia did literally nothing to Turing but shrink it down, they would massively reduce the power consumption and gain a good 30 percent more performance from ramping clock speeds alone.

AMD will still need better cards in 2020 to combat Nvidia's next generation. I suspect they will transition to 7nm+ quickly this year. Probably refresh everything and we should have a nice GPU fight once again. Good for the consumer.

Per transistor, rDNA runs games faster than Turing. Nobody buying a GPU ever picks up the box and asks how much power it uses while they game. Only shareholders obsess over such things.

How much did I pay and how many FPS do I get...
 
Per transistor, rDNA runs games faster than Turing. Nobody buying a GPU ever picks up the box and asks how much power it uses while they game. Only shareholders obsess over such things.

How much did I pay and how many FPS do I get...

Per transistor? That's a meaningless metric when comparing an architecture with transistors deployed for hardware ray tracing acceleration against one without, and meaningless when transistors versus clock speeds is a trade-off for every processor.

Primary concern for GPU selection is bang per buck.

However, performance per watt is usually quite high on my list of influential factors. Less power draw, less heat, less noise and everything that comes along with that is very desirable.

I'm far from the only one, so saying that nobody asks how much power a GPU uses is flagrantly false.

If I see a GPU 5-10 percent faster for the same money as another but it uses 50 percent more power, then I'm not buying it and many others will pass too.

Ask AMD, they did sell Vega. Tried to, anyway......
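To make the weighing-up concrete, here's a quick sketch with two made-up cards (hypothetical prices, frame rates, and board power, purely to illustrate the two metrics being discussed):

```python
# Two invented cards compared on the two metrics discussed above: bang per buck
# and performance per watt. The numbers are made up; only the ratios matter.
cards = {
    "Card A": {"price": 400, "fps": 100, "watts": 180},
    "Card B": {"price": 400, "fps": 108, "watts": 270},  # ~8% faster, 50% more power
}

for name, c in cards.items():
    print(f"{name}: {c['fps'] / c['price']:.3f} fps per dollar, "
          f"{c['fps'] / c['watts']:.3f} fps per watt")
# Card A: 0.250 fps per dollar, 0.556 fps per watt
# Card B: 0.270 fps per dollar, 0.400 fps per watt
```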
 
NICE, very good news. My i9-9900K and two NVLinked RTX 2080 Supers have served me well since launch. I can't wait to replace them with an i9-10900K and a pair of RTX 3080/Tis as soon as they're released. More power draw on the CPU while getting less power draw on the GPU side will be a good trade-off.
When it comes to gaming, the i7-9700K is faster than the i9-9900K.
It appears many have top-of-the-range products for bragging rights only. They haven't a clue.
 
If I see a GPU 5-10 percent faster for the same money as another but it uses 50 percent more power, then I'm not buying it and many others will pass too.
People are way too concerned about all these things, and they keep repeating the same idea others have stated many times: lower heat, lower noise... As if anything is noisy these days? Not even in a laptop, it seems...

I think people do not care about power consumption as long as the performance is high. For example, if you were paid for your hardware, with its performance earning you money and higher power consumption earning you less, would you really be that concerned, since you would still be making money? Even if not quite as much.

"If I see a GPU 10 percent faster for the same money as another but it uses 50 percent more power, then I'm not buying it and many others will pass too."

We can only hope people don't trade 10% lower power consumption for 50% less performance. That would not be adequate in the PC sphere, imo.

It appears many have top-of-the-range products for bragging rights only. They haven't a clue.

Not to offend anyone, but that attitude is not useful. The best attitude is one of needing less and wanting less, being content with what ya got; not deriving pleasure "from outside", but joy from within. Imo, I don't understand why someone would brag about hardware...
 
Per transistor? That's a meaningless metric when comparing an architecture with transistors deployed for hardware ray tracing acceleration against one without, and meaningless when transistors versus clock speeds is a trade-off for every processor.

Primary concern for GPU selection is bang per buck.

However, performance per watt is usually quite high on my list of influential factors. Less power draw, less heat, less noise and everything that comes along with that is very desirable.

I'm far from the only one, so saying that nobody asks how much power a GPU uses is flagrantly false.

If I see a GPU 5-10 percent faster for the same money as another but it uses 50 percent more power, then I'm not buying it and many others will pass too.

Ask AMD, they did sell Vega. Tried to, anyway......


Because, you can scale your node/uarch for various advantages.

And all your other facetious/pseudo concerns are drowned out by water cooling/aftermarket cooling. Again, performance per watt is not a concern to the ignorant or the educated buyer.

Never heard a single gamer ask another gamer...
"yo, how many watts you pulling.."

They want to know the FPS..
 
People are way too concerned about all these things, and they keep repeating the same idea others have stated many times: lower heat, lower noise... As if anything is noisy these days? Not even in a laptop, it seems...

I think people do not care about power consumption as long as the performance is high. For example, if you were paid for your hardware, with its performance earning you money and higher power consumption earning you less, would you really be that concerned, since you would still be making money? Even if not quite as much.

"If I see a GPU 10 percent faster for the same money as another but it uses 50 percent more power, then I'm not buying it and many others will pass too."

We can only hope people don't trade 10% lower power consumption for 50% less performance. That would not be adequate in the PC sphere, imo.

I'm sorry but you're just dead wrong in virtually all of your observations for my money.

I suspect, again, that I'm not the only one who seriously considers the power consumption, heat dissipation, and cooler noise on a GPU purchase. That's as a private consumer; if I were a commercial/industrial user, an architecture's performance per watt would basically be my MAIN consideration.

The underpinnings of these designs find themselves in spheres outside of mere consumer desktop machines, where power consumption is a critical factor.

Also, if you think laptops these days cannot be noisy (especially ones with discrete graphics!) then you're seriously out of touch. What else can I say.
 
Because, you can scale your node/uarch for various advantages.

And all your other facetious/pseudo concerns are drowned out by water cooling/aftermarket cooling. Again, performance per watt is not a concern to the ignorant or the educated buyer.

Never heard a single gamer ask another gamer...
"yo, how many watts you pulling.."

They want to know the FPS..

You're comparing gaming performance per transistor across two designs which aren't directly comparable. Bringing that up as a concern while dismissing performance per watt as one for anybody is ludicrous.

There is little doubt Nvidia has a performance-per-transistor advantage, and has had for a long time now, contrary to your initial claims. It is entirely reliant on the clocks you can hit, which is heavily reliant on the process node being used. On comparable nodes, Nvidia still has a clear advantage.

If you really believe performance per watt is not a concern to anyone, then you should probably talk to someone else rather than just looking in a mirror.
 
You're comparing gaming performance per transistor across two designs which aren't directly comparable. Bringing that up as a concern while dismissing performance per watt as one for anybody is ludicrous.

There is little doubt Nvidia has a performance-per-transistor advantage, and has had for a long time now, contrary to your initial claims. It is entirely reliant on the clocks you can hit, which is heavily reliant on the process node being used. On comparable nodes, Nvidia still has a clear advantage.

If you really believe performance per watt is not a concern to anyone, then you should probably talk to someone else rather than just looking in a mirror.

Turing can't keep up with rDNA, because rDNA is a game-specific uArch. Turing and Ampere are not...!

AMD will win every time when it comes down to die size and price. Nvidia just cannot compete (even at 7nm).
 