AMD Navi vs. Nvidia Turing: An Architecture Comparison

For the consoles, Microsoft and Sony went with AMD because of the semi-custom chips and the more open mindset, so they get all the intrinsics and normally hidden stuff, etc., that Nvidia would not hand to them.
If you talk to PS4 programmers, they have mostly learned a lot about the hardware and how to use it best.
The Nintendo Switch is a story of its own; it probably would never have happened on AMD hardware because of the efficiency deficit, which is fatal in that form factor.
 
Vega's scalar register file (SGPR) is only 3.2 KB each; from Hawaii through Polaris it was 4 KB,
and the HD 7870 had 8 KB each.
Thanks for the feedback - I went with 4 kiB as a generalised value for the SGPR, as the Polaris architecture is more common than Vega when taking into account all available gaming platforms.

For general understanding this article is very good, especially with the previous game render pipeline article as background.
Thanks for both articles (y) (Y)
Many thanks for the kind words - there's more of this stuff to come, and hopefully each one will be better than the last!
 
So it is finally official: Nvidia's architecture is better despite being on the older node and having something like half of the chip given over to ray tracing.

ROFL... what?

TU104 on 7 nm would be roughly 340 mm² for about the same transistor count as a 5700 XT at 251 mm².

AMD has a massive area advantage of about 35% for the same performance.

Also, AMD is using fewer transistors than Nvidia for about the same performance. Reality is a harsh place to be when you are a blind fanboy.
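
For what it's worth, here is that area claim as a quick back-of-the-envelope script (my own sketch: I take TU104's ~545 mm² size on 12 nm, a published figure, and back out the ~1.6x area scaling that the quoted 340 mm² implies):

```python
# Rough die-area sketch of the claim above (my own numbers and assumptions)
tu104_12nm_mm2 = 545                      # TU104 (RTX 2080) on TSMC 12FFN
implied_scaling = tu104_12nm_mm2 / 340    # ~1.6x area reduction implied by the "340 mm2 on 7 nm" figure
tu104_7nm_mm2 = tu104_12nm_mm2 / implied_scaling   # ~340 mm^2

navi10_mm2 = 251                          # Navi 10 (RX 5700 XT) on 7 nm
area_advantage = tu104_7nm_mm2 / navi10_mm2 - 1

print(f"Hypothetical 7 nm TU104: {tu104_7nm_mm2:.0f} mm^2")
print(f"Navi 10 area advantage: {area_advantage:.0%}")   # ~35%
```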
 
So it is finally official: Nvidia's architecture is better despite being on the older node and having something like half of the chip given over to ray tracing.

By the way, Navi 10 is facing Turing, not Ampere. When Ampere is released, AMD will be releasing either Navi 20 or Navi 30... and maybe on 5 nm. So your whole argument is just a delusion.
 
So it is finally official: Nvidia's architecture is better despite being on the older node and having something like half of the chip given over to ray tracing.

ROFL... what?

TU104 on 7 nm would be roughly 340 mm² for about the same transistor count as a 5700 XT at 251 mm².

AMD has a massive area advantage of about 35% for the same performance.

Also, AMD is using fewer transistors than Nvidia for about the same performance. Reality is a harsh place to be when you are a blind fanboy.

TU116 (1660 Ti) is 284 mm², so a straight die shrink to 7 nm would make it ~190 mm², performing the same as the 5700 while consuming just 120 W.

[Chart: relative performance, 2560x1440]


In fact, if Nvidia just made a 12 nm TU116 that is 50% bigger (~420 mm²), it would consume about 180 W and put every single card south of the 2080 Ti out of commission. How does RTX 2080 performance for $420 sound?
But then, Nvidia doesn't want to cannibalize its own RTX Turing sales either; perhaps next year, once AMD has shown all its hands.
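
Here is that scaling argument written out (my own sketch, assuming performance, power and die area all scale roughly linearly with shader count - clocks, memory bandwidth and yields are ignored):

```python
# Naive linear-scaling sketch of the "shrink/enlarge TU116" argument above
# (my assumptions, not measured data)
tu116 = {"area_mm2": 284.0, "power_w": 120.0, "perf": 1.0}   # GTX 1660 Ti baseline

def scale(chip, factor):
    """Scale area, power and performance by the same factor (a big simplification)."""
    return {key: value * factor for key, value in chip.items()}

shrunk_7nm = {**tu116, "area_mm2": tu116["area_mm2"] / 1.5}  # ~190 mm^2, same perf/power, per the post
bigger_12nm = scale(tu116, 1.5)                              # ~426 mm^2, ~180 W, ~1.5x performance

print(shrunk_7nm)
print(bigger_12nm)
```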
 
As we now know, nobody cared back then about power consumption, did they? And it seems that efficiency is important to you, so now, or in the future, RDNA should be right up your alley.

But Maxxi, you can't protect Nvidia anymore; Jensen can't control all the channels. The truth is out. Major discounts are incoming for RTX cards soon.

AMD has a class act going, and it seems the entire gaming industry is on board.


AMD's GCN has been in consoles for years, yet NVIDIA IS FASTER. So quit your tales; no one is interested in them. All of the RX series, including Vega, use the GCN architecture.

The Turing chip is on an older node and is 2x larger, with half of the die dedicated to ray tracing, yet it consumes as much power as the underperforming 7 nm RDNA and offers more performance.

So yeah, when it comes to efficiency, Nvidia is more efficient.

I am sorry that AMD got you again; I see their marketing worked. It was a few years ago that the current consoles were announced with AMD CPUs and GCN GPUs. "Intel is done, Nvidia is done. All the games are gonna be optimized for AMD and will perform better on it," people were saying.

We know how that turned out... but, but, but.

Now the circlejerk has started again. I am really surprised how often the same trick keeps working again and again. Now go read AMD's tweets about how RDNA is awesome and how it is cheaper to manufacture, whilst the opposite is true because 7 nm is significantly more expensive, with lower yields, than 12 nm. Go celebrate with them that after a year they released something that can barely match mainstream Nvidia GPUs whilst being on a better node and having no ray tracing support. Oh god, RDNA is truly awesome and superior.

AMD marketing: So after a year we have finally released 2 GPUs on the 7 nm node. Nvidia is using the 12 nm node. Despite the fact that we are using the clearly superior node and not supporting ray tracing, thus not wasting additional die space which is turned off most of the time, OUR CARDS CAN ONLY MATCH MAINSTREAM NVIDIA GPUS. *AMD panics* What are we gonna do? What are we gonna do? You know what, we cut the prices and pretend it was intended, we hype our RDNA like never before, and we tweet about how we played Nvidia and how their chips are bigger and more expensive to make. Our fan base is dumb enough to believe anything we say, and anyone with decent knowledge will not buy our GPUs, so no loss! #RDNA4LIFE #RDNA4EVER

Maxi,

As much as you want to keep bringing up the past, GCN is not RDNA. The industry didn't get behind GCN like it is with RDNA. As much as you want to keep dwelling on the past, it has nothing to do with what the industry is doing now, which is everyone getting behind the RDNA banner.

And just so you know, a downclocked Navi 10 equals the 2060 Super in gaming performance, but is way more efficient. RDNA beats Turing... and it is more than obvious when you OC RDNA and it beats the RTX 2080 in some games.

Now AMD is coming out with bigger chips based on RDNA, with even more gaming performance. And it isn't AMD hyping their architecture; it is the industry and us end users. And you should be too... if you are a gamer and someone who pays for their own hardware. Now that RDNA's whitepapers are starting to come out, it's obvious why AMD fooled everyone with Navi and held back. Nvidia has nothing left to fight RDNA with, and it is going to take more than Turing at 7 nm to beat RDNA.

Also, Nvidia's support of ray tracing is broken. RTX doesn't work and causes a severe performance hit when ray tracing, so RTX has been found to use only partial ray tracing in games (per Nvidia's own team)... and therefore it will always be broken. Go read Reddit.
 
TU116 (1660 Ti) is 284 mm², so a straight die shrink to 7 nm would make it ~190 mm², performing the same as the 5700 while consuming just 120 W.

In fact, if Nvidia just made a 12 nm TU116 that is 50% bigger (~420 mm²), it would consume about 180 W and put every single card south of the 2080 Ti out of commission. How does RTX 2080 performance for $420 sound?
But then, Nvidia doesn't want to cannibalize its own RTX Turing sales either; perhaps next year, once AMD has shown all its hands.


How much performance do you think Turing would gain if it moved to 7 nm (and nothing else)?
Because your math doesn't work.

I would wager you don't understand much at all. Nvidia cannot just wait an entire year to release a 190 mm² TU116, because it would have to sell it for much less than the current 1660 Ti: by the time Nvidia could release such a 7 nm chip (a year from now?), AMD's 5700 would've already been out for a year and be heavily discounted (i.e. $199-$249).

Just like AMD is doing with Ryzen 1 vs. Ryzen 2.


For those upgrading to the new 1440p FreeSync 2 panels on the cheap, any RDNA GPU is going to work. If you are moving to a 4K FreeSync 2 display, then Big Navi or the biggest Navi is going to be your best bet.

Not only that, if you are upgrading later and just bought a (Navi 10) 5700, you will be able to sell it easily to upgrade to the next-sized Navi. The 5700 is not going to lose its resale value, because Navi 10 sets the bar for mainstream 1440p. (AMD owns that space.)

That is what you call a win/win for anyone buying a 5700 series card.
 
Sorry to burst your bubble, but 7 nm Navi doesn't even compete with Nvidia's 12 nm, let alone 7 nm.

TU116 vs Navi 10:
_Performance per transistor: the 1660 Ti is 45% slower than the 5700 XT, but the 5700 XT has 56% more transistors (10.3 vs 6.6 billion) - not to mention how far out of spec AMD is pushing the 5700 XT.

_Performance per watt:
[Chart: performance per watt, 2560x1440]

Notice how all the RTX Turing cards have pretty much the same perf/watt; that means a bigger non-RTX Turing would have the same efficiency as TU116, or higher with faster VRAM.

_Cost per mm² of die area: 7 nm costs twice as much as a 14/16 nm part.
[Chart: AMD, die cost increase per process node improvement]

Nvidia could make a ~500 mm² 12 nm Turing for the same price as the Navi 10 chip; TU116 is 284 mm², so double that and we have 2080 Ti performance for not much more than an RX 5700 XT (rough math in the sketch below).

_Market adoption: the 1660 Ti alone sold 2.25x more than Vega 56/64 in its 5 months on the market, while Vega 56/64 have been selling for 2 years; the RX 590 is not even on the list, lol.
https://store.steampowered.com/hwsurvey/videocard/
Now, you keep saying every game will be designed for RDNA. Just give me the list of games in which RDNA will definitely be faster, because I can give you a list of games in which it definitely is not. RDNA still takes a beating in UE4 games, and there are plenty of AAA titles coming (see my previous post). Cyberpunk 2077 will definitely favor Nvidia (GG Navi right there); RDR 2? No need to ask.
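
As referenced above, here is that cost arithmetic as a quick script (my own sketch; the "7 nm costs ~2x as much per mm² as 12/16 nm" ratio is taken from the chart cited in this post, and wafer yields are ignored):

```python
# Back-of-the-envelope die-cost comparison (my own sketch, not official figures)
COST_PER_MM2_RATIO_7NM_VS_12NM = 2.0   # assumption taken from the cost-per-node chart above

navi10_area_mm2 = 251.0   # RX 5700 XT, 7 nm
tu116_area_mm2 = 284.0    # GTX 1660 Ti, 12 nm

# Relative silicon cost in arbitrary units (1 unit = one 12 nm mm^2)
navi10_cost = navi10_area_mm2 * COST_PER_MM2_RATIO_7NM_VS_12NM
tu116_cost = tu116_area_mm2 * 1.0

print(f"TU116 die cost relative to Navi 10: {tu116_cost / navi10_cost:.2f}x")      # ~0.57x
print(f"12 nm area buyable for Navi 10's silicon budget: {navi10_cost:.0f} mm^2")  # ~502 mm^2, the '~500 mm2' claim
```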

We all know Nvidia has had a non-RTX Turing lineup ready for a while now; it is just a matter of which RTX cards they want to replace with it. Unlike AMD, who live off the rumor mill, Nvidia can drop the non-RTX Turing bomb at any moment with immediate availability, like they did with the 1660 Ti and the Super series.
 
Interesting article. In real-world use, to me Navi is disappointing: no ray tracing, and only really able to match Nvidia's 12 nm cards despite being on 7 nm. The efficiency isn't there either; the cards come power-locked to preserve their quoted consumption numbers. Remove that restriction and these Navi cards will consume more power than a 2080 Ti whilst performing significantly worse than one. See the overclocking tests people have done.

The 5700 XT is one very, very hot monstrosity. I can't see AMD going much bigger without making watercooling standard, and you know the card it will compete against from Nvidia will be a lot cooler and quieter.

I don't know what it is, but the Radeon brand has really stagnated over the last 5 years or so. It's a shame, because AMD are doing so well with Ryzen, but Radeon is a shell of the brand it used to be.
 
RDNA learns a lot from GeForce and CUDA, Zen 2's IPC catches up with Intel's Coffee Lake, and the 7 nm process technology is put to good use. All of this makes the PS5 and the next Xbox look delicious.
 
A TDP of 225 watts is getting absurd.

Radeon cards consume power like a child consumes candy on Halloween.

I'll sacrifice some performance for a card that uses far less power (and generates less heat).

If AMD wants to best Nvidia the way their CPUs are besting most Intel CPUs, they MUST consume less power while delivering superior performance.
 
Compared to Turing, yes it does look disappointing. However, compare Navi to Vega, and it's a massive improvement:

Vega 10
12.5 billion transistors | 14 nm process | 495 mm² area | 375 W TDP

Navi 10
10.3 billion transistors | 7 nm process | 251 mm² area | 225 W TDP

Now, it's probably the case that Vega's TDP figure includes the HBM, as AMD themselves state that the Navi chip uses 23% less power than the Vega 10 chip - but either way, it is a notably better performing product.
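
For what it's worth, the same comparison as a quick script (just re-deriving density and power ratios from the figures above; nothing here beyond those numbers):

```python
# Density and power comparison using only the figures quoted above
vega10 = {"transistors_b": 12.5, "area_mm2": 495, "tdp_w": 375}   # 14 nm
navi10 = {"transistors_b": 10.3, "area_mm2": 251, "tdp_w": 225}   # 7 nm

for name, chip in (("Vega 10", vega10), ("Navi 10", navi10)):
    density = chip["transistors_b"] * 1000 / chip["area_mm2"]     # million transistors per mm^2
    print(f"{name}: {density:.1f} MTr/mm^2, {chip['tdp_w']} W TDP")

# ~25 MTr/mm^2 vs ~41 MTr/mm^2, and a ~40% lower board-level power figure
print(f"TDP reduction: {1 - navi10['tdp_w'] / vega10['tdp_w']:.0%}")
```

The ~40% drop at the board level versus AMD's quoted 23% chip-level figure is consistent with the HBM point above.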

It's also worth noting that the TSMC 12 nm process node the Turing chips are manufactured on isn't actually 12 nm. TSMC manufactures chips for Nvidia on a specialised line, using a process called 12FFN, but it's a revised 16 nm node. By going with this, Nvidia could have low transistor density chips (thereby helping to keep temperatures down), but at the cost of them being very large in area (and hence reducing the yield from a standard-size wafer).

However, TSMC's 16 nm lines started back in 2015 (with 12FF starting a year later), so the revised process was built off a matured product, which generally means better yields. And speaking of TSMC's 16 nm process node, it's really just a 20 nm process, but with better production (allowing slightly higher density and lower power). So Nvidia's products are built off very mature systems and libraries, where successive improvements have been about reducing current leakage and manufacturing costs - both of which were crucial to Nvidia's choice of going for a very large transistor count design (the transistor density of their past 3 architectures has barely changed, for example).
 
Basically, your whole point is moot. Nvidia will never release a larger version of TU116. It would basically be a big confession that they were wrong, and they would completely lose trust with investors. Nvidia pushed big time how ray tracing was the best thing since sliced bread. If they released big cards without it, they would basically be saying you don't need ray tracing. It would be committing marketing suicide.

How would you know they won't? As more people voice the opinion that RTX is useless on the lower-end cards (2060, 2060 Super, 2070), Nvidia just might release non-RTX versions to replace those (at a lower price, of course). The highest end would, however, remain RTX Turing. It all depends on what the market dictates, now that the 2060 Super and 2070 Super face fierce competition from the 5700/XT.
 
Basically, your whole point is moot. Nvidia will never release a larger version of TU116. It would basically be a big confession that they were wrong, and they would completely lose trust with investors. Nvidia pushed big time how ray tracing was the best thing since sliced bread. If they released big cards without it, they would basically be saying you don't need ray tracing. It would be committing marketing suicide.

How would you know they won't? As more people voice the opinion that RTX is useless on the lower-end cards (2060, 2060 Super, 2070), Nvidia just might release non-RTX versions to replace those (at a lower price, of course). The highest end would, however, remain RTX Turing. It all depends on what the market dictates, now that the 2060 Super and 2070 Super face fierce competition from the 5700/XT.

How do I know? Well, the Nvidia CEO said "and to not have ray tracing is just crazy". That completely makes my point.

To that, you would then respond: but they just launched a new GTX model! Yes, but that is a lower-end model, not a flagship.

There is no chance they will launch a large GPU without ray tracing.

(Sorry for the delayed response. I accidentally created a new account with my previous responses and deleted it, then realized that my response was deleted as well.)
 
Lol, the 1660 Ti is actually playable with Medium DXR at 1080p, so basically non-RTX Turing (and Pascal) is ray tracing capable.


We need some testing with Medium DXR in Control.
 
It's possible that, like with Intel and their 10 nm node, Nvidia will have trouble getting high clocks. It might be one of the reasons why they haven't talked much about it and why they'll "presumably" focus on power efficiency. It should also mean that they'll be able to add more RT and Tensor cores to the GPU to make those features more viable.

We'll probably not see the same jump in performance as with Pascal (vs. Maxwell) from their first-gen GPUs on 7 nm. I fully expect them to just refine Turing. I'll be happy if they manage to get 15-20% just by adding more cores, even though they'll probably not increase the clocks. The increased complexity of Nvidia's GPUs makes the transition trickier.

Hopefully AMD manages to get something out for the high-end market by the end of the year or in early 2020, and TSMC's 7nm+ will not be delayed. AMD needs to introduce ray tracing together with the 7nm+ node in some shape or form (late 2020 or 2021?). I think Nvidia's 2nd-gen RT cores should finally be good enough to get more devs to use this feature (thanks, 1st-gen beta testers :D).

Getting to play Cyberpunk 2077 with all the RT will be enough to justify my 2080 Ti purchase :D, just like I bought the Titan X Maxwell when The Witcher 3 came out (the GTX 980 was stuttering like hell).
Anyway, Turing will be stronger than ever with the release of upcoming Unreal Engine 4 games:

_Borderlands 3 (Sep 2019)
_The Outer Worlds (Sep 2019)
_Shenmue 3 (2019)
_Star Wars Jedi: Fallen Order (2019)
_Final Fantasy VII Remake (2020)
_Outriders (2020)
_System Shock (2020)
_Vampire: The Masquerade – Bloodlines 2 (2020)
_S.T.A.L.K.E.R 2 (2021)
...
And a whole bunch of other games, but these are the ones I'm interested in. Also, upcoming RTX games include Control (2019), MechWarrior 5: Mercenaries (2019), and Cyberpunk 2077 (2020).
Final Fantasy VII Remake has a one-year PS4 exclusivity deal, so you won't be seeing it in 2020.
 
This is fortune-telling with very, very little to go on. I'm confident Nvidia will do more than fine, but we know very little AFAIK (at least, TechSpot mentions next to nothing).
(So, why did this get liked?)

Because pointless fortune-telling, and responding to pointless fortune-telling, is the absolute best some people can muster. I used to think the majority of people participating in tech forums were intelligent... that opinion is changing rather rapidly...
 
So, by the article's final conclusion... it sounds like the efficiency of each architecture is largely dictated by the game engine's code and its optimizations for any given platform.

"This will have an impact on how various games performance because one 3D engine's code will favor one structure better than the other, depending on what types of instructions are routinely sent to the GPU. "

Sounds a bit like pay to win? The company with the most bucks to incentivize game engine devs wins? :)
 
A TDP of 225 watts is getting absurd.

Radeon cards consume power like a child consumes candy on Halloween.

I'll sacrifice some performance for a card that uses far less power (and generates less heat).

If AMD wants to best Nvidia the way their CPUs are besting most Intel CPUs, they MUST consume less power while delivering superior performance.

Yes, very good analysis and logical reasoning, with no bias. 225 W vs 215 W is not remotely acceptable. I won't even need to heat my house in the winter with that 10 W of extra output!

Can you imagine all the extra money I'm going to have to pay for electricity? WTH is AMD thinking, adding 10 extra watts on similarly performing parts?

Thanks for pointing that out. I was just about to buy a 5700 XT based on all the great reviews, but now that you've pointed that 10 W difference out, I'll gladly pay $120 extra to not have to put up with that 10 W BS!!

Whew! Almost bought a card with 10 more watts! Close one! Thank god there are great, objective people like you looking out for us.


All facetiousness aside... back on planet Earth, Navi and Turing consume roughly the same power per unit of performance, with a very slight edge going to Turing, as can be noted in the article.
 
"Unlike the ALUs, they won't be programmable by the end user; instead, the hardware vendor will ensure this process is managed entirely by the GPU and its drivers."

Um, actually, both AMD and Nvidia do provide software for programming their GPUs directly. They do this partly in the hope of enticing vendor lock-in, and partly for people doing GPU computing on a specific system. In general, though, game programmers will indeed use the DirectX, OpenGL, or Vulkan drivers provided by the GPU maker, so that games will work regardless of which brand of video card you have - but it isn't because some encryption feature is locking them out.
 
I do not see it as a good investment to buy a video card later this year with HDMI 2.0 and DisplayPort 1.4, when HDMI 2.1 has been released and DisplayPort 2.0 is almost here.

Looks like video card manufacturers want to suck you in with this one, so next year they can just update the ports and get your money a second time. They won't suck me into this.

Wait, what...
You're describing some sort of scam by AIBs? I don't understand.
 
For me it is clear enough that AMD is catching up rather fast; ending up in the second-tier performance class is no small feat considering they have only competed in the mid-range mainstream class for years. The 5700 XT is now at the performance level of a 1080 Ti.

There is a hint that they are holding back the release of the higher end; maybe they need to wait for 7 nm to mature so they can get better thermals and yields. I can only guess. 2020 is going to be very exciting.
 
These GPUs are highly programmable, and we can see how software can easily improve upon the hardware, so the question is what kind of knowledge the developer is allowed to have, and how useful that is to them compared to the cost. If it takes two console generations to fully use the hardware, why don't these companies actually pay developers to improve the software, instead of relying on driver fixes or waiting for major engine updates to get the hardware implemented properly for once? The industry is fairly old while the hardware keeps improving, so I wonder what shortcuts are made, and where, versus price and time to reap the profit. Maybe this is why AMD did not really rush that much: being in the consoles, developers could actually unleash the power hidden within the hardware over a longer time span.


Because pointless fortune-telling, and responding to pointless fortune-telling, is the absolute best some people can muster. I used to think the majority of people participating in tech forums were intelligent... that opinion is changing rather rapidly...
Out of all the possible answers, all we got is a carousel of plain, simple logic mixed with a subjective, imperative tone.

The idea that such complex technology is available to people, yet all they know about it is the brand and what it cost to get. No passion at all, imo.

I kind of giggled reading how much math is packed into these computers and how it is exposed to people of all ages, yet we know nothing but mouse and keyboard, and some of us make pointless arguments without proper base knowledge. The whole industry is basically on the same team; only the consumers, pointing out whichever statements favor them, are different. It reminds me of that green-eyed logic riddle... The only new information people are getting is whatever keeps them in the state they are in, aka being the consumer base. Sorry for going slightly off-topic.

My two cents.
 