No Nvidia Killer? Big Navi's best could match the RTX 2080 Ti, fall short of the RTX 3080...

Honestly, the need for graphics horsepower has declined significantly. And graphics in games aren't making the jumps they used to; we're no longer in the era of a Crysis 1 or Far Cry 2 versus Deus Ex or Counter-Strike 1 level of graphical jump within a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan it could still hang at 1080p low/medium.

The need for more graphics horsepower has increased not so much from big leaps in graphics engines, but more from the increased availability of higher resolution displays. I game at 2160p now. No way I would have been doing that 5 years ago.
 
DLSS is huge; it will allow budget-grade RTX cards to compete with top-end AMD cards and potentially even look better than them as well. There's a huge amount of support for it now too, with more to come.

I'm currently playing through Death Stranding, and DLSS quality mode looks sharper and overall better than DLSS off with anti-aliasing. It's very impressive and has come along so much since the scruffy implementation in the first set of games to use it.

It's funny: Nvidia made a big song and dance out of ray tracing, but ray tracing at this point is still a bit of a gimmick; it's only really any good in Metro Exodus, in my opinion. DLSS, on the other hand, is turning out to be the real game changer.

DLSS unlocks the door to ray tracing for next gen Nvidia cards. It's that simple.

Without DLSS it'll still be too costly. With it, you can see it will be viable sooner rather than later.
 
You've just explained yourself why the yields weren't bad - 12FFN is a tighter metal pitch revision of 16FF. In other words, it's a well-matured process node. Mind you, we should really be more specific about exactly what we mean by yield: wafer fabrication, wafer sorting, die binning, packaging. When I say the likes of the TU102 and 104 have been fielding decent yields, I'm specifically referring to a combination of wafer sorting and die binning - i.e. the ratio of dies that can be used in any end product to the total number of dies fabricated.

The problem is still die size. It sounds reasonable that AMD got 80% yield for fully working Zeppelin (an unconfirmed report). 14LPP is broadly similar to 12FFN, and if AMD really got 80% yield on a 212 mm² chip, then Nvidia (assuming a similar defect density) would get somewhere around 30% for a fully working 2080 Ti.

So fully working 2080 Ti dies have bad or at best mediocre yields, no matter how mature the process is. The die is just that big.
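A rough sketch of that scaling argument, assuming a simple Poisson defect model and the die areas quoted in this thread (the numbers are illustrative only; the real figure depends heavily on which yield model and defect-clustering assumptions you use):

```python
import math

# Back out a defect density from the (unconfirmed) 80% Zeppelin yield,
# then apply it to a TU102-sized die. Simple Poisson model: Y = exp(-D * A).
zeppelin_area_cm2 = 212 / 100    # 212 mm^2 -> cm^2
tu102_area_cm2 = 775 / 100       # ~775 mm^2 -> cm^2

defect_density = -math.log(0.80) / zeppelin_area_cm2     # defects per cm^2
tu102_yield = math.exp(-defect_density * tu102_area_cm2)

print(f"Implied defect density: {defect_density:.3f} per cm^2")    # ~0.105
print(f"Estimated fully working TU102 yield: {tu102_yield:.0%}")   # ~44%
```

This particular model lands a bit above 30%, but the direction is the same: the bigger the die, the harder the yield falls for fully enabled parts.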

However, if we refer to yield as being simply the total number of working dies from a single wafer, then one can argue that it is 'poor' - but it would be unfair to claim so, as the likes of the TU102 are huge. Even at 100% yield, a 300 mm wafer will only turn out around 60 TU102 chips, and those are needed for 11 different end products. This is partly why the prices are so high: just the sheer number of wafers that have to be manufactured to create the volume required.
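For reference, the usual back-of-the-envelope dies-per-wafer estimate, ignoring defects, edge exclusion and the die's rectangular aspect ratio (which is why it comes out a little above the ~60 figure that real layouts give):

```python
import math

# Rough gross dies per wafer: wafer area / die area, minus an edge-loss term.
wafer_diameter_mm = 300
die_area_mm2 = 775   # TU102, as quoted in this thread

wafer_radius = wafer_diameter_mm / 2
gross_dies = (math.pi * wafer_radius ** 2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(f"Rough gross TU102 dies per 300 mm wafer: {gross_dies:.0f}")   # ~67
```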

Chips using 12FFN will still be manufactured on the 300 mm production lines, using pretty much the same design rules and libraries as 16FF/12FFC, so it's not as if TSMC has a separate production area just for Nvidia - that production just has to slot in with everything else. There are only something like five plants that handle those wafers, too.

It's both. Low absolute number of dies per wafer and also small number of fully working dies per wafer.

Of course, but TSMC won't create a custom process for free. TSMC wants Nvidia's money one way or another. Basically, Nvidia could either pay a large sum upfront and then get a certain number of wafers for the normal fee, or agree to buy a certain number of wafers at a higher fee. Either way, Nvidia must estimate how many wafers they need or end up paying too much. More wafer orders mean the custom-process development money is spread over more wafers.

That's something that isn't a problem with regular processes.
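A toy amortization example with entirely made-up numbers, just to illustrate the per-wafer cost effect (the real 12FFN development fee and wafer pricing are not public):

```python
# Hypothetical figures purely for illustration -- actual 12FFN NRE and
# wafer prices are not public.
custom_process_nre = 150_000_000    # one-off development cost, USD (made up)
base_wafer_price = 6_000            # per-wafer fee, USD (made up)

for wafers in (50_000, 100_000, 200_000):
    effective_price = base_wafer_price + custom_process_nre / wafers
    print(f"{wafers:>7} wafers -> effective ${effective_price:,.0f} per wafer")
```

The more wafers the upfront fee is spread across, the closer the effective price gets to the base wafer fee - which is why the wafer-volume estimate matters so much.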
 
This doesn't make a lot of sense. The 2080 Ti isn't that much faster than a 1080 Ti (about 30%). AMD should be able to get that on a 6700 XT class card simply through the node and architecture changes. Nvidia has gotten a lot more than that from just architectural tweaks and a node change.

If AMD is only getting 30% from a new node, a new architecture, and a bigger die, something is wrong. I very much doubt this report is accurate.
 
DLSS unlocks the door to ray tracing for next gen Nvidia cards. It's that simple.

Without DLSS it'll still be too costly. With it, you can see it will be viable sooner rather than later.
Yeah, DLSS just helps literally everything. Although I do find that with an RTX 2080 at 1440p I can get over 60 fps in any ray-traced game without DLSS. The performance impact isn't as bad as everyone bangs on about. Maybe it was at first, but it got patched out.
 
14LPP is broadly similar to 12FFN, and if AMD really got 80% yield on a 212 mm² chip, then Nvidia (assuming a similar defect density) would get somewhere around 30% for a fully working 2080 Ti.

So fully working 2080 Ti dies have bad or at best mediocre yields, no matter how mature the process is. The die is just that big.
Are you suggesting that out of the maximum 60 or so TU102 dies achievable from a 300 mm wafer, only 30% (18 chips) are functional, or that 30% of all the functioning dies end up in a 2080 Ti?

That particular GPU is used in the following products:

GeForce RTX 2080 Ti
Quadro RTX 6000 + non-active cooling version
Quadro RTX 8000 + non-active cooling version
TITAN RTX

The Quadro and Titan cards use full TU102 chips, with only clock speeds and local memory sizes being the differentiators; only the 2080 Ti uses a partially disabled chip. In January 2020, from one German retailer alone, just under 500 GeForce 2080 Tis were sold. If a single 300 mm wafer were only producing 18 viable chips and they all went into 2080 Tis, then 28 wafers would be needed for that one month, for that one store.

Expand that across all retailers across the globe, and include chips for the Quadros, and 30% simply isn't viable, regardless of how much the end products cost. At worst, it's going to be 50%.
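To make the arithmetic explicit, a small sketch using the figures quoted above (18 usable dies per wafer and roughly 500 units a month from one store):

```python
# Wafers needed per month for a single retailer's 2080 Ti volume, under the
# pessimistic assumption that only ~30% of dies are usable and that all of
# them end up in 2080 Tis.
gross_dies_per_wafer = 60    # rough maximum TU102 dies per 300 mm wafer
assumed_yield = 0.30         # the disputed "fully working" figure
monthly_2080ti_sales = 500   # one German retailer, January 2020

usable_dies_per_wafer = gross_dies_per_wafer * assumed_yield   # ~18
wafers_needed = monthly_2080ti_sales / usable_dies_per_wafer   # ~28

print(f"Usable dies per wafer: {usable_dies_per_wafer:.0f}")
print(f"Wafers for that one store, one month: {wafers_needed:.0f}")
```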
 
Which should be around the performance of a 3070 to 3080, which is better than what I thought it would be; I was initially thinking it would land between a 3060 and a 3070.
Still, if they offer this at the right price it will be great value, just not the best on the market. High-end chips typically see about the same level of adoption as VR headsets, which is viable for enthusiasts but still nowhere near the mainstream market.
 
The need for more graphics horsepower has increased not so much from big leaps in graphics engines, but more from the increased availability of higher resolution displays. I game at 2160p now. No way I would have been doing that 5 years ago.

Except the vast majority of PC gaming consumers are at 1080p or below.
 
AMD just didn't bother to make big enough chips because it's quite expensive.

The 5700 XT is only 251 mm², while the GTX 1080 Ti is almost double at 471 mm². Not to mention the 2080 Ti at 775 mm².

If AMD had just bothered to make a roughly 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti. AMD decided it wasn't worth it. So much for "NVIDIA is always a whole generation ahead" *nerd*

Too bad the 7nm process is much more expensive and has worse yields than 12nm. Sure, it has performance and efficiency benefits, but it probably costs Nvidia only slightly more for a 445 mm² 2060 Super die than it costs AMD for their 251 mm² 5700 XT die. Not to mention the build cost for the rest of the card: the RX 5000 series seems to need significantly more heavy-duty VRMs than its Nvidia counterparts, further complicating the cost benefit of a smaller die.

The worst part is, despite the perf and efficiency gains of 7nm compared to 12nm, AMD can only hit perf/watt parity with their competing parts. I'm sure they aren't looking forward to Ampere on 7/8nm.
 
Are you suggesting that out of the maximum 60 or so TU102 dies achievable from a 300 mm wafer, only 30% (18 chips) are functional, or that 30% of all the functioning dies end up in a 2080 Ti?

That particular GPU is used in the following products:

GeForce RTX 2080 Ti
Quadro RTX 6000 + non-active cooling version
Quadro RTX 8000 + non-active cooling version
TITAN RTX

The Quadro and Titan cards use full TU102 chips, with only clock speeds and local memory sizes being the differentiators; only the 2080 Ti uses a partially disabled chip. In January 2020, from one German retailer alone, just under 500 GeForce 2080 Tis were sold. If a single 300 mm wafer were only producing 18 viable chips and they all went into 2080 Tis, then 28 wafers would be needed for that one month, for that one store.

Expand that across all retailers across the globe, and include chips for the Quadros, and 30% simply isn't viable, regardless of how much the end products cost. At worst, it's going to be 50%.

I'm suggesting that around 30% of chips are fully functional. Estimating how many are "functional enough" for partially disabled products is much harder: those partially disabled chips are nearly full, but on the other hand there are many possible parts to disable. So we are probably talking about 30% fully functional and additionally around 20% functional enough. 50% usable in total.

Those figures may sound excessive, but again, we are talking about a massive chip here. If Zeppelin is considered quite large at 212 mm², then what is 775 mm²? Ultra massive.

According to reports, AMD booked 30K 7nm wafers per month from TSMC. So if Nvidia takes at least 10K wafers per month, that makes around 300K at least partially functional chips per month. Sounds like enough.
 
As was expected

The top-end cards are only a fraction of the market anyway; as long as performance is on par and the price is right, the fact it isn't faster than a 3080 Ti doesn't really mean much.
That's erroneous thinking. If you are number one, you are number one. People will simply buy your cards or put them into OEM systems just because you're number one.
 
I'm suggesting that around 30% of chips are fully functional. Estimating how many are "functional enough" for partially disabled products is much harder: those partially disabled chips are nearly full, but on the other hand there are many possible parts to disable. So we are probably talking about 30% fully functional and around 20% functional enough.
It would be useful to have a sense of the number of Quadro and Titan RTXs sold on a monthly basis since launch, as it would provide a way to estimate the percentages better. Then again, the only difference between a 2080 Ti and a Quadro RTX 6000/8000 is two TPCs (i.e. 4 SMs) and one memory controller, so chips for the former are almost full dies. As a guess, I would think that the GeForce outsells the Quadro/Titan combined by 100:1, based on Mindfactory's 2080 Ti sales. To me this suggests the percentage split is the other way round: 30-40% with SM defects and 10-20% fully functional.

According to reports, AMD booked 30K 7nm wafers per month from TSMC. So if Nvidia takes at least 10K wafers per month, that makes around 300K at least partially functional chips per month. Sounds like enough.
Were those 30K just for their 7nm GPUs - Vega 20, Navi 10 and 14? If so, then discounting Vega (production numbers will be very small), 30K would cover all desktop and mobile RX 5500, 5600, and 5700 products. Sticking just to desktop, Mindfactory sold just under 4,200 of them in January and twice that number of Turing products, with 2080 Tis accounting for 5.7% of that volume.

Now, we don't know how many wafers Nvidia has booked, but let's say it's proportional to Mindfactory's sales figures - an iffy assumption, I know, but it's just for discussion purposes. Twice AMD's 30K would be 60K wafers, and 5.7% of that is roughly 3,400 wafers, which, going by our percentages, would give around 80,000 2080 Ti dies. That's obviously a lot lower than 300,000, but I suspect the reality is somewhere between the two.
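Spelling that estimate out (all of the inputs are the rough figures from this discussion, so the output is strictly order-of-magnitude):

```python
# Order-of-magnitude estimate of monthly 2080 Ti die supply from the rough
# figures discussed in this thread.
nvidia_wafers_per_month = 60_000   # assumed: 2x AMD's reported 30K booking
share_for_2080ti = 0.057           # Mindfactory's 2080 Ti share of Turing sales
gross_dies_per_wafer = 60          # rough maximum TU102 dies per 300 mm wafer
usable_fraction = 0.40             # "SM-defective but good enough" estimate

wafers_for_2080ti = nvidia_wafers_per_month * share_for_2080ti         # ~3,400
dies_per_month = wafers_for_2080ti * gross_dies_per_wafer * usable_fraction

print(f"Wafers going to 2080 Ti dies per month: {wafers_for_2080ti:,.0f}")
print(f"Usable 2080 Ti dies per month: {dies_per_month:,.0f}")         # ~80,000
```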

Heaven only knows what it will be like for the GA102. It's not surprising that AMD is in no rush to launch enormous monolithic GPUs.
 
Honestly, the need for graphics horsepower has declined significantly. And graphics in games aren't making the jumps they used to; we're no longer in the era of a Crysis 1 or Far Cry 2 versus Deus Ex or Counter-Strike 1 level of graphical jump within a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan it could still hang at 1080p low/medium.
You know what else is 7 years old? The current console generation. That's hardly a coincidence. The capabilities of the consoles influence large parts of the game development ecosystem, both directly/obviously in the case of cross-platform titles, and indirectly/subtly even on games with no direct console connection (tools infrastructure, audience expectation, studio expectation, consumer hardware, etc.)

With the new console generation around the corner, I think the demands from your typical game two years from now will be different from what we've seen in the prior two.
 
I'm suggesting that around 30% of chips are fully functional. Estimating how many are "functional enough" for partially disabled products is much harder: those partially disabled chips are nearly full, but on the other hand there are many possible parts to disable. So we are probably talking about 30% fully functional and additionally around 20% functional enough. 50% usable in total.

Those figures may sound excessive, but again, we are talking about a massive chip here. If Zeppelin is considered quite large at 212 mm², then what is 775 mm²? Ultra massive.

According to reports, AMD booked 30K 7nm wafers per month from TSMC. So if Nvidia takes at least 10K wafers per month, that makes around 300K at least partially functional chips per month. Sounds like enough.
Yields must be good though, as NV is using the 2080 die in some 2060 cards, e.g. the 2060 KO.
That must mean very good yields.
 
As was expected


that's erroneous thinking. if you are nr 1 you are nr 1. people will simply buy your cards or put them into OEM systems just because you're number one.
That is erroneous thinking; OEMs care about profit margins. I'd say this is a bigger deal for laptop OEMs, but it's not a big deal in the desktop segment.
 
Yeah, DLSS just helps literally everything. Although I do find that with an RTX 2080 at 1440p I can get over 60 fps in any ray-traced game without DLSS. The performance impact isn't as bad as everyone bangs on about. Maybe it was at first, but it got patched out.
60 fps is too slow. I want 144 fps.
The question is not how many fps you can perceive.
The more fps you have, the sharper and more fluidly moving objects are displayed.
 
I think the premise of this article is probably right. I imagine AMD's talent is stretched thin - it's amazing what they are achieving on the CPU/server and hopefully APU side.
If I were Lisa, my aim with GPUs would be to use all the knowledge Microsoft and Sony must be giving them to get these midrange GPUs just perfect - low power draw, low cooling needs (consoles get stuck on top of other electronics in poorly ventilated spaces), drivers really humming (I can't comment, as I have an RTX 2060), and maximum performance from the units. With MS on board with PC-like Xboxes, this will translate to better Windows performance - then compete hard in the midrange with Nvidia.
I bought an RTX 2060 over, say, a 5700, as I do encoding - and AMD does not compete.
I have to imagine most of their effort must be focused on consoles at the moment - that's 300 million units they have to supply in the coming years - and I'm sure they will get refined.

TL;DR: with AMD producing GPUs for the next-gen consoles, they can get them really good (stable and efficient), pass this on to capture more of the mid-market, and tweak them to get some percentage increases.
 
Honestly, the need for graphics horsepower has declined significantly. And graphics in games aren't making the jumps they used to; we're no longer in the era of a Crysis 1 or Far Cry 2 versus Deus Ex or Counter-Strike 1 level of graphical jump within a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan it could still hang at 1080p low/medium.

Yeah, but with new games coming out like Microsoft Flight Sim, you will need all the horsepower you can get if you want to display it at its best. So for all of the rich out there: open your wallets, buy the best so you can look at the candy, and help bring the prices down on the last-gen cards so some of us might be able to get a bargain, lol.
 
Honestly, the need for graphics horsepower has declined significantly. And graphics in games aren't making the jumps they used to; we're no longer in the era of a Crysis 1 or Far Cry 2 versus Deus Ex or Counter-Strike 1 level of graphical jump within a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan it could still hang at 1080p low/medium.
Ahem, which rock are you living under?
Those are ancient games.
Even an RTX 2080 Ti can't achieve 144 fps in most current games.
 
AMD just didn't bother to make big enough chips because it's quite expensive.

The 5700 XT is only 251 mm², while the GTX 1080 Ti is almost double at 471 mm². Not to mention the 2080 Ti at 775 mm².

If AMD had just bothered to make a roughly 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti. AMD decided it wasn't worth it. So much for "NVIDIA is always a whole generation ahead" *nerd*
I'm sorry, what are the voltage and TDP of the two cards being compared? They would have if they could have, and they didn't for a reason. RDNA 2 will close that efficiency gap a bit, and eventually AMD will get there.
 
The top-end cards are only a fraction of the market anyway; as long as performance is on par and the price is right, the fact it isn't faster than a 3080 Ti doesn't really mean much.

Now, competing on features like ray tracing may be a different story. But, on the other side of things, AMD making much of their tech free, like FreeSync, does tend to win out over time. Slow and steady wins the race. Nvidia tends to be first to market, but AMD has the better long game.

Either way, I'm excited for both series of cards and we, as consumers, should win out in the end when it comes to price.

When I see the new Adrenalin user interface, I don't want an AMD card. Not because it's AMD, but because it's a website pretending to be a control panel, bloated with ridiculous stuff all over the place, like a 12-year-old thought of it and made it. There's nothing pretty about the Nvidia GUI either, but hey, everything is where it has been for the past 20 years; they just add stuff. And there are no ads going on.
 
AMD building an Nvidia killer is like Toyota wanting to catch up to the Chiron but needing to build something to kill the Veyron first.

It's just... not happening, lol.
 
AMD building an Nvidia killer is like Toyota wanting to catch up to the Chiron but needing to build something to kill the Veyron first.

It's just... not happening, lol.


Yeah, just like there was no way AMD could ever catch Intel, right? LOL.

This rumor is stupid anyway; rogame, who is a real leaker with a well-known track record on Twitter, has just today leaked that the top-end Big Navi will have 80 CUs with 12 GB of GDDR6. That's 2x the 5700 XT's CUs, not even counting higher clocks and the efficiency improvements of RDNA 2. Real performance will be around 2.5x the 5700 XT judging by that, which means it will obviously utterly destroy a 2080 Ti (it will be something like twice as fast) and likely give a 3080 Ti a heck of a fight at the very least - maybe it will even be faster than anything Nvidia has, but we will see.

The fact Techspot posted this on the same day rogame leaked what a monster Big Navi is almost smells like calculated damage control, TBH. But Techspot's sources are "their butt", whereas rogame is bona fide.

Again, as others have pointed out, the 2080 Ti is only about 30% faster than the 5700 XT, so AMD introducing a card just 30% faster after all this isn't even plausible. Hell, the Xbox Series X GPU (a confirmed 52 CUs, not even clocked as high as they could be in a discrete GPU, plus the RDNA 2 over RDNA 1 efficiency improvements) will be as fast as or faster than 30% over a 5700 XT. Techspot really thinks AMD's fastest GPU this fall will be in an Xbox? LOLOLOLOLOLOL.
 