tacobravo
As long as the price/performance ratio is good, it doesn't matter.
Honestly, the need for graphics horsepower has declined significantly, and graphics in games are not the jump they used to be; we are no longer in the era of Crysis 1 or Far Cry 2 vs. Deus Ex or Counter-Strike 1 levels of graphical leap within a six-year span. There are 7-year-old cards like the 780/290 that can play the latest titles just fine at 1080p medium settings, and 5-year-old cards that can do the same at high/very high settings at 1080p/1440p. Hell, if the GTX 580 3GB had driver updates/Vulkan, it could still hang at 1080p low/medium.
DLSS is huge; it will allow budget-grade RTX cards to compete with top-end AMD cards and potentially even look better than them as well. There is a huge amount of support for it now, too, with more to come.
I'm currently playing through Death Stranding, and DLSS quality mode looks sharper and overall better than DLSS off with anti-aliasing. It's very impressive and has come a long way since the scruffy implementation in the first set of games to use it.
It's funny: Nvidia made a big song and dance about ray tracing, but ray tracing at this point is still a bit of a gimmick; it's only really any good in Metro Exodus, in my opinion. DLSS, on the other hand, is turning out to be the real game changer.
You've just explained yourself why the yields weren't bad - 12FFN is a tighter-metal-pitch revision of 16FF. In other words, it's a well-matured process node. Mind you, we should really be more specific about exactly what we mean by yield: wafer fabrication, wafer sorting, die binning, packaging. When I say the likes of the TU102 and TU104 have been fielding decent yields, I'm specifically referring to a combination of wafer sorting and die binning - i.e. the ratio of dies that can be used in any end product to the total number of dies fabricated.
However, if we define yield as simply the total number of working dies from a single wafer, then one can argue that it is 'poor' - but it would be unfair to claim so, as the TU102 is huge. Even at 100% yield, a 300 mm wafer will only turn out around 60 TU102 chips, which are needed for 11 different end products. This is partly why the prices are so high: just the sheer number of wafers that have to be manufactured to create the volume required.
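For anyone who wants to sanity-check that figure, here's a minimal sketch of the classic gross-die-per-wafer estimate (plain Python; the function name and the simple edge-loss correction are my own illustration, and the real count drops further once scribe lines and edge exclusion are accounted for):

import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    # Gross dies: wafer area over die area, minus a standard
    # correction for partial dies lost around the wafer edge.
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

print(dies_per_wafer(300, 775))  # ~67 gross TU102 dies, i.e. the "60 or so" candidates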
Chips using 12FFN will still be manufactured on the 300mm production lines, using pretty much the same design rules and libraries as 16FF/12FFC, so it's not the case that TSMC has a separate production area just for Nvidia - that production just has to slot in with everything else. There are only something like five plants that handle those wafers, too.
Yeah, DLSS just helps literally everything. Although I do find that with an RTX 2080 at 1440p, I can get over 60 fps in any ray-traced game without DLSS. The performance impact isn't as bad as everyone bangs on about; maybe it was at first, but it got patched out.

DLSS unlocks the door to ray tracing for next gen Nvidia cards. It's that simple.
Without DLSS it'll still be too costly. With it, you can see it will be viable sooner rather than later.
Are you suggesting that out of the maximum 60 or so TU102 dies achievable from a 300mm wafer, only 30% (18 chips) are functional, or that 30% of all the functioning dies end up in a 2080 Ti?

14LPP is somewhat similar to 12FFN, and if AMD really got 80% yield for a 212 mm² chip, then Nvidia (assuming a similar defect density) would get somewhere around 30% for a fully working 2080 Ti.
So fully working 2080 Ti dies have bad, or at best mediocre, yields, no matter how mature the process. The die is simply that big.
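That extrapolation can be sanity-checked with the simple Poisson yield model, Y = exp(-A * D0): back a defect density out of the 80%-at-212 mm² figure, then apply it to 775 mm². A rough sketch (function names are illustrative; note this simple model lands nearer 45% than 30%, so the lower figure implies extra losses beyond random defects, e.g. parametric binning):

import math

def defect_density(yield_frac: float, die_area_mm2: float) -> float:
    # Back out defects per mm^2 from a known yield (Poisson model).
    return -math.log(yield_frac) / die_area_mm2

def poisson_yield(die_area_mm2: float, d0: float) -> float:
    # Fraction of fully working dies for a given area and defect density.
    return math.exp(-die_area_mm2 * d0)

d0 = defect_density(0.80, 212)           # ~0.00105 defects/mm^2
print(round(poisson_yield(775, d0), 2))  # ~0.44 for a 775 mm^2 die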
The need for more graphics horsepower has increased, not so much from big leaps in graphics engines, but from the increased availability of higher-resolution displays. I game at 2160p now; no way I would have been doing that 5 years ago.
AMD just didn't bother to make big enough chips because it's quite expensive.
The 5700 XT is only 251 mm², while the GTX 1080 Ti is almost double that at 471 mm² - not to mention the 2080 Ti, which is 775 mm².
If AMD had just bothered to make a 400+ mm² RDNA chip, a "bigger 5700 XT", it would have been miles faster than the 2080 Ti. AMD decided it wasn't worth it. So much for "NVIDIA is always a whole generation ahead".
Are you suggesting that out of the maximum 60 or so TU102 dies achievable from a 300mm wafer, only 30% (18 chips) are functional, or that 30% of all the functioning dies end up in a 2080 Ti?
That particular GPU is used in the following products:
GeForce RTX 2080 Ti
Quadro RTX 6000 + non-active cooling version
Quadro RTX 8000 + non-active cooling version
TITAN RTX
The Quadro and Titan cards are full TU102 chips, with only clock speeds and local memory sizes as the differentiators; only the 2080 Ti uses a partially disabled chip. In January 2020, from one German retailer alone, just under 500 GeForce 2080 Tis were sold. If a single 300mm wafer were only producing 18 viable chips and they all went into 2080 Tis, then 28 wafers would be needed for that one month, for that one store.
Expand that across all retailers around the globe, and include chips for the Quadros and Titan, and 30% simply isn't viable, regardless of how much the end products cost. At worst, it's going to be 50%.
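The arithmetic behind that, as a quick sketch (the 18 and 30 good-dies-per-wafer figures are the two scenarios being argued over here; 30 corresponds to roughly 50% of the ~60 gross dies):

import math

def wafers_needed(chips_per_month: int, good_dies_per_wafer: int) -> int:
    # Monthly wafer starts required to hit a given chip volume.
    return math.ceil(chips_per_month / good_dies_per_wafer)

print(wafers_needed(500, 18))  # 28 wafers for one retailer's monthly 2080 Ti sales at 30% yield
print(wafers_needed(500, 30))  # 17 wafers at ~50% yield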
That's erroneous thinking. If you are no. 1, you are no. 1. People will simply buy your cards, or put them into OEM systems, just because you're number one.

The top end cards are only a fraction of the market anyway, as long as performance is on par and the price is right ...
It would be useful to have a sense of the number of Quadro and Titan RTXs sold on a monthly basis since launch, as it would provide a way to estimate the percentages better. Then again, the only difference between a 2080 Ti and a Quadro RTX 6000/8000 is half a TPC (i.e. 4 SMs) and one memory controller, so chips for the former are almost full dies. As a guess, I would think that the GeForce outsells the Quadro/Titan combined by 100:1, based on Mindfactory's 2080 Ti sales. To me this suggests that the percentage split is the other way round: 30-40% with SM defects and 10-20% fully functional.

I suggest that around 30% of chips are fully functional. ...
Were those 30K just for their 7nm GPUs - Vega 20, Navi 10 and 14? If so, then discounting Vega (production numbers will be very small), 30K would cover all desktop and mobile RX 5500, 5600, and 5700 products. Sticking just to desktop, Mindfactory sold just under 4,200 of them in January, and twice that number of Turing products, with 2080 Tis accounting for 5.7% of that volume.

According to reports, AMD booked 30K 7nm wafers per month from TSMC. ...
You know what else is 7 years old? The current console generation. That's hardly a coincidence. The capabilities of the consoles influence large parts of the game development ecosystem, both directly/obviously in the case of cross-platform titles, and indirectly/subtly even in games with no direct console connection (tools infrastructure, audience expectation, studio expectation, consumer hardware, etc.).

Honestly the need for graphics horsepower has declined significantly. ...
Yields must be good though, as NV are using the 2080 die in 2060 chips, e.g. the 2060 KO.

I suggest that around 30% of chips are fully functional. Estimating how many are "functional enough" for partially disabled products is much harder. Those partially disabled chips are almost full, but on the other hand there are many possible parts to disable. So probably we are talking about 30% fully functional and an additional 20% or so functional enough - 50% total usable.
Those figures may sound excessive, but again, we are talking about a massive chip here. If Zeppelin is considered quite large at 212 mm², then what is 775 mm²? That is ultra massive.
According to reports, AMD booked 30K 7nm wafers per month from TSMC. So if Nvidia takes at least 10K wafers per month, that makes 300K at least partially functional chips per month. Sounds like enough.
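Putting numbers to that (a sketch; the 10K-wafer figure is this post's assumption, and ~30 usable dies per wafer matches the ~50%-usable estimate above):

wafers_per_month = 10_000
usable_dies_per_wafer = 30  # ~50% of the ~60 gross TU102 dies per wafer
print(wafers_per_month * usable_dies_per_wafer)  # 300,000 usable chips per month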
That is erroneous thinking: OEMs care about profit margins. I'd say this is a bigger deal for laptop OEMs, but it's not a big deal in the desktop segment.

As was expected.
That's erroneous thinking. If you are no. 1, you are no. 1. ...
60 fps is too slow. I want 144 fps.

Yeah, DLSS just helps literally everything. Although I do find that with an RTX 2080 at 1440p, I can get over 60 fps in any ray-traced game without DLSS. ...
Below 1080p must be notebook gamers.

Except the vast majority of PC gaming consumers are at 1080p or below.
Ahem, under which stone are you living?

Honestly the need for graphics horsepower has declined significantly. ...
I am sorry, what are the voltage and TDP of both comparative cards? They would have if they could have, and they did not for a reason. RDNA2 will close that efficiency gap a bit, and eventually AMD will get there.

AMD just didn't bother to make big enough chips because it's quite expensive. ...
The top end cards are only a fraction of the market anyway; as long as performance is on par and the price is right, the fact it isn't faster than a 3080 Ti doesn't really mean much.
Now, competing on features like ray tracing may be a different story. But, on the other side of things, AMD making much of their tech free, like FreeSync, does tend to win out over time. Slow and steady wins the race. Nvidia tends to be first to market, but AMD has the better long game.
Either way, I'm excited for both series of cards and we, as consumers, should win out in the end when it comes to price.
AMD building an Nvidia killer is like Toyota wanting to catch up to the Chiron, but needing to build something to kill the Veyron first.
It's like...just not happening lol.