Nvidia GeForce RTX 4060 Ti 16GB Reviewed, Benchmarked

Yes, there is some minor work required, but the engineering effort and costs involved are minimal. The 7nm design can be used almost as is, through a fairly simple conversion process, according to https://www.angstronomics.com/p/ps5-refresh-oberon-plus and https://www.anandtech.com/show/14290/tsmc-most-7nm-clients-will-transit-to-6nm. TSMC itself seems to think the work and costs involved are so low that almost everyone will move from 7nm to 6nm.
Regarding die size reduction: as the PS5 example shows and TSMC itself advertises, moving a design from N7 to N6 can reduce die size by 15%, so no, it's far from zero. 15% smaller dies give you around 18% more dies per wafer. N6 manufacturing is actually slightly less complicated, adding only one more EUV layer while reusing the same fabs and tools, so any wafer cost increase is probably minimal. With more dies per wafer, cost per die is almost certainly lower; otherwise Sony would have had no reason to migrate the PS5 to 6nm without changing the overall design in any way.
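
For what it's worth, here's a quick back-of-envelope sketch of that dies-per-wafer arithmetic (the ~300 mm² N7 die area is an illustrative assumption, not an official Oberon figure):

```python
# Sanity check of the "15% smaller die -> ~18% more dies" claim.
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm wafer

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic gross-dies-per-wafer estimate with an edge-loss correction."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

n7_area = 300.0           # assumed N7 die area in mm^2
n6_area = n7_area * 0.85  # 15% smaller after the N6 port

n7, n6 = dies_per_wafer(n7_area), dies_per_wafer(n6_area)
print(f"N7: {n7} dies/wafer, N6: {n6} dies/wafer, gain: {n6 / n7 - 1:.1%}")
# Pure area scaling gives 1/0.85 - 1 ~= 17.6%; the edge-loss term nudges
# the per-die gain slightly higher (~19% here).
```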
So, yeah, it isn't a free lunch, but it's an easy and cheap way to cut manufacturing costs and reduce power usage slightly, which is why a lot of TSMC clients migrated to it.
Still, easy is not free. Saying that customers will switch from 7nm to 6nm means they make new products on 6nm, not that old products are shifted to 6nm production, unless the volumes are huge. I heavily doubt AMD will make anything on 6nm that they already make on 7nm. So far, every AMD 6nm product is something new they didn't make on 7nm.

As for the GPU shortage, AMD didn't have a quick 6nm solution for existing products. The only option was something designed for it from the start, and the 6500 XT was that option.
 
It's not necessarily about transitioning whole products, but big IP blocks and pieces. AMD, for example, has RDNA2, Zen 2 and Zen 3 in their 6nm chips because they could reuse the 7nm designs. And that's a big part of why the transition from N7 to N6 is so fast: it may be mostly new products, but there's a lot of reused 7nm IP as well. In theory, this makes porting whole chips to 6nm even easier, if someone wants it, like Sony did. I'm not insisting it happens a lot; I'm saying it's easy, design-level compatible, cost effective and saves die area, which is what you disagreed with.

As for "6nm being a solution to shortage", I repeat - there are no separate 6nm fabs nor production lines. It couldn't be a solution, because it's the same capacity that was mostly taken by 7nm products back then. Everything from 7nm family is produced in the same fabs using mostly the same manufacturing steps, devices, resources, with only relatively small differences. It's a single process family, TSMC reports it's market /production share, utilization rates and revenue together, experts say their wafers are very similar in prices, all of its member share fabs etc. And this is why TSMC can quickly replace 7nm output with 6nm as clients transition. N6 became the most popular 7nm-class process in like two years without wasting 7nm capacity or having to invest in separate 6nm capacity to satisfy demand. They just naturally switched more and more fab output from N7 to N6 as more N6 and less N7 orders were made.
 
Again, 6nm was a response to the shortage. There was nothing AMD could do on 7nm, which was fully booked for current products. On 6nm, AMD at that time had only some minor products and the IO chips for Epyc and Ryzen in production, which means AMD could sacrifice some 6nm capacity for the 6500 XT. That meant something else would be released later rather than sooner, but as a temporary measure against the GPU shortage, it was a wise move.

Even more arguments against the 20/100 score. How about finally fixing it?
 
Using N6 for Navi 24 might have been partially encouraged by the shortage, as it allowed more dies per wafer than N7, but that's all. I don't see any sense in your IO die theory. The only other 6nm product from AMD back then was the MI250X accelerator, and it was still months before Zen 4 or even Zen 3+, which itself came some time before the 6nm IO dies. Also, even assuming AMD had used 100% of their booked N7 capacity but had some N6 left over, they could have exchanged it for N7 if they really wanted, as both are just chunks of output from the same fabs using the same resources. I doubt this was actually the case though, as the semiconductor shortage wasn't yet resolved and they had Zen 3+ coming soon, a much more important product. Also, Navi 24 was a laptop-first chip, so if anything was sacrificed for the 6500 XT, it was some Navi 24 laptop GPUs that gave way to 6500 XT desktop cards. Both are the same die, the same 6nm product, though.
 
Finally, a review I can use as a reference to promote the $450 6800 for 1440p gaming. Who knows, maybe by this Black Friday Steve's preferred $350 price for the 4060 Ti with 16 gigs of VRAM might come to fruition.
New Nvidia cards suddenly made old AMD cards the best value on the market.
 
I just watched a 10-game benchmarking video of the RTX 4060 Ti 16GB at 1080p, 1440p and 4K, and not once did it go over 8GB of VRAM in-game. One of the major complaints about any 8GB model is its limitations when gaming above 1080p. Another statistic you missed or didn't provide is cost per frame: in the 15-game average at 1080p, the RX 6800 comes out to $3.53 per frame and the RTX 4060 Ti 16GB to $4.30. I would think twice before buying the RTX 4060 Ti 16GB.
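
For anyone who wants to reproduce that metric, cost per frame is just card price divided by average fps. A minimal sketch, with assumed prices ($450 for the RX 6800 as mentioned above, $499 launch MSRP for the 4060 Ti 16GB) and the average fps back-solved from the quoted figures:

```python
# Cost-per-frame calculation; all numbers here are illustrative assumptions.
def cost_per_frame(price_usd: float, avg_fps: float) -> float:
    """Dollars paid per frame of average benchmark performance."""
    return price_usd / avg_fps

rx6800_fps  = 450 / 3.53  # implied 15-game 1080p average, ~127 fps
rtx4060_fps = 499 / 4.30  # implied average, ~116 fps

print(f"RX 6800:          ${cost_per_frame(450, rx6800_fps):.2f}/frame")
print(f"RTX 4060 Ti 16GB: ${cost_per_frame(499, rtx4060_fps):.2f}/frame")
```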
 