Nvidia does in its
financial statements. For example, in the quarter ending October 2022, the data center division had revenue of $3.83 billion, gaming was $1.57 billion, professional visualization was $0.2 billion, and automotive & embedded brought in $0.25 billion.
They show revenue, but I was referring to margin. Margin isn't split up by group; rather, it's given as one lump sum.
I'd bet good money that the margins on that $3.83 billion in data center were a lot higher than on the $1.57 billion from gaming. Even when gaming was over $3 billion, it would have been the same story.
Given that the 4070 Ti's GDDR6X is only made by Micron and only used by Nvidia, the cost of the RAM is going to be notably higher than GDDR6. A couple of years ago, the latter was around $15 per module in bulk, but speeds have gone up quite a bit since then. I wouldn't be surprised if 21 Gbps GDDR6X was $20+ per module, though not as high as $33, which is what each of the six modules on the 4070 Ti would have to cost to hit $200 in total.
In the 7900 XT/XTX threads, someone dug up the GDDR6 modules AMD is using, and their tray price appeared to be $22 per 2 GB of VRAM. Extrapolating, the 7900 XT's memory would cost around $220.
I doubt GDDR6X costs that much more, but it wouldn't surprise me if the floor for the 4070 Ti's total memory price were at least $150.
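To make the extrapolation explicit, here's a quick sketch of per-card memory cost from the per-module tray prices discussed above. Note the module prices are this thread's estimates, not confirmed vendor pricing:

```python
# Rough per-card memory cost from estimated per-module tray prices.
# Prices are thread guesstimates, not confirmed vendor figures.

def memory_cost(vram_gb, gb_per_module, price_per_module):
    # number of modules needed for the card's VRAM, times unit price
    modules = vram_gb // gb_per_module
    return modules * price_per_module

print(memory_cost(20, 2, 22))   # 220 -> the 7900 XT's ~$220 of GDDR6
print(memory_cost(12, 2, 20))   # 120 -> a 4070 Ti floor at $20/module
print(memory_cost(12, 2, 33))   # 198 -> roughly the $200 ceiling
```

The same function covers both cards because both use 2 GB modules; only module count and unit price differ.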
It is interesting to note that Nvidia has successively increased the price of the higher-spec xx70 model for four generations. The 3070 Ti, on Samsung's cheap 8N process and with 8 GDDR6X modules, launched at $599, which makes the 4070 Ti 33% more expensive. The older model itself was 20% more than the 2070 Super, which was 25% more than the 1070 Ti; that Pascal model was in turn 25% more expensive than the old GTX 970. Had Nvidia used the same increases, between 20 and 25%, for the jump from Ampere to Ada, the 4070 Ti would have been between $718 and $748.
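As a sanity check on that range, compounding the 3070 Ti's $599 MSRP by the 20-25% per-generation bumps cited above gives exactly those figures:

```python
# Apply the thread's historical 20-25% generational increase to the
# 3070 Ti's $599 MSRP to get a "typical" 4070 Ti price.
msrp_3070_ti = 599

low  = int(msrp_3070_ti * 1.20)   # 20% bump, truncated to whole dollars
high = int(msrp_3070_ti * 1.25)   # 25% bump

print(low, high)   # 718 748 -> the ~$718-$748 range quoted above
```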
True, but my question is how much do costs factor into that? The 900 series was built on TSMC 28nm, which was the last major GPU process that was cheaper than its predecessor, and it was quite mature by the time the 900 series arrived, as 28nm had been used for both the 600 and 700 series as well.
14nm was costlier than 28nm, and Samsung 8nm was costlier than that. TSMC has been jacking up prices significantly over the last few nodes, and 4nm is their current premium node (which is why AMD used the cheaper 5/6nm nodes). A single 28nm wafer cost $2,500 in 2014; now a single 300 mm wafer on 3/4nm costs $20,000, while 7nm wafers were only $10,000.
GPUs and SoCs to get more expensive (www.tomshardware.com)
Now, the assumption at the time was that Nvidia used Samsung 8nm because it was cheaper. If that's the case, then the new 4070 Ti's die, which is roughly the same size as the 3060's, would cost over double the amount to produce, assuming the same yields. Somebody out there can do the math, I'm sure, and figure out the actual cost per GPU die. The rough guesstimate, IIRC, was $100 per 3060 die; assuming that's true, a single 4070 Ti die would be $200 at least, given that I can't find the cost of an 8nm Samsung wafer but it appears to have been $7,000-8,000 or so. Add another $200 for memory and another $50 for the PCB and all of its components, and you're up to $450 already, not counting the cost of manufacture, transport, or support for drivers, etc.
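The doubling can be sketched by scaling the estimated die cost with the wafer-price ratio, assuming identical die size and yields. Every input here is a rough thread estimate: the $100/die 3060 figure is a guesstimate, $7,500 is my own midpoint of the $7,000-8,000 Samsung 8nm guess, and $20,000 is the 3/4nm wafer price mentioned earlier:

```python
# Scale the $100/die 3060 guesstimate by the wafer-price ratio,
# assuming identical die size and yields (all figures are rough
# thread estimates, not confirmed foundry pricing).
wafer_samsung_8nm = 7500    # midpoint of the $7,000-8,000 guess
wafer_tsmc_4nm    = 20000   # the 3/4nm wafer price cited above
die_cost_3060     = 100     # thread's guesstimate per GA106 die

die_cost_4070_ti = die_cost_3060 * wafer_tsmc_4nm / wafer_samsung_8nm
print(round(die_cost_4070_ti))   # ~267, comfortably "over double" / $200+
```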
Is it overpriced? Yeah. Is it realistic to expect this GPU to have been a 4060 for $400? I honestly don't think so.
Of course, this was never going to be a 4070 Ti -- let's not forget that this is Nvidia's aborted attempt at pulling the wool over the consumer's eyes, with its '4080 12GB' farce.
That's egg on Nvidia's face, and frankly greed has played a big part in the entire 4000 series. That's honestly not surprising, though; every company on earth has been acting like a robber baron since the lockdowns.