> The company that's one of TSMC's biggest clients and has a market cap bigger than Intel "can't afford" TSMC's best nodes?

AMD mostly can't afford TSMC's best nodes, unlike Apple (especially) and Nvidia, and soon Intel as well with Arrow Lake (high-end Arrow Lake chips are confirmed to be using TSMC 3nm).
I think you meant "won't bother." Why produce midrange cards on cutting-edge nodes?
Those nodes only make sense when going for outright performance, where cost is much less of a concern.
I don't foresee AMD producing $2000+ gaming cards in the near future. Do you?
> They don't? I must have missed a memo, but Nvidia has been using TSMC for as long as I can remember, even collaborating on "special" nodes like 12nm for Turing.

Nvidia doesn't rely on TSMC to the same extent. They went with Samsung for the 3000 series and have already secured Intel 20A/18A capacity. Probably Samsung 2nm too.
I would not be surprised if the RTX 6000 series is made on Intel 18A (or Samsung again, if their 2nm node turns out decent with good yields).
So they went with Samsung for one generation - an outlier. Samsung likely offered them a really good deal, but Samsung's 8nm node (initially meant for mobile chips) was just bad in terms of yields and power. AMD was very competitive with the RX 6000 series, which used TSMC's 7nm node.
As for Intel - that won't happen in the near future. Intel's capacity is minuscule, and even Intel doesn't believe in its own node. Why else would they use TSMC for the Arrow Lake GPU chiplet? And now you think Intel will produce much bigger GPU dies for Nvidia on their unproven, low-volume 18A node?
> They do though. You think Nvidia would be as competitive against the RX 7000 series if they used TSMC's 5nm or even 6nm nodes?

Nvidia doesn't need to use peak nodes to beat AMD in gaming.
AMD is about equal in raster with the 4080S while using inferior 5nm/6nm nodes, and if Nvidia had used plain 5nm for the 4090, it would not be ~18% faster in raster than the 7900 XTX. Nvidia's advantage in RT has more to do with compute unit allocation and architecture than with node differences.