As a different exercise from Steve's, to see how cost-effective the RTX 4080 is compared to its predecessors, I took the geomean average fps figures from this review and from those for the 3080 and 2080 cards. Then, using the vendor's declared launch MSRP, I worked out the number of dollars per fps, on average, across the tested games in each respective review (a quick sketch of the calculation follows the attachments below).
RTX 4080 review
View attachment 88669
RTX 3080 review
View attachment 88672
RTX 2080 review
View attachment 88673
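For anyone who wants to reproduce those tables, here's a minimal sketch of the calculation. The MSRPs are the launch prices; the per-game fps lists are hypothetical placeholders, not review data, so substitute the actual figures from each review.
[CODE=python]
from math import prod

def geomean(fps_values):
    """Geometric mean of the per-game average fps figures in a review."""
    return prod(fps_values) ** (1 / len(fps_values))

def dollars_per_fps(msrp, geomean_fps):
    """Launch MSRP divided by the geomean average fps."""
    return msrp / geomean_fps

# Launch MSRPs in USD; the fps lists are placeholders, not review data.
cards = {
    "RTX 4080": (1199, [110.0, 95.0, 120.0]),
    "RTX 3080": (699, [75.0, 62.0, 80.0]),
    "RTX 2080": (699, [48.0, 40.0, 52.0]),
}

for name, (msrp, per_game_fps) in cards.items():
    print(f"{name}: ${dollars_per_fps(msrp, geomean(per_game_fps)):.2f} per fps")
[/CODE]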
Now, the older cards don't perform as well as the newer ones at 4K, and more modern games place greater demands on the GPU, but the 4080 isn't automatically too expensive for the performance it's offering.
Yes, it's 72% more expensive than the RTX 3080 with its geomean average fps only 52% higher, but it's also just 20% more expensive than the RTX 2080 Ti, which was 24% slower than the 3080 when it was tested (all figures are for 4K). Chain those numbers together and the 4080 works out to roughly twice the 2080 Ti's performance for 20% more money; a quick check of that arithmetic follows. Compared to the Turing top-end model, it's a huge leap in performance.
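The relative performance figures here are the ones quoted above, and the prices are launch MSRPs:
[CODE=python]
# Performance normalised to the RTX 3080 at 4K, using the quoted figures.
perf_3080   = 1.00
perf_4080   = perf_3080 * 1.52        # 4080's geomean fps is 52% higher
perf_2080ti = perf_3080 * (1 - 0.24)  # 2080 Ti was 24% slower than the 3080

price_4080, price_2080ti = 1199, 999  # launch MSRPs in USD

print(f"4080 vs 2080 Ti performance: {perf_4080 / perf_2080ti:.2f}x")   # ~2.00x
print(f"4080 vs 2080 Ti price:       {price_4080 / price_2080ti:.2f}x") # ~1.20x
[/CODE]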
Don't forget that AMD pointed out that fabrication on the newer, smaller nodes is getting significantly more expensive:
View attachment 88674
The AD103 is fabricated using TSMC's N4 node, which is part of their N5 family of processes. The TU104 was made using 12FFN, an Nvidia-specific refinement of their N16, so from AMD's chart one can estimate that the Lovelace chip costs anywhere between 40% and 70% more to fabricate than the Turing die used in the RTX 2080 (the chart suggests a 2 to 2.5 times cost increase for a same-sized die, and the AD103 is 379 mm² compared to 545 mm² for the TU104; that estimate is worked through below).
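Here's that estimate worked through. The 2x to 2.5x figure is read off AMD's chart and yield differences between the nodes are ignored, so treat it strictly as a ballpark:
[CODE=python]
# Ballpark AD103-vs-TU104 fabrication cost gap. AMD's chart suggests a
# same-sized die costs roughly 2x to 2.5x more on a 5nm-class node than
# on a 16/12nm-class one; scale that by the actual die areas.
# (Yield differences between the nodes are ignored here.)
die_ad103, die_tu104 = 379, 545  # die areas in mm^2

for same_die_cost_ratio in (2.0, 2.5):
    cost_ratio = same_die_cost_ratio * (die_ad103 / die_tu104)
    print(f"x{same_die_cost_ratio} per mm^2: AD103 ~{(cost_ratio - 1) * 100:.0f}% more to fabricate")
# Prints ~39% and ~74%, i.e. the 40% to 70% range above, give or take rounding.
[/CODE]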
Top-end graphics cards are naturally going to be more expensive because of this. It's also why AMD went down the chiplet route, to help keep the RX 7900 XTX at $999. Makes you wonder what they might have set the price at if they hadn't taken that approach.