Nvidia GeForce RTX 4080 Review: Fast, Expensive & 4K Capable All the Way

The way things are going, AMD will be catching up soon. RT is still not that important for the average gamer (IMHO). The truth is that we need AMD in the picture in order to generate competition. I have owned the GTX 1070, RTX 2070 and RTX 3080, and I have barely ever used RT in any of my games. I'll be honest: if the RX 7900 XTX shows better framerates without RT, I'm switching to team red. We need to start using our heads and our wallets to change the greed-driven thinking of these companies. $1,200 for a graphics card is absolutely ridiculous!
The GTX 1070 never had RT. For the 2070 I wouldn't blame you for barely using it, even though it's kinda workable at 1080p, but a 3080? C'mon! Look, in a perfect world competition brings prices down, but lately it only pushes them up, and then you're willing to pay 1,000 bucks for AMD because Nvidia is so much more expensive, instead of asking yourself: do I fckin need that? IMO, competition starts when AMD offers the same performance, including RT, at a better price. Case in point: I recently bought a 6700 XT for 450 euros. At this level RT is irrelevant, so being cheaper than a 3060 Ti by 80 euros (in my country), it was competitive. If I had been willing to spend more, I would have also included RT in the comparison.
 
As a different exercise to Steve's, to see how cost-effective the RTX 4080 is compared to its predecessors, I took the geomean average fps figures from this review and those for the 3080 and 2080 cards. Then, using the vendor's declared launch MSRP, here is the number of dollars per fps, on average, across the tested games in each respective review.

RTX 4080 review
dollarsperfps01.png

RTX 3080 review
dollarsperfps02.png

RTX 2080 review
dollarsperfps03.png

Now, the older cards don't perform as well as the newer ones at 4K, and more modern games place greater demands on the GPU, but the 4080 isn't automatically too expensive for the performance it's offering.

Yes, it's 72% more expensive than the RTX 3080 while its geomean average fps is only 52% higher, but it's also only 20% more expensive than the RTX 2080 Ti, which was 24% slower than the 3080 when that card was tested (all figures are for 4K). Compared to the Turing top-end model, it's a huge leap in performance.
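For anyone who wants to replicate the exercise, here's a minimal sketch of the calculation (the MSRPs are the launch prices behind the percentages above; the per-game fps lists are placeholders, not the reviews' actual results, and should be swapped for the real figures):

```python
# Sketch of the dollars-per-fps exercise: geometric mean of the per-game
# average fps, then launch MSRP divided by that geomean.
# The fps lists below are placeholders, NOT the reviews' actual results.
from math import prod

def geomean(values):
    """Geometric mean of per-game average fps results."""
    return prod(values) ** (1 / len(values))

def dollars_per_fps(msrp, fps_per_game):
    return msrp / geomean(fps_per_game)

cards = {
    "RTX 4080":    (1199, [110, 95, 130, 88]),  # placeholder 4K fps
    "RTX 3080":    (699,  [72, 63, 85, 58]),
    "RTX 2080 Ti": (999,  [55, 48, 66, 44]),
}

for name, (msrp, fps) in cards.items():
    print(f"{name}: ${dollars_per_fps(msrp, fps):.2f} per fps "
          f"(geomean {geomean(fps):.1f} fps)")

# The price deltas above fall straight out of the launch MSRPs:
print(f"4080 vs 3080 price: +{(1199 / 699 - 1) * 100:.0f}%")     # ~72%
print(f"4080 vs 2080 Ti price: +{(1199 / 999 - 1) * 100:.0f}%")  # ~20%
```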

Don't forget that AMD pointed out that fabrication on the newer, smaller nodes is getting significantly more expensive:

(AMD chart: cost of a same-sized die across process nodes)
The AD103 is fabricated using TSMC's N4 node, which is part of their N5 family of fabrication methods. The TU104 was made using 12FFN, an Nvidia-specific refinement of TSMC's 16nm node, so from AMD's chart, one can estimate that the Lovelace chip costs anywhere between 40% and 70% more to fabricate than the Turing die used in the RTX 2080 (the graph suggests a 2 to 2.5 times increase in the cost of a same-sized die; the AD103 is 379 mm² compared to 545 mm² for the TU104).
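Putting that estimate into rough numbers (assuming the chart really does imply a 2x to 2.5x increase in cost per mm² from the 16nm class to the 5nm class, and using the die areas above):

```python
# Rough die-cost scaling estimate, assuming a 2x to 2.5x increase in cost
# per mm^2 between 16nm-class and 5nm-class nodes (read off AMD's chart).
AD103_AREA = 379   # mm^2 (RTX 4080)
TU104_AREA = 545   # mm^2 (RTX 2080)

for per_area_multiplier in (2.0, 2.5):
    cost_ratio = per_area_multiplier * (AD103_AREA / TU104_AREA)
    print(f"{per_area_multiplier}x cost per mm^2 -> AD103 costs "
          f"~{(cost_ratio - 1) * 100:.0f}% more than TU104")
```

With the 2x figure that works out to roughly 40% more, and with 2.5x it's around 70-75% more, which is where the range above comes from.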

Top-end graphics cards are naturally going to be more expensive because of this. It's also why AMD went down the chiplet route to help keep the RX 7900 XTX at $999. Makes you wonder what they might have set the price to, if they hadn't gone down that road.
 
We, as customers, don't have to pay for Nvidia's mistake of going monolithic. The rule for companies is to offer a good product at a better price. AMD is doing so and Nvidia is not. Matter closed. Nvidia survives because of their brainless fanboys. Without them it would be in serious financial trouble with the new generation.
 
For me any card above $500 for non-professional use is a complete ripoff.
It's true, but today we have mid- to low-end cards at that price, and historically, even 15 years ago, there were cards like the 8800 Ultra and 8800 GTX that were both over that price, the Ultra being $850. In the end we need AMD to deliver the competition and keep Nvidia in check!
Many thought Intel would help the situation, but it seems they need to learn to crawl before they even start walking!
 
We, as customers, don't have to pay for Nvidia's mistake of going monolithic. The rule for companies is to offer a good product at a better price. AMD is doing so and Nvidia is not. Matter closed.
Until we have actual performance figures, across multiple games, resolutions, and test systems, the matter isn't closed at all. Without concrete figures, we don't know who's making the mistake here. Let's say the RTX 4080 comes out to be exactly 20% faster than the RX 7900 XTX in standard 4K testing - to some people that would justify the 20% difference in price; for other people, a slower but cheaper card is perfectly acceptable.
Nvidia survives because of their brainless fanboys. Without them it would be in serious financial trouble with the new generation.
Nvidia 'survives' because it generates over $3b per quarter in revenue from its data center division. The $2b revenue accrued from gaming, in the last quarter, was down by a huge margin (44% Q-Q, 33% Y-Y), so 'brainless fanboys' don't seem to be helping them out that much.

AMD's gaming revenue was $1.7b, for the same period, so neither appears to have much in the way of zombiefans to help them out (and AMD's figure is bolstered by the console chips, as they too had a decline in card sales).
 
With all due respect to the article's author, I still remember when he was almost left without day-one expensive toys from Nvidia. I understand he has a business to run and some compromises are required. I can see his disdain in all the recent RTX card reviews; the body language doesn't lie.
But it's a long way from "what is that RT?" to promoting it like he does today.
"but there's still the issue of ray tracing, which continues to gain ground and is an important performance metric now"
 
Until we have actual performance figures, across multiple games, resolutions, and test systems, the matter isn't closed at all. Without concrete figures, we don't know who's making the mistake here. Let's say the RTX 4080 comes out to be exactly 20% faster than the RX 7900 XTX in standard 4K testing - to some people that would justify the 20% difference in price; for other people, a slower but cheaper card is perfectly acceptable.

Nvidia 'survives' because it generates over $3b per quarter in revenue from its data center division. The $2b revenue accrued from gaming, in the last quarter, was down by a huge margin (44% Q-Q, 33% Y-Y), so 'brainless fanboys' don't seem to be helping them out that much.

AMD's gaming revenue was $1.7b, for the same period, so neither appears to have much in the way of zombiefans to help them out (and AMD's figure is bolstered by the console chips, as they too had a decline in card sales).

Thanks for the stats. I would argue that the loss in profit is directly related to the loss of industrial-scale miners; both AMD and Nvidia have had artificially inflated profits for the last two years. Profit levels are slowly returning to normal, and although both companies will try to keep prices artificially high for as long as they can, market forces will slowly bring them down.
 
Until we have actual performance figures, across multiple games, resolutions, and test systems, the matter isn't closed at all. Without concrete figures, we don't know who's making the mistake here. Let's say the RTX 4080 comes out to be exactly 20% faster than the RX 7900 XTX in standard 4K testing - to some people that would justify the 20% difference in price; for other people, a slower but cheaper card is perfectly acceptable.

Nvidia 'survives' because it generates over $3b per quarter in revenue from its data center division. The $2b revenue accrued from gaming, in the last quarter, was down by a huge margin (44% Q-Q, 33% Y-Y), so 'brainless fanboys' don't seem to be helping them out that much.

AMD's gaming revenue was $1.7b, for the same period, so neither appears to have much in the way of zombiefans to help them out (and AMD's figure is bolstered by the console chips, as they too had a decline in card sales).
The 7900 XTX 20% slower than the 4080? I doubt it, especially considering that the former has 12 billion more transistors than the latter. OK, it was just a supposition. I agree with the rest, thanks.
 
With all due respect to the article's author, I still remember when he was almost left without day-one expensive toys from Nvidia. I understand he has a business to run and some compromises are required. I can see his disdain in all the recent RTX card reviews; the body language doesn't lie.
But it's a long way from "what is that RT?" to promoting it like he does today.
"but there's still the issue of ray tracing, which continues to gain ground and is an important performance metric now"

Speaking as someone who has bought four AMD cards in a row now, the reason he now accepts RT is very simple: there are now two GPUs which can run at 4K60 or higher with Ultra settings plus RT (with DLSS 2, effectively 3), which means you can finally have your eye candy and good framerates too. Before this, RT was an expensive joke that cost all users too much.

Changing an opinion when new data changes the conversation is something that people can do. People who can't do this are stuck as relics of the past.

That said, I tried RT in CP2077 and some other games, and to me the difference is visible but not nearly wow enough for the framerate reduction. It added nothing to the gameplay, with only a mild improvement to visuals. I'd maybe use it if I had a 4080 or 4090. So RT performance is irrelevant to me right now, and I'm pretty sure people can figure that metric out for themselves.
 
The 7900 XTX 20% slower than the 4080? I doubt it, especially considering that the former has 12 billion more transistors than the latter. OK, it was just a supposition. I agree with the rest, thanks.
Just on paper stats alone, the RX 7900 XTX should be easily ahead of the RTX 4080:

7900 XTX vs 4080
Peak FP32 throughput: 61.6 vs 47.7 TFLOPS (+29%)
Peak texel throughput: 962 vs 762 Gtexels/s (+26%)
Peak pixel output rate: 481 vs 281 Gpixels/s (+71%)
Global memory bandwidth: 960 vs 717 GB/s (+34%)
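As a quick sanity check, here's a minimal sketch that recomputes those percentage gaps from the peak figures quoted above:

```python
# Recompute the quoted deltas (RX 7900 XTX vs RTX 4080) from the peak figures.
specs = {
    "Peak FP32 throughput (TFLOPS)":      (61.6, 47.7),
    "Peak texel throughput (Gtexels/s)":  (962, 762),
    "Peak pixel output rate (Gpixels/s)": (481, 281),
    "Global memory bandwidth (GB/s)":     (960, 717),
}

for name, (xtx, nv4080) in specs.items():
    print(f"{name}: +{(xtx / nv4080 - 1) * 100:.0f}% in favour of the 7900 XTX")
```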

But AMD has made some architectural changes with RDNA 3, compared to version 2, that might (emphasis on might) make it a little harder for the GPU to reach peak utilization, in current games.

I should imagine that, at the very least, it's on par with the 4080.
 
In the case of the RTX 4080, it's a very fast GPU, it's just too expensive for most. In terms of value it's not horrible, but it's not great either.

It is horrible. That's exactly what it is, horrible. It's a GPU launched 2 years after the 3000 series, and it actually regresses in performance/$ compared to the RTX 3080. It's an abject failure. It's a product that literally should not exist, because it fails to provide better value than the product it's replacing, unlike every generation that came before it.
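To put a rough number on that regression, here's a quick sketch using the ~52% performance and ~72% launch-price deltas over the RTX 3080 quoted elsewhere in this thread:

```python
# Rough performance-per-dollar check vs the RTX 3080, using the ~52%
# performance uplift and ~72% launch-MSRP increase cited in this thread.
perf_ratio = 1.52    # RTX 4080 vs RTX 3080, geomean 4K fps
price_ratio = 1.72   # $1,199 vs $699 launch MSRP

perf_per_dollar_ratio = perf_ratio / price_ratio
print(f"RTX 4080: ~{(1 - perf_per_dollar_ratio) * 100:.0f}% less "
      f"performance per dollar than the RTX 3080 at launch MSRP")
```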

I know it, you know it, everyone knows it. And it's pathetic when review outlets fail to address this. It's like you're hoping your readers won't notice the elephant-shaped bump under the carpet. Are you seriously not gonna call Nvidia out on its anti-consumer nonsense, and instead award them a 90/100?

In other words, we'd love nothing more than for these 4080/4090s and $1000+ GPUs to become a thing of the past, and to go back to the days when a mainstream GPU cost $200-250 and a high-end one would set you back no more than $500-600 (and less than that months after launch). But that's not true today.

Except it is true today. You can get an RX 6600 for $220. You can get an RX 6600 XT for $280. You can get an RX 6800 XT for $520. AMD is having no problem whatsoever selling GPUs at the prices "of the past". And yet for some reason Nvidia can't, despite the fact they're using an older, cheaper process for the 3000 series than AMD used for the 6000 series (Samsung 8 nm vs TSMC 7 nm). Why do you think that is?
 
Just on paper stats alone, the RX 7900 XTX should be easily ahead of the RTX 4080:

7900 XTX vs 4080
Peak FP32 throughput: 61.6 vs 47.7 TFLOPS (+29%)
Peak texel throughput: 962 vs 762 Gtexels/s (+26%)
Peak pixel output rate: 481 vs 281 Gpixels/s (+71%)
Global memory bandwidth: 960 vs 717 GB/s (+34%)

But AMD has made some architectural changes with RDNA 3, compared to version 2, that might (emphasis on might) make it a little harder for the GPU to reach peak utilization, in current games.

I should imagine that, at the very least, it's on par with the 4080.

It sounds like one of the larger complaints can be summed up like this.

As long as you're the first product into that performance segment, you will enjoy the "nothing compares right now" bonus, even after cards that *do* compare are released.

The 3080 received a 90/100 because, if you could get it at MSRP, it truly was a great leap forward in what you received in performance per dollar. Just like the 6800 and 6800XT did, as well.

The 4080 is not that. It simply extends where the top end of the existing price/performance scale falls.

A month from now, this 4080 review is still going to say it's a 90/100 card, even though it will most likely have been surpassed in almost every metric by a card that is $200 cheaper.
 
Don't forget that AMD pointed out that fabrication on the newer, smaller nodes is getting significantly more expensive:

View attachment 88674
The AD103 is fabricated using TSMC's N4 node, which is part of their N5 family of fabrication methods. The TU104 was made using 12FFN, an Nvidia-specific refinement of TSMC's 16nm node, so from AMD's chart, one can estimate that the Lovelace chip costs anywhere between 40% and 70% more to fabricate than the Turing die used in the RTX 2080 (the graph suggests a 2 to 2.5 times increase in the cost of a same-sized die; the AD103 is 379 mm² compared to 545 mm² for the TU104).

Top-end graphics cards are naturally going to be more expensive because of this. It's also why AMD went down the chiplet route to help keep the RX 7900 XTX at $999. Makes you wonder what they might have set the price to, if they hadn't gone down that road.
Except that the 4080 is not really a top-end chip when talking about die sizes; at 379 mm², it's one of the smaller chips Nvidia makes (smaller than the 3070's die). There is a huge gap between the 4090 and the 4080 in die size, so yields shouldn't be that bad. AMD's GCD (the GPU die) is around 300 mm².

The increased manufacturing cost of the die does not explain the price of the card at all.
 
As long as you're the first product into that performance segment, you will enjoy the "nothing compares right now" bonus, even after cards that *do* compare are released.

The 3080 received a 90/100 because, if you could get it at MSRP, it truly was a great leap forward in what you received in performance per dollar. Just like the 6800 and 6800XT did, as well.

The 4080 is not that. It simply extends where the top end of the existing price/performance scale falls.
A month from now, this 4080 review is still going to say it's a 90/100 card, even though it will most likely have been surpassed in almost every metric by a card that is $200 cheaper.

I agree with most of what you said. Extending the price/performance scale doesn't make the 4080 great value, but it doesn't make it a bad product either. Reviewers don't have a crystal ball, and I wouldn't want them to "guess" what's going to be good.

A month from now, if the Radeon competition is great, then it will likely receive a positive review and be recommended over the RTX, and if people don't buy the 4080 at that inflated price, then the price will have to fall. Consumers can vote with their wallets.

AMD's marketing department must be doing a great job; I see a lot of fanboys defending the underdog and blasting the review for the 90/100 score (generous, but not "wrong", IMHO)... while forgetting about the written words, the benchmarks, and the data presented.
 
I agree with most of what you said. Extending the price/performance scale doesn't make the 4080 great value, but it doesn't make it a bad product either. Reviewers don't have a crystal ball, and I wouldn't want them to "guess" what's going to be good.

A month from now, if the Radeon competition is great, then it will likely receive a positive review and be recommended over the RTX, and if people don't buy the 4080 at that inflated price, then the price will have to fall. Consumers can vote with their wallets.

AMD's marketing department must be doing a great job; I see a lot of fanboys defending the underdog and blasting the review for the 90/100 score (generous, but not "wrong", IMHO)... while forgetting about the written words, the benchmarks, and the data presented.
If that were true, there is no possible way the 3090 could have received a lower review score than the 3080, from anyone. It was a superior product in *every* way... except value at MSRP.

Trying to say that the MSRP doesn't matter simply doesn't make sense. It mattered for Ampere, Turing, Pascal...*every* generation of card that has ever been released prior to now.
 
Except that the 4080 is not really a top-end chip when talking about die sizes; at 379 mm², it's one of the smaller chips Nvidia makes (smaller than the 3070's die). There is a huge gap between the 4090 and the 4080 in die size, so yields shouldn't be that bad. AMD's GCD (the GPU die) is around 300 mm².

The increased manufacturing cost of the die does not explain the price of the card at all.
If one assumes AMD's graph is correct and refers exclusively to the fabrication of GPUs on TSMC's nodes, the jump from 16/14nm (used to make Turing) to 5nm (used to make Ada Lovelace) is a 2 to 2.5 times increase in cost for a fixed die size. The AD103 is, as you say, 379 mm² and the TU104 is 545 mm². Combined, that would make the AD103 at least 2 × (379/545) ≈ 1.39 times the cost of the TU104 (RTX 2080) to fabricate. Yes, it's a lot smaller than the Ampere chip, but that was manufactured on Samsung's older and cheaper node, so we can't really add that into the equation. The point being raised is that raw fabrication costs are a fair bit higher.

And that's before one adds in other factors that we have little knowledge about. We don't know exactly how good the yields are, though TSMC claims they're the same for N4 as for N5. We don't know how much TSMC charges per N4 wafer compared to N5. The latter is under heavy demand, but the former is also only made at one fab.

We don't know if there's a significant price difference between GDDR6X and GDDR6, but there's likely to be at least some difference. We don't know how much influence third-party AIB vendors had in the pricing of the 4000 series - they may have insisted on setting a higher price, but they may just as easily have baulked at it. At the very least, it would seem that only EVGA wasn't happy with the situation.

All in all, I don't think it's as simple as 'Nvidia just whacked the price up because they can', even though that's almost certainly part of the pricing decision. But no matter: the product is very good at what it does. It'll be down to the consumer market as to how well it ultimately sells.
 
Another card that is p!ss-poor value. As GN said, increasing performance 50% and raising prices 50% at the same time is not progress but stagnation. It flies in the face of technological improvement. The card should be no more than $799, maybe $100 more than what the 3080 launched at. I'll bet that if the 4080 were a sane price, the 7900 twins would also be another $200 less.
 
It is horrible. That's exactly what it is, horrible. It's a GPU launched 2 years after the 3000 series, and it actually regresses in performance/$ compared to the RTX 3080. It's an abject failure. It's a product that literally should not exist, because it fails to provide better value than the product it's replacing, unlike every generation that came before it.

I know it, you know it, everyone knows it. And it's pathetic when review outlets fail to address this. It's like you're hoping your readers won't notice the elephant-shaped bump under the carpet. Are you seriously not gonna call Nvidia out on its anti-consumer nonsense, and instead award them a 90/100?



Except it is true today. You can get an RX 6600 for $220. You can get an RX 6600 XT for $280. You can get an RX 6800 XT for $520. AMD is having no problem whatsoever selling GPUs at the prices "of the past". And yet for some reason Nvidia can't, despite the fact they're using an older, cheaper process for the 3000 series than AMD used for the 6000 series (Samsung 8 nm vs TSMC 7 nm). Why do you think that is?
Relax dude. Lamborghinis aren't necessary, or a good value, either. But that doesn't mean they "shouldn't exist".

Stop projecting your frustrations with personal income on the rest of the world. If you can't afford it, sorry. But it doesn't mean a product is some sort of an abomination that should be banned. It just means it's not in your future for the moment. DEAL
 
Relax dude. Lamborghinis aren't necessary, or a good value, either. But that doesn't mean they "shouldn't exist".

Stop projecting your frustrations with personal income on the rest of the world. If you can't afford it, sorry. But it doesn't mean a product is some sort of an abomination that should be banned. It just means it's not in your future for the moment. DEAL
Right, and at the same time, Lambos aren't the best car for everyone.
The best is the best you can afford, which is to say that most of the time Nvidia cards aren't it, due to higher cost and marginal performance uplift.
 
If one assumes AMD's graph is correct and refers exclusively to the fabrication of GPUs on TSMC's nodes, the jump from 16/14nm (used to make Turing) to 5nm (used to make Ada Lovelace) is a 2 to 2.5 times increase in cost for a fixed die size. The AD103 is, as you say, 379 mm² and the TU104 is 545 mm². Combined, that would make the AD103 at least 2 × (379/545) ≈ 1.39 times the cost of the TU104 (RTX 2080) to fabricate. Yes, it's a lot smaller than the Ampere chip, but that was manufactured on Samsung's older and cheaper node, so we can't really add that into the equation. The point being raised is that raw fabrication costs are a fair bit higher.

And that's before one adds in other factors that we have little knowledge about. We don't know exactly how good the yields are, though TSMC claims they're the same for N4 as for N5. We don't know how much TSMC charges per N4 wafer compared to N5. The latter is under heavy demand, but the former is also only made at one fab.

We don't know if there's a significant price difference between GDDR6X and GDDR6, but there's likely to be at least some difference. We don't know how much influence third-party AIB vendors had in the pricing of the 4000 series - they may have insisted on setting a higher price, but they may just as easily have baulked at it. At the very least, it would seem that only EVGA wasn't happy with the situation.

All in all, I don't think it's as simple as 'Nvidia just whacked the price up because they can', even though that's almost certainly part of the pricing decision. But no matter: the product is very good at what it does. It'll be down to the consumer market as to how well it ultimately sells.
tl;dr:
1. the chip isn't 2x more expensive to make
2. the RAM isn't 2x more expensive (let's not forget the price the 3090, with 24GB of GDDR6X, is currently selling at)

That leaves the cooler, PCB and other components, which I also doubt are 2x more expensive (especially with the fairly decent power draw the card has).

Being "good at what it does" should never be used as an excuse. Nvidia was clearly banking on the fact that card prices would remain high (which they still are for the 3000 series). Calling them out for launching at stupidly high prices is normal.
 
Relax dude. Lamborghinis aren't necessary, or a good value, either. But that doesn't mean they "shouldn't exist".

Stop projecting your frustrations with personal income on the rest of the world. If you can't afford it, sorry. But it doesn't mean a product is some sort of an abomination that should be banned. It just means it's not in your future for the moment. DEAL
But this card isn't a "Lambo" (you could call the future 4090 Ti a Lambo).

It's not about being able to afford things or not; it's about the stupidly high prices they are demanding for all GPUs because they're the market leader. You should be outraged, not defending them. If every generation has such price increases, then you'll eventually have GPUs at the same price as a Lambo.
 