Nvidia GeForce RTX 3080 Review: Ampere Arrives!

I don't understand why this Intel hatred...
Intel is better for gaming, still, even with a 5-year-old architecture, with PCIe 3.0, with whatever disadvantage you want, it is still better.
Then you come and slap in our faces a $750 processor from AMD that basically 90% of the market won't buy anyway.
Please be reasonable and do the testing with a 3900X, then compare with a 10700K/10900K, and you'll see that your gaming rig should still be Intel based. Period.
You just try to hide the CPU performance delta by testing only 1440p and 4K, when we know that 1080p is the resolution where the actual difference shows up, and it is not insignificant like you try to portray it, it is 10-15%.
Why you are insisting on an inferior CPU for gaming tests, when the focus should be on having the best CPU there is, is beyond me...
Guess I'll read reviews from the rest of the websites that still use Intel CPUs.
 
TBH the power consumption is impressive...
This review is ruining my plans: I was looking for a 3070 to replace my 2070 Super, but at this point I don't know how much faster it will be.
 
Many thanks for the detailed review, and extra kudos to you guys for including the Intel/AMD cpu comparison towards the end (it must have been a ton of extra work to run everything twice...)

I think the card overall delivered what was expected of it, though I was hoping for lower actual power consumption (I know, the specs said 320W TDP, but still, one can only hope).
What baffled me is that even with that ridiculous consumption, the 3080 still managed to capture the crown for power efficiency ( :eek: ), while keeping the noise down to bearable levels. That is very impressive! Well done NV, looking forward to seeing the rest of the lineup :)
 
About all these cards are good for is "benchmark" types, and maybe graphic artist/game maker types.
 
Since I still game at 1080p, I'm still waiting for all yuze bastages' hand-me-downs to flood the market! :D
 
Power usage is way up there, but the rest is really good. The 4K results are really interesting, and 10GB seems to be enough for games today, although I would have preferred at least 12GB to help with other types of workloads that do consume a lot of VRAM, and for those fringe cases where you do see a lot of VRAM usage in games.
 
I would have preferred at least 12GB to help with other types of workloads that do consume a lot of VRAM...

But if they did that, you wouldn't buy that sweet 3080ti with 20GB coming in 8 months.

/s
 
Although the card is quite impressive, I am disappointed by Nvidia's claim that it would deliver 1x more performance. Anyway, everything seems quite good except the power; let's see how AMD will fare against it.
 
The 1440p results are less impressive than I was hoping for, though of course they are still good. This really seems like a 4K-focused card, and for anything lower the returns are notably smaller. Since the ROPs are about the same but the CUDA core count is doubled, this seems to favor pixels per frame over total frame count. That makes it seem less useful, even a waste, as a 1080p card, but I'll have to get data for that elsewhere.

I wonder if the 3070 will be the same, with significantly better 4K results than 1440p and 1080p.
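As a rough check on the ROP-versus-CUDA-core reasoning above, here is a minimal sketch comparing theoretical pixel fill rate against theoretical FP32 throughput. The specs are commonly cited reference-card figures, assumed here rather than taken from the review:

```python
# Rough sketch: theoretical pixel fill rate vs FP32 throughput, RTX 2080 -> RTX 3080.
# The specs below are commonly cited reference-card figures (assumptions, not from the review).

def fill_rate_gpix(rops, boost_ghz):
    """Theoretical pixel fill rate in GPixels/s (ROPs x clock)."""
    return rops * boost_ghz

def fp32_tflops(cuda_cores, boost_ghz):
    """Theoretical FP32 throughput in TFLOPS (2 FLOPs per core per clock)."""
    return 2 * cuda_cores * boost_ghz / 1000

cards = {
    "RTX 2080": {"rops": 64, "cores": 2944, "boost_ghz": 1.80},
    "RTX 3080": {"rops": 96, "cores": 8704, "boost_ghz": 1.71},
}

for name, c in cards.items():
    print(f"{name}: {fill_rate_gpix(c['rops'], c['boost_ghz']):.0f} GPix/s, "
          f"{fp32_tflops(c['cores'], c['boost_ghz']):.1f} TFLOPS")

# Fill rate grows by roughly 40% while FP32 throughput nearly triples, which fits
# the pattern of bigger gains at 4K than at lower resolutions.
```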
 
The steep power draw has to be down to Samsung's 8nm process. I bet these chips were originally destined for TSMC 7nm, which is probably a slightly better node for high-performance GPUs, before AMD crowded it out.
The only real reference points we have for TSMC's N7 process on large-scale chips are Navi 10 and the GA100 - the former has 10.3 billion transistors and, in the RX 5700 XT with a boost clock of 1900 MHz or so, a TDP of 225 W; the GA100, with 54.2 billion transistors in the A100 SXM4 (1410 MHz), has a TDP of 400 W.

The GA102 chip has 28.3 billion transistors (2.75x Navi 10, 0.52x GA100) and hits a TGP of 320 W at 1710 MHz. If all things were equal (which they're obviously not, because we don't know what voltage all 3 chips run at, for one thing), a scaled GA100 would be around 250 W, whereas a scaled Navi 10 would be over 500 W. So it makes sense that the GA102 would lie somewhere in between the two.

Samsung's N8 may well be a higher-power process node compared to TSMC's N7, but the latter isn't going to be so much better that an N7 3080 would be less than 300 W.
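For what it's worth, here is a minimal sketch of that back-of-the-envelope scaling: board power scaled linearly by transistor count and clock speed, which appears to reproduce the figures above. Voltage and architectural differences are ignored, as already noted:

```python
# Back-of-the-envelope sketch: board power scaled linearly by transistor count and
# clock speed, ignoring voltage and architectural differences. Figures are the ones
# quoted in the post above.

def scaled_power(ref_watts, ref_transistors_bn, ref_mhz, tgt_transistors_bn, tgt_mhz):
    """Scale a reference board power by transistor count and clock."""
    return ref_watts * (tgt_transistors_bn / ref_transistors_bn) * (tgt_mhz / ref_mhz)

# Target: GA102 as configured in the RTX 3080 (28.3 bn transistors, 1710 MHz, 320 W actual).
GA102 = (28.3, 1710)

print(f"Scaled from Navi 10 (10.3 bn, 1900 MHz, 225 W): {scaled_power(225, 10.3, 1900, *GA102):.0f} W")  # ~560 W
print(f"Scaled from GA100 (54.2 bn, 1410 MHz, 400 W):  {scaled_power(400, 54.2, 1410, *GA102):.0f} W")   # ~250 W
```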
 
I don't understand why this Intel hatred...
Everyone keeps calling it hatred, but hatred is such a strong word. Preferences vary from person to person, for one, but anyone who is even semi-active on this site will tell you that the readers were the ones who wanted this change (link here), and it's not like the reviewers hate Intel. Intel has been this site's CPU manufacturer of choice for reviews for the better part of a decade, if not since Bloomfield (looking at you, first-gen i7s).

If you must continue to think that the site is hating on Intel, then consider this, if it helps you sleep at night: by using AMD they're showing the worst case at the top end of the performance segment, so your "better in gaming" Intel processor will be sure to do even better, which gives you something to look forward to.
 
The only real reference points we have for TSMC's N7 process on large-scale chips are Navi 10 and the GA100...

I think it's beyond comparison between an AMD architecture on TSMC 7nm and an Nvidia one on a different process. We do have analysis showing that Ampere is power efficient, even though it has a high TDP.

AMD's architectures have not been as power efficient as Nvidia's for a long time now. Performance per watt has been all Nvidia for five or six generations at this stage, and by a significant margin for the past four - to the point that a 12nm Nvidia design can use roughly the same power as a 7nm AMD one with close gaming performance, e.g. 2070S vs 5700 XT or 2060S vs 5700. Gaming is emphasised because GPU designs these days can obviously vary considerably in their intended usage scenarios.

What we can ascertain about this Samsung 'LPP' node they call 8nm is that it's based on their 10nm node. Nvidia could easily put production of some consumer Ampere parts on TSMC 7nm, which would give us an interesting point of comparison. They confirmed they are still partners.

What we do know for sure is that Samsung bid really low for this contract. They had plenty of capacity and motivation to take market share off TSMC, and Nvidia has an eye for stronger margins. It's a marriage of convenience.
 
Using this calculator, https://www.calculatorsoup.com/calculators/algebra/percent-difference-calculator.php, all of your % differences are wrong.

Death Stranding @ 1440p
3080 = 157
2080 = 111
% difference = 34% (not the 41% you claim, though the 3080 is 41% better than the 1080 Ti)

Average @ 1440p
3080 = 171
2080 = 115
% difference = 39% (and not 49% faster)

What in the actual f**k is going on here?

Try plugging the difference between 100 and 50 into the link you posted. The answer is a 66.6 percent difference. While this is technically accurate, it is not the same as a percent increase. For most people describing the difference between 50 fps and 100 fps, the answer would be a 100 percent increase.
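The distinction is simply which baseline you divide by. A minimal sketch, using the Death Stranding numbers quoted above:

```python
# Percent difference (symmetric: divides by the average of the two values) versus
# percent increase (divides by the old value). The linked calculator computes the
# former; the review reports the latter.

def percent_difference(a, b):
    return abs(a - b) / ((a + b) / 2) * 100

def percent_increase(old, new):
    return (new - old) / old * 100

print(percent_difference(50, 100))   # 66.7  -> "66.7% difference"
print(percent_increase(50, 100))     # 100.0 -> "100% faster"

# Death Stranding @ 1440p, 2080 = 111 fps, 3080 = 157 fps:
print(percent_difference(111, 157))  # ~34.3 -> the commenter's 34%
print(percent_increase(111, 157))    # ~41.4 -> the review's "41% faster"
```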
 
You clearly can't read; there is a 0% difference between the 3950X and the 10900K at 4K. Look at the benchmarks above! Trolls.

TechSpot, one of the most popular games in the world is COD Warzone and you don't include benchmarks for it. It's literally the only game a lot of us play and you ignore it completely. I also don't understand the lack of 1080p testing. You reviewed a 360 Hz 1080p monitor yesterday. I have a 280 Hz 1080p monitor and desperately want to see benches! I get that some of your editors don't care about high-refresh gaming, but a LOT of us do!
COD Warzone is a multiplayer-only game, and it's common knowledge that multiplayer games are difficult to benchmark because you're always online with different situations going on every single time; it's hard to standardize something like that. On the other hand, the game uses the same engine as COD MW, so you can use that as a baseline, and the COD engines have always run great, so there's that.
 
I'm kinda confused and this is why:
"A big 70% performance jump over the RTX 2080 at 4K is impressive, and a huge improvement in cost per frame, so that's a job well done by Nvidia."
Steve, I respect you and Tim to death and watch Hardware Unboxed religiously but this statement is unintentionally misleading. The reason that I say this is because I seriously doubt that anyone can picture the performance of the 2080 the way we can with the 2080 Ti. There are two cards here that people are looking at to see what the generational performance difference is and the 2080 isn't one of them.

What I see here is an extremely modest increase of 23% at 1440p:
CPU_1440p.png

and a less modest increase of 31% at 4K:
CPU_4K.png

Now, some could argue that the RTX 2080 Ti costs $1200 USD while the RTX 3080 costs only $700, and that's true, but just because the RTX 2080 Ti cost that much doesn't mean that it should have, because the top-tier Ti card was the $700 card as recently as the GTX 1080 Ti. It was a terrible price to pay and I'm amazed at just how many people were stupid enough to pay $1200 for that card. Now nVidia is using their past bad behaviour to make a normal price look amazing.

It amazes me just how successfully nVidia has manipulated their customer base. Their first step was to release the 2080 at $700, which was barely a performance jump over the GTX 1080 Ti (which was also $700). Step two was to price the 2080 Ti at $1200, getting their customers so used to being screwed over that pricing the 3080 at $700 (where the 2080 Ti should have been in the first place, and where the 3080 Ti should be priced instead of the 3080) leaves the customer base extremely happy to get screwed over.

Let's look at the pricing progression, shall we?

MSRP of GTX 280 $649
MSRP of GTX 480 $499
MSRP of GTX 580 $499
MSRP of GTX 680 $499
MSRP of GTX 780 $649
MSRP of GTX 780 Ti $699 <- Top-tier Ti line at $699
MSRP of GTX 980 Ti $649 <- Top-tier Ti line at $649
MSRP of GTX 1080 Ti $699 <- Top-tier Ti line at $699
MSRP of RTX 2080 $699 <- Top-tier NON-Ti line at $699
MSRP of RTX 2080 Ti $1199 <- Top-tier Ti line at $1200?!?!?!
MSRP of RTX 3080 $699 <- Top-tier NON-Ti line at $699 (just like RTX 20)
MSRP of RTX 3080 Ti $1199 <- Top-tier Ti line at $1199 (just like RTX 20)
MSRP of RTX 3090 $1499 <- Still way more expensive than the Ti

So, nVidia has conditioned the sheep to suddenly accept a $500 jump in price for the top-tier Ti line from $699 to $1199 and the non-Ti from $499 to $649. Don't tell me it's inflation because the cost of the top-tier card was between $500 and $649 for FIVE GENERATIONS (at LEAST ten years). Don't tell me that it's inflation because the IBM PC model 5150 was $2000 in 1984 and we're not paying $12000 for entire PCs that are not even in the same performance universe as that old IBM. Tech is supposed to get cheaper over time with new tech being the same price as the old was (or LESS not more). Here we have nVidia finding a way around that to charge people even more and more money when they shouldn't be (they don't have to after all) and people are CELEBRATING THIS? Seriously?
 
I was considering an upgrade, but I think my now two-year-old 2080 Ti will survive this generation after all... I really don't like that massive 65 watts of additional power consumption, and there just aren't enough games really pushing a 2080 Ti at 4K (an average frame rate of 82 at 4K ultra settings in the toughest games out there seems OK to me). I really don't understand this panicked "you must sell your 2080 Ti" nonsense.
 
I don't understand why this Intel hatred...
The fans were the ones who requested the change in the first place, and the review also showed that the difference is minimal at best at 1440p and nothing at all at 4K. No one cares about a high-end card being used at 1080p, especially when such a card is being marketed as a true 60 fps 4K experience, plus PCIe 4.0 isn't supported yet on Intel.

It isn't AMD's fault that Intel doesn't have a CPU better than the 3950X, so AMD can sell it at $750, which is a deal compared to Intel's own $1k+ CPUs that barely have more than 10 cores.
 
I think it's beyond comparison between an AMD architecture on TSMC 7nm and an Nvidia one on a different process.
It's not entirely beyond comparison, but that's why I only used Navi 10 and the GA100, as they're both on the same process. They are, of course, vastly different designs.

I added the GA102 into the data sample I used for the simple GPU efficiency examination I did a while back, and it shows that the GA102 is not only an absolute FP32 monster, but also that Nvidia are still keeping to their general design ethos of maximising die size/transistor count for raw processing ability:

updated_data.png
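Along the same lines, though not the actual data set shown above, here is a quick sketch of theoretical FP32 throughput per watt and per billion transistors; the unit counts and clocks are commonly cited specs and should be treated as assumptions:

```python
# Quick sketch (not the actual data set shown above): theoretical FP32 throughput
# per watt and per billion transistors. Specs are commonly cited figures and are
# assumptions, not numbers from this review.

chips = {
    # name: (FP32 units, boost MHz, board power W, transistors in billions)
    "Navi 10 (RX 5700 XT)": (2560, 1905, 225, 10.3),
    "GA100 (A100 SXM4)":    (6912, 1410, 400, 54.2),
    "GA102 (RTX 3080)":     (8704, 1710, 320, 28.3),
}

for name, (units, mhz, watts, bn) in chips.items():
    tflops = 2 * units * mhz / 1e6  # 2 FLOPs per unit per clock
    print(f"{name}: {tflops:5.1f} TFLOPS, "
          f"{tflops / watts * 1000:5.1f} GFLOPS/W, "
          f"{tflops / bn:5.2f} TFLOPS per bn transistors")
```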
 
Now, some could argue that the RTX 2080 Ti costs $1200 USD while the RTX 3080 costs only $700, and that's true, but just because the RTX 2080 Ti cost that much doesn't mean that it should have...
I get your point, but the fact is that the 3080 is not a Ti version and it costs the same as the 2080 did when it launched, so the comparison is fair. Even if you compare it against the 2080 Ti, the 3080 costs hundreds less and offers 30% more performance. In fact, I don't think there's going to be a 3080 Ti in the future; the 3090 is all Nvidia will offer at the highest end, for almost the same price as the 2080 Ti.
 
In fact, I don't think there's going to be a 3080 Ti in the future; the 3090 is all Nvidia will offer at the highest end...
There will always be a Ti/Super version :)
 