Nvidia GeForce RTX 3090 Review: One Massive Graphics Card

But the real question is: Will there be an Ampere TITAN RTX?

Because if they do come out with something more powerful and ridiculous than this, then you'll have to "settle" for this $1500 card when the real deal releases.

My guess is that this card was built to compete with the Radeon Pro Vega II Duo.

That's really the only other card this should be compared with.

It's a workstation card that they decided to market to gamers.
 
Stop looking at absolute values; those don't matter for the comparison we are doing - just the percentages. Absolute values are always all over the place depending on how games are tested.

Even the best and most positive reviews give the 3090 just 15% more FPS vs the 3080 at 4K with a 10900K, and that's only because they tested just a few games. If they tested more games, they would certainly get closer to the 10% found by Steve.

TL;DR: it's not worth buying unless you do rendering of complex 3D scenes that eat up a ton of VRAM*. It's a fake Titan GPU with all of the restrictions of a gaming card in the drivers. A vanity card.

*and even here we don't know if Nvidia will release the rumoured 20GB 3080.

That wasn't my point at all. My point was that Ryzen is a bad CPU choice for pulling the maximum out of graphics cards. Using it for reviews like these, well, you might as well just go with an Intel 2600K, since you are not doing your best anyway.
 
That wasn't my point at all. My point was that Ryzen is a bad CPU choice for pulling the maximum out of graphics cards. Using it for reviews like these, well, you might as well just go with an Intel 2600K, since you are not doing your best anyway.
You've clearly not read the review:
"At 4K we’re not CPU limited in any of the tests. A 5.3 GHz 10700K for example, provides the exact same frame rates in all titles tested with the RTX 3090."

The differences in FPS you are seeing between reviewers are mostly down to testing methodology (different settings, different game sections, different methods of recording the FPS, variance across runs, etc.) and other factors. This is why I told you not to focus on the absolute values. The CPU used is fine, especially for 4K tests. A 10900K won't change the results by more than 1-3%, which is within the margin of error.

That being said, you might find one or two games which are better optimised for the 10900K, but one or two games don't change the percentages when you test so many games (they do change the percentages significantly when you only test 4-6 games).
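To put some rough, made-up numbers on that (these aren't from any review, just an illustration of how a couple of outlier titles skew a small sample):

```python
# Hypothetical uplifts of the 3090 over the 3080, in %: most games land around
# 10%, while two outlier titles land at 25%.
typical_gains = [10] * 12   # twelve "ordinary" games
outlier_gains = [25, 25]    # two games that favour the bigger card

small_sample = typical_gains[:4] + outlier_gains   # a 6-game review
large_sample = typical_gains + outlier_gains       # a 14-game review

def average(xs):
    return sum(xs) / len(xs)

print(f"6-game average uplift:  {average(small_sample):.1f}%")   # 15.0%
print(f"14-game average uplift: {average(large_sample):.1f}%")   # 12.1%
```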
 
No, it produces just as much heat as the V64 (actually a little more), but its cooling solution is so much better at wicking it away from the GPU.

That 350-400 watts isn't just disappearing into the ether. It's being efficiently pulled away from the GPU, into a heatsink that weighs more than some laptops.

Have two of these puppies in SLI and you could throw the boiler out of the house.
 
Absolutely poor value card. At least, for most of us.

Hopefully no one "pre-ordered" such a wasteful (monetarily) product for home use.

The 3080 hits the nail on the head price-wise and performance-wise. It trounces the 2080 Ti, a sore point for owners given the price they paid.
 
You've clearly not read the review:
"At 4K we’re not CPU limited in any of the tests. A 5.3 GHz 10700K for example, provides the exact same frame rates in all titles tested with the RTX 3090."

The differences in FPS you are seeing between reviewers are mostly down to testing methodology (different settings, different game sections, different methods of recording the FPS, variance across runs, etc.) and other factors. This is why I told you not to focus on the absolute values. The CPU used is fine, especially for 4K tests. A 10900K won't change the results by more than 1-3%, which is within the margin of error.

That being said, you might find one or two games which are better optimised for the 10900K, but one or two games don't change the percentages when you test so many games (they do change the percentages significantly when you only test 4-6 games).

Well, that's not true, or Steve f***ed something up. I can link reviews proving that statement wrong. A faster CPU gives more fps, period. A 10900K gives more fps even at 4K. Yes, the 3080 and 3090 are both going to be limited by the CPU up until 4K and will often have almost identical performance, but fps will be higher on a 10900K no matter how hard you wish it not to be.

Edit: the reason I am not linking other reviews is that they are from other websites doing the same thing TechSpot is. I am not going to "promote" them here; it would be rude.
 
I am not going to "promote" them here; it would be rude.
But it's not rude to question Steve's capabilities, without justifying one's remarks? Not many reviewers have examined CPU scaling with a 3090, but there's absolutely nothing wrong with providing links to such testing. Here's one to begin with:


Notice that in all of the games tested, apart from CS:GO, the 3090 at 4K produced the same results (within the usual margins of error and variability) when running with the i9-10900K at 5.3 GHz, as it did at stock speeds.

If using an AMD Ryzen 3950X is going to limit the RTX 3090's performance so much, then why did Steve's testing produce higher results than Hardware Canucks?

NVIDIA-GeForce-RTX-3090-Review-40.jpg


4K_RSS.png


Or picking another test, why did they get very similar average results in Horizon Zero Dawn, but different 1% Low values?

NVIDIA-GeForce-RTX-3090-Review-32.jpg


4K_HZD.png


The answer is simple. Every reviewer uses a unique test platform, configured differently from everyone else's - not just the hardware settings, but how the game is used for testing (some take in-game results, like Steve does; others use a built-in benchmark if available).

Unless one uses exactly the same workload on the graphics card, in exactly the same environmental conditions, any comparison between test platforms is a vacuous exercise at best. But of course, that's exactly what Steve did do when he ran the 3080 in the 10900K and 3950X machines:

CPU_4K.png


One might then ask why the 3090 isn't consistently faster than the 3080 in all games at 4K, if it's not CPU limited:

4K_3080.png


In terms of theoretical maximum throughputs, the 3090 FE fares against the 3080 FE like this:

Pixel Rate = 189.8 vs 164.2 GPixel/s (15.6% higher)
Texture Rate = 556.0 vs 456.1 GTexel/s (21.9% higher)
FP32 rate = 35.58 vs 29.77 TFLOPS (19.5% higher)
Bandwidth = 936.2 vs 760.3 GB/s (23.1% higher)

So if the 3090 was completely GPU bound at 4K, in all games, then it should be at least 15% faster than a 3080. Well, we can see this in some of the tests in the above chart, but what about the rest? The answer to this is simple: there are indeed some games that will be CPU limited, even at 4K, with a top end graphics card.
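If anyone wants to check that arithmetic, here's a quick sketch using the spec figures quoted above (nothing beyond simple ratios):

```python
# Theoretical throughput of the 3090 FE versus the 3080 FE, from the spec
# figures quoted above; the uplift is just a ratio.
specs = {
    "Pixel rate (GPixel/s)":   (189.8, 164.2),
    "Texture rate (GTexel/s)": (556.0, 456.1),
    "FP32 (TFLOPS)":           (35.58, 29.77),
    "Bandwidth (GB/s)":        (936.2, 760.3),
}

for name, (gpu_3090, gpu_3080) in specs.items():
    uplift = (gpu_3090 / gpu_3080 - 1) * 100
    print(f"{name:25} {uplift:5.1f}% higher")

# The smallest of these (~15%, pixel rate) is the rough floor you'd expect a
# fully GPU-bound game to show; results well below that point to a limit
# somewhere else (CPU, engine, etc.).
```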

But let's take one last test from Hardware Canucks:

NVIDIA-GeForce-RTX-3090-Review-71.jpg


The 13% CPU overclock is only producing a 1% increase in average performance, with the 3090. The card itself is only 9% faster than the 3080 in this test, and this is with what most people would consider to be the fastest 'gaming' CPU out there.

But this is all because of the game itself, and not the hardware. For example, Death Stranding produces a similar outcome, because the game scales more with threads than it does with core speed.

So all the noise about how an AMD Ryzen 9 3950X is somehow constraining the likes of a 3080 and 3090, and not showing them in their true light, is just that: noise. There's no quantifiable, repeatable evidence to show that this choice of CPU is deliberately constraining the results of these graphics cards at 4K, when examined in the same test conditions.
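A crude way to picture the CPU-bound vs GPU-bound point (a toy model with made-up per-frame costs, not how any real engine pipelines its work):

```python
# Toy model: frame rate can't exceed whichever stage takes longer per frame.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms, gpu_ms = 8.0, 14.0   # hypothetical per-frame costs at 4K (GPU-bound)

print(f"Stock CPU:         {fps(cpu_ms, gpu_ms):.1f} fps")          # ~71 fps
print(f"13% CPU overclock: {fps(cpu_ms / 1.13, gpu_ms):.1f} fps")   # ~71 fps - the GPU is still the wall
print(f"15% faster GPU:    {fps(cpu_ms, gpu_ms / 1.15):.1f} fps")   # ~82 fps
```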

Besides, it's not like every review of those cards is done using a 10900K platform - a brief survey shows the following:

Tom's Hardware - 9900K
PC Gamer - 10700K
HotHardware - 10980XE
Guru3D - 9900K
TechPowerUp - 9900K
Eurogamer - 10900K
KitGuru - 10900K
ArsTechnica - 8700K
GamersNexus - 10700K
PCWorld - 8700K
PC Perspective - 3900X

That reviewers (ourselves included) are using a range of different platforms should be celebrated, not picked over. All data is useful, no matter whether or not one feels the information is relevant to oneself.
 
Average framerates over 14 games mean nothing. That's just rude. It's like telling a person that his Ferrari will average 50 mph because he is going to use it in town mostly, but omitting the fact that when he goes to the racetrack (which is why he could have bought the damn thing) he will average 120 mph...

Ryzen is not better than, or the same as, Intel's top offering in games (not the 10700; you are not going to have $1500 for a 3090 but not have the money for a 10900K). Steve knows this. Nothing rude about it. I read reviews made by people who have been in the business at least 10 years longer than he has. Are THEY lying? Because, as I see it, someone is either a) lying or b) f***ed something up. The latter is the most likely.

cjH8ApD.png

aPGCRKj.png


solve this.
 
@Ludak021 - the fact that Guru3D got different test results from Steve can't be used as evidence to suggest that the 3950X is somehow hugely slower than a 9900K, because there's no indication whatsoever that their test section of the game is remotely similar to ours.

One only has to look at other reviews to see this:


The RTX 3090 in their test achieved an average of 104.9 fps, and their test platform uses a 10900K in an Asus Maximus XII Extreme Z490, with 16 GB of DDR4-3600 CL16.


Average of 114 to 115 fps, also using a 10900K but at 5.1 GHz in an ASUS ROG Maximus XII Hero, with 32GB of DDR4-3600 CL18.


Average of 111.7 fps, with a 9900K @ 5 GHz in an EVGA Z390 DARK, with 16 GB of DDR4-4000 CL 19.

All three achieved lower average results in Death Stranding at 4K than Guru3D, but very similar results to our tests. Am I suggesting that Hilbert is doing something wrong or somehow lying about his data? Of course not! The difference in figures is because everyone is using a different test run in the game, with different test platforms, to pull in data (Eurogamer do provide a video of their benchmark section).

The only way one can accurately determine the effect the CPU, and the rest of the test platform, is having on a game (at 4K, using a 3090) is to use exactly the same benchmark runs, in exactly the same controlled environment. Steve has done this and shown that, on average, at 4K, the likes of the RTX 3080 is GPU bound (whereas it is not at 1440p).

Edit: Nobody here is saying that the Ryzen 9 3950X is a faster CPU than a 10900K in games - our own testing shows that this is clearly not the case. But such tests are conducted in CPU bound situations; shift the performance limit to the GPU, though, and it's a different story, as shown in the 1440p test results in the 10900K review linked. Push things to 4K and there would be no difference at all across a broad range of CPUs.
 
4K performance per watt seems decent, but that's using Doom at 4K - I doubt it's anywhere near as good at 1440p in other titles like Death Stranding, though; the 2080 Ti would likely still be in the lead there. Appreciate the testing!
 
In GPU render engines like OctaneRender, Redshift, and V-Ray, the RTX 3080 and 3090 greatly outperform the RTX 20-series cards, beating the RTX 2080 Ti (which is significantly more expensive) by a large 60% and 90% respectively. Unreal Engine also saw massive gains, averaging 60-80% higher performance than the RTX 2080 Ti.

Applications that are more CPU-focused like DaVinci Resolve or the Adobe Creative Cloud suite, however, have much more mixed results. In Resolve, the RTX 3080 can still be up to 35% faster than the 2080 Ti in certain situations, while the RTX 3090 is on par with a pair of RTX 2080 Ti cards. However, this drops to just a 10-20% performance gain in Premiere Pro and After Effects. And in Photoshop and Lightroom Classic where GPU acceleration is much less pronounced, there is very little performance gain to be had with either the new RTX 3080 or RTX 3090.
Might wait for the 3080 Ti in that case and get two, saving some moolah while I wait. My 1080 Ti is still nice but does struggle in some 4K scenarios for editing.
 
Steve's review sure doesn't sound like it's for a card that deserves an 80/100 score...
It's the same score as awarded to the RTX 2080 Ti - a $1200 card at launch. This one costs 25% more but, as the tests show, on average the 3090 is 30% faster at 1440p and 45% faster at 4K. Yes it consumes significantly more power to achieve this, and it's monstrously big, but it's actually a little more efficient than the 2080 Ti. Taken in isolation, it's notably better than the Titan RTX (and $1000 cheaper too); it's the most powerful graphics card out there and really stands out as a viable option for professional graphics artists. One could argue, on those merits alone, that it's perhaps worth a higher score.

Of course, the RTX 3080 is the reason why the score isn't higher - only 10% faster, on average, at 4K? And it costs more than double? I can see why many folks would rate it much lower than 80/100 because of these factors. But if one weighs up its merits for the sector it's best suited for, and Nvidia's own competition, 80 seems to be a reasonable enough compromise.
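To put that value gap into plain numbers, here's a back-of-the-envelope sketch using the Founders Edition launch MSRPs and the review's ~10% average 4K uplift, with a normalised 100 fps baseline assumed for the 3080:

```python
# Launch MSRPs (Founders Edition) and a normalised 4K average: 3080 = 100 fps,
# 3090 = 110 fps (the review's ~10% average uplift).
cards = {
    "RTX 3080": {"price": 699,  "avg_4k_fps": 100.0},
    "RTX 3090": {"price": 1499, "avg_4k_fps": 110.0},
}

for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['avg_4k_fps']:.2f} per average 4K frame")

# RTX 3080: $6.99 per frame; RTX 3090: $13.63 per frame - roughly double the
# cost for every frame of performance.
```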

Of course, had Nvidia positioned and advertised it properly in the first place (i.e. as a Titan card), then I don't think anybody would be complaining. The 3080 is faster and a lot cheaper than the 2080 Ti, and a 20GB version with a few more cores could easily have been released as a Ti replacement (in the form of a $900 3080 Super or Ti). The 3090 would have then sat perfectly as a Titan RTX replacement.

Nvidia could still release a new Titan (anybody got any ideas as to what they could call it?), using a full GA102 chip and, if Micron are able to get enough high density GDDR6X chips out, loaded with 48 GB of RAM. But it will only be fractionally better than a 3090 and way more expensive.
 
It's the same score as awarded to the RTX 2080 Ti - a $1200 card at launch. This one costs 25% more but, as the tests show, on average the 3090 is 30% faster at 1440p and 45% faster at 4K. Yes it consumes significantly more power to achieve this, and it's monstrously big, but it's actually a little more efficient than the 2080 Ti. Taken in isolation, it's notably better than the Titan RTX (and $1000 cheaper too); it's the most powerful graphics card out there and really stands out as a viable option for professional graphics artists. One could argue, on those merits alone, that it's perhaps worth a higher score.

Of course, the RTX 3080 is the reason why the score isn't higher - only 10% faster, on average, at 4K? And it costs more than double? I can see why many folks would rate it much lower than 80/100 because of these factors. But if one weighs up its merits for the sector it's best suited for, and Nvidia's own competition, 80 seems to be a reasonable enough compromise.

Of course, had Nvidia positioned and advertised it properly in the first place (i.e. as a Titan card), then I don't think anybody would be complaining. The 3080 is faster and a lot cheaper than the 2080 Ti, and a 20GB version with a few more cores could easily have been released as a Ti replacement (in the form of a $900 3080 Super or Ti). The 3090 would have then sat perfectly as a Titan RTX replacement.

Nvidia could still release a new Titan (anybody got any ideas as to what they could call it?), using a full GA102 chip and, if Micron are able to get enough high density GDDR6X chips out, loaded with 48 GB of RAM. But it will only be fractionally better than a 3090 and way more expensive.

Honestly, I thought that rating was too high for the 2080 Ti too - and that was a WAY better value than the 3090 was. Like the Turing line overall, it was still incredibly overpriced relative to the performance gains you got vs. Pascal, and the marquee RTX features still don't have broad adoption even now. The 3080 and 3070 reflect a return to normalcy in terms of value.

The 3090, though? It is a complete joke at $1500 to start, and NVIDIA's "8K" gaming angle is nonsense. It seems like the only reason they're pushing it as a gaming card is so they can charge an even more ridiculous premium on the Quadro cards, since the 3090 doesn't have the same hardware and software optimizations for professional use AFAIK.
 
How did you expect 30% given it only has 20% more processing cores than the 3080? It's not magic.
Well, to be honest, I only knew roughly in my head what the CUDA counts were when I made that post. I didn't look them up or do the math. You are correct; it's almost exactly 20% more CUDA cores, so the performance expectation should be right around 20% greater. Which honestly makes the 2X+ cost even more ridiculous.
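For reference, a quick check of the published shader counts (10496 CUDA cores on the 3090 versus 8704 on the 3080) bears that out:

```python
# Published CUDA core counts for the two GA102-based cards.
cores_3090, cores_3080 = 10496, 8704
extra = (cores_3090 / cores_3080 - 1) * 100
print(f"The 3090 has {extra:.1f}% more CUDA cores than the 3080")   # ~20.6%

# Measured 4K gains land closer to 10-15% because clocks, power limits and
# game scaling rarely track core count one-for-one.
```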
 
Average framerates over 14 games mean nothing. That's just rude. It's like telling a person that his Ferrari will average 50 mph because he is going to use it in town mostly, but omitting the fact that when he goes to the racetrack (which is why he could have bought the damn thing) he will average 120 mph...

Ryzen is not better than, or the same as, Intel's top offering in games (not the 10700; you are not going to have $1500 for a 3090 but not have the money for a 10900K). Steve knows this. Nothing rude about it. I read reviews made by people who have been in the business at least 10 years longer than he has. Are THEY lying? Because, as I see it, someone is either a) lying or b) f***ed something up. The latter is the most likely.

cjH8ApD.png

aPGCRKj.png


solve this.
Holy thread resurrection, Batman - those charts have DLSS on, so they're not running at native resolution...
 