Nvidia GeForce RTX 5090 Review

All things considered, temperatures seem good, especially if you compare the size of the coolers between the 4090 and 5090. I like that they're using these bigger, probably 120 mm, fans. Even on cheaper cards, those tiny fans can produce an immense amount of noise, speaking from experience as a 3070 owner.
It's funny that the most exciting thing about the 5090 is the cooler.
 
VERY noob question here.... (apologies for my dumbness...)
Is it technically possible for an AIB to release the 5070 with more VRAM than the expected 12GB?
I.e., release a special edition 5070 with 16GB of VRAM at, for example, a $650 RRP?
Just wondering if the AIB segment could make the offerings a little more attractive on their own...
 
$2k....I've bought cars for less.

[q]only gaming at around 60 fps, which some gamers will find acceptable, but I personally find it less than desirable, [/q]

120 fps or bust, even if I have to cut back on the graphics settings; add in a 240Hz refresh and I'm happy.
 
Thanks Huang, looks like the rest of the Blackwell range will be even more pathetic at moving the price-performance needle. Unless the 5070 Ti beats the 4080 Super by >20% in raster (which won't happen), I'll be jumping on a second-hand 4080 (Super) or hopefully a 9070 XT if AMD can deliver.
 
Disappointment is always more difficult to overcome than anything else, and this Nvidia release is exactly that.
I have the sinking feeling AMD will follow with their own brand of it shortly.
Prices in the Canadian market will stay the same, or (most likely) increase by at least 10-15%.
No one wins but the shareholders, at least for the time being.
 
VERY noob question here.... (apologies for my dumbness...)
Is it technically possible for an AIB to release the 5070 with more VRAM than the expected 12GB?
I.e., release a special edition 5070 with 16GB of VRAM at, for example, a $650 RRP?
Just wondering if the AIB segment could make the offerings a little more attractive on their own...
Technically possible, yes. But not 16 GB. The amount of memory you can have on a graphics card is determined by the bus width of the GPU, and you cannot have 16 GB of VRAM on the 5070's 192-bit bus. A 192-bit bus only allows capacities that are multiples of 3 (3 GB, 6 GB, 12 GB, 24 GB...), while a 128-bit or 256-bit bus gives you power-of-two capacities (4 GB, 8 GB, 16 GB, 32 GB...).

What AIBs could do is release a 24 GB version of the 5070, doubling the amount of VRAM by soldering memory dies to both sides of the PCB, the same thing that is done for the 16 GB version of the 4060 Ti. But they need permission from Nvidia to do it, and since nobody is doing it, Nvidia doesn't seem too keen on allowing that.

What has been rumored is that Nvidia eventually wants to make new versions of those cards using the new 3 GB GDDR7 chips that are becoming available this year (as opposed to the 2 GB GDDR7 chips used now). That would allow 12 GB on a 128-bit bus (so a 12 GB 5060 in the future) and 18 GB on a 192-bit bus (an 18 GB 5070 in the future), but for now it's just rumors. Nvidia is already using 3 GB memory dies on its professional/enterprise products, though, so they will come to consumer products eventually.
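To put the capacity math above in concrete terms, here's a rough Python sketch (the one-chip-per-32-bit-channel layout and the clamshell doubling are my simplifying assumptions for illustration, not anything Nvidia has published):

[code]
# Rough sketch: how bus width and per-chip density determine VRAM capacity.
# Assumes one memory chip per 32-bit channel; "clamshell" doubles the chip
# count by populating both sides of the PCB. Illustrative only.

def vram_capacity_gb(bus_width_bits: int, chip_density_gb: int, clamshell: bool = False) -> int:
    chips = bus_width_bits // 32              # one chip per 32-bit channel
    if clamshell:
        chips *= 2                            # memory dies on both sides of the PCB
    return chips * chip_density_gb

print(vram_capacity_gb(192, 2))                   # 12 GB -> the stock 5070 config
print(vram_capacity_gb(192, 2, clamshell=True))   # 24 GB -> the hypothetical AIB version
print(vram_capacity_gb(128, 3))                   # 12 GB on a 128-bit bus with 3 GB chips
print(vram_capacity_gb(192, 3))                   # 18 GB on a 192-bit bus with 3 GB chips
[/code]

Notice there's no same-size-chip combination on a 192-bit bus that lands on 16 GB, which is why the answer above is "not 16 GB".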
 
All this debate is quite frankly, a waste of time.

You are not the target group for the 5090, whales are and they will gladly buy it - no matter the cost.

/thread
Not really the issue. This article and the debate just confirm the speculation we had pre-benchmarks about the performance of the GPUs down the stack (5080/5070). And it's not looking good.
 
Anyone on Ada now should really just wait for the 6000 series unless they have a lot of money to burn. This is really targeted at people still on Ampere or older.
 
A touch off-topic since the review did not cover DLSS (coming in a later article). This is my best understanding of DLSS 3. I'm not up to speed on how DLSS 4 changes things, but I assume the concept is the same. I don't claim to be an expert, and cutting through the marketing BS is not easy.

Deep Learning Super Sampling (DLSS) uses "AI" to boost gaming performance and visual quality. It consists of two main features:

1) Upscaling
- Renders the game at a lower resolution, then uses a trained neural network to upscale it to a higher resolution.
- Enhances anti-aliasing and detail, allowing higher framerates without requiring as much GPU power as native resolution.

2) Frame Generation
- Inserts extra frames that are AI-generated rather than fully rendered. This can effectively double (or more) the displayed framerate.
- Crucially (and the contentious part), it works concurrently with the GPU’s rendering. As soon as a real frame is finished rendering, DLSS immediately begins predicting the next AI-generated frame and displays it. Meanwhile, the GPU is busy rendering the subsequent “real” frame.

Because these "AI frames" are extrapolations, they rely on motion vectors, optical flow, and depth data to guess how the scene and user inputs evolve between rendered frames. Although user commands influence the game engine’s updates (and thereby the motion vectors), the AI’s predictions still reflect slightly older data, so Frame Generation does not reduce input lag. In fast-paced games, this can lead to a sense of detachment—your inputs are processed, but the AI frame you see may not perfectly capture your latest actions.

Despite this limitation, DLSS Frame Generation provides a substantial framerate boost as seen by the human. Coupled with DLSS Upscaling, it can enable higher resolutions and potentially smoother visuals, balancing real-time performance and image fidelity through AI-driven techniques. How useful DLSS is to you really depends on your desired "balance".
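To make that framerate boost concrete, here's a toy calculation (all numbers are made up purely for illustration, not benchmarks):

[code]
# Toy illustration of how upscaling plus frame generation changes displayed FPS.
# All numbers are invented for illustration, not measurements.

native_fps = 45          # fully rendered at native resolution
upscaled_fps = 70        # rendered at a lower resolution, then DLSS-upscaled
ai_frames_per_real = 1   # DLSS 3 frame generation inserts one AI frame per real frame

displayed_fps = upscaled_fps * (1 + ai_frames_per_real)
print(displayed_fps)     # 140 frames shown per second

# Input is still only reflected in the rendered frames, so responsiveness
# tracks the 70 real frames, not the 140 displayed ones.
[/code]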
You misunderstand a few things.

1) Windows can upscale even on an iGPU (by simply stretching the image). DLSS uses "AI" to upscale and fake the missing detail (between the resolutions). When the fake detail goes wrong, you get artifacts.

2) The GPU does NOT start rendering the next frame in the background while it is creating the fake frame. But guessing how to change the end pixels of the previous real frame is much faster than rendering everything again to create the next frame. So fake frames always add latency, but much less latency than the time to create a real frame (rough numbers after point 3 below).

3) There is no complicated sense of detachment. Your latency with fake frames is always higher than without. Even if in the future we get to the point where fake frames are created in parallel with the real ones, you still won't have reduced latency, just no additional latency. So while fake frames can make a game look smoother, they will never make it feel smoother.
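To put rough numbers on point 2 (the figures are my assumptions, purely for illustration):

[code]
# Illustrative latency comparison for point 2, using made-up numbers.

real_frame_ms = 16.7   # time to fully render one real frame (~60 fps)
fake_frame_ms = 3.0    # assumed cost of generating and slotting in the fake frame

# Without frame generation: a new real frame arrives every ~16.7 ms.
# With frame generation: the fake frame adds some delay on top of the real
# pipeline, so latency goes up, but by less than a full real frame's worth.
print(f"Extra latency from the fake frame: ~{fake_frame_ms} ms")
print(f"Cost of rendering a whole real frame: ~{real_frame_ms} ms")
[/code]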
 
Technically possible, yes. But not 16 GB. The amount of memory you can have on a graphics card is determined by the bus width of the GPU, and you cannot have 16 GB of VRAM on the 5070's 192-bit bus. A 192-bit bus only allows capacities that are multiples of 3 (3 GB, 6 GB, 12 GB, 24 GB...), while a 128-bit or 256-bit bus gives you power-of-two capacities (4 GB, 8 GB, 16 GB, 32 GB...).

What AIBs could do is release a 24 GB version of the 5070, doubling the amount of VRAM by soldering memory dies to both sides of the PCB, the same thing that is done for the 16 GB version of the 4060 Ti. But they need permission from Nvidia to do it, and since nobody is doing it, Nvidia doesn't seem too keen on allowing that.

What has been rumored is that Nvidia eventually wants to make new versions of those cards using the new 3 GB GDDR7 chips that are becoming available this year (as opposed to the 2 GB GDDR7 chips used now). That would allow 12 GB on a 128-bit bus (so a 12 GB 5060 in the future) and 18 GB on a 192-bit bus (an 18 GB 5070 in the future), but for now it's just rumors. Nvidia is already using 3 GB memory dies on its professional/enterprise products, though, so they will come to consumer products eventually.
Thank you for your informative reply, it's appreciated.
So really we're (I'm) waiting to see what affordable (subjective, I know..) 16 GB VRAM (or more) cards the other teams come out with over the next 6 months to give us (me) hope...
Yes, I know, don't hold my breath...
 
It's true. The biggest improvement (with DLSS 4). DF has a really good DLSS 4 benchmark review.

I'm not buying an expensive GPU to play around in a handful of games with DLSS4 and pray to our gamer god Gaben that future games implement the tech. It's embarrassing to push DLSS4 instead of actual hardware improvements.

I'm expecting real improvements in all of the hundreds of games I have in my library and in applications like Blender/Cinema4D. And that only comes with raw hardware improvements.
 
TechSpot is suffering from the "too big to fail" syndrome. This is not an 80/100 card. It's a 65/100 card, but because it's the biggest show in town, you can't score it lower, right?
 
DISAPPOINTING!!!

NO 4K 120Hz RT:ON.

So the 4K ray tracing test is missing here,
only 1440p. Sure is disappointing.
 
All this debate is quite frankly, a waste of time.

You are not the target group for the 5090
This.

I'm not buying an expensive GPU to play around in a handful of games with DLSS4 and pray to our gamer god Gaben that future games implement the tech. It's embarrassing to push DLSS4 instead of actual hardware improvements.

I'm expecting real improvements in all of the hundreds of games I have in my library and in applications like Blender/Cinema4D. And that only comes with raw hardware improvements.
Sure, then it's not for you. For me, it's right in line with the AI-tech advancement I expected in a consumer-grade gaming GPU from Nvidia. This is likely the future moving forward, where true brute-force GPU performance is no longer the focus, but more AI-driven tech.

I upgraded from a 3080 to a 4080 due to Frame Gen and the slight bump in raster. The 5080 might just be the next upgrade path for me, as MFG interests me. And don’t think AMD isn’t thinking the same. 10-20 years from now, you’ll look back and see how far we’ve come with AI doing the heavy lifting on GPUs.
 
Pay two, or more like three, thousand bucks, plus at least another $1,000 for the other parts, so we can move our little cartoon character around more smoothly on a cheap LCD screen. We are the dumbest generation of humans in history.
 
This.


Sure, then it's not for you. For me, it's right in line with the AI-tech advancement I expected in a consumer-grade gaming GPU from Nvidia. This is likely the future moving forward, where true brute-force GPU performance is no longer the focus, but more AI-driven tech.

I upgraded from a 3080 to a 4080 due to Frame Gen and the slight bump in raster. The 5080 might just be the next upgrade path for me, as MFG interests me. And don’t think AMD isn’t thinking the same. 10-20 years from now, you’ll look back and see how far we’ve come with AI doing the heavy lifting on GPUs.
With the 4080 being 52% faster than the 3080 in raster according to techspot's review, you are just reinforcing my argument :)

You made a substantial raw hardware performance upgrade. Thank you for your help. (y) (Y)
 
I think it is especially telling how poor this card is when you consider it's a full two years since the 40 series... Given its power draw and advanced cooling, it seems to do nothing more than an overclocked and over-volted 4090, onto which they have artificially tagged some exclusive software 'improvements' to try and sweeten the pill. Worst halo-tier graphics card Nvidia has ever produced.
 
With the 4080 being 52% faster than the 3080 in raster according to techspot's review, you are just reinforcing my argument :)

You made a substantial raw hardware performance upgrade. Thank you for your help. (y) (Y)

Glad to be of help! What people choose to do with their wallets is their own business, just like it is for you (and me). I simply provide my commentary based on metrics. If the context went over your head, that’s beyond my control :yum
 
Glad to be of help! What people choose to do with their wallets is their own business, just like it is for you (and me). I simply provide my commentary based on metrics. If the context went over your head, that’s beyond my control :yum
I have no problem with people buying what they want with their money (I've spent my money on things I shouldn't have either). The context is simple: the gen-on-gen gains are too small and people want more when they upgrade.

Touting DLSS4 as a "future technology" for something you pay this much for... it should just be seen as a bonus on top of the raw performance you get, not the performance itself. It's why, when you upgraded, you went for a big +50% raw performance boost. Your own behaviour is proof of what I said.
 
I upgraded from a 3080 to a 4080 due to Frame Gen and the slight bump in raster. The 5080 might just be the next upgrade path for me, as MFG interests me. And don’t think AMD isn’t thinking the same. 10-20 years from now, you’ll look back and see how far we’ve come with AI doing the heavy lifting on GPUs.
Touting DLSS4 as a "future technology" for something you pay this much for... it should just be seen as a bonus on top of the raw performance you get, not the performance itself. It's why, when you upgraded, you went for a big +50% raw performance boost. Your own behaviour is proof of what I said.
My context was about my upgrade path (3080 -> 4080 -> 5080), with the interest in the 4080 being Frame-Gen. If MFG isn’t for you, then it’s not for you, simple as that. Based on the MFG performance metrics thus far, I stand corrected.

Raw performance as the primary focus is no longer feasible, as highlighted in commentary like Digital Foundry’s 5090 review. Anyone who’s been paying attention to what AI has been doing in tech would understand why this is the direction GPUs are heading. It also makes perfect sense for Nvidia to continue pushing DLSS as they’re leading in the path they know best.

The hesitation toward DLSS is understandable, just as it was when it was first introduced. The 60-series will benefit from both a node change and even greater DLSS features, but there's no regressing from innovations like MFG. Again, see my context: I have no interest beyond the 5080, as based on the metrics it feels like a sensible move from the 4080, given which features interest me.
 