Nvidia launches RTX 50 Blackwell GPUs: $2,000 RTX 5090, $1,000 RTX 5080, RTX 5070 / Ti are $549 and $749

For the haters (haters gonna hate, am I right?) - Digital Foundry has an epic analysis of Multi Frame Gen and why it is the future of gaming graphics and hardware.

The future is wild!
 
Nvidia made the choice between the 5080 and the 5070 Ti more difficult than I predicted. They kept the 5080 under a grand but also made the 5070 Ti pretty close in performance (on paper) for 25% less.

I can't wait to see the HUB reviews so we can see the real performance without fake frames.

I am also curious if we have turned a corner on reducing the performance hit for ray tracing.
 
100%.

I wouldn't consider myself a hardcore gamer, somewhere between casual and hardcore. FG feels terrible. Maybe it's okay on a 240Hz monitor with 120 frames rendered, but going from 60 to 90-100 fps (on my 4080 FG doesn't double the frame rate, it's more like 1.5x-1.7x) feels worse and less responsive than just playing the game at 60fps, so I always turn it off. I haven't played a single game yet with the feature where I didn't turn it off. The only game that felt somewhat okay with FG was Black Myth: Wukong, but even in that game I ultimately preferred 60 fps over FG and adjusted my settings accordingly.
It's all marketing.

Most fledglings don't even know what they are chasing when talking performance. Max "ANYTHING" is not performance for the top-tier GPU buyer; it's Consistency.

Let me explain. I held on to my vaunted SONY (FW900) CRTs as long as possible when flat-panel LEDs hit the gaming scene, and only sold them later, because even though those new panels could do 1080p at 120Hz, their pixel response, signal latency, and input lag were 10x greater than the CRTs all of us (at the time) were used to playing on.

It isn't until NOW, with the input lag & pixel response of these new OLED gaming displays, that old-school gamers can get back the feel of responsiveness in games that we once had. Those exact same SONYs I sold in 2010 are going for $10k apiece now, because that technology doesn't have latency.


It was never about frames, or even Hz. Multi-sync monitors LOCKED into any preset resolution, but at a constant frequency:

  • 1024 x 768 @ 148Hz VESA
  • 1152 x 864 @ 148Hz VESA
  • 1600 x 1024 @ 120Hz VESA
  • 1600 x 1200 @ 85Hz VESA
  • 1920 x 1080 @ 120Hz VESA
  • 1920 x 1200 @ 85Hz VESA
  • 2048 x 1280 @ 85Hz VESA
  • 2048 x 1536 @ 75Hz VESA
  • 2304 x 1440 @ 60Hz VESA
etc...

Today's monitors are not multi-sync: they have a native resolution (a set number of pixels) and can alter their refresh RATE, which is something a gamer never really wants. They want CONSISTENCY.

SO when buying a gaming monitor, the 1st thing to look at is not Hz, but latency (i.e. input lag).


The reason so many of you n00blings are focused on the single metric of MAX FRAMES is MARKETING, without really grasping the underlying principle of why. The honest truth for every pro gamer on the planet is that it is not about MAX frames.. it is 100% about avoiding the lowest frames.

Example:
Professionals don't care if they are hitting 380-420fps, or that someone else is getting 580fps; they only care that it's not dipping down to 90fps. That is how concerned they are about the lowest frames. Additionally, running your card at 580fps only to heat-soak your GPU 20 minutes into a match and have your frames become inconsistent and varied is not as good as limiting your card to 320fps and keeping those frames consistent throughout the match/game session.
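To put a number on that, here's a minimal Python sketch (the frame times are made up for illustration, not from any benchmark) showing why a capped-but-consistent run can beat a higher average with dips, comparing average FPS against the commonly quoted 1% low:

```python
# Minimal sketch: average FPS vs. 1% low FPS from a list of frame times (ms).
# All frame-time values below are hypothetical, purely for illustration.

def fps_stats(frame_times_ms):
    """Return (average FPS, 1% low FPS). '1% low' here is the average FPS
    over the slowest 1% of frames - one common way reviewers report dips."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps

# A steady 320 fps cap (3.125 ms/frame) vs. an uncapped run that mostly
# hits ~580 fps but occasionally dips to ~90 fps once the GPU heat-soaks.
capped   = [3.125] * 1000
uncapped = [1.7] * 990 + [11.0] * 10

for name, times in (("capped", capped), ("uncapped", uncapped)):
    avg, low = fps_stats(times)
    print(f"{name:>8}: avg {avg:6.1f} fps, 1% low {low:6.1f} fps")
```

The uncapped run wins on average FPS, but its 1% low collapses to roughly 91 fps, which is exactly the kind of dip being described above.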

SO, if you want unfettered gameplay, turn off vsync. Use your monitor as just a display, with no input lag or processing and nothing to get in the way of your gameplay.

Today, any OLED display at 144Hz with near-zero input lag, plus a GPU able to keep frames consistently above that^, is all that is needed for 99.9% of all gamers.



Lastly,
AI, frame generation, and DLSS are all gimmicks that introduce latency, all for visual fluff and effect. Great for low-input, entertainment-style games; unnecessary and unwanted in multiplayer/competitive games.
 
Benchmark reviews will tell the truth at the end of the day. I will eventually pick up one of these new 5090s, and I guess a new power supply to go with it, lol, after I sell a kidney or part of my liver.

I'm at the point of waiting for them to make power supplies dedicated to JUST the graphics card. Heck, total power dedication to one specific part would probably let them fix the melted-cable issue the 4090 had and the 5090 probably will have. But we need a company willing to go this far - maybe Corsair - to build a PSU dedicated solely to graphics cards, then some method of installing it over another PCI-E slot so it turns on with the motherboard and has a place in a standard PC case.

I mean 575 Watts for a SINGLE COMPONENT is crazy!!!! My mom's entire PC build is just under 200 Watts for everything. My PC build is 700 Watts for everything. In a day and age where efficiency takes the cake, these graphics cards decided to shoot for max performance and skip efficiency it appears!
 
We will see increased power demands. Look at it this way: silicon chips have mostly reached their limits of size versus processing power. Therefore, to get more processing power, chips have to increase in size, and to keep adding more and more compute, chips are going to get bigger and bigger. Look up Cerebras: they created a chip spanning an entire wafer with 850,000 cores and 2.6 trillion transistors, with on-chip SRAM expanded to 40 gigabytes, memory bandwidth of 20 petabytes per second, and total fabric bandwidth of 220 petabits per second, but it requires a ~20 kW power supply to run.
https://en.wikipedia.org/wiki/Cerebras
 
I mean 575 Watts for a SINGLE COMPONENT is crazy!!!! My mom's entire PC build is just under 200 Watts for everything. My PC build is 700 Watts for everything. In a day and age where efficiency takes the cake, these graphics cards decided to shoot for max performance and skip efficiency it appears!
500W is the power of the improvised open coil cinder block heater I used to have under my desk to keep warm and make surrogate coffee back in the late '80s in Communist Romania.
Come to think of it, the current GPU situation reminds me a bit of Communist Romania. Only the rich and the connected had the good stuff, the rest of us had to fight over whatever scraps were made available, while the media was telling us what great times we’re living in and the regime's lackeys were praising all the incredible progress we’ve made.
 
How can you ask 2,000 fking dollars for a piece of computer hardware and then say it's efficient at almost 600 watts!! This is insane and ridiculous!

How many 7900 XTX or RX 9070 XT cards would it take to match or exceed the performance of an RTX 5090 - two or three of them? Add up all that power (watts), and then come back to whether the RTX 5090, as a single GPU with 32GB of VRAM offering that level of performance, really uses way more power for the performance than any other card.
 
Come to think of it, the current GPU situation reminds me a bit of Communist Romania. Only the rich and the connected had the good stuff, the rest of us had to fight over whatever scraps were made available, while the media was telling us what great times we're living in and explaining how much progress we've made.
This childish self-entitlement mentality is out of control. I grew up in the Communist USSR, where only the rich and connected could buy meat, bar soap, and clothes that fit. Today, neither wealth nor political connections are required to own a video card; you simply need to pay the cost of producing the card itself, plus the GPU maker's average 20% net profit margin. And if you think great wealth is required to purchase one of these, your career choices likely need reconsidering.
 
Remember, the 4070 Ti was meant to be the 4080. Nvidia only changed course after an outcry about an xx80 getting only a 192-bit-wide bus. That puts the new 5080's lower relative spec in context.

What's the bet that DLSS 4 multi-frame generation could be done on the 40 series too?
 
The whole presentation was disappointing and sketchy AF, IMHO. In my previous post, I presumed we would have the usual generational uplift, i.e. a newer card performs as well as the one a tier above it in the previous generation (e.g. the 5080 performs as well as a 4090, the 5070 performs as well as a 4080, etc.). The focus on AI and frame generation rather than the cards' actual abilities raised huge red flags for me and makes me suspect we may be seeing an RTX 2000 series-style uplift with the RTX 5000 series (a modest 20% uplift over the predecessor in the same tier, unable to reach the performance of the card a tier above it in the previous GTX 1000 generation). I suspect actual reviews will not be kind if that is the case, just as they weren't for the RTX 2000 series - at least for the RTX 5080 and below.

What will be interesting to me is how much straight uplift the RTX 5090 brings over RTX 4090. With the massive amount of cores and added VRAM, I would hope that we are looking at around a straight 50% generational uplift at least. (RTX 4090 managed about a 65% straight uplift over RTX 3090 Ti per Internet). I'll be very curious to see what reviewers actually find in their tests and the community's reaction to it.
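For clarity on how those uplift percentages are computed, a trivial sketch (the FPS figures below are placeholders, not measurements):

```python
# Generational "straight uplift" as a percentage: (new / old - 1) * 100.
# The FPS numbers below are placeholders, not measured results.

def uplift_pct(old_fps, new_fps):
    return (new_fps / old_fps - 1.0) * 100.0

print(f"{uplift_pct(100, 165):.0f}%")  # 100 -> 165 fps: the ~65% figure cited for 4090 over 3090 Ti
print(f"{uplift_pct(100, 150):.0f}%")  # 100 -> 150 fps: the hoped-for 50%
```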
 
Just a thought: with only one 5090 in their data center, Nvidia can now simultaneously power two "5080"-tier GeForce NOW subscriptions. Handy.

It would be more reasonable to give the 5090 the name of a professional product (cf. Threadripper), as it caters to many professional users. But then it would no longer function as the halo product and status symbol for wealthy gamers. (Yes, now blame me for not being able to afford a 5090. I will crawl back underneath a stone in a minute.)
They used to do that, they called that special class of workstation-yet-consumer cards Titan.

The Titan was the top of the food chain for Nvidia chipsets in its respective generation, both in price and in performance. Having the Titan (or, when SLI was still a thing, multiple Titans) was how PC gamers would win the measuring-stick contest. In those times the *80 Ti was a sometimes barely cut-down version of the Titan, slotting in as a very high-performance yet surprisingly cost-effective package compared to the *80 non-Ti variant. The legendary 1080 Ti was a product of this, as were the 2080 Ti, the 980 Ti, and the 780 Ti. All high performance-to-value cards, most of all the 1080 Ti.

The last true Titan was the Titan RTX, before Nvidia shifted to the *090 naming convention. Perhaps for consistency? To obfuscate? All I know is the 3090 was practically DOA, and the 4090 at least put up a fighting chance in terms of Titan-class performance... on paper the 5090 looks like a proper Titan-class performer with a Titan-class price, but the benchmarks will tell the real story.
 
Nvidia has been on its own tick-tock cadence since Turing.

Turing - New feature set, poor performance increase, poor MSRP Pricing

Ampere - Same feature set, poor performance increase, good MSRP pricing

Ada - New Feature Set, good performance increase, poor MSRP pricing

Blackwell - Same feature set, poor performance increase, good MSRP pricing (at least it's not a $1,500 5080).

Nvidia knows the only people it has to sell to are people that already own Nvidia. No real market share gains to be made vs AMD.

So they do a performance gen for the people that will pay and a price gen for people that won't.
 
What will be interesting to me is how much straight uplift the RTX 5090 brings over RTX 4090. With the massive amount of cores and added VRAM, I would hope that we are looking at around a straight 50% generational uplift at least. (RTX 4090 managed about a 65% straight uplift over RTX 3090 Ti per Internet). I'll be very curious to see what reviewers actually find in their tests and the community's reaction to it.
The RTX 5090 is essentially two RTX 5080s in SLI.
 
Like it or not... this is the future:

1. Games are only getting more demanding with 4K/8K, ray tracing, massive open worlds, and complex physics.
2. Traditional brute-force rendering is hitting limits: power, heat, manufacturing.
3. AI is stepping in to handle the load smarter, not harder.
4. RTX Neural Shaders are a game-changer; it’s like pre-rendered cutscenes but happening in real time. Crazy efficient and ridiculously good-looking.
5. NVIDIA is leaning hard into neural rendering and frame generation, and honestly, they’re miles ahead in AI.
6. Blackwell GPUs are the start of GPUs that are smaller, cooler, and smarter, with AI doing the heavy lifting.

This is NVIDIA, after all, the leader in AI. Whether you love it or hate it, this is the direction gaming GPUs from NVIDIA are heading. While adoption rates for DLSS 4 may be slower at first, they’re bound to pick up over time. With this approach, I can see near life-like visuals in games becoming a reality much sooner than with traditional rendering methods.
ChatGPT, is this you?!? 🧐
 
I'm planning a big upgrade this year that I intend to last 7-10 years. I'm coming from a Ryzen 3600X and RTX 2070 Super; I just need to figure out whether I'll be getting a 5080 or a 5090, and if the latter is really worth the extra cost....

I'm in a similar boat. I got a 2080 Ti and 2950X in 2019. I'll probably get the 5090 and 9800X3D. If I hold onto it for a similar amount of time, ~5-6 years, the costs (including new mobo, RAM, etc.) will average out to ~$1,000 per year or less. Not so bad.
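For what it's worth, the rough math behind that per-year figure (every price below is a placeholder estimate chosen for illustration, not a quote, apart from the $2,000 RTX 5090 MSRP from the announcement above):

```python
# Hypothetical upgrade budget amortized over the years the build is kept.
# All prices are placeholder estimates except the announced RTX 5090 MSRP.
parts = {
    "RTX 5090": 2000,
    "9800X3D": 480,       # placeholder
    "motherboard": 350,   # placeholder
    "RAM": 150,           # placeholder
    "PSU": 250,           # placeholder
}
total = sum(parts.values())
for years in (5, 6):
    print(f"${total:,} over {years} years = ${total / years:,.0f}/year")
```

Even with a pricier board or PSU, that stays under the ~$1,000/year ceiling mentioned above.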
 
Crazy that the 2080 Ti, with its dedicated Tensor cores, had a higher delta gain over the 1080 Ti. Hey, at least we can rely on smoke and mirrors to make up for the marketing and hype shortfalls.
 
