Nvidia RTX 5000 Blackwell flagship could be 60% or 70% faster than RTX 4090

nVidia pricing has become outrageous so I'm glad that Intel has joined the fray. It's gotten so out of hand that I skipped several generations between GPU upgrades. I skipped the nVidia 2000 series cards and went from a GTX 1080 to an RTX 3080. I have no desire to move on to a 4000 or 5000 series GPU, and I will most likely skip nVidia altogether from here on out and give Intel a try. The Intel B980 looks very promising and at the rumored price of $450 upon release it won't break the bank. Sure, it probably won't best nVidia's flagship cards but it'll be "good enough".
Pity SLI is dead.
 
You caught me; I'm a fry cook with bad acne. However I'm working on a little plan -- involving industrial lasers and the foreheads of certain species of hammerhead sharks -- that will change all that. World domination! ...with all your videogames locked to a permanent 5 fps. Who'll be laughing then, eh?
Fair enough. I read some of your other posts and indeed you do know what you are talking about. One of your posts, for example, listed several well-known newspaper articles, with the titles so anyone could check them. All good reads, and relevant.

As mentioned, I have read a few of your other posts after the one I quoted, and they are all good stuff. Actually, more than that: professional, I would say.

So, I owe you an apology. Sorry.

BTW: I logged on now to say similar to above, but you had already replied.
However, it worked out well. I mean, your reply is suitably sarcastic and I deserved it.

It actually made me laugh on second read (after the head smack, and ouch, cringe, yes I deserved that). A nice touch.

Looking forward to reading more of your future posts. (With proper comments/replies if I have anything to add or ask.)
 
Looking forward to reading more of your future posts. (With proper comments/replies if I have anything to add or ask.)
I look forward to it; anyone with the intellectual honesty to publicly correct a misassumption is someone well worth debating. Cheers!
 
I'll be eyeing the 5070 if it can double the performance of my 3080 12GB at 1440p x 175Hz on high/ultra settings and has a lot more VRAM for around $1000. I would flog that card for as long as it lasts! Normally I always go with the xx80 series as I feel it has always been the enthusiast's sweet spot for value and performance (at retail $), but I'm open to new ideas - regardless of the model number.
Just like how Nvidia marketed Ada at launch, they will tell you it is 3x faster when you enable DLSS 3.0 Frame Generation. So yeah, it's already there, but with software trickery. Jokes aside, I am skeptical we will see a big jump in Blackwell's performance, and I am pretty sure that Nvidia will gimp the card somehow by limiting the VRAM, likely to 12GB like now. So even if it can run faster, you are still going to be VRAM bound.
 
The 4090 was about 60% faster than the 3090 mainly because of the node.

They migrated from Samsung 8nm to TSMC 4nm. The move also brought a good boost in frequency.

The 5090 will probably be another 40%, like Turing and Ampere were. They can't make bigger dies; they are already making the biggest dies they are able to.

The 4090 is more like 75% faster in raster, plus the option for DLSS 3 Frame Gen and even faster RT/PT; no comparison really. Sub-2 GHz average boost vs ~3 GHz average boost. The 4090 is a whole other beast than the 3090. The 4090 can easily do path tracing, while the 3090 crumbles doing it, just like every single AMD GPU.
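To put rough numbers on that node-and-clock point, here is a back-of-envelope sketch (core counts are public specs; the clock figures are only the approximate in-game averages mentioned above, not measured results):

```python
# Back-of-envelope only: theoretical FP32 throughput ~ shader count * clock.
# Core counts are public specs; clocks are rough in-game averages (assumed).
def tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * clock_ghz * 2 / 1000  # 2 FP32 ops per core per clock (FMA)

rtx_3090 = tflops(10496, 1.7)   # Samsung 8N-class node
rtx_4090 = tflops(16384, 2.7)   # TSMC 4N-class node

print(f"3090 ~{rtx_3090:.0f} TFLOPS, 4090 ~{rtx_4090:.0f} TFLOPS, "
      f"ratio ~{rtx_4090 / rtx_3090:.1f}x")
# Paper ratio comes out around 2.5x, yet measured raster gains sit nearer
# 60-75%: memory bandwidth, CPU limits and engine behaviour eat the difference.
```

The gap between the ~2.5x paper ratio and the 60-75% real-world gain is why these "X% faster" rumors should always be read as raster benchmarks, not spec-sheet math.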

The 3090 was made on Samsung 8nm, which is closer to TSMC 10nm. The only good thing about Samsung 8nm is the price. Nvidia knew TSMC was going to be packed and went cheap and available instead. AMD struggled with output because TSMC was too busy during the mining craze. Smart decision by Nvidia.

And then Nvidia jumped back on a more expensive node with 4000 series, a node AMD could not afford to use yet.

4nm TSMC is just a whole other level compared to Samsung 8nm. Much better and more expensive process. RTX 4000 series is way more efficient than 3000 series.

Nvidia will also use 3nm for GPUs way before AMD. Nvidia has the money and customers able to pay.

I expect 5090 to be 50-60% faster than 4090. 3nm TSMC + GDDR7 = It will be very fast and very expensive. AMD will have nothing that competes in this bracket. I expect they won't even match 4090 performance for years.

I don't even see AMD competing with the 5080 in the next generation, since the Radeon 8000 series won't get any true high-end SKU. They will probably bring 7900 XT / 4070 Ti performance with the best 8000 series card. It will be a generation focused on regaining market share, just like the Radeon 5000 series, where the 5700 XT was the fastest SKU.

The high-end GPU market is just a niche market for AMD, and they should focus on low to mid-end and regain market share instead. AMD should forget about true high-end. AMD GPUs priced above 700-800 dollars have minuscule sales numbers and Nvidia pretty much owns this market. That is just reality.

AMD needs to bring something TRULY GREAT while IMPROVING features a lot, especially FSR (it needs to match DLSS/DLAA in both performance and ESPECIALLY image quality while moving), to attract people in the high-end space.
 
Perhaps these will actually be feasible for 8K with DLSS Performance instead of Ultra Performance. Or native 4K rendering / 4K DLSS Quality in a lot of games. Plus, even the 4090 tends to run into CPU bottlenecks at 1440p and even 4K in a number of games, so this card will probably be pointless for anything less than 4K.

You simply pair it with a fast CPU then. The 4090 is still the fastest card by far at 1440p, beating the 4080 and 7900 XTX by 20%.

It beats them by even more at 4K/UHD or higher; more like 25-30% at 4K-5K.

Even at 1080p, a 4090 is still 15% faster.

Hitting a CPU bottleneck occasionally doesn't mean a faster card is useless. You can use DLDSR anyway to make 1080p look stunning with downsampled 4K/UHD, meaning a fast card is not wasted even though your monitor is using a low res.

This is true for 1440p as well. DLDSR at 2.25x and you will get vastly better visuals, which will look close to 4K/UHD, while the performance hit is more like 1600p-1800p.
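For reference, the 2.25x factor is an area multiplier, so the arithmetic behind those resolutions looks like this (a minimal sketch of the implied math only, not a benchmark):

```python
# Minimal sketch: DLDSR's factor multiplies the pixel *area*, so each axis
# scales by sqrt(factor); the result is the internal render resolution that
# gets downsampled back to the display resolution.
from math import sqrt

def dldsr_internal_res(width: int, height: int, factor: float) -> tuple[int, int]:
    scale = sqrt(factor)
    return round(width * scale), round(height * scale)

print(dldsr_internal_res(2560, 1440, 2.25))  # (3840, 2160): 1440p renders internally at 4K
print(dldsr_internal_res(1920, 1080, 2.25))  # (2880, 1620): 1080p renders internally at 1620p
```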
 
Nvidia has a choice and it basically comes down to this... take a job that pays $20/hr (consumer GPUs) or take a job that pays $100/hr (professional GPUs for AI).

If it were you, you would take the $100/hr job, so why would you believe Nvidia would act any differently? They are going to take the job that pays the most, just like anyone and everyone else would.

I agree, but if you can take the $100/hr as your main job and still do the $20/hr when you are idle from the main one, you make even more money... And a company's end goal is to maximize profit.

I just think that someday/somehow Nvidia will split into AI and gaming divisions, so gaming doesn't slow down its AI division... Somewhat similar to what AMD did with its operations, splitting chip design from foundry operations; Intel is trying this move now too.

As for AMD, it is nowhere close to needing an AI/gaming split because AMD's AI business is still too small; it won't take resources away from AI, at least not yet.

NVidAI lol
 
I agree, but if you can take the $100/hr as your main job and still do the $20/hr when you are idle from the main one, you make even more money... And a company's end goal is to maximize profit.

I just think that someday/somehow Nvidia will split into AI and gaming divisions, so gaming doesn't slow down its AI division... Somewhat similar to what AMD did with its operations, splitting chip design from foundry operations; Intel is trying this move now too.

As for AMD, it is nowhere close to needing an AI/gaming split because AMD's AI business is still too small; it won't take resources away from AI, at least not yet.

NVidAI lol
They can't, since their consumer and enterprise GPUs are using the same nodes at TSMC with limited output. Selling cheap gaming GPUs just takes away from their enterprise output, which is pointless right now.

AMD has had the same problem many times, and this is probably also the reason why AMD down-prioritizes GPU output.

Nvidia will probably jump on Intel 18A as soon as possible for gaming, at least for the low to mid-end segment. Or just use an older TSMC node. No need to use the best and most expensive node for this.

Nvidia beat AMD last gen while using a cheap Samsung 8nm node. Nvidia doesn't need the best possible node to beat AMD, it seems.

However, high-end stuff like the upcoming 5090 and 5080 will use the best node for sure, and they will be expensive in the beginning; many gamers are willing to accept that.

Let's see if AMD steps up in the gaming GPU market, or drops it to focus on AI as well. It seems that the Radeon 8000 series won't be a threat to Nvidia at all.
 
They can't, since their consumer and enterprise GPUs are using the same nodes at TSMC with limited output. Selling cheap gaming GPUs just takes away from their enterprise output ...
While true today, it's a basic axiom of economics that, in a free market, supply expands to match demand. We're about at the point anyway where GPUs will diverge not only in marketing terms, but in technical ones as well. In 5-10 years, a gaming GPU will likely be an entirely different beast from an AI chip, rather than simply a cut-down version of the same die.
 
They can't, since their consumer and enterprise GPUs are using the same nodes at TSMC with limited output. Selling cheap gaming GPUs just takes away from their enterprise output, which is pointless.

AMD has had the same problem many times, and this is probably also the reason why AMD down-prioritizes GPU output.

Nvidia will probably jump on Intel 18A as soon as possible for gaming, at least for the low to mid-end. Or just use an older TSMC node. Nvidia beat AMD last gen while using a cheap Samsung 8nm node. Nvidia doesn't need the best possible node to beat AMD, it seems.

However, high-end stuff like the upcoming 5090 and 5080 will use the best node for sure.

You've got a point... But perhaps in time they'll start to capitalize on that, making AI chips on the latest node and keeping gaming on a node two or three generations older... If it's an advance over the previous-gen GPU they would still make a profit on the GPU business, while making use of their chip-making capacity.

I don't underestimate these greedy corps' capacity to find ways to make more profit... And I want my GPUs...

Still, you've got a point... Perhaps we will get to a point where the best GPU may be an iGPU... perhaps when iGPUs get good enough.
 
While true today, it's a basic axiom of economics that, in a free market, supply expands to match demand. We're about at the point anyway where GPUs will diverge not only in marketing terms, but in technical ones as well. In 5-10 years, a gaming GPU will likely be an entirely different beast from an AI chip, rather than simply a cut-down version of the same die.
That's true, and there are more foundries to choose from.

Intel 18A is probably going to be popular, unless it fails. I could see Nvidia using Intel 18A for gaming stuff in the future and splitting production, just like they "somewhat" did with Ampere / 3000 series and Ada / 4000 series at Samsung and TSMC.

More foundries = higher output.

They still produce the RTX 3000 series at Samsung as we speak. Mostly the 3050 and 3060 series though, including the 3060 Ti.
 
You've got a point... But perhaps in time they'll start to capitalize on that, making AI chips on the latest node and keeping gaming on a node two or three generations older... If it's an advance over the previous-gen GPU they would still make a profit on the GPU business, while making use of their chip-making capacity.

I don't underestimate these greedy corps' capacity to find ways to make more profit... And I want my GPUs...

Still, you've got a point... Perhaps we will get to a point where the best GPU may be an iGPU... perhaps when iGPUs get good enough.
Yeah true, I think they will do this, unless they are FORCED to use the best nodes possible to be able to compete.

I don't see Nvidia vanishing from the gaming market AT ALL. It's still an important market for them; AI is just the prime focus right now.

The 5090 and 5080 don't really NEED TSMC 3nm, however I think 90% of Nvidia's output on this node will still be AI as long as demand is high. Meaning the 5090 and 5080 will be EVEN MORE expensive (considering 3nm itself is expensive and GDDR7 is also expensive).

It could take years before AMD has something that can threaten the 5090 or even the 5080.

"Radeon 8700XT" looks to deliver 7900 XT / 4070 Ti performance or so. This will be far below 5080. I think 5080 will beat 4090 by 20-30%.
 
2024 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2023 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2022 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2021 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2020 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2019 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2018 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2017 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2016 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2015 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2014 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2013 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2012 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2011 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2010 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2009 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2008 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2007 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
2006 GPU XYZ *MIGHT BE* 30-6058783% faster !!11!! ... and then they're not
 
Another day, another round of next-gen overpriced GPUs... Thank you, AI bubble.

What amuses me is that they just sit at their meeting tables and think:
"Hmmm... how about a 30% performance uplift for the 5080 but a 40% price hike?"
"No, down it to 20%, keep the last 10% for the later Super model... The perf uplift, of course."

Not a bubble. Ever since the early GPU crypto days, the GPU for average consumers was doomed. This is the new normal, forever...

AMD doesn't care - they will tell you to buy a console and like it, and sell MI300+ for $15,000 each instead of crappy AMD GPUs.

NV doesn't care - they can afford not to care... at $30,000 per AI module and up to $100k per black-market module, NFG.

Intel doesn't care - the only reason Intel is in the GPU game is to figure out how to get into AI on their own.
 
It'll probably be a claimed 50% uplift on paper at release and turn out to be another 10-15% raw compute uplift like normal. The marketing nonsense has gotten out of hand since they can use upscaling technologies to make wild performance claims.
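As a purely illustrative piece of arithmetic (every factor below is invented for the example, not a real benchmark), this is how a modest raw uplift gets stacked into a headline multiplier once upscaling and frame generation enter the comparison:

```python
# Illustrative arithmetic only: every factor below is made up to show how a
# modest raw-compute uplift can be stacked into a big slide-deck number.
raw_uplift     = 1.15  # hypothetical ~15% more raw compute, gen over gen
upscaler_gain  = 1.7   # hypothetical fps gain from aggressive upscaling
frame_gen_gain = 1.8   # hypothetical fps multiplier from frame generation

headline = raw_uplift * upscaler_gain * frame_gen_gain
print(f"Marketed uplift: ~{headline:.1f}x")            # ~3.5x
print(f"Apples-to-apples raster uplift: {raw_uplift:.2f}x")
```

The only number that compares like for like is the raw uplift; the rest depends on the settings chosen for the marketing slide.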
 
Yeah true, I think they will do this, unless they are FORCED to use the best nodes possible to be able to compete.

I don't see Nvidia vanishing from the gaming market AT ALL. It's still an important market for them; AI is just the prime focus right now.

The 5090 and 5080 don't really NEED TSMC 3nm, however I think 90% of Nvidia's output on this node will still be AI as long as demand is high. Meaning the 5090 and 5080 will be EVEN MORE expensive (considering 3nm itself is expensive and GDDR7 is also expensive).

It could take years before AMD has something that can threaten the 5090 or even the 5080.

"Radeon 8700XT" looks to deliver 7900 XT / 4070 Ti performance or so. This will be far below 5080. I think 5080 will beat 4090 by 20-30%.
If most people are satisfied with the 4090's rasterization performance and Nvidia comes out with the revolutionary RT performance gain rumored at 2.5x by the usual leak channels, then the 5080 would probably be the go-to card, imo. It will probably bring significant performance gains at a minimal out-of-pocket cost above the 4090's resale value. If the 5080 can double the RT performance of the 4090 without frame gen, then I'll probably purchase this card.
 
Just like how Nvidia marketed Ada at launch, they will tell you it is 3x faster when you enable DLSS 3.0 Frame Generation. So yeah, it's already there, but with software trickery. Jokes aside, I am skeptical we will see a big jump in Blackwell's performance, and I am pretty sure that Nvidia will gimp the card somehow by limiting the VRAM, likely to 12GB like now. So even if it can run faster, you are still going to be VRAM bound.
Sadly, you're very likely right. :confused:
 
It'll probably be a claimed 50% uplift on paper at release and turn out to be another 10-15% raw compute uplift like normal. The marketing nonsense has gotten out of hand since they can use upscaling technologies to make wild performance claims.
Upscaling is here to stay; you might as well embrace it. Console and PC devs alike are diving in feet first. Native-res gaming with AA on top is not better. TAA, for one, is terrible.

Upscaling will replace AA eventually, since it's built in, plus sharpening. It has already replaced third-party AA in many new games. Most games today are developed with upscaling in mind.

The PS5 Pro will get DLSS-like upscaling. Probably Xbox as well. Until then, consoles use DYNAMIC RES, which is far worse than upscaling.

"Native res" gaming is not better than upscaling. It has tons of issues of its own, and most gamers need performance. If not, use DLAA or FSR Native; it WILL improve on native EVERY TIME.

I can enable DLAA and beat native quality VERY EASILY, or use DLSS Quality and get a 50-75% performance boost while beating native image quality in 9 out of 10 scenes as well. It simply looks better and sharper, and runs better.

The days of running native with bad AA on top are over, at least for most RTX users. AMD users are still stuck with FSR, which needs to be improved massively.

Most people who hate upscaling are AMD users who have only tested FSR, or who are thinking of DLSS 1.x.

Today is very, very different.

DLSS has improved MASSIVELY since release. FSR not so much, but it has still improved.
 
Just like how Nvidia marketed Ada at launch, they will tell you it is 3x faster when you enable DLSS 3.0 Frame Generation. So yeah, it's already there, but with software trickery. Jokes aside, I am skeptical we will see a big jump in Blackwell's performance, and I am pretty sure that Nvidia will gimp the card somehow by limiting the VRAM, likely to 12GB like now. So even if it can run faster, you are still going to be VRAM bound.
99% of PC gamers use 1440p or less and don't need more VRAM, and most 4K/UHD+ gamers want a 4090 because it simply beats everything else by a big margin.

The 4060 Ti 8GB and 16GB performed identically.

The 4070 Ti 12GB and the SUPER with 16GB performed the same, outside of a few edge cases where RT was enabled at native 4K/UHD, which is not even possible for the 4090 to do. Upscaling is needed. Alan Wake 2 is made with upscaling in mind and you should not be playing it at "native" res.

You should not believe AMD's VRAM marketing. In reality VRAM is not a problem OUTSIDE of rushed console ports (which, strangely enough, were sponsored by AMD).

TechPowerUp's conclusion on the 4070 Ti SUPER - "16 GB VRAM rarely makes a difference"

Their 4060 Ti 8GB vs 16GB test - "No significant performance gains from 16 GB VRAM"

But sure, keep believing VRAM matters a lot. In reality most games will be forced to lower settings because of weak GPUs (or enable upscaling). The 6700XT was meant to age well because of 12GB VRAM, but in reality it did not, because the GPU was too weak to utilize the VRAM in the first place. The GPU simply can't run high settings anyway in new and demanding games.

Forcing GPUs to max out VRAM on settings that NO-ONE expected them to run in the first place makes no sense.

Name ONE AMD GPU that can max out all games at 4K/UHD native with RT enabled. Does VRAM save you in this case? Nope, because their GPUs are too weak.

And this is why AMD talks about VRAM in the first place. They sadly have nothing else to talk about. They are not fooling many, though.

I had a 6800XT. Now I have a 4090. Had both since release, really. Not going back to AMD anytime soon unless they improve their feature set massively.

DLSS, DLAA, DLDSR, Reflex, Nvidia Filters and Freestyle, ShadowPlay, every single feature is just MUCH better on the Nvidia side. This is what you pay for.

If AMD was better, they would at least be priced on par or even higher. However, AMD prices them below Nvidia every single time.

AMD is a CPU company first and foremost. No one expected them to beat Nvidia in the GPU market and they don't. Nvidia pretty much dominates in all GPU markets: gaming, AI, and enterprise. This is just reality.
 
99% of PC gamers use 1440p or less and don't need more VRAM, and most 4K/UHD+ gamers want a 4090 because it simply beats everything else by a big margin.

The 4060 Ti 8GB and 16GB performed identically.

The 4070 Ti 12GB and the SUPER with 16GB performed the same, outside of a few edge cases where RT was enabled at native 4K/UHD, which is not even possible for the 4090 to do. Upscaling is needed. Alan Wake 2 is made with upscaling in mind and you should not be playing it at "native" res.

You should not believe AMD's VRAM marketing. In reality VRAM is not a problem OUTSIDE of rushed console ports (which, strangely enough, were sponsored by AMD).

TechPowerUp's conclusion on the 4070 Ti SUPER - "16 GB VRAM rarely makes a difference"

Their 4060 Ti 8GB vs 16GB test - "No significant performance gains from 16 GB VRAM"

But sure, keep believing VRAM matters a lot. In reality most games will be forced to lower settings because of weak GPUs (or enable upscaling). The 6700XT was meant to age well because of 12GB VRAM, but in reality it did not, because the GPU was too weak to utilize the VRAM in the first place. The GPU simply can't run high settings anyway in new and demanding games.

Forcing GPUs to max out VRAM on settings that NO-ONE expected them to run in the first place makes no sense.

Name ONE AMD GPU that can max out all games at 4K/UHD native with RT enabled. Does VRAM save you in this case? Nope, because their GPUs are too weak.

And this is why AMD talks about VRAM in the first place. They sadly have nothing else to talk about. They are not fooling many, though.

I had a 6800XT. Now I have a 4090. Had both since release, really. Not going back to AMD anytime soon unless they improve their feature set massively.

DLSS, DLAA, DLDSR, Reflex, Nvidia Filters and Freestyle, ShadowPlay, every single feature is just MUCH better on the Nvidia side. This is what you pay for.

If AMD was better, they would at least be priced on par or even higher. However, AMD prices them below Nvidia every single time.

AMD is a CPU company first and foremost. No one expected them to beat Nvidia in the GPU market and they don't. Nvidia pretty much dominates in all GPU markets: gaming, AI, and enterprise. This is just reality.
Most of this is true, but didn't Steve from Hardware Unboxed show that even though performance is not lost due to VRAM limitations, the engine resorts to potato-like textures that pop in and out? Let me know if you want me to find the specific video. It's definitely not AMD marketing and more of an Nvidia business tactic to force premature mid-life upgrade cycles. Why expect a consumer to upgrade once every 3 to 4 years when you can limit VRAM and force them to upgrade mid-cycle?
 
Most of this is true, but didn't Steve from Hardware Unboxed show that even though performance is not lost due to VRAM limitations, the engine resorts to potato-like textures that pop in and out? Let me know if you want me to find the specific video. It's definitely not AMD marketing and more of an Nvidia business tactic to force premature mid-life upgrade cycles. Why expect a consumer to upgrade once every 3 to 4 years when you can limit VRAM and force them to upgrade mid-cycle?
If Nvidia wanted people to upgrade faster, why would they allow DLSS to be used without RT enabled? AMD was in full panic mode when DLSS 2.x released, and FSR is still not even close to DLSS today.

Most RTX users are using DLSS to boost performance without RT enabled, and DLAA beats native every time if you solely want maximum fidelity.

Looking at the 3070 8GB vs the 6700XT 12GB at 4K/UHD (on high settings) in 2024 shows that the 3070 still wins in minimum fps with ease - https://www.techpowerup.com/review/sapphire-radeon-rx-7900-gre-pulse/38.html

The 3070 and 6700XT launched at identical pricing about 4 years ago.

Not that either GPU will do well at native 4K/UHD on high settings, but this just shows that 8GB is not a problem for many people, even in 2024, and that GPU power and features actually matter more for longevity.

You won't see a VRAM usage increase before 2028 on PC. This is when next-gen consoles hit.

You will still see horribly optimized console ports come out though, and these games are what mainly caused issues for people, at least until they are fixed.

A friend of mine uses a 3070. He has been more than satisfied and recently bought The Last of Us; he plays at high settings at 1440p without a single VRAM issue. He still gets higher fps than 6700XT users, who can't push ultra settings either because the GPU is too weak.

VRAM never futureproofed a GPU.
 
This rumored increase continues to go down. Now it's 60% faster than the 4090; just a few weeks ago the rumor was nearly 2x the 4090. This one sounds more realistic though, I just don't see the 5090 being 2x the 4090 even on 3nm. Also, I think the 5090 will be a ghost: a low-supply, extremely high-priced GPU created to hold the graphics crown, with most of the 102-series chips going to AI. Also, the 5080 going back to 'flagship' status on the 103 chip, and a 5080 Ti on a heavily cut-down 102 eventually.
 
This rumored increase continues to go down. Now it's 60% faster than the 4090; just a few weeks ago the rumor was nearly 2x the 4090. This one sounds more realistic though, I just don't see the 5090 being 2x the 4090 even on 3nm. Also, I think the 5090 will be a ghost: a low-supply, extremely high-priced GPU created to hold the graphics crown, with most of the 102-series chips going to AI. Also, the 5080 going back to 'flagship' status on the 103 chip, and a 5080 Ti on a heavily cut-down 102 eventually.
Yeah, the 5090 will probably be $1,999 on release, if not more. 3nm TSMC and 32+ GB of GDDR7 won't be cheap. I could see it hitting 60% over the 4090 for sure (in raster, probably more in RT and path tracing).

The 4090 is a 10% cut-down chip and we won't even see the full AD102 on a gaming GPU. It is simply not needed. AMD has nothing to threaten Nvidia in the high-end space and the Radeon 8000 series won't be any different (actually it will be even worse, since AMD said no true high-end SKU will be available in the 8000 series).

AMD won't even have an answer for the 5080 next gen. If AMD can compete with the 5070 series eventually, they will be fine; more is not really needed. AMD should focus on low to mid-end and regain market share rather than chasing the high-end market, which is a niche market for AMD anyway.

The best GPU is always going to be way more expensive. Just like luxury cars, or pretty much any enthusiast product. Many people are willing to pay a premium, even for something that loses its value faster than a high-end GPU.

Nvidia knows there is a market for $1,000-2,000 GPUs, and I bet if something in the $3,000-4,000 bracket existed, it would be selling too.

However, regular PC gamers, who mostly use 1440p or less, won't be needing the 5090 or even the 5080.

Like 99% of PC gamers use 1440p or less, and even several-years-old GPUs are doing well here. Add upscaling on top, and most are golden. PC gaming has never been easier to do.
 
Upscaling is here to stay; you might as well embrace it. Console and PC devs alike are diving in feet first. Native-res gaming with AA on top is not better. TAA, for one, is terrible.

It's not a matter of embracing it or not when people use it to erroneously compare against previous generations using garbage like frame gen and then claim that's a performance uplift. It's marketing BS designed to lie to consumers who don't know any better. As always, the reviews are where we'll finally find out what the actual performance uplift is.
 