Nvidia's RTX 5080 is Actually an RTX 5070 - Wait, What?

NVIDIA has always priced its GPUs as high as the market would allow. Even when AMD offered competitive alternatives at lower prices, consumers still overwhelmingly chose NVIDIA. The brand dominance, superior software ecosystem, and stronger marketing made NVIDIA the default choice for most buyers.

AMD, historically, has struggled to truly surpass NVIDIA or Intel. While they’ve made strides with their Ryzen CPUs, the generational uplift from the 7000 series to the 9000 series isn’t groundbreaking. The performance improvements are incremental at best, and history suggests that AMD’s current lead in CPUs won’t last long.

In the GPU space, AMD has failed to provide meaningful competition and has basically given up. Without true competition, NVIDIA has no reason to lower prices. Coupled with the "I want it" or "just buy it" mentality, that is the real issue: competition does drive prices down, but right now there simply isn't any, and as long as everyone keeps opening their wallets to Nvidia, this will not change.

But I digress; it has been shown that Nvidia doesn't really care about the consumer GPU market anymore. They give crumbs, and out come the credit cards.

Excellent analysis. One of the issues with AMD and Intel is that they're focusing on 2-3 different things and are thus spreading themselves too thin. How can AMD do CPUs, GPUs, and SoCs? They're being a jack-of-all-trades semiconductor company and master of none.

Nvidia does one thing and only one thing: GPUs. And they have mastered that, from the software to the hardware, to the point where they can charge $1k for a 70-series card that would've been a 60-series card last gen.
 
Ultimately, what most of us want is actually an RTX 4060 with 12/16 GB of VRAM at existing 4060 prices.
That would be far more exciting...

AMD is coming out with an APU (for the now two-year-old AM5 platform) that will have 5050 Ti-level performance, no discrete GPU needed. And AMD already has mobile APUs that are close to discrete cards (4060?).

Most won't need a discrete GPU in the near future.
 

Bruh... people in my guild were buying Radeon 6900 XTs for $749, while others were buying 3080s for twice as much... lulz!

Just like the RTX fans who bought a 4080 two years ago instead of a 7900 XTX... lulz!


AMD has provided so much COMPETITION that twice now Nvidia has had to rename cards. What AMD doesn't provide is marketing and fake frames!
 
We know that Nvidia already tried this with Lovelace but rolled back the planned 12 GB RTX 4080. This time they pushed ahead with the plan. We really need an AMD part that is like 85 percent of the 5080 but half the price.

I think back to the Nvidia Tesla era, when they had no competition and were asking $650 for the GTX 280 at launch, which was ridiculous for the time. Only for AMD to launch the 4870 a few weeks later for $300, typically within ~15 percent on the titles it lost and just as fast on many others.

Within three months the GTX 280 was $400 at MSRP and usually retailed below that. It's a long-faded memory, but there has to be a disruptor for Nvidia's behaviour to change.
Except... we later found out that instead of the $6 million cost DeepSeek quoted, it was actually more like $500 million to $1.6 billion!

https://www.cnbc.com/2025/01/31/deepseeks-hardware-spend-could-be-as-high-as-500-million-report.html

I remember, I got the 4850, but the 1 GB version! All other GPUs at the time were 512 MB. That 4850 lasted a helluva long time, OC'd to the gills.
 
Bruh...
Show some sources other than "my guild all bought 6900 XTs."

Don't confuse me with an AMD or Nvidia shill; I am neither. I have an Arc A750, but that does not make me an Intel fan.

But if you like, I will give you a long list of reasons why people buy Nvidia; just ask.

Bottom line, plain and simple: if AMD or Intel want to compete in the GPU market and gain market share, they need to step up. It can make you upset, but it doesn't change the facts.

But I digress; I guess we will have to wait and see what happens in the next few months with the 70 series and the 9070.

By the way, do you like the new naming scheme AMD has... the one that is just like Nvidia's?

C'mon, bruh!
 
That's total BS. AMD finally resolved the issues with AVX-512 power consumption, something Intel solved by, well, disabling AVX-512. Intel can start catching up to AMD by enabling AVX-512 again.

AVX-512 has always offered huge performance improvements; however, previous implementations had serious limitations and/or power consumption was a serious issue. AMD solved both problems, so saying the roughly 40% difference between the 7000 and 9000 series isn't groundbreaking is just BS. Oh yeah, enabling AVX-512 takes a few seconds for developers, so that's not an issue either.
You bring up some fair points, especially about AMD addressing AVX-512's power consumption challenges. Intel disabling AVX-512 on consumer Alder Lake and Raptor Lake chips was definitely a controversial decision, but it wasn't without reason.

Power draw and thermals were a real concern in consumer markets, and Intel focused on efficiency and hybrid architecture instead. Still, AVX-512 is a beast for workloads that can take advantage of it, no doubt about that.

However, it's worth noting that even though AMD made strides with their Zen 4 AVX-512 implementation, it still shines mainly in specific workloads like scientific computing, deep learning, and encoding tasks. For typical gaming or general consumer applications, the real-world benefits are far more limited. The 40% performance difference between the 7000 and 9000 series you mentioned is impressive but situational; it depends heavily on the workload and how well AVX-512 is used in the software.

And yeah, enabling AVX-512 can be simple for some developers, but that also depends on the complexity of the software stack and the target audience. It's not always just a "few seconds" in every case.
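
To make the "depends on the target audience" part concrete, here is a rough sketch of how a developer can ship AVX-512 without breaking older CPUs, by letting the compiler build several versions of a hot loop and pick one at load time. This assumes GCC or Clang on x86-64, and the function and variable names are made up for illustration:

// Hypothetical example: one scalar loop, cloned by the compiler for
// several instruction sets. The dynamic loader picks the best clone for
// the CPU it runs on, so AVX-512 is used on Zen 4/Zen 5 and skipped on
// older hardware. Assumes a reasonably recent GCC or Clang on x86-64.
#include <stddef.h>

__attribute__((target_clones("avx512f", "avx2", "default")))
void scale_add(float *dst, const float *src, float k, size_t n)
{
    // Plain C; at -O3 each clone is auto-vectorized with the widest
    // registers its target allows (512-bit for the avx512f clone).
    for (size_t i = 0; i < n; i++)
        dst[i] += k * src[i];
}

Compiled with a plain -O3, there are no separate source paths to maintain, which is roughly why the "few seconds" claim isn't crazy for the simple cases.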
 
Intel's decision to disable AVX-512 on some of the Lakes was not about power consumption but because having two different architectures on the same package is a problem. And having different architectures AND different instruction sets on the same package is an even bigger problem.

You of course mean Zen 5. The problem is that those "typical applications" do not use AVX-512 at all. Another problem is that with Zen 5 there are basically no valid reasons not to use AVX-512. It's just that developers are so stupid and lazy that they don't want to use AVX-512, because they tested it on some 2016 Intel CPU where, thanks to the high power consumption, the CPU was indeed slower with AVX-512 enabled.

Enabling AVX-512 takes a few seconds. It may not always give huge benefits, but with Zen 5 it's really hard to see many scenarios where AVX-512 would make the software slower. So why not use it?

This tweet pretty much tells it all:
CineBench R23 does not use AVX-512. Maxon has told me that the benefit of using AVX-512 is negated by the clock-down required to keep everything stable, so it isn't used. I'm told this is also why it is disabled in Intel's Embree, (which Cinema4D uses).
Yeah, because some Intel trash sucks with AVX-512, let's not use it with Zen 5 either. 👍👍
 
Intel's decision to disable AVX-512 on certain platforms was indeed more about architectural consistency and design complexity than purely power consumption. Having different microarchitectures on the same package complicates everything from scheduling to cache coherency and instruction set management. Disabling AVX-512 across the board simplifies validation, reduces potential performance inconsistencies, and avoids support headaches.

Regarding Zen 5 and AVX-512?

You're absolutely right: Zen 5 changes the game. AMD's implementation of AVX-512 is much more efficient than on the early Intel CPUs that struggled with it. Those older experiences may have scared off developers, but Zen 5 offers a far better balance between performance gains and power consumption.

It's a bit unfair, though, to call developers "stupid and lazy". Many developers just haven't had a good reason to adopt AVX-512 yet, because most consumer-level applications don't demand it. This could change with Zen 5, especially for workloads like scientific computing, video processing, AI inference, and cryptography, where AVX-512 shines.

The real opportunity here is to educate and encourage developers to revisit AVX-512, especially now that it's practical and more consistent on platforms like Zen 5. Once tooling, libraries, and compilers catch up and make it easier to adopt, we'll likely see much broader usage.

Encouragement is key. Will it happen? Time will tell; for consumer applications, probably not.
 
For hybrid architectures, the decision was mostly about reducing problems. For some others, it was just because the power consumption was not worth it.

The problem is, there are basically no drawbacks to enabling AVX-512 on Zen 5. There were many drawbacks on older CPUs, but Zen 5 changes that. And yes, I call them lazy, because adding one extra compiler switch takes seconds. I agree that improvements that would be genuinely problematic or hard to implement can be left out. But for just a compiler switch, yes, I call them lazy for good reason. Of course, I'm talking about somewhat modern software that is still updated; for legacy software I agree that enabling AVX-512 is not worth it.
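
For what it's worth, the "one extra switch" in question looks roughly like this. A sketch only, assuming GCC or Clang; the file and function names are made up, and the Zen 5 target (znver5) needs a very recent compiler:

/* Same scalar C source, just rebuilt with a target that includes AVX-512:

     gcc -O3 -march=znver4 -c sum.c        # Zen 4, AVX-512-capable target
     gcc -O3 -march=x86-64-v3 -c sum.c     # AVX2-only baseline, for comparison
*/
#include <stddef.h>

unsigned int sum_u32(const unsigned int *x, size_t n)
{
    unsigned int s = 0;
    // Nothing AVX-512-specific in the source; with -march=znver4 the
    // compiler is free to use AVX-512 instructions when it vectorizes
    // this at -O3 (add -mprefer-vector-width=512 if you want it to
    // favour full 512-bit vectors).
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

The catch, as noted above, is that a binary built this way won't run on CPUs without AVX-512, which is where per-function dispatch (like the earlier target_clones sketch) or separate builds come in.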

It will change, but considering how easy a change we are talking about, adoption should be much faster.
AMD is coming out with an APU (for the now two-year-old AM5 platform) that will have 5050 Ti-level performance, no discrete GPU needed. And AMD already has mobile APUs that are close to discrete cards (4060?).

Most won't need a discrete GPU in the near future.
Yeah, and that will also mean AMD's discrete share goes even lower. And then everyone is "thinking" discrete = at least a $1000 card, when it can actually be $40 trash.

Discrete share is basically one of the stupidest metrics, since Nvidia has 0% share outside it.
 
I had a personal experience with the RX 5070/5080 issue last year. I had a system with an RX 5070 that ran pretty well for my uses. I intended to upgrade from an old NVIDIA GPU on a small system, so I ordered three RX 5080s. One was unusable, another had poor performance, and the third was obviously not up to 5080 level, with an insufficient power connector.
 
This analysis isn't entirely accurate or fair to Nvidia. The fact is that they chose to elevate the high-end card to a new level (and a new official price point, though it's not actually more than people were paying for the 4090) while making only incremental improvements on the rest of the lineup. The larger price spread between the 5080 and 5090 reflects that. It also leaves room in the lineup for a 5080 Ti, should they choose to launch such a product.
 
I applaud Tim on this RTX 50 Series breakdown.
It's nice to see nGreedia getting its dues.


Quick note, and not sure why this wasn't mentioned right off the bat with Nvidia's newest Blackwell architecture: the $1999 RTX 5090

^does not use the full GB202-300-A1 GPU die. (It's kind of implied, but not mentioned or even graphed.)

The RTX 5090 will have only 170 SMs enabled out of the total of 192 SMs and will feature only 21,760 cores instead of the full 24,576. That's an ~11.4% reduction, a tad more than the ~11.1% reduction of the 4090 versus its full Ada Lovelace AD102 die.
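
Quick sanity check on those cut-downs, using the figures above (and the 4090's well-known 16,384 of 18,432 cores):

1 - 21,760 / 24,576 ≈ 11.46%   (5090 cores vs. the full GB202)
1 - 16,384 / 18,432 ≈ 11.11%   (4090 cores vs. the full AD102)

So the 5090 is indeed trimmed slightly harder than the 4090 was.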

The perfect 5090 dies will probably be sold as a future Quadro card.
 
This happens all the time. It was the same with AMD's names. If they had named it right, it would have been a natural growth from last gen.

7900 XTX should just have been 7900 XT
7900 XT should have been 7800 XT
7800 XT should have been 7700 XT

and so forth... Then it would have made sense compared to 6900 XT, 6800 XT etc.
 
This is why I'll stick to my 7900 XTX for quite some time. Maybe I am mistaken, but IMHO it's basically the same as the 5080 once AMD Fluid Motion Frames 2 is factored in. Performance is roughly the same, and the upscaling and frame generation technology is similar.
I will skip the 5000 series, as I did the 4000 series, and as I will the AMD 9000 cards.
These days I *never* buy a brand-new GPU from a new generation. I always wait at least one generation and buy second-hand. This way, I never pay the premium asked by companies like Nvidia. And frankly, video cards are a *luxury*, and performance is good even if you have to reduce the quality settings a little.
Buy a good 7900 XTX second-hand; you will not regret it! And I have had Nvidia cards for the last 15 years or so (my last AMD card was a 7970 GHz Edition, if I remember correctly).
Inflation plus greed puts the customer in this situation, but I will not play this game. I could buy a 5090; I am lucky enough to have that kind of money, but I simply will not, as the money will be better used for something else for my family/house/whatever.
 
This may sound a little cheesy, but if AMD plays this correctly, they can benefit from this. No question, there is no GPU that can match the xx90 series, but that is only for a certain market; the average person won't go out and get an xx90-series card. For us, the 4090 is 45k, the 4080 is 26k, and the 7900 XTX is 19.2k. If you favour ray tracing and DLSS, then obviously the Nvidia cards make more sense, but at a hefty price tag. I personally prefer DLSS, but I would rather go with the 7900 XTX, as I am not willing to pay a premium just for DLSS. So for me the AMD card makes more sense. If AMD can drop their new GPUs at a decent price, they can benefit from Nvidia's high pricing. Say, for example (not sure about the prices, so I'm guessing), if they drop the 9070 for 500 USD, then it will make a lot more sense to go with the AMD card (wishful thinking). There is also the rumour of an X3D model, so we will see how that pans out.
 

We know that Nvidia already tried this with Lovelace but rolled back the planned 12 GB RTX 4080. This time they pushed ahead with the plan. We really need an AMD part that is like 85 percent of the 5080 but half the price.

I think back to the Nvidia Tesla era, when they had no competition and were asking $650 for the GTX 280 at launch, which was ridiculous for the time. Only for AMD to launch the 4870 a few weeks later for $300, typically within ~15 percent on the titles it lost and just as fast on many others.

Within three months the GTX 280 was $400 at MSRP and usually retailed below that. It's a long-faded memory, but there has to be a disruptor for Nvidia's behaviour to change.

The RX 9070 XT might be the answer: it should be as fast as the RTX 4080 in rasterization, and apparently it may not be that far behind in RT.

It's become clear that Blackwell brings pretty much NO changes to the raster architecture, focusing instead on AI; no wonder they didn't disclose shader performance, only RT/Tensor numbers.

Not all is bad though: supposedly Neural Shaders will be a game changer, giving huge performance/IQ gains, but I'm pretty sure that by the time they're implemented, the RTX 60 series will be on the horizon.
 
This is such a dumb article. Nvidia is fixing the ridiculous pricing schemes of the past.
Go to a grocery store and compare the price of 5 lbs of sugar vs. 1 lb.
Guess what? You get much better value the more you spend. That is how it goes for most things in the market, so now the 5090 is positioned to offer a lot more for the money instead of you paying a LOT more for a bit more performance.
Doesn't take a genius to understand that concept.
Except... we're NOT talking about sugar here... So next time, Nvidia will sell a 6080 with GTX 1080-level performance at $1000 and the 6090 at $3000. Yummy...
 
Blackwell is just like the Ada Lovelace architecture... a flop at gaming!
Great for CUDA work, though. But most of those people standing in line are wasting their money if they are gamers. Nvidia is a cult for them.


In March, a reckoning is coming, and it will not bode well for fanboyism, as AMD's Radeon RX 9070 XT will do 5080-level gaming for 60% less money, minus 12%-ish performance.

Or loosely put... what crackhead would pay that much more for a 12% performance gain?
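
Taking those rumored figures at face value, the value math is blunt:

0.88 / 0.40 = 2.2   (88% of the performance at 40% of the price, i.e. roughly 2.2x the performance per dollar)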



AMD's stance in 2025:
- AMD cancelled Navi 41 (i.e. a 9090 XTX) & Navi 42 (i.e. a 9080 XT) seven months ago,
because the RDNA4 architecture is powerful enough that, shrunk down to Navi 44 size/wattage (i.e. the RX 9070 XT), it serves 80% of the mainstream gaming market. And the N44 chip is so small and cheap to make that Nvidia will not be able to compete with AMD's price/performance.

- And for the high-end pro gaming market? After next month, AMD will be focusing on that segment with their top-tier chiplet parts coming late this year.

- And for the low-end gaming market? AMD will be releasing gaming APUs for the AM5 socket & all-in-one systems/consoles/handhelds, etc.
 
So it is more or less an XTX with a price tag of like 550-600 bucks? Dunno man, it sounds too good to be true.
 
Yes (& no).

Yes,
What I am hearing is that the Radeon RX 9070 XT will ring in between the 7900 GRE and "touching near" the XTX in pure raster. So AMD's new "mid-tier" RX 9070 XT gaming card (when it lands in March) won't infringe on the XTX's "flagship" status and top-tier space!

Aftermarket cards will add overhead!


(And No),
We HAD expected these cards to land at $550-$600... but given the piss-poor performance uplift of Blackwell & the lack of uplift for the $999 RTX 5080 (over the 4080), AMD was forced to hold back until March, stockpile RX 9070 XTs...

...and RAISE the price!


Rumor is, AMD is going to celebrate RDNA4 by allowing Vendors a higher MSRP than intended, cuz they are going to sell like HOTCAKES regardless of price.
 
Discrete share is basically one of the stupidest metrics, since Nvidia has 0% share outside it.

Though I guess Nintendo Switch implies some Nvidia APU share?

Personally, the 9070 XT is a day-one purchase for me; my last Nvidia card was a 970 or something...
 
No. Discrete and overall share are not about consoles. So Nvidia has basically 0% share outside discrete, and AMD gains much more overall share now that every CPU has at least some kind of GPU.
 