The Real Nvidia GPU Lineup: GeForce RTX 5060 is Actually a Mediocre 5050

This is the second time you've proven that everything has been moved down two tiers. The 5090 is the only one that deserves to be ultra high end.

The 5060s are actually 5040s, with the 8GB model being a 5030 at best.
No, the 5060 is a 5060. Nvidia has decided; that's how it works.

It may be bad, but your argument is worse.
While you make excuses for their greed, Tim is totally correct.
 
This is the second time you've proven that everything has been moved down two tiers. The 5090 is the only one that deserves to be ultra high end.

The 5060s are actually 5040s, with the 8GB model being a 5030 at best.
While you make excuses for their greed, Tim is totally correct.
I'm not making excuses; I'm describing the exact reality: Nvidia decides what the product is. Period.

And I'm saying that as someone who hasn't used any Nvidia product for over a decade.

However, I'm surprised TechSpot posted such a bland article, with little to no technical content, neither from a financial point of view nor from a chip-development one. Normalize your table with this information; even I want to know:
Chip size vs. cost per wafer vs. total development cost vs. the dGPU market size over which to amortize the total.
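Something like this back-of-the-envelope sketch, for instance (every number below is a placeholder assumption, not a real figure; plug in whatever estimates you trust):

```python
import math

# Rough per-unit cost model -- all numbers here are placeholder assumptions.
WAFER_COST_USD = 16_000      # assumed price of a leading-edge wafer
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 181           # roughly an RTX 5060-class die
DEV_COST_USD = 2.0e9         # assumed total development cost for the family
UNITS_TO_AMORTIZE = 20e6     # assumed dGPU units over which to spread R&D

def gross_dies_per_wafer(die_area_mm2, diameter_mm=WAFER_DIAMETER_MM):
    """Classic gross-die estimate: wafer area term minus an edge-loss term."""
    radius = diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

silicon_per_die = WAFER_COST_USD / gross_dies_per_wafer(DIE_AREA_MM2)
amortized_rnd = DEV_COST_USD / UNITS_TO_AMORTIZE
print(f"silicon per die ~${silicon_per_die:.0f}, amortized R&D ~${amortized_rnd:.0f} per unit")
```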
 
When nVidia launched Titan-class (now x90) over a decade ago, the chief criticism from journos for that new performance class was that Titan was barely 5-10% faster than that generation’s x80 Ti part despite being 2x the price. Looks like nVidia, as of RTX 40, took those criticisms to heart, as seen in both the relative class shrinkage in x80 and below, and the increasing gulf in performance from x80 to flagship. Without a serious threat from a competitor, nobody should expect this to change from nVidia.
 
"The RTX 5060 would not have looked so bad as a relatively weak hardware configuration with just 8 GB of VRAM if it had been branded as an RTX 5050 and priced closer to $200."

Like I've been saying all along, thank you for backing it up with the numbers and graphs.

Nvidia has successfully convinced the vast majority to accept the new pricing, AMD has decided to follow Nvidia to the letter, and Intel doesn't want to compete at the higher end (yet).
 
I’m not here to bash Nvidia too much... I get why others do, but nothing in tech (or life) stays the same.

The percentage of CUDA cores, memory, and bandwidth in a GPU isn’t some fixed metric that should dictate comparisons across future models. Hardware evolves, and so do the performance trade-offs.

I think the RTX 5060 is fine where it is, but the Ti version should only come with 16GB; having an 8GB variant feels unnecessary, I do agree. Performance-wise, these cards aren’t terrible, but their value would feel much more reasonable if prices were just a bit lower, and if they actually sold at that price rather than being inflated through the usual pricing schemes we see today.
 
"The RTX 5060 would not have looked so bad as a relatively weak hardware configuration with just 8 GB of VRAM if it had been branded as an RTX 5050 and priced closer to $200."

There will never be another $200 GPU market; there is no profit at that level.

Like I've been saying all along, thank you for backing it up with the numbers and graphs.

Numbers and graphs might look convincing, but they don’t always tell the full story when comparing technology across different generations. Hardware evolves, priorities shift, and benchmarks that seemed meaningful years ago may no longer hold the same weight. Specs alone can’t capture improvements in architecture, efficiency, or real-world usability; sometimes the biggest advancements aren’t reflected in raw numbers at all.

Nvidia has successfully convinced the vast majority to accept the new pricing, AMD has decided to follow Nvidia to the letter, and Intel doesn't want to compete at the higher end (yet).

I can agree with this... it's not just AMD following Nvidia, but Intel as well, with pricing that’s becoming increasingly exorbitant. However, I don’t see Intel remaining in the discrete GPU market for long unless it can turn a solid profit. Without competitive performance and strong consumer demand, its presence in this space may be short-lived.
 
When nVidia launched Titan-class (now x90) over a decade ago, the chief criticism from journos for that new performance class was that Titan was barely 5-10% faster than that generation’s x80 Ti part despite being 2x the price. Looks like nVidia, as of RTX 40, took those criticisms to heart, as seen in both the relative class shrinkage in x80 and below, and the increasing gulf in performance from x80 to flagship. Without a serious threat from a competitor, nobody should expect this to change from nVidia.
If I remember correctly, the Titan series launched before Kepler's 700 series, and enthusiasts assumed the Titan would be some gaming monster when in reality it was more of a workstation card. I also seem to remember it was the Titan's price that really turned a lot of people off, especially with the legendary 780 Ti coming after.
 
I think everybody's forgetting one thing here.

This dirtball we live on has finite resources. Stuff doesn't just magically replenish itself overnight after we've dug it out of the ground (or wherever it originates).

Everybody seems to assume that the manic development pace of electronics is just going to continue in the same way forever... at exactly the same level of pricing per resources offered. It doesn't work like that.

As Loadedaxe quite rightly says, priorities change as expectations - and other tech - evolve. And as the finite resources of the planet dwindle, the same number of industry players fight among themselves ever more fiercely for their "slice" of what's left. Nobody wants to be left out as the "poor relation".....with the end result that prices just skyrocket ever more frantically.

And it's a vicious circle. The "glory days" of consumer GPUs may already be behind us, and the existing major players are doing their level best to wax fat on the reputations of past successes.

S**t happens, guys.

Miq.
 
The metric I favor is die size, since comparing to the flagship is more of a relative measure. For the fully enabled x60 cards, the die used to be around 220 mm² (660, 960), so yes, at 180 mm² the 5060 Ti has a small die. If Nvidia had built a GPU 25% bigger for the 5060, we would have seen consistent performance increases. The memory bandwidth is there and the memory is OK at 16GB, but without any competition there is no incentive to do so. Not even AMD is interested in selling cheap GPUs, since it can build three Zen 5 dies with the silicon of a single 9060 GPU (a quick check of that is sketched below). Probably the biggest problem is the lack of competition for TSMC: they keep raising prices on all nodes and keep production constrained.
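
A minimal sketch of that silicon arithmetic, using commonly cited approximate die areas (all three numbers are rough):

```python
# Approximate die areas -- all three are rough, commonly cited figures.
navi44_mm2 = 199.0    # ~Radeon RX 9060-class die
zen5_ccd_mm2 = 70.6   # ~Zen 5 CCD
x60_mm2 = 180.0       # the 5060 Ti figure mentioned above

print(f"Zen 5 CCDs per 9060 worth of silicon: {navi44_mm2 / zen5_ccd_mm2:.1f}")
print(f"A 25% larger x60 die would be ~{x60_mm2 * 1.25:.0f} mm², "
      f"back near the ~220 mm² of the 660/960.")
```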
 
Here I've put together a quick table showing how weak the above article is. In general, die sizes have varied little since the 760; the 2060 and 4060 are the points outside the curve. Normalizing for inflation, prices are also almost unchanged, while development and wafer costs have all increased; at some point, low-end GPUs will disappear because they are unviable. AIBs have to make money, and retailers have to make money. It's not viable.

The performance you're looking for is in the die space now dedicated to RT and ML; without it, all chips would be around 20% faster at the same die size and price. Mystery solved.
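
For anyone who wants to redo the inflation part, a minimal sketch of the normalization (launch MSRPs are the usual US figures; the CPI multipliers are rough assumptions, so swap in your own):

```python
# Launch MSRPs (USD) for the x60 class and rough cumulative US CPI
# multipliers from launch year to 2025 (the multipliers are approximate).
launch_msrp = {"GTX 760": 249, "GTX 960": 199, "GTX 1060 6GB": 249,
               "RTX 2060": 349, "RTX 3060": 329, "RTX 4060": 299,
               "RTX 5060": 299}
cpi_to_2025 = {"GTX 760": 1.38, "GTX 960": 1.36, "GTX 1060 6GB": 1.33,
               "RTX 2060": 1.26, "RTX 3060": 1.22, "RTX 4060": 1.06,
               "RTX 5060": 1.00}

for card, msrp in launch_msrp.items():
    print(f"{card}: ${msrp} at launch ≈ ${msrp * cpi_to_2025[card]:.0f} in 2025 dollars")
```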

[Attachments: Screenshot-20250610-094102.png, Screenshot-20250610-093418.png]
 
It is literally in the article, at the end: wafer costs increased, so keeping the same margins means prices increase. Tim acknowledges that in the article, if anybody had bothered to actually read it. And he also clearly states that this is sadly the reality, but TechSpot cares about consumer bang for buck, console competition, and being able to play games in two years' time on a $300 GPU.
 
Here I've put together a quick table showing how weak the above article is. In general, die sizes have varied little since the 760; the 2060 and 4060 are the points outside the curve. Normalizing for inflation, prices are also almost unchanged, while development and wafer costs have all increased; at some point, low-end GPUs will disappear because they are unviable. AIBs have to make money, and retailers have to make money. It's not viable.

The performance you're looking for is in the die space now dedicated to RT and ML; without it, all chips would be around 20% faster at the same die size and price. Mystery solved.

[Attachments: Screenshot-20250610-094102.png, Screenshot-20250610-093418.png]

It's clear to me that the article's goal is to stir controversy and attract clicks rather than provide in-depth, technical, or informative content. I’m well aware, and it's quite evident, that integrating RT and AI hardware into these chips compromises a significant portion of the raw performance they could otherwise deliver.
 
The charts comparing the different generations are just amazing. Now, I know it's hard (possibly by adding an isometric third dimension?), but combining them with MSRP would actually show a much worse picture.
 
I think everybody's forgetting one thing here.

This dirtball we live on has finite resources. Stuff doesn't just magically replenish itself overnight after we've dug it out of the ground (or wherever it originates).
That's beside the point.
Everybody seems to assume that the manic development pace of electronics is just going to continue in the same way forever... at exactly the same level of pricing per resources offered. It doesn't work like that.
Maybe so, but, IMO, manufacturers should be honest about it, and if all they can produce is crap, then it should be priced like crap.
As Loadedaxe quite rightly says, priorities change as expectations - and other tech - evolve. And as the finite resources of the planet dwindle, the same number of industry players fight among themselves ever more fiercely for their "slice" of what's left. Nobody wants to be left out as the "poor relation".....with the end result that prices just skyrocket ever more frantically.
I'm not buying that argument. nVidia chose this path long ago, well before any perceived "dwindling of resources." IMO, it's simple greed, with no need for excuses of any sort.
And it's a vicious circle. The "glory days" of consumer GPUs may already be behind us, and the existing major players are doing their level best to wax fat on the reputations of past successes.
Buyers of these cards are not stupid. This trend will leave a black eye on the reputation of any manufacturer that follows it. We've seen that happen before in virtually every industry that has tried it. Give your customers sh!t for products and sales will tank.

If there is any reason, IMO, it's the current AI fad that is leading companies to think that everyone will want AI but no one will want a great GPU.
S**t happens, guys.
Good luck with that.
 
Buyers of these cards are not stupid. This trend will leave a black eye on the reputation of any manufacturer that follows it. We've seen that happen before in virtually every industry that has tried it. Give your customers sh!t for products and sales will tank.
All evidence points to that not being the case… people are generally stupid… they buy all sorts of unnecessary items at inflated costs ALL THE TIME and have been doing so for thousands of years. Not sure why you think GPUs are an exception to this.
 
I'm not making excuses; I'm describing the exact reality: Nvidia decides what the product is. Period.

And I'm saying that as someone who hasn't used any Nvidia product for over a decade.

However, I'm surprised TechSpot posted such a bland article, with little to no technical content, neither from a financial point of view nor from a chip-development one. Normalize your table with this information; even I want to know:
Chip size vs. cost per wafer vs. total development cost vs. the dGPU market size over which to amortize the total.
NVidia has spent over a decade establishing a meaningful naming scheme where you can expect a certain level of performance just by looking at the name. They destroyed that, and the problem is that it confuses the consumer, probably intentionally, to increase margins.
Here I've put together a quick table showing how weak the above article is. In general, die sizes have varied little since the 760; the 2060 and 4060 are the points outside the curve. Normalizing for inflation, prices are also almost unchanged, while development and wafer costs have all increased; at some point, low-end GPUs will disappear because they are unviable. AIBs have to make money, and retailers have to make money. It's not viable.

The performance you're looking for is in the die space now dedicated to RT and ML; without it, all chips would be around 20% faster at the same die size and price. Mystery solved.

[Attachments: Screenshot-20250610-094102.png, Screenshot-20250610-093418.png]
Why bother wasting silicon on RT cores when the card isn't even powerful enough to run raytracing?
 
This is a fascinating article, I love these! Can you make one for AMD? I'd love to see how that lineup has (or has not) changed over time, and whether that sheds light on the interplay between Nvidia and AMD, both in terms of tech specs at each tier and in their pricing.
 
NVidia has spent over a decade establishing a meaningful naming scheme where you can expect a certain level of performance just by looking at the name. They destroyed that, and the problem is that it confuses the consumer, probably intentionally, to increase margins.
Why bother wasting silicon on RT cores when the card isn't even powerful enough to run raytracing?
That's the tricky part because even iGPUs have some die area dedicated to RT these days... no one will have the guts to say that this is nothing more than useless marketing.
 
The price and the percentages aren't right.

5080 at 42 fps vs 5090 at 64 fps, for example (Doom: The Dark Ages, 4K): that isn't 50%, and anyway I don't care about percentages when +10 or +20 fps is basically no difference.

The difference that matters is when card A runs at 90-100 fps (good) and card B runs at 30-40 fps (not good). That's clear-cut.

And the price? What the hell, $999 for a 5080? Never, ever $999.
Here in the EU the cheapest cards started at $1700 and are now $1500.
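
Just for the arithmetic: if the percentage refers to relative fps, it falls straight out of the two numbers quoted above (a quick sketch, assuming nothing beyond those figures):

```python
# fps figures quoted above (Doom: The Dark Ages, 4K)
fps_5080, fps_5090 = 42, 64

speedup = fps_5090 / fps_5080 - 1    # ~0.52, i.e. roughly 50% faster
ft_5080 = 1000 / fps_5080            # ~23.8 ms per frame
ft_5090 = 1000 / fps_5090            # ~15.6 ms per frame

print(f"5090 is ~{speedup:.0%} faster; frame times {ft_5080:.1f} ms vs {ft_5090:.1f} ms")
```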
 
Excellent information and well put together -- cards today seem to be selling far more often, and far further above MSRP, than past generations. I realize NVidia is selling RTX cards like crazy, but are they really going to gamers, or to small AI firms and other AI uses rather than gamers? Then again, look at the Steam data, where NVidia also tops the lists. The 6080, I'm guessing (since people keep paying more and more), will go up considerably -- the 6070 could easily top $1k.
 