Nvidia GeForce RTX 5060 Ti 8GB Review: Instantly Obsolete

Just can't wait for the RTX 5060 reviews.
The entry level cards were never meant to play current games at high/ultra settings.
Looking back, this has always been the case since the GeForce 2.
Entry level was always medium/low settings or resolution. DLSS is just a nice trick, but it won't help much at this level.
And game developers are getting really lazy too.

 
I do wish Nvidia had done a 16GB 5060 to fill the price gap instead of an 8GB 5060 Ti. Say a $250 8GB 5060, a $330 16GB 5060, and a $400 16GB 5060 Ti. It's just sad to see how the die is wasted, with that VRAM unable to use any of the performance it clearly has, as the 16GB test shows.
 
Just can't wait for the RTX 5060 reviews.
The entry level cards were never meant to play current games at high/ultra settings.
Looking back, this has always been the case since the GeForce 2.
Entry level was always medium/low settings or resolution. DLSS is just a nice trick, but it won't help much at this level.
And game developers are getting really lazy too.
Lazy or not, a decade of VRAM stagnation will cause issues with game development. There is only so much optimisation you can do for textures before you hit a brick wall. Outside of maybe a 5030, 8GB should not be a thing anymore.
 
TechSpot, why don't you do benchmarks at DLSS Performance instead of Quality? The Transformer model, whenever it can be used, offers basically the same image with much better fps at Perf.
 
I remember when GPUs had too much VRAM for their capabilities - AMD and Nvidia would release low-end cards with comparatively high amounts of RAM when the cards themselves were not powerful enough to run at resolutions that needed it.

Edited to add - for example, the HD 7750 had a 4GB DDR3 version when the HD 7970 only had 3GB...
 
TechSpot, why don't you do benchmarks at DLSS Performance instead of Quality? The Transformer model, whenever it can be used, offers basically the same image with much better fps at Perf.
"basically" is not equal. we gamers generally look at optimal settings to achieve playable FPS with the highest possible image quality that we subjectively accept. some prefer high 100+ FPS (like me), and some want better image quality at maybe 60FPS.
 
Not really relevant, but it seems like I'll have to upgrade my 3070 Ti in the near future. I'll try to squeeze out two more years and then switch to AMD. Hopefully by that time there will be a good-value GPU for 1440p.
 
TechSpot, why don't you do benchmarks at DLSS Performance instead of Quality? The Transformer model, whenever it can be used, offers basically the same image with much better fps at Perf.

Since the 55x drivers, Nvidia has added custom DLSS presets to driver profiles, and since the 57x drivers this is available in the Nvidia app for the whole wide audience. So the DLSS presets are now basically obsolete and irrelevant when you can set whatever render percentage suits you best.

I play Cyberpunk + PT at 64% DLSS with a minor quality drop, and Stalker at 80% DLSS for more performance with zero quality drop - it all depends on the initial resolution.
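
To put those percentages in actual pixels, here's a quick sketch (just the straightforward percentage math at a 4K output, not anything taken from the driver or the Nvidia app):

```python
# Minimal sketch: what a DLSS render-percentage setting means in pixels
# at a 3840x2160 output. Pure arithmetic, values rounded down.

def internal_resolution(width: int, height: int, scale_pct: float) -> tuple[int, int]:
    """Return the internal render resolution for a given DLSS scale percentage."""
    return int(width * scale_pct / 100), int(height * scale_pct / 100)

# 50% is the classic Performance preset, ~66.7% is roughly Quality;
# 64% and 80% are the custom values mentioned above.
for pct in (50, 64, 66.7, 80, 100):
    w, h = internal_resolution(3840, 2160, pct)
    print(f"{pct}% -> {w}x{h}")
```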

So reviewers are best off sticking to native-only testing at ultra settings - both so users can judge the GPU, mentally subtract their preferred DLSS percentage at a glance and make their decision, and so the media can keep frying Nvidia's measly *** for its castrated video cards.
 
TechPowerUp has several of them, clamoring that it's the game developers' fault that 8GB cards are not enough.

I seem to remember reading this sh** on this site too. We are all guilty of accepting the mantra for too long. It's refreshing that Steve and some other reviewers are finally taking a crystal-clear stance on it. This was not so clear in the era of the 8GB 3070, and also not so clear when comparing the 4060 Ti 8GB vs 16GB.

Let's be honest: 8GB of VRAM was never "enough", even when games were older and had lower requirements, because it blocked better gaming.

This was mainly Nvidia's strategy, but let's not forget that AMD did the same for a very long time, until RDNA 2, and a lot of people and reviewers agreed back then (or never saw an issue with it) that it's fine as long as the game fits into 8GB of VRAM.

We have to thank them all, but especially Nvidia, for stopping progress in modern gaming for many years to come. Just look at the Steam Hardware Survey to understand what PC devs are mainly aiming for. The best we get in VRAM usage are console ports. Ultra texture packs exist, but they are rare and come as add-ons, sometimes only for 24GB GPUs, so most users can't activate them.

Texture quality is being held back dramatically. You can't perceive it, because many games still look good with 8-12GB of VRAM, but that's because developers are squeezing ultra textures into the small VRAM using all kinds of tricks and shortcuts. They heavily compress assets, cut resolution in less obvious areas, and swap data more aggressively. This results in texture pop-in, shimmer, and subtle loss of detail in games, but most people don't know that VRAM is to blame for it.
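
To put rough numbers on that (back-of-the-envelope assumptions of mine, not figures from any specific engine or game):

```python
# Rough texture memory estimate. Assumptions, for illustration only:
# BC7 block compression at 1 byte per texel, ~33% extra for the mip chain,
# and 4 maps per material (albedo, normal, roughness/metalness, AO).

def material_vram_mb(resolution: int, maps: int = 4,
                     bytes_per_texel: float = 1.0, mip_overhead: float = 1.33) -> float:
    """Approximate VRAM cost in MB of one material at a given texture resolution."""
    return resolution * resolution * bytes_per_texel * mip_overhead * maps / (1024 ** 2)

for res in (1024, 2048, 4096):
    print(f"{res}x{res} material: ~{material_vram_mb(res):.0f} MB")
# ~5 MB at 1K, ~21 MB at 2K, ~85 MB at 4K per material - so roughly a
# hundred unique 4K materials alone already approach 8GB, before geometry,
# render targets or anything else. That's why developers downshift and
# compress so aggressively.
```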

And then there's AI. This is the most absurd aspect of the story. The very company that forced gamers onto small-VRAM GPUs now happens to be on the frontier of AI GPU development. Want smarter NPCs? Want your game world to react like a living, breathing system? Generated textures? AI-generated game situations that make every playthrough unique? Sorry, Nvidia did not see that coming. Local LLMs are starting to be explored for dialogue and behavior systems, and they need VRAM - a lot more than 8GB, and a lot more than 16GB if the game itself has to fit in as well. If we want that future, we need mainstream hardware that's ready for it.
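
For a sense of scale, a back-of-the-envelope sketch (the parameter counts and quantization levels are illustrative assumptions, not measured figures from any game):

```python
# Approximate VRAM needed just to hold a local LLM's weights, ignoring the
# KV cache, activations, and everything the game itself needs on the GPU.

def weights_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / (1024 ** 3)

for params in (3, 7, 13):
    for bits in (4, 8, 16):
        print(f"{params}B model @ {bits}-bit: ~{weights_vram_gb(params, bits):.1f} GB")
# Even a heavily quantized 7B model eats ~3.3 GB of an 8GB card before the
# game has rendered a single frame.
```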

Absurd, because Nvidia also creates the technology for implementing AI in games. But right now only the simplest models fit, because VRAM is too scarce - and because they want their higher-VRAM cards for B2B/datacenter customers. Again, they don't care about gaming progress.

8GB was holding us back. 12-16GB will hold us back too. If we want real progress, not just in visuals but in immersion, AI, and simulation, we need to let go of the idea that 8/10/12/16GB is enough for mainstream cards. At the very least, consumers need a high-VRAM option on a mid-tier card. But I fear we will make the same mistake again with 12-16GB over the next few years.

I guess only console hardware progress and their PC ports will change this over the years. Let's pray a new PlayStation or Xbox gets 32GB of unified memory, and then gamers will start to see what game developers can do with textures and AI. And they will learn what to demand from GPU makers, first of all from Nvidia, but also from AMD and Intel.

But again, somebody and their fellow fanboys will tell us, here or somewhere else, that 12 or 16GB of VRAM is enough. They won't learn from the past, and they can't understand that VRAM size is nowadays the single essential KPI for any progress in gaming technology. Forums are full of these people. Apparently they want to play the same games over and over.
 
TechPowerUp has several of them, clamoring that is bad game developer's fault that 8GB cards are not enough.

Like... did these people do the same thing with 2GB cards? Or 512MB cards? IDK what it is about 8GB, but some people are determined to clutch their pearls over it.

Actually I would have, but no games came out that wouldn't run on cards just a few years old; it's only with UE5 and RTX that VRAM suddenly became an issue. Quite frankly, at 1080p every game should fit into 6GB of VRAM at medium/high, 8GB at the most. To your point, 2GB cards first released in 2010 for AMD and 2012 for Nvidia, and 2GB remained the standard until Pascal in 2016 with its 3GB and 6GB cards. Remember, the GTX 660, 660 Ti, 760, and 960 were standard 2GB cards, and until about 2020 you could run every new game at 1080p on a 2GB GTX 960 at medium/high settings, while medium was fine on the older 660-760 cards. The same thing happened in 2005: ATI and Nvidia went to 512MB on the high end, the midrange got it with the 8800 GT, and 512MB remained viable for running games at medium settings until about 2013/2014. Now why was this? Because game devs programmed efficiently, they optimized properly, and they weren't as rushed. Today the publishers rush them, and they don't optimize.
 
Why so? 2 extra GB either way, but totally opposite result?
There are several games that use over 10GB of VRAM at reasonable settings but stay under 12GB.

The reason this occurs is that consoles have 16GB of shared memory (i.e., it serves as both RAM and VRAM). So many games were built with a soft VRAM limit based on how much of that pool the game likely needed as RAM. Developers only have to reserve 6GB for RAM to hit the 10GB wall of the 3080, whereas reserving only 4GB already lands at a more reasonable 12GB limit.

This of course is just the starting point for PC games, which can push quality settings higher, and AI upscaling and frame gen use VRAM too. But I think you can still see how the starting point matters, which is where the console hardware comes into play.
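
A minimal sketch of that budgeting argument (the 16GB shared pool is the console figure above; the RAM/VRAM split values are just illustrative):

```python
# A console's shared memory pool gets split between "RAM" (game data, logic)
# and "VRAM" (graphics). Whatever isn't reserved as RAM becomes the de facto
# VRAM target that ports inherit.

SHARED_POOL_GB = 16

for reserved_for_ram in (6, 4):
    vram_budget = SHARED_POOL_GB - reserved_for_ram
    print(f"Reserve {reserved_for_ram} GB as RAM -> ~{vram_budget} GB graphics budget")
# Reserving 6 GB leaves ~10 GB (the 3080's wall); reserving only 4 GB
# already pushes the target to ~12 GB.
```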

Techspot doesn't like my screen grab images for some reason:
 
Actually I would have, but no games came out that wouldn't run on cards just a few years old; it's only with UE5 and RTX that VRAM suddenly became an issue. Quite frankly, at 1080p every game should fit into 6GB of VRAM at medium/high, 8GB at the most. To your point, 2GB cards first released in 2010 for AMD and 2012 for Nvidia, and 2GB remained the standard until Pascal in 2016 with its 3GB and 6GB cards. Remember, the GTX 660, 660 Ti, 760, and 960 were standard 2GB cards, and until about 2020 you could run every new game at 1080p on a 2GB GTX 960 at medium/high settings, while medium was fine on the older 660-760 cards. The same thing happened in 2005: ATI and Nvidia went to 512MB on the high end, the midrange got it with the 8800 GT, and 512MB remained viable for running games at medium settings until about 2013/2014. Now why was this? Because game devs programmed efficiently, they optimized properly, and they weren't as rushed. Today the publishers rush them, and they don't optimize.
Previous developer optimization is largely a myth.

The difference today is that GPUs did not respond to the added VRAM in consoles like they have in the past. Previously, about two years after a new generation of consoles dropped, GPUs responded with more VRAM to stay competitive.

Today, COVID slowed upgrades, followed by AI, which fundamentally broke the incentives for VRAM. That is, AI loves VRAM, so responding to the consoles' added VRAM not only makes a GPU more competitive for gaming, it makes it more competitive for AI, which can be sold for MUCH more money. So Nvidia is foremost protecting its AI cash cow by limiting VRAM on gaming GPUs. The better margins and earlier obsolescence of gaming hardware are more of a secondary bonus for them. Put another way, it's more about not losing a huge chunk of 90% of their revenue than about padding the other 10%.
 
Without the blue bar charts and a comparative overview, it's quite hard to see at a glance how it performs compared to, say, the 4060.
 