Why Are Modern PC Games Using So Much VRAM?

Or wait a while for the game's price to drop and for the developers to release patches fixing the performance issues. But, in essence, you are right.

For me, "wait a while" was one of the biggest takeaways from this article. Not only do you pay a premium for the latest games, but as a game gets older, the cost of the hardware to run it decreases dramatically. Heck, if the trends in this article continue and you are willing to play "old games", then you can get a lot of value by waiting a few years.
 
It's weird, right? It's like they artificially bloated it when it came to PC.

Then there are games like Doom Eternal that can run on relics yet still look amazing, and in some ways Rockstar and their games too: GTA 5 is old, and it was one of the first games I installed when I upgraded my GPU.

In my opinion, id Software's techies/artists are almost the only ones who seriously optimize their products, living up to their historical reputation.

While most of their games take place in tighter or less expansive spaces, few games look as good and run as well, even on a toaster, as the last two Doom titles. I think the 2004 Doom (Doom 3) demanded more GPU power due to its novel lighting and shadow system, and the first Rage did too (although somewhat less) because of its particular texturing system, but the new Dooms are in a league of their own in terms of optimization techniques.

Incredibly, Doom Eternal has more detail, visual complexity, and graphics load than the 2016 Doom, yet runs the same or even better.
 
I'm wondering if there's a typo in the article or if I am misreading something. The article mentions that a 4K (3840x2160) framebuffer requires four times as much memory as 1080p (1920x1080). It also states that for two games tested (Black Flag and Last Light), 4K graphics required twice as much VRAM as 1080p. But then the article asks, "so how does decreasing the frame resolution result in such a significant decrease in the amount of VRAM being used?"

Shouldn't the question be the opposite? Why did VRAM requirements only change by a factor of two instead of four?

I get why the article mentioned the area of each frame. Back in the olden days, when VRAM was only used for framebuffers, resource usage was proportional to the area of the image, not its width or height. That is, if you multiply both width and height by n, the resources used would change by n². But the test results in this article show VRAM usage changing by much less than that, perhaps linearly.
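To put rough numbers on the framebuffer part of this, here's a quick Python sketch (the 4 bytes per pixel and the three full-resolution buffers are my own assumptions, not figures from the article):

    # Rough framebuffer-only estimate: bytes = width * height * bytes_per_pixel
    # Assumes simple 4-byte (RGBA8) targets and a handful of full-resolution
    # buffers (e.g. back buffer + depth + one post-processing target).
    def framebuffer_mb(width, height, bytes_per_pixel=4, num_buffers=3):
        return width * height * bytes_per_pixel * num_buffers / (1024 ** 2)

    print(framebuffer_mb(1920, 1080))   # ~23.7 MB at 1080p
    print(framebuffer_mb(3840, 2160))   # ~94.9 MB at 4K -- exactly 4x the pixels

So the part that really does scale by n² is only tens of megabytes, which would barely move a multi-gigabyte total.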

I have some guesses why this is: 1. Textures are often mentioned as one of the heaviest users of VRAM, which makes sense since their size is proportional to n², but changing the screen resolution doesn't necessitate a change in the texture resolution, meaning VRAM usage may be constant. 2. Likewise, polygon data doesn't (inherently) change when the screen resolution changes, so perhaps a significantly large amount of VRAM is used by 3-D meshes? 3. The fundamental algorithms of raytracing require RAM proportional to n², but the article implies that the Bounding Volume Hierarchy optimization is responsible for the bulk of memory usage in raytracing. I may be wrong about this, but since the Bounding Volume Hierarchy works in the scene space, I would expect its VRAM usage to be independent of screen resolution.

But, I'm no expert and have no measurements of how much of an effect any of those have. I'm just speculating and maybe it's something completely different.

Whatever the answer is, I'd like to see a future article discussing why resolution doesn't play a bigger role in VRAM usage for 3-D games.
 
In my opinion, id Software's techies/artists are almost the only ones who seriously optimize their products, living up to their historical reputation.
That was the case when Id was just focused on PC-only titles; once they expanded to include consoles, things began to change with Rage. Rage 2, Doom (2016), and Doom Eternal all had problems on the PC at launch. Performance, especially in the Doom remake, has always been Id's strong point, but the development house is not without its share of issues (though obviously fewer than others).
 
I'm wondering if there's a typo in the article or if I am misreading something. The article mentions that a 4K (3840x2160) framebuffer requires four times as much memory as 1080p (1920x1080). It also states that for two games tested (Black Flag and Last Light), 4K graphics required twice as much VRAM as 1080p. But then the article asks, "so how does decreasing the frame resolution result in such a significant decrease in the amount of VRAM being used?"

Shouldn't the question be the opposite? Why did VRAM requirements only change by a factor of two instead of four?
Not so much a typo, more a case of my writing perhaps not being particularly clear. You've actually answered your own question in the rest of your post:

I have some guesses why this is: 1. Textures are often mentioned as one of the heaviest users of VRAM, which makes sense since their size is proportional to n², but changing the screen resolution doesn't necessitate a change in the texture resolution, meaning VRAM usage may be constant. 2. Likewise, polygon data doesn't (inherently) change when the screen resolution changes, so perhaps a significantly large amount of VRAM is used by 3-D meshes?
Unless specifically coded to do so, games won't load smaller LOD versions of assets when the resolution is decreased -- in the case of textures, they'll simply be sampled less frequently at lower resolutions, thus improving performance, though not the memory footprint. The same is essentially true of meshes, and vertex and index buffers.
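To put a rough number on that (a quick sketch with made-up texture counts and uncompressed sizes, purely for illustration):

    # A texture with a full mip chain costs about 4/3 of its base level,
    # regardless of the screen resolution it ends up being sampled at.
    def texture_mb(width, height, bytes_per_texel=4, mips=True):
        base = width * height * bytes_per_texel
        return (base * 4 / 3 if mips else base) / (1024 ** 2)

    one_4k_texture = texture_mb(4096, 4096)   # ~85 MB uncompressed
    print(one_4k_texture * 50)                # 50 such textures: ~4.2 GB resident
    # Dropping the display resolution from 4K to 1080p changes none of this --
    # the same mip levels stay resident unless the engine streams them out.

In practice textures are block compressed, so the absolute figures are smaller, but the point stands: their footprint doesn't track the output resolution.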

The fundamental algorithms of raytracing require RAM proportional to n², but the article implies that the Bounding Volume Hierarchy optimization is responsible for the bulk of memory usage in raytracing. I may be wrong about this, but since the Bounding Volume Hierarchy works in the scene space, I would expect its VRAM usage to be independent of screen resolution.
Yes, the BVHs are independent of frame resolution, but the top-level structure has to be created each frame, so it's not a static figure. Other aspects of an RT algorithm are resolution dependent -- for example, the results of the ray tracing are written into a specific buffer before being copied into the working render target, and the memory load is heavily impacted by the depth of the ray tracing recursion, so more pixels equals more rays equals increased loads. It's easy to overuse memory with ray tracing, because just using default routines can easily result in some shaders being allocated more memory than they need.
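As a rough illustration of which parts scale with the frame size and which don't (the node size and nodes-per-triangle figures below are ballpark assumptions, not from any specific API):

    # Per-pixel RT buffers scale with resolution; a BVH scales with scene size.
    def rt_output_mb(width, height, bytes_per_pixel=16):   # e.g. an RGBA16F result buffer
        return width * height * bytes_per_pixel / (1024 ** 2)

    def bvh_mb(triangles, bytes_per_node=64, nodes_per_tri=2):
        return triangles * nodes_per_tri * bytes_per_node / (1024 ** 2)

    print(rt_output_mb(1920, 1080))   # ~32 MB
    print(rt_output_mb(3840, 2160))   # ~127 MB -- four times the pixels
    print(bvh_mb(5_000_000))          # ~610 MB for a 5M-triangle scene, at any resolution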

Whatever the answer is, I'd like to see a future article discussing why resolution doesn't play a bigger role in VRAM usage for 3-D games.
It's simple, really -- the memory footprint of all the static assets (textures, meshes, etc.) is considerably larger than that of the transient ones.
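That can be written as a toy model (the 5 GB of static assets and 200 bytes of transient data per pixel below are illustrative assumptions, not measurements):

    # Toy model: total VRAM = static assets + per-pixel transient buffers.
    def total_vram_gb(width, height, static_gb=5.0, transient_bytes_per_pixel=200):
        transient_gb = width * height * transient_bytes_per_pixel / (1024 ** 3)
        return static_gb + transient_gb

    print(total_vram_gb(1920, 1080))   # ~5.4 GB
    print(total_vram_gb(3840, 2160))   # ~6.5 GB -- 4x the pixels, nowhere near 4x the total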
 
Well, I can tell you The Last of Us Part I kicks a GTX 1650 right in the nuts -- 4GB of VRAM is right out for stable frame rates. (I'm running it in Ubuntu, Wine + vkd3d, Nvidia driver of course.) It says it's using 5.8GB out of 6GB of VRAM (I have no idea why it says 6GB and not 4...) at the options screen with just that curtain, window, and tree on screen. It looks GORGEOUS, and the FPS is actually fine until you turn around; then you get massive stutters for a moment as it probably has to replace about half the VRAM contents. Since I'm not typically spinning around in circles, it's fine.

I do wonder if these games couldn't do with some serious optimization, though. Maybe they are not using LOD (Level of Detail) type stuff -- the theory being that a distant object that's half an inch tall on-screen doesn't need the fully detailed mesh and textures; use simpler ones until you get closer to it. I mean, I don't blame them -- if the PS5 and current Xbox have enough RAM to not need LOD, then adding it would have just delayed the game for no benefit on those consoles. It'd be a HUGE benefit on PC, but who knows how much game-engine work it'd take to add LOD if the engine doesn't have it already.
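For anyone unfamiliar, "LOD type stuff" boils down to something like the sketch below; the thresholds and the projection math are arbitrary assumptions, just to show the idea of swapping assets based on on-screen size:

    # Bare-bones discrete LOD pick: the smaller an object is on screen,
    # the cheaper the mesh and texture set it gets.
    def pick_lod(object_radius_m, distance_m, screen_height_px=2160, fov_tan=0.7):
        projected_px = (object_radius_m / (distance_m * fov_tan)) * screen_height_px
        if projected_px > 400:
            return 0   # full-detail mesh, highest mips
        elif projected_px > 100:
            return 1   # reduced mesh, half-resolution textures
        elif projected_px > 20:
            return 2   # heavily reduced mesh
        else:
            return 3   # impostor / billboard

    print(pick_lod(1.0, 5))     # close up: LOD 0
    print(pick_lod(1.0, 200))   # far away: LOD 3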
 
I'm wondering if there's a typo in the article or if I am misreading something. The article mentions that a 4K (3840x2160) framebuffer requires four times as much memory as 1080p (1920x1080). It also states that for two games tested (Black Flag and Last Light), 4K graphics required twice as much VRAM as 1080p. But then the article asks, "so how does decreasing the frame resolution result in such a significant decrease in the amount of VRAM being used?"

Shouldn't the question be the opposite? Why did VRAM requirements only change by a factor of two instead of four?

I get why the article mentioned the area of each frame. Back in the olden days, when VRAM was only used for framebuffers, resource usage was proportional to the area of the image, not its width or height. That is, if you multiply both width and height by n, the resources used would change by n². But the test results in this article show VRAM usage changing by much less than that, perhaps linearly.

I have some guesses why this is: 1. Textures are often mentioned as one of the heaviest users of VRAM, which makes sense since their size is proportional to n², but changing the screen resolution doesn't necessitate a change in the texture resolution, meaning VRAM usage may be constant. 2. Likewise, polygon data doesn't (inherently) change when the screen resolution changes, so perhaps a significantly large amount of VRAM is used by 3-D meshes? 3. The fundamental algorithms of raytracing require RAM proportional to n², but the article implies that the Bounding Volume Hierarchy optimization is responsible for the bulk of memory usage in raytracing. I may be wrong about this, but since the Bounding Volume Hierarchy works in the scene space, I would expect its VRAM usage to be independent of screen resolution.

But, I'm no expert and have no measurements of how much of an effect any of those have. I'm just speculating and maybe it's something completely different.

Whatever the answer is, I'd like to see a future article discussing why resolution doesn't play a bigger role in VRAM usage for 3-D games.
The screen resolution affects almost only the intermediate buffers, more so if the render engine is a deferred rendering/lighting one (those need several fat FP32 buffers plus 8-bits-per-channel ones).
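As a ballpark of what those buffers cost (a generic layout guessed for illustration, not any particular engine's actual G-buffer):

    # Rough deferred G-buffer estimate: several full-resolution render targets.
    GBUFFER_BYTES_PER_PIXEL = {
        "albedo (RGBA8)": 4,
        "normals (RGBA16F)": 8,
        "material params (RGBA8)": 4,
        "depth (D32F)": 4,
        "HDR lighting (RGBA16F)": 8,
    }

    def gbuffer_mb(width, height, targets=GBUFFER_BYTES_PER_PIXEL):
        return sum(width * height * bpp for bpp in targets.values()) / (1024 ** 2)

    print(gbuffer_mb(1920, 1080))   # ~55 MB
    print(gbuffer_mb(3840, 2160))   # ~221 MB -- these do scale with the resolution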
 
A second note I have on this -- one thing some games do and some don't that makes a major difference in resource usage. In The Last of Us Part I, the leather couch has little wrinkles in it, and the clothes have wrinkles (whether the mesh has the wrinkles or a texture shader adds them, I don't know). I think it was Crytek whose developers were asked a few years ago, "How do you keep your system requirements so low and FPS so high?" They pointed out that instead of having the GPU render, say, actual wrinkled clothes, they just had regular clothing with a "wrinkled clothes" texture on it; it looked the same but needed a small fraction of the GPU resources.
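Rough arithmetic on that trade-off, assuming the "wrinkle texture" is essentially a baked normal/detail map (the vertex and texture sizes below are my own ballpark figures, not anything from Crytek):

    # Baked detail vs. real geometry, very roughly.
    VERTEX_BYTES = 32   # position + normal + tangent + UV, a typical-ish layout

    def mesh_mb(vertex_count, bytes_per_vertex=VERTEX_BYTES):
        return vertex_count * bytes_per_vertex / (1024 ** 2)

    def normal_map_mb(size=2048, bytes_per_texel=4):
        return size * size * bytes_per_texel * 4 / 3 / (1024 ** 2)   # with mips

    print(mesh_mb(2_000_000))                  # ~61 MB for a 2M-vertex "real wrinkles" mesh
    print(mesh_mb(50_000) + normal_map_mb())   # ~23 MB for a low-poly mesh + baked detail

And that's just the memory side; the bigger win is the GPU not having to process all those extra vertices every frame.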

I doubt developers who have already built advanced systems for hair rendering, wrinkles on clothing and furniture, and so on are going to eliminate them -- but here's hoping some developers still use techniques like that to keep requirements down a bit. (And for those with a monster video card, it'll make it more likely you can hit 120 or 240 FPS if you want to.)
 
That was the case when Id was just focused on PC-only titles; once they expanded to include consoles, things began to change with Rage. Rage 2, Doom (2016), and Doom Eternal all had problems on the PC at launch. Performance, especially in the Doom remake, has always been Id's strong point, but the development house is not without its share of issues (though obviously fewer than others).
Yes, back in the day I applied various tweaks through the command line and the Rage configuration files to make it run better, before the official patches came out.
 
Kind of seems like 8GB of VRAM should be the entry point for any new series of gaming GPUs being released. I don't know why Nvidia didn't plan on something like the following for their 4000 series cards.

4050 - 8GB
4060 - 12GB
4070 - 16GB
4080 - 20GB
4090 - 24GB

That could have been a really clear dividing line between each tier.
 
Oh no, more visual clarity requires more video RAM, what a shock horror, I want my money back!
 
Perhaps a rule of thumb is that you need as much VRAM as the unified memory on a console. For the last few years 8 GB has been plenty because the PS4 had 8 GB of unified memory.

If that’s correct, then PC gaming will be a problem for developers until most people are rocking 16GB GPUs. Since the 4070 Ti doesn’t even have 16 GB, this implies that the problem will stick around for most of this console lifecycle, and also that the current-gen Nvidia cards (except possibly the 4090) are even worse value than people believe them to be.
 
The conclusion:

- games are being released in an alpha or beta state, badly unoptimized on PC, since people can customize settings and buy new cards

- the game industry receives money to tune the highest settings for the most profitable graphics cards. It is not a coincidence.

- on PCs and non-unified-memory consoles, managing what should go to RAM or VRAM, and what the max settings should be for each capacity, costs tons of testing and optimization. Game studios want to release ASAP, so why bother? People can upgrade...

That is the sad truth: gaming is becoming very expensive (games cost more, GPUs cost more).
 
Expensive graphics cards should have more than 8GB of VRAM. 8GB of VRAM should also be enough for current games to not look and run like crap if devs implement basic stuff like texture streaming. Focusing on only one of these problems tells only half the story.
 
The consoles don't have that much memory compared to previous consoles -- double, and that's it. Everything I've read about their designs says that streaming assets from the SSD lets it be treated as extra system memory. The PS5's design is built entirely around this.
 
Everything I've read about their designs says that streaming assets from the SSD lets it be treated as extra system memory.
On bandwidth alone, the storage system isn't particularly comparable to the GDDR6 system. Peak throughput from the SSD, with compressed data, is around 8.5 GB/s, and the controller can effectively stream around 20 GB/s or so. The unified RAM has a peak throughput of 448 GB/s, without accounting for the use of compression.

I know that in the early promotional talks Sony did on the PS5, it used phrases such as 'RAM-like' to describe the SSD and controller, but peel away the marketing hyperbole and one is still left with a setup that isn't massively different from a PC using DirectStorage correctly. Without DS, a PC becomes somewhat like the PS4/PS4 Pro -- limited by I/O overhead and loading times.

[Image: DirectStorage 1.2 SSD benchmark results from Microsoft]

While the above is just a simple demo from Microsoft to show the benefits of DirectStorage, note the load time -- theoretically, that rate is enough to fill all of the VRAM on a 7900 XTX or 4090 in 2.25 seconds. On an HDD, the same test takes 4.31 seconds to load 6.61 GB, so filling all of the VRAM in those cards would take just under 16 seconds.
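Spelling out the arithmetic behind those figures (24 GB taken as the VRAM of a 7900 XTX / RTX 4090; everything else follows from the numbers quoted above):

    # Effective rates implied by the demo results above.
    vram_gb = 24                      # 7900 XTX / RTX 4090
    ssd_fill_s = 2.25                 # stated DirectStorage fill time
    hdd_gb, hdd_s = 6.61, 4.31        # stated HDD result

    ssd_rate = vram_gb / ssd_fill_s   # ~10.7 GB/s effective
    hdd_rate = hdd_gb / hdd_s         # ~1.53 GB/s
    hdd_fill_s = vram_gb / hdd_rate   # ~15.6 s, i.e. "just under 16 seconds"

    print(ssd_rate, hdd_rate, hdd_fill_s)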

This is what PC games really need to be using but, of course, it's more work for developers to do, on top of what's already a long list of tasks to undertake when making a AAA title for that platform.
 
Version 1.04 was the one used in the testing.
Which test? TechSpot tested this April 4 -> https://www.techspot.com/review/2656-the-last-of-us-gpu-benchmark/

The huge 24-27GB patch came out in late April and it lowered VRAM usage a lot.

A friend of mine has a 3070 and maxes the game out at 1440p now.

Funny that an AMD-sponsored title was the first game to cause issues on 8GB ;) A game that doesn't even look that good.

AMD did this many times before. Remember the Shadow of Mordor texture pack? I do. It used twice the VRAM yet the textures looked identical; all they did was lower the compression :joy:

Meanwhile, Atomic Heart uses 5.7GB at 1440p maxed out on my 3080, while looking way better than The Last of Us remake...
 
Which test? TechSpot tested this April 4 -> https://www.techspot.com/review/2656-the-last-of-us-gpu-benchmark/

The huge 24-27GB patch came out in late April and it lowered VRAM usage a lot.

A friend of mine has a 3070 and maxes the game out at 1440p now.
The testing for the very article that you're commenting on. The difference the latest patch made was also highlighted in the piece:

"...in version 1.03 of The Last of Us, we saw average VRAM figures of 12.1 GB, with a peak of 14.9 GB at 4K Ultra settings. With the v1.04 patch, those values dropped to 11.8 and 12.4 GB, respectively. This was achieved by improving how the game streams textures..."
 
The testing for the very article that you're commenting on. The difference the latest patch made was also highlighted in the piece:

"...in version 1.03 of The Last of Us, we saw average VRAM figures of 12.1 GB, with a peak of 14.9 GB at 4K Ultra settings. With the v1.04 patch, those values dropped to 11.8 and 12.4 GB, respectively. This was achieved by improving how the game streams textures..."
Yeah, but allocation is not equal to actual usage.

A GPU with 16, 20, or 24GB will always use more VRAM than an 8-12GB card.

Just like a PC with 32GB of RAM will generally use more than a 16GB machine.

You can still force 8GB cards to max out by putting everything to the absolute max -- yet the GPU is too weak to handle it anyway, so it's kind of pointless.
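If anyone wants to check the board-wide figure on their own card, something like this works (a minimal sketch assuming the nvidia-ml-py / pynvml package; note that what it reports is total allocation across every process and the driver, not what a single game actually needs):

    # Reads total and currently used VRAM via NVML (pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"total: {mem.total / 1024**3:.1f} GB, used: {mem.used / 1024**3:.1f} GB")
    pynvml.nvmlShutdown()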

High preset = highly playable on a 3070-series card.

 
Yeah, but allocation is not equal to actual usage.
You clearly haven't read the article, or at the very least, not read it properly -- actual usage was recorded, not the amount allocated.

Edit: A new patch for TLOU was released yesterday, so I repeated the test (4K Ultra, no upscaling, average and peak VRAM usage figures over a ten-minute run):

v1.03 = 12.1 GB avg, 14.9 GB max
v1.04 = 11.8 GB avg, 12.4 GB max
v1.05 = 10.7 GB avg, 13.8 GB max

So definite improvements to overall VRAM loads, but the game is still trying to exceed the available local budget when moving into a new cell or loading a level.
 