VRAM to the Test: How Much Memory Is Enough?

Steve

Posts: 3,044   +3,153
Staff member

Since publishing our annual graphics card roundup, we've received several reader inquiries regarding the performance difference between GPUs sporting 2GB and 4GB of VRAM. Therefore, we have put together a clock-for-clock comparison of the GeForce GTX 960 and Radeon R9 380 using 2GB and 4GB cards. Also along for the ride are the previous-gen Radeon R9 290 4GB and the newer (rebadged) R9 390 8GB, which have again been compared at the same clock speeds.

For testing we'll be using eleven games including new titles such as Tom Clancy's Rainbow Six Siege, Star Wars Battlefront, Just Cause 3, Assassin's Creed Syndicate and The Witcher 3: Wild Hunt. All games have been tested at 1920x1080 and 2560x1600 using two graphics quality settings.

First we will look at a max VRAM usage scenario using the best in-game graphics quality settings, which we expect to be borderline playable on the mainstream GTX 960 and R9 380 with console-like 30fps. Then we'll be backing off the quality settings a bit for a more realistic and desired gaming scenario.

Read the complete article.

 
The factor that really matters in regard to VRAM is bandwidth. At first glance the numbers don't look like much and can be misleading, but there is a significant difference between the bandwidth of cards using DDR3 and GDDR5. If the card is using DDR3, the GPU will in most cases be capped on performance due to insufficient overall bandwidth. GDDR5 cards are much closer to balanced, or their bandwidth is simply more than enough that you won't notice performance differences between different memory speeds.
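To put rough numbers on that, here's a quick Python sketch of the standard peak-bandwidth formula (effective data rate × bus width ÷ 8); the card configurations are just illustrative examples, not a full spec list:

```python
# Rough sketch: theoretical peak memory bandwidth from effective data rate and bus width.
# The example card configurations below are illustrative, not exhaustive specs.

def peak_bandwidth_gbps(effective_mtps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfers per second * bytes moved per transfer."""
    bytes_per_transfer = bus_width_bits / 8
    return effective_mtps * 1e6 * bytes_per_transfer / 1e9

cards = {
    "Budget DDR3 card (1.8 GT/s, 64-bit)": (1800, 64),
    "GTX 960 GDDR5 (7.0 GT/s, 128-bit)":   (7012, 128),
    "R9 380 GDDR5 (5.7 GT/s, 256-bit)":    (5700, 256),
}

for name, (rate, width) in cards.items():
    print(f"{name}: {peak_bandwidth_gbps(rate, width):.1f} GB/s")
```

The DDR3 example ends up around 14GB/s versus roughly 112GB/s and 182GB/s for the GDDR5 cards, which is the gap I'm talking about.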

Next, there is a difference between VRAM allocated and VRAM used. All of your memory usage numbers are "allocated" usage. With allocated usage, all of the objects are loaded in memory and under the control of the GPU and driver. In most cases, not all of that allocation is actually being used; much of it sits idle because the loaded object does not exist in the current level or is outside draw distance. Drivers should be intelligent enough to offload underused or unused objects from VRAM to system RAM when VRAM hits its maximum allocation. You don't see performance drops between the R9 290 and 390 cards in games allocating 4.5GB because at least 1GB of that allocated data is not being used.
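A toy sketch of that eviction idea, purely to illustrate the concept (this is not how any real driver is implemented):

```python
from collections import OrderedDict

class ToyVram:
    """Toy model: evict the least-recently-used objects to system RAM when VRAM is full."""

    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.resident = OrderedDict()  # object name -> size in MB, ordered by last use
        self.in_system_ram = {}        # objects evicted to system memory

    def touch(self, name: str, size_mb: int):
        """Called whenever the renderer actually needs an object this frame."""
        if name in self.in_system_ram:           # page it back in from system RAM
            size_mb = self.in_system_ram.pop(name)
        self.resident[name] = size_mb
        self.resident.move_to_end(name)          # mark as most recently used
        while sum(self.resident.values()) > self.capacity_mb:
            victim, victim_size = self.resident.popitem(last=False)  # evict idle object
            self.in_system_ram[victim] = victim_size

vram = ToyVram(capacity_mb=2048)
vram.touch("level_textures", 1500)
vram.touch("character_models", 400)
vram.touch("unused_cutscene_assets", 600)   # pushes the idle textures out...
vram.touch("level_textures", 1500)          # ...until they are needed again
print(sorted(vram.resident), sorted(vram.in_system_ram))
```

The point is simply that allocation is a bookkeeping number; what actually hurts is when data that's needed every frame has to be shuffled back and forth.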

Final comment: Nvidia's GTX Titan X has 12GB of GDDR5 RAM. At stock speeds, its memory has 336GB per second of bandwidth. That bandwidth permits every byte of data in that 12GB of VRAM to be accessed only 28 times per second. In other words, it is impossible for the Titan X to achieve 30 frames per second if the rendering process truly touches all 12GB every frame. Practically, it would be lucky to achieve 14 frames per second while manipulating 12GB worth of data per frame.
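Spelled out as arithmetic (using my assumption above that every byte of VRAM is read once per frame):

```python
# Back-of-the-envelope math: how many full passes over VRAM does the bandwidth allow?
bandwidth_gb_per_s = 336   # Titan X stock memory bandwidth, as quoted above
vram_gb = 12

full_passes_per_second = bandwidth_gb_per_s / vram_gb
print(f"Theoretical ceiling if every frame touched all {vram_gb}GB once: "
      f"{full_passes_per_second:.0f} fps")
print(f"At roughly half of peak effective bandwidth: {full_passes_per_second / 2:.0f} fps")
```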
 
I believe we do not have enough video card examples that would display the perfect balance between the performance of the card itself and its memory, so we can mostly speculate.

And all the benchmarks that I've seen over the last year suggest that a top-of-the-line video card needs 8GB of VRAM to have zero bottlenecks when working at 4K resolution.

And this is why the Nvidia GTX 980 Ti falls a little short of the Titan X in 4K tests: it has 6GB instead of the perfect 8GB needed for those tests. I believe if it had 8GB, it wouldn't be behind the Titan X in any game.

But of course, to make 4K gaming really better requires better video cards than the ones available today, which in turn may raise the bar for memory requirements.
 
"Perhaps the only valid argument here is that the larger frame buffer could support Crossfire better, but we haven’t seen any concrete evidence of this yet."

Could you test this next time instead of making claims without having included it in your tests? For example, two 2GB 960s vs two 4GB 960s, and the same for the AMD cards. I already knew what you were telling me in this article; it's blatantly obvious that doubling the VRAM in a single-card setup is pointless, at least from my experience. I've just never had the opportunity to test two double-VRAM cards in SLI or Crossfire setups, as I don't have those kinds of resources at my disposal anymore.

And all the benchmarks that I've seen over the last year suggest that a top-of-the-line video card needs 8GB of VRAM to have zero bottlenecks when working at 4K resolution.

And this is why the Nvidia GTX 980 Ti falls a little short of the Titan X in 4K tests: it has 6GB instead of the perfect 8GB needed for those tests. I believe if it had 8GB, it wouldn't be behind the Titan X in any game.

But of course, to make 4K gaming really better requires better video cards than the ones available today, which in turn may raise the bar for memory requirements.

I don't even think 8GB will cut it when games start to truly utilize the resolution, or, I should say, when GPUs are capable of rendering that many pixels. If a 4GB card is just enough to handle 1080p (2 million pixels) with all the eye candy, it would only be logical to speculate that a 16GB card would be required to render the 8 million pixels of a 4K display. That's why, in my opinion, the Titan is barely enough as it stands to handle this massive resolution, although Nvidia did position it well with 12GB, designed obsolescence. It's currently too expensive as well; we have a couple of years before 4K becomes mainstream, and until then I'm fine with my 4GB card.
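Spelling out that back-of-the-envelope scaling as a quick Python calculation (the linear pixels-to-VRAM assumption is mine, pure speculation rather than a measured relationship):

```python
# The proportional-scaling guess made explicit.
# Assumption (mine, for illustration only): required VRAM scales linearly with pixels rendered.

def pixels(width: int, height: int) -> int:
    return width * height

base_res = (1920, 1080)   # ~2.07 million pixels
base_vram_gb = 4          # "just enough" at 1080p, per the argument above

for name, res in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    scale = pixels(*res) / pixels(*base_res)
    print(f"{name}: {pixels(*res) / 1e6:.1f} million pixels -> "
          f"{base_vram_gb * scale:.1f}GB under the linear assumption")
```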
 
Using 2x 2GB 680s and I'm still waiting for Pascal.
I am running a single GTX 680 at 3840x1024 and it handles everything, including Arma 3, really smoothly. I am also waiting for Pascal, unless I find a used 980 Ti for around 300 bucks, but I will most likely buy a brand new Pascal card with a flagship PCB. I am also never going to sell my Asus DCU II 680 since it's so solid, plus it can maybe be used in an extra system I build.
 
Great article. I only wish there was a simulated "game of the future" test in here. Many people run a few other programs that hog VRAM; a test along those lines would be interesting too.

I'd expect that in a year or two, we will see more and more games require that 4GB of VRAM. I think what we saw in the test is AMD cards benefiting more from the increased VRAM because they have the larger memory bandwidth to utilize it. A 4GB 960 will almost always be a waste of money due to severe bandwidth limitations. Going by these tests alone, $20 for that kind of performance boost for AMD users now and in the future is a no-brainer.
 
I think some of these benchmarks are a touch off myself. I have a Radeon R9 270 (admittedly with a factory overclock but nonetheless less powerful than all of the cards being tested), and at very high settings on Shadow of Mordor at 1080p I only ever drop below 50fps when there's a very large explosion or massive amounts of enemies on-screen (i.e. the entire population of a stronghold). Am I truly to believe that my R9 270 is outperforming a GTX 960, in a game optimized for NVIDIA hardware no less? Now on the AMD-optimized side, I can run Star Wars: Battlefront at Ultra settings at 1080p and maintain a consistent 60fps. Once again, am I truly to believe that my R9 270 is outperforming its newer and better brethren?

I think this article needs to be re-done with actual gameplay instead of benchmarks likely run with in-game tools (almost always designed to strain your system beyond normal gameplay conditions) or external tools (usually just plain inaccurate).
 
Pretty misleading to test a bunch of cards that can't effectively use more than 4GB (some not even that) at resolutions where there's no real need for more than 4GB (i.e. not testing at 4K), and then come to the conclusion that anything more than 4GB is marketing hype.

Why not test the 290 and 390 at 4K?
 
A large amount of VRAM should provide an advantage in a multi-monitor setup, if the GPU has enough power and bandwidth to push the pixels through.
 
I think some of these benchmarks are a touch off myself. I have a Radeon R9 270 (admittedly with a factory overclock but nonetheless less powerful than all of the cards being tested), and at very high settings on Shadow of Mordor at 1080p I only ever drop below 50fps when there's a very large explosion or massive amounts of enemies on-screen (i.e. the entire population of a stronghold). Am I truly to believe that my R9 270 is outperforming a GTX 960, in a game optimized for NVIDIA hardware no less? Once again, am I truly to believe that my R9 270 is outperforming its newer and better brethren?

It's been proven time and again Shadow of Mordor prefers AMD GPUs. Given a 960 isn't technically that much faster than a 760, it should be no surprise that a 270/X occasionally comes really close to a 960, even matching it in performance. Especially in games like Shadow of Mordor that seriously like AMD GPUs.
 
What this test demonstrates is that VRAM is NOT a bottleneck for performance. One or two percentage points in frame rates do not tell the whole picture.

However, running the same benchmark under progressively heavier loads, then graphing the resulting frame rates, would generate a stress/strain-style curve that can give more insight into the performance impact of VRAM.
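A hypothetical sketch of that kind of graph (the settings labels are examples and the fps values are deliberately left as zeros, to be filled in with real measurements):

```python
# Sketch of the "stress/strain" idea: plot frame rate against increasing load for a
# 2GB and a 4GB card and look for where the curves diverge.
# The zeros are placeholders -- replace them with your own measured averages.
import matplotlib.pyplot as plt

load_steps = ["1080p Medium", "1080p Max", "1600p Medium", "1600p Max"]
results = {
    "GTX 960 2GB": [0, 0, 0, 0],   # <- fill in measured average fps
    "GTX 960 4GB": [0, 0, 0, 0],   # <- fill in measured average fps
}

for card, fps in results.items():
    plt.plot(load_steps, fps, marker="o", label=card)

plt.xlabel("Increasing load")
plt.ylabel("Average fps")
plt.title("VRAM stress curve (fill in with real measurements)")
plt.legend()
plt.show()
```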

Also NONE of these benchmarks support DX12.

DX12 this year is going to be the API of choice amongst ALL game developers. Running DX11 benchmarks only adds to the confusion.

As a consumer I want to know how DX12 and Mantle will impact not only my gameplay but also what hardware is available to take advantage of the API.

This piece, while interesting, fails to deliver the goods.

DX12 is changing gaming. This piece FAILS to answer how VRAM affects DX12 gaming, and how my gaming dollar is best spent.
 
It's been proven time and again Shadow of Mordor prefers AMD GPUs. Given a 960 isn't technically that much faster than a 760, it should be no surprise that a 270/X occasionally comes really close to a 960, even matching it in performance. Especially in games like Shadow of Mordor that seriously like AMD GPUs.

How does Shadow of Mordor prefer AMD GPUs when the game was optimized with NVIDIA GPUs in mind (as most WB-published titles are)? Are you maybe thinking of a different title?
 
Also NONE of these benchmarks support DX12.

How many commercially available games are using DX12?

DX12 this year is going to be the API of choice amongst ALL game developers.

Really? Can you provide proof of this?

So far DX12 uptake actually lags behind that of DX11 at its introduction. DX12 was unveiled nearly 8 months ago - you would think that the larger game developers (especially those with a hand in developing Mantle-based games), with the time and resources to devote to the extra coding and QA workload, might already have had DX12 games and patches running amok in the consumer space within 2-3 months of Win10's RTM back in July 2015.

By the time DX12 hits its stride, many of the cards being tested here will have been downgraded one or two performance tiers - assuming they haven't already been reclassified as EoL. That makes price/perf comparisons a complete waste of time, and outright comparisons a completely flawed exercise, given the current developmental state of DX12 and the paucity of games using it.

How does Shadow of Mordor prefer AMD GPUs when the game was optimized with NVIDIA GPUs in mind (as most WB-published titles are)? Are you maybe thinking of a different title?

Hardware vendor sponsorship doesn't preclude another vendor's cards running as well or better with the game. Many Nvidia-sponsored games have favoured AMD hardware, just as the reverse is true.
 
So is this the result of DX11 being crap to code for? I'm wondering whether, when DX12 steps up to the plate, we will see a greater gap between 2GB and 4GB performance.
 
I think some of these benchmarks are a touch off myself. I have a Radeon R9 270 (admittedly with a factory overclock but nonetheless less powerful than all of the cards being tested), and at very high settings on Shadow of Mordor at 1080p I only ever drop below 50fps when there's a very large explosion or massive amounts of enemies on-screen (i.e. the entire population of a stronghold). Am I truly to believe that my R9 270 is outperforming a GTX 960, in a game optimized for NVIDIA hardware no less? Now on the AMD-optimized side, I can run Star Wars: Battlefront at Ultra settings at 1080p and maintain a consistent 60fps. Once again, am I truly to believe that my R9 270 is outperforming its newer and better brethren?

I think this article needs to be re-done with actual gameplay instead of benchmarks likely run with in-game tools (almost always designed to strain your system beyond normal gameplay conditions) or external tools (usually just plain inaccurate).

So do you test with the very high or ultra-quality settings? Run the in-game benchmark tool and let me know your score with the game quality settings maxed out. Shadow of Mordor is one of the few games we use the benchmark tool for rather than a section of the game. In every game we test 60 seconds of the most demanding section we can find and the Shadow of Mordor benchmark provides that nicely for us.

Even from your point of view I don't see why the article needs to be 're-done' when nine of the latest games were tested using Fraps in a demanding section of the game.
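For anyone curious how a Fraps run boils down to the numbers in the charts, here's a simplified, hypothetical illustration (not our exact tooling or metrics) of reducing a per-frame time log to average and 1% low figures:

```python
# Simplified illustration: compute average fps and a "1% low" figure from a list of
# per-frame render times in milliseconds (e.g. exported from a frame-time log).

def summarize(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    slowest_first = sorted(frame_times_ms, reverse=True)      # worst frames first
    cutoff = max(1, len(frame_times_ms) // 100)               # slowest 1% of frames
    one_pct_low_fps = 1000 * cutoff / sum(slowest_first[:cutoff])
    return avg_fps, one_pct_low_fps

# Tiny made-up run purely to show the calculation: mostly 16.7ms frames plus two spikes.
sample = [16.7] * 300 + [40.0, 55.0]
avg, low = summarize(sample)
print(f"avg: {avg:.1f} fps, 1% low: {low:.1f} fps")
```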

Pretty misleading to test a bunch of cards that can't effectively use more than 4GB (some not even that) at resolutions where there's no real need for more than 4GB (i.e. not testing at 4K), and then come to the conclusion that anything more than 4GB is marketing hype.

Why not test the 290 and 390 at 4K?

It would be very misleading to suggest that the 290 and 390 are anywhere near powerful enough to run at 4K. As I said in the article, in many of the tests we aren't even eating up all the VRAM and yet the performance is dropping beyond acceptable levels. Increasing the resolution just to show that the 390 8GB can render 11fps and the 290 4GB 5fps doesn't really prove anything worthwhile and would in itself be misleading.

A large amount of VRAM should provide an advantage in a multi-monitor setup, if the GPU has enough power and bandwidth to push the pixels through.

Which they don't ;) I don't see Crossfire 380s or SLI 960s as being a viable option.

I believe we do not have enough video card examples that would display the perfect balance between the performance of the card itself and its memory, so we can mostly speculate.

And all the benchmarks that I've seen over the last year suggest that a top-of-the-line video card needs 8GB of VRAM to have zero bottlenecks when working at 4K resolution.

And this is why the Nvidia GTX 980 Ti falls a little short of the Titan X in 4K tests: it has 6GB instead of the perfect 8GB needed for those tests. I believe if it had 8GB, it wouldn't be behind the Titan X in any game.

But of course, to make 4K gaming really better requires better video cards than the ones available today, which in turn may raise the bar for memory requirements.

Sorry, this isn't true. You could fit the 390 and 390X with 32GB of VRAM and they still wouldn't be faster at 4K than a 4GB 290, at least not by anything worth writing home about. Furthermore, there is absolutely no way the 980 Ti is coming up short against the Titan X at 4K because the 6GB memory buffer is being saturated. The 980 Ti should be about 8% slower than the Titan X as it features 8% fewer CUDA cores.
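The arithmetic behind that last point, for reference (core counts are the cards' reference specifications):

```python
# Shader count difference between the two cards (reference specifications).
titan_x_cores = 3072
gtx_980_ti_cores = 2816

deficit = (titan_x_cores - gtx_980_ti_cores) / titan_x_cores
print(f"The 980 Ti has {deficit:.1%} fewer CUDA cores than the Titan X")
# -> roughly 8.3%, which lines up with the expected gap at identical clock speeds
```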
 
I think some of these benchmarks are a touch off myself. I have a Radeon R9 270 (admittedly with a factory overclock but nonetheless less powerful than all of the cards being tested), and at very high settings on Shadow of Mordor at 1080p I only ever drop below 50fps when there's a very large explosion or massive amounts of enemies on-screen (i.e. the entire population of a stronghold). Am I truly to believe that my R9 270 is outperforming a GTX 960, in a game optimized for NVIDIA hardware no less? Now on the AMD-optimized side, I can run Star Wars: Battlefront at Ultra settings at 1080p and maintain a consistent 60fps. Once again, am I truly to believe that my R9 270 is outperforming its newer and better brethren?

I think this article needs to be re-done with actual gameplay instead of benchmarks likely run with in-game tools (almost always designed to strain your system beyond normal gameplay conditions) or external tools (usually just plain inaccurate).

So do you test with the very high or ultra-quality settings? Run the in-game benchmark tool and let me know your score with the game quality settings maxed out. Shadow of Mordor is one of the few games we use the benchmark tool for rather than a section of the game. In every game we test 60 seconds of the most demanding section we can find and the Shadow of Mordor benchmark provides that nicely for us.

That's my entire point, though. The benchmark tool, assuming it reads accurately (for Shadow of Mordor this is true, for other games less so), puts forward situations so intensive that you almost never experience them in normal gameplay, if you experience them at all, and that are not indicative of real-world performance.
 
That's my entire point, though. The benchmark tool, assuming it reads accurately (for Shadow of Mordor this is true, for other games less so), puts forward situations so intensive that you almost never experience them in normal gameplay, if you experience them at all, and that are not indicative of real-world performance.

The entire point of the benchmark tool is to put your system through the most demanding scenario the game has to offer. If you pass the test with a satisfactory frame rate then this will ensure that you can play the entire game without any hiccups.

For the purpose of this article the in-game benchmark makes perfect sense because it is here that we will see any differences between the 2GB and 4GB cards. Not in a basic section of the game where frame rates are high.
 
The entire point of the benchmark tool is to put your system through the most demanding scenario the game has to offer. If you pass the test with a satisfactory frame rate then this will ensure that you can play the entire game without any hiccups.

For the purpose of this article the in-game benchmark makes perfect sense because it is here that we will see any differences between the 2GB and 4GB cards. Not in a basic section of the game where frame rates are high.

If you never encounter it in actual gameplay though, then clearly the game doesn't actually have that scenario to offer, rendering the test moot!
 
If you never encounter it in actual gameplay though, then clearly the game doesn't actually have that scenario to offer, rendering the test moot!

I just put the Radeon R9 270X in my test system and played the game using the Ultra quality settings at 1080p. It averaged 48fps in the built-in benchmark and 41fps in the story mode.

Using the very high preset @ 1080p.
In-built benchmark = 59fps
In-game story mode attacking the orcs in the rain = 49fps

So it seems the scene at the start of the game where you first fight the orcs is more demanding than the built-in benchmark.
 
It's like anything else. Most consumers are *****s. More HAS TO BE better: more RAM, faster processors, more megapixels in the camera sensor. Just look at smartphones. People trade them in every 1-2 years OR LESS. They think benchmark numbers = faster performance.

No wonder this country (USA) is broke. People make uninformed decisions on crap they replace because some big box "expert" tells them so.
 
This article is flawed. Try to "game" and not just benchmark the games you tested in the article with a 2GB vs a 4GB VRAM card. At very high settings newer games tend to slow down or, worse, stutter. Crysis 3, GTA 4/5, Batman: Arkham Knight, R6 Siege, The Witcher 3, Battlefield 4/Hardline, Homefront, all benefit from having 4GB of VRAM. Not only is the difference in cost really low, you are also present- and future-proofed, as all those games have less stutter and more FPS.

Fewer benchmarks and more gameplay, please.
 
This article is flawed. Try to "game" and not just benchmark the games you tested in the article with a 2GB vs a 4GB VRAM card. At very high settings newer games tend to slow down or, worse, stutter. Crysis 3, GTA 4/5, Batman: Arkham Knight, R6 Siege, The Witcher 3, Battlefield 4/Hardline, Homefront, all benefit from having 4GB of VRAM. Not only is the difference in cost really low, you are also present- and future-proofed, as all those games have less stutter and more FPS.

Fewer benchmarks and more gameplay, please.

I have spent a great deal of time gaming on a 2GB GTX 960 in my HTPC and I have not seen all the stuttering issues you speak of. I game on this card at 1080p on a TV and it plays very well.
 