VRAM to the Test: How Much Memory Is Enough?

Therefore it would be interesting to see 2GB compared to 4GB cards in SLI/CF configurations and tested at 4K.
Kind of sounds like academic interest only, IMO. It's trying to make a case to show a difference* in what amounts to a basically unrealistic scenario. Crossfired R9 380s would be extremely marginal for 4K use, and generally cost more ($360-400) than a single R9 390 ($275-300), which is a better performer and comes without the headache of multi-GPU driver profile issues.
The same could be said for the GTX 960. A couple of 4GB cards run around $400, yet an AIB OC'd GTX 970 at $285-320 comfortably outperforms them, given the vagaries of SLI scaling.

I'm actually trying to think of a scenario where someone would voluntarily choose two comparatively expensive, lower-performing cards over a single card that is more powerful, cheaper, and probably less power-hungry. I could understand the dilemma if the user already had a single 2GB/4GB 380/GTX 960 and wanted to double up, but then a comparison is pretty superfluous since they'd just get another of what they already own. The only other scenario doesn't make a lot of sense: owning a 2GB card and needing to decide whether to buy another 2GB card, or ditch the one you have and buy two 4GB cards that cost more than a single, more powerful card. Even if you managed to score a couple of 380s/960s as cheaply as the single 390/GTX 970 OC card, why would anyone compromise after plunking down a sizable amount of cash on a 4K panel?

Just seems all academic, really, unless I'm missing a scenario that is "real world".

* There will obviously be differences. A 2GB pair will need to swap textures in/out of system RAM more often. Moving those textures over the PCIe bus causes GPU stalls (framerate dips), but at 4K (or using FSAA/downsampling) the bigger issue will always be the lack of GPU power (fillrate).
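
As a rough sanity check on how much those swaps can hurt, here's some back-of-the-envelope math (the bandwidth and swap-size figures are assumptions for illustration, not measurements):

```python
# Back-of-the-envelope: how long does swapping textures over PCIe take?
# Assumed numbers: ~12 GB/s of practically achievable PCIe 3.0 x16
# bandwidth, and a 512 MB batch of textures that no longer fits in a
# 2GB card's VRAM.
pcie_bandwidth_gbs = 12.0   # GB/s, real-world PCIe 3.0 x16 estimate
swap_size_mb = 512          # MB of textures to stream from system RAM

transfer_ms = (swap_size_mb / 1024) / pcie_bandwidth_gbs * 1000
frame_budget_ms = 1000 / 60 # ~16.7 ms per frame at 60 fps

print(f"Transfer time: {transfer_ms:.1f} ms "
      f"(~{transfer_ms / frame_budget_ms:.1f} frame budgets at 60 fps)")
# ~41.7 ms, i.e. roughly 2-3 dropped frames every time a swap that size hits
```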
 
What CPU are you using, just out of interest?

I just played a bit of Assassin's Creed Syndicate with a 660 Ti on my Skylake test system and it ran very well using the high quality settings at 1080p.

The minimum was 39fps with a 49fps average. The frame time data looked good as well.

I have an i7-860 at the stock speed of 2.8GHz from the LGA 1156 days. Is that why it is not running right?
 
You're not going to notice much until you hit 4K resolutions, usually. The omission of 4K testing is confusing to me.
 
I have the same CPU running at the same stock speed (though I have a GTX 960 2GB). I've also feared that it might be starting to become a bottleneck. So I did a test with MSI Afterburner last night, running Assassin's Creed Syndicate at 1080p in borderless fullscreen mode (so no in-game v-sync), locked to 60 FPS with RivaTuner. I put the settings at High, with the exception of Textures, which were at Medium. To compensate, I put the ambient occlusion up to HBAO+. I got framerates between 40 and 55, with rare dips in cutscenes and rare peaks of 60 when looking at the sky. Frame times looked right in comparison.

(Sidenote: I usually play with the game locked to 40 FPS in an attempt to smooth it out a little more. I tried locking at 30 FPS, but that felt too juddery to me -- maybe I'm just spoiled by 60 FPS. Frametimes become quite consistent if you lock it.)
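
If you're curious why locking the framerate evens out frame times, here's a minimal sketch of what a limiter like RivaTuner does in spirit -- render_one_frame is a made-up placeholder, and a real limiter hooks the driver's present call rather than sleeping:

```python
import time

def render_one_frame():
    """Stand-in for the game's real per-frame work (name made up)."""
    time.sleep(0.005)  # pretend rendering takes ~5 ms this frame

def frame_limited_loop(target_fps=40, frames=120):
    """After rendering, wait out the rest of the frame budget so every
    frame takes (close to) the same wall-clock time."""
    budget = 1.0 / target_fps
    deadline = time.perf_counter() + budget
    for _ in range(frames):
        render_one_frame()
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)   # pad fast frames up to the budget
        deadline += budget          # fixed cadence -> consistent frame times

frame_limited_loop()
```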

Looking at the CPU logs: all logical cores are in use, with Core #1 seeing the highest activity, but only around 70-80%, with occasional spikes above 80%. The other cores are all a little lower, in the 50-60% range. To me, it looks like the i7-860 gets a bit of a workout but is not pushed to the max.
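
For anyone who wants to reproduce this kind of per-core log without Afterburner, a quick sketch using Python's psutil (the one-second sampling window and ten-second duration are arbitrary choices):

```python
import psutil  # third-party: pip install psutil

# Sample each logical core's load once a second for ten seconds,
# roughly what the Afterburner log shows per core.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(", ".join(f"#{i}: {p:.0f}%" for i, p in enumerate(per_core)))
# One core pinned near 100% while the rest idle would suggest a
# single-thread CPU bottleneck; 70-80% peaks leave some headroom.
```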

I'd actually love to see an article on TechSpot, or anywhere, that pairs older CPUs like the i7-860 with modern GPUs to see how much bottlenecking there is. For instance, this CPU is technically below the minimum specs for The Witcher 3, but I was able to run it quite well. You could even add overclocking to see how much that remedies the bottlenecks, if there are any. If anyone knows of such an article, please let me know.

This article was quite interesting as well; thanks for that, Steve!
 
Yes, at just 2.8GHz that CPU will be creating a system bottleneck. I would try my hand at some overclocking if I were you ;)
 
You could have tried something ridiculous in those tests, like Skyrim with all possible model overhauls and HD texture packs, or running 2-3 games at the same time on different monitors (you know, like "multi-boxing" some MMO). Also, I'm not sure, but does streaming to Twitch or YouTube demand additional VRAM as well?

Right now the situation is that game developers are optimizing for the most common VRAM sizes... that's why the results are what they are. Boosting the VRAM size on cards might help raise that average, giving developers more room to make even better games.

Right now we barely have any DX12 games on the market... those could potentially use more polygons and objects (and thus more textures) on screen at the same time... maybe that would change something? Also worth testing.
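
To put some rough numbers on how fast HD texture packs eat VRAM (uncompressed sizes; real engines use texture compression, so treat this as an upper bound):

```python
def texture_mb(side, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed size of a square RGBA8 texture; the full mip chain
    adds roughly a third on top of the base level."""
    base = side * side * bytes_per_pixel / 1024 ** 2
    return base * 4 / 3 if mipmaps else base

for side in (1024, 2048, 4096):
    print(f"{side}x{side}: ~{texture_mb(side):.0f} MB")
# 1024 -> ~5 MB, 2048 -> ~21 MB, 4096 -> ~85 MB: a few dozen uncompressed
# 4096x4096 textures alone would swamp a 2GB card, which is why texture
# packs and mods are the quickest way to blow past a VRAM budget.
```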
 
So, as a consumer new to PC gaming looking for VHQ-to-ultra settings at 4K, pushing to a 65" JU7100 for gaming and home theater, and to an 88" SUHD JS9500 (maybe simultaneously on occasion), what would you suggest? It sounds like using multiple GPUs is the only option. Maybe a 4GB card for HTPC purposes and an 8GB card for the 4K gaming?

From what I am reading, at 4K during intensive scenes, while VRAM usage is not off the charts, there is a bottleneck somewhere. What is causing the significant frame drops during those times? Is it, as suggested in the comments, a matter of bandwidth? What would some of you suggest for a setup capable of handling 4K video and 8 audio channels for HTPC purposes, and also 4K gaming, for the ultimate experience: lots of detail and butter-smooth transitions? What is the gold-standard GPU for 2016?
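
My rough math on the pixel side of it (just naive scaling with an assumed 1080p baseline, so correct me if I'm off):

```python
# Naive pixel scaling: if a GPU just manages 60 fps at 1080p, pushing
# four times the pixels at 4K lands near 15 fps before bandwidth or
# VRAM even enter the picture.
pixels_4k = 3840 * 2160       # ~8.3 million pixels per frame
pixels_1080p = 1920 * 1080    # ~2.1 million
fps_1080p = 60                # assumed baseline, for illustration

print(f"Estimated 4K framerate: {fps_1080p * pixels_1080p / pixels_4k:.0f} fps")
```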
 
The 88" SUHD JS9500 is a $20,000 TV, are you honestly asking if you should by a second $300 GPU?
 
Just a quick report back: I upgraded from the 2.8GHz i7-860 to the 4.0GHz i7-6700K. I didn't notice many changes -- framerate averages were pretty much the same as in my initial post. Though I didn't do an exact comparison, any changes would be pretty insignificant, I venture. So I'd say the i7-860 isn't really the bottleneck in this instance. :)
 
Cool! Thank you for that! I really just need the funds to replace this video card, like I suspected. But I'm in one of those "not so sure what's a good buy, video-card-wise, as a replacement for the one I have" situations. I have looked at the GTX 960, but it doesn't inspire me. The GTX 970 is basically at the price point I paid for the GTX 660 Ti I currently own. Hmmm.....
 
Well, here's the thing: what Steve implied is true. The CPU will start to hold you back eventually -- soonish, I'd say -- depending on the game. If you're looking to buy a new card, you should take into account the possibility that anything above a certain range (let's say the GTX 960) will be held back by the processor. This is why I upgraded to Skylake now, so I can upgrade the GPU later without fear of bottlenecks. I'm waiting for NVIDIA's new Pascal generation to see what it does.

In any case, if you want to upgrade just the GPU right now: as an owner of the GTX 960, I can report that it's becoming less capable of running things at 1080p60 as VRAM requirements increase. Going back to the original intent of this article, getting the 4GB model hardly helps. Again, it all depends on how well the game is optimised, of course. So yeah, I hope that helps you make an informed decision.
 
I think it is also hard to decide, since Nvidia doesn't even cover the price bracket between the 960 and 970. Going from a 960 to a 970 is a big price jump. Back when the GTX 660 Ti came out, it filled that sweet spot, price-wise, between the 660 and 670. Essentially, the 660 Ti was a 670, just with a different memory bus/controller. Nvidia even had the 650 Ti Boost to fill the gap between the 650 and 660. Instead, Nvidia this go-around comes out with a GTX 950, which I find to be useless; might as well spend a few bucks extra for the 960. Maybe the 950 was intended to replace the 750 Ti? I am glad to hear someone who has a 960 tell me that it is already starting to feel behind, even at 1080p gaming.
 
Yeah, you're right. In fact, I too had the 660 Ti, until it gave up the ghost. I upgraded to the then brand-new 960, but only because there wasn't a 960 Ti and, as you say, the 970 was a bit much at the time. Though if you look at the lifespan of the two cards, it's obvious the 970 will far outlive the 960. I may go for the x70 range from now on, if I can pull it off financially.
 