VRAM to the Test: How Much Memory Is Enough?

I just put the Radeon R9 270X in my test system and played the game using the Ultra quality settings at 1080p. It averaged 48fps in the built-in benchmark and 41fps in the story mode.

Using the Very High preset at 1080p:
Built-in benchmark = 59fps
In-game story mode, attacking the orcs in the rain = 49fps

So it seems the scene at the start of the game where you first fight the orcs is more demanding than the built-in benchmark.

Then once again something is strange there, as I still maintain a steady framerate in that scene.
 

Nothing strange at all; this is very normal. What isn't normal is for a GPU to maintain the same frame rate under varied conditions, unless of course you have vsync enabled.

This guy gets between 50 and 80fps on his overclocked R9 270 at 1080p using the High settings (NOT Very High, just High)...
 
Interesting benchmarks. I have Assassin's Creed Syndicate, and my GTX 660 Ti 2GB just chokes on it. I can hardly play even with the settings turned all the way down, and that is at 1080p. I wasn't sure if it was just my card, or if Ubisoft just has poor optimization. Seeing your benchmarks of it here on the GTX 960, it looks like it is playable at 1080p on high settings, but above that it just chokes. On Nvidia's website, their benchmarks claim that for the best 1080p experience you need a GTX 970, though I believe that was at ultra settings. Nice to know that a GTX 960 with 2GB wasn't any slower than the 4GB model. Thank you for the article!
 
I did not find these results surprising since even newer games like Assassin's Creed: Syndicate are designed around the memory limitations of the consoles. In order to really consume some VRAM, there needs to be a PC game with a greater number of textures loaded for a scene, or there need to be much higher resolution textures, as with Rainbow Six. Otherwise, the only way to eat up the VRAM is to manually edit a game's settings and push the min/max shadow mapping resolution way, way beyond normal. Even at 1080p it's possible to suck up the VRAM that way, and the high-res shadow maps are probably closer to what we'd see with "future games." (Well, assuming other methods aren't viable by then.) At the same time, this test isn't "optimal" based upon performance versus shadow quality, since the shadow map resolution would be overkill in most cases, so it would be gimmicky at best.
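
To put a rough number on the shadow map point above, here is a back-of-the-envelope sketch. The 32-bit depth format and four cascades are illustrative assumptions, not values taken from any particular game:

```python
# Rough VRAM cost of a cascaded shadow map: resolution^2 * bytes per texel * cascades.
# A 32-bit (4-byte) depth texel and four cascades are assumptions for illustration only.
def shadow_map_mib(resolution, bytes_per_texel=4, cascades=4):
    return resolution ** 2 * bytes_per_texel * cascades / (1024 ** 2)

for res in (2048, 4096, 8192):
    print(f"{res}x{res}, 4 cascades: {shadow_map_mib(res):.0f} MiB")
# 2048 -> 64 MiB, 4096 -> 256 MiB, 8192 -> 1024 MiB
```

Which is why cranking shadow resolution far past its defaults is one of the few easy ways to blow through a 2GB card even at 1080p.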
 
Interesting benchmarks. I have Assassin's Creed Syndicate, and my GTX 660 Ti 2GB just chokes on it.

What CPU are you using just out of interest?

I just played a bit of Assassin's Creed Syndicate with a 660 Ti on my Skylake test system and it ran very well using the high quality settings at 1080p.

Minimum was 39fps with a 49fps average. The frame time data looked good as well.
 
It would be very misleading to suggest that the 290 and 390 are anywhere near powerful enough to run at 4K. As I said in the article, in many of the tests we aren't even eating up all the VRAM and yet performance is dropping below acceptable levels. Increasing the resolution further just to show that the 390 8GB can render 11fps and the 290 4GB 5fps doesn't really prove anything worthwhile and would in itself be misleading.

Bollocks, mate. In the past 12 months I've owned a 290X Lightning, a 280X Tri-X OC, a 290 Tri-X OC and a 980 Ti. I used every one of them at 4K in GTAV (the only game in your test that I play). Both the 290 and 290X were capable of a solid 30fps at 4K (v-sync on; with it off, the benchmark varied between 35 and 45fps on both) with every setting besides grass (high) on very high/ultra and no AA.

My opinion on this article has gone from "misleading" to "the author doesn't know what he's talking about". To proclaim the 290/390s "nowhere near powerful enough to run at 4K" is either a gross exaggeration or an admission of ignorance. If you guys are too lazy to do proper testing before reaching a verdict, you're welcome to send me the cards and I'll test it myself and write the article for you. Better than half-assing it and possibly reaching the wrong conclusion and spreading it as fact anyway.
 

I would rather leave it at "I don't know what I am talking about" than argue that the 290X/390X is powerful enough for 4K gaming, a position I am very much standing by.

Your 280X Tri-X OC must have been very impressive; I would never have thought the Radeon HD 7970 would be a decent 4K gamer. Well, it seems we were 4K ready many years ago.
 

The 280X required much more massaging of settings than the 290s to attain 30fps. Sadly (not really) it had issues from day one and only made it about a week before being returned and replaced with the 290 Tri-X.

As for the 290s at 4K, your own tests returned near 70fps at 1600p on comparable settings (I don't like FXAA; it makes everything blurry, imo). Not sure why you'd think it's a stretch that they could maintain 30fps at 2160p with similar settings.
 
You are testing at the obsolete (at least to me) 1080p resolution. I play exclusively at 4K and utilize more than 4GB of VRAM every time.
 
As for the 290s at 4K, your own tests returned near 70fps at 1600p on comparable settings. Not sure why you'd think it's a stretch that they could maintain 30fps at 2160p with similar settings.

These aren't consoles, so I don't find a 30fps average anywhere near acceptable, especially when spending this much on hardware. Without any form of AA there are still far too many jaggies at 4K, and FXAA comes at a very minimal performance hit.

There are 102% more pixels at 4K than at 1600p, so averaging 70fps at 1600p doesn't mean much. Basically you are looking at a quarter of the performance seen at 1080p, which means in games like Crysis, using the very high settings with no form of AA, you are looking at an average of 14fps.

But hey, we received 80fps in Assassin's Creed Syndicate without any AA enabled, so you are good for around 20fps at 4K.
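
For reference, the pixel arithmetic behind those estimates works out as below. This is only a rough sketch: it assumes frame rate falls roughly in inverse proportion to pixel count, which real games only approximate.

```python
# Pixel counts behind the resolution scaling estimates above. Frame rate is assumed
# to fall roughly in proportion to pixels rendered (an approximation, not a rule).
resolutions = {"1080p": (1920, 1080), "1600p": (2560, 1600), "4K": (3840, 2160)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

extra_vs_1600p = (pixels["4K"] - pixels["1600p"]) / pixels["1600p"] * 100
scale_vs_1080p = pixels["4K"] / pixels["1080p"]

print(f"4K has {extra_vs_1600p:.0f}% more pixels than 1600p")  # ~102%
print(f"4K has {scale_vs_1080p:.1f}x the pixels of 1080p")     # 4.0x
# So an 80fps average at 1080p works out to roughly 80 / 4 = 20fps at 4K.
```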

It’s not like we haven’t tested every current generation GPU extensively at 4K…
https://www.techspot.com/review/1011-nvidia-geforce-gtx-980-ti/

If you play at 4K on a lowly 390 then good on you. I am sure most of the Internet agrees with me when I say there still isn't a single GPU that can really deliver acceptable 4K performance.

You are testing at the obsolete (at least to me) 1080p resolution. I play exclusively at 4K and utilize more than 4GB of VRAM every time.

First of all, we tested at 1600p as well, which is slightly more demanding than the increasingly popular 2560x1440 resolution. As I said above, the cards tested have no business at 4K, especially the GTX 960 and R9 380.

It is nice that you play at 4K, but even at that resolution you aren't using more than 4GB of VRAM 'every time'. In fact, most of the games I tested don't use more than 4GB of VRAM even at 4K. Moreover, there are a few tests online that compare the 290X and 390X clock for clock at 4K and found no difference in performance; frame rates were miserably low, however.
 

Ugh, the 30fps = consoles thing is silly, and I disagree entirely on both that point and jaggies in 4K. Playing at a locked 30fps that never dips is an entirely different animal from consoles that vary between 20 and 30fps. Plus, none of those cards had HDMI 2.0, so my choice was 4K at 30fps on my 55" 4K TV, or 1080p on my 55" 4K TV. I went with 4K at 30fps for pretty obvious reasons.

As for jaggies? I sit 6' from my 55". I see no jaggies unless I am actively looking for them from about 3' away and doing nothing else. 4K with no AA has about the same amount of jaggies as 1080p with 4x MSAA; are you saying 4x MSAA at 1080p is far too many jaggies? As for FXAA, yes, very minor performance hit, I personally just think it looks like garbage. I like 4K because of the definition of the textures it brings out, and FXAA makes everything look blurry (in every case I've tried it), so I don't use it.

Tested extensively? You seem to have just turned everything to ultra regardless of playability and shown the results, without any attempt to actually demonstrate whether or not it's feasible to run the games at 4K on the cards tested. If the standard for you is "everything cranked to the max regardless of whether it makes sense or not" then of course you're going to conclude that no single card can run 4K. By those standards, most cards aren't even capable of 1080p. Throwing in your 30fps-is-for-consoles attitude, the only single card in the VRAM test "capable of 1080p" is the 290/390... so why did you even bother testing the others at 1080p, let alone 1600p? You're contradicting yourself.

I disagree with you and the majority of the internet on no single card being capable of playing at 4K. They make it, and it's called the 980 Ti. Sure, I cannot absolutely max out every setting at 4K and get 60fps in AAA titles. I can, however, play with basically every setting maxed that makes a visual difference while toning down others that make little to none. Grass in GTAV is an example of a setting that makes very little visual difference between "high" and "ultra" but gives a significant performance benefit when turned down. God rays in Fallout 4 are another example: a tiny difference between medium and max visually, yet a huge performance gap. (Side note: Fallout 4 at 4K uses every last MB of VRAM my 980 Ti can muster... but... but... anything more than 4GB is marketing hype!) And of course, anti-aliasing in everything is pointless, in my opinion, at 4K.

Perhaps with the next article you do, you could put more emphasis on actually testing what it is you claim to be testing and less on overly defensive comment replies full of snark, biased opinion and contradictory statements on what constitutes "playable". I'm not impressed by either your VRAM or 4K article; both seem to have been done in a way to reach a predetermined belief held by the author rather than to actually come to a scientific conclusion, which is bad for everyone. It leads to things like "most of the internet" agreeing that "no single card can run 4K".

When authors on purportedly reputable tech sites half-*** benchmarks because they're too lazy or biased to do the test right, most readers will still take the conclusion at face value, and misinformation like "no single card can run 4K" or "anything more than 4GB of VRAM is marketing hype" spreads, because most people don't know any better (and don't get tons of free review samples to find out for themselves) and look to sites like this for that information. By not being thorough, for whatever reason, you're actively making gamers less informed, which I thought was the exact opposite of the point of articles like this.
 
A few extra numbers to help show what is happening.

First, current GPUs have a decent balance between the quantity of VRAM and how often they can access that VRAM. In a post about this article on another forum, someone commented that older cards showed a more obvious performance improvement when increasing VRAM.

AMD/ATI's Radeon HD 4870 was released in 512MB, 1GB, and 2GB models, and the 1GB and 2GB cards showed an improvement over the 512MB card. The bandwidth-to-capacity ratio shows the GPU could cycle through 512MB 230 times per second, 1GB 115 times per second, and 2GB 57 times per second. Those were very capable numbers at the time.

Then you have the R9 380 used in this article. The R9 380 has the bandwidth to cycle through 2GB of data 91 times per second, or 4GB of data 45 times per second. As shown, the R9 380 does see some improvement in certain games, meaning its bandwidth permits working with a larger pool of data per frame.

For the GTX 960, it can cycle through 2GB of data 56 times per second. To use 4GB worth of data per frame, the GTX 960 can only access its memory 28 times per second, which makes it impossible to achieve 30fps when actively using 4GB worth of data.

Now looking at the R9 290 and R9 390: the R9 290 can cycle through 4GB of data 80 times per second, while the R9 390 can cycle through 8GB of data 48 times per second, or 4GB of data 96 times per second. The issue here is that these two GPUs are not being pushed.

Continuing into the future with DX12 improvements, I would recommend AMD's Fury X for handling DX12 over the GTX 980 Ti or Titan X. The 980 Ti and Titan X are already fully utilized under Nvidia's DX11 drivers and can't handle actively using more data than they already do with DX11. The Fury X can cycle through 4GB worth of data 128 times per second. The only slowdown I can see immediately is that the Fury X has only 64 ROPs while the Titan X and 980 Ti have 96, yet the Titan X and 980 Ti lack the memory bandwidth to fully utilize those 96 ROPs, although their caching helps to compensate for that.

If there is something to take away from this, it's that Nvidia's GTX 900 series is running at max capacity with no room for improvement to show off DX12, while AMD's cards have spare capacity in various areas that the DX12 drivers can use. The difference is how Nvidia's DX11 drivers operate versus how AMD's DX11 drivers operate. A benchmark comparing the Fury X to the Titan X or 980 Ti on a single core, single core with Hyper-Threading, dual core, dual core with Hyper-Threading, quad core, and quad core with Hyper-Threading in the same CPU family may be a worthwhile test.
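
The ratios quoted in the post above come down to dividing each card's peak memory bandwidth by its VRAM capacity. A minimal sketch, using reference-spec bandwidth figures consistent with the numbers in the post (board-partner cards with different memory clocks will land slightly higher or lower):

```python
# Upper bound on how many times per second a GPU could touch its entire VRAM:
# peak memory bandwidth (GB/s) divided by VRAM capacity (GB). Bandwidths below
# are reference-spec figures; partner cards with other memory clocks will differ.
cards = {
    "HD 4870 512MB": (115.2, 0.5),
    "HD 4870 1GB":   (115.2, 1),
    "R9 380 2GB":    (182.4, 2),
    "R9 380 4GB":    (182.4, 4),
    "GTX 960 2GB":   (112.0, 2),
    "GTX 960 4GB":   (112.0, 4),
    "R9 290 4GB":    (320.0, 4),
    "R9 390 8GB":    (384.0, 8),
    "Fury X 4GB":    (512.0, 4),
}

for name, (bandwidth_gbs, vram_gb) in cards.items():
    passes = int(bandwidth_gbs / vram_gb)  # full-VRAM passes per second, rounded down
    print(f"{name}: ~{passes} full-VRAM passes/sec")

# e.g. GTX 960 4GB works out to ~28, so it could not stream 4GB of unique data
# per frame at 30fps, which is the post's point about capacity outpacing bandwidth.
```

This is only a ceiling; caches and data reuse mean a real frame touches far less than the full framebuffer, but it illustrates why extra capacity without extra bandwidth doesn't buy much.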
 
Ugh, the 30fps = consoles thing is silly, and I disagree entirely on both that point and jaggies in 4K.

Look mate, I don't have time to argue with you. If you think a constant 30fps is acceptable and no worse than 60fps+ then I can't help you. There is plenty of evidence online supporting what I am saying, so I really don't need to add anything extra.

http://www.30vs60fps.com/

Watch the 30fps vs. 60fps comparison videos at 1080p/60fps [embedded videos]; there are hundreds more like them online.

Input lag is real at 30fps; games such as first-person shooters feel rubbish at 30fps compared to 60fps, and racing games are much the same. Project CARS and F1 2015 feel far more realistic at 60fps.

Let's leave the anti-aliasing argument alone as well. Clearly this is subjective, but I did LOL a bit when you said there are more jaggies at 1080p using 4x MSAA ;)

http://www.overclock.net/t/1516395/is-anti-aliasing-pointless-in-4k

Also here is some 290X 4GB vs. 390X 8GB testing at 4K (low to no AA used), enjoy…

Rather than ranting on like a fool about how we are lazy, biased or whatever else you are going to come up with, how about you link us to some credible sources that back up what you are saying?
 
I think you might be missing the main point.
8GB of VRAM vs. less makes a BIG difference in games that need the memory for full (HD) textures. Grand Theft Auto V and Shadow of Mordor are two examples; they use up to 7GB. In the future, more and more VRAM will be needed for these high-res textures.

This applies to high-end gamers who want 60fps on everything at max settings.

If it's only $50-$100 more, then you might as well future-proof yourself.
 

Sorry, that's missing the point. More VRAM doesn't make a GPU more powerful. It has been proven a number of times that when loading the 390X up with 6GB+ of data it is still no faster than the 290X; it is a horsepower issue.

Spending more on the 390X over the 290X to "future-proof yourself" is crazy. You are spending more money in the hope that one day it will pay off, while all you know with absolute certainty is that right now it is a complete waste of money. How long do you plan to hold onto a graphics card such as the 390X in the hope that the investment will pay off?

I am almost 100% confident that by the time games are regularly pushing up to 8GB of VRAM usage, the 390X will be nowhere near powerful enough to deliver playable performance under those conditions.
 
Sorry, that's missing the point. More VRAM doesn't make a GPU more powerful. It has been proven a number of times that when loading the 390X up with 6GB+ of data it is still no faster than the 290X; it is a horsepower issue.
Having fun beating your head against a wall?
I think regardless of how many times you explain that the GPU runs out of gas before the vRAM becomes the limiting factor, some people are still going to obsess about a few percentage points being a deal breaker even if the game is borderline playable at best.
8GB of VRAM vs. less makes a BIG difference in games that need the memory for full (HD) textures. Grand Theft Auto V and Shadow of Mordor are two examples.
Really? Most comparisons I've seen using HD texture packs for both tend to back up what Steve is trying valiantly to get across. At 19x10 and 25x16 resolutions vRAM usage has virtually no impact, while 4K and above starts to see separation... but at frame rates virtually no one would willingly subject themselves to. Rather, I suspect, people would turn down the game IQ to get a better gameplay experience, which again nullifies the framebuffer disparity.
Here's an example of Shadow of Mordor 4GB vs. 8GB scaling with the aforementioned HD texture pack, put together by Scott Wasson, formerly of The Tech Report and now an employee of AMD's RTG. Please note how the separation between the 4GB and 8GB cards really only increases once they are well past 4K resolution and below 30fps.

mordor-290x-390x.gif


And I'll quote Scott here, who mirrors Steve's own observations. Quelle surprise!
The 290X's 4GB of memory doesn't put it at a relative disadvantage at 4K, but the cracks start to show at 5760x3240, where the gap between the two cards grows to four FPS. At 7680x4320, the 4GB card is clearly struggling, and the deficit widens to eight FPS. So we can see the impact of the 390X's added VRAM if we push hard enough.

From a purely practical standpoint, these performance differences don't really matter much. With FPS averages of 16 and 20 FPS, respectively, neither the 290X nor the 390X produces playable frame rates at 5760x3240, and the highest resolution is a slideshow on both cards.
[Source]
So well done Hammayon, you have definitely proven that hardware vendors' marketing is indeed very effective. I can now see why they went to the trouble of offering 4GB versions of the GT 640 and R7 250E.
 

Why can't you like a post 10 times? :)

Those results remind me of what I discovered many years ago now...
http://www.legionhardware.com/articles_pages/gigabyte_geforce_gtx_680_4gb,4.html
 
6GB on the 390X vs. the 290X's 4GB? Are you kidding me? Which game are you referring to? Seriously, tell me it's not a game that doesn't utilize more than 4GB of VRAM. It only affects those games that use more than 4GB of VRAM.

Not all games use more than 4GB. Most games don't. So of course there is no effect for those!!!!! Test a game that uses the extra textures for **** sake *******.

Secondly, if performance does get worse, it's not necessarily the framerate; it's microstuttering and other visual problems.

Finally, no matter what, you are wrong.

Texture memory requirements have been steadily increasing over the years, which is why VRAM keeps increasing. Obviously it increases because it's being used in games.

This trend will obviously continue, as it has been happening for 20 years.

So obviously get the extra VRAM if you're buying a high-end card, dishing out 700 bucks or more.

Of course, I assume you people all have good-paying jobs and are educated and well off.

Otherwise, don't buy the extra VRAM, save your money and cut down the settings. You have no business maxing out settings anyway if you don't have a high-end card, period.
 

You obviously don't bother reading the information given to you, so I am not going to waste my time. That said, feel free to provide some actual evidence that supports what you are saying.

Tag @dividebyzero, I'm out :p
 
6GB on the 390X vs. the 290X's 4GB? Are you kidding me? Which game are you referring to? Seriously, tell me it's not a game that doesn't utilize more than 4GB of VRAM. It only affects those games that use more than 4GB of VRAM.
I think you will find that "usage" has already been explained both here and elsewhere. Usage includes allocation. The kernel driver and OS will allocate vRAM if they detect that more is available, hence different vRAM "usage" depending upon framebuffer size.
Not all games use more than 4GB. Most games don't. So of course there is no effect for those!!!!! Test a game that uses the extra textures for **** sake *******.
You mean like Shadow of Mordor with the HD texture pack, which you were convinced would show the disparity between framebuffers? The same Shadow of Mordor with HD texture pack results I linked to that directly refute what you are saying?
Secondly, if performance does get worse, it's not necessarily the framerate; it's microstuttering and other visual problems.
Here's an idea. When someone posts links to illustrate a point, maybe you should take some time to follow and read them...
As you can see, the 8GB Radeons avoid these frame-time spikes above 50 ms. So do all of the GeForces. Even the GeForce GTX 780 Ti with 3GB manages to sidestep this problem.

Why do the 4GB Radeons suffer when GeForce cards with 4GB don't? The answer probably comes down to the way GPU memory is managed in the graphics driver software, by and large. Quite possibly, AMD could improve the performance of the 4GB Radeons in both Mordor and Far Cry 4 with a change to the way it manages video memory.

So, it seems the issue is less hardware related than AMD deciding not to optimize for the smaller framebuffer. In this case, hand waving that issue away and providing a solution of buying a more expensive card with a higher vRAM capacity reeks of marketing.
Finally, no matter what, you are wrong.
The "no matter what" being virtually every independent tech review comparison, including one done by a reviewer who has been offered a position with a graphics company because of his competency.
These people are all wrong because some anonymous forum poster citing no proof says so? Colour me skeptical... and very much amused.
Yep, sometimes it's best just to close the door on the padded cell and leave them to their own devices (high framebuffer ones obviously!).
 
I'm sad to see that you guys didn't come to the conclusion of testing all these cards in Crossfire and SLI configurations, since you stated that for graphics cards that are not powerful enough to utilize the VRAM, it's used as a marketing gimmick.
This is not necessarily true in multi-GPU setups, so I would like to see another benchmark review of all these cards in Crossfire and SLI configs, also tested at 4K in addition to 1080p and 1600p.
 

We made note in the conclusion that this is possibly the only advantage. That said, I don't think it is necessary to test the GTX 960 or R9 380; are you really looking at buying two of these graphics cards? For much less, the R9 390 or GTX 970 deliver better performance.

I would like to do some Crossfire testing with the 290/390 cards though.
 
I'm sad to see that you guys didn't come to the conclusion of testing all these cards in Crossfire and SLI configurations, since you stated that for graphics cards that are not powerful enough to utilize the VRAM, it's used as a marketing gimmick.
This is not necessarily true in multi-GPU setups, so I would like to see another benchmark review of all these cards in Crossfire and SLI configs, also tested at 4K in addition to 1080p and 1600p.
I'm sure Steve would happily oblige if you provide the necessary additional hardware.
We made note in the conclusion that this is possibly the only advantage. That said, I don't think it is necessary to test the GTX 960 or R9 380; are you really looking at buying two of these graphics cards? For much less, the R9 390 or GTX 970 deliver better performance.

I would like to do some Crossfire testing with the 290/390 cards though.
As you say, Crossfiring/SLIing a couple of lower-tier cards is a losing game in comparison with one single more powerful card, but I'd also have to wonder about high-end cards. A couple of enthusiast cards are going to need some CPU horsepower to drive them to their potential. Assuming you're investing in a decent CPU, system and cooling to maximize performance, would anyone skimp a few dollars to get a lower vRAM capacity card even if one were available? Offhand, I'd think you would have to compare current cards with those of an older EoL'ed series. None of the current high-end cards have multiple vRAM configurations, and comparing earlier cards in Crossfire with 4GB vs. 8GB doesn't seem to produce any real differences; for example, Sapphire's 290X 8GB card in CFX vs. the 295X2 (effectively 290X 4GB CFX):
BF44K4.png
 
We made note in the conclusion that this is possibly the only advantage. That said, I don't think it is necessary to test the GTX 960 or R9 380; are you really looking at buying two of these graphics cards? For much less, the R9 390 or GTX 970 deliver better performance.

I would like to do some Crossfire testing with the 290/390 cards though.

I'm not personally interested in buying a 960 or 380; I have an R9 295X2 and a 4770K OC'd to 4.5GHz in my setup. I'm mainly just interested in the question of VRAM and its pros and cons. For example, it seems to me that a 290X/390X with 8GB of VRAM is overkill and they could have stopped at 4GB, or at least 6GB.

Above 4GB it seems no longer necessary to have more VRAM, not even for 4K gaming. It's already shown in this review that even though GTA V, for example, used 4.5GB of VRAM, there was no difference between the 4GB R9 290 and the 8GB R9 390. Therefore it would be interesting to see 2GB compared to 4GB cards in SLI/CF configurations, tested at 4K.
 