Nvidia CEO says GTX 970 memory controversy "won't happen again"

Scorpus


For the past month, Nvidia has faced its fair share of bad PR thanks to the GeForce GTX 970's memory allocation issue. As users uncovered and Nvidia later explained, the last 512 MB of the card's 4 GB of VRAM has reduced bandwidth, making it slower than the first 3.5 GB. This has led owners to complain that it isn't a true 4 GB card.
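The slow segment was first demonstrated by users running small CUDA probes (Nai's benchmark being the best-known example). Below is a minimal sketch of the same idea, not any specific tool: allocate the card in fixed-size chunks, then time writes to each one. The chunk size, launch configuration, and repetition count are arbitrary illustrative choices, and the driver may not place allocations linearly, so the output is indicative at best.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Grid-stride write loop: repeatedly stores into one chunk so the elapsed
// time is dominated by memory bandwidth rather than launch overhead.
__global__ void touch(float *p, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t j = i; j < n; j += stride)
        p[j] = (float)j;
}

int main() {
    const size_t chunkBytes = (size_t)128 << 20;  // 128 MB per chunk (arbitrary)
    const size_t n = chunkBytes / sizeof(float);

    // Grab VRAM in 128 MB slices until allocation fails (close to 4 GB on a
    // GTX 970 with nothing else running; the desktop holds some VRAM too).
    std::vector<float *> chunks;
    for (;;) {
        float *p = nullptr;
        if (cudaMalloc((void **)&p, chunkBytes) != cudaSuccess) break;
        chunks.push_back(p);
    }

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // Time writes to each chunk separately. On a GTX 970, chunks the driver
    // places in the final 512 MB segment should report far lower GB/s.
    for (size_t c = 0; c < chunks.size(); ++c) {
        const int reps = 10;
        cudaEventRecord(t0);
        for (int r = 0; r < reps; ++r)
            touch<<<1024, 256>>>(chunks[c], n);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        double gbps = (double)reps * chunkBytes / (ms / 1000.0) / 1e9;
        printf("chunk %2zu (%5zu MB in): %6.1f GB/s\n",
               c, c * (chunkBytes >> 20), gbps);
    }

    for (float *p : chunks) cudaFree(p);
    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    return 0;
}
```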

Now, for the first time since the memory allocation issue was uncovered, Nvidia CEO Jen-Hsun Huang has spoken publicly about it. In an open letter, Huang says that Nvidia "failed to communicate" the true design of the GTX 970 to its own marketing team and to users: "This new feature of Maxwell should have been clearly detailed from the beginning."

Huang states that Nvidia's only intention was "to create the best GPU for you", and part of creating the best GPU for today involves providing 4 GB of memory for games that require it. Nvidia should have been more transparent about the GTX 970's memory design, with Huang stating that "we won't let this happen again".

Despite Huang's apology for the GTX 970's issues, it looks like the company may have to defend the card in court. An owner of the card has filed a class-action lawsuit with the US District Court for Northern California, claiming that Nvidia misled customers about the GTX 970's performance.

If a judge decides that there is merit to the lawsuit, and Nvidia is found guilty of false advertising, owners of the GTX 970 could receive compensation.


 
Again: the performance didn't change overnight; the card performs the same today as it did when it was advertised at launch. That technicality alone could kill the case. The specifications were the "misleading" part.
 

I think most people know that by now, and that's not what they're being sued for. They're in hot water because the actual specs of the card are different from what was stated. Even if it doesn't have an impact on performance, it still constitutes false advertising.

Given that Nvidia was just shown to have gained a large chunk of market share largely on the back of the 970, I would not be surprised to see a judge rule against them. The fact is, the stated specs MAY have swayed certain people's purchasing decisions, and that is more than enough.

I'm a 970 user myself and I find the high-resolution performance rather bad. The memory issue wasn't really brought to light in reviews until after it was revealed, so it feels sort of disappointing.
 
I would not be surprised to see a judge rule against them.
Both class action suits (Ostrowski and Santiago) are jury trials. So the fates are in the hands of twelve non-technical minded people (by the time the challenges run their course).
I'm a 970 user myself and I find the high-resolution performance rather bad. The memory issue wasn't really brought to light in reviews until after it was revealed, so it feels sort of disappointing.
For me and a few others, we knew there was something impacting performance (as I noted in post #3 here back in October), just not the full architectural extent of the difference between the full-die GM204 and the GTX 970. Any decent review that covered either 4K or DSR noted the fall in performance and gameplay quality, but many review readers tend to focus primarily on framerate even when the latency/VRAM issue is quite apparent. Probably a holdover from benchmarking having only recently embraced frame latency as a metric.
 

With all due respect, those [source links] have to be the worst examples you could have chosen.

Your first link shows the 970 dominating up until 4K, and even then it ended up only 5 fps behind the 980. I see no negative from that whatsoever. Not to mention none of those cards offered a playable framerate anyway. If you were referring to the 290 and 290X catching up at 4K, how is falling a mere 1 fps and 2 fps behind those two cards a bad thing? Remember, they were more expensive AND had a full 4 GB! If anything, that shows the power of Maxwell, not its weakness.

Your link showing latency is even more confusing, because the 970 came in second behind the 290, and only because Crysis 3 performs better on Hawaii cards; otherwise it would have been first. It also had the thinnest frame-time line of all the cards tested. Again, the 970 doesn't have a full 4 GB like the 290 does. :)
 
But at the end of the day, false or misleading information, especially on a high-end piece of hardware, IS WRONG, be it accidental or intentional.

ANALOGY: you bought a car and the spec sheet stated that a spare tire was included. The car arrived with no spare tire (sure, it doesn't affect performance *duh*).

Bottom line: you want what you purchased to perform as intended and as stated in the specifications given by the manufacturer. That is what people are angry about / fighting for.
 
With all due respect, those [source links] have to be the worst examples you could have chosen.
Your first link shows the 970 dominating up until 4K, and even then it ended up only 5 fps behind the 980.
Resolutions up to 4K aren't triggering the vRAM buffer issue for the most part. The difference between the 970 and 980 is pretty constant at 12-15%. Once the frame buffer is loaded up the difference becomes markedly worse (~25%). A pure frames per second difference offers no context.
I see no negative from that whatsoever. Not to mention none of those cards offered a playable framerate anyway.
That is why 1) the issue took so long to be discovered, and 2) the issue is overrated by many GTX 970 owners: many just don't encounter real-life gaming scenarios that induce it. Deliberately loading the vRAM framebuffer with high AA, HD texture mods, 4K gaming, and DSR shows the issue up, but that isn't the usage scenario for many 970 users, many of whom are playing at 19x10 or 2560x1440.
I was simply providing some empirical evidence of the effect, not making a judgement on its worth.
Your link showing latency is even more confusing, because the 970 came in second behind the 290...
Why would you compare the issue with a different architecture? If the comparison was between the 970 and the 290/290X, there wouldn't be an issue aside from price. What the 970 hullabaloo is about is comparing it to the 980 - both nominally 4GB, 256-bit cards.
Bearing that in mind, look at the latency spikes comparison between the 980 and 970, albeit not the best example since reviews don't go out of their way to choke the hardware they're reviewing. You'll note the latency spikes at the 32 and 40 second marks, and the wider variation of frame times with the 970.
[Frame-time plots: GTX 980 vs GTX 970, Crysis 3 @ 3840x2160]
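
Plots like the pair above make the point visually; the same information can be pulled from a raw frame-time log numerically. Here is a minimal host-side sketch (plain C++, so it builds with any compiler or under nvcc; the log name, its one-frame-time-in-milliseconds-per-line format, and the 2x-average spike threshold are illustrative assumptions, not any review site's actual pipeline):

```cuda
// Host-only helper: reads a log with one frame time (ms) per line, e.g. a
// FRAPS-style frametimes export, and reports the stats a bare FPS average
// hides: 99th-percentile frame time and the count of spike frames.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> ft;
    FILE *f = fopen("frametimes.csv", "r");   // assumed log file name
    if (!f) { perror("frametimes.csv"); return 1; }
    double ms;
    while (fscanf(f, "%lf", &ms) == 1) ft.push_back(ms);
    fclose(f);
    if (ft.empty()) return 1;

    double sum = 0.0;
    for (double v : ft) sum += v;
    const double avg = sum / ft.size();

    std::vector<double> sorted = ft;
    std::sort(sorted.begin(), sorted.end());
    const double p99 = sorted[(size_t)(0.99 * (sorted.size() - 1))];

    // A "spike" here is any frame taking more than twice the average --
    // the kind of hitch visible at the 32 s and 40 s marks in plots above.
    size_t spikes = 0;
    for (double v : ft) if (v > 2.0 * avg) ++spikes;

    printf("frames: %zu  avg: %.2f ms (%.1f fps)  99th pct: %.2f ms  spikes: %zu\n",
           ft.size(), avg, 1000.0 / avg, p99, spikes);
    return 0;
}
```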
 
But at the end of the day, false or misleading information, especially on a high-end piece of hardware, IS WRONG, be it accidental or intentional.

It was an honest mistake. Nothing more. The reviews are still true.
 
Bearing that in mind, look at the latency spikes comparison between the 980 and 970...
[Frame-time plots: GTX 980 vs GTX 970, Crysis 3 @ 3840x2160]

The irony here is you're using charts from pcper...

First up, pcper.com tests BF4 @ ~6K (150% resolution scale):

"Clearly there is a frame time variance difference between the GTX 970 and GTX 980. How much of that is attributed to the memory pool difference compared to how much is attributed to the SMM / CUDA core difference is debatable but it leaves the debate open.

http://www.pcper.com/reviews/G...

Next up, wccftech.com

The games in which users seem to have been facing these issues include Shadow of Mordor and Far Cry 4, with the total VRAM usage limited to 3.5 GB. Some users tried to test their cards using the memory burner test that is available in MSI's Kombustor, however several cards failed to load it past 3.0 GB. Since this was an issue being faced by many, I put my own GeForce GTX 970 through the tests in Far Cry 4, Shadow of Mordor and even the MSI memory burner test. The results are quite surprising, since the GALAX GeForce GTX 970 EXOC BLACK that I have equipped on my rig managed to load all of the 4 GB VRAM buffer in the memory burner test and ran without any issues. Same goes for the games: Shadow of Mordor delivered around 60-65 FPS on average at maxed-out settings with 3.6 GB memory usage. The game ran flawlessly without any stuttering or frame drops. Far Cry 4 was a similar scenario, taking up around 3.8 GB of VRAM at 2560×1440 resolution with 2x TXAA, and delivered a smooth frame rate.

http://wccftech.com/users-repo...
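
For anyone wanting to reproduce the memory-burner style check described above without Kombustor, here is a rough CUDA stand-in, a sketch rather than the tool itself: it grabs VRAM in fixed steps, touches every block so the allocation can't be deferred, and reports how far it gets before allocation fails. The 256 MB step size is an arbitrary choice, and the desktop itself holds some VRAM, so even a healthy card will stop a little short of the full 4 GB.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);
    printf("VRAM: %zu MB free of %zu MB\n", freeB >> 20, totalB >> 20);

    const size_t step = (size_t)256 << 20;  // 256 MB per step (arbitrary)
    size_t held = 0;
    for (;;) {
        void *p = nullptr;
        if (cudaMalloc(&p, step) != cudaSuccess) break;
        cudaMemset(p, 0xA5, step);          // force the pages to exist
        held += step;
        printf("holding %zu MB\n", held >> 20);
        // Intentionally not freed during the probe; the driver reclaims
        // everything when the process exits.
    }
    cudaDeviceSynchronize();
    return 0;
}
```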
 
If a judge decides that there is merit to the lawsuit, and Nvidia is found guilty of false advertising, owners of the GTX 970 could receive compensation.

So in a few years' time, when the lawsuit is over, 970 owners are likely to receive the equivalent of an Nvidia G210 as compensation...
 
And only for North Americans; the rest of the world's population who bought the card can go to hell.
 
Both class action suits (Ostrowski and Santiago) are jury trials. So the fates are in the hands of twelve non-technical minded people (by the time the challenges run their course).

For me and a few others, we knew there was something impacting performance (as I noted in post #3 here back in October), just not the full architectural extent of the difference between the full-die GM204 and the GTX 970. Any decent review that covered either 4K or DSR noted the fall in performance and gameplay quality, but many review readers tend to focus primarily on framerate even when the latency/VRAM issue is quite apparent. Probably a holdover from benchmarking having only recently embraced frame latency as a metric.

I believe I read the review on this website. Yeah, usually I'm looking at framerate and how smooth the FPS is. Most review sites just give you an FPS number and not even a chart/graph showing frame rate over time. I was also probably viewing the review through rose-tinted glasses: everyone was talking up how good a card it was, so I didn't really look deeply into it. That's what you get when you buy into hype.
 