AMD to Nvidia: "prove it, don't just say it"

Emil

Staff

Two weeks ago, AMD released its flagship GPU, the Radeon HD 6990. The card achieved the highest default single graphics card score of X3303 in 3DMark 11, the latest DirectX 11 benchmark from Futuremark, so the company issued a press release calling it the "World's Fastest Graphics Card." This week, Nvidia issued a press release for the GeForce GTX 590 and also called it the "World's Fastest Graphics Card." Unsurprisingly, AMD was not amused.

"At AMD we pride ourselves on both the excellence of our products, and in the integrity of our messaging," Dave Erskine, Senior Public Relations Manager for Graphics Desktop at AMD, said in a statement. "So now I issue a challenge to our competitor: prove it, don't just say it. Show us the substantiation."

AMD argues that Nvidia's statement is baseless. The company said it combed through Nvidia's announcement "to understand how it was that such a claim could be made and why there was no substantiation based on industry-standard benchmarks." The chip giant argues that since Nvidia did not reference an industry benchmark such as 3DMark 11 in its press release, it cannot possibly say that the GTX 590 beats the Radeon HD 6990.

TechSpot has reviewed both graphics cards. You can read our take at the following links: AMD Radeon HD 6990 Review: Sumptuous Dual-GPU Power and Nvidia GeForce GTX 590 Review: Dual-GPU Wars. Here's a relevant excerpt from the latter's Final Thoughts section:

It's impossible to conclude the GeForce GTX 590 is faster than the Radeon HD 6990 as it really depends on the game, and that's precisely why we test with fourteen titles now. Based on our findings, the GTX 590 was just 1% faster than the HD 6990 on average. The GTX 590's only big win was in Far Cry 2 and some argue that the game isn't a worthy test candidate given the large influence Nvidia had on its development.

If we removed Far Cry 2 from the results, the GTX 590 would actually be 2% slower than the Radeon HD 6990. It's also interesting to note that the vast majority of the GTX 590's losses were in DX11 titles: S.T.A.L.K.E.R.: Call of Pripyat (-15%), Metro 2033 (-20%), Battlefield: Bad Company 2 (-9%), and Aliens vs. Predator (-10%).
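
To make the averaging point concrete, here is a minimal sketch of how a single outlier title can flip the sign of an average relative-performance figure. The four DX11 deltas are taken from the excerpt above; the Far Cry 2 figure and the padded-out remaining titles are illustrative placeholders, not the review's actual per-game data.

```python
# Minimal sketch: how one outlier title can flip an average relative-
# performance figure. Values are GTX 590 performance relative to the
# HD 6990 (+0.35 = 35% faster, -0.20 = 20% slower).
relative_results = {
    "Far Cry 2":                       +0.35,   # placeholder outlier win
    "S.T.A.L.K.E.R.: Call of Pripyat": -0.15,
    "Metro 2033":                      -0.20,
    "Battlefield: Bad Company 2":      -0.09,
    "Aliens vs. Predator":             -0.10,
}
# Pad out the remaining nine titles with small placeholder wins.
relative_results.update({f"Title {i}": +0.037 for i in range(1, 10)})

def average_delta(results, exclude=()):
    """Arithmetic mean of the per-title deltas, optionally excluding titles."""
    kept = [delta for title, delta in results.items() if title not in exclude]
    return sum(kept) / len(kept)

print(f"All 14 titles:     {average_delta(relative_results):+.1%}")
print(f"Far Cry 2 dropped: {average_delta(relative_results, exclude={'Far Cry 2'}):+.1%}")
```

With these placeholder numbers the average works out to roughly +1% with Far Cry 2 included and about -1.6% without it, the same kind of swing described in the excerpt.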

In short, the cards are both quite close, but AMD insists that only its product has the right to be crowned king. Hopefully Nvidia won't ignore its main competitor and will issue a response.


 
Bold, isn't it?
But with the recent topics covering some problems with the 590, some heat is building around it :)

Db0 & Rick are my gurus on that matter - waiting to see what they come up with.

Cheers to both
 
PRs talking smack!!! Fight, fight, fight!

Since AMD seem to be touting the line "Faster than the world's fastest graphics card. We should know..." as a banner across their site, it pretty much amounts to handbags at dawn.

Kind of reminds me of the Monty Python Crimson Permanent Assurance sketch...

Just out of interest... even if it's only mine. I was correlating the GTX 590/HD 6990 bench results to see how much (if any) linkage there was between AMD and Intel chipset PCI-E lane utilization, and how benchmark averages/raw fps were affected by test systems running six- or four-core (with and without HT) AMD and Intel rigs.
This is a results table from those 783 benchmarks (1920x1080/1200, 2560x1440/1600, 5040x1050 and 5760x1080) -as it pertains to fps percentages- showing the relative percentage "wins" for each card. It's a byproduct of the figures myself and a couple of other enthusiasts were putting together for the chipset/CPU analysis (a rough sketch of the tallying is at the end of this post).
You will note that neither side is a conclusive winner... as many of the individual review sites have already pointed out.

EDIT: Nvidia numbers in green, AMD in red

Notes:
No results are included from benchmarks run with PhysX or 3D enabled, nor have I included synthetics (at least in this table). Results are included where disparities are attributable to immature driver support - Nvidia cards with DAO and Shogun 2, and AMD cards with Starcraft 2 (DX11) and BattleForge, for example.
The benches included are from the reviews posted at the sites listed on the right hand side of the page.

It might also be noted that AMD could claim the undisputed crown simply by ordering testing done at 5x1 (five-screen portrait mode)... likewise, Nvidia could claim the high ground by testing three-screen surround gaming in 3D or using 64xAA.
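
For anyone curious how a win-percentage table like that gets put together, here's a rough sketch of the tallying. The field names and sample rows are hypothetical - this isn't the actual spreadsheet behind the table, just the general idea.

```python
# Rough sketch: tally relative "wins" per card, grouped by resolution,
# from a pile of collected benchmark runs. Sample rows are hypothetical.
from collections import defaultdict

# Each record: (game, resolution, GTX 590 fps, HD 6990 fps)
runs = [
    ("Metro 2033", "2560x1600", 38.0, 45.5),
    ("Far Cry 2",  "1920x1200", 128.0, 112.0),
    ("AvP",        "5760x1080", 31.0, 34.0),
    # ...the real data set has 783 rows
]

wins = defaultdict(lambda: {"GTX 590": 0, "HD 6990": 0, "tie": 0})

for game, res, nv_fps, amd_fps in runs:
    delta = (nv_fps - amd_fps) / amd_fps
    if abs(delta) < 0.01:        # call anything within 1% a tie
        wins[res]["tie"] += 1
    elif delta > 0:
        wins[res]["GTX 590"] += 1
    else:
        wins[res]["HD 6990"] += 1

for res, tally in sorted(wins.items()):
    total = sum(tally.values())
    shares = ", ".join(f"{card}: {count / total:.0%}" for card, count in tally.items())
    print(f"{res}: {shares}")
```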
 
In my opinion, they both have the right to say their card is the fastest, because they are technically tied: each has some advantages over the other depending on the game.
 
And I laugh out loud as AMD boasts its supposed "superiority" by a mere 1-2%... SMH

Both of these cards are monsters; it has come to a point where stability, compatibility and support (in that order) are more important than speed. Well, at least to me they are.
 
"Metro 2033 (-20%)"

Wow, that was a title developed very closely with NVIDIA, to the point that it was even one of their PhysX 'spotlight' titles, much as S.T.A.L.K.E.R.: Call of Pripyat was AMD's DX11 'upgrade and you could be playing stuff that looks this awesome' push.

And speaking of such, correct me if I am wrong, but isn't PhysX an openly developed standard? If AMD shipped with the appropriate PhysX drivers, wouldn't NVIDIA lose that little thorn they have been jabbing into AMD's side for years now?


Big kids and their companies...
 
"Metro 2033 (-20%)"
Wow, that was a title developed very closely with NVIDIA, to the point that it was even one of their PhysX 'spotlight' titles, much as S.T.A.L.K.E.R.: Call of Pripyat was AMD's DX11 'upgrade and you could be playing stuff that looks this awesome' push.
4A, the developers of Metro 2033, can't do a whole lot about the architecture their game plays on. Another case in point would be DiRT 2 - developed with AMD hardware and given away with AMD hardware. Unless you raise the image quality settings to maximum and the screen resolution likewise (i.e. make it memory/frame-buffer dependent), Nvidia GTX 4xx/5xx cards tend to play it faster than AMD HD 5xxx/6xxx cards.
And speaking of such, correct me if I am wrong, but isn't PhysX an openly developed standard?
Nope. It's proprietary Nvidia tech.
There was some talk of Nvidia offering licensed use of PhysX in the past but I suspect that falls into the "urban legend" category.
If AMD shipped with the appropriate PhysX drivers, wouldn't NVIDIA lose that little thorn they have been jabbing into AMD's side for years now?
AMD can use Bullet physics if they so choose.
AMD have no real interest in game physics, and even less in committing funds/software development to it.
Big kids and their companies...
Pretty much, but then if they didn't play the game, the high-end cards wouldn't get commented upon or written about - and the low-end cards that are sold in bulk wouldn't have a PR face... and Average Joe couldn't pretend that their G310 or HD 5450 gives them a proprietary stake in their multinational of choice.
 
lawfer said:
Both of these cards are monsters; it has come to a point where stability, compatibility and support (in that order) are more important than speed. Well, at least to me they are.

Judging by the exploding GTX 590 cards, I'd say AMD has a leg up on stability.
 
What's the point of putting out quantity instead of quality? Both sides need to get their act together.
I'd rather have one new card a year than a heap that have problems, e.g. overheating, bad drivers, etc.

mickie-g
 
Meh, it boils down to heat, stability and drivers. Since both are dual GPUs, whoever has the better drivers supporting more titles deserves the consumer's money.

But come on, who is buying these when most of you guys would rather have SLI or Xfire setups anyway, right?
 
Meh, it boils down to heat, stability and drivers. Since both are dual GPUs, whoever has the better drivers supporting more titles deserves the consumer's money.

But come on, who is buying these when most of you guys would rather have SLI or Xfire setups anyway, right?

As far as I'm concerned, if you have $700 to spend on a GPU, you're not using a mATX board. If you're an 'enthusiast' you want Xfire or SLI for the unregulated performance and the OC'ing headroom. And at the same time, Xfire or SLI is cheaper or the same price. Who wants these things?
 
red1776 said:
Meh, it boils down to heat, stability and drivers. Since both are dual GPUs, whoever has the better drivers supporting more titles deserves the consumer's money.

But come on, who is buying these when most of you guys would rather have SLI or Xfire setups anyway, right?

As far as I'm concerned, if you have $700 to spend on a GPU, you're not using a mATX board. If you're an 'enthusiast' you want Xfire or SLI for the unregulated performance and the OC'ing headroom. And at the same time, Xfire or SLI is cheaper or the same price. Who wants these things?
So they can brag that they spent as much or more on their "two" graphics cards* than the rest of their system of course. And the *****s (possibly like me) who would even consider running them in form factors they REALLY should not be in lol.

*Easily achieved with one workstation card, but they don't count.
 
red1776 said:
Meh, it boils down to heat, stability and drivers. Since both are dual GPUs, whoever has the better drivers supporting more titles deserves the consumer's money.

But come on, who is buying these when most of you guys would rather have SLI or Xfire setups anyway, right?

As far as I'm concerned, if you have $700 to spend on a GPU, you're not using a mATX board. If you're an 'enthusiast' you want Xfire or SLI for the unregulated performance and the OC'ing headroom. And at the same time, Xfire or SLI is cheaper or the same price. Who wants these things?

I have found that people don't buy these instead of going SLI or Crossfire, they buy them to run in SLI or Crossfire. How many motherboards have four PCIe slots? Even if they have that many, getting proper airflow around four cards would be a nightmare. Could you imagine the wire management you would have to do? I can see how having four video cards would be a pain, and judging by how these are priced, they are aimed at that market.

I'd rather have the 4 videocards, I think it'd look cooler.
 
Guest said:
lawfer said:
Both of these cards are monsters; it has come to a point where stability, compatibility and support (in that order) are more important than speed. Well, at least to me they are.

Judging by the exploding GTX 590 cards, I'd say AMD has a leg up on stability.

The cards only exploded because they were overvolted to try to attain standard GTX 580 clocks, from what I could gather.
 
I have found that people don't buy these instead of going SLI or Crossfire, they buy them to run in SLI or Crossfire. How many motherboards have four PCIe slots? Even if they have that many, getting proper airflow around four cards would be a nightmare. Could you imagine the wire management you would have to do? I can see how having four video cards would be a pain, and judging by how these are priced, they are aimed at that market.

I'd rather have the 4 videocards, I think it'd look cooler.
I think you may be right about that for people who have quad Crossfire as an afterthought. I mean, if you are thinking quadfire out of the gate, you will be thinking of a quad PCIe slot board at build time. My gaming machine is a quadfire, and the wire management and airflow aren't so tough with a full tower. I crank all four of my cards to 1020MHz core for intense games (Metro 2033, LP2, Crysis, etc.) and keep them under 85C. If you have entertained and looked into going quad Xfire, you have no doubt read about the 'heat stacking' effect; with a good negative pressure setup and the proper amount of CFM, it really can be eliminated as a problem. This machine is my third personal quadX, and I have not had an alarm bell go off as of yet.

Could you imagine the wire management you would have to do?

Why yes, I can :) I have the added attraction of not only four cards, but three separate power supplies. I will let you be the judge.
https://www.techspot.com/gallery/member-galleries/p4172-ok-2c-so-now-its-really.html

It's not perfect, but with four VGAs, three PSUs, 1000 CFM of fans, two mods, and every 5.25" bay filled with components, I don't think it's too badly wired. (Admuhz would argue otherwise, as you can see :D)
If you are thinking about it and have a full-size tower, it can be done while keeping temps in line.
 
I understand wanting to be the best, but anyone willing to spend the $$$ on the latest and greatest probably already has a 580 or 6970 and can just buy another. These are almost too expensive to buy, and from what I've read the 6990 is as loud as your average leaf blower when it gets going.

Reminds me of two construction guys having the 'my truck is better than your truck' argument.

I'll be much more impressed at whoever makes an affordable dual GPU first.
 
I would like to point out that the TechSpot review actually used the 267.24 beta Forceware drivers, whereas the official WHQL release is 267.91, and the 267.24 drivers didn't actually add official support for the GTX 590.

Just sayin'
 
There's about a 50/50 split with FC2 benches across the net. All, I think without exception, used the Ranch demo. Averaging out the results would show less than a 1% difference in the cards' fps (avg.).
So for every TPU there's basically an OC3D, or Alienbabeltech, or HT4U. Astride the middle, a number of sites also split their findings between the cards - Hardware.fr (single and multi-screen) - or rank them dead even.
For a repeatable demo there is some variance in testing. Techspot/Legion Hardware's and Hardware.fr's results represent the outliers at both ends of the scale, although less than the 150+% swings in Batman: Arkham Asylum, since one of the new Nvidia drivers used in some testing breaks SLI support.
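
If anyone wants to do the same exercise themselves, a quick way to see which sites sit at the ends of the scale is to turn each review's FC2 numbers into a GTX 590/HD 6990 ratio and flag anything well away from the mean. The per-site figures below are made-up placeholders, not the actual review results.

```python
# Quick sanity check: per-site GTX 590 vs HD 6990 Far Cry 2 ratios,
# overall average, and outlier flags. Figures are placeholders only.
import statistics

site_fps = {                 # (GTX 590 fps, HD 6990 fps) per review site
    "Site A": (120.0, 118.0),
    "Site B": (131.0, 112.0),
    "Site C": (109.0, 121.0),
    "Site D": (125.0, 124.0),
}

ratios = {site: nv / amd for site, (nv, amd) in site_fps.items()}
mean = statistics.mean(ratios.values())
spread = statistics.stdev(ratios.values())

print(f"Average GTX 590 advantage: {mean - 1:+.1%}")
for site, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    flag = "  <-- outlier" if abs(ratio - mean) > spread else ""
    print(f"{site}: {ratio - 1:+.1%}{flag}")
```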
 
lawfer said:
And I laugh out loud as AMD boasts its supposed "superiority" by a mere 1-2%... SMH

I think maybe you missed a point along the way... AMD hit with the HD 6990 and registered monster 3DMark 11 numbers, which gave them every right to say they were the fastest. Then nVidia released the GTX 590, which earned 3DMark 11 numbers 10% LOWER than the HD 6990, and yet nVidia still says they are now the fastest. On every standardized graphics test I've seen, AMD's product comes out on top. And if you look at the overall scores in all of the comparisons, AMD still comes out on top.

AMD aren't "boasting superiority" so much as telling nVidia to prove it, because so far the numbers haven't really supported nVidia's claims.
 
Heated competition among two manufacturers can only benefit the consumer at the end with good products that will give PC gaming a superior edge over consoles.
 
I think maybe you missed a point along the way... AMD hit with the HD 6990 and registered monster 3DMark 11 numbers, which gave them every right to say they were the fastest. Then nVidia released the GTX 590, which earned 3DMark 11 numbers 10% LOWER than the HD 6990, and yet nVidia still says they are now the fastest. On every standardized graphics test I've seen, AMD's product comes out on top. And if you look at the overall scores in all of the comparisons, AMD still comes out on top.
Except in 3DMark Vantage, it would seem...
(1), (2)
Or how about the Heaven benchmark, TessMark, or compute synthetics like SmallLuxGPU?
Of course if you don't consider the above benchmarks and tests "standardized" then that's a whole different ball game.

Depending on the fine print attached to any claim you could make a case for either card. Making a claim for one card based on one synthetic that favours a manufacturer of personal choice has a tendency to mark that person as less than objective in my opinion. I'm sure you are aware of the tech community's colloquial name for such people.
AMD aren't "boasting superiority" so much as telling nVidia to prove it, because so far the numbers haven't really supported nVidia's claims.
And nor will they.
And nor will AMD be able to categorically claim the same - note the "Faster than the world's fastest card" banner across the top of the AMD site at present.
Until one or the other can demonstrate total superiority, it comes down to PRs and a bunch of other people who will never buy either card expelling so much hot air. For every F1 2010 result there will be a Starcraft 2; for every 3DMark 11, a compute mark; for every performance/mm², a noise level reading or a percentage-overclock-to-percentage-gain metric.

The day that one manufacturer does it all is the day that we only have one manufacturer.
 
dividebyzero said:
Except in 3DMark Vantage, it would seem...
(1), (2)
Or how about the Heaven benchmark, TessMark, or compute synthetics like SmallLuxGPU?
Of course if you don't consider the above benchmarks and tests "standardized" then that's a whole different ball game.

Depending on the fine print attached to any claim you could make a case for either card. Making a claim for one card based on one synthetic that favours a manufacturer of personal choice has a tendency to mark that person as less than objective in my opinion. I'm sure you are aware of the tech community's colloquial name for such people.

Oooh, I missed those little gems. See, now that's what nVidia needs to hold up and say "See this AMD? Suck it!" heh

And, I agree, it seems to be a photo finish on which unit can claim the top of the graphics mountain. All depends on which reviews you read where, and what paces those reviews put them through. Either way, the competition keeps them on their toes, and we win with newer/better/faster hardware.
 
I had a referral contact me an hour ago. I built a GTX 580 SLI screamer for his buddy, and he really, really, really wants to build a GTX 590 machine. I am attempting to disabuse him of the notion; however, I am very tempted to shut the hell up so I can bench this thing for myself... Is that unethical? :haha:
Maybe I will talk him into a 'Slifire'... you know, 1 x GTX 590 and 1 x HD 6990 and a Hydra board. Now that's unethical!
 