Nvidia GeForce GTX 1080 Review: The Mad King of GPUs

I'd give it 100+ if that were the case. 30% higher performance than its direct competitors, using a fraction of the power and fewer architectural resources, with 40%+ overclocking headroom (much higher if you're comparing to Fiji ;)) - and both on 28nm... sounds like miracle territory, let alone 100%.

Exactly. So a perfect score can be achieved with only a die shrink. What would a die shrink + big architecture improvements get? This clearly shows that the scoring system is indeed broken.

[Attached charts: VR-render-_0.png, VR-overhead.png]

That tells us almost nothing about VR. In that case the render time seems to be almost exactly 1/FPS, with very little variance.
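For context, the relationship being claimed here is just the reciprocal of the frame rate; a minimal sketch (the frame rates below are made up for illustration, not figures from the review):

```python
# If render time really is ~1/FPS with little variance, frame time in
# milliseconds is just 1000 divided by the frame rate.

def frame_time_ms(fps: float) -> float:
    """Average frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# Illustrative frame rates only (not measurements from the review):
for fps in (60, 90, 144):
    print(f"{fps} FPS -> {frame_time_ms(fps):.2f} ms per frame")
```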

You got it all wrong. By your logic every flagship device should get a 100/100 because it's the best when released and it cannot possibly be better. This is so simple, yet you can't understand it; instead you say I hate Nvidia for some reason known only to you.

Exactly. This is what I have said from the beginning. It's also something Nvidia fanboys have great difficulty understanding.
 
Score 100/100?

- AMD's async shaders implementation is superior to Nvidia's.
- This card is using GDDR5X memory, so will the first HBM2 card receive at least 150/100?
- DirectX 12 support is at about the same level as Maxwell's; again, AMD is miles ahead.

Giving something a perfect score is just ridiculous, as it leaves no room for improvements, some of which already exist or will arrive soon. Nvidia fanboy score spotted.
looooooooooooooooooooooool such bullshit such amd rageness how much does amd pay you?
 
Exactly. So a perfect score can be achieved with only a die shrink. What would a die shrink + big architecture improvements get? This clearly shows that the scoring system is indeed broken.
That would require changing a paradigm that Steve has been using for years. As an example, you seem intent on making a case via comparison, so here's one for you. The reference R9 290 scored a 95. The GTX 1080 loses out on pricing in the product stack, but that's about it. The reference 290 ran to 94C, was loud, had abysmal power usage under video playback, just ugly power usage under gaming, and very un-stellar overclocking potential - and loses out on every one of those metrics, overall performance, and features to the GTX 1080 by a handsome margin. Would you say that it is 5% worse than the GTX 1080 based on that information?

Personally I don't see it as much of a problem. The Olympics (and other sporting bodies) have been handing out perfect 10s for years, often to different people in the same discipline. You think they are all equal too? Many sites use a Gold-Silver-Bronze or 5-star rating system that also tops out the rating scale. Hardware Heaven, for instance, hands out gold awards like candy out of the back of a windowless Econoline outside a grade school: Fury X? Gold. Zotac GTX 980 Ti? Gold. R9 290X Vapor-X 8GB (huge price premium for no discernible performance increase)? Gold. R9 Fury? Gold. Just about every other GTX 980 Ti? Gold, Gold. Reference GTX 980? You guessed it - Gold. GTX 780 Ti? Yeah, that's Gold too, etc., etc. Are all these cards created equal? Plenty of other sites do much the same. If you want to overhaul the rating system, I suggest you do your own reviews and show everyone where they're going wrong. And remember to front up and defend your ratings to everyone who queries the score - whether they be legit, trolls, or guerrilla marketeers.
More to the point, does anyone buy a $700 graphics card based on a single review score and without weighing up and analyzing all the aspects of the review?
In fact, architecturally Pascal is a very small improvement over Maxwell. Many of those "new features" need specific software support and, as I already mentioned, async compute is about five years behind AMD's.
I think you and many other people overestimate the impact of async compute. AMD excels because it allows the card to actually run without being bogged down by driver overhead. Unless AMD is going to heavily sponsor async compute titles, most developers aren't going to bother. It is just added coding expense that has to account for multiple architectures and GPUs to be effective, which isn't going to do anything for release dates and QA.
As the Hitman dev guy said:
On the other hand, it’s quite surprising to read that even AMD cards merely got a 5-10% performance boost, especially after AMD endorsed HITMAN’s implementation as the best one yet. Async Compute, which has been used for SSAA (Screen Space Anti Aliasing), SSAO (Screen Space Ambient Occlusion) and the calculation of light tiles in HITMAN, was also “super hard” to tune; according to IO Interactive, too much Async work can even make it a penalty, and then there’s also the fact that PC has lots of different configurations that need tuning.
AotS developer:
Saying that Multi-Engine (aka Async Compute) is the root of performance increases on Ashes between DX11 to DX12 on AMD is definitely not true. Most of the performance gains in AMD's case are due to CPU driver overhead reductions. Async is a modest perf increase relative to that.
While having what amounts to first-gen async compute is a handicap, I'm not sold on it being the "be-all-and-end-all" that you paint it as. If you are so worried about GPUs lacking full feature sets, why aren't you railing against AMD's complete lack of conservative rasterization support (which looks likely to continue with Polaris)? CR is a feature of DX11 as well as DX12 games.
 

At times you make my life so easy :) Excellent comment as usual, wish I could like it a dozen times.
 
I think there is a typo in the 1080 review. Can you verify? I plan on running 5 monitors off it. You said it can drive up to 4 monitors, but I see 5 outputs: 3 DP, 1 HDMI, 1 DVI = 5 monitors. Is this a mistake?
 
looooooooooooooooooooooool such bullshit such amd rageness how much does amd pay you?

Truth hurts it seems.

That would require changing a paradigm that Steve has been using for years. As an example, you seem intent on making a case via comparison, so here's one for you. The reference R9 290 scored a 95. The GTX 1080 loses out on pricing in the product stack, but that's about it. The reference 290 ran to 94C, was loud, had abysmal power usage under video playback, just ugly power usage under gaming, and very un-stellar overclocking potential - and loses out on every one of those metrics, overall performance, and features to the GTX 1080 by a handsome margin. Would you say that it is 5% worse than the GTX 1080 based on that information?

I would not say the R9 290 is 5% worse than the GTX 1080, but I would say that 95/100 for the R9 290 proves the scoring system is broken - exactly what I'm saying this time as well.

Personally I don't see it as much of a problem. The Olympics (and other sporting bodies) have been handing out perfect 10s for years, often to different people in the same discipline. You think they are all equal too? Many sites use a Gold-Silver-Bronze or 5-star rating system that also tops out the rating scale.

In individual sports, results usually can be compared. Winning the Olympic gold medal in the men's 100 meters with a time of 9.60 is a much better result than winning the same medal with a time of 9.80. However, winning with a better time does not give you more than one gold medal. So while comparing by gold medals alone is almost impossible (runner 1 won a gold medal, runner 2 won a gold medal - who is better?), comparing the absolute results is quite easy (runner 1's time was 9.60, so that is much better than 9.80). A medal or star system makes it easier to avoid this kind of problem, because a gold medal does not mean perfect, but 100/100 does mean perfect. So when giving a 100/100 score, the product should be very close to perfect. And the GTX 1080 is far from perfect.

Hardware Heaven, for instance, hands out gold awards like candy out of the back of a windowless Econoline outside a grade school: Fury X? Gold. Zotac GTX 980 Ti? Gold. R9 290X Vapor-X 8GB (huge price premium for no discernible performance increase)? Gold. R9 Fury? Gold. Just about every other GTX 980 Ti? Gold, Gold. Reference GTX 980? You guessed it - Gold. GTX 780 Ti? Yeah, that's Gold too, etc., etc. Are all these cards created equal? Plenty of other sites do much the same. If you want to overhaul the rating system, I suggest you do your own reviews and show everyone where they're going wrong. And remember to front up and defend your ratings to everyone who queries the score - whether they be legit, trolls, or guerrilla marketeers.
More to the point, does anyone buy a $700 graphics card based on a single review score and without weighing up and analyzing all the aspects of the review?

You are essentially defending this broken system by giving examples of other broken systems. I know; I have seen many sites (many of them no longer exist) that never gave any product anything lower than "9/10". Saying other systems are broken does not fix this one. So you say that anyone who does not make their own reviews should not criticize others'? Then tell me why this article even has a comment section? Anyone who wants to comment should make their own review *nerd*

More to the point, you are hugely overestimating there. Even multi-billion-dollar systems have been bought without really analyzing them carefully. Basically we can divide buyers into three categories:

- Enthusiasts who want to know what they are buying. This group is very small.
- Not enthusiasts, but smart enough to ask someone whether something is a good buy. This group is small.
- "I don't really care, I have money, just give me a computer, it's not that expensive anyway." This group is big.

So it's quite easy to claim that many (most?) people who buy a $700 graphics card do not actually read a single review.

I think you and many other people overestimate the impact of async compute. AMD excels because it allows the card to actually run without being bogged down by driver overhead. Unless AMD is going to heavily sponsor async compute titles, most developers aren't going to bother. It is just added coding expense that has to account for multiple architectures and GPUs to be effective, which isn't going to do anything for release dates and QA.

As the Hitman dev guy said:

AotS developer:

While having what amounts to first-gen async compute is a handicap, I'm not sold on it being the "be-all-and-end-all" that you paint it as. If you are so worried about GPUs lacking full feature sets, why aren't you railing against AMD's complete lack of conservative rasterization support (which looks likely to continue with Polaris)? CR is a feature of DX11 as well as DX12 games.

The problem you describe always exists. More features = more work; that is not going to change anytime soon. However, modern games require a much bigger amount of work than 15-year-old games, yet somehow modern games generally look much better. In fact, many features that are now widely used previously fell into the same category as async shaders ("it causes too much work, so nobody is going to use it"). If this "more work" were a serious problem, Nvidia and AMD should not bring any new features to GPUs.

Hitman and AotS are among the first DX12 titles, so it's no surprise developers have difficulties. Getting to grips with new technologies always takes time.

Conservative rasterization is a fairly irrelevant feature compared to the others included in feature level 12_0, as the PS4/X1 do not support it. AFAIK it started as Nvidia's extension, and it seems likely that Microsoft wanted to please Nvidia and added it to DX12. Found this from AMD: https://community.amd.com/message/1308482

Some of our hardware can support functionality similar to that in the NVIDIA extension you mention, but we are currently not shipping an extension of our own. We will likely hold off until we can come to a consensus with other hardware vendors on a common extension before exposing the feature, but it will come in time.
 
Thanks DBZ, he lays the facts down like a smackdown. A 1000 likes for you..

Those of you who say all the scoring systems are broken - I await your review, and the new scoring system.. that everyone will use because it's not broken.. lol :p

My guess: when Nvidia needs to up async compute and DX12 performance, they will. But really, how many members posting this shite are ready to take advantage? Anyone? Next generation maybe.. oh, and the top tier is still too pricey, so you're gonna want these features from the midrange card that you intend to purchase. Right?

Well, let's see, I was born and raised on coastal Maine and have been to Newfoundland several times. I love the freaking place! A neighbor was a truck driver (before retiring) and used to make a Newfoundland run carrying fruit, mostly bananas (explains a lot!).

Everyone is welcome, come on back, bring your flyrod.. and some bananas, and some RUM..
 
I'm too skeptical to trust anything that says 100% :)
The cooling is worrisome. That Nvidia/Titan cooler should be doing much better than those scores, especially given the efficiency improvements. If it can't keep the temp below 80 degrees on current games, how will it fare on future games? It's especially worrying given the price hike, as the review seems convinced that cheaper options from other suppliers will offer better cooling.
Otherwise, it looks like a great card - one worthy of a flagship product. I'd have been impressed and far more respectful of a 90-95 score. 100 sets off the BS alarms.
 
looooooooooooooooooooooool such bullshit such amd rageness how much does amd pay you?

I'd hope that AMD would spend their money on R&D (that is rice and delicious crackers) instead of hiring a novice marketing member. They are cash strapped as it is.
 
It's an excellent card and an excellent review. I think the only things it lacks are in the cooler department and the price of getting one with that cooler. I really like these 2000+ MHz clock speeds. Makes me want a pair of them to see what water cooling will do. I think I want to wait for some Classified or Lightning editions of this card so we can crank them up to 2200+ MHz!
 
Great review Steve, did you note:

The ability to switch the cooling without scary-level surgery?

Temps with the stock cooling under load while overclocked? Power consumption for the same?

Thanks!
 
As for the temps in the review being higher than the temps Nvidia reported, that could come down to binning or how the HSF was mounted. I've had two identical cards, bought together, with one running 12 to 14 degrees hotter than the other until remounting the HSF and applying Arctic Silver.
 
Makes the Pro Duo look quite silly I would imagine, if anyone would ACTUALLY REVIEW ONE

I'm sure Steve would love to review one; the trick will be getting his hands on a sample. I'm curious to see the temps, OC potential, and where it sits on the charts.. ooohh, and the juicy comments from the peanut gallery..
 
As for the temps in the review being higher than the temps Nvidia reported, that could come down to binning or how the HSF was mounted. I've had two identical cards, bought together, with one running 12 to 14 degrees hotter than the other until remounting the HSF and applying Arctic Silver.
I would guess that most reviewers used the stock fan profile. The demo systems had a custom fan profile applied. The Nvidia demo overclock was ad hoc. The demo wasn't running as fluidly as the techs liked so they overclocked one of the demo units on the fly to smooth out the framerate.
In individual sports, results usually can be compared. Winning the Olympic gold medal in the men's 100 meters with a time of 9.60 is a much better result than winning the same medal with a time of 9.80.
The conversation is centred on scores - not times. Ignoring a valid argument to sidestep an uncomfortable example, just to introduce a more flawed example, doesn't do this discussion any credit.
Nadia Comăneci scored a perfect 10 in both the prelims and finals of the uneven bars* during the Montreal Olympic Games. Daniela Silivaș scored exactly the same in the same event at the Seoul games despite a different routine. People crying outrage: something approaching 0. *Example used solely as captaincranky bait
You are essentially defending this broken system by giving examples of other broken systems
No. What I am saying is that the systems are used extensively because most people don't really care. Their biggest bugbear is when one product scores lower than another product. Most people also realize that a scoring system of any kind is arbitrary, depending on the weight the reviewer places on each individual facet of the product - and that weighting might not apply to their own individual list of bullet points.
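The point about reviewer weighting can be sketched numerically; the facets, sub-scores, and weights below are entirely hypothetical, not taken from any actual review:

```python
# Two reviewers scoring the same card arrive at different overall
# numbers purely because they weight the facets differently.

facets = {"performance": 95, "power": 60, "noise": 70, "value": 65}

def overall(scores: dict, weights: dict) -> float:
    """Weighted average of per-facet scores."""
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())

perf_focused = {"performance": 0.7, "power": 0.1, "noise": 0.1, "value": 0.1}
balanced = {"performance": 0.25, "power": 0.25, "noise": 0.25, "value": 0.25}

print(round(overall(facets, perf_focused), 1))  # the "raw speed is king" reviewer
print(round(overall(facets, balanced), 1))      # the "everything matters" reviewer
```

Same card, same sub-scores, two different overall numbers - which is why a single score is only meaningful relative to the weighting behind it.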
So you say that anyone who does not make their own reviews should not criticize others'? Then tell me why this article even has a comment section? Anyone who wants to comment should make their own review
I don't think it's against forum rules to assign your own score in the comments section (Many already do).
More to the point, you are hugely overestimating there. Even multi-billion-dollar systems have been bought without really analyzing them carefully. Basically we can divide buyers into three categories:
- Enthusiasts who want to know what they are buying. This group is very small.
- Not enthusiasts, but smart enough to ask someone whether something is a good buy. This group is small.
- "I don't really care, I have money, just give me a computer, it's not that expensive anyway." This group is big.
So you are advocating that an enthusiast tech site, reviewing enthusiast products in a comprehensive manner - taking in the majority of the features and parameters of usage, along with some expert observations - distill their articles down to one single number, so that the uninformed (who obviously don't follow the site if they can't follow a few pages of a review) can bypass a whole information-packed review and skip to the last page to get a single number. Mmmmm, ok.
So it's quite easy to claim that many (most?) people who buy a $700 graphics card do not actually read a single review.
Are you familiar with the phrase: "There's no cure for stupid"
As for most, or even many, people who buy $700 cards not ever reading a review....firstly, I don't think that is actually true, and secondly, if it were true what the hell does it matter what score is put up if they aren't reading it :SMH:
The problem you describe always exists. More features = more work; that is not going to change anytime soon. However, modern games require a much bigger amount of work than 15-year-old games, yet somehow modern games generally look much better. In fact, many features that are now widely used previously fell into the same category as async shaders ("it causes too much work, so nobody is going to use it"). If this "more work" were a serious problem, Nvidia and AMD should not bring any new features to GPUs.
The majority of games WON'T use new features unless they are sponsored. You talk as though, just because features have taken off in the past, nothing precludes new features taking off now. No. Newer games take longer to develop, are frequently bugged on release, have features culled, and aren't getting any cheaper if you take into account how many games are carved up into a smaller game beefed up with money-spinning DLCs, purchasable add-ons, and micro-transactions. Most older games relied upon a game engine and the hardware vendors doing the heavy lifting. That heavy lifting now falls on the shoulders of developers - most of whom see the game as a commodity product. If they had any real love for the genres, they wouldn't make the games so predictable, boring, and short.
Hitman and AotS are among the first DX12 titles, so it's no surprise developers have difficulties. Getting to grips with new technologies always takes time.
You don't seem to understand that DX12 - especially async compute - needs a great deal more attention to detail than just about any other facet of the API. The 100th game is going to require the same attention to architecture optimization as the 1st game - probably more so as new DX11/12-compliant architectures are added to the pool.
Conservative rasterization is a fairly irrelevant feature compared to the others included in feature level 12_0, as the PS4/X1 do not support it. AFAIK it started as Nvidia's extension, and it seems likely that Microsoft wanted to please Nvidia and added it to DX12. Found this from AMD: https://community.amd.com/message/1308482
They'd better hope that Nvidia suddenly stops sponsoring AAA titles then. I'm pretty sure Just Cause 3 has shipped many, many more units than AotS.
[Attached chart: jc3_3840_2160.png]
 
The conversation is centred on scores - not times. Ignoring a valid argument to sidestep an uncomfortable example, just to introduce a more flawed example, doesn't do this discussion any credit. Nadia Comăneci scored a perfect 10 in both the prelims and finals of the uneven bars* during the Montreal Olympic Games. Daniela Silivaș scored exactly the same in the same event at the Seoul games despite a different routine. People crying outrage: something approaching 0. *Example used solely as captaincranky bait

Not exactly about times, but about something that is comparable. Gold medals are equal, but results are not.

No. What I am saying is that the systems are used extensively because most people don't really care. Their biggest bugbear is when one product scores lower than another product. Most people also realize that a scoring system of any kind is arbitrary, depending on the weight the reviewer places on each individual facet of the product - and that weighting might not apply to their own individual list of bullet points.

Point rating systems are mainly used to give an overall rating of a product for those who are too lazy to read the review carefully. So while many people realize that every scoring system depends on the reviewer's own views, for many a higher score = a better product. A good example of this is Metacritic, quite a popular site btw. They just translate review scores to a 0-100 system and calculate the average. A higher Metascore will surely translate to "better product" in many minds, although their rating system is totally broken.
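The kind of aggregation being criticized can be sketched in a few lines; the review scores below are hypothetical, and real Metacritic reportedly also applies undisclosed per-outlet weighting on top of this:

```python
# Naive score aggregation: normalize each review onto 0-100, then average.
# Everything nuanced in the individual reviews collapses into one number.

def normalize(score: float, scale_max: float) -> float:
    """Map a score from its native scale (e.g. 5 stars, 10 points) onto 0-100."""
    return 100.0 * score / scale_max

# Hypothetical reviews: (score, maximum of its native scale)
reviews = [(4.5, 5), (9, 10), (88, 100)]
metascore = sum(normalize(s, m) for s, m in reviews) / len(reviews)
print(round(metascore))
```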

I don't think it's against forum rules to assign your own score in the comments section (Many already do).

I already did it too. That is a much better way to put it than "make your own review".

So you are advocating that an enthusiast tech site, reviewing enthusiast products in a comprehensive manner - taking in the majority of the features and parameters of usage, along with some expert observations - distill their articles down to one single number, so that the uninformed (who obviously don't follow the site if they can't follow a few pages of a review) can bypass a whole information-packed review and skip to the last page to get a single number. Mmmmm, ok.

Just like that. I previously mentioned Metacritic; that site has no use without rating systems. I bet most readers check only the review score and skip the rest of the review. Generally, the amount of information available today is more than anyone can handle. Too much information means most people do not have time to read long articles thoroughly, so they just jump to the last page and look at the score, because that gives some kind of overall rating. Not at all the same as reading the whole article, but better than nothing.

Not everyone that reads this site is an enthusiast. I know all too well so-called "computer experts" who think they know things well. After some conversation it becomes clear that they have checked 1-2 benchmark scores from one article. Even worse are those who only know that site X gave a good rating to product Y.

Are you familiar with the phrase: "There's no cure for stupid"
As for most, or even many, people who buy $700 cards not ever reading a review....firstly, I don't think that is actually true, and secondly, if it were true what the hell does it matter what score is put up if they aren't reading it :SMH:

As I mentioned previously, many people don't bother to read the whole article. More perhaps read the last page, even more read the last paragraph, and even more check only the overall score. For that last category, a 100/100 rating says this is a must buy.

The majority of games WON'T use new features unless they are sponsored. You talk as though, just because features have taken off in the past, nothing precludes new features taking off now. No. Newer games take longer to develop, are frequently bugged on release, have features culled, and aren't getting any cheaper if you take into account how many games are carved up into a smaller game beefed up with money-spinning DLCs, purchasable add-ons, and micro-transactions. Most older games relied upon a game engine and the hardware vendors doing the heavy lifting. That heavy lifting now falls on the shoulders of developers - most of whom see the game as a commodity product. If they had any real love for the genres, they wouldn't make the games so predictable, boring, and short.

My point was that many features that are now widely used were new features in the past. What you said about new games is true: as long as that kind of system produces money, it will be used. Reviewers are also to blame. They always complain about how Call of Duty part X does not bring anything new, yet when another game tries something very different (take Dark Messiah, for example) and does not succeed immediately, it's bashed by reviewers. A problem they often overlook is the fact that making games is not free; a game has to produce a profit, so polishing a game forever is rarely possible. Also, trying something very different rarely produces a very good product on the first try. Somebody tries something different, reviewers say it's a bad game, sales are poor, because of poor sales there is no successor, more games are like COD part X, reviewers complain how all games are like COD part X, somebody tries something different...

That was about games. Returning to this review: the GTX 1080 is basically not much else than Maxwell made with 16nm tech. If this, basically a die shrink, receives a 100/100 rating, can we expect future products to be more than die shrinks? The logic I described for games applies with some accuracy here too. If taking the safe route gets rewarded with 100/100, who would take any risks and develop something that pays off in the future?

You don't seem to understand that DX12 - especially async compute - needs a great deal more attention to detail than just about any other facet of the API. The 100th game is going to require the same attention to architecture optimization as the 1st game - probably more so as new DX11/12-compliant architectures are added to the pool.

I understand that very well, but as I previously said, almost every new technology requires more attention than established technologies. As time goes on, new technologies become more familiar and will not need as much attention as they do now.

They'd better hope that Nvidia suddenly stops sponsoring AAA titles then. I'm pretty sure Just Cause 3 has shipped many, many more units than AotS.
[Attached chart: jc3_3840_2160.png]

It seems that among 28nm chips, AMD leads there.
 
They only compete when completely ignoring power consumption. A fact that I hope changes soon.

More features mean more power comsumption. Also, this power consumption thing is quite funny: before Maxwell nobody cared about power consumption, now it's about the most important thing ever.
 
More features mean more power comsumption.
consumption - even cellphones have spell checks these days

And you are delusional if you think AMD cards have more features just because they use more power.

Before Maxwell nobody cared about power consumption. Now it's about most important thing ever.
You should probably go back a few more generations. I know people who were commenting about power consumption with Fermi (I must admit I am relatively new to this where GPUs are involved). But I will humor you. Maxwell it is then. If this wasn't a topic until Maxwell, that is likely because Maxwell became the more efficient card, meaning AMD started falling greatly behind and gave nVidia the opportunity to use efficiency against AMD.

You are pointing fingers at nVidia while completely ignoring AMD's downfall, and with great arrogance trying to call efficiency irrelevant.
 
consumption - even cellphones have spell checks these days

And you are delusional if you think AMD cards have more features just because they use more power.

That's hard to say. Because AMD wants to offer the full package, it's impossible to say how much that affects power consumption. Also, we have not yet seen Polaris.

You should probably go back a few more generations. I know people who were commenting about power consumption with Fermi (I must admit I am relatively new to this where GPUs are involved). But I will humor you. Maxwell it is then. If this wasn't a topic until Maxwell, that is likely because Maxwell became the more efficient card, meaning AMD started falling greatly behind and gave nVidia the opportunity to use efficiency against AMD.

You are pointing fingers at nVidia while completely ignoring AMD's downfall, and with great arrogance trying to call efficiency irrelevant.

Even if power consumption was commented on with Fermi, it had very little importance. When Maxwell came, power efficiency was suddenly the most important feature ever according to so-called Nvidia fanboys. Before Maxwell it had almost zero importance.

What is AMD's downfall? AMD didn't release a new architecture for 28nm, just modified old ones a bit. Probably because the 20nm parts were cancelled and they thought nobody would buy old-tech cards. Looking at how many used GTX980 Ti cards are for sale over the internet right now, even Nvidia owners seem to think 28nm cards were not such a good buy after all. It seems that AMD underestimated how willing customers are to pay huge amounts of money for old and obsolete stuff.
 
If you really have to ask, you have no grounds to point a finger at nVidia. Open your eyes and weigh the differences fairly.

Nvidia has fewer features and lower power consumption. Nvidia is better on DX11 software.

AMD has more features and higher power consumption. AMD is better on DX12 software. Also, AMD has not yet released its latest architecture, and AMD probably has much more manufacturing capacity for new-technology parts than Nvidia has.
 
Nvidia is the gold standard of high-end PC gaming, going on about 10 years.
The GTX 970 is the most used GPU on steam for a reason.

Even when AMD has held the single-GPU crown, it was brief and came with setbacks: low minimum FPS, stuttering, hotter cards, more power consumption, etc. The 980Ti was never moved from its throne; it just shared the trophy.

AMD is the underdog with an inferior, hotter, power-hungry architecture, fewer features, more driver issues, etc.
You buy AMD to save money.
 
Point rating systems are mainly used to give an overall rating of a product to those who are too lazy to read the review carefully.
I'm not terribly interested in how people too lazy to educate themselves should be given special dispensation.
A good example of this is Metacritic, quite a popular site btw. They just translate review scores to a 0-100 system and calculate the average. A higher metascore will surely translate to "better product" in many minds, although their rating system is totally broken.
Their averaged scores are aggregated from sites that vary from the valid to little more than blogs and poorly disguised marketing. Again, there's no cure for stupid.
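The averaging problem described above is easy to demonstrate. A minimal sketch with hypothetical scores: rescaling reviews from different scales to 0-100 and taking a plain mean treats an inflated 4/5 from one site exactly like a strict 80/100 from another, even though the two sites use their scales completely differently.

```python
def to_100(score, scale_max):
    """Naively rescale a review score to a 0-100 scale."""
    return 100.0 * score / scale_max

# Hypothetical reviews as (score, scale maximum) pairs.
reviews = [(4, 5), (8, 10), (90, 100)]

metascore = sum(to_100(s, m) for s, m in reviews) / len(reviews)
print(round(metascore, 1))  # 83.3
```

The mean hides that "4/5" might mean "average" on a site that never scores below 3, while "80/100" might be high praise elsewhere; the aggregate carries none of that context.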
Just like that. I previously mentioned Metacritic. That site has no use without rating systems.
I'd argue that it doesn't have any use even with a rating system.
I bet most readers check only the review score and skip reading the whole review. Generally, the amount of information available today is much more than anyone can handle. Too much information means most people do not have time to read long articles thoroughly, so they just jump to the last page and look at the review score, because that gives some kind of overall rating. Not at all the same as reading the whole article, but better than nothing. No time to read the whole article easily turns into just checking the score.
So you'd like the score to better cater to people who can't be bothered to spend a few minutes researching their prospective purchase, while also catering to their twitter/ADHD tendencies by eliminating the need to read the review. I hope I never have to see an era where enthusiast sites have to dumb down content to Twitter-sized word bites liberally sprinkled with emojis for those whose attention strays after half a dozen words.
Personally, I hope these people just bypass TechSpot and move on to sites fully geared for 2 minute attention spans and a low level of mental acuity.
Not everyone who reads this site is an enthusiast. I know all too well the so-called "computer experts" who think they know things well. After some conversation it becomes clear that they have checked 1-2 benchmark scores from one article. Even worse are those who only know that site X gave a good rating to product Y.
I have a system for that. It's called adding to their knowledge base when I can. I'll provide further reading links and maybe a synopsis of the information. I'd prefer to offer information rather than have the site kowtow to the lowest level of interest.
As I mentioned previously, many people don't bother to read the whole article. More perhaps read only the last page, even more only the last chapter, and even more check only the overall score. For that last category, a 100/100 rating says this is a must-buy.
So the site should better tailor their articles to people who can't be bothered reading said articles so these people, who can't be bothered researching their possible hardware buy will feel....what exactly? A 95/100 is also pretty much a must buy. You think these people are then going to go back and re-read the entire article just to see if the missing 5% impacts their purchase? They couldn't be bothered in the first instance.
I have no sympathy for anyone who makes a substantial purchase without researching it beforehand. Anyone who relies on snippets from a single review (let alone its final distilled score) and blames anyone but themselves for the outcome AND is too stupid to return the item if dissatisfied really deserves everything they get.
It seems that among 28nm chips AMD leads there.
You missed the point. The AMD lead is less than half the average it posts in other games (aggregated) at 4K. What I posted was the best-case scenario for AMD, bearing in mind 4K's love of high texture fill rates. If I were to choose the middle ground, where Nvidia's TAUs aren't limiting their cards and AMD's cards are likewise unaffected by raster-op inefficiencies at much lower resolutions, the difference is more noticeable:
jc3_2560_1440.png

Even if power consumption was commented on with Fermi, it had very little importance. When Maxwell came, power efficiency was suddenly the most important feature ever according to so-called Nvidia fanboys. Before Maxwell it had almost zero importance.
These things are cyclic. Fermi was a compute-centric architecture, and still stands as one of the best archs for compute efficiency. Yes, during Fermi's reign (and GT200 before it) Nvidia fanboys paid no attention to perf/watt. But you know who held it as paramount? AMD fanboys. When Evergreen arrived, perf/watt and perf/mm (a newly important metric that arrived overnight) was the sole point of interest. As soon as AMD pushed "always on" compute and wattage climbed with the GCN architecture, perf/watt suddenly became irrelevant to AMD fanboys.
It cuts both ways. Always has.
What is AMD's downfall? AMD didn't release a new architecture for 28nm, just modified old ones a bit. Probably because the 20nm parts were cancelled and they thought nobody would buy old-tech cards.
Nope. AMD's R&D couldn't sustain multiple developments, and AMD had too many irons in the fire: console development, an expensive-to-run logic layout business (since sold to Synopsys), a poorly thought-out attempt at making a splash in the ARM server architecture market, and very likely a substantial ongoing investment in HBM integration which began at least 5 years ago... not to mention a long-running APU/CPU architecture development.
If you want to distill AMD's woes down to a single point, it is their management's lack of strategic planning and goal setting, and a reliance upon being reactive rather than proactive in the industry. They are too busy trying to imitate those more successful, while putting little thought into how to achieve goals and into the actual returns on investment and time (see the SeaMicro acquisition for a prime example) once a course of action is embarked upon.
Looking at how many used GTX980 Ti cards are for sale over the internet right now, even Nvidia owners seem to think 28nm cards were not such a good buy after all.
That is one interpretation, but I don't think it is correct. As a serial upgrader myself, I find the best time to sell old hardware is just before the new series arrives. You still recoup a reasonable amount of your original purchase cost and can use the funds to offset the new purchase.
Nvidia has less features and less power consumption. Nvidia is better on DX11 software.
Not just DX11 software. If that were solely the case, why did it need Nvidia to publicize frame pacing, ShadowPlay, GeForce Experience, and a host of other software that AMD has eagerly tried to adapt to its own uses? I'm guessing that Ansel and MSP will also find themselves with AMD analogues in the not-too-distant future.
Also, AMD has not yet released its latest architecture, and AMD probably has much more manufacturing capacity for new-technology parts than Nvidia has.
I've been hearing a near-constant stream of this marketing since Raja Koduri claimed that Polaris and 14nm were well ahead of Pascal and 16nm....
"We believe we're several months ahead of this transition, especially for the notebook and the mainstream market" said Koduri. "The competition is talking about chips for cars and stuff, but not the mainstream market."
...Yet Nvidia has demonstrated the largest non-Intel GPU in the world on 16nm with series production underway (with over 4,500 pre-sold at $10K apiece), has the GTX 1080 reviewed and a week from retail availability, has the volume-market GTX 1070 basically ready to go (holding it back is obviously a marketing strategy), and has the mass-market GP106 due to arrive in a month.
:rolleyes:
 