The Best Graphics Cards 2016: TechSpot's top picks for every budget

AMD is failing? What a joke.
They are powering all three console manufacturers into the future, and their GPU range is doing just fine.
Sony, Microsoft and who else?

http://www.extremetech.com/gaming/232402-nvidia-not-amd-will-power-the-nintendo-nx

http://arstechnica.com/gaming/2016/07/report-nintendos-nx-is-a-tegra-powered-hdtvportable-hybrid/

All the reporting I've read said there's a Tegra chip in the NX.

The NX is in the future and the Polaris secret ... is still a secret. ; )
 
Way to go, guys. We've reached page 3 already because some people aren't satisfied that, based on price and availability, most of the recommended cards are Nvidia.

So, to summarize all of Hardreset's comments, here it is. Please don't argue. This is based on availability, prices, driver support, future-proofing, power, heat, DX11, DX12, and architecture.

The Best Graphics Cards 2016: Hardreset's top picks for every budget

Best Overall Graphics Card:
RX 480 8GB

Best Performance For Your Money
RX 480 8GB

Best Mainstream GPU
RX 480 4/8GB

Best Budget
RX 480 4/8GB

Best HTPC/Compact Card
RX 480 4/8GB

Best Mobile GPU(s)
RX 480 4/8GB

See you guys later :)
 
Ah, HardReset... it's been a while. Still riding the "AMD is my lord and savior" train (is anybody surprised?). Let's be fair for a bit: he is right, the mid-tier space heater... I mean, RX 480, is the best thing that ever happened to us, right? Low price, MAD specs, runs DOOM like there's no tomorrow, stomps nVidia into submission in every broken console port... erm... I mean, DX12 title out there! All hail the mighty RX 480! Also, here's a thought: how come in Deus Ex, AMD's DX12 performance is lower than their own DX11?

https://a.disquscdn.com/uploads/mediaembed/images/4255/6938/original.jpg
http://a.disquscdn.com/uploads/mediaembed/images/4265/4500/original.jpg


Also, in BF1, what's with the frame times on Radeons? A title which, mind you, is stamped AMD Gaming Evolved.
http://a.disquscdn.com/get?url=http://techreport.com/r.x/dxmdpreview/dx12-50.png&key=i7n9hGtKrJLHn0Evw-RWyg&w=800&h=230
Still riding the AMD/DX12/Vulkan hype train? Still waiting on those magic drivers that will sprinkle pixie dust on 5-year-old cards and give them an extra 50 FPS at 4K? The only reason AMD is putting their eggs in the "low-level API" basket is that they are quite inept when it comes to the software backing up their own bloody hardware. They would be stupid not to; it's less work for them. And in case you're asking for proof, check the results for DOOM, where the 970 gives the Fury X a run for its money in OpenGL.

https://community.amd.com/thread/202636
https://community.amd.com/thread/200378
https://community.amd.com/thread/200351

Also check Linux OpenGL performance, where the Fury X (still the best AMD has to offer) struggles against a 960.
Another thing you're looking cross-eyed at is that special kind of thinking that DX12 is all about Async. By the way, newsflash: you don't optimize hardware to a low-level API, you optimize software to the hardware exposed by a low-level API. And since we're on the subject, did you know that it's not all about Async? Mind-blowing, right?
https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D - you might find this interesting (Unless nVidia paid off Wikipedia as well).
There is more to DX12 than that, and you might find out that nVidia supports things AMD does not. Take rendering implementations as an example: Maxwell and Pascal rasterize and process fragments in tiles, while AMD processes in screen space. This allows nVidia GPUs to use less memory bandwidth and keep all the important data in L2 cache, ensuring the GPU stays completely saturated. For AMD, the data has to travel back and forth between GPU memory and L2 cache, causing bottlenecks and cache misses. That hurts fragment shading, which takes up 60-80% or more of rendering time, so a bottleneck here makes a huge impact. This is one of the primary reasons why nVidia can perform better with much lower memory bandwidth. And this is just scratching the surface. Oh, and by the way, what's your opinion on AMD cards that do not render the same things in the same frame as their nVidia counterparts?
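For what it's worth, the "60-80% of rendering time" figure makes the bottleneck claim easy to sanity-check with Amdahl-style arithmetic; a minimal sketch, where the 70% share and 30% stage slowdown are made-up illustrative numbers:

```python
# If fragment shading is 70% of frame time and a bandwidth bottleneck
# makes that stage 30% slower, the whole frame slows down noticeably.
frag_share = 0.70       # assumed fraction of frame time spent fragment shading
stage_slowdown = 1.30   # assumed slowdown of that stage due to cache misses

frame_time = (1 - frag_share) + frag_share * stage_slowdown
print(round(frame_time, 2))  # 1.21 -> about 21% longer per frame
```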
 
And so the crazy comes out in force... the author is NOT an Nvidia fanboy - Steve recommends AMD when there is reason to do so (see the budget section) - there just isn't any reason to do so with this year's lineup.

This fanboyism has been proven many times.

And I already told you why your "sources" were simply a bunch of malarkey... You can't prove a card's drivers are slowing it down unless you actually run the SAME THINGS with different drivers!!! This isn't exactly rocket science!

No, you didn't. Your logic is not valid. Here's an example to illustrate why:

A manufacturer has "fast card" A and "slow card" B. New game C comes out, and there is a very bad bug in driver D that makes C run badly. The manufacturer releases driver E that fixes the bug. Now C runs much better, BUT that driver fix is available only for B.

Now "slow card" B is much faster than "fast card" A, because the driver bug was fixed for B but not for A. Using your logic, A has not been made any slower, since its speed with driver D is the same as with driver E.

So comparing the same card with different drivers essentially doesn't tell you anything. It just tells you that Nvidia applies driver fixes to new cards but not to old cards. That's exactly why my evidence about Nvidia making old cards look slow is valid.
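The A/B/D/E scenario above can be put into numbers; a minimal sketch, where all cards, drivers, and scores are hypothetical:

```python
# Toy model of the driver-fix scenario; every number here is made up.
scores = {
    ("A", "D"): 120,  # "fast card" A on buggy driver D
    ("A", "E"): 120,  # fix was not applied to A, so nothing changes
    ("B", "D"): 100,  # "slow card" B on buggy driver D
    ("B", "E"): 150,  # fix applied to B: a big jump
}

# Same-card, different-driver comparison: A looks like it never slowed down.
assert scores[("A", "D")] == scores[("A", "E")]

# Cross-card comparison on the new driver: A now trails B,
# even though A's absolute speed never changed.
gap_before = scores[("A", "D")] - scores[("B", "D")]
gap_after = scores[("A", "E")] - scores[("B", "E")]
print(gap_before, gap_after)  # 20 -30
```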

As for "how does the 960 lose to the 780 when the 780 should be superior"?

Same way the superior Nvidia cards occasionally lose to AMD cards - different titles leverage different cards...

In general, however, the 780 still outperforms the 960 - just like Nvidia cards generally outperform their AMD counterparts.

I already asked for an example where an old AMD card with superior specs loses to a new AMD card with substantially lower specs. Probably those examples don't exist, as AMD has better driver support for older cards.

That "generally" includes old games that are of no interest now.

And let's go to your craziest comment now... "How is AMD losing in the present?" Let's spell it out for you... BECAUSE EVERY SINGLE ONE OF THEIR CARDS ISN'T AS GOOD AS THEIR NVIDIA COUNTERPART!!!

Sorry for the caps, but I'm beginning to think that perhaps you don't read so well, so I'm regressing to Grade 1 reading level for you :)

Yeah right, a new AAA title that favours neither AMD nor Nvidia: https://www.computerbase.de/2016-07/doom-vulkan-benchmarks-amd-nvidia/

@HardReset "If someone offered me a 1080 or RX460 I'd choose the RX460 because future proof"

This shows you're completely inept. I bought an R9 280 when it was the best for my buck; it came with 3 AAA titles, which I sold to recoup some cash. Now I own a 970 that I snagged off eBay last year for $250. Instead of ignorantly defending inferior cards, maybe you should check some benchmarks? Take a computer class, get informed. Maybe take a look at Wikipedia.

Writing this comment for me? Next time, comment on something I wrote, not something someone else said I wrote.

Ah, HardReset... it's been a while. Still riding the "AMD is my lord and savior" train (is anybody surprised?). Let's be fair for a bit: he is right, the mid-tier space heater... I mean, RX 480, is the best thing that ever happened to us, right? Low price, MAD specs, runs DOOM like there's no tomorrow, stomps nVidia into submission in every broken console port... erm... I mean, DX12 title out there! All hail the mighty RX 480! Also, here's a thought: how come in Deus Ex, AMD's DX12 performance is lower than their own DX11?

https://a.disquscdn.com/uploads/mediaembed/images/4255/6938/original.jpg
http://a.disquscdn.com/uploads/mediaembed/images/4265/4500/original.jpg

Also, in BF1, what's with the frame times on Radeons? A title which, mind you, is stamped AMD Gaming Evolved.
http://a.disquscdn.com/get?url=http://techreport.com/r.x/dxmdpreview/dx12-50.png&key=i7n9hGtKrJLHn0Evw-RWyg&w=800&h=230

Edit: The DX12 version tested was a BETA.

Care to explain WHY DX12 is slower than DX11? Probably because the game is buggy. It's quite usual today for games to be fixed after launch.

That BF1 chart seems to be Deus Ex as well.

Still riding the AMD/DX12/Vulkan hype train? Still waiting on those magic drivers that will sprinkle pixie dust on 5-year-old cards and give them an extra 50 FPS at 4K? The only reason AMD is putting their eggs in the "low-level API" basket is that they are quite inept when it comes to the software backing up their own bloody hardware. They would be stupid not to; it's less work for them. And in case you're asking for proof, check the results for DOOM, where the 970 gives the Fury X a run for its money in OpenGL.

https://community.amd.com/thread/202636
https://community.amd.com/thread/200378
https://community.amd.com/thread/200351

Who cares about OpenGL in Doom when Vulkan is much faster? If you look at the dates, those threads are too old to mean anything anymore, as Doom was patched and new drivers were released after them.

Also check Linux OpenGL performance, where the Fury X (still the best AMD has to offer) struggles against a 960.
Another thing you're looking cross-eyed at is that special kind of thinking that DX12 is all about Async. By the way, newsflash: you don't optimize hardware to a low-level API, you optimize software to the hardware exposed by a low-level API. And since we're on the subject, did you know that it's not all about Async? Mind-blowing, right?
https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D - you might find this interesting (Unless nVidia paid off Wikipedia as well).

I see no reason to support OpenGL, because Vulkan is much better. Even the Khronos Group agrees with that. OpenGL development is quite slow today, as Vulkan has more or less replaced it.

There is more to DX12 than that, and you might find out that nVidia supports things AMD does not. Take rendering implementations as an example: Maxwell and Pascal rasterize and process fragments in tiles, while AMD processes in screen space. This allows nVidia GPUs to use less memory bandwidth and keep all the important data in L2 cache, ensuring the GPU stays completely saturated. For AMD, the data has to travel back and forth between GPU memory and L2 cache, causing bottlenecks and cache misses. That hurts fragment shading, which takes up 60-80% or more of rendering time, so a bottleneck here makes a huge impact. This is one of the primary reasons why nVidia can perform better with much lower memory bandwidth. And this is just scratching the surface. Oh, and by the way, what's your opinion on AMD cards that do not render the same things in the same frame as their nVidia counterparts?

Nvidia supports some things AMD does not, but those are mostly useless. Nvidia has an edge in memory bandwidth usage, which essentially means Nvidia does not need as much memory bandwidth as AMD. Still, AMD can use a wider memory bus to compensate; it adds a little cost but is not a very big problem. Vega can perhaps at least partially fix this. Also, that has nothing to do with DX12 features or support.

Looking at the GPU level, AMD has much more hardware for Mantle features, and as DX12 and Vulkan are based on Mantle, AMD clearly has the edge there. Just as Doom's Vulkan support proves.

My opinion is that different architectures have different strengths and weaknesses.
 
You just don't get it, do you....When evidence is so clearly against you, you simply change the argument... then return to your old one thinking that the previous evidence no longer applies...

Let's just go REALLY simple, because clearly your mind can't handle too much at once:

1) A card can only be judged to have "slowed down because of drivers" if you test that card with the SAME TITLE with DIFFERENT DRIVERS. When a card performs differently on a DIFFERENT title, there are far too many variables to ascertain the cause. So just because a 960 outperforms the 780 on some random title doesn't mean ANYTHING other than that specific title is optimized for the 960, not the 780...

2) As for the buyer recommendations... CARD FOR CARD, AMD loses each and every battle with its Nvidia counterpart. This isn't Steve's "Nvidia Fanboy"... it's simple benchmarks (you know, those things that actually prove how a card performs that you only look at when they favour AMD) and cost...

The only "win" for AMD (which Steve DOES include) is the budget segment - which is simply because Nvidia doesn't have a current generation card to compete with yet (1050 is coming - where are AMD's high-end cards though?).

3) Future performance. I (and others) have tried to make you understand this but you either can't (or won't) get it... To buy a low-mid tier card expecting AAA title gaming in the future is simply unrealistic, no matter what card you purchase. A $200 card will not be able to play top games at ultra resolutions over 60FPS in 3 years, no matter what.

So what if your AMD card can play a game at 14FPS while an Nvidia card plays it at 13FPS in 3 years... both are unplayable!!

If you want a card that will continue to play AAA games far into the future, you need to spend more - and right now, that means your best bet is Nvidia's 1070 (or 1080 if you have the cash). AMD simply can't compete in this price bracket.
_____________________________
I hope that's simple enough for you to grasp, although, with your past performance, I'm assuming this will completely fly over your head....
 
You just don't get it, do you....When evidence is so clearly against you, you simply change the argument... then return to your old one thinking that the previous evidence no longer applies...

ROFLMAO. You can find tons of evidence behind my links. You just don't understand them.

Let's just go REALLY simple, because clearly your mind can't handle too much at once:

1) A card can only be judged to have "slowed down because of drivers" if you test that card with the SAME TITLE with DIFFERENT DRIVERS. When a card performs differently on a DIFFERENT title, there are far too many variables to ascertain the cause. So just because a 960 outperforms the 780 on some random title doesn't mean ANYTHING other than that specific title is optimized for the 960, not the 780...

A card can be judged to have slowed down if the difference against another card suddenly increases. One title in particular does not seem to be optimized for the GTX 960, yet the GTX 780 is still much slower. And about this "slowed down" thing, again, a clear example: Card A's speed is 100, Card B's speed is 100. Later, Card A's speed is 100 and Card B's speed is 120. Can we say Card A has slowed down compared to Card B, or what?
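Put in numbers, the "slowed down relative to" claim looks like this (the scores are hypothetical):

```python
# Hypothetical benchmark scores at launch and two years later.
card_a = {"launch": 100, "later": 100}
card_b = {"launch": 100, "later": 120}

# Relative performance of A versus B at each point in time.
rel_launch = card_a["launch"] / card_b["launch"]  # 1.0 -> dead even
rel_later = card_a["later"] / card_b["later"]     # ~0.83 -> A trails by ~17%
print(round(rel_launch, 2), round(rel_later, 2))  # 1.0 0.83
```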

2) As for the buyer recommendations... CARD FOR CARD, AMD loses each and every battle with its Nvidia counterpart. This isn't Steve's "Nvidia Fanboy"... it's simple benchmarks (you know, those things that actually prove how a card performs that you only look at when they favour AMD) and cost...

The only "win" for AMD (which Steve DOES include) is the budget segment - which is simply because Nvidia doesn't have a current generation card to compete with yet (1050 is coming - where are AMD's high-end cards though?).

Loses every battle, yeah: https://www.computerbase.de/2016-07/doom-vulkan-benchmarks-amd-nvidia/

AMD is preparing 100/100 card.

3) Future performance. I (and others) have tried to make you understand this but you either can't (or won't) get it... To buy a low-mid tier card expecting AAA title gaming in the future is simply unrealistic, no matter what card you purchase. A $200 card will not be able to play top games at ultra resolutions over 60FPS in 3 years, no matter what.

So what if your AMD card can play a game at 14FPS while an Nvidia card plays it at 13FPS in 3 years... both are unplayable!!

What do you mean by "ultra resolutions"? Look at this: https://uk.hardware.info/reviews/67...s-test-results-full-hd-1920x1080-+-frametimes

Doom at 1080p Ultra gives over 60 FPS on an R9 280X even without Vulkan. About three years ago, the R9 280 could be bought for under $200 (at least I bought one for that). Assuming Vulkan gives a 20% boost and the R9 280 is about 10% slower than the R9 280X, the R9 280 would achieve around 70 FPS at 1080p Ultra.

So a 3-year-old $200 card gives over 60 FPS at 1080p Ultra in a AAA title. Not bad, eh?
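That estimate is easy to reproduce; the 20% Vulkan gain and the ~10% gap between the 280 and 280X are HardReset's own assumptions, and the 65 FPS baseline below is an assumed stand-in for "over 60 FPS":

```python
# Back-of-the-envelope check of the R9 280 estimate above.
r9_280x_baseline = 65.0  # "over 60 FPS" at 1080p Ultra without Vulkan (assumed)
vulkan_gain = 1.20       # assumed 20% boost from Vulkan
r9_280_factor = 0.90     # R9 280 assumed ~10% slower than the 280X

r9_280_estimate = r9_280x_baseline * vulkan_gain * r9_280_factor
print(round(r9_280_estimate))  # 70
```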

If you want a card that will continue to play AAA games far into the future, you need to spend more - and right now, that means your best bet is Nvidia's 1070 (or 1080 if you have the cash). AMD simply can't compete in this price bracket.
_____________________________
I hope that's simple enough for you to grasp, although, with your past performance, I'm assuming this will completely fly over your head....

The GTX 1070 and GTX 1080 are among the worst bets right now. Both are slow cards (slow memory, small chip), so they will get old very quickly. Also, the GTX 1080 is far from enough for 4K. So buying a GTX 1070 or GTX 1080 is simply a waste of money, no matter how you try to promote Nvidia's old technology. Given those facts, it's no wonder Nvidia's discrete-card market share went down 7% last quarter. I expect the slide to continue.
 
ROFLMAO. You can find tons of evidence behind my links. You just don't understand them.
But the thing is... your evidence is just links to forums that, when read, tend to disprove your claims... let's look below to see :)

A card can be judged to have slowed down if the difference against another card suddenly increases. One title in particular does not seem to be optimized for the GTX 960, yet the GTX 780 is still much slower. And about this "slowed down" thing, again, a clear example: Card A's speed is 100, Card B's speed is 100. Later, Card A's speed is 100 and Card B's speed is 120. Can we say Card A has slowed down compared to Card B, or what?

No, IT CAN'T!!! All this proves is that this title favours new cards.... it DOES NOT PROVE ANYTHING ELSE!! The ONLY way to see if drivers have slowed down a card is to use them on the same title... but since you can't find any evidence for this, you simply repeat your flawed logic over and over again.




Once again, you go back to one flawed Doom benchmark... just about EVERY OTHER TITLE favours Nvidia... are you really trying to tell me that the Fury will always outperform a 1070?!?! If so, I want some of whatever you are smoking.



What do you mean by "ultra resolutions"? Look at this: https://uk.hardware.info/reviews/67...s-test-results-full-hd-1920x1080-+-frametimes

Doom at 1080p Ultra gives over 60 FPS on an R9 280X even without Vulkan. About three years ago, the R9 280 could be bought for under $200 (at least I bought one for that). Assuming Vulkan gives a 20% boost and the R9 280 is about 10% slower than the R9 280X, the R9 280 would achieve around 70 FPS at 1080p Ultra.

So a 3-year-old $200 card gives over 60 FPS at 1080p Ultra in a AAA title. Not bad, eh?

I mean greater than 1080p, and better than Medium settings... Pretty much every card above the Nvidia 750 can play Doom at those settings... And you keep using Doom... this isn't evidence, it's simply a cherry-picked, irrelevant link - and even when you click on it, all it proves is that pretty much EVERY card plays Doom at crappy settings...

Let's look at some Witcher 3 benchmarks.... not quite a current title... but close...
https://www.techspot.com/review/1006-the-witcher-3-benchmarks/page2.html

Check about halfway down the page... 1440P resolution at medium settings. Notice how only the Nvidia 980 gives an average of 60FPS (and barely) while your vaunted "future proof" AMD cards all fail.... This is what I mean by future-proofing... No cheap card can play at 60FPS after a couple years!!

The GTX 1070 and GTX 1080 are among the worst bets right now. Both are slow cards (slow memory, small chip), so they will get old very quickly. Also, the GTX 1080 is far from enough for 4K. So buying a GTX 1070 or GTX 1080 is simply a waste of money, no matter how you try to promote Nvidia's old technology. Given those facts, it's no wonder Nvidia's discrete-card market share went down 7% last quarter. I expect the slide to continue.
Yes... they are terrible cards... that's probably why AMD didn't bother to put out a competing card at that price level... why would anyone want a card that can run 4k resolutions using less power and heat than a weaker AMD card? Only a fool I guess...

It's logic like this that makes the rest of our forum posters mock you...
 
No, IT CAN'T!!! All this proves is that this title favours new cards.... it DOES NOT PROVE ANYTHING ELSE!! The ONLY way to see if drivers have slowed down a card is to use them on the same title... but since you can't find any evidence for this, you simply repeat your flawed logic over and over again.

Quite funny that the game favours new cards, considering it was published before those new cards arrived ;)

A much better theory is that there was something wrong with the drivers, and it was fixed for newer cards only.

Using your flawed logic, Windows 7/8 will not get any worse when neither gets support for Kaby Lake or Zen. So Windows 7/8 is just as good as before compared to Windows 10. Still, many complain. Why?

Once again, you go back to one flawed Doom benchmark... just about EVERY OTHER TITLE favours Nvidia... are you really trying to tell me that the Fury will always outperform a 1070?!?! If so, I want some of whatever you are smoking.

A new AAA title that supports new technology and doesn't directly favour either AMD or Nvidia is many times more important than some random old-technology title that favours Nvidia. Right? Right.

I mean greater than 1080P, and better than Medium settings... Pretty much all the cards past the Nvidia 750 can play Doom at those settings... And you keep using Doom... this isn't evidence, it's simply a cherry-picked irrelevant link - and even when you click on it, all it proves is that pretty much EVERY card plays Doom at crappy settings...

Let's look at some Witcher 3 benchmarks.... not quite a current title... but close...
https://www.techspot.com/review/1006-the-witcher-3-benchmarks/page2.html

Check about halfway down the page... 1440P resolution at medium settings. Notice how only the Nvidia 980 gives an average of 60FPS (and barely) while your vaunted "future proof" AMD cards all fail.... This is what I mean by future-proofing... No cheap card can play at 60FPS after a couple years!!

So a $550 Nvidia card cannot run a AAA game at 1440p @ medium settings just 8 months after the card was released? Previously you said

If you want a card that will continue to play AAA games far into the future, you need to spend more - and right now, that means your best bet is Nvidia's 1070 (or 1080 if you have the cash). AMD simply can't compete in this price bracket.

The GTX 1080 will be obsolete before next summer. So why buy a GTX 1080 if it gets old within a year? That's why Radeon cards are more future proof. You save money for a better card, as no current card is future proof.

Yes... they are terrible cards... that's probably why AMD didn't bother to put out a competing card at that price level... why would anyone want a card that can run 4k resolutions using less power and heat than a weaker AMD card? Only a fool I guess...

It's logic like this that makes the rest of our forum posters mock you...

AMD wanted more market share and got it. AMD's cards are selling hugely, while Nvidia desperately tries to get rid of overpriced GTX 1070 and 1080 cards. Even the GTX 1080 is far from enough for 4K gaming.
 
Quite funny that the game favours new cards, considering it was published before those new cards arrived ;)
Not relevant - the point is, again, that a card can only be judged to have "slowed down" by testing it AGAINST ITSELF!!! Let's pretend I'm a sprinter.... I used to run the 100 Meter in 10 seconds... now, I'm 2 years older, and run the 100 Meter in 12 seconds... you can fairly say that I got slower with age, right?

Now... let's try again... I'm a sprinter... I used to be able to beat you by 2 seconds in the 100 meter.... Now, 2 years later, you're beating me by 1 second in the 200 meter!! Did I get slower? Or did you get faster? We don't know!!!!! Unless we have both of our times from the same distance, we will never know!!

This is the type of logic and evidence that you keep presenting.... this is why we all mock you!

A new AAA title that supports new technology and doesn't directly favour either AMD or Nvidia is many times more important than some random old-technology title that favours Nvidia. Right? Right.

You chose Doom... again... it's ONE TITLE... pretty much all of the other games favour Nvidia.... hence the reviews... hence the recommendations... get it yet? I somehow doubt it, but I'll keep trying :)

So a $550 Nvidia card cannot run a AAA game at 1440p @ medium settings just 8 months after the card was released? Previously you said the GTX 1080 will be obsolete before next summer. So why buy a GTX 1080 if it gets old within a year? That's why Radeon cards are more future proof. You save money for a better card, as no current card is future proof.

That's my point... not yours... none of the AMD cards could play it either... that's why buying a card for "future proofing" is not very intelligent!!! Your Radeons were NOT future proof - they were all SLOWER than the Nvidia card!!



AMD wanted more market share and got it. AMD's cards are selling hugely, while Nvidia desperately tries to get rid of overpriced GTX 1070 and 1080 cards. Even the GTX 1080 is far from enough for 4K gaming.
My bad... I keep making the mistake of trying to argue logically with you... you'd think I'd have learned...

Can you provide some evidence of AMD suddenly outselling Nvidia? I can't find any, but I'm sure you'll be able to come up with some... Of course, if we look at the past several years, we can find plenty of evidence of Nvidia outselling AMD....
 
@HardReset

Oh god if I had to quote you for every single time you spew nonsense I would waste more time than I can spare.

A. If it worked for id, that doesn't mean it works the same way for other devs as well. See The Talos Principle, where the failed Vulkan wrapper makes the whole thing a terrible experience.

B. You care about OpenGL in Doom because people like you hype the gains for AMD, which, had they bothered to fix their crap drivers, would be at most 10% instead of 34%. Also, feel free to run your own personal bench and compare, since I'm willing to bet you run AMD hardware.

C. Yet wrong again: Vulkan is not based on Mantle. As you can read in the specs, Vulkan is built on SPIR-V, the compiler infrastructure and intermediate representation for shader languages that is also the basis for OpenCL (2.1). Vulkan's features are built on top of this, and this architecture has nothing in common with either Mantle or Direct3D. What Vulkan inherited from Mantle is not the underlying architecture, but some aspects of the front end.

D. You can beta test a crap console port all you want; it's still crap at the end of the day. Oh, and by the way, the game is still DX11. There is, however, a DX12 wrapper. But if you lose frames in DX12 with whatever hardware you have, why bother running DX12 when there is no visual difference between the two? Here's a hint: Direct3D 12 will be a standard when Win7 users < Win10 users, and not before. Hype DX12 all you want; it doesn't make it a valid point of view. DX12 won't pull AMD's finances out of the toilet. Oh, and let's not forget the GTX 1070 is outselling the RX 480 3:1 (yes, they are different tier cards; yes, one is 2x the price of the other; and no, nVidia didn't lose market share because of the RX 480 - the last batch of finance reports you just quoted did not include RX 480 sales from AMD, nor 1080/1070 sales from nVidia; the share was lost because AMD dropped the prices on the R9s). Free advice: do your homework.

E. Do you know what widening the memory controller would imply? That's one thing; the second is that even if they widen the bus, it won't make the internal bottlenecks vanish. That is a flaw which can only be corrected by overhauling the whole arch, which won't happen anytime soon since AMD can't support it financially. That's why you sometimes see horrible frame times on AMD hardware; it's not something they can fix with a driver update. Why do you think the Fury was priced like it was when it hit the market? Why do you think the RX 480 is being sold for $200 (I know, I know... I live in a dreamland where you can actually get an RX 480 for $200)? Why do you think AMD sells so low? Because it's cheap enough to mass produce, that's why. That's also why the reference model felt like a slab of cheap plastic. That's why the cooling was subpar. That's why they overvolted the crap out of these cards, so they could get better yields from that cesspool of a foundry they keep using for the past... well, ever... If you think AMD is doing this because they "care" about your performance 2 years from now, you're barking up the wrong tree. If AMD ever had the upper hand in any segment, history should teach you that they always matched/outpriced their competition.

F. "Nvidia supports some things AMD does not but those are mostly useless" - Oh I'm sorry I didn't know that. Thank you for the free enlightenment lesson. I forgot that DX 12 is all about async for you people. Read up on it.

Edit: I've used "AMD" and "Crap" in the same sentence. Triggered yet?
 
Not relevant - the point is, again, that a card can only be judged to have "slowed down" by testing it AGAINST ITSELF!!! Let's pretend I'm a sprinter.... I used to run the 100 Meter in 10 seconds... now, I'm 2 years older, and run the 100 Meter in 12 seconds... you can fairly say that I got slower with age, right?

Assuming I ran the 100 meter in 10 seconds, and after 2 years I still run it in 10 seconds while you run it in 12 seconds, then you got slower and I got faster compared to you.

Now... let's try again... I'm a sprinter... I used to be able to beat you by 2 seconds in the 100 meter.... Now, 2 years later, you're beating me by 1 second in the 200 meter!! Did I get slower? Or did you get faster? We don't know!!!!! Unless we have both of our times from the same distance, we will never know!!

This is the type of logic and evidence that you keep presenting.... this is why we all mock you!
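The sprinter analogy can be put in code form (all times here are invented purely for illustration): a runner can only be judged to have slowed down by comparing them against themselves over the same distance.

```python
# Hypothetical sprint times in seconds; invented numbers for illustration only.
times = {
    ("me", "100m", 2014): 10.0,
    ("me", "100m", 2016): 12.0,   # same runner, same distance -> comparable
    ("you", "200m", 2016): 22.0,  # different runner AND distance -> not comparable
}

def got_slower(runner, distance, then, now):
    """Valid check: one runner against themselves over one fixed distance."""
    return times[(runner, distance, now)] > times[(runner, distance, then)]

print(got_slower("me", "100m", 2014, 2016))  # True: I got slower with age

# There is no valid comparison between ("me", "100m") and ("you", "200m"):
# two variables changed at once, so nothing can be concluded from it.
```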

Let's take that logic into games.

Take Nvidia cards A and B. A used to beat B by 10% in title C.... Now, 2 years later, B is beating A by 10% in title D. Did A get slower? Or did B get faster? We don't know!!!!! Unless we have both of our results from the same title, we will never know!!

So your logic does not apply.

You chose Doom... again... it's ONE TITLE... pretty much all of the other games favour Nvidia.... hence the reviews... hence the recommendations... get it yet? I somehow doubt it, but I'll keep trying :)

Once again: you said Nvidia is better in every price range, but Doom says otherwise. And once again, that is a new AAA title, so it carries many times more weight than older titles or titles optimized for AMD or Nvidia, because it tells us quite a lot about how cards will perform in the future.

That's my point... not yours... none of the AMD cards could play it either... that's why buying a card for "future proofing" is not very intelligent!!! Your Radeons were NOT future proof - they were all SLOWER than the Nvidia card!!

Slower than a similarly priced Nvidia card bought at the same time?

My bad... I keep making the mistake of trying to argue logically with you... you'd think I'd have learned...

Can you provide some evidence as to AMD suddenly outselling Nvidia? I can't find any, but I'm sure you will be able to get some... Of course, if we look at the past several years, we can find plenty of evidence for Nvidia outselling AMD....

I said AMD took a lot of market share from Nvidia in the discrete GPU market. As for the total GPU market, Intel clearly outsells Nvidia and AMD combined. Nvidia has only a 0.4% lead over AMD, so I expect AMD GPU shipments to surpass Nvidia's soon.
 
@HardReset

Oh god if I had to quote you for every single time you spew nonsense I would waste more time than I can spare.

A. Just because it worked for id doesn't mean it has to work the same way for other devs. See The Talos Principle, where the failed Vulkan wrapper makes the whole thing a terrible experience.

B. You care about OpenGL in Doom because people like you hype the gains for AMD; if they had bothered to fix their crap OpenGL drivers, those gains would be 10% at most instead of 34%. Also, feel free to run your own personal bench and compare, since I'm willing to bet that you run AMD's hardware.

A. There are games that are more or less finished, and then there are games that are more or less unfinished. So is the problem with The Talos Principle or with Vulkan itself? I would say: The Talos Principle.

B. Why should AMD support OpenGL when Vulkan is better? Nvidia has good examples of supporting not-so-great technologies. GameWorks, ahem...

C. Wrong yet again: Vulkan is not based on Mantle. As you can read in the specs, Vulkan is built on SPIR-V, the compiler infrastructure and intermediate representation for shader languages that is also the basis for OpenCL 2.1. Vulkan's features are built on top of this, and this architecture has nothing in common with either Mantle or Direct3D. What Vulkan inherited from Mantle is not the underlying architecture, but some aspects of the front end.
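Since SPIR-V came up: it's just a binary intermediate representation that the driver consumes, and every module starts with a magic word (0x07230203, per the Khronos SPIR-V specification). A minimal sketch of recognizing a SPIR-V blob, with a hand-built fake header for illustration (little-endian word order assumed; the spec uses the magic to detect endianness):

```python
import struct

SPIRV_MAGIC = 0x07230203  # magic number defined in the Khronos SPIR-V spec

def is_spirv(blob: bytes) -> bool:
    """Check the 4-byte magic word at the start of a SPIR-V module."""
    if len(blob) < 4:
        return False
    (magic,) = struct.unpack("<I", blob[:4])  # little-endian assumed here
    return magic == SPIRV_MAGIC

# Fake 5-word module header: magic, version 1.0, generator 0, bound 1, schema 0
header = struct.pack("<5I", SPIRV_MAGIC, 0x00010000, 0, 1, 0)
print(is_spirv(header))       # True
print(is_spirv(b"\x00\x00"))  # False: too short to be a module
```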

D. You can beta test a crap console port all you want; it's still crap at the end of the day. Oh, and by the way, the game is still DX11. There is, however, a DX12 wrapper. But if you lose frames in DX12 on whatever hardware you have, why bother running DX12 when there is no visual difference between the two? Here's a hint: Direct3D 12 will be a standard when Win7 users < Win10 users, and not before. Hype DX12 all you want; that doesn't make it a valid point of view. DX12 won't pull AMD's finances out of the toilet. Oh, and let's not forget the GTX 1070 is outselling the RX 480 3:1 (yes, they are different-tier cards; yes, one is 2x the price of the other; and no, nVidia didn't lose market share because of the RX 480. The last batch of finance reports you just quoted did not include the RX 480 sales from AMD, nor the 1080/1070 from nVidia; the share was lost because AMD dropped the prices on the R9s). Free advice: do your homework.

C. AMD stopped developing Mantle because most of it was adopted into Vulkan. So Vulkan is very much Mantle, and it seems Mantle and Vulkan offer about the same performance gain on AMD cards. If not "based on", we can at least say Mantle and Vulkan have much in common.

D. Crappy console ports are crappy, no matter whether it's DX11 or DX12. Still, that does not make DX12 any worse. Some games already demand Windows 10, and Kaby Lake and Zen will not receive full Windows 7 support.

About that "RX 480 sales not included" blah blah.

http://www.anandtech.com/show/10613...-grabs-market-share-but-nvidia-remains-on-top

AMD sold approximately 5.5 million of desktop discrete GPUs in the first half of 2016, up from about 4.2 million in the same period a year ago. The company’s management says that the recent introduction of its Polaris-based Radeon RX 470 and 480 GPUs helped it to increase unit sales of graphics cards in the channel in Q2.

“Our strong second quarter graphics performance was capped by the launch of our new Polaris-based RX 480 GPUs at the end of June.”

Who needs to do homework?

E. Do you know what it would imply to widen the memory controller? That's one thing; the second is that even if they widened the bus, it wouldn't make the internal bottlenecks vanish. That is a flaw which can only be corrected by overhauling the whole arch, which won't happen anytime soon since AMD can't support it financially. That's why you sometimes see horrible frametimes on AMD's hardware; it's not something they can fix with a driver update. Why do you think the Fury was priced like it was when it hit the market? Why do you think the RX 480 is being sold for $200 (I know, I know... I live in a dreamland where you can actually get an RX 480 for $200)? Why do you think AMD sells so low? Because it's cheap enough to mass produce, that's why. That's also why the reference model felt like a slab of cheap plastic. That's why the cooling was subpar. That's why they overvolted the crap out of these cards, so they can get better yields from that cesspool of a foundry they've kept using for... well, ever. If you think AMD is doing this because they "care" about your performance 2 years from now, you're barking up the wrong tree. If AMD ever had the upper hand in any segment, history should teach you that they always matched/outpriced their competition.

F. "Nvidia supports some things AMD does not but those are mostly useless" - Oh, I'm sorry, I didn't know that. Thank you for the free enlightenment lesson. I forgot that DX12 is all about async for you people. Read up on it.

Edit: I've used "AMD" and "Crap" in the same sentence. Triggered yet?

E. What is a flaw and what is not? Different architectures have different strengths. Nvidia's architecture sucks at async shaders and at pretty much everything related to Vulkan and/or DX12. The Fury was the first card with HBM memory, so not everything went smoothly; that is not surprising. Still, AMD had an HBM card on the market, while we are still waiting for Nvidia's HBM or HBM2 card on the consumer market.

The RX 480 is cheap to produce and AMD wants market share back, so it's not priced high. The price got higher because there is more demand than supply. The RX 480 has a much better VRM than the Nvidia GTX 1080, so it's not that simple. Why not overvolt to get better yields when the cards still sell?

F. Nvidia has a bit more feature support for DX12, but those features are mostly useless, and last I read, AMD has no intention of supporting them as they wait for a "more open" alternative.
 
So far I have been humiliating others. You're typical Nvidia fanboy. You say someone is wrong but cannot tell where. Stop trolling.
You're the fanboy here pal. And you haven't got a clue what you're talking about. But please, don't let me prevent you from continuing to humiliate yourself!
 
Oh dear. Hard Reset, you love to humiliate yourself don't you!

So far I have been humiliating others. You're typical Nvidia fanboy. You say someone is wrong but cannot tell where. Stop trolling.

Clearly Triggered.

A. The Talos Principle. To translate what you said: "I bought a BMW with no engine. Therefore BMW makes crap cars." Last time I checked, the code makes the engine, not the other way around. But hey, what do I know, right?

B. Because Linux users want support for something other than DOOM (whose Vulkan support, mind you, is still in beta and still crap). Talk about being narrow.

C. AMD stopped Mantle because they couldn't strike enough deals with devs to make an impact. Furthermore, whatever the Khronos Group is using is the front end (I'm serious when I say to read up on things). AMD dropped it because it wasn't viable or tangible for them anymore.

D. "Crappy console ports are crappy, no matter if there's DX11 or DX12. Still that does not make DX12 any worse. Some games already demand Windows 10 and Kaby Lake or Zen will not receive full Windows 7 support." - Strange thing: it's a Gaming Evolved title, and yet you decide to mention GameWorks. That's rich. So far Gaming Evolved has brought 2006 visuals to 2016. Nothing more, nothing less. And again you fail to grasp the context. I'm not bashing DX12 here; I'm bashing people who cry "X is doing far better than Y" without any basis and/or any knowledge of the subject.

C. "Our strong second quarter graphics performance was capped by the launch of our new Polaris-based RX 480 GPUs at the end of June" - what does this tell you? April, May and June (Q2): the GPU was released on the 29th of JUNE. AMD Radeon 400 Series release date: 29 June 2016. Do. Your. Homework.

D. Wow, just wow. Last time I checked, nVidia was the first to run Vulkan. nVidia still has better support for non-MS OSes. Async is there, even if it's at a software level, because the arch doesn't need the compute part to run fully saturated. Meanwhile you have the Fury, which most of the time was running with 1/3 of its performance hindered by compute cores waiting for "work". So waiting on devs to optimize for your arch seems very legit to me. Why do you overvolt chips to get better yields? Because GloFo has always had supply issues; they cranked up the voltage so more chips would clock at their designed speeds, and thus more make the cut. Better VRM because the thing runs as "cold" as Fermi, and that's just for a mid-tier card, although the overclocking abilities are sub-Fermi. And AMD can't afford to pay off those 2-3 pieces they sold on the first day if customers had their rigs blow up. (See, this is baiting... Triggered yet?)

F. If AMD doesn't support them, that doesn't mean others can't or won't. Furthermore, that "more open" comment actually made me giggle, because as of now AMD is unable to add more DX12 features to this arch. It's old.

Talking sense into you feels like digging through reinforced concrete with a wooden spoon. So go ahead and type all you want; you're not entertaining me anymore.
 
So far I have been humiliating others. You're typical Nvidia fanboy. You say someone is wrong but cannot tell where. Stop trolling.
See... this is why we need to understand that HardReset is special... he has the ability to be completely trounced in an argument by multiple users, yet still 100% believe that not only is he right, but that he's crushed all of us...

Maybe we should try dealing with him the same way you'd deal with a puppy... well done HardReset - go get your treat!
 
See... this is why we need to understand that HardReset is special... he has the ability to be completely trounced in an argument by multiple users, yet still 100% believe that not only is he right, but that he's crushed all of us...

Maybe we should try dealing with him the same way you'd deal with a puppy... well done HardReset - go get your treat!
I will be the first to admit that the poster you are referring to has coherent points on this site. The issue seems to arise only in the AMD/Vulkan/Doom sphere, where the cognitive bias presents itself.
 
A manufacturer has "fast card" A and "slow card" B. New game C comes out, and there is a very bad bug in drivers D that makes C run badly. The manufacturer decides to release driver E that fixes the bug. So now C runs much better, BUT that driver fix is available only to B.

Now, "slow card" B is much faster than "fast card" A because the driver bug was fixed for B but not for A. Using your logic, A was not made any slower, as its speed with drivers D is the same as with drivers E.

So comparing the same card with different drivers essentially tells you nothing. It basically tells you that Nvidia applies driver fixes for new cards but not for old cards. That's exactly why my evidence about Nvidia making old cards look slow is valid.

I'm going to go ahead and assume the concept of driver maturity doesn't make any sense to you at all. But I'll try just the same.

So by your logic (if you want to call it that, and I have trouble truly grasping it most of the time), you have cards A and B. Card A has been out for several months if not longer, and there have been several driver fixes over those months to address bugs and so on, improving performance gradually.

Then card B comes out, and head to head with card A using driver D (which is a mature driver for card A) it loses. So to fix these issues with card B (not card A, because its drivers are fine and don't actually need to be fixed), driver E gets released, containing NO IMPROVEMENTS for card A because they are not necessary, but containing improvements for card B, augmenting its performance so that it now surpasses card A.

Again, to reiterate: card A never had a bug that needed to be patched; it was only on card B, hence the patch that ONLY applies to card B. This isn't a new concept; it's common sense in the industry that you can only get so much performance out of a GPU.

Another example: if a game has been in development for over a year and card B wasn't out yet, they were probably developing the game to run on card A, having never seen the architecture behind the silicon on card B. The game comes out performing as expected on card A because it was developed with that card in mind; then, to improve performance on card B, a new driver (driver E) is released, because once more, card A was around during development, as were its drivers (D). And here's the kicker: if a game is developed with a specific GPU company's support, the drivers are being tweaked for that title before the game even launches! Mind-blowing, eh?
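The A/B/D/E scenario both posts are arguing over reduces to a few numbers (all fps values here are made up purely for illustration):

```python
# Hypothetical fps for two cards under two driver versions in one game.
# Driver E contains a bug fix that applies only to card B.
fps = {
    ("A", "driver_D"): 60.0,
    ("A", "driver_E"): 60.0,  # no change: A never had the bug
    ("B", "driver_D"): 40.0,
    ("B", "driver_E"): 70.0,  # bug fixed: B jumps
}

# Comparing A against itself across drivers: A did not get slower.
a_change = fps[("A", "driver_E")] - fps[("A", "driver_D")]

# Comparing A against B only tells you the relative gap moved,
# not that A itself was made any slower.
gap_old = fps[("A", "driver_D")] - fps[("B", "driver_D")]
gap_new = fps[("A", "driver_E")] - fps[("B", "driver_E")]

print(a_change)          # 0.0: A's own speed is unchanged
print(gap_old, gap_new)  # 20.0 -10.0: only the relative gap flipped
```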
 