AMD Radeon RX 480 Review: Performance for the masses

I don't think anyone dislikes rooting for the underdog. But the underdog has to bring something worth rooting for. Since the low end runs hot and hogs power, I'm really curious what the high end will bring.


Winter is coming... but seriously, I do hope AMD can get their high end to market, or it's going to be $900+ CDN for a 1080 forever. The 980/980 Ti held its ground until Nvidia had to beat themselves at their own game. It's sad having to compete with yourself; Intel is doing it as well. It does absolutely nothing for innovation.
 
(due to the PCI-E power issue and other small issues).
I don't think the PCI-E power issue is an issue at all. I think the problem is they were a bit late to the game with a merely mediocre product. They should have either come out with their big dog first and tried to beat or match the 1080, or priced the 480 so low that Nvidia couldn't afford to drop the 970 anywhere close in price. I think AMD is hurting on both the CPU and GPU sides of the business and has to make some immediate changes to make sure there is competition on both sides. If they don't compete, it will be bad for everyone.

When I build new machines I typically get the fastest GPU or two that I can afford at the time. Years ago my machines would bounce back and forth between nVidia and ATI cards. The last few machines I've built have all had nVidia cards. Perhaps AMD has bitten off a bit more than it can chew in buying ATI out. The last few generations of AMD cards have been rehashes of the same architecture, just with different numbering schemes.
 
I don't think the PCI-E power issue is an issue at all.
No sir, that's a real issue, and AMD admitted it too, so they worked on a driver-level fix (their claim) and the Polaris driver will be published in a day or two. While some people claim that there is no way to solve this without gimping the card's performance, I believe the AIB versions will provide a problem-free product sooner or later, with no worry on the user's side. The point is, this issue was the "cherry on top" of the disastrous Polaris launch; not a good impression for consumers at all.

http://www.tomshardware.co.uk/amd-rx-480-power-driver,news-53401.html
 
Why is this card getting negative points for cooling when the GTX 1080 is thermally throttled and got a perfect score? This makes a bit more noise than the GTX 1080 but keeps the card cooler yet you are only docking AMD points. Same thing for overclocking, Nvidia reference cards couldn't even overclock due to heat issues.

Steve, the author, stated the reason he gave the GTX 1080 a perfect score was because it was the best price/performance at the time. What about the RX 480? It is the best price/performance right now.

It's called preferential treatment. I read a brand new article today elsewhere, on a popular tech site, that is still printing Nvidia's false claim that the GTX 1080 doubles the performance of the GTX 980, even though that same site has done a review proving results to the contrary.

They are going to continue to spread false information, the fanboys will continue to provide them with site clicks and in the process other products and brands will not receive the same level of consideration or respect.
 
Well, the 1060 has been announced at $250 and will be out later this month... That basically ends any reason to buy this card... AMD fails again...
 
Well, the 1060 has been announced at $250 and will be out later this month... That basically ends any reason to buy this card... AMD fails again...

We will see how they match up. Don't forget AMD is free to lower their MSRP to remain competitive if need be.
 
Where's your evidence for 11%?

And let's not forget power consumption....

Speculation based on the specs of the card putting it right around the 980. It's not like they can afford to make it much faster than that, as the 1070 only performs around a 980 Ti. Nvidia isn't going to make a $200 card that creeps anywhere close to the 1070.
 
Vulkan benchmarks should matter way more to all of you than a relatively small difference in power consumption. When it was the GTX 970's 150W vs the R9 390's 300W, I could understand. But right now it likely won't be more than 50W, meaning the power-consumption argument is just an excuse to hate on AMD.
 
You have to evaluate how long you will stick with your card. If you upgrade every year or two, you can make the argument that neither Vulkan nor DX12 will become relevant within that time.

If you're like me, where you want to use your card for 4 years or so (I still have an HD 6850), Vulkan/DX12 benchmarks weigh in a lot more in determining my choice.
 
Vulkan/DX12 benchmarks weigh in a lot more in determining my choice.
In my opinion Vulkan, for its cross-platform support, is more important than DX12. At first though you were weighing Vulkan's importance against power consumption, which is a future uncertainty vs a current constant (pardon the pun with power consumption).
 
Well, in a way I was. The 50W power-consumption difference is a 'meh' reason to pick one card over the other. It's less than your average light bulb, and most people can't realistically game more than 3 to 4 hours daily anyway, further reducing the difference.

We know that Vulkan is the future. If you can choose between a card that gets a huge boost from it and one that doesn't but uses 50W less, I would pick the former, since the 50W is insignificant. Especially if it lets you go another year without upgrading, so to speak.
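The 50W argument above is easy to sanity-check with some back-of-the-envelope arithmetic. A quick sketch, assuming 4 hours of gaming a day and a hypothetical $0.12/kWh electricity rate (both numbers are my assumptions, not from the thread):

```python
# Back-of-the-envelope yearly cost of a 50 W power-draw difference.
watts_extra = 50       # extra draw of the hungrier card
hours_per_day = 4      # assumed daily gaming time
price_per_kwh = 0.12   # hypothetical electricity rate, $/kWh

# watt-hours per year, converted to kilowatt-hours
kwh_per_year = watts_extra * hours_per_day * 365 / 1000
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.0f} kWh/year, about ${cost_per_year:.2f}/year")
```

Under those assumptions it comes out to roughly 73 kWh, i.e. under $10 a year, which supports the point that 50W alone is hardly a deciding factor.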
 
Talking about DX12 performance right now is stupid, end of story. Let me spell it out for a couple of you; let's look at the past, shall we?

DX11 cards release (e.g. Radeon 5870 / GTX 480). Almost no games support it; only a few tech demos and a couple of upcoming games. Second-gen DX11 cards release (e.g. Radeon 6870). Hey, we have a couple of DX11 games now, but still nothing to worry about.

By the time DX11 became mainstream (and actually made a noticeable visual difference over the DX9/10 option that was still available in almost all those same games), the aforementioned cards were pretty much worthless when it came to DX11 performance.

Last-gen AMD/Nvidia cards support DX12 (GTX 980 / GCN 1.1+), one more so than the other.

Current AMD/Nvidia cards support DX12, and one does so better than the other. So what? By the time DX12 becomes mainstream, the RX 480 will be utterly worthless.
 
Talking about DX12 performance right now is stupid, end of story. [...]

AMD cards support DX12 all the way back to the 7970. Nvidia cards "support" DX12 in that they support a very limited feature set and rely on software emulation for async compute. DX12 will become relevant either this generation or the next. Nvidia is going to have to put significant work into its cards to support async compute. Right now we are starting to see a good trickle of DX12 titles.
 
AMD cards support DX 12 all the way back to the 7970. [...]
So perhaps Nvidia hardware will no longer be as power efficient as it is now? Because there are debates about whether implementing certain features (like async compute) increases power consumption on AMD's cards. Probably AMD is a bit ahead of Nvidia in this "feature race" because they have experience going back to the first GCN.

Talking about DX12 performance right now is stupid, end of story.
You are partly right, but try to take Vulkan (which is able to use similar advanced features to DX12) into consideration. Check the new Vulkan patch for Doom, for example. The RX 480 gets a good boost with Vulkan in Doom. We will probably see many new games using Vulkan "soon".
 
So perhaps Nvidia hardware will no longer be as power efficient as they are now? [...]

You are partly right but try to take Vulkan (which is able to use similar advanced features like DX12) into consideration. [...]

Implementing async compute will cost Nvidia some power savings, but we have no idea how much. If Nvidia does a radical redesign for their next-gen cards, the power cost of async compute could be offset by the savings of an improved architecture. Nvidia could also just tack on async compute, but it would mean lost die space and extra power use.
 
Talking about DX12 performance right now is stupid, end of story. [...]
You're assuming that under DX12/Vulkan the cards will be just as slow as under DX11. Apparently it's incomprehensible to you that the cards could get a big enough boost to stay viable at least a year longer.

Leaving this here...;

[attached image]
 
This is directly related to the Vulkan boost in Doom posted earlier. It's the percentage performance increase going from OpenGL to Vulkan:
[chart: OpenGL vs Vulkan percentage performance gain, per card]
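For anyone unsure how a chart like that is derived: it's just the relative gain from the OpenGL frame rate to the Vulkan one. A tiny sketch (the fps numbers here are made up for illustration, not from any benchmark):

```python
def percent_gain(opengl_fps: float, vulkan_fps: float) -> float:
    """Percentage performance increase going from OpenGL to Vulkan."""
    return (vulkan_fps - opengl_fps) / opengl_fps * 100

# A hypothetical jump from 60 fps to 81 fps is a 35% gain.
print(f"{percent_gain(60, 81):.0f}%")
```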
 
Apparently, my post went over your heads. History repeats itself, and having the feature set right now is probably not much of a selling point.

Remember, the GTX 460/470/480 (all of them) literally crushed the Radeon 5870 in one major DX11 feature: tessellation. However, by the time DX11 became commonplace and useful beyond just a performance boost, the GTX 480 was dated and couldn't run relevant DX11 titles very well.

The exact same thing will probably happen with ACE. Yes, AMD currently crushes Nvidia here, but how much will that matter once DX12 is commonplace? Right now, almost all DX12 games also ship with a DX11 renderer, making DX12 little more than a performance boost.

Also, it's like some of you are conveniently ignoring that the performance boost mostly just helps AMD catch up to Nvidia's base performance. Yes, there will be a couple of games where that doesn't hold true, but we can't just cherry-pick those. Look back at Mantle: most of the time it merely closed the gap between Nvidia's base DX11 performance and AMD's lesser base DX11 performance.

Regarding power draw and ACE, are you serious? ACE is implemented in hardware for AMD, whereas it will be software for Nvidia. Adding something via software will not all of a sudden wreck Nvidia's power draw; it will still have to stay within spec.

Before one of you comes in here calling me an nvidia shill, I switch back and forth between the two brands pretty regularly; my recent cards (since dx11) are noted below.

Radeon 5870 -> Geforce 760 -> R9 280X -> Geforce 970 (looks like I'm due to switch sides again)

AMD cards support DX 12 all the way back to the 7970. [...]

The Radeon 7970 does not support the full DX12 feature set; GCN 1.0 cards only support DX12 up to feature level 11_1. For reference, GCN 1.2 (which would be the Fury X) only supports up to 12_0, while by contrast Nvidia's 900 series supports 12_1.

AMD has hardware ACE, but that's the only ace in its pocket.
 
Once again.... If you are buying a low-mid-level card in an attempt to be "future proof", you're not very bright.... You need to go high end if you want reasonable performance in 2 years time.... So if you are buying for right now - go ahead and buy this... But don't go off spouting how great this card will be with dx12 titles in the future - you want to play those, either buy a 1070/1080 or wait a year :)
 
Once again.... If you are buying a low-mid-level card in an attempt to be "future proof", you're not very bright.... [...]

Yeah, really. The GTX 980 Ti cost over $700 just two months ago, and now AMD's mid-range card is faster in DX12/Vulkan games. Money well spent.
 
Apparently, my post went over your heads. History repeats itself, and having the feature set right now is probably not much of a selling point. [...]

The radeon 7970 does not support the full dx12 feature set, GCN 1.0 cards only support DX12 up to feature level 11_1. [...] AMD has hardware ACE, but that's the only ace in its pocket.
You're using nVidia cards as your reference, and that's the problem. Have you asked yourself why AMD cards get such a huge boost while nVidia's don't? And look at how AMD cards age compared to nVidia's. This is quite the nice summary;

[chart: how AMD and nVidia cards age over time, relative performance]



The FL12_1 supported by nVidia is not really significant. There is no reason the two features that make up FL12_1 couldn't have been part of the standard FL12_0 spec, other than giving nVidia bragging rights about how their cards would be superior to AMD's under DX12. Some people even started to argue that nVidia supports "DX12.1" compared to AMD's DX12, even though there is no DX12.1. It was a marketing scheme and nothing more. I'm not saying AMD supports those two features; they don't. But FL12_1 makes it look like the nVidia cards are superior for DX12 when they really aren't, since a lot of the features in the standard FL12_0 are supported on nVidia, but at a lower tier than on AMD, actually making AMD's cards more future proof.

Why am I saying it's a marketing scheme again? Because out of the whole list of features, combined with the different tiers, there are only two features, conservative rasterization and rasterizer-ordered views, that were required for that additional feature level. Yet AMD is a full tier higher in resource binding, a full tier higher in supporting UAVs across all stages, a full tier higher in resource heap, and supports stencil reference value from the pixel shader while nVidia's cards don't. In fact, when you look at Microsoft's WARP12 with 12_1 as the feature level, nVidia fails 5 of the 11 required features; AMD fails only three of them. From that perspective, AMD is closer to FL12_1 than nVidia is. And conservative rasterization on nVidia hardware is actually tier 2, compared to the tier 3 required to fully comply with the FL12_1 spec, meaning they marketed FL12_1 support based on fully supporting only one item from the list: rasterizer-ordered views.

And remember: AMD has supported FL11_1 since GCN 1.0, as in 2011. Do you know why Maxwell 1, released in 2014, was still FL11_0? Because it only had Tier 2 for the UAVs-across-all-stages feature. And guess what: Pascal STILL does not fully support FL11_1, and yet nVidia claims FL12_1 support.

So yes. Even GCN 1.0 can outshine Pascal due to the lower tier of nVidia's UAV support.

Don't talk about AMD only having ACEs when you haven't done your homework. nVidia is behind and that's a fact, whether you like it or not.
 
Apparently, my post went over your heads. History repeats itself, and having the feature set right now is probably not much of a selling point. [...]

The radeon 7970 does not support the full dx12 feature set, GCN 1.0 cards only support DX12 up to feature level 11_1. [...] AMD has hardware ACE, but that's the only ace in its pocket.

No modern video card supports the whole DX12 feature set, but AMD having hardware async compute is a pretty big advantage for DX12 and Vulkan, and you can see that in the games that utilize it.
 