AMD Radeon R9 295X2 Review: A Dual-GPU Beast

I really want to see you do a review... it is difficult as heck to get something of this quality out in 48 hours... you have no right to complain to Steve if you haven't done a review yourself, or even in that amount of time. If you are unhappy with Steve's review or methodology, go to another site where you are satisfied.
I think we are going to need some ice water to cool off that burn!

I do agree and wish it could have been done in time for this review.
Here @Steve, if you want to do more tests, I'll trade you two of my 290X cards for that 295X2, or buy you a couple of 290s :p

You got all those tests out in 48 hours? I'm impressed. I bet the office was an oven after testing all those different cards and configs.

You know things I just don't, then; we will have to agree to disagree.

Also, again, it pays to have the right facts: the 6970s are clocked 50MHz (6%) higher and also feature memory clocked 125MHz higher, resulting in 10% more bandwidth per GPU.
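For anyone checking the arithmetic, that bandwidth figure falls straight out of the usual GDDR5 math. A minimal sketch, assuming the published 256-bit bus on both cards and reference memory clocks of 1375MHz (6970) and 1250MHz (6990):

```python
# Back-of-envelope GDDR5 bandwidth check, HD 6970 vs HD 6990 (per GPU).
# Assumes a 256-bit memory bus and GDDR5's 4x effective data rate.

def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits=256):
    """Peak memory bandwidth in GB/s."""
    effective_mhz = mem_clock_mhz * 4  # GDDR5 transfers 4 bits per pin per clock
    return effective_mhz * bus_width_bits / 8 / 1000

hd6970 = gddr5_bandwidth_gbs(1375)  # HD 6970 reference memory clock
hd6990 = gddr5_bandwidth_gbs(1250)  # HD 6990 default memory clock

print(f"HD 6970: {hd6970:.0f} GB/s")                          # 176 GB/s
print(f"HD 6990: {hd6990:.0f} GB/s per GPU")                  # 160 GB/s
print(f"6970 advantage: {(hd6970 / hd6990 - 1) * 100:.0f}%")  # 10%
```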

Time and time again it has been proven that sticking two GPUs on a single PCB does nothing to improve performance over two single cards if all other specifications are the same. If you think otherwise then you are the one who doesn't know what you are talking about.

The 6990 has dual BIOS options: in the default position it ran at 830MHz with a 1250MHz memory clock, while in BIOS 2 the core clock was bumped to 880MHz (stock 6970 speed) while still maintaining the 1250MHz memory clock. I bumped my memory clock to the 6970's and managed to match a pair of 6970s, so you're more than correct, Steve.
 
I enjoy most of the content that's provided on TechSpot, but the hardware reviews are always top notch. Good job, Steve. Keep it up.

Sucks that the card is $1,500, as that is a bit beyond what I'm usually willing to dedicate to graphics. It's good to see AMD back on top, though, especially with the frame time issues being worked out.
 
How will the new Nvidia beta drivers change the results of this comparison? If the gains in SLI performance are half of what they claim, then two 780s could outpace this monstrosity in certain situations.

I'm also pleasantly surprised at the frame time variance. Hopefully they can improve Crossfire that much for separate cards too. It is apparent that as a company they took the complaints about micro-stutter, heat and noise all very seriously, which I really appreciate. The price point is also exactly what I expected, even if it is a bit high. Great writeup as always. I think I've read every GPU review here since the 660 WindForce came out, and they are the best of any site I've found.
 
How will the new Nvidia beta drivers change the results of this comparison? If the gains in SLI performance are half of what they claim, then two 780s could outpace this monstrosity in certain situations.
The 335.70 beta driver isn't a magic bullet. If the game isn't CPU limited in the first place then the driver doesn't make much impact, unless the game wasn't previously driver optimized, of course.
As for a comparison, the R9 295X2 shades a GTX 780 Ti SLI setup at stock clocks overall.
ComputerBase pegged the advantage at ~6% (3840x2160, FXAA/4xAA), which turned into a 3% advantage for the 780 Tis when both setups were overclocked*. I wouldn't bother with the 8xAA results, since only two games support the setting and Crysis 3 ran at around single-digit frames per second.

The 2560x1600 results, as you can see from the link, yield much the same scenario: the 780 Tis and the 295X2 are basically equal (within the margin of error), but the Nvidia cards have better overclocking headroom (as would a couple of R9 290/290Xs, for that matter).

* Bear in mind that the 295X2 managed a 1080MHz OC clock. The 780 Tis (stock reference models) were overclocked fairly modestly, to 1006MHz.
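Put as percentages, the overclocking headroom from those figures looks like this. A rough sketch; the 1018MHz (295X2) and 928MHz (reference 780 Ti) stock boost clocks are my own assumptions, since the post above only quotes the overclocked results:

```python
# Relative overclock headroom from the clocks quoted above.
# Stock boost clocks are assumptions, not from the post itself.

def oc_headroom_pct(oc_mhz, stock_mhz):
    return (oc_mhz / stock_mhz - 1) * 100

print(f"R9 295X2:   +{oc_headroom_pct(1080, 1018):.1f}%")  # ~+6.1%
print(f"GTX 780 Ti: +{oc_headroom_pct(1006, 928):.1f}%")   # ~+8.4%
```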

Given the number of reviewers noting how warm the PSU cables and plugs get with the 295X2, it's probably safe to assume that a couple of single-GPU cards using four PCIe cables to deliver 500-600+ watts is preferable to trying to draw the same current via two cables. So for benchmarking (and this card is aimed at those who do), the take-home message is: stick to the single cards for OC potential and benching, and maybe look at this card if price/performance means less to you than novelty... unless you're planning a killer ITX/mATX build and absolutely need to maximize performance from a single x16 slot.
 
Obviously I haven't had time to test Eyefinity, but I can't imagine any possible reason why things would change here between the R9 290X CF cards and the R9 295X2. Not a single reason, certainly not one that makes any logical sense anyway.

http://www.anandtech.com/show/7930/the-amd-radeon-r9-295x2-review/9
11FPS difference in Battlefield 4 @ 3840 X 2160.
8FPS difference in Crysis Warhead @ 3840 X 2160.

Is that logical enough for you?
And for power?
A possible 41W difference in power draw.
Anandtech said:
Compared to the 290X CF “Uber”, the 295X2 delivers virtually identical performance while drawing 41W less at the wall. To be sure, a 41W savings is nothing to sneeze at, but it also does little to resolve Hawaii’s status as a power-hungry chip. When NVIDIA is delivering better performance yet for 110W less at the wall, there’s no getting around the fact that AMD really did need a 500W budget to bring a pair of Hawaii GPUs on to a single board in this manner.

I really want to see you do a review... it is difficult as heck to get something of this quality out in 48 hours... you have no right to complain to Steve if you haven't done a review yourself, or even in that amount of time. If you are unhappy with Steve's review or methodology, go to another site where you are satisfied.
Lol.
I had no issue until he said "there is no reason to add them because they are so similar", and then later mentioned he didn't have another. (If I didn't see that the first time, I apologize.) That's two different reasons/explanations.
Neither one justifies not having them, for multiple reasons: temps, power draw, overclock headroom and, of course, raw performance. I like all the stroking of everybody in this thread, quality likes! :makesjackoffmotionwithhands:
I don't know why I said what I did so immaturely in my first post. I could have stated my viewpoint with more class, and usually do. Maybe it was my crappy lunch yesterday?
Maybe I need Turkey & Bacon every day.
 
http://www.anandtech.com/show/7930/the-amd-radeon-r9-295x2-review/9
11FPS difference in Battlefield 4 @ 3840 X 2160.
8FPS difference in Crysis Warhead @ 3840 X 2160.

Is that logical enough for you?
And for power?
A possible 41W difference in power draw.

It's hard to argue/discuss things with you because you don't seem to understand the data you are looking at or the test conditions. The R9 295X2 has to be compared to the R9 290X CF cards in Uber mode; there you see the 1fps difference we talked about.

In normal mode the 290X CF cards clock down as the fan can't keep them cool enough, which obviously isn't an issue for the water-cooled 295X2. Uber mode spins the fan right up, so for the most part the 290X CF cards won't be clocking down and will therefore run at the same clock speeds. (Though this isn't so much of an issue for aftermarket cards like the Gigabyte WindForce 290X, as the cooler is much better than the AMD job.)
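A deliberately crude toy model of that throttling behaviour; this is nothing like AMD's actual PowerTune logic, and the wattage figures are invented purely for illustration:

```python
# Toy model: a card throttles when its cooler can't remove the heat it
# produces at full boost. Wattage figures below are made up for illustration.
BOOST_MHZ = 1000  # R9 290X boost clock

def sustained_clock_mhz(cooler_capacity_w, heat_at_boost_w):
    """Clock falls (crudely assumed linear in heat) until output matches cooling."""
    if cooler_capacity_w >= heat_at_boost_w:
        return BOOST_MHZ
    return BOOST_MHZ * cooler_capacity_w / heat_at_boost_w

print(sustained_clock_mhz(230, 290))  # quiet-mode fan: ~793MHz, throttled
print(sustained_clock_mhz(290, 290))  # Uber-mode fan: 1000MHz held
print(sustained_clock_mhz(600, 290))  # 295X2's water loop: 1000MHz held
```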

I really hope I don't have to explain once again that two GPUs on separate PCBs in Crossfire deliver the same performance as two GPUs on the same PCB, as long as the clock speeds are all the same.

If I do, can I suggest you keep that R9 290X money you were going to donate and use it for your own educational purposes: buy yourself a second-hand Radeon HD 6990 like the one we spoke of earlier and a pair of 6970s, clock them all to the same speed, and prepare to be enlightened.
 
I really hope I don't have to explain once again that two GPUs on separate PCBs in Crossfire deliver the same performance as two GPUs on the same PCB, as long as the clock speeds are all the same.

Whether it's an inch or a mile, whether they have the same clocks or not, they are different products with different labels, different coolers, different overclocking ability, different power draw, different temps and different prices, and your average enthusiast wants to see them compared.

http://www.anandtech.com/show/7930/the-amd-radeon-r9-295x2-review/9

11FPS difference in Battlefield 4 @ 3840 X 2160.
8FPS difference in Crysis Warhead @ 3840 X 2160.
Matching the clock speeds may not give the same results here.
A possible 41W difference in power draw is not the same.
Trying to talk to me like I'm a fool is not helping you; there is nothing about GPUs you understand that I don't.
 
Whether it's an inch or a mile, whether they have the same clocks or not, they are different products with different labels, different coolers, different overclocking ability, different power draw, different temps and different prices, and your average enthusiast wants to see them compared.

http://www.anandtech.com/show/7930/the-amd-radeon-r9-295x2-review/9

11FPS difference in Battlefield 4 @ 3840 X 2160.
8FPS difference in Crysis Warhead @ 3840 X 2160.
Matching the clock speeds may not give the same results here.
A possible 41W difference in power draw is not the same.
Trying to talk to me like I'm a fool is not helping you; there is nothing about GPUs you understand that I don't.

You set the tone in this thread, not me, and if that were even remotely true we wouldn't even be having this discussion. Still, I am not here to pointlessly argue over who knows more; it doesn't really matter.

I replied the way I did because you seemed to be under the impression that I was wrong about the R9 290X CF cards delivering the same performance as the R9 295X2. Anandtech showed that they were within 1fps of one another but you were telling me it's 11fps.

Yes, I know the 290X CF and 290 CF cards weren't included; we have covered that in depth. I have apologized that we weren't able to include them in time and even explained why.

I have also tried to explain that the 290X CF cards are virtually identical to the 295X2 in terms of performance.

Don't be offended if I ignore future posts in this thread; I believe the discussion has come full circle now.
 
Anandtech showed that they were within 1fps of one another but you were telling me it's 11fps... I have also tried to explain that the 290X CF cards are virtually identical to the 295X2 in terms of performance
I am not disputing that the performance is nearly identical.
I am arguing it's worth seeing the difference, simply because they are different products, no matter how similar they are, no matter what their specs are set at.

11FPS difference in Battlefield 4 @ 3840 X 2160.
8FPS difference in Crysis Warhead @ 3840 X 2160.
A possible 41W difference in power draw.

Is different.
Don't be offended if I ignore future posts in this thread; I believe the discussion has come full circle now.
Ya know, I don't even care anymore.
I know what I've seen over the years, and Anandtech tested what I wanted to see. I respect your viewpoint that their similarities/clock tweaks may not warrant a direct comparison, and I know you didn't have all the parts needed to test.
Still, from the 590 vs 580s to the 6990 vs 6970s and so on, there have been differences in the past, ranging across heat, power draw, overclockability, specs/design/chipset structure and yes, even performance.

Saying all dual-GPU variants have performed exactly the same in the past as their Crossfire/SLI counterparts in all or most tests, or that every time they are just the same exact two GPUs on one PCB, is blatantly false information.
For example:
Both the GeForce GTX 580 SLI and the GeForce GTX 590 have 1024 Shader Processing Units. The two GPUs are based on different architectures, but deliver an equivalent shader performance. To compare, we must continue to look at the memory bandwidth, Texture and Pixel Rates. In this case, the GeForce GTX 590 has 28.3 GTexel/s better Texture Fill Rate, 21.2 GPixel/s better Pixel Fill Rate, and 135.5 GB/sec greater memory bandwidth, so it should have significantly faster performance than the GTX 580s.
The GeForce GTX 580 SLI requires 488 Watts to run and the GeForce GTX 590 requires 365 Watts.
 
Arguing with a reviewer about how to review cards, when he has been doing this longer than you have, is beyond foolish. Whether it's two of the same GPU on one card or two separate PCBs, the performance remains the same clock for clock. Battlefield 4 does not have a built-in benchmark tool, so that all comes within margin of error, because recreating the EXACT scene for every card is very difficult and will cause slight variations in performance whether you want it to or not.

Even with a built-in benchmark, conditions can change slightly on the computer itself or in the program. This is just common sense, which is why the phrase "margin of error" fits testing nicely. A 1 FPS difference is well within the margin of error, and rightfully so. If you ran the benchmark 100 times for both cards at the same clocks (and the cards maintained said clocks 100% of the time) they would average out to be the same (or extremely close).
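A quick simulation of that point, with made-up FPS samples standing in for real benchmark runs, shows how a 1 FPS gap disappears into run-to-run noise:

```python
# Simulated benchmark passes for two setups with identical true performance.
import random
import statistics

random.seed(42)

def run_benchmark(true_fps, runs=100, noise_fps=2.0):
    """Repeated passes scattered around the same underlying performance."""
    return [random.gauss(true_fps, noise_fps) for _ in range(runs)]

card_a = run_benchmark(60.0)  # e.g. R9 290X CF in Uber mode
card_b = run_benchmark(60.0)  # e.g. R9 295X2 at the same clocks

for name, runs in (("290X CF", card_a), ("295X2  ", card_b)):
    mean = statistics.mean(runs)
    spread = 2 * statistics.stdev(runs)
    print(f"{name}: {mean:.1f} FPS average, single-run spread of about ±{spread:.1f} FPS")
# A 1 FPS gap between individual runs sits well inside that spread.
```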
 
Arguing with a reviewer...
You telling someone not to argue is like Godzilla telling someone not to destroy buildings; you're a brown-nosing hypocritical runt who argues all the time.
You've been humbled and it's noticeable because you don't run your mouth like you used to.
about how to review cards, when he has been doing this longer than you have, is beyond foolish
I am not intimidated by any reviewer and no one is exempt from critique; everyone makes mistakes. I called out a couple of sites on their performance claims when certain CrossfireX setups had major stuttering/frame time issues but the reviewer didn't include them. When they tested it, they found the peak performance numbers were not indicative of how choppy some setups could run, which brought to light other driver and CrossfireX issues with multiple setups, and this was true for an entire quarter of the year until AMD addressed it and admitted to it.
Most reviews now include frame latency testing.

In this article, the reviewer stated that all previous dual-GPU variants are so close to their single-card counterparts that they are not worth comparing. That's not true: a 590 was much more power efficient than 580s and also quieter.
http://www.anandtech.com/show/4239/nvidias-geforce-gtx-590-duking-it-out-for-the-single-card-king/16
If I am wrong about the architecture and performance being so close when each setup has its clocks matched, so be it. People still want to see the results, ALL OF THEM. I'll take being wrong about one thing and accurate about the rest.

I won't be back to check on this, so type whatever you want.
 
The GTX 590 vs GTX 580 SLI comparison probably isn't the best one. The 590 has a very aggressive power limitation (BIOS and hardware) which throttles the card half to death in a lot of scenarios, thanks to the Walmart-spec voltage regulation Nvidia saddled the card with. The 590 also runs its core and hot clock 21% slower (608/1215 vs 772/1544 for the 580s), and its memory 15% slower (3414 effective versus 4008). Having had 580s...

...I don't think I'd consider the 590 a direct analogue of the two single cards. They are similar only from the common point of a fully enabled die, but clocks and BIOS render them cousins at best. The HD 4870 and 4850 are exactly the same in that respect, for instance, but they are distinct in performance based on clock differences.
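Those percentages fall straight out of the reference clocks quoted above; a trivial check:

```python
# Checking the quoted clock deltas (GTX 590 vs GTX 580 reference clocks).
def slower_by_pct(dual_card_mhz, single_card_mhz):
    return (1 - dual_card_mhz / single_card_mhz) * 100

print(f"Core:   {slower_by_pct(608, 772):.0f}% slower")    # ~21%
print(f"Shader: {slower_by_pct(1215, 1544):.0f}% slower")  # ~21%
print(f"Memory: {slower_by_pct(3414, 4008):.0f}% slower")  # ~15%
```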
 
I am not disputing that the performance is nearly identical.

I am arguing it's worth seeing the difference, simply because they are different products, no matter how similar they are, no matter what their specs are set at.

11FPS difference in Battlefield 4 @ 3840 X 2160.

8FPS difference in Crysis Warhead @ 3840 X 2160.

A possible 41W difference in power draw.

Is different.

The R9 295X2 has to be compared to the R9 290X CF cards in Uber mode; there you see the 1fps difference we talked about.

In normal mode the 290X CF cards clock down as the fan can't keep them cool enough, which obviously isn't an issue for the water-cooled 295X2. Uber mode spins the fan right up, so for the most part the 290X CF cards won't be clocking down and will therefore run at the same clock speeds.

Still, from the 590 vs 580s to the 6990 vs 6970s and so on, there have been differences in the past, ranging across heat, power draw, overclockability, specs/design/chipset structure and yes, even performance.

Saying all dual-GPU variants have performed exactly the same in the past as their Crossfire/SLI counterparts in all or most tests, or that every time they are just the same exact two GPUs on one PCB, is blatantly false information.

For example:

I really hope I don't have to explain once again that two GPUs on separate PCBs in Crossfire deliver the same performance as two GPUs on the same PCB, as long as the clock speeds are all the same.

Finally, after re-quoting myself to cover those points 'again', I am confused yet again. Why are you talking about the GeForce GTX 590 now? First it was the Radeon HD 6990 and now the GTX 590; at least the 6990 made a little sense.

It has already been said that the GTX 590 is based on a slightly updated version of the GF110 architecture. Not only that, but the core and shader clocks have been heavily reduced, while the memory is also underclocked. Running 21% slower with 15% less bandwidth per GPU, the fact that it uses less power shouldn't be a shocker.

Again, with an update: two GPUs on separate PCBs in Crossfire/SLI deliver the same performance as two GPUs on the same PCB, as long as the clock speeds and architectures are all the same.

In this entire thread the only time you have been correct is in your observation that we didn’t include the 290 CF cards in the review. I am sorry about that and we are updating the results to include them in the future.

I am not intimidated by any reviewer and no one is exempt from critique; everyone makes mistakes. I called out a couple of sites on their performance claims when certain CrossfireX setups had major stuttering/frame time issues but the reviewer didn't include them. When they tested it, they found the peak performance numbers were not indicative of how choppy some setups could run, which brought to light other driver and CrossfireX issues with multiple setups, and this was true for an entire quarter of the year until AMD addressed it and admitted to it.

Most reviews now include frame latency testing.

So it was you who worked out what was going on with the Crossfire setups and how to test for the stuttering. We owe you a debt of gratitude, and the biggest issue with our review should be the Testing Methodology page, where we don't credit you.
 
Just a quick note...
Taking another site's comparison, and looking at the best scenario for a GPU-bound workload (4K):

The average difference in performance between the 295X2 and CF'd 290Xs is 1.3% in favour of the dual card, which I'm pretty certain falls under the heading of "margin of error". The 295X2 has a 1.8% advantage in core clock, which would effectively nullify that 1.3% gain.
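A quick sanity check on that nullification claim, assuming performance scales roughly linearly with core clock at these GPU-bound settings (the 1.8% figure corresponds to the cards' 1018MHz vs 1000MHz reference boost clocks):

```python
# Clock-normalizing the 295X2's average lead over 290X CF at 4K.
# Assumes roughly linear scaling with core clock when GPU-bound.

perf_lead = 1.013   # 295X2 averaged 1.3% faster than 290X CF
clock_lead = 1.018  # 1018MHz vs 1000MHz boost, a 1.8% clock advantage

clock_for_clock = perf_lead / clock_lead - 1
print(f"Clock-for-clock difference: {clock_for_clock * 100:+.1f}%")  # -0.5%, a wash
```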
 
The point is, they are both running Hawaii XT GPUs and are, clock speeds aside, generally equivalent. Putting two GPUs on the same PCB does not mean the card will run better or worse than the equivalent two single cards compared with one another.

All that has to be done is match the clock speeds on both memory and core, and magically you see an almost exact match in performance.

If you clocked a 590 to the speeds of a 580, it matches...

You telling someone not to argue is like Godzilla telling someone not to destroy buildings; you're a brown-nosing hypocritical runt who argues all the time.
You've been humbled and it's noticeable because you don't run your mouth like you used to.
Pot calling the kettle black?
You're arguing a common fact of video cards (a fact you should know, since apparently you have owned every video card under the sun *rolls eyes*) that every person and their dog understands, with people who have reviewed more cards than you ever have. The cards don't magically lose or gain performance just because they are on the same PCB; it's the same CHIP.

I am not intimidated by any reviewer and no one is exempt from critique; everyone makes mistakes.
Yet there was no mistake here, other than the fact that you were completely and utterly wrong, arguing against a point that has been proven time and time again. Take two cards, get the dual-GPU variant, compare them with matched clocks, voila. Arguing this point so hard while using BF4 as an example, when that game has no built-in benchmark tool, is laughable. Results will vary in a game like this because recreating the exact scenario is nigh impossible, which causes problems in recording results for the game.
 
Hey @Steve, do you think the new Catalyst 14.4 RC driver released today would alter the results? It brings official support for the GPU, so I was wondering if you think it could alter the results.
 
Hey @Steve, do you think the new Catalyst 14.4 RC driver released today would alter the results? It brings official support for the GPU, so I was wondering if you think it could alter the results.

It scales the same as two R9 290X cards, so I doubt it.
 
Seems PowerColor have decided to ditch the AIO for their latest take on the Devil 13 concept for the 295X2. It also looks as though they aren't keen on the current overload on the reference card, so they are outfitting theirs with four PCIe 8-pin plugs.
[Pictured: the PowerColor Radeon R9 295X2 Devil 13 and its four 8-pin power connectors]

More pics at Hardware.info
 