AMD teases upcoming dual-GPU Hawaii graphics card

Cards are failing under prolonged mining... Well yeah, I mean, did you expect anything less?
Well, I was referring primarily to the 7950s dying from all causes (including switching on the system, or via normal use) - not only mining... and that's before counting the spate of reduced-BoM models from Gigabyte and Sapphire that couldn't maintain stock settings.
I'm only pointing out a fact, and either way Hawaii's CrossFire is more than stable, so unless there is some royal ****-up with CrossFire over PCIe on dual-GPU cards
The principle is the same for individual or dual cards with XDMA. The old CrossFire bridge signalling/transfer maxed out at ~0.88 GB/sec versus 31.5 GB/sec for PCIe 3.0 (at x16 electrical). Theoretically, latency should actually be reduced with shorter electrical connections.
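If anyone wants to sanity-check that 31.5 GB/sec figure, here's a quick back-of-envelope calc - my own numbers, purely illustrative:

```python
# Rough check of PCIe 3.0 x16 bandwidth versus the old CrossFire bridge.
lanes = 16
transfers_per_s = 8e9        # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line encoding overhead

per_direction_gb = lanes * transfers_per_s * encoding / 8 / 1e9
print(f"PCIe 3.0 x16: ~{per_direction_gb:.2f} GB/s per direction, "
      f"~{2 * per_direction_gb:.1f} GB/s bidirectional")

# The old CrossFire bridge topped out around 0.88 GB/s
print(f"Roughly {2 * per_direction_gb / 0.88:.0f}x the old bridge")
```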
...there should not be problems with drivers
An AMD board launch with no driver issues - now that would be a red-letter day for sure.
Who knows - once more comes to light we can all make our judgements, since the rumor and guesswork point to an AIO cooler.
Of that I'm well aware - since I'm the one who linked to the information a few days ago, back in post #19.
As for advance judgement, this card is going to have to defy history if it isn't to come off worse against two single-GPU cards - whether in price, noise/heat, driver support (including longevity), performance, or reliability. Dual cards never win out over two single cards without some severe compromises somewhere along the line.
Something tells me that if this card ships with an AIO attached, it will either be much more expensive than two single-GPU cards, or it will effectively be drip-fed into the channel... which brings us to
I'm more concerned about the power limiters of the cards and such than about driver support
Well, the leak mentioned that the 295X (?) will be clocked up to 1000MHz, so you're looking at close to full-spec CFX 290/290X depending upon core count. Crossfired 290Xs draw ~550W-600W under heavy gaming workloads.

Which is already a considerable (but manageable) strain to put an AIO under... not so manageable once overclocking comes into play, I would imagine.
 
Well, I was referring primarily to the 7950s dying from all causes (including switching on the system, or via normal use) - not only mining... and that's before counting the spate of reduced-BoM models from Gigabyte and Sapphire that couldn't maintain stock settings.

Oddly enough, I had heard about that, but I was mostly just talking about the other cards.

The principle is the same for individual or dual cards with XDMA. The old CrossFire bridge signalling/transfer maxed out at ~0.88 GB/sec versus 31.5 GB/sec for PCIe 3.0 (at x16 electrical). Theoretically, latency should actually be reduced with shorter electrical connections.

Well yes, the interface is superior to the CrossFire bridge by a significant margin. I was mostly just referring to proper support for the two GPUs making their connections out to a third 290X, or to another 295X (as I'm guessing it's dubbed). In other words, unless there's some royal problem with the drivers supporting the two together. But I highly doubt that - to me that was the worst case.

An AMD board launch with no driver issues - now that would be a red-letter day for sure.

That's debatable when it comes to launch problems, but there's no reason to get into that here. Nvidia has its share of launch problems as well, so neither company is perfect.

Of that I'm well aware - since I'm the one who linked to the information a few days ago, back in post #19.
As for advance judgement, this card is going to have to defy history if it isn't to come off worse against two single-GPU cards - whether in price, noise/heat, driver support (including longevity), performance, or reliability. Dual cards never win out over two single cards without some severe compromises somewhere along the line.
Something tells me that if this card ships with an AIO attached, it will either be much more expensive than two single-GPU cards, or it will effectively be drip-fed into the channel... which brings us to

Well yes, I'm well aware you posted it here first; I never intended to sound like I was taking credit or claiming to have brought that knowledge in, if that's how you took it (albeit the rumors have been flying around on other sites as well). But anyway, dual-GPU cards mostly fill the gap for people who want SLI/CFX support without needing a board with lots of PCIe slots. Most AMD and Intel boards do not support CFX/SLI beyond two or three cards. The price will depend on the alternative solutions and on the performance. If this truly is two 290Xs at their max clocks running under liquid, then it's going to be a nice solution for the enthusiast, because it should still be a fairly quiet one (speculation, mind you).

Well, the leak mentioned that the 295X (?) will be clocked up to 1000MHz, so you're looking at close to full-spec CFX 290/290X depending upon core count. Crossfired 290Xs draw ~550W-600W under heavy gaming workloads.

Which is already a considerable (but manageable) strain to put an AIO under... not so manageable once overclocking comes into play, I would imagine.

Well, I have three 290Xs and the whole system pulls around 1100W at max load. I'd have to look at the number in detail, but that was where it sat last time I checked. The AIO stress will depend on how good/big the cooler is. If the Asus ARES II is anything to compare to, it should handle things quite nicely. But I guess we will have to see, because for all we know it's just going to be a jet turbine in the center :p
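Just to put a very rough number on what that 1100W wall figure implies per card - the PSU efficiency and non-GPU draw below are pure guesses on my part, so treat it as a sketch:

```python
# Very rough split of an ~1100W wall draw across three 290Xs.
# PSU efficiency and non-GPU draw are assumptions, not measurements.
wall_draw_w    = 1100    # reported max load at the wall
psu_efficiency = 0.90    # assumed ~90% efficient PSU at this load
rest_of_rig_w  = 250     # assumed CPU, motherboard, drives, fans

dc_load_w  = wall_draw_w * psu_efficiency
per_card_w = (dc_load_w - rest_of_rig_w) / 3
print(f"~{per_card_w:.0f} W per 290X")   # lands around 250 W, i.e. roughly stock TDP
```

Loose numbers, obviously, but it shows why feeding two of those from one PCB and one AIO is a serious ask.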
 
Well yes, I'm well aware you posted it here first; I never intended to sound like I was taking credit
Just seemed weird that you'd relay information back to me that I'd already posted earlier in the thread, that's all.

Well, however the dual Hawaii turns out, I think it might have its work cut out topping the benchmarks if the GTX Titan-Z with two fully enabled GK110s arrives at the same time...


Of course, at the $3K rumoured price, I'm guessing Nvidia are no more interested in selling any quantity than AMD would be with the 295X
 
Just seemed weird that you'd relay information back to me that I'd already posted earlier in the thread, that's all.

Well, however the dual Hawaii turns out, I think it might have its work cut out topping the benchmarks if the GTX Titan-Z with two fully enabled GK110s arrives at the same time...
Of course, at the $3K rumoured price, I'm guessing Nvidia are no more interested in selling any quantity than AMD would be with the 295X

That price makes no sense... I'm calling Nvidia crazy if they even think a $3K price tag on that card is acceptable or reasonable. I'd buy three Titan Black cards before I would spend $3K on that.

There is no way that's the actual price of that card...
 
That price makes no sense...
Remember that many people don't read graphics reviews. By that I mean that they either skip to the summary page, or they concentrate on the graphs. Subliminally people note what brand of board has the best number regardless of what they themselves run, or what they are in the market for.
A review of a lower-mainstream $100 card would be of infinitely more practical interest (and drive more actual sales) than a $1K niche monster, but I'm pretty certain which review would have the larger number of views and review-thread replies.
I'm calling Nvidia crazy if they even think a $3K price tag on that card is acceptable or reasonable. I'd buy three Titan Black cards before I would spend $3K on that.
That's the whole idea. Nvidia need a PR win (over a dual Hawaii) more than they need sales of a card like this because, guaranteed, the GPUs will be binned for low VID, so they'll be using the same bin as the Tesla K40/Quadro K6000 parts (runtime precision excepted). Remember the HD 7990 was initially priced at 2.5 to 3 times the price of an HD 7970 for the same reason - along with low/non-existent initial availability.
 
Remember that many people don't read graphics reviews. By that I mean that they either skip to the summary page, or they concentrate on the graphs. Subliminally people note what brand of board has the best number regardless of what they themselves run, or what they are in the market for.
A review of a lower-mainstream $100 card would be of infinitely more practical interest (and drive more actual sales) than a $1K niche monster, but I'm pretty certain which review would have the larger number of views and review-thread replies.

That's the whole idea. Nvidia need a PR win (over a dual Hawaii) more than they need sales of a card like this because, guaranteed, the GPUs will be binned for low VID, so they'll be using the same bin as the Tesla K40/Quadro K6000 parts (runtime precision excepted). Remember the HD 7990 was initially priced at 2.5 to 3 times the price of an HD 7970 for the same reason - along with low/non-existent initial availability.

I'll give you that on the 7990; however, do keep in mind it was more powerful in most gaming and other benchmarks than the GTX 690, which was also at $1K. But either way, it was still a much better idea to run two (or, at one point, three) HD 7970s over it.

But that's the thing: even with this being a nicely binned chip, it's still crazy no matter how you look at it. It's going to get coverage and probably a tiny margin of sales, but it's still bringing the era of dual-GPU cards to an untimely end.

The last thing is the cooler: why are they sticking with the 690/590-style cooler, especially at this price? They could at least do something like this...
 
But that's the thing: even with this being a nicely binned chip, it's still crazy no matter how you look at it. It's going to get coverage and probably a tiny margin of sales, but it's still bringing the era of dual-GPU cards to an untimely end.
Not necessarily. Nvidia announced the card today at GTC 2014. The way it was announced and targeted was for computational workloads. From Anandtech's liveblog: "Promoting it to the GTC crowd as a supercomputer in a PCIe form factor (though no doubt you'd be able to game on it if you really wanted to)"
JHH basically used it to segue into the IRAY ray/path trace box using the same tech


The last thing is the cooler: why are they sticking with the 690/590-style cooler, especially at this price?
Like the Titan/Titan Black, the board is aimed at the professional market as well as gaming. Render farms, general CG use, and CAD would require an air-cooled plug-and-play card. No doubt EKWB will offer a full-cover waterblock, and EVGA will do a couple of HydroCopper SKUs for the watercooling crowd.
 
Not necessarily. Nvidia announced the card today at GTC 2014. The way it was announced and targeted was for computational workloads. From Anandtech's liveblog: "Promoting it to the GTC crowd as a supercomputer in a PCIe form factor (though no doubt you'd be able to game on it if you really wanted to)"
JHH basically used it to segue into the IRAY ray/path trace box using the same tech

Like the Titan/Titan Black, the board is aimed at the professional market as well as gaming. Render farms, general CG use, and CAD would require an air-cooled plug-and-play card. No doubt EKWB will offer a full-cover waterblock, and EVGA will do a couple of HydroCopper SKUs for the watercooling crowd.
Well yes, and the small margin in general, but its gaming use at that price will be low on the spectrum, since you could buy a 4-way 780 Ti Classified set with waterblocks for less than that one card and get better performance and clocking. So that area is dead with this card, unless you factor in small machines - and even then the price will still scare 99% of people away.

As for the CUDA devs, maybe (though that picture from the presentation is wrong unless I'm misreading something - shouldn't it be 6GB per GPU, since those are dual-GPU cards with a total of 12GB on the board?), but even then the better alternative would still be three Titan Blacks (I can't believe I'm saying that...).

Also, the cooler I was referencing was the Asus Poseidon 780 cooler, which has both a built-in air cooling system and water cooling, so you can choose either method. I was just saying that at this price, something besides the usual center-mounted fan would be nice, since obviously the margin they're making on this card is going to be ridiculously high.

I would have hoped they'd just have gone with a 780 Ti X2 card instead and charged $1,500 (still high, but I guess somewhat justifiable).
 
As for the CUDA devs, maybe (though that picture from the presentation is wrong unless I'm misreading something - shouldn't it be 6GB per GPU, since those are dual-GPU cards with a total of 12GB on the board?)
Teslas use 12GB per GPU, as does the Quadro K6000.
Nvidia has also just qualified 512MB (double the current density) GDDR5 chips, so any new 6GB-per-GPU card will likely still use 12 memory ICs - the standard number presently in use - and it opens the door for 24GB professional cards. No doubt that qualification, along with AMD allowing 8GB versions of the 290X, is the reason the 780/780 Ti will soon be available with a 6GB framebuffer.
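For what it's worth, the chip-count arithmetic looks like this (the 512MB density is from the paragraph above; the clamshell note is my own assumption, as a sketch):

```python
# Framebuffer size implied by 12 memory ICs per GPU at different chip densities.
ics_per_gpu = 12   # one IC per 32-bit channel on a 384-bit bus

for chip_mb in (256, 512):          # current density vs the newly qualified chips
    total_gb = ics_per_gpu * chip_mb / 1024
    print(f"{chip_mb} MB chips x {ics_per_gpu} ICs = {total_gb:.0f} GB per GPU")

# Fitting two ICs per channel (clamshell mode) doubles the IC count and the
# framebuffer again, which is how the bigger professional boards get there.
```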
And before you ramp up the "I told you so" posts :eek:, just remember that my original premise was based on the scenario that is playing out now:
Name one company that has a 6GB 780, let alone a 6GB 780 Ti? There isn't one. There won't be one unless AMD allow vendor 8GB 290Xs and Nvidia changes its stance. What the AIBs want is immaterial; they will abide by Nvidia's edict to protect the market.
Another example of marketing triumphing over common sense.
[Chart: Crysis 3 vRAM usage at 4K]
 
Teslas use 12GB per GPU, as does the Quadro K6000.
But I thought that picture was talking about the Titan Z (the dual-GPU Titan), which has 12GB total, or 6GB per GPU - or was I mistaken?
Nvidia has also just qualified 512MB (double the current density) GDDR5 chips, so any new 6GB-per-GPU card will likely still use 12 memory ICs - the standard number presently in use - and it opens the door for 24GB professional cards. No doubt that qualification, along with AMD allowing 8GB versions of the 290X, is the reason the 780/780 Ti will soon be available with a 6GB framebuffer.
And before you ramp up the "I told you so" posts :eek:, just remember that my original premise was based on the scenario that is playing out now:

Man, way to take the fun out of it - I had just seen that a little while ago.

Anyway, the gaming market is just better off having the smarter options of 6GB 780 Ti versions or an 8GB 290X (though even I think that's a bit extreme). Which of course is only going to hurt the Titan Black and Titan Z more. But again, as you said, it seems to be more of a publicity stunt.


...............Told you so :D
 
But I thought that picture was talking about the Titan Z (the dual-GPU Titan), which has 12GB total, or 6GB per GPU - or was I mistaken?
JHH intro'ed the Titan-Z and talked up the visualization opportunities that could be had with it (basically the same argument that was used for the K10 and GTX 690) - i.e. each framebuffer could be allocated to its own threads/workload, with non-SLI apps able to use the cumulative vRAM. He touted it as the low-cost alternative. Then the talk turned to professional CG/visualization with the IRAY GRID VCA at $50K, and then to supercomputing, so basically he started with the lowest cost (relatively speaking) and worked up through the pricing/capability segments.
 
JHH intro'ed the Titan-Z and talked up the visualization opportunities that could be had with it (basically the same argument that was used for the K10 and GTX 690) - i.e. each framebuffer could be allocated to its own threads/workload, with non-SLI apps able to use the cumulative vRAM. He touted it as the low-cost alternative. Then the talk turned to professional CG/visualization with the IRAY GRID VCA at $50K, and then to supercomputing, so basically he started with the lowest cost (relatively speaking) and worked up through the pricing/capability segments.
Oh, so he was strictly speaking about the fact that a single (non-SLI) application can make use of all 12GB on the dual-GPU Titan. Makes sense - I was inferring he meant it had 12GB per GPU, which, from that slide alone, could have been interpreted that way.
 