Gigabyte's GTX 980 tri-SLI Waterforce cooler launches for $3,000

Scorpus


Gigabyte's crazy closed-loop water cooling solution for three high-end graphics cards, Waterforce, is now on sale with a whopping $3,000 price tag. The mammoth liquid cooling unit that sits atop your PC case does come with three Nvidia GeForce GTX 980s, but the price tag is still hard to swallow.

Each GTX 980, a card that regularly retails for around $560 in a traditional air-cooled configuration, is factory overclocked in the Waterforce package to a base clock of 1,228 MHz and a boost clock of 1,329 MHz. The cards all come with 4 GB of GDDR5 memory clocked at 7,010 MHz (effective) on a 256-bit bus.
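For context, those memory figures work out to roughly the following per-card bandwidth; this is a back-of-the-envelope illustration based on the quoted numbers, not a published Gigabyte specification:

```python
# Rough per-card memory bandwidth from the quoted specs (illustrative only).
effective_clock_mtps = 7010          # GDDR5 effective data rate ("7,010 MHz"), in MT/s
bus_width_bits = 256                 # memory interface width

bytes_per_transfer = bus_width_bits / 8
bandwidth_gb_s = effective_clock_mtps * 1e6 * bytes_per_transfer / 1e9

print(f"~{bandwidth_gb_s:.0f} GB/s per card")   # ~224 GB/s
```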


The setup for Waterforce is fairly inelegant: you have to feed the power cable from the cooling box to your PSU through a front 5.25" drive bay, and then feed one radiator for each card through the drive bay to the cooling box. The end result is a bunch of tubes running from the cooler atop your case, through the front panel and then to each graphics card.

Waterforce promises to deliver excellent cooling performance and quiet operation, which is what you'd hope for if you're spending $3,000. The front display on the cooler allows you to monitor GPU temperatures and adjust fan and pump speeds on the fly.
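If you'd rather cross-check the display's readings from the desktop as well, a minimal sketch like the one below will do it, assuming an Nvidia driver with the standard nvidia-smi utility on the PATH; the five-second polling interval is arbitrary:

```python
import subprocess
import time

# Poll each GPU's temperature and reported fan speed via nvidia-smi,
# which ships with Nvidia's driver. One line of output per GPU.
while True:
    readings = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,fan.speed",
         "--format=csv,noheader"],
        text=True,
    )
    print(readings.strip())
    time.sleep(5)  # arbitrary polling interval
```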

While the huge price tag will limit this sort of setup to deep-pocketed enthusiasts, it's worth remembering that building a tri-SLI water-cooled rig yourself is also complex and expensive: water blocks can cost upwards of $100 per GPU, and you'd still need to purchase radiators, fans, a pump, a reservoir, and tubing.
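To make that comparison concrete, here's a rough tally for a DIY triple-card loop; every figure below is an assumed ballpark rather than a quoted price:

```python
# Ballpark DIY cost for a water-cooled triple GTX 980 setup (all figures are rough assumptions).
cards        = 3 * 560   # three air-cooled GTX 980s at ~$560 each
water_blocks = 3 * 100   # full-cover GPU blocks at ~$100+ apiece
radiators    = 150       # radiators and fans
pump_res     = 120       # pump and reservoir
plumbing     = 80        # tubing, fittings, coolant

total = cards + water_blocks + radiators + pump_res + plumbing
print(f"Approximate DIY total: ${total:,}")   # roughly $2,330, versus $3,000 for Waterforce
```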



 
Those exposed tubes are a deal-breaker for me. If they break during transportation, which is certainly a possibility, then you have to spend a ton of money just to get them fixed. Not to mention one or more of your GPUs are useless during that time.
 
Lol no...I was just saying I'd love to blow my money on something like that too ;)
If those things were going for 3k, I'd buy a few!! :D
 
3K, are they kidding with that price??? I mean, a standard 980 is about 550-700 depending on the variant, and even picking a nice variant with some liquid blocks and building a LC setup, I could do it all for less than 3K.

Sure, I guess the argument is that this comes pre-assembled with 3 separate coolers and loops on top of the fancy display, but between the questionable practicality, the fact that it's not exactly cheap, and the very high-end system needed to drive it right, you're looking at a weird area this thing sits in.
 
3K, are they kidding with that price??? I mean, a standard 980 is about 550-700 depending on the variant, and even picking a nice variant with some liquid blocks and building a LC setup, I could do it all for less than 3K.

Sure, I guess the argument is that this comes pre-assembled with 3 separate coolers and loops on top of the fancy display, but between the questionable practicality, the fact that it's not exactly cheap, and the very high-end system needed to drive it right, you're looking at a weird area this thing sits in.

This is designed for enthusiasts with money to blow. Trying to make a rational case for such an upgrade is like trying to fit a square peg into a triangular hole - not gonna happen. The one guy who mentioned practicality during the design phase probably gave the rest of the team a good chuckle.
 
I'd like to get a new car; Tesla Model S perhaps?
I don't take much of an interest in cars these days and all I know about the Tesla is what I've read on this site but I suppose if you could get a new one for the same price as this setup I think you would be unwise not to do so. $3K in my country would net you a half decent 2nd hand run around depending on where you shop.
 
This is designed for enthusiasts with money to blow. Trying to make a rational case for such an upgrade is like trying to fit a square peg into a triangular hole - not gonna happen. The one guy who mentioned practicality during the design phase probably gave the rest of the team a good chuckle.
Even so, this just seems like a weird attempt at making something seem easy that in the end makes things more of a pain. I mean, the price is above a custom-made loop, which in this day and age is pretty easy to do and could be built for better performance (or heck, include your CPU in it), while on the other side, buying 3 of the NZXT or equivalent hybrid coolers would be cheaper and work just as well (albeit I understand these are full waterblocks, I doubt the performance would be much different at stock or when overclocking).

You're right, this is intended for enthusiasts with deep pockets who want a LC system with 3 GPUs without having to do much themselves. But in the end I feel like the idea becomes overshadowed when you look at how much of a pain routing everything through the front is, not to mention it throws the looks away with tubes coming out the front like that. Just seems to me like they should have thought a bit more about the overall idea, but that's just my opinion.
 
Never been a big fan of external watercooling setups, and this setup looks like a more ungainly Koolance Exos 2.5 without the ability to customise/upgrade.

Bearing in mind that a lot of enthusiast outlets now pre-fit waterblocks to cards to order for those unwilling to do so themselves, this Gigabyte system looks like a huge cash premium as a trade off for cutting and attaching some tubing.
 
Those exposed tubes are a deal-breaker for me. If they break during transportation, which is certainly a possibility, then you have to spend a ton of money just to get them fixed. Not to mention one or more of your GPUs are useless during that time.

Are you seriously going to carry around a full tower with that thing on top to a LAN party?

That would just scream I really want to show this thing off and am willing to go through an iron man like course to do so.
 
See... here's the thing: The cards only have 4GB of memory on them. If memory "stacked" in SLI, or for that matter Crossfire if using ATI cards, then you might have something viable for 4K monitors of 32" or larger with widescreens which is what we are all looking for. But... no memory stacking, so this doesn't really help. Plus, no motherboard or processor I know of will run the three cards at PCIE 16... they will go to PCIE 8 the minute you put three on the board and then you take a performance hit. Also, a lot of other peripherals now share the PCI bus, so you will limit your options there too.

If they somehow make it technically possible to use the memory on ALL the cards at the same time as well as the GPUs and have a motherboard and processor (that is where the PCIE bus lanes are allowed) that can use the bus at PCIE 16 for fast data transfer, then ok... you got something. Otherwise three cards are just for bragging rights, and at this price bragging is pretty... unproductive.
 
See... here's the thing: The cards only have 4GB of memory on them.
A 4GB framebuffer is still adequate in the majority of gaming scenarios. You would need to seriously cherry pick benchmarks to show otherwise.
If memory "stacked" in SLI, or for that matter Crossfire if using ATI cards, then you might have something viable for 4K monitors of 32" or larger with widescreens which is what we are all looking for
Stacked memory is HBM/HMC. I believe the term you're after is "shared (or unified) memory pooling".
Plus, no motherboard or processor I know of will run the three cards at PCIE 16... they will go to PCIE 8 the minute you put three on the board and then you take a performance hit.
This is both incorrect (quite a number of PEX8747-equipped boards offer 3-way, and 4-way, PCI-E 3.0 x16 functionality) and irrelevant. The system bus isn't constrained by eight lanes electrical, nor four lanes for that matter.
[Chart: relative GPU performance at 3840x2160]
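To put rough numbers on that point, here's a quick comparison of PCI-E link bandwidth against a GTX 980's local memory bandwidth; the per-lane figure is the standard PCI-E 3.0 rate and the rest is purely illustrative:

```python
# Rough bandwidth comparison: PCI-E 3.0 link vs. on-card memory (illustrative).
per_lane_gb_s = 0.985            # ~985 MB/s per PCI-E 3.0 lane, each direction
x16_gb_s = 16 * per_lane_gb_s    # ~15.8 GB/s
x8_gb_s = 8 * per_lane_gb_s      # ~7.9 GB/s
local_vram_gb_s = 224            # GTX 980 GPU <-> GDDR5 bandwidth

print(f"x16: ~{x16_gb_s:.1f} GB/s | x8: ~{x8_gb_s:.1f} GB/s | local VRAM: ~{local_vram_gb_s} GB/s")
# Per-frame traffic overwhelmingly stays on the card, which is why dropping
# from sixteen to eight lanes rarely shows up as a measurable performance hit.
```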

Also, a lot of other peripherals now share the PCI bus, so you will limit your options there too.
See above
If they somehow make it technically possible to use the memory on ALL the cards at the same time
IIRC, Tiling and SFR (Split frame rendering) can use a unified memory pool - they just aren't efficient...and in the case of tiling, can't be used in conjunction with OpenGL.
Otherwise three cards are just for bragging rights,
This has pretty much always been the case, although it is demonstrably true that in the past, triple CFX/SLI alleviated microstutter considerably in the absence of effective frame pacing technology.
 
Whenever I see a post from dividebyzero, I know I'll be seeing some serious facts :D
I say Techspot promotes this guy :p
 
If I was thinking of dropping this much money, I think I'd just go ahead and do my build-a-PC-in-a-fridge idea. I know a guy who did this and it was pretty cool. Also, it's 10 years old now and still going strong.
 
As DivideByZero points out, with today's gaming a 4GB buffer is still adequate for monitors like 1920X1200 or its widescreen equivalent whatever you may choose (25XX by 14XX or whatever). I was referring to the trend towards 4K monitors and what that would require.

I still say present hardware and even three decent vid cards, though better than any one card can do, will not be adequate for 4K monitors at speeds we have come to believe are necessary for smooth rendering. It might take single cards with as much as 12GB and possibly water cooling, though I think 2 cards with "unified memory" (thanks for that correction, my bad!) will be the most used solution, because the GPU power is also necessary and can be had without the need for water cooling - though we still might go there (multiple 8GB 295X's anyone?).

I am not sure that the present PCIE bus is suitable with 3 or more cards, but with new motherboard designs that fully utilize it with multiple cards it still may be possible if you can cool the whole package reliably (think about case designs and the expense of 3 or 4 cards too, especially if water cooled!). I'm thinking it will require new hardware possibly including new bus designs to handle the rendering power necessary for future 4K monitor gaming. Perhaps simply fully utilizing what is here will work, but I'm not sure about that.

It is irrelevant to state that tiling and so forth can be made to use the memory in multiple cards (at what speeds?) if games, as written, using OpenGL, cannot, since that is what we have and probably will have in the foreseeable future.

Certainly I know that a screen of any size using the same number of pixels requires the same rendering power. I was merely stating that widescreen (or actually ultra-wide) single large monitors of 4K or even higher pixel density are what most of us want for an immersive experience with no bezel in the field of view, and today's hardware is not sufficient for that task at the speeds we need and expect to have. The monitors ARE coming... new hardware to run them adequately, hopefully at native speeds of at least 60hz and fast refresh rates (120hz would be better), will have to be forthcoming also. No matter how we do it, we are not there yet. That is what I am trying to say.
 
As DivideByZero points out, with today's gaming a 4GB buffer is still adequate for monitors like 1920X1200 or its widescreen equivalent whatever you may choose (25XX by 14XX or whatever). I was referring to the trend towards 4K monitors and what that would require.
What you're arguing for is a pipe dream. Never has an incoming gaming screen resolution been absolutely playable with a single card in all situations since multi-GPU has been in existence - for the simple reason that if a single card could provide the rendering capability it would destroy the market for multi GPU (including dual GPU cards) graphics. When 1280x1024 debuted the only setup that could provide the necessary fillrate was 3dfx's Voodoo Graphics in SLI - and even that came at the expense of 16-bit colour.
Nothing has changed in the intervening years.
I still say present hardware and even three decent vid cards, though better than any one card can do, will not be adequate for 4K monitors at speeds we have come to believe are necessary for smooth rendering. It might take single cards with as much as 12GB and possibly water cooling, though I think 2 cards with "unified memory" (thanks for that correction, my bad!) will be the most used solution, because the GPU power is also necessary and can be had without the need for water cooling - though we still might go there (multiple 8GB 295X's anyone?)
The 295X2 uses 4GB per GPU. The 8GB advertised memory is not additive, so I'm unsure why you'd make an exception for it. FWIW, even if it were, it couldn't make use of the additional framebuffer in the vast majority of cases because the GPU's ROP count and memory interface remain static. Note that the 8GB 290X only pulls away from the 4GB version when swamping the framebuffer with texture packs - the resulting difference isn't due to computing power but to the ability to hold the textures in the memory buffers. This isn't the de facto situation for PC games.
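A quick back-of-the-envelope calculation shows why the render targets themselves don't come close to filling a 4GB card; the buffer counts below are assumptions for a typical configuration, not measurements from any particular game:

```python
# Rough size of 4K render targets versus a 4GB framebuffer (illustrative assumptions).
width, height = 3840, 2160
bytes_per_pixel = 4              # 32-bit colour (or depth/stencil)

colour_buffers = 3               # assume triple buffering
depth_buffers = 1                # one depth/stencil target

target_bytes = (colour_buffers + depth_buffers) * width * height * bytes_per_pixel
print(f"~{target_bytes / 2**20:.0f} MiB of render targets")   # ~127 MiB out of 4096 MiB

# Even with extra post-process targets this is a small slice of 4GB; it's textures
# (especially high-resolution texture packs) that actually swamp the framebuffer.
```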
I am not sure that the present PCIE bus is suitable with 3 or more cards
It's fine, mainly because most game code utilizes internal bandwidth ( GPU <-> vRAM) greatly over external (PCI-E) bandwidth. Case in point:

I'm thinking it will require new hardware possibly including new bus designs to handle the rendering power necessary for future 4K monitor gaming.
Very unlikely. If anything, the reverse will be true. Indications are that both Nvidia and AMD will move to incorporate ARM architecture CPUs into graphics packages.
It is irrelevant to state that tiling and so forth can be made to use the memory in multiple cards (at what speeds?) if games, as written, using OpenGL, cannot, since that is what we have and probably will have in the foreseeable future.
I mentioned it solely in the interest of completeness. Tiling has greater issues IIRC - namely needing to overdraw geometry for each adjacent tile for each tile it renders - basically a doubling up of workload.
The monitors ARE coming... new hardware to run them adequately, hopefully at native speeds of at least 60hz and fast refresh rates (120hz would be better), will have to be forthcoming also. No matter how we do it, we are not there yet. That is what I am trying to say.
...and we never will be. By the time a single GPU can render 4K in every image quality scenario, Nvidia and AMD will have moved the goalposts again. Both game development programs will add more post-process compute based rasterization techniques as well as path tracing (if not ray tracing) such as voxel based global illumination...and by the time a single GPU can handle that, 4K will be the equivalent of 1080p and we'll be looking at how much GPU power will be required for 8K gaming.

Basically, if you're waiting for some idealized scenario where 100% of gaming is playable at a new standard of screen resolution, you'll need to wait 2-3 graphics generations...by then of course, that "new standard of screen res" is yesterday's news.
 
Hi again DivideByZero... I was using the 295X as another example just like the 980X3 being written about here. I trust you are more up to the minute about graphics rendering than I, so your analysis of the PCIE bus and its viability for future gaming I accept as being better than my own.

The whole "post processing" thing is new to me since I am now some years retired from fixing computers for various manufacturers, though I try to keep up. I was sort of surprised to see an option for it in the Dragon Age Inquisition graphics settings, let alone know what adjusting it would mean for me. I never claim to be anywhere near an "expert" on the software side of things as that is not where I worked, though I appreciate its importance for moving things forward and have "dabbled" in programming. I do understand that the hardware necessary for 4K gaming is not going to be cheap at this time. The problem is that right now it would take at least $5,000-$7,000 to do it right (or at least as "right" as is possible at this time) if you include the monitor. I do think that 4K will become common and the hardware to run it "mainstream" and less expensive in a very short time though, because the demand is there and will be filled.

I certainly agree with other things you say as well, including the part about "moving the goalposts" as time goes on. As someone who started out on vintage early 1980s "orange text" monitors and a Franklin Ace 1000 (I had to hunt through offerings at "computer shows" to get aftermarket hardware to improve the poor thing!), I look back on the various milestones like the first 1MB hard drive, to minicomputers that ran whole transportation systems and had a full 1MB of memory (oh the humanity!), and through the advancement of CPUs, GPUs and memory, and can only imagine what even 10 more years will bring. It's been an exciting ride so far... I hope it continues and that I am around to see it.
 