Rumor: Nvidia GeForce GTX 680 to arrive in February

Noticed you had the Flex cards - they are second revision reference. You should have a reasonably wide choice - just make sure the block is for a v2 reference PCB:

Swiftech Komodo HD6900-2
Aquacomputer Type 2
EKWB v2 and ...and in white
Alphacool v2
Danger Den v2

Koolance probably make a compatible block also, but their QA/QC can be more than a little iffy with some products. AquaComputer and Swiftech might be your best bet. Never had a problem with EK's, but they have had issues with flaking internal Nickel plating. I'd assume that the affected parts have been recalled- but assume takes on some importance when you're talking about expensive hardware.

A user review at XS

Well that's just it though. I had thought you had told me at one point that since they all unlocked (actually came as 6970/1536 on the #1 BIOS), they were rev 1.0. Am I misremembering this? I can't find them listed outright on a compatibility list, so I am relegated to matching pics of naked PCBs and the location of the VRM modules....it's not going well ROFL.
 
I think the unlocking is vendor dependent (XFX) rather than a quirk of board revision - likewise some 1GB HD 6950's not unlocking (MSI's TFII ?)
Here's the PCB of a revision 2 board- which just happens to be a Sapphire HD 6970 Flex.
From the accompanying review:
If you're running the 6950 Flex at 6970 settings it still comes down to the same PCB

Well that's just it though. I had thought you had told me at one point that since they all unlocked (actually came as 6970/1536 on the #1 BIOS), they were rev 1.0. Am I misremembering this?

https://www.techspot.com/vb/topic163560.html (post #4, #6) is about as much as I remember regarding non-ref unlocking (or otherwise). I reference second revision non-reference - thinking along the lines of the Twin Frozr II and XFX's centre fan model

...and back on topic...
ah, Ctrl+F couldn't find a $ sign :(
I wouldn't worry about it. I'm fairly certain the whole story is bogus. A 2GB frame buffer means either a 256-bit bus (which won't be competitive with the 7970, so why worry about rushing it into retail), or 512-bit - which seems extremely unlikely given the time frame. It's unlikely that Nvidia would re-jig a Fermi GPU for both an increased bus width (up from 384-bit) and a process shrink to 28nm in such short order
 
According to professional Nvidia hater Charlie over at semiarticulate, the 256-bit/2GB story is ...accurate:

The short story is that Nvidia will win this round on just about every metric, some more than others. Look for late March or early April availability, with volumes being the major concern at first. GK104 cards seen by SemiAccurate all look very polished and complete, far from rough prototypes or “Puppies”. A2 silicon is now back from TSMC, and that will likely be the production stepping barring any last second hitches. Currently, there aren’t any.

For the doubters, both of the current cards we saw have two DVI plugs on the bottom, one HDMI, and one DP with a half slot cooling grate on one of the two slot end plates. The chip is quite small, and has 8 GDDR5 chips meaning a 256-bit bus/2GB standard, and several other features we can’t talk about yet due to differences between the cards seen. These items won’t change the outcome though, Nvidia wins, handily
So either (1) a very pre-emptive April Fool's joke, (2) a blatant attempt to garner page views by placing a bogus story that's sure to be referenced all over the web, or (3) Nvidia pulled off a modern technological miracle (Nvidia GTX 560 successor beats AMD's Tahiti!)
No actual information in the article, so I'm calling it (2) with a side-order of (1).

[no sauce]
 
"(3) Nvidia pulled off a modern technological miracle."

The only delusional ones are people who believe the 20-25% performance advantage that the HD7970 has over the GTX580 is enough to compete with high-end 28nm Kepler. Never in NV's history has their next-generation card been only 20-25% faster than the previous generation's.

NV could have simply shrunk the GTX580, increased clocks, and gained more than a 25% performance increase. That's not even taking into consideration any architectural improvements that Kepler might bring.

HD7970 is going to be AMD's X1800XT. They'll release a 20-30% faster-clocked version or add more SPs. There is no chance that AMD will be able to sell a $549 HD7970 that's only 20% faster than the GTX580, because Kepler will blow that performance advantage away. Of course AMD probably knew this and decided to launch cards at 925MHz while yields on 28nm aren't as great as they will be once the process matures and they are ready to launch an HD7980 at 1150MHz+.
 
You seemed to have missed the point.
The rumour isn't about the GTX 580's successor (GK110) - which I don't think is close to imminent release - it's about the GTX 560's successor, GK104

Why would Nvidia go from 384-bit to 256-bit for their top GPU ? Simple answer is they wouldn't....unless Nvidia have made a fundamental leap in GPU design that mitigates the reduced bandwidth.

The rumour is that GK104 - a supposed 256-bit/2GB vRAM/780MHz (or possibly 900MHz) second-tier GPU - is supposedly going to take the HD 7970 or 7950 out to the woodshed...that's in the order of an 80% performance increase over the GTX 560Ti using a lower/equal core clock, the same bus width and a 30% increase in power. Unless it's a die-shrunk dual-GPU 560, I really don't see it happening with those specifications.
 
Ya, what about it. I didn't miss the point at all. NV can easily release a card for $399 with 20% more performance than a GTX580. That would make HD7970 at $549 insanely overpriced:

At stock speeds the HD7970 is barely faster than a GTX580.

http://www.computerbase.de/artikel/grafikkarten/2011/test-amd-radeon-hd-7970/10/#abschnitt_leistung_mit_aaaf

You keep focusing on the 2GB VRAM limitation when for 99.9% of people that's plenty. The HD7970 can't take advantage of >2GB of RAM since it's not fast enough in those situations in the first place. The few games that use a lot of VRAM, like Shogun 2, destroy the HD7970 at 2560x1600 with AA.

You are also assuming that 256-bit memory interface is a problem. You aren't considering that NV can simply increase TMUs, SPs and ROPs and match the bandwidth of the GTX580 with faster GDDR5 chips on 256-bit interface. They can squeeze 20% more performance from GTX580 without increasing bandwidth.

So basically the memory bandwidth limitation and "only" having 2GB of VRAM are only problems in your mind. AMD released a card for $550 that's only 20-22% faster without seeing what the competition can bring. I have no doubt that Kepler GK110 will be at least 40-50% faster than GTX580, which means GK104 should have no problems at all matching HD7970 at a much lower price if NV chooses to be aggressive with its pricing strategy.
 
Ya, what about it. I didn't miss the point at all. NV can easily release a card for $399 with 20% more performance than a GTX580. That would make HD7970 at $549 insanely overpriced
Some might argue that the GTX 580 and 7970 are already insanely overpriced - you don't need an unreleased card to see that- a $260 unlockable HD 6950 makes that abundantly clear.
At stock speeds the HD7970 is barely faster than a GTX580.
Hardly surprising
You keep focusing on 2GB of VRAM limitation when for 99.9% of people that's plenty fast
99.9% of people don't use enthusiast-level graphics cards...and what I wrote was "Why would Nvidia go from 384-bit to 256-bit for their top GPU ?" - the statement was in reply to your fixation with GK110. Do you really think a GK110, like any other enthusiast card, will be purchased in any significant numbers? More to the point, are you expecting the GK110 to have a 256-bit memory bus?

(What is it with Guests and straw man arguments?)

It's also why I said "I really don't see it happening with those specifications". IF the card being talked about is the GK104, then the specifications being bandied around are a 40MHz lower core clock than the 560Ti, no shader hot clock, the same bus and framebuffer, a smaller die, better performance/watt, better performance/mm² for a nominal 55W increase in TDP but lower temps, and an 80% increase in performance.
Usually when something seems too good to be true, it's because it is. The flip side of this is that numbers like these won't be in any way a good thing for consumers if true. If the GK104 meets or exceeds HD 7970 performance, rest assured that Nvidia will price accordingly.

HD7970 can't take advantage of > 2GB of RAM since it's not fast enough in those situations in the first place. The few games that use a lot of VRAM like Shogun 2 destroy HD7970 at 2560x1600 with AA.
Whatever... Just out of interest, check the relative performance of the same 3GB 7970 against the 1.5GB version of the GTX 580 (albeit with a more powerful rig)

You are also assuming that 256-bit memory interface is a problem. You aren't considering that NV can simply increase TMUs, SPs and ROPs and match the bandwidth of the GTX580 with faster GDDR5 chips on 256-bit interface. They can squeeze 20% more performance from GTX580 without increasing bandwidth.
Yep. Probably why I wrote "unless Nvidia have made a fundamental leap in GPU design that mitigates the reduced bandwidth"
They may have also moved to much higher bandwidth GDDR5, they might have improved their memory controllers, they might have managed to pare away latency, they may have simplified the GPU by removing the double-precision element, and they may - as you've said - move to increasing ROPs and TMUs (maybe 48 and 96?)...which probably constitutes "a fundamental leap in GPU design"

Bandwidth of GTX 580: 384-bit ÷ 8 × 4008MHz effective = 192,384 MB/sec (usually expressed as 192.38GB/sec)
Bandwidth of GK104: 256-bit ÷ 8 would need 6012MHz effective to reach the same 192,384 MB/sec...a 50% increase in memory speed is required, and some 500MHz faster than AMD's GDDR5 - the guys who pretty much invented the GDDR5 spec.
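That arithmetic can be sanity-checked in a few lines of Python (a sketch; the 6012MHz figure is simply whatever effective clock a 256-bit bus needs to match the GTX 580's bandwidth):

```python
# Memory bandwidth in MB/s = (bus width in bits / 8 bits-per-byte) * effective clock (MHz)
def bandwidth_mb_s(bus_width_bits, effective_mhz):
    return bus_width_bits // 8 * effective_mhz

gtx580 = bandwidth_mb_s(384, 4008)  # 192,384 MB/s (~192.38 GB/s)
gk104 = bandwidth_mb_s(256, 6012)   # same 192,384 MB/s on a 256-bit bus

print(gtx580, gk104, 6012 / 4008)   # identical bandwidth; 1.5 = the 50% memory-clock bump
```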

So basically the memory bandwidth limitation and "only" having 2GB of VRAM are only problems in your mind. AMD released a card for $550 that's only 20-22% faster without seeing what the competition can bring. I have no doubt that Kepler GK110 will be at least 40-50% faster than GTX580, which means GK104 should have no problems at all matching HD7970 at a much lower price if NV chooses to be aggressive with its pricing strategy.
A couple of points:
1. GK110 isn't here. This thread isn't concerned with what will in all probability be a 384-bit+ (possibly 512-bit) GPU. If 256-bit/2GB "isn't a problem" in your mind, do you think it likely that GK110 will be 256-bit/2GB?
2. Nvidia has seldom been aggressive with its pricing strategy- unless responding to AMD's pricing. Haven't you noticed that AMD and Nvidia have been dovetailing price and performance since at least 2008?
3. This is what VRAM limitation looks like at 5760x1080...and remember that at this price point there are going to be a few people who might want to use this res or higher. Bear in mind that Charlie mentions that the GK104 includes DisplayPort, so if Nvidia are moving toward supporting single-card 5040+ resolutions (and I think they must look to match AMD sooner or later) then a larger framebuffer (especially with AA enabled) is probably a must - at least for the top-tier card.
4. The HD 7970 has already demonstrated an ability for a significant percentage of cards to clock in excess of 1200MHz core and 7000MHz effective memory on stock cooling. You don't think that AMD might take advantage of this fact and bin for an HD 7980 (or whatever) if they need to, sometime between now and when the GK110 launches? That kind of puts paid to the vast theoretical GK110 advantage...and that doesn't take into account that AMD might 1. revise/refine the GPU design, and 2. have the HD 8000 series out by the time the GK110 drops....or are you privy to the launch dates as well as performance figures?
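To put point 3 in numbers, here's a back-of-envelope sketch (my own simplification - it ignores textures, geometry, driver overhead and any framebuffer compression) of why AA at surround resolutions eats VRAM:

```python
# Rough size in MB of a single colour+depth render target.
# bytes_color/bytes_depth of 4 each assume 32-bit colour and a 32-bit Z buffer.
def render_target_mb(width, height, msaa=1, bytes_color=4, bytes_depth=4):
    samples = width * height * msaa
    return samples * (bytes_color + bytes_depth) / 1024 ** 2

single_1080p = render_target_mb(1920, 1080)           # ~15.8 MB, no AA
surround_4x = render_target_mb(5760, 1080, msaa=4)    # ~190 MB for one 4xMSAA target

print(single_1080p, surround_4x)
```

One 4xMSAA target at 5760x1080 is already ~10% of a 2GB card before a single texture is loaded, and games keep several such buffers in flight.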

EDIT:
FWIW here's Dave Kanter's take just posted at B3D
 
I would go for the mid-range 600 series (GTX 660 Ti) instead of the high-end card. But my GeForce GTX 560 Ti is working fine for me so I'll pass on this next generation of GPUs.
 
I would go for the mid-range 600 series (GTX 660 Ti) instead of the high-end card. But my GeForce GTX 560 Ti is working fine for me so I'll pass on this next generation of GPUs.
I'm thinking very seriously about using that card (GTX 660 Ti) as a replacement for my GTS 450.
 
VR-Zone are apparently spilling (some of the) beans on the prospective GTX 660.

A doubling of shaders/CUDA cores to 768 and:
• "Kepler shaders will be different from Fermi counterparts"
• "Single precision performance is rated at above 2 Teraflops, twice that of GTX 560 Ti and over 50% higher than the GTX 580"
• "256-bit memory interface, but with frame buffer doubled to 2GB presumably at higher clocks"

The GTX 560Ti has 1.264 TFlops single precision...384 shaders x 1645MHz shader frequency x 2 flops per clock
So presumably, for a GTX 660 at "twice the flops" (i.e. 2.53 TF), twice the shader count means the shader frequency remains unchanged at 1645MHz (if Kepler stays at 2 SP flops/cycle)...which also means Nvidia aren't doing away with the shader hot clock - unless they plan on clocking the core at 1645MHz as well!
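The flops arithmetic above, as a quick check (assuming the usual shaders × shader clock × 2 FLOPs/cycle formula still holds for Kepler):

```python
# Single-precision throughput in GFLOPS = shaders * shader clock (MHz) * FLOPs-per-clock / 1000
def sp_gflops(shaders, shader_clock_mhz, flops_per_clock=2):
    return shaders * shader_clock_mhz * flops_per_clock / 1000

gtx560ti = sp_gflops(384, 1645)  # ~1263 GFLOPS - the 1.264 TFlops quoted above
gtx660 = sp_gflops(768, 1645)    # ~2527 GFLOPS - double the shaders, same hot clock

print(gtx560ti, gtx660)
```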

If Nvidia are simply doubling (minus memory controllers etc.) and shrinking the GF114 arch, they're going with 64 ROPs and 128 TMUs (numbers bandied around in relation to GK110?). The ROP count looks a little wasteful - a lot of extra power required for a less-than-comparable gain in performance....and of course, a 30% die shrink from 40nm to 28nm still wouldn't help too much when you're talking about combining the best part of two 358mm² GF114s into a single package.
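A napkin check on that last point (my own numbers: perfect area scaling would be (28/40)², while the ~30% figure above is the more realistic case, since shrinks never scale perfectly and I/O and analog blocks barely shrink at all):

```python
gf114_area = 358.0                  # mm^2, GF114 at 40nm
combined = 2 * gf114_area           # 716 mm^2 of 40nm silicon to fold together

ideal = combined * (28 / 40) ** 2   # ~351 mm^2 if everything scaled perfectly
realistic = combined * (1 - 0.30)   # ~501 mm^2 at the ~30% shrink quoted above

print(ideal, realistic)
```

Either way the result is a big, hot die for what is supposed to be a second-tier part.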
 
Same as AMD has done with Bulldozer and its so-called "8 cores", which are in fact 4 hyperthreaded modules that can't hold a candle to Intel's four-year-old dual cores.
 
You can run a higher resolution with more memory. As the image stretches out, you're going to need more space to render into. Instead of pre-rendering (which will tear your fps apart) they chose to make Eyefinity even cheaper - that is, you don't need two GPUs to get that extra buffer speed.
 
Back