Rumor: Nvidia GeForce GTX 680 to arrive in February

January 18, 2012, 11:47 AM

Nvidia was all about Tegra 3 and mobile computing at last week's Consumer Electronics Show, but the company hasn't forgotten about hardcore gamers awaiting its next-generation desktop products. In fact, according to recent rumors, the first 28nm "Kepler" cards may arrive a bit earlier than the previously expected March/April timeframe, in response to AMD's successful launch of its GCN (Graphics Core Next) based Radeon HD 7970 graphics cards.

Chinese website ChipHell.com is reporting that Nvidia will launch the GeForce GTX 680 sometime in February and the card will supposedly offer similar performance to AMD's Radeon HD 7970, which currently holds the title of fastest single-GPU card on the planet.

Although detailed specifications for the upcoming cards are still unknown, the general consensus is that the GTX 680 cards will have 2GB of video memory on a 256- or 384-bit bus and have a core clock speed of 780MHz.

Previous rumors suggested Nvidia was timing the release of its new graphics cards to coincide with the launch of Intel's Ivy Bridge processors. A report in November also claimed that Nvidia was planning a bottom-to-top rollout for its 28nm Kepler architecture, starting with the entry-level GK107 GPU. The performance GK104 GPU that would presumably power the GeForce GTX 680 was slated for Q3 2012. Needless to say, Nvidia doesn't comment on rumors and speculation, so hopefully an official announcement will come soon.




User Comments: 38

captainawesome captainawesome said:

I wonder why AMD goes for the bigger framebuffer when Nvidia seems to like the 2GB max..?

Muggs said:

Been waiting on the 600series to upgrade my 400series card. Will be nice to have a single card solution again.

dividebyzero dividebyzero, trainee n00b, said:

@ captainawesome

That comes down to the BoM (Bill of Materials). Because Nvidia GPUs tend to be larger than AMD's (and hence more expensive to produce), Nvidia usually don't endow the cards with other big price tag components, which is usually seen in the areas of voltage regulation and vRAM (both in quantity and speed). For example, the 3GB of VRAM that the HD 7970 carries has a wholesale cost of around $US 90, while 1.5GB of the previous generation GDDR5 carried by the GTX580 is $US 32.

Having more vRAM onboard, for the most part, only produces tangible gains in a minority of games at standard resolutions - reference any HD 6950 1GB vs 2GB review. As gaming moves to greater resolutions, the frame buffer becomes more desirable.

marinkvasina marinkvasina said:

dividebyzero said:

@ captainawesome

That comes down to the BoM (Bill of Materials). Because Nvidia GPUs tend to be larger than AMD's (and hence more expensive to produce), Nvidia usually don't endow the cards with other big price tag components, which is usually seen in the areas of voltage regulation and vRAM (both in quantity and speed). For example, the 3GB of VRAM that the HD 7970 carries has a wholesale cost of around $US 90, while 1.5GB of the previous generation GDDR5 carried by the GTX580 is $US 32.

Having more vRAM onboard, for the most part, only produces tangible gains in a minority of games at standard resolutions - reference any HD 6950 1GB vs 2GB review. As gaming moves to greater resolutions, the frame buffer becomes more desirable.

all i can get from this is that u are a pro user of google.

LinkedKube LinkedKube, TechSpot Project Baby, said:

all i can get from this is that u are a pro user of google.

or that you're a troll waiting under a bridge

bandit8623 said:

captainawesome said:

I wonder why AMD goes for the bigger framebuffer when Nvidia seems to like the 2GB max..?

amd likes 4k resolutions :P

Guest said:

"captainawesome said:

I wonder why AMD goes for the bigger framebuffer when Nvidia seems to like the 2GB max..?"

Because AMD's cards support 3 screens out of the box, and in, for example, Crysis 2 I saw tests where it consumed 2.4GB of VRAM in Eyefinity.

That being said, don't underestimate the power of morons; many look at cards and think a 2GB card is better than a 1GB, even if it's 2GB GDDR3 vs 1GB GDDR5.

Ultraman1966 said:

Because bigger sounds better

dividebyzero dividebyzero, trainee n00b, said:

all i can get from this is that u are a pro user of google.

Don't feel bad, an in-depth comprehension of a given subject takes time, concentration and a great deal of effort. You're bound to have a few missteps along the way.

You could, I suppose use Google for a quick-and-dirty synopsis of most things, but it's usually better if the technology and its wider implications are something you have a keen interest in....pretty radical concept for a tech enthusiast forum I know.

TL;DR

LinkedKube on the money

Sarcasm Sarcasm said:

captainawesome said:

I wonder why AMD goes for the bigger framebuffer when Nvidia seems to like the 2GB max..?

If you noticed AMD's recent string of products, they are aiming toward extreme multitasking. That also reflects in their CPU's and APU's. With their Radeon line, notice how they keep talking about multiple screens with Eyefinity? And even the FX CPU's is great for extreme multitasking (regardless of how people view it as a failure.) Point is, having more VRAM will definitely be needed especially for multi-monitor setups.

If a person was not to use any of those features, stick to one monitor, don't do much multitasking, then an Nvidia card with 2gb should be plenty. Even though I say the more the merrier.

My GTX580 with 1.5gb is plenty for me at my resolution of 1920x1080.

Guest said:

"Previous rumors suggested Nvidia was timing the release of their new graphics cards to coincide with the launch of Intel's Ivy Bridge processors."

With little change in performance and a hefty price tag, I guess Nvidia missed the memo that Intel's Ivy Bridge built-in gpu will have 30-60% better performance and be able to support 2 monitors running 4k x 4k.

http://www.techspot.com/news/46832-intel-to-launch-22nm-ivy-bridge-processors-on-april-8.html

[I don't have the source for the 2 monitors running at 4kx4k, but it was a statement made by Intel.]

LinkedKube LinkedKube, TechSpot Project Baby, said:

My GTX580 with 1.5gb is plenty for me at my resolution of 1920x1080.

Amen to that although I'll be buying my third when the 680 drops. I'm trying to keep up with red but my gf won't allow it.

Guest said:

Still waiting for the GTX 780, i predict the 680 will suffer from power consumption and overheating issues :P

It'd better come out before 21st December, or i'll just be pis...

amstech amstech, TechSpot Enthusiast, said:

Hmmm no GTX 670 treats?

dividebyzero said:

all i can get from this is that u are a pro user of google.

Don't feel bad, an in-depth comprehension of a given subject takes time, concentration and a great deal of effort. You're bound to have a few missteps along the way.

How has 'Techspot' not banned you yet dividebyzero?

dividebyzero dividebyzero, trainee n00b, said:

Taking a general observation rather personally, don't you think?

If you think you see something worthy of a ban feel free to use the Report Bad Post link...I'm sure the administration will give your input all the attention it deserves.

LinkedKube LinkedKube, TechSpot Project Baby, said:

Hmmm no GTX 670 treats?

How has 'Techspot' not banned you yet dividebyzero?

Because despite what you think his mental real estate is still worth something on this site. I have a short list of TS regulars that I've gained respect for. He's one of them.

I'm sorry if your post was a troll attempt; even if it wasn't, I'm sure you'll find a comfy bridge to hide under.

dividebyzero dividebyzero, trainee n00b, said:

I guess Nvidia missed the memo that Intel's Ivy Bridge built-in gpu will have 30-60% better performance

Performance, like a backwoods marriage, is relative. 30-60% increase from HD2000/HD3000 level graphics really isn't saying a lot in the realm of gaming. For most non-gaming applications, IGP will do just fine, but I think you'll still find that Intel's IGP isn't at the level to rival AMD's Fusion APU's or a discrete card- which is where Nvidia's discrete graphics and Optimus switching fill the market demand.

Aside from gaming there are other areas where a discrete card (or two) makes perfect sense - business and workstation spring to mind.

@LinkedKube

Thanks for the kind words. And Ditto.

You can change your username...when did that happen?

red1776 red1776, Omnipotent Ruler of the Universe, said:

Amen to that although I'll be buying my third when the 680 drops. I'm trying to keep up with red but my gf won't allow it.

...so much for asking for help to find non-ref waterblocks

Still waiting for the GTX 780, i predict the 680 will suffer from power consumption and overheating issues :P

and what evidence are you basing this on? Power efficiency seems to be one of (if not the) main facets of the upcoming line.

How has 'Techspot' not banned you yet dividebyzero?

as soon as I can figure the over under on that, I'm opening the betting window. How much you want to lose Amstech?

LinkedKube LinkedKube, TechSpot Project Baby, said:

...so much for asking for help to find non-ref waterblocks

Ask me tomorrow when I'm not in a drunken rage and on a troll hunt.

You want full copper, copper and nickel or copper base with acrylic block? You know i got choooo! ///In my detroit lingo\\\

treetops treetops said:

ah control f couldnt find a $ sign

cliffordcooley cliffordcooley, TechSpot Paladin, said:

LinkedKube said:

How has 'Techspot' not banned you yet dividebyzero?

Because despite what you think his mental real estate is still worth something on this site. I have a short list of TS regulars that I've gained respect for. He's one of them.

You can say that again, DBZ has my respect too.

red1776 red1776, Omnipotent Ruler of the Universe, said:

Ask me tomorrow when I'm not in a drunken rage and on a troll hunt.

You want full copper, copper and nickel or copper base with acrylic block? You know i got choooo! ///In my detroit lingo\\\

I know you do T

Does the acrylic block present any problems?

LinkedKube LinkedKube, TechSpot Project Baby, said:

1 to 2 degree temp difference at full load, but nothing detrimental over complete loop temps compared to full copper blocks with good flow


Archean Archean, TechSpot Paladin, said:

LinkedKube = supermashbrada ................ *head scratching* ............. ??????

By the way amstech: DBZ is one of the few very well informed contributors here, and I am sure many will verify that he doesn't indulge in any nonsensical outbursts. Somehow, some people always forget one thing: one has to 'gain' respect, as it is something which can't be taken as a 'given'.

dividebyzero dividebyzero, trainee n00b, said:

...so much for asking for help to find non-ref waterblocks

Noticed you had the Flex cards - they are second revision reference. You should have a reasonably wide choice- just make sure it's for a v2 reference pcb

Swiftech Komodo HD6900-2

Aquacomputer Type 2

EKWB v2 and ...and in white

Alphacool v2

Danger Den v2

Koolance probably make a compatible block also, but their QA/QC can be more than a little iffy with some products. AquaComputer and Swiftech might be your best bet. Never had a problem with EK's, but they have had issues with flaking internal Nickel plating. I'd assume that the affected parts have been recalled- but assume takes on some importance when you're talking about expensive hardware.

A user review at XS

red1776 red1776, Omnipotent Ruler of the Universe, said:

Noticed you had the Flex cards - they are second revision reference. You should have a reasonably wide choice- just make sure it's for a v2 reference pcb

Swiftech Komodo HD6900-2

Aquacomputer Type 2

EKWB v2 and ...and in white

Alphacool v2

Danger Den v2

Koolance probably make a compatible block also, but their QA/QC can be more than a little iffy with some products. AquaComputer and Swiftech might be your best bet. Never had a problem with EK's, but they have had issues with flaking internal Nickel plating. I'd assume that the affected parts have been recalled- but assume takes on some importance when you're talking about expensive hardware.

A user review at XS

Well that's just it though. I had thought you had told me at one point that since they all unlocked (actually came 6970 /1536 ) on the #1 bios, that they were rev 1.0. Am I misremembering this? I can't find them listed outright on a compatibility list, so I am relegated to matching pics of naked PCBs and the location of the VRM modules....it's not going well ROFL.

dividebyzero dividebyzero, trainee n00b, said:

I think the unlocking is vendor dependent (XFX) rather than a quirk of board revision, as well as some 1GB HD 6950's not unlocking (MSI's TFII ?)

Here's the PCB of a revision 2 board- which just happens to be a Sapphire HD 6970 Flex.

From the accompanying review:

[link]

If you're running the 6950 Flex at 6970 settings it still comes down to the same PCB

Well that's just it though. I had thought you had told me at one point that since they all unlocked (actually came 6970 /1536 ) on the #1 bios, that they were rev 1.0. Am I misremembering this?

http://www.techspot.com/vb/topic163560.html (post #4, #6) is about as much as I remember regarding non-ref unlocking (or otherwise). I reference second revision non-reference - thinking along the lines of the Twin Frozr II and XFX's centre fan model

...and back on topic...

ah control f couldnt find a $ sign

I wouldn't worry about it. I'm fairly certain the whole story is bogus. A 2GB frame buffer means either a 256-bit bus (which won't be competitive with the 7970, so why worry about rushing it into retail) or 512-bit - which seems extremely unlikely given the time frame. It's unlikely that Nvidia would re-jig a Fermi GPU for both an increased bus width (up from 384-bit) and a process shrink to 28nm in such short order.

ravy said:

YES! FINALLY! Let's start saving some money

dividebyzero dividebyzero, trainee n00b, said:

According to professional Nvidia hater Charlie over at semiarticulate, the 256-bit/2GB story is ...accurate:

The short story is that Nvidia will win this round on just about every metric, some more than others. Look for late March or early April availability, with volumes being the major concern at first. GK104 cards seen by SemiAccurate all look very polished and complete, far from rough prototypes or "Puppies". A2 silicon is now back from TSMC, and that will likely be the production stepping barring any last second hitches. Currently, there aren't any.

For the doubters, both of the current cards we saw have two DVI plugs on the bottom, one HDMI, and one DP with a half slot cooling grate on one of the two slot end plates. The chip is quite small, and has 8 GDDR5 chips meaning a 256-bit bus/2GB standard, and several other features we can't talk about yet due to differences between the cards seen. These items won't change the outcome though, Nvidia wins, handily

So either (1) a very pre-emptive April Fool's joke, (2) a blatant attempt to garner page views by placing a bogus story that's sure to be referenced all over the web, or (3) Nvidia pulled off a modern technological miracle ( Nvidia GTX 560 successor beats AMD's Tahiti !)

No actual information in the article, so I'm calling it (2) with a side-order of (1).

[no sauce]

Guest said:

"(3). Nvidia pulled of a modern technological miracle."

The only delusional ones are people who believe the 20-25% performance advantage that the HD7970 has over the GTX580 is enough to compete with high-end 28nm Kepler. Never in the history of NV was their next-generation card only 20-25% faster than their previous-generation card.

NV could have simply shrunk the GTX580, increased clocks, and gotten more than a 25% performance increase. That's not even taking into consideration any architectural improvements that Kepler might bring.

The HD7970 is going to be AMD's X1800XT. They'll release a 20-30% faster clocked version or add more SPs. There is no chance that AMD will be able to sell a $549 HD7970 that's only 20% faster than the GTX580 because Kepler will blow that performance advantage away. Of course AMD probably knew this and decided to launch cards at 925MHz while yields on 28nm aren't as great as they will be once the process matures and they are ready to launch an HD7980 at 1150MHz+.

dividebyzero dividebyzero, trainee n00b, said:

You seemed to have missed the point.

The rumour isn't about the GTX 580's successor (GK110)-which I don't think is close to imminent release - it's about the GTX 560's successor - GK104

Why would Nvidia go from 384-bit to 256-bit for their top GPU ? Simple answer is they wouldn't....unless Nvidia have made a fundamental leap in GPU design that mitigates the reduced bandwidth.

The rumour is that GK104 - a supposed 256-bit/2GB vRAM/780MHz (or possibly 900MHz) second-tier GPU - is supposedly going to take the HD 7970 or 7950 out to the woodshed...that's in the order of an 80% performance increase over the GTX 560Ti using a lower/equal core clock, same bus width and a 30% increase in power. Unless it's a die-shrunk dual-GPU 560, I really don't see it happening with those specifications.

Guest said:

Ya, what about it. I didn't miss the point at all. NV can easily release a card for $399 with 20% more performance than a GTX580. That would make HD7970 at $549 insanely overpriced:

At stock speeds the HD7970 is barely faster than an HD7970.

http://www.computerbase.de/artikel/grafikkarten/2011/test-amd-radeon-hd-7970/10/#abschnitt_leistung_mit_aaaf

You keep focusing on 2GB of VRAM limitation when for 99.9% of people that's plenty fast. HD7970 can't take advantage of > 2GB of RAM since it's not fast enough in those situations in the first place. The few games that use a lot of VRAM like Shogun 2 destroy HD7970 at 2560x1600 with AA.

You are also assuming that 256-bit memory interface is a problem. You aren't considering that NV can simply increase TMUs, SPs and ROPs and match the bandwidth of the GTX580 with faster GDDR5 chips on 256-bit interface. They can squeeze 20% more performance from GTX580 without increasing bandwidth.

So basically the memory bandwidth limitation and "only" having 2GB of VRAM are only problems in your mind. AMD released a card for $550 that's only 20-22% faster without seeing what the competition can bring. I have no doubt that Kepler GK110 will be at least 40-50% faster than GTX580, which means GK104 should have no problems at all matching HD7970 at a much lower price if NV chooses to be aggressive with its pricing strategy.

dividebyzero dividebyzero, trainee n00b, said:

Ya, what about it. I didn't miss the point at all. NV can easily release a card for $399 with 20% more performance than a GTX580. That would make HD7970 at $549 insanely overpriced

Some might argue that the GTX 580 and 7970 are already insanely overpriced - you don't need an unreleased card to see that- a $260 unlockable HD 6950 makes that abundantly clear.

At stock speeds the HD7970 is barely faster than an HD7970.

Hardly surprising

You keep focusing on 2GB of VRAM limitation when for 99.9% of people that's plenty fast

99.9% of people don't use enthusiast level graphics cards...and what I wrote was "Why would Nvidia go from 384-bit to 256-bit for their top GPU ?" - the statement was in reply to your fixation with GK110. You really think a GK110, like any other enthusiast card, will be purchased in any significant numbers? More to the point, are you expecting the GK110 to have a 256-bit memory bus?

(What is it with Guests and straw man arguments?)

It's also why I said "I really don't see it happening with those specifications". IF the card being talked about is the GK104, then the specifications being bandied around are a 40MHz lower core clock than the 560Ti, no shader hot clock, the same bus and framebuffer, smaller die, better performance/watt, better performance/mm² for a nominal 55W increase in TDP but lower temps, and an 80% increase in performance.

Usually when something seems to be too good to be true, it's because it is. The flip side of this is that numbers like this won't be in any way a good thing for consumers if true. If the GK104 meets or exceeds 7970 performance, rest assured that Nvidia will price accordingly.

HD7970 can't take advantage of > 2GB of RAM since it's not fast enough in those situations in the first place. The few games that use a lot of VRAM like Shogun 2 destroy HD7970 at 2560x1600 with AA.

Whatever...Just out of interest, [link] (albeit with a more powerful rig)

You are also assuming that 256-bit memory interface is a problem. You aren't considering that NV can simply increase TMUs, SPs and ROPs and match the bandwidth of the GTX580 with faster GDDR5 chips on 256-bit interface. They can squeeze 20% more performance from GTX580 without increasing bandwidth.

Yep. Probably why I wrote "unless Nvidia have made a fundamental leap in GPU design that mitigates the reduced bandwidth"

They may have also moved to a much higher bandwidth GDDR5, they might also have improved their memory controllers, they might also have managed to pare away latency, they may have simplified the GPU by removing the double-precision element, and they may - as you've said - move to increasing ROPs and TMUs (maybe 48 and 96 ?)...which probably constitutes "a fundamental leap in GPU design"

Bandwidth of GTX 580: 384-bit / 8 x 4008 MHz effective = 192384 MB/sec (usually expressed as 192.38GB/sec)

Bandwidth of GK104...: 256-bit / 8 x 6012 MHz effective to reach 192384 MB/sec...a 50% increase in memory speed is required, and some 500MHz faster than AMD's GDDR5...from the guys that pretty much invented the GDDR5 spec.
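For anyone who wants to check the arithmetic, the comparison can be sketched in a couple of lines of Python (the bus widths and effective memory clocks here are the rumoured figures from this thread, not confirmed specs):

```python
def gddr5_bandwidth_mb_s(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in MB/s: bytes per transfer (bus width / 8)
    times the effective transfer rate in MHz. GDDR5's quad data rate is
    already folded into the 'effective' clock figure."""
    return (bus_width_bits // 8) * effective_mhz

gtx580 = gddr5_bandwidth_mb_s(384, 4008)  # 192384 MB/s, ~192.4 GB/s
gk104 = gddr5_bandwidth_mb_s(256, 6012)   # also 192384 MB/s
```

In other words, a 256-bit GK104 would need roughly 6GHz-effective GDDR5 just to match the GTX 580's bandwidth - the 50% memory-speed jump noted above.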

So basically the memory bandwidth limitation and "only" having 2GB of VRAM are only problems in your mind. AMD released a card for $550 that's only 20-22% faster without seeing what the competition can bring. I have no doubt that Kepler GK110 will be at least 40-50% faster than GTX580, which means GK104 should have no problems at all matching HD7970 at a much lower price if NV chooses to be aggressive with its pricing strategy.

A couple of points:

1. GK110 isn't here. This thread isn't concerned with what will in all probability be a 384-bit+ (possibly 512-bit) GPU. If 256-bit/2GB "isn't a problem" in your mind, do you think it likely that GK110 will be 256-bit/2GB ?

2. Nvidia has seldom been aggressive with its pricing strategy- unless responding to AMD's pricing. Haven't you noticed that AMD and Nvidia have been dovetailing price and performance since at least 2008?

3. [link] ...and remember that at this price point there are going to be a few people who might want to use this res or higher. Bear in mind that Charlie mentions that the GK104 includes DisplayPort, so if Nvidia are moving toward supporting single card 5040+ resolutions (and I think they must look to match AMD sooner or later) then a larger framebuffer (esp with AA enabled) is probably a must - at least for the top tier card.

4. The HD 7970 has already demonstrated an ability for a significant percentage of cards to clock in excess of 1200MHz core and 7000MHz effective memory on stock cooling. You don't think that AMD might take advantage of this fact and bin for a 7980 (or whatever) if they need to sometime between now and when the GK110 launches ? [link] ...and that doesn't take into account that AMD might 1. revise/refine the GPU design, and 2. have the HD 8000 series out by the time the GK110 drops....or are you privy to the launch dates as well as performance figures?

EDIT:

[link] at B3D

ddg4005 ddg4005 said:

I would go for the mid-range 600 series (GTX 660 Ti) instead of the high-end card. But my GeForce GTX 560 Ti is working fine for me so I'll pass on this next generation of GPUs.

cliffordcooley cliffordcooley, TechSpot Paladin, said:

I would go for the mid-range 600 series (GTX 660 Ti) instead of the high-end card. But my GeForce GTX 560 Ti is working fine for me so I'll pass on this next generation of GPUs.

I'm thinking very seriously about using that card (GTX 660 Ti) as a replacement for my GTS 450.

dividebyzero dividebyzero, trainee n00b, said:

VR-Zone are apparently spilling (some of the) beans on the prospective GTX 660.

A doubling of shaders/CUDA cores to 768 and:

* "Kepler shaders will be different from Fermi counterparts"

* "Single precision performance is rated at above 2 Teraflops, twice that of GTX 560 Ti and over 50% higher than the GTX 580

* "256-bit memory interface, but with frame buffer doubled to 2GB presumably at higher clocks"

The GTX 560Ti has 1.264 TFlops (single precision)...384 shaders x 1645MHz shader frequency x 2 flops per clock

So presumably for GTX 660 at "twice the flops" (i.e. 2.53 TF) with twice the shader count, the shader frequency remains unchanged (if Kepler remains at 2 SP flops/cycle) at 1645 MHz...which also means that Nvidia aren't doing away with the shader hot clock...unless they plan on also clocking the core at 1645 !!

If Nvidia are simply doubling (minus memory controller etc) and shrinking the GF114 arch, they're going with 64 ROPs and 128 TMUs (numbers bandied around in relation to GK110 ?). The ROP count looks a little wasteful - a lot of extra power required for a less than comparable gain in performance....and of course, a 30% die shrink from 40nm to 28nm still wouldn't help too much when you're talking about combining the best part of 2 x 358mm² GF114s into a single package.
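The flops numbers above are easy to verify with the standard shaders x hot clock x 2 formula (the 768-shader and 1645MHz figures are the VR-Zone rumour, not confirmed specs):

```python
def sp_gflops(shaders, shader_clock_mhz, flops_per_cycle=2):
    """Single-precision throughput in GFLOPS: shader count x shader
    (hot) clock x 2 flops per cycle (one fused multiply-add)."""
    return shaders * shader_clock_mhz * flops_per_cycle / 1000.0

gtx560ti = sp_gflops(384, 1645)  # ~1263 GFLOPS, the 1.264 TFlops quoted
gtx660 = sp_gflops(768, 1645)    # ~2527 GFLOPS, if the hot clock is kept
```

Which is why doubling the shader count while keeping the 1645MHz hot clock lands almost exactly on the rumoured "twice the flops" figure.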

Guest said:

Same as AMD has done with the Bulldozer and its so-called "8 cores", which are in fact 4 hyperthreaded cores that don't come close to Intel's 4-year-old dual cores.

Guest said:

You can run a higher resolution with more memory. As the image stretches out, you're gonna need more space to render on. Instead of pre-rendering (which will tear your fps apart) they chose to make Eyefinity even cheaper; that is, you don't need two GPUs to get that extra buffer speed.
