Nvidia said to be readying $899 GK110-based GeForce Titan card

By Matthew on January 22, 2013, 6:00 PM
Nvidia's GK110 graphics chip

Preparing for the upcoming launch of AMD's Radeon HD 8000 series, which will presumably kick off with a single-GPU flagship leading the charge, Nvidia is reportedly hoping to steal some of its rival's thunder with a new card that will slot between today's GeForce GTX 680 and the dual-GPU GTX 690.

According to several sources speaking with SweClockers, Nvidia's newcomer will appear late next month for $899 as the GeForce Titan -- a neat reference to the Titan supercomputer built by Cray at the Oak Ridge National Laboratory, which comprises 18,688 nodes, each equipped with an Nvidia Tesla K20X GPU.

Instead of using the GTX 600 series' GK104 GPU, the Titan will be armed with the GK110, the chip that powers Nvidia's enterprise-class Tesla range. The Titan's chip will reportedly have at least one of the die's SMX units disabled from the top configuration, leaving it with 2688 CUDA cores -- still over a thousand more than the GTX 680's 1536.

It's also said that the Titan will have a clock rate of 732MHz (200 to 300MHz lower than the GTX 680 and 690), while its 6GB of GDDR5 VRAM will run at 5.2GHz on a 384-bit bus (50% wider than the GK104's). All told, the card will supposedly be 15% slower and at least 10% cheaper than the GTX 690.
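As a quick sanity check, the bandwidth and core-count claims in those rumored figures (which, per the update below, turn out to be the Tesla K20X's) follow from simple math. A minimal sketch, using the GTX 680's known 256-bit, 6GHz-effective memory configuration for comparison:

```python
# Back-of-envelope check of the rumored memory and core figures.
# GDDR5 bandwidth = bus width (in bytes) x effective data rate.

def bandwidth_gb_s(bus_bits: int, effective_ghz: float) -> float:
    return bus_bits / 8 * effective_ghz

rumored = bandwidth_gb_s(384, 5.2)  # 249.6 GB/s
gtx_680 = bandwidth_gb_s(256, 6.0)  # 192.0 GB/s

print(f"Rumored Titan: {rumored:.1f} GB/s, GTX 680: {gtx_680:.1f} GB/s")
print(f"Bus width advantage: {384 / 256 - 1:.0%}")  # 50% wider, as stated

# Core count: GK110 carries 15 SMX units of 192 CUDA cores each, so one
# disabled SMX leaves 14 x 192 = 2688 cores vs. the GTX 680's 1536.
print(f"CUDA cores with one SMX disabled: {14 * 192}")
```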

Assuming those figures are accurate, such a value discrepancy would probably be justifiable considering the GTX 690 has a 300W TDP, while the Titan should consume less given that the Tesla K20X is rated at 235W -- not to mention the reduced hassle of not dealing with an SLI-based card.

Update: As noted by dividebyzero in the comments, the specs struck through above are for the Tesla K20X; the GeForce Titan's configuration unfortunately hasn't been revealed yet. For what it's worth, the estimate that the new card will be roughly 15% slower than the GTX 690 still seems relevant.




User Comments: 42

2 people like this | soldier1969 said:

Wow, 6GB of VRAM, and I thought my 4GB 680 was slightly overkill even for my 2560 x 1600 display. Talk about a slap in the face to those who just bought a 690...

1 person liked this | Sarcasm said:

Wow, can't wait to see someone SLI those things.

dividebyzero, trainee n00b, said:

Matthew, just a correction to the story:

It's also said that the Titan will have a clock rate of 732MHz (200 to 300MHz lower than the GTX 680 and 690), while its 6GB of GDDR5 VRAM will run at 5.2GHz

Those specifications are for the 235-watt Tesla K20X -- the original SweClockers story is just recounting the spec sheet, and doesn't speculate on a GeForce card's numbers. It's probably safe to assume that a GeForce version would have less need for such a restrictive power limit when the PCI SIG allows for 300W. My guess: 850-900MHz, 5.8-6GHz (effective) memory and a fully enabled die (but with the native 1/3 FP64 rate hobbled).

Matthew, TechSpot Staff, said:

Thanks DBZ. I actually noticed that the specs lined up with the K20X's, but I didn't think twice about it -- perhaps because Google's translation didn't make it particularly clear that they were strictly speaking about the Tesla product. Anyhow, I'll update the post.

thewind said:

Why would someone want this over a 690? SuperBiiz is selling the 690 for $984.99 with free shipping. (Personally I wouldn't buy either, for the obvious reason that by the time I can't play a game on ultra with the 680, a new graphics card will be out that beats the 690 in the $450 price range -- hence saving $100 and having a better card than the 690.) But IF I was going to buy a card, I would rather spend the extra $85 for a 15% increase! Which is kind of a big difference considering the difference between a 580 and a 680 is about 30%. So on cost vs. performance the 690 would still be better. The lower wattage is nice though. 6GB is definitely overkill considering NO game even uses 4, and the only game I know of that uses close to 2GB is GTA IV. Hitman might too; it does use a lot of memory I believe.

1 person liked this | dividebyzero, trainee n00b, said:

Why would someone want this over a 690?

1. Single GPU - less hassle with SLI scalability (or SLI profiles for that matter)

2. Remember the crappy performance in AMD Gaming Evolved titles, where compute functionality shows up the GK104-based Keplers? A GK110 wouldn't have those problems.

3. The 690 is still a 2GB-per-GPU card, so...

6GB is definitely overkill considering NO game even uses 4, and the only game I know of that uses close to 2GB is GTA IV. Hitman might too; it does use a lot of memory I believe.

It's not unreasonable to assume that someone dropping $900 on a graphics card might just want to use it for multi-monitor gaming. Here's the vRAM usage for BF3 (bear in mind that the GTX 680s are 2GB per card).

So on cost vs. performance...

Rarely has any meaning with a halo-orientated part. You could say the GTX 690 has a better perf/$ metric, although I'd go out on a limb and say that the increased bandwidth and framebuffer of a GK110 part would come into play at higher resolutions and higher levels of in-game image quality. And if perf/$ is the overriding metric, then a couple of GTX 670/660 Tis are a better bet (from an Nvidia GPU PoV) than a 690 in any case.

[source for table]

amstech, TechSpot Enthusiast, said:

Huh, that's interesting; my GTX 670 runs the Gaming Evolved titles (well, the two I've played) as good if not better, but maybe that's because I only game at 1600p. Honestly... not trying to be a smartass.

godrilla said:

@dividebyzero

True, but according to some sites the beast is due about a year after the GTX 690 -- late Q2 2013, or around the same time as the HD 8000 series. I couldn't wait for this beast, so I bought the GTX 690 at launch. Thank God I didn't hold my breath.

2 people like this | spydercanopus said:

@dividebyzero

You should have "TechSpot Fact Guru" rank.

dividebyzero, trainee n00b, said:

Huh, that's interesting; my GTX 670 runs the Gaming Evolved titles (well, the two I've played) as good if not better, but maybe that's because I only game at 1600p. Honestly... not trying to be a smartass.

Well, maybe "crappy" performance is an overstatement on my part, but just remember that AMD's tailored-for-GCN DiRT: Showdown leverages compute for global illumination and advanced lighting calculations -- and that's pretty much why this happens...

Now, if you were AMD management, and you knew that your competitor had an Achilles heel in the form of compute-based image quality, what are the chances that you'd exploit this in Gaming Evolved titles and every integrated benchmark from those games?

As far as Nvidia are concerned, launching the GK110 as a GeForce board -- even if it's overpriced (and it will be, to stop production being diverted from high-priced Tesla/Quadro) and in short supply -- ensures that 1. AMD doesn't clock up some easy pickings in the PR benchmark wars, and 2. the card that sits atop most charts won't be a Radeon, unless it's a chart depicting performance per watt, dollar, or mm² -- three metrics that Nvidia couldn't care less about with this card, and three metrics AMD had been quick to downplay with Tahiti (and likely will again with Curacao).

Littleczr said:

Huh, that's interesting; my GTX 670 runs the Gaming Evolved titles (well, the two I've played) as good if not better, but maybe that's because I only game at 1600p. Honestly... not trying to be a smartass.

Well, maybe "crappy" performance is an overstatement on my part, but just remember that AMD's tailored-for-GCN DiRT: Showdown leverages compute for global illumination and advanced lighting calculations -- and that's pretty much why this happens...

Now, if you were AMD management, and you knew that your competitor had an Achilles heel in the form of compute-based image quality, what are the chances that you'd exploit this in Gaming Evolved titles and every integrated benchmark from those games?

As far as Nvidia are concerned, launching the GK110 as a GeForce board -- even if it's overpriced (and it will be, to stop production being diverted from high-priced Tesla/Quadro) and in short supply -- ensures that 1. AMD doesn't clock up some easy pickings in the PR benchmark wars, and 2. the card that sits atop most charts won't be a Radeon, unless it's a chart depicting performance per watt, dollar, or mm² -- three metrics that Nvidia couldn't care less about with this card, and three metrics AMD had been quick to downplay with Tahiti (and likely will again with Curacao).

Recommend me a card. I have two Asus GTX 560 Tis (2GB) with a U2711. I'm complaining about the noise they make when I play FPS games, and the frame rates are not as fast at 2560 x 1440 either.

De4ler said:

6GB? They are making cards for 4K TVs/monitors.

dividebyzero, trainee n00b, said:

Recommend me a card...

Probably not a lot in the single-card market. Not even a 7970GHz is going to be a major step up. Depending on where you're shopping, I'd say a couple of non-reference 7950s would be the best bet for bang-for-buck; scaling should generally be good. For a single card/single GPU, you're probably limited to a 7970 for some sort of future-proofing. A non-reference factory-OC'd card goes for around $350 (about the same as a GTX 670) in the U.S.

6GB? They are making cards for 4K TVs/monitors.

With a 384-bit memory bus, it's either 3GB or 6GB (without going asymmetrical), and 3GB doesn't really have the wow factor -- not when a slew of GTX 670/680s are now 4GB.
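The arithmetic behind that, for anyone curious -- a minimal sketch assuming the 2Gbit GDDR5 chips standard at the time:

```python
# A 384-bit bus is divided into 32-bit channels, each feeding one GDDR5
# chip (or two in "clamshell" mode), so capacity comes in fixed steps.

BUS_BITS = 384
CHANNEL_BITS = 32       # one GDDR5 chip per channel
CHIP_GB = 2 / 8         # 2Gbit chip = 0.25GB (assumed; standard in 2013)

chips = BUS_BITS // CHANNEL_BITS       # 12 chips minimum
print(f"{chips} chips x {CHIP_GB}GB = {chips * CHIP_GB:.0f}GB")        # 3GB
print(f"Clamshell (two chips/channel) = {2 * chips * CHIP_GB:.0f}GB")  # 6GB
```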

Per Hansson, TS Server Guru, said:

@dividebyzero do you speak Swedish natively? (I'm not trying to be a smartass)

As I read the article originally a few days ago, I understood it to mean that the GeForce version would have one of the 15 SMX units disabled, resulting in 2688 CUDA "cores".

But now as I check the PDF you linked, that's the spec for the K20X card -- very intriguing!

And thus rereading the article I must say I agree with you: the "fully fledged" GPU has 2880 cores, but Nvidia disabled one SMX unit in the K20X card for a total of 14 SMX units.

This raises the question of whether the consumer variant will have one further SMX unit disabled. I mean, they must have a lot of chips left over from the production of K20X cards that they could not use for die harvesting, since all current GeForce cards are based on GK104 and not GK110.

P.S. This article goes very well with this news post: [link]

And this one too: [link]

2 people like this | dividebyzero dividebyzero, trainee n00b, said:

@dividebyzero do you speak Swedish natively?

Nope, just a couple of online translators and a little extrapolation. The chances that Nvidia would release a GeForce GK110 at the clocks of the passively cooled K20X approach 0%. The Tesla (and Quadro) parts are binned for long-term stability and GPU core voltage; stability isn't as critical for GeForce cards, and nor is the 225-235W envelope per GPU that most server rack units are specced for.

With more relaxed power envelope constraints and less need to bin GPUs for low power, the GeForce card is seldom clocked lower than its Tesla/Quadro counterpart on the core, and (almost?) never on the vRAM.

...the "fully fledged" GPU has 2880 cores but nVidia disabled one SMX unit in the K20X card for a total of 14 SMX units.This begs the question if the consumer variant will have one further SMX unit disabled, I mean they must have allot of chips leftover from the production of K20X cards that they could not use for die harvesting, since all Geforce cards are based on GK104 and not GK110

Three possible scenarios:

1. A full 15-SMX GK110 falls outside of the power envelope, in which case they could be binned for GeForce. There's also a possibility that the 15th SMX is built-in redundancy to mitigate against any yield problems with a large die; fully enabled GK110s could be seen as a bonus, which might fit in with the lack of GTX naming for the GeForce card.

2. The GeForce cards are further salvage parts, as you hypothesise, in which case I'd also expect a 12-SMX part if yields are that bad. I'm also going to assume that Nvidia launches a Quadro GK110 that could well be a candidate for the 13- (and 12-?) SMX parts.

3. There are 100% functional GK110s, but they are being held back for an (as yet) unannounced model.

Without knowing yields, it's tough to extrapolate this likelihood, though with the probable die size and Nvidia's prioritizing of OEMs, they couldn't have been that bad. Initial shipments of the K20X started in early September, and at least 18,688 14-SMX (93% functional) GPUs were supplied within a two-month period, since ORNL's Cray Titan ran a Linpack benchmark in early November. If Nvidia can produce nearly 20K 87% (K20) and 93% (K20X) functional GK110s, what's the chance that there are some 100% functional units as well as 13-SMX parts? And if there are 15-SMX-enabled GPUs, where are they going if not GeForce cards? Stockpiling for a future model seems pointless, since yields are generally going to improve with every tranche of wafers produced, which should keep the channel supplied. Much as Nvidia would like it not to be, there is a finite market for math co-processors -- and even if every GPGPU HPC cluster is replacing Fermi boards with Kepler, I'm not sure the market is that large.

Either way, we won't have long to wait. About five minutes after these boards go to OEMs for validation, we should start seeing benchmarks pop up on Chinese tech sites.

Guest said:

15% slower than a GTX 690, but on a single GPU!! That's powerful! Wohoo!

Jbucko said:

Powerful and only 235W -- I am sold, but the price is up there. I would pay $600-700, not $900.

Lionvibez said:

Wow, 6GB of VRAM, and I thought my 4GB 680 was slightly overkill even for my 2560 x 1600 display. Talk about a slap in the face to those who just bought a 690...

Why is it a slap in the face? The card is supposed to be 15% slower! Anyone with a 690 still has a faster video card and bragging rights!

Guest said:

Can't wait to see MSI's extreme lightning version for this card :D

Sarcasm said:

Wow, 6GB of VRAM, and I thought my 4GB 680 was slightly overkill even for my 2560 x 1600 display. Talk about a slap in the face to those who just bought a 690...

Why is it a slap in the face? The card is supposed to be 15% slower! Anyone with a 690 still has a faster video card and bragging rights!

Because it is a single GPU with 6GB of RAM, meaning fewer driver issues, no microstutter, and better compatibility with games. It's generally advised to get the single fastest GPU over two slower ones in SLI. For extreme resolutions, every bit of RAM counts in the long run too.

Lionvibez said:

Agreed on the "fewer driver issues, no microstutter, and better compatibility with games".

But if you were a 690 owner, would you downgrade to this card to avoid those issues?

"For extreme resolutions, every bit of RAM counts in the long run too."

Agree with this to a certain extent, but there is no such thing as the long run with computers; a year or two after this card is released, there will be a GPU out that is faster and costs less.

Guest said:

:( Guess I will wait for Maxwell in 2014 at the price of this new card... unless this card goes down in price $$. My GTX 580 will have to hold on till then.

Zeromus said:

2. Remember the crappy performance in AMD Gaming Evolved titles, where compute functionality shows up the GK104-based Keplers? A GK110 wouldn't have those problems.

I'd like more info on that please, I'd love to read an article on that if possible.

Zeromus said:

Agreed on the "fewer driver issues, no microstutter, and better compatibility with games".

But if you were a 690 owner, would you downgrade to this card to avoid those issues?

"For extreme resolutions, every bit of RAM counts in the long run too."

Agree with this to a certain extent, but there is no such thing as the long run with computers; a year or two after this card is released, there will be a GPU out that is faster and costs less.

Don't forget, the highest SLI possible with a 690 is two cards. So with 4x this GPU, it would be ~240% faster than a single 690.

dividebyzero dividebyzero, trainee n00b, said:

I'd like more info on that please, I'd love to read an article on that if possible.

About what? There are three aspects in the quote you referenced: two already graphically addressed, and one easily interpreted from the spec sheet, the PR bumf, and a quick Google of the two architectures.

The compute functionality AMD jammed into the games? Why not try AMD's own benchmark guides.

The compute functionality of the GK110 in relation to the GK104? Well, all the compute functionality culled from the GK104 to prioritize general gaming, die size, and thermal/power usage -- which Nvidia didn't spotlight at the GK104 launch -- is pretty much front and centre in Nvidia's GK110 marketing: "Nvidia believes that traditional graphics APIs such as DirectX and OpenGL have stalled, which opens the way to new techniques and post-processing shader graphs running on GPGPU APIs like CUDA, DirectCompute (part of DirectX 11 and above) and OpenCL, such as deferred shading techniques, global and interactive illumination, and indirect illumination (octrees and voxels) used in the game Battlefield 3 and Unreal Engine 3."

About the lack of compute functionality in the GK104? Well, if you had problems parsing the previous chart -- and the TS review it obviously came from -- I'm not sure showing you a heap of others is going to make things clearer, but as Ray Lewis said: I'll take a stab at it.

[source]

Note the relationship in the chart between the GTX 680 and the card it replaced in the product stack. I'd suggest reading through the review, as well as any others that utilize benchmarks which require compute shader input.

Lionvibez said:

Don't forget, the highest SLI possible with a 690 is two cards. So with 4x this GPU, it would be ~240% faster than a single 690.

Correct about 2x SLI and 4x, however SLI scaling is not 100%, so I doubt it would be 240% faster. Secondly, two 690s would already be crazy expensive. This new card is $900 -- who is going to spend $3600 on a 4x SLI setup? And in doing so, you are going back to micro-stutter land, which is exactly the problem the single-card setup avoids versus the 690!
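For reference, here's the arithmetic behind that ~240% figure and what sub-linear scaling does to it -- a rough sketch, with the 85%-per-GPU scaling factor purely an illustrative assumption:

```python
# Where the "~240% faster" figure comes from: the Titan is pegged at
# ~15% slower than a GTX 690, i.e. 0.85x one 690. Four Titans with
# perfect scaling would be 4 x 0.85 = 3.4x, or 240% faster.
TITAN_VS_690 = 0.85

def quad_titan_vs_690(scaling: float) -> float:
    # Each GPU past the first contributes only a fraction of its performance.
    return (1 + 3 * scaling) * TITAN_VS_690

print(f"Perfect scaling: {quad_titan_vs_690(1.0) - 1:.0%} faster")   # 240%
print(f"85% scaling:     {quad_titan_vs_690(0.85) - 1:.0%} faster")  # ~202%
```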

PC nerd said:

I don't understand why this GPU exists.

Is it meant to be a budget compute graphics card?

Or an overpriced gaming card?

Zeromus said:

Correct about 2x SLI and 4x, however SLI scaling is not 100%, so I doubt it would be 240% faster. Secondly, two 690s would already be crazy expensive. This new card is $900 -- who is going to spend $3600 on a 4x SLI setup? And in doing so, you are going back to micro-stutter land, which is exactly the problem the single-card setup avoids versus the 690!

Yup, that's very true, but it's something to consider about this "Titan." It might not even scale well at all, but I'm hoping it will.

dividebyzero, trainee n00b, said:

WCCF have a purported pic of the PCB layout of the Titan.

8+2 phase power delivery, 8-pin + 6-pin power connectors (300W max.).

Zeromus said:

WCCF have a purported pic of the PCB layout of the Titan.

8+2 phase power delivery, 8-pin + 6-pin power connectors (300W max.).

Why the censor?

dividebyzero, trainee n00b, said:

Two possible reasons:

1. The board partner's logo is embossed into the display connector mountings, or,

2. The card is actually a Tesla K20 -- which has no display-out functionality -- although afaia, Tesla and Quadro PCBs are almost always green, while GeForce boards are black.

Zeromus said:

I won't get excited about that until the primary source is an English-language site lol.

dividebyzero, trainee n00b, said:

Shouldn't be too far away. Both Tech Report and PC Perspective have Titan samples on hand -- presumably because both sites are now intensively conducting graphics testing based upon frame latency rather than the frames-per-second model that other review sites use.

Zeromus said:

How in the world have you acquired this information?

dividebyzero, trainee n00b, said:

It was mentioned on a few forums a while back (James Prior at Rage3D, for example).

Zeromus said:

I admire your dedication.

dividebyzero, trainee n00b, said:

Pictures of the GTX Titan are up at Egypthardware and elsewhere today.

Relevant specs:

14 SMX, 2688 cores, 224 TMUs, 48 ROPs

Base clock: 837 MHz, Boost: 876 MHz, Memory clock: 6008 MHz effective

6 GB GDDR5

Pixel fillrate: 40176 MPixel/s (42048 MPixel/s @ boost)

Texture fillrate: 187488 MTexel/s (196224 MTexel/s @ boost)

Bandwidth: 288.38 GB/sec

Floating point: FP32: 4488.7 GFLOPS (4720 GFLOPS @ boost)
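Those throughput figures follow directly from the unit counts and clocks using the standard formulas -- a quick sketch (the FP32 numbers land slightly off the quoted ones, so the leak presumably rounded the clocks a little differently):

```python
# Deriving the leaked throughput figures from unit counts and clocks.
SMX, CORES_PER_SMX = 14, 192
CORES = SMX * CORES_PER_SMX            # 2688 CUDA cores
TMUS, ROPS = 224, 48
BASE, BOOST = 837, 876                 # core clocks, MHz
MEM_MHZ, BUS_BITS = 6008, 384          # effective memory clock, bus width

print(f"Pixel fillrate:   {ROPS * BASE} MPixel/s ({ROPS * BOOST} @ boost)")
print(f"Texture fillrate: {TMUS * BASE} MTexel/s ({TMUS * BOOST} @ boost)")
print(f"Bandwidth:        {BUS_BITS / 8 * MEM_MHZ / 1000:.2f} GB/s")
# FP32 = 2 FLOPs (one fused multiply-add) per core per clock:
# 2 x 2688 x 837 gives ~4499.7 GFLOPS, a touch off the leaked 4488.7.
print(f"FP32:             {2 * CORES * BASE / 1000:.1f} GFLOPS "
      f"({2 * CORES * BOOST / 1000:.1f} @ boost)")
```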

EVGA and Asus are likely to be the only AIBs selling the card. The Asus apparently has a base clock of 915 MHz.

Zeromus said:

Actually saw that a few minutes ago lol.

dividebyzero, trainee n00b, said:

Very nice aesthetic. The second rear power pin-out location (along with the lack of an integrated heatspreader) makes me wonder how much kinship the Titan actually has with the Tesla K20X, since both features are present on each card, along with the 8+6-pin power.

PC nerd said:

Why the blower cooler? Yuck

LNCPapa said:

That is an incredibly sexy card (especially if that's some sort of metal and not all plastic)... too bad it's going to be just out of my price range. My next rig is almost definitely going to have three cards in it, but I'm hoping for a total GPU investment of about $1500. No way I can stretch that to $2700, so I'm hoping there's a lower-end GK110 model coming soon after release, or something interesting from the RED side.

