Nvidia promises to ship next-gen Kepler GPU this year (not!)

Matthew DeCarlo


Update (8/5): Nvidia has clarified a previous statement regarding availability of its next-gen GPU codenamed Kepler. In a nutshell, Nvidia won't be shipping any final products using the new silicon this year. "Although we will have early silicon this year, Kepler-based products are actually scheduled to go into production in 2012. We wanted to clarify this so people wouldn’t expect product to be available this year," said Ken Brown, a spokesman for Nvidia. Both AMD's and Nvidia's upcoming GPUs are slated to use a new 28nm process technology, with brand new Radeon graphics cards likely to reach the market before the end of the year.

Original: We've seen various conflicting reports over the last six months about Nvidia's next-gen GPU launch schedule. It was originally reported that the graphics firm planned to unleash its codenamed "Kepler" (28nm) parts toward the end of this year. That story changed in early July when inside sources cited by DigiTimes claimed that Nvidia tweaked its roadmap due to low yields of TSMC's 28nm wafers, effectively shifting Kepler from 2011 to 2012, while "Maxwell" (22/20nm) cores would be bumped from 2012 or 2013 to 2014.

Apparently, that wasn't entirely accurate. Speaking at the company's GTC Workshop Japan event, Nvidia co-founder Chris Malachowsky reportedly said that Kepler is still (or back) on track for launch later this year. However, Malachowsky was careful with his wording, saying only that the parts would begin "shipping" by the end of 2011, and that doesn't necessarily mean you'll be able to purchase one in that window. It's fairly common for tech companies to paper launch products months before they're actually available.

[Image: Nvidia vows to start shipping Kepler GPUs]

When Kepler does arrive (almost certainly under the GeForce 600 series moniker), it will supposedly deliver a threefold increase in double precision performance per watt over Fermi and be easier for developers to utilize for GPGPU applications. Considering it's still a few years out, even less is known about Maxwell. Based on Nvidia's year-old roadmap, its 22/20nm GPUs are expected to yet again triple Kepler's performance per watt and bring a sixteenfold increase in parallel graphics-based computing.
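For a rough sense of what those multipliers imply, here is a minimal sketch that compounds the roadmap claims from a baseline. The baseline figure is an assumption for illustration only (a Fermi-class Tesla board manages on the order of 3 double-precision GFLOPS per watt); only the 3x steps come from the roadmap.

```python
# Rough compounding of Nvidia's claimed roadmap multipliers.
# ASSUMPTION: ~3 DP GFLOPS/W for a Fermi-class Tesla (illustrative baseline);
# only the 3x steps are taken from the roadmap claims.
fermi = 3.0                     # DP GFLOPS/W, assumed baseline
kepler = fermi * 3              # "threefold increase ... over Fermi"
maxwell = kepler * 3            # "yet again triple Kepler's performance per watt"

print(f"Kepler (claimed):  ~{kepler:.0f} DP GFLOPS/W")
print(f"Maxwell (claimed): ~{maxwell:.0f} DP GFLOPS/W ({maxwell / fermi:.0f}x Fermi)")
```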


 
No chance. Nope, just hype. Nvidia should focus on mobile combined with ARM processors, because they lost the battle on the x86 front. Just my opinion, for now...
 
New motherboards to support the new graphics card? How many more hundreds of pounds will this cost just for a basic mobo and graphics card upgrade that we're told we need by the media and the hype that the companies producing the goods shove at us? Why didn't they wait to release Sandy Bridge until the new PCIe slot was in place, instead of the first release and subsequent release of the fixed Sandy mobos and then the Z68 series?

Well, I don't suppose we should really complain. Look at the fab x86/64-bit procs we're still using, getting a little old tech now. Why no 128/256-bit? And then of course there's the good old DDR3 RAM. What happened to cube RAM, or DDR4 at least...
 
According to TSMC, the slow ramping of the 28nm process is/was due to low (at least initial) demand for wafers. Of course this could be either true, or a spur for companies to place larger firm orders for wafer starts.

@mosu
Ah, where would we be without you. Are you and your industrial-sized jar of K-Y hoping for a call from Abu Dhabi or Tom Seifert?
Nvidia presently holds ~90% of the pro graphics and HPC GPGPU market... and they've "lost the battle"? Pfffft.

AMD are still starting from a long way back. Aside from the arch issues (integrated cache, ECC memory controllers, anything larger than a 256-bit bus width), their biggest hurdle is going to be adoption of OpenCL, since they seem content to do little more than release an APP SDK every so often and hope people might decide to do something with it.
 
New motherboards to support the new graphics card? How many more hundreds of pounds will this cost just for a basic mobo and graphics card upgrade that we're told we need by the media and the hype that the companies producing the goods shove at us? Why didn't they wait to release Sandy Bridge until the new PCIe slot was in place, instead of the first release and subsequent release of the fixed Sandy mobos and then the Z68 series?
Technology moves fast... get used to it, and you are not having motherboards shoved in your face... unless you work for Foxconn.

Look at the fab x86/64-bit procs we're still using, getting a little old tech now. Why no 128/256-bit?

And exactly what advantage would a 128 or 256 bit processor have???

What happened to cube RAM, or DDR4 at least...

It's coming within 12 months.

...anything else?
 
red1776 said:
And exactly what advantage would a 128 or 256 bit processor have???

If you understand computing and counting systems other than our base-10 system, you'd see that 64 bits provides us with ungodly big numbers. Numbers so big we mostly don't even use them. That's why you don't see performance increases from 64-bit CPUs for the most part: the numbers being thrown down the (CPU) pipeline usually aren't bigger than a 32-bit value. It's not like a 16-bit number, where the CPU would have to compute in two passes a calculation that a 32-bit CPU can knock out in one round.
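To put rough numbers on that, here is a minimal sketch; the function below is purely illustrative (real CPUs do this with an add / add-with-carry instruction pair, not Python):

```python
# Value ranges for common integer widths.
for bits in (16, 32, 64):
    print(f"{bits}-bit unsigned max: {2**bits - 1:,}")

# Why a narrower ALU needs two passes for a wider number: a 32-bit machine
# adds two 64-bit values as low halves first, then high halves plus the carry.
def add64_on_32bit(a: int, b: int) -> int:
    mask = 0xFFFFFFFF
    low = (a & mask) + (b & mask)           # pass 1: low 32 bits
    carry = low >> 32
    high = (a >> 32) + (b >> 32) + carry    # pass 2: high 32 bits plus carry
    return ((high & mask) << 32) | (low & mask)

assert add64_on_32bit(2**40 + 123, 2**33 + 7) == 2**40 + 123 + 2**33 + 7
```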
 
ghasmanjr said:
Just wow. In a few years, we're going to look at a GTX590 like it is a lowly GTX260.

Surprisingly enough, the GTX260 is still a good standard. Then again, considering all Nvidia specializes in is antiquated versions of DirectX (think buying a console for your PC), maybe it isn't that surprising.
 
The GPU side of Nvidia has been quiet for a while. It seems like GPUs on PC are not the rock stars they used to be... you can see why... the games that are coming out are not really making us go "oh, I cannot wait for the next GPU"... at least consumer-wise... the attention is on the mobile market, where we might see ARM versions of Windows 8, MeeGo, and HP WebOS running on Nvidia dual and quad cores, and maybe even ARM-based PCs (dual booting or not) toward the end of 2012 and early to mid 2013 with the newer discrete GPUs??? New download marketplaces and app stores making things exciting again? No... maybe id Software's new Doom and Quake, just in time for the new GPU powerhouses...
 
It's good to see that the next big bump in GPU tech is coming.
We saw almost a 2x performance boost going from the GTX 200 series to the GTX 400 series, and from the HD 4000 to the HD 5000 series, and (Nvidia's case aside) that boost was also power efficient.
So we can expect the same kind of boost from this new GPU tech.
We could see GTX 580-like performance from a GTX 660 (if the naming scheme continues),
and the enthusiast-series cards will give performance like today's dual-GPU cards.
I will grab a sub-high-end card.
But every good story has a bad part, and in this case we don't have a game that we're still itching to see at max settings, like Crysis when it launched.
Still, it's good for consumers.
 
I'm pumped for new GPUs like the next geek, but what really is going to take advantage of them? Surely not all the dumbed-down console ports as of late. Oh well, I'll still buy one anyway.
 
ghasmanjr said:
Just wow. In a few years, we're going to look at a GTX590 like it is a lowly GTX260.

If that's the case, I'm eager to see what kind of games would be so demanding that a 590 would become lowly.
 
Just in time for Battlefield 3? What were the odds ;)

Although I've still got a GTX260, I have been struggling to find many games that properly max it out. I can run Crysis and Crysis 2, Company of Heroes, Battlefield: Bad Company 2, all Source engine games and, well, pretty much everything at full. Sure, I might have to turn the shader quality down a notch on the original Crysis and the same needs to be done on Battlefield: Bad Company 2, but apart from that it doesn't really struggle too much, usually staying above 30fps, which isn't ideal but still smooth enough to play happily.

I started out with a 7600GT, then moved to the GTX260. Hopefully a GTX660 will be my next big jump :)
But for the sake of the 400 series I pray the 600 series is a bit more power friendly.
 
Okay -- so when do we get to play games that look so real that we can practically TASTE THE FOOD on the plate that the protagonist is eating in GTA 12 (or whatever...)

?!

:D
 
lol..^^

Nvidia doesn't have an APU... so it will further slip into insignificance on the PC and any other device that is trying to be efficient with energy. Laws of physics, bro...
 
But for the sake of the 400 series I pray the 600 series is a bit more power friendly.
Unlikely IMO.
A process shrink usually means that you can either:
1. Keep roughly the same level of performance, but use less power, or
2. Pack more transistors into the same die package, and possibly increase clockspeed.

One of these options generates techgasms across the net; the other gives HTPC builders more range to choose from.

Bear in mind that Kepler is another compute architecture, so it will be packing a lot of transistors that have little relevance to gaming, as will AMD's new GCN architecture. I think you can expect both camps to be pushing as hard (clocks/wattage) as yields and commercially viable stock cooling allow.

GPU (cards)                    Transistors      Die size       Process
GF110/GF100 (GTX 5xx/4xx)      3,000 million    520-529mm²     40nm
GT200 (GTX 2xx)                1,400 million    576mm²         65nm
G80 (8800GTX/Ultra)            681 million      484mm²         90nm
G71 (78xx/79xx)                278 million      196mm²         90nm

And AMD for comparison:

Cayman (HD69xx)                2,640 million    389mm²         40nm
Cypress (HD58xx/5970)          2,150 million    334mm²         40nm
RV770/790 (HD48xx)             956 million      260/282mm²     55nm
RV670 (HD38xx)                 666 million      192mm²         55nm
and the exception that proves the rule...
R600 (HD2900)                  700 million      420mm²         80nm
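To put rough numbers on the shrink and the table above, here is a quick sketch. The densities are computed straight from the figures quoted in the post; the 40nm-to-28nm factor is the ideal geometric case, and real nodes rarely hit it.

```python
# Transistor density implied by the figures above (millions per mm^2),
# plus the ideal geometric area scaling for a 40nm -> 28nm shrink.
dies = {
    "GF110 (40nm)":   (3000, 520),
    "Cayman (40nm)":  (2640, 389),
    "Cypress (40nm)": (2150, 334),
    "GT200 (65nm)":   (1400, 576),
}
for name, (mtrans, mm2) in dies.items():
    print(f"{name}: {mtrans / mm2:.1f} million transistors per mm^2")

ideal_area_factor = (28 / 40) ** 2   # ~0.49
print(f"40nm -> 28nm ideal area factor: {ideal_area_factor:.2f}")
# In other words: roughly half the area for the same chip, or roughly twice
# the transistors in the same area, which is why option 2 above (more
# transistors, higher clocks) is usually the one that gets taken.
```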
 
dividebyzero, your two options don't conflict at all. There are always cards in the new generation which provide the same performance as the previous one for a lower price and less power. The top end has a lot more performance and uses a little more power than the previous gen.

Since the slide in this news item is specifically about performance per watt, I think that NVIDIA is taking this matter seriously. Therefore there's a chance the 600 series will not raise power requirements more, just provide more performance.
 
My wishes for Kepler: bigger memory buffers for multi-monitor gaming and games with hi-res textures, plus, with the 28nm die shrink, hopefully more performance with less heat and noise.
 
dividebyzero, your two options don't conflict at all. There are always cards in the new generation which provide the same performance as the previous one for a lower price and less power. The top end has a lot more performance and uses a little more power than the previous gen.
The "top-end" was/is the primary thrust of my comments. Note that I used the top GPU for each arch/process node in my comparisons.
I think it would be a given that cards that could be named GT 630, GTS 650, HD 7650 et al would be more power efficient- a quick look at the number of (relatively) competant cards that utilise <25w should be proof of that.


Since the slide in this news item is specifically about performance per watt, I think that NVIDIA is taking this matter seriously. Therefore there's a chance the 600 series will not raise power requirements more, just provide more performance.
Neither AMD nor Nvidia would be looking at raising the power requirement, I would think (not that I stated anything to the contrary in any case), since they are already uncomfortably close to the 300W PCI-SIG spec for enthusiast-class graphics cards (dual GPU excepted).
No PCI-SIG validation = no insurance for OEMs.

If you re-read my post, you'll note that I think TDP will not decrease for upper-tier (above mainstream) cards. The HD79xx will almost certainly be a larger and more complex die integrating compute functions, and will, by extrapolation, require a hefty power input... I most certainly wouldn't see Nvidia throwing away all their Fermi R&D and drastically altering their compute strategy.

What will probably eventuate is an evolution of hardware- and software-based power limiters, to further muddy any attempts to pin down accurate independent power usage figures.
 
I agree with you to a point, especially given that console ports are inherently hampering the code paths of their PC counterparts, but my situation is one which begs for a better card...

I have a Dell Studio XPS 8100 and the spec was great until I was given (yes, given!) a brand new Dell 3007 monitor capable of running at 2560x1600. This meant that my hitherto powerful GTX570 was now being dragged into sub-40 framerates, even with an i7-870, SSD and 8GB RAM in support.

The motherboard does not have a second PCIe x16 slot, and so I am left with the dilemma of changing an entire, highly specc'd machine just for the sake of additional GPU horsepower. The very latest CPU is faster than mine, granted, but not by a margin sufficient to warrant the jump.

So, I will wait, and grab a next gen card so that I can continue to enjoy the rest of the machine for a little while longer. I will probably ensure that the next PC has two PCIe x16 slots, and use the next card along with another one, for SLI, meaning I get double the GPU power for the price of one (now much cheaper) next gen card at the time of upgrading.


Best regards all


Neil
 
Just thought I'd post this from VR-Zone

Although the chart and the specs don't necessarily tally if the vertical axis is measuring performance, i.e. if the GK104 (GTX560Ti successor) is truly 250W/384-bit then I'd be sceptical that it offers the same performance as an OC'ed 560Ti. Likewise the dual version of the same GPU. If the vertical axis is just showing the top-to-bottom nature of the product stack, then I'd assume there will be salvage parts filling in the gaps.

A couple of other observations:
If the GK104 is 250W, then I doubt that a dual GK104 is going to tip the scales at much less than 400W... and these are the "small" GPUs. It's also probably a given that the GK112 with a 512-bit memory bus is going to be about as close to the PCI-E 300W limit as is technically possible.
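For context on that 300W figure, here is a small sketch using the standard PCI-SIG per-connector power budgets; the GK104 wattage is just the rumoured figure quoted above, not a confirmed spec.

```python
# PCI-SIG add-in card power budgets: 75W from the x16 slot, 75W per 6-pin
# connector, 150W per 8-pin connector; one of each gives the oft-quoted 300W.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

single_card_limit = SLOT + SIX_PIN + EIGHT_PIN
print(f"Slot + 6-pin + 8-pin: {single_card_limit}W")

# The rumoured 250W GK104 leaves little headroom, and a naive doubling for a
# dual-GPU board (before binning/downclocking) lands well past the ceiling.
gk104_rumoured = 250
print(f"Two GK104s, naively: {2 * gk104_rumoured}W")
```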
 