Nvidia teases Fermi successor, plans for CUDA x86

Matthew DeCarlo


During its GPU Technology Conference today, Nvidia offered a glimpse of its plans for upcoming graphics hardware. Judging by the CUDA GPU roadmap it shared, the company intends to launch a new architecture roughly every two years, though CEO Jen-Hsun Huang wouldn't explicitly commit to that cadence.

Codenamed Kepler, the next-gen chip should enter production this year and ship sometime in 2011. It will be built on a 28nm fabrication process and should deliver three to four times the performance per watt of today's Fermi-based GeForce 400 products.

Further down the line, Nvidia expects to ship another architecture sometime in 2013. Known internally as "Maxwell," the GPU is slated to roughly triple Kepler's performance per watt again and bring a sixteen-fold increase in parallel computing performance.

In addition to sharing its mainstream GPU roadmap, the company said it has partnered with The Portland Group (PGI) to develop a CUDA C compiler for x86 platforms, which will be demonstrated in November at the SC10 supercomputing conference.
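For context, here is a rough sketch of the kind of source such a compiler would consume (an illustrative SAXPY example of ours, not code from Nvidia's or PGI's announcement). CUDA C expresses parallelism as a grid of threads; a CUDA-for-x86 toolchain would map that same source onto multicore CPU threads and vector units instead of requiring an Nvidia GPU.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative SAXPY kernel: y = a*x + y, one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers; on a CUDA-x86 build these would presumably live in
    // ordinary system memory, making the copies cheap or unnecessary.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch a grid of 256-thread blocks covering all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The appeal for HPC customers would presumably be a single source base: the same kernel runs on Tesla hardware where it's available and falls back to the CPU where it isn't.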


 
Okay, with the 5000 series owning this generation, the 400 series not that much faster (even with Nvidia sweating to release it), and the 6850 coming at the beginning of next year, Nvidia couldn't put out any marketing graphic other than that.

I really don't like how fast they're releasing new GPUs. They're expensive, and we buy them just to play games, not for internet, work, homework, music, movies, etc. So they aren't really worth it. And my new and expensive 5870 doesn't make me that proud with the next 6850 so damn close :(
 
Guest said:

I really don't like how fast they're releasing new GPUs. They're expensive, and we buy them just to play games, not for internet, work, homework, music, movies, etc. So they aren't really worth it. And my new and expensive 5870 doesn't make me that proud with the next 6850 so damn close :(

It doesn't matter how fast they release new tech; the game industry doesn't move quite as fast, so most new cards will probably be irrelevant to most users. Most of us are not even running DX11 yet (I myself just bought a 5870 not long ago) and the bulk of games that will utilize it are not even out yet. It's not like the HD6xxx series is going to be leaps and bounds ahead of the HD5xxx series anyway...
 
Well, a lot of people are in the same boat; everyone buys into current-gen hardware just to see the next gen months away. In this case it's Nvidia shaming those who bought Fermi cards, and that's a real stab in the back. Anyone who has waited now gets to wait longer too. And here I was looking forward to Fermi revisions based on the GTX 460 die, like a 485 or whatever they'd call it. Well, I'm happy with my 5870; it does what I need just fine.
 
Exactly, I'm loving my 5870 too, and if I had stayed with my 4870 I'd probably be fine with that as well. Like I said, there's still only a handful of DX11 games on the market at the moment, with the next big batch set for early next year. That's good news for people who are waiting for the HD6xxx cards, but it's not like HD5xxx users will be left in the dust. I'm sure AMD has a similar roadmap in store for its GPUs, but it's no different from what's already going on with graphics cards...

That's just how it is with buying high-performance PC hardware... deal with it.
 
Why is it that most of the comments about Nvidia cards are made by little boys who only play games? Nvidia cards and CUDA are used extensively in the geoscience, medical, mining, and movie industries. What do 70% of the world's supercomputers run on? Not ATI graphics cards.
 
Maybe because "little boys" like us are probably 90% of the users of those things?
 
Guest said:
Why is it that most of the comments about Nvidia cards are made by little boys who only play games? Nvidia cards and CUDA are used extensively in the geoscience, medical, mining, and movie industries. What do 70% of the world's supercomputers run on? Not ATI graphics cards.

Well, excuse us for not wanting to pay $2,000 for a GPU; I know Nvidia would like to think we do. And about the comment concerning DX11 games: the hardware actually can't handle a true DX11 implementation. Try tessellating everything in any current game on a single GPU; not happening. Even games that implement some DX11 features have trouble keeping a decent frame rate on most hardware. Think about it: the hardware will be used when it becomes available. That's all I have to say.
 
If they know what they're going to do, then why not just do it!? :)
Here is your answer:
The manufacturing process of choice is 22nm, something TSMC hopes to have in 2013, but that still leaves quite a gap in 2012 when there won't be anything really new, just maybe a tweak of Kepler.

This is a long time to wait for a new architecture, but at the same time it's all about the fabs, which simply won't be ready. 28nm only arrives in 2011, and until 2013 we will be stuck with it.
 
I think this pace of technological progress is great. And no, you don't need to buy every new GPU unless you are a developer or you have "Tim the Toolman Taylor (TTT)" syndrome, which some of us do. Games hardly start using the full power of current-gen cards until about midway through their next-generation successors' lifecycle. In other words, most games work fine on last year's redesigned cards.

I've learned to buy within the redesign lifecycle, unless I skip a generation or I see a really solid product.

As for the 28nm process, I do hope this transition goes more smoothly for everyone, and I do mean everyone. I think it's better to watch products compete on the actual technology and not on how far ahead of the competition they made it to market. Manufacturing problems aside, I hope to see releases evening out this time. That would allow them to make faster corrections and redesigns, which is always a win for customers. And price battles too.

Also, did you consider that they might work with another manufacturing partner? GlobalFoundries could be an option, right? I do hope they get past their manufacturing issues regardless. So let's hope TSMC can get its act together this round.
 
Sounds good, as long as the price, power requirements, and heat generation aren't worse than Fermi's. And what does 'Tim the Toolman Taylor syndrome' mean? haha =/
 
OH THANK GOD Nvidia got some form of x86 technology. Without that they'd be dead in the water; as much as some will claim otherwise, Nvidia makes most of its money from consumer graphics sold to the masses, whether in simple GPUs, consoles, or mobile devices.

With AMD Fusion and Intel Sandy Bridge, Nvidia has nowhere to run and is left with the small fragment of the market that is the high-end gaming segment. With AMD making so much more money from Fusion, money Nvidia can't tap, Nvidia can't compete there either. But with an x86 architecture it may be able to build some kind of competitor to Fusion and Sandy Bridge and open the market up a little.

Or, of course, they could run off into pure supercomputing and leave consumer graphics entirely behind. But AMD competes there too, and if/when Nvidia cuts itself off from too many lifelines, AMD will just squash it there as well.

Nvidia needs to be very careful at the moment; too many of the market niches it needs to survive are being eaten up by AMD and Intel.
 