Epic demos Kepler GPU, could be faster than three GeForce GTX 580s

March 12, 2012, 5:30 PM

A recent demo by Epic Games at the Game Developers Conference in San Francisco suggests that Nvidia's upcoming Kepler GPU could be faster than three GeForce GTX 580 graphics cards. An Nvidia blog post notes that the graphics in the Unreal Engine 3 tech demo are so advanced that, when it debuted, it required no fewer than three GTX 580 cards to run.

The Samaritan demo is no doubt impressive, but Epic Games vice president Mark Rein said that the team is able to get so much more out of the card than what was shown in Samaritan. Behind closed doors, Epic showed off Unreal Engine 4, reportedly powered by a single Kepler GPU.

Epic Games unveiled the Samaritan demo at last year’s GDC. The tech demo impressed onlookers as advanced rendering techniques smoothly tessellated and morphed facial features and created realistic scenes using three GTX 580 cards.

The Kepler demo uses Fast Approximate Anti-Aliasing (FXAA), an anti-aliasing technique designed to improve upon the Multisample Anti-Aliasing (MSAA) commonly found in games today. 4x MSAA in Samaritan's lighting pass consumed close to 500MB of GPU memory. As a shader-based technique, FXAA requires no additional memory, which makes it much more performance-friendly in the demo and gives developers the option to reinvest the freed memory in additional textures or other niceties.
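To get a feel for why MSAA is so costly in a deferred renderer, here is a back-of-envelope sketch. The resolution, render-target format and buffer count below are illustrative assumptions, not Epic's published figures:

```python
# Rough estimate of deferred-rendering memory cost with and without MSAA.
# All inputs are assumptions for illustration, not Samaritan's actual
# buffer layout.
width, height = 2560, 1440   # assumed demo resolution
samples = 4                  # 4x MSAA stores four samples per pixel
bytes_per_sample = 8         # assumed RGBA16F render-target format
num_targets = 4              # assumed G-buffer targets in the lighting pass

msaa = width * height * samples * bytes_per_sample * num_targets
no_msaa = width * height * bytes_per_sample * num_targets

print(f"4x MSAA G-buffer: {msaa / 2**20:.0f} MB")     # ~450 MB
print(f"no-MSAA G-buffer: {no_msaa / 2**20:.0f} MB")  # ~112 MB

# FXAA runs as a post-process shader over the already-resolved image,
# so the per-sample storage above simply disappears.
```

With assumptions in that ballpark, the MSAA total lands close to the 500MB figure quoted for Samaritan's lighting pass.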

FXAA is also noticeably smoother than MSAA, as shown in the demo embedded above.




User Comments: 31

MrTomTom said:

Pictures or it didn't happen.

captainawesome said:

That video is impressive, but no way the in-game graphics are gonna be near that!

Guest said:

You could buy a Kepler GPU card to run the Samaritan demo at a smooth frame rate, or with the same money you could buy the new iPad and run the Samaritan demo on it at 0 FPS. hahahahaha

Guest said:

What happened to the paper launch that was supposedly today? Or is it later today?

Cinders, TechSpot Chancellor, said:

Wow, faster than three GTX 580s. That would be a huge leap forward if it's true. I wonder what the power requirements would be. :|

Guest said:

1) Nvidia and EPIC had more than a year to optimize the drivers/game code to make the game run faster.

2) EPIC stated last year that two GTX 580s were used for graphics and the third was used strictly for physics.

3) GTX 580s in tri-SLI do NOT have 100% scaling. Therefore, in principle that's not 3x the performance of a single 580.

4) FXAA 3 is FAR less GPU intensive than the deferred MSAA mode used in last year's presentation (just look at the 40%+ performance loss in Battlefield 3 with 4x deferred MSAA).

FXAA is a post processing filter, not "real" anti-aliasing. It blurs textures.

Anyway,

The specs for GTX680 are right here:

http://videocardz.com/30951/nvidia-geforce-gtx-680-to-be-released-on-march-22nd-available-for-549

Based on the $549 price, only 320mm^2 die size and 256-bit memory bus, this card was only intended to be GTX670Ti. Since it's fast enough to compete with HD7970, TSMC is having 28nm capacity/yield issues and NV is late with the large-die flagship, they relabelled it as a GTX680.

The "flagship" is delayed and likely won't be available until at least the summer. For now, NV will simply cut the prices of GTX560Ti and GTX570 as there is no replacement for them for the time being.

Overall, meh since NV is essentially selling upper midrange performance for $550. However, given how much more efficient this 320mm^2 256-bit card is, we can expect amazing performance from GTX780 or whatever the "large-die" Kepler is called. Too bad, it's way out on the roadmap...

amstech, TechSpot Enthusiast, said:

I didn't think Nvidia would swing back this hard until the GTX 780.

MrAnderson said:

For this coming generation of PC gaming, it will take lots of work to get games to look like that, especially with CPU cycles and even GPU cycles possibly going to other things...

I just hope that power consumption relative to performance is going down. Having kilowatt power supplies is getting ridiculous. I cannot wait to see what demo Nvidia puts out. I miss those days when Nvidia and ATI used to create short realtime films to show off their tech.

dividebyzero, trainee n00b, said:

Wow, faster than three GTX 580s. That would be a huge leap forward if it's true.

It isn't. Last year's demo (w/ tri-SLI GTX 580s) used multisample anti-aliasing, which imposes a large penalty on the video RAM buffer. FXAA is basically free.

SLI also does not scale linearly, and the game is likely much better optimized, as the Guest posted below you.
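As a rough illustration of the scaling point (a toy model with an assumed, hypothetical per-card scaling factor; real SLI scaling varies by game):

```python
# Toy model of multi-GPU scaling: each card beyond the first contributes
# only a fraction of a full card's performance. The 0.8 factor is an
# assumption for illustration, not a measured SLI figure.
def effective_gpus(cards: int, scaling: float = 0.8) -> float:
    """Approximate combined performance, in single-card units."""
    return 1 + (cards - 1) * scaling

print(effective_gpus(3))  # 2.6 -> "three GTX 580s" behaves more like ~2.6 cards
```

So beating "three GTX 580s" is a noticeably lower bar than beating 3x the performance of one.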

Based on the $549 price, only 320mm^2 die size and 256-bit memory bus, this card was only intended to be GTX670Ti.

Sounds like opinion being paraded as fact. Isn't it more likely that the 670Ti is a salvage part, and that the 680 is the fully operational die at target voltage? Or do you expect all Nvidia's dies to be either 100% functional or non-working? Using that logic, the GTX 570 and 560Ti(448SP)/560Ti OEM wouldn't exist.

Die size isn't strictly relevant in relation to previous (Fermi) design, or possibly Tahiti, since at least one of those (and possibly both) used automated tools to lay out the chips. This process isn't noted for delivering an area-efficient design (see AMD Bulldozer).

TSMC is having 28nm capacity/yield issues

Really? You do realise that AMD's HD 7970, 7950, 7870, 7850, 7770 and 7750, as well as Qualcomm (S4), Xilinx and Altera, are shipping for revenue on TSMC's 28nm processes (28nm HP, HPL and LP)...it hasn't stopped any of them producing a chip. Given that Nvidia's large chip (GK110) is still first and foremost for the professional market (remember also that Cray has first dibs on a sizeable percentage of the first production run), it's more likely that die area, memory controller, cache, and performance per watt are more problematic than capacity or process.

For now, NV will simply cut the prices of GTX560Ti and GTX570 as there is no replacement for them for the time being.

Since the GTX 680 is a direct replacement for the GTX 580, you'd think that the 580s would be the ones being price cut/EOL'ed, since there is plenty of room to lower 580 pricing without upsetting the $US320-360 pricing of the next-tier SKU (GTX 570). Strangely enough, this is exactly what seems to be happening.

Since it's fast enough to compete with HD7970,....Overall, meh since NV is essentially selling upper midrange performance for $550.

So what's more meh? AMD's single fastest non-dual card (HD 7970) supposedly offering only "upper midrange performance", or Nvidia matching that performance and price?

What happened to the paper launch that was supposedly today? Or is it later today?

There was no launch scheduled for today. The only thing I've seen referencing 12th March and Kepler was a "story" from Charlie "Can't spell Charlie without L-I-E" D over at SemiArticulate. My guess would be that Charlie put up the story in order to accumulate page hits and get his name in print, and once the story was repeated often enough by secondary sources it went from being parsed as "satire"/"ramblings cobbled together from musings on Chinese tech sites" to "fact" by some people. Charlie could then follow up the first "story" with an "Nvidia misses 12th March deadline, is late, deathknell only a matter of days away" front-page article.....then again, [link]

St1ckM4n said:

That video didn't impress me. At all.

I thought we had all this stuff 2 years ago. And the textures were poor.

soldier1969 said:

I'll take a single 4GB GTX 680 to replace my two 3GB GTX 580s, please...

hahahanoobs said:

Guest said:

Based on the $549 price, only 320mm^2 die size and 256-bit memory bus, this card was only intended to be GTX670Ti. Since it's fast enough to compete with HD7970, TSMC is having 28nm capacity/yield issues and NV is late with the large-die flagship, they relabelled it as a GTX680.

Word is, 670 Ti was an internal name.

[link]

Same Guest said:

Overall, meh since NV is essentially selling upper midrange performance for $550.

It's priced the same as what nVIDIA said is its AMD equivalent - the 7970.

hahahanoobs said:

I have a hunch we will eventually see a GTX 685 in addition to the GTX 680.

Guest said:

dividebyzero,

"Die size isn't strictly relevant in relation to previous (Fermi) design"

If you look back at Nvidia's GPU history, you'd note:

1) Huge memory bandwidth increases over previous generation high-end chips (GTX680 aka GK104 brings none over GTX580)

2) Large die size chips designate high-end parts, a part of their large monolithic die strategy (i.e., 450mm^2+)

3) Performance increase on average of 50-100% vs. previous high-end. GK104 is unlikely to beat GTX580 by an average of 50%

ALL rumors, even going back 6 months, pointed to NV countering HD7970 with a GK104-style chip and releasing a much larger GK110 (or GK112, as some called it) much later in 2012. It seems this is coming true. NV is unable to launch their flagship on time. They have no choice but to launch GK104 as a placeholder.

The fact that NV's GK106/107 are also no-shows just goes to show NV isn't ready to launch the Kepler series. 28nm yield issues and capacity constraints are widely documented and have even been cited in NV's investor/financial-analyst calls.

Go read some sources and you'll know that TSMC will not ramp up 28nm fab production until later in the year, around Q3. It's simply TSMC's own timeline. The fact is both NV and AMD held back on performance due to the slow ramp-up of the 28nm process. Prices are also pressured since 28nm wafers cost MORE than 40nm wafers did. But all that means is consumers are better off waiting 6 months, which shouldn't be hard to do given that 99% of PC games in 2012 are console ports.

The fact that GK104 will hardly beat HD7970 by more than 10% is a 100% indication that it's not NV's high-end, because HD7970 beats GTX580 by only 25%! Nvidia simply couldn't deliver the real flagship and now consumers are going to be stuck with $550 cards that normally would only be $399 or so given their performance levels vs. previous high-end.

dividebyzero, trainee n00b, said:

dividebyzero,

"Die size isn't strictly relevant in relation to previous (Fermi) design"

If you look back at Nvidia's GPU history, you'd note:

1) Huge memory bandwidth increases over previous generation high-end chips (GTX680 aka GK104 brings none over GTX580)

Immaterial to the argument. Moreover, bandwidth isn't an accurate measure of performance. Let me illustrate:

GTX 480 : 384-bit / 8 x 3696 MHz effective = 177.4 GB/sec

HD 6970 : 256-bit / 8 x 5500 MHz effective = 176 GB/sec

and if the performance difference isn't readily apparent with that, try this:

HD 2900XT: 512-bit/ 8 x 2000MHz effective = 128GB/sec

GTX 560Ti : 256-bit/ 8 x 4008MHz effective= 128.27GB/sec
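The same theoretical-bandwidth formula, bus width in bytes times effective memory clock, produces all four figures; a quick sketch for anyone checking the arithmetic:

```python
# Theoretical memory bandwidth: (bus width in bits / 8) bytes per transfer,
# multiplied by the effective memory clock in MHz, gives MB/s.
def bandwidth_gb_s(bus_bits: int, effective_mhz: int) -> float:
    return bus_bits / 8 * effective_mhz / 1000  # GB/s

for name, bus, mhz in [("GTX 480",   384, 3696),
                       ("HD 6970",   256, 5500),
                       ("HD 2900XT", 512, 2000),
                       ("GTX 560Ti", 256, 4008)]:
    print(f"{name}: {bandwidth_gb_s(bus, mhz):.1f} GB/sec")
```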

2) Large die size chips designate high-end parts, a part of their large monolithic die strategy (i.e., 450mm^2+)

And? I don't believe anyone is arguing that Nvidia doesn't, at this point in time, pursue a big-die strategy. That is pretty much self-evident from the need for a wide-buswidth memory controller (more important for compute than gaming), on-die cache, provision for 72-bit ECC memory, etc.

What I would argue is that, having been burnt by a big-die-first strategy on 40nm with the exceedingly late-to-market GTX 480/470 (and with no full 512-shader die), Nvidia probably erred on the side of caution by going with a mid-sized die on a new 28nm process, likely given TSMC's problems with (and eventual abandonment of) 32nm. It's hard enough ensuring a new architecture on a new node works out well without having to factor in possible process problems with your sole foundry partner. Why complicate matters with a huge die that would take a correspondingly long time to debug?

Performance increase on average of 50-100% vs. previous high-end. GK104 is unlikely to beat GTX580 by an average of 50%

How often do you see a 100% increase in performance? Practically never. Making up arbitrary numbers and calling them fact doesn't make the point (however vague) you're trying to make valid.

8800GTX/Ultra to 9800GTX, 9800GTX to GTX 280, GTX 285 to GTX 480...none represent a 100% increase in performance. You could possibly argue for a 7900GTX/GTO to G80 in some circumstances...but that kind of goes against your "big die" hobbyhorse (G71 being 196mm²).

They have no choice but to launch GK104 as a placeholder

Wrong. [link] .

The fact that NV's GK106/107 are also no-shows just goes to show NV isn't ready to launch the Kepler series

You probably need to pay closer attention to what's going on. GK107 has been in the wild somewhat longer than GK104 (I was going to link to the earlier video benchmarks that have been doing the rounds, but they've been pulled).

28nm yield issues and capacity constraints are widely documented etc, etc...

And I repeat: It.Hasn't.Stopped.Anyone.Releasing.A.Chip. Being constrained on wafer production and/or yield is one thing; being unable to produce a chip at all is entirely another. Considering TSMC's 28nm production (including low-power) is around 2% of their total wafer output, I'd be a little surprised if they could provide for the whole market. You might also want to consider that Nvidia surrendered a portion of its 28nm wafer starts. If GK110 were ready to go now, then they would quite simply sacrifice the GK104 to go with the high-value compute card. AMD sure as hell don't have more wafers being baked at TSMC than Nvidia; it hasn't stopped people from being able to buy HD 7000 cards.

Everything points to a 23rd March launch for the GK104, so [link]

Nvidia simply couldn't deliver the real flagship and now consumers are going to be stuck with $550 cards that normally would only be $399 or so given their performance levels vs. previous high-end.

Does.Not.Compute.

Nvidia will price relative to performance. If performance is broadly equal to an AMD card then they will price accordingly, probably with a tariff on top allowing for the brand. Rory Read and AMD have set the price/performance bar as far as 28nm goes. Nvidia simply wouldn't sell a card that outperforms a card already on the market for less; it makes no sense, any more than AMD releasing the HD 7970 at the same (or lower) price as a GTX 580, which it has a comfortable performance margin over....and since people are buying HD 7970s at $549+, it proves the market exists for performance at that price. Q.E.D....and the only way that changes is if one company decides to institute a price war and slashes prices. The consumer has determined what these cards will sell for. AMD and Nvidia will exploit that.

You might note that Nvidia's recent second-tier cards have had only one SLI connector, with a maximum of two cards supported in SLI (GTX 460, 560). The 680 features two SLI connectors, which indicates at least three-card, and probably four-card, SLI support, as is usual for the top-tier SKUs. When was the last time a top-tier Nvidia card debuted at $399 MSRP?

BTW: A GK110 at the rumoured specs? No way it retails at $550 or anywhere close. The market/signed contracts for Quadro and Tesla versions of the GPU would dictate that few would see the light of day as gaming cards, and I really wouldn't be surprised at a $700+ price tag when they arrive.

I still haven't seen any convincing argument that GK110 was ever intended to be released as a 600-series card. From all accounts, the initial 28nm offerings were going to follow AMD/ATi's recent model (i.e. a mid-sized die with the top SKU being a dual card**), with the GK110 to follow at a later date, probably as the GTX 780 to combat the HD 89xx cards.

EDIT: ** Dual-card GTX 690 in May according to 3DCenter, with GK110 somewhat later, indicating that it would be closer to a 700-series part than a 600-series one.

emmzo said:

A single 600-series card vs. three 580s? We've seen this kind of advertising over and over again. Even if such tech is available, marketing dictates small leaps to maximize profits. The 680 will only be 10 to 15% faster than the 7970, so probably not even 40% over the 580, and the price will also get bigger since AMD has a stable market and won't cut prices anymore.

indiangamer said:

It looks good, but not so good that it requires three GTX 580s to run. This shows how weak our software development is. Consoles are 10 times weaker than current-generation GPUs, but they don't look 10 times worse.

And Epic only shows demos. I think they hire hundreds of software engineers to work for years to make a 2:46-minute tech demo. Look at every game using Unreal Engine 3; they look foolish by comparison.

Guest said:

Wrong. [link] .

?

That link itself showed that GK110 was the real replacement for GTX580, while GK104 was always intended to be a replacement for GTX570.

I don't know what you are arguing about. Whether NVidia launches GK110 as a GTX780 is irrelevant.

It's like Nvidia launching a GTX460 first, skipping the GTX480 entirely due to issues, and re-releasing it as a GTX580 --> aka the 780.

GK104 is not the real high-end Kepler chip. It might be the highest-end single GPU of the 600 series, but that's just marketing and naming.

Also, you are dead wrong that previous NV high-end generations didn't bring a 50-100% performance increase. They all did. GK104 will bring the least performance increase in that regard.

http://alienbabeltech.com/abt//viewtopic.php?t=21797

Comparing memory bandwidth across different brands just shows you are either not knowledgeable or trying to skew the argument by ignoring the point. You should ONLY compare memory bandwidth within the same brand. I never said anything about AMD vs. Nvidia in terms of memory bandwidth. In Nvidia's case, it did increase memory bandwidth every time it released a new high-end GPU. Considering Kepler GK104 doesn't move that bar at all, it's upper midrange. Nvidia is just launching it as high-end because they COULD NOT get the real flagship card out. And since GK104 is good enough to compete with HD7970, it's their "flagship" holdover card.

If you don't want to believe that, it's your choice. But that's a fact. Just speak to any Nvidia employee. *wink*

EXCellR8, The Conservative, said:

Def a pretty badass tech demo...

Adhmuz, TechSpot Paladin, said:

Guest said:

If you don't want to believe that, it's your choice. But that's a fact. Just speak to any Nvidia employee. *wink*

Because we know what Nvidia employees tell us is 100% fact... Don't be so naive.

Guest said:

Comparing memory bandwidth across different brands just shows you are either not knowledgeable or trying to skew the argument by ignoring the point. You should ONLY compare memory bandwidth within the same brand.

The point was more to illustrate that memory bandwidth doesn't determine the performance of a GPU. Ever consider that the need for more bandwidth doesn't actually exist? At least not with current GPU technology, i.e. the GPU is now the bottleneck and not the memory.

As for the video, the original running off the 580s looks better imo. This one looks muddy and not as sharp.

dividebyzero, trainee n00b, said:

Also, you are dead wrong that previous NV high-end generations didn't bring a 50-100% performance increase. They all did. GK104 will bring the least performance increase in that regard.

GTX 480 + 39% over GTX 285

GTX 280 + 39% over 9800GTX

9800GTX -2% vs. 8800GTX

...Yeah, right, "they all did"......0 fer 1

Comparing memory bandwidth across different brands just shows you are either not knowledgeable or trying to skew the argument by ignoring the point. You should ONLY compare memory bandwidth within the same brand

So you think that comparing Nvidia-to-Nvidia will somehow validate your Walter Mitty world view:

GTX 280 : 512-bit / 8 x 2214 MHz effective = 141.7 GB/sec

GTX 560Ti : 256-bit / 8 x 4008 MHz effective = 128.27 GB/sec

Oops.....0 fer 2

I think I'll stop there, since the rest of your posting really isn't worth the effort, and provides no more than a textbook example of why I don't usually bother even reading the posts of the bulk of Guest posters.

St1ckM4n said:

...

Just speak to any Nvidia employee. *wink*

Also I agree with someone above: this junk shouldn't max out even a single 580. I don't know if I just can't see it through the YouTube compression, but is the entire focus on AA and texture filtering, along with some other filters/effects...?

Also, for consoles to look 10x worse, you really need to be looking for this stuff. From a couch position, to the casual gamer, consoles are mind-blowing.

dividebyzero, trainee n00b, said:

The fact that NV's GK106/107 are also no-shows just goes to show NV isn't ready to launch Kepler.

Straight from the Guest posting accuracy contest:

[link]

Ryan Shrout

Review/preview here

Guest said:

I laugh at the silly kids who spend a fortune on computer hardware...so quickly outdated lol

St1ckM4n said:

I laugh at the silly kids who spend a fortune on computer hardware...so quickly outdated lol

Technically, yes, outdated. In the real world though, people can still use high-end hardware from many generations past with no problems.

Guest said:

dividebyzero:

1) Using TechPowerUp to show advancement of GPUs from one generation to another is not accurate. You are comparing performance increases using older games at launch dates. Try newer games. For example, HD5870 was barely 40% faster than HD4870 at launch. In modern games, it's easily 75-100% faster. GTX480 is at least 50% faster than GTX285, and more in modern games.

I am not even going to revisit all the previous reviews going back to GeForce 3. I know for a fact that the GeForce 6800 Ultra was more than 2x faster than the FX5950, that the 7800GTX 512MB was 2x faster than the 6800 Ultra, and that the 8800GTX was more than 2x faster overall than the 7900GTX. GTX280 was also at least 50-75% faster than 8800GTX. But you used the 9800GTX+ (that's a refresh, so the comparison is a failed one).

Check any modern website: Tom's Hardware, TechReport, Xbit Labs, AnandTech, ComputerBase, TechSpot. You are so off the mark, it's not even funny. TPU uses lower resolutions, which skews the results, since no enthusiast games on a $500 GPU at anything below 1920x1080. By averaging across resolutions, you aren't stressing the high-end GPUs enough.

It's laughable to think NVidia brings only 30-40% performance increases from one high-end to the next. GTX480 vs. GTX280 in Batman: AC, Metro 2033 or The Witcher 2. Go test it. See what happens.

Here, 5 minutes of googling "graphics evolution" got me this:

http://www.computerbase.de/artikel/grafikkarten/2011/bericht-grafikkarten-evolution/3/#abschnitt_leistung_mit_aaaf

GTX480 is at least 51% faster than GTX280

8800GTX is > 2x faster than 7900GTX.

It looks like there is no point in arguing with you. So I'll just leave it at that.

2) Why are you talking about mobile GK106/107 parts? We are discussing desktop parts. Nvidia isn't ready to launch GK106/107 desktop parts in volume until at least April or May.

That's 2/2 for me.

Crazy fanboy.

dividebyzero, trainee n00b, said:

blah blah...while 8800GTX was more than 2x faster overall than 7900GTX

Already covered, if you'd bothered to read.

You could possibly argue for a 7900GTX/GTO to G80 in some circumstances...but that kind of goes against your "big die" hobbyhorse (G71 being 196mm²).

Standard anonymous poster strategy: jump onto another little pony as soon as the one you're on gets shot out from under you.

BTW: You should go straight for the ISA-to-AGP graphics comparisons; some pretty big percentage gains going from those 2D cards.

Why are you talking about mobile GK106/107 parts? We are discussing desktop parts

Nope....

The fact that NV's GK106/107 are also no-shows just goes to show NV isn't ready to launch the Kepler series

Don't see any stipulation for desktop there. All I'm seeing is a whole lot of prevarication.

...the architecture and GPU remain the same in any case; the fact that the GPU resides in an MXM (mobile PCI-E) module as opposed to PCI-E makes no difference.

The rest isn't worth the effort. It's pretty stupid to compare 2011 games on cards that have been EOL since long before a lot of those games debuted...unless you think that any company optimizes its new drivers for 3-, 4-, 5-, 6-year-old cards.

BTW: Your link references the GTX 280 and 480.

GTX480 is at least 51% faster than GTX280

For the sake of completeness you should be looking at the top single-GPU card from one generation to the next. G200b (GTX 285) was the "big die" prior to GF100. The higher core/shader/memory clocks translate into a 7% higher average framerate (1920x1080).

Anyway, I'm done. Maybe someone else would like to keep you company on your increasingly meandering thought process...

Guest said:

For the sake of completeness you should be looking at the top single-GPU card from one generation to the next. G200b (GTX 285) was the "big die" prior to GF100.

That makes no sense. GTX285 is a refresh of GTX280. Therefore, on a generational timeline it's only fair to compare GTX480 vs. GTX280 (>50% performance increase) and GTX580 (GTX480's refresh) vs. GTX285 (GTX280's refresh).

Looks like when you lose arguments you just resort to insults. Typical childish behaviour.

Meh. Don't care. When the flagship GTX685/GTX780 is >50% faster than GTX580, enthusiasts will be maxing out games with them, while you'll be testing 2009-2010 games like TPU does, to skew the average performance increase to prove some point online.

You realize people don't throw out their videocards after 6 months? That's why testing newer games is representative. TPU's HD5870 review is a laughing stock. The HD5870 was made for modern games, not to play Quake and Unreal games. Just saying.

mailpup said:

Just a reminder to everyone to make your points without personal comments. Thanks.

Guest said:

That was a good read, the 2 of you made some good points.

Personally I own two 285s and haven't seen the need to get a 480/580, since they're not all that much better than what I have.

As long as we are being held back by the consoles with constant ports, I don't see why PC gamers should spend ridiculous amounts on these 680/780 parts.

As far as I can tell, the next time I will need a new card to compete with an Xbox 720/PS4 is when the part numbers are like 980/985. That is, if the rumours of new consoles at E3 this summer are true.

I have done the crazy, foolish upgrade trip since the GeForce 256 and ATI 8500 days.

I am willing to bet those early days, 2000-2005, are going to make a comeback with the new consoles :) and card refreshes every 6 months.

dividebyzero, trainee n00b, said:

That was a good read, the 2 of you made some good points

Thanks- we aim to please.

You'll note that sometimes the signal-to-noise ratio tends to take a dive in graphics "discussions", so I'm not entirely sure how much benefit is derived.

An example: our Guest's original premise...

3) Performance increase on average of 50-100% vs. previous high-end. GK104 is unlikely to beat GTX580 by an average of 50%

...but by his own argument...

That makes no sense. GTX285 is a refresh of GTX280. Therefore, on a generational timeline it's only fair to compare GTX480 vs. GTX280

...he should be comparing the GTX 680 with the GTX 480 - since the GTX 580 is a....refresh.

I'm not convinced in any case that average frames-per-second is the only metric that "performance" should be judged by. The overriding factors (for gaming cards) should be game playability (single and multi-GPU), driver optimization, and for some people, thermal, acoustic, and power-usage characteristics (some might argue performance/$, performance/watt...depending on which metric favours their preferred brand at the time, if the forums are to be believed).

I am willing to bet those early days, 2000-2005, are going to make a comeback with the new consoles and card refreshes every 6 months.

If/when there are refreshes at that rate, the new card series would likely amount to some new box art and some higher-numbered digits. The present cycle, I think, is still likely to be linked to a yearly cadence, at least for a while. Both AMD and Nvidia need to recoup their investment on each series, and neither seems predisposed to accelerate their timelines. TSMC (the foundry for both AMD's and Nvidia's GPUs) looks to continue 28nm for complex chips through to early 2014 (with production moving from 300mm wafers to 450mm), which means that transistor densities, and the constraints they bring, will limit any significant gains.

As the majority of computer users buy pre-builts from OEMs and neither know nor care about anything other than "does it work" and "bigger number means better, right?", I'm sure we'll see the usual slightly tweaked, or outright renamed, cards on a fairly regular basis, but architectural changes should hold steady ([link], for example).

