> Wow, faster than three GTX 580s. That would be a huge leap forward if it's true.

It isn't. Last year's demo (with tri-SLI GTX 580) used multisample antialiasing, which imposes a large penalty on the video RAM buffer. FXAA is basically free.
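The memory penalty mentioned above can be sketched with back-of-the-envelope arithmetic: a multisampled render target stores every pixel N times, while FXAA runs on the already-resolved image. A minimal Python sketch — the resolution and buffer formats are illustrative assumptions, not figures from the thread:

```python
# Rough estimate of the extra video memory MSAA needs versus a
# post-process AA pass like FXAA, which works on the resolved image.

def framebuffer_mb(width, height, bytes_per_pixel, samples):
    """Color + depth/stencil storage for a multisampled render target."""
    color = width * height * bytes_per_pixel * samples
    depth = width * height * 4 * samples  # 24-bit depth + 8-bit stencil
    return (color + depth) / (1024 * 1024)

res = (2560, 1600)  # a common high-end gaming resolution of the era
no_aa = framebuffer_mb(*res, bytes_per_pixel=4, samples=1)
msaa4x = framebuffer_mb(*res, bytes_per_pixel=4, samples=4)

print(f"no AA / FXAA target: {no_aa:.2f} MB")
print(f"4x MSAA target:      {msaa4x:.2f} MB")
```

The multisampled target is 4x the size before any extra G-buffer or HDR targets are counted, which is why MSAA eats video RAM and FXAA is "basically free".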
> Based on the $549 price, only 320mm^2 die size and 256-bit memory bus, this card was only intended to be GTX670Ti.

Sounds like opinion being paraded as fact. Isn't it more likely that the 670 Ti is a salvage part, and that the 680 is the fully functional die at target voltage? Or do you expect all Nvidia's dies to be either 100% functional or non-working? By that logic, the GTX 570 and 560 Ti (448SP)/560 Ti OEM wouldn't exist.
> TSMC is having 28nm capacity/yield issues

Really? You do realise that AMD's HD 7970, 7950, 7870, 7850, 7770 and 7750, as well as Qualcomm (S4), Xilinx and Altera, are shipping for revenue on TSMC's 28nm processes (28nm HP, HPL and LP)...it hasn't stopped any of them producing a chip. Given that Nvidia's large chip (GK110) is still first-and-foremost for the professional market (remember also that Cray has first dibs on a sizeable percentage of the first production run), it's more likely that die area, memory controller, cache, and performance per watt are more problematic than capacity or process.
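The die-area point can be illustrated with the textbook first-order Poisson yield model, Y = exp(-A·D0): defect-free yield falls off exponentially with die area. The defect density below is a placeholder assumption for an immature process, not a published TSMC figure; the die areas are the 320mm^2 claimed in the thread and a nominal big-die size:

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Classic first-order Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Illustrative defect density for a young 28nm node (pure assumption).
d0 = 0.5  # defects per cm^2

for name, area in [("GK104-class (~320 mm^2)", 320),
                   ("big-die flagship (~550 mm^2)", 550)]:
    print(f"{name}: ~{poisson_yield(area, d0):.0%} of dies defect-free")
```

Whatever the real defect density, the model shows why a ~550mm^2 die is disproportionately harder to yield than a ~320mm^2 one on the same process — capacity aside.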
> For now, NV will simply cut the prices of GTX560Ti and GTX570 as there is no replacement for them for the time being.

Since the GTX 680 is a direct replacement for the GTX 580, you'd think the 580s would be the ones being price cut/EOL'ed - there is plenty of room to lower 580 pricing without upsetting the $US 320-360 pricing of the next-tier SKU (GTX 570). Strangely enough, this is exactly what seems to be happening.
> Since it's fast enough to compete with HD7970,.... Overall, meh since NV is essentially selling upper midrange performance for $550.

So what's more meh? AMD's single fastest non-dual card (HD 7970) supposedly offering only "upper midrange performance", or Nvidia matching that performance and price?
> What happened to the paper launch that was supposedly today? Or is it later today?

There was no launch scheduled for today. The only thing I've seen referencing the 12th of March and Kepler was a "story" from Charlie "Can't spell Charlie without L-I-E" D over at SemiArticulate. My guess would be that Charlie put up the story to accumulate page hits and get his name in print, and once it was repeated often enough by secondary sources it went from being parsed as "satire"/"ramblings cobbled together from musings on Chinese tech sites" to "fact" by some people. Charlie could then follow up the first "story" with an "Nvidia misses 12th March deadline, is late, deathknell only a matter of days away" front-page article... Then again, if you're a devotee of the Cult of Charlie, you just might believe that the world's largest supplier of discrete graphics cards deliberately held back a launch to make Charlie "seem wrong".
Guest said:
> Based on the $549 price, only 320mm^2 die size and 256-bit memory bus, this card was only intended to be GTX670Ti. Since it's fast enough to compete with HD7970, TSMC is having 28nm capacity/yield issues and NV is late with the large-die flagship, they relabelled it to a GTX680.

Word is, 670 Ti was an internal name.
Same Guest said:
> Overall, meh since NV is essentially selling upper midrange performance for $550.

It's priced the same as what nVIDIA said is its AMD equivalent - the 7970.
> dividedbyzero, "Die size isn't strictly relevant in relation to previous (Fermi) design"
> If you look back at Nvidia's GPU history, you'd note:
> 1) Huge memory bandwidth increases over previous generation high-end chips (GTX680 aka GK104 brings none over GTX580)

Immaterial to the argument. Moreover, bandwidth isn't an accurate measure of performance. Let me illustrate:
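The "brings none over GTX580" part of the bandwidth claim is simple arithmetic: peak memory bandwidth is bus width (in bytes) times effective data rate. A quick Python check using the commonly cited reference-card memory specs:

```python
# Peak memory bandwidth = (bus width / 8 bytes) * effective data rate.
# Clock figures are the commonly cited reference-card GDDR5 specs.

def bandwidth_gbs(bus_bits, effective_mtps):
    """GB/s from bus width in bits and effective transfer rate in MT/s."""
    return bus_bits / 8 * effective_mtps / 1000

gtx580 = bandwidth_gbs(384, 4008)  # 384-bit GDDR5 @ 4008 MT/s
gtx680 = bandwidth_gbs(256, 6008)  # 256-bit GDDR5 @ 6008 MT/s

print(f"GTX 580: {gtx580:.1f} GB/s")  # both land at roughly 192 GB/s
print(f"GTX 680: {gtx680:.1f} GB/s")
```

The narrower 256-bit bus is almost exactly offset by the faster memory clock, so the two cards end up with near-identical peak bandwidth — which is the factual core of the guest's point, even if bandwidth alone doesn't determine performance.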
> 2) Large die size chips designate high-end parts, a part of their large monolithic die strategy (i.e., 450mm^2+)

And? I don't believe anyone is arguing that Nvidia doesn't, at this point in time, pursue a big-die strategy. That is pretty much self-evident from the need for a wide-bus memory controller (more important for compute than gaming), on-die cache, provision for 72-bit ECC memory, etc.
> Performance increase on average of 50-100% vs. previous high-end. GK104 is unlikely to beat GTX580 by an average of 50%

How often do you see a 100% increase in performance? Practically never. Making up arbitrary numbers and calling them fact doesn't make the point (however vague) you're trying to make valid.
> They have no choice but to launch GK104 as a placeholder

Wrong. GK104 seems to have always been a part of the lineup.
> The fact that NV's GK106/107 are also no show just clearly goes to show NV isn't ready to launch Kepler series

You probably need to pay closer attention to what's going on. GK107 has been in the wild somewhat longer than GK104 (I was going to link to the earlier video benchmarks that have been doing the rounds, but they've been pulled).
> 28nm yield issues and capacity constraints are widely documented etc, etc...

And I repeat: It. Hasn't. Stopped. Anyone. Releasing. A. Chip. Being constrained on wafer production and/or yield is one thing; being unable to produce a chip at all is entirely another. Considering TSMC's 28nm production (including low-power) is around 2% of their total wafer output, I'd be a little surprised if they could provide for the whole market. You might also want to consider that Nvidia surrendered a portion of its 28nm wafer starts. If GK110 were ready to go now, they would quite simply sacrifice the GK104 to go with the high-value compute card. AMD sure as hell don't have more wafers being baked at TSMC than Nvidia - and it hasn't stopped people from being able to buy HD 7000 cards.
> Nvidia simply couldn't deliver the real flagship and now consumers are going to be stuck with $550 cards that normally would only be $399 or so given their performance levels vs. previous high-end.

Does. Not. Compute.
Guest said:
> If you don't want to believe that, it's your choice. But that's a fact. Just speak to any Nvidia employee. *wink*

Because we know what Nvidia employees tell us is 100% fact... Don't be so naive.
Guest said:
> Comparing memory bandwidth across different brands just shows you are either not knowledgeable or trying to skew the argument by ignoring the point. You should ONLY compare memory bandwidth across the same brand.

The point was more to illustrate that memory bandwidth doesn't determine the performance of a GPU. Ever consider that the need for more bandwidth doesn't actually exist? At least not with current GPU technology, i.e. the GPU is now the bottleneck and not the memory.
> Also, you are dead wrong that previous NV's high-end generations didn't bring a 50-100% performance increase. They all did. GK104 will bring the least performance increase in that regard.

GTX 480: +39% over GTX 285.
> Comparing memory bandwidth across different brands just shows you are either not knowledgeable or trying to skew the argument by ignoring the point. You should ONLY compare memory bandwidth across the same brand.

So, you somehow think that comparing Nvidia-to-Nvidia will somehow validate your Walter Mitty world view.
Straight from the Guest posting accuracy contest:
> Just speak to any Nvidia employee. *wink*
> The fact that NV's GK106/107 are also no show just clearly goes to show NV isn't ready to launch Kepler.