dividedbyzero,
"Die size isn't strictly relevant in relation to previous (Fermi) design"
"If you look back at Nvidia's GPU history, you'd note:
1) Huge memory bandwidth increases over previous generation high-end chips (GTX 680, aka GK104, brings none over GTX 580)"
Immaterial to the argument. Moreover, bandwidth isn't an accurate measure of performance. Let me illustrate:
GTX 480  : 384-bit / 8 x 3696 MHz effective = 177.4 GB/sec
HD 6970  : 256-bit / 8 x 5500 MHz effective = 176 GB/sec
and if the performance difference isn't readily apparent with that, try this:
HD 2900XT: 512-bit / 8 x 2000 MHz effective = 128 GB/sec
GTX 560Ti: 256-bit / 8 x 4008 MHz effective = 128.26 GB/sec
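If anyone wants to check the arithmetic: peak bandwidth is just the bus width in bytes multiplied by the effective memory clock. A quick sketch in Python (specs as quoted above; variable names are mine, and GB here means decimal gigabytes, as GPU spec sheets use):

```python
# Peak memory bandwidth = (bus width / 8) bytes per transfer x effective clock.
# "Effective" clock already includes the DDR/GDDR multiplier.
cards = {
    "GTX 480":   (384, 3696),   # (bus width in bits, effective clock in MHz)
    "HD 6970":   (256, 5500),
    "HD 2900XT": (512, 2000),
    "GTX 560Ti": (256, 4008),
}

for name, (bus_bits, eff_mhz) in cards.items():
    gb_sec = (bus_bits / 8) * eff_mhz / 1000   # MB/sec -> GB/sec (decimal)
    print(f"{name:<9} : {gb_sec:6.2f} GB/sec")
```

Near-identical bandwidth within each pairing, yet nobody would claim the performance within each pairing is anywhere near the same.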
"2) Large die size chips designate high-end parts, a part of their large monolithic die strategy (i.e., 450mm^2+)"
And? I don't believe anyone is arguing that Nvidia doesn't, at this point in time, pursue a big-die strategy. That is pretty much self-evident given the need for a wide memory bus (more important for compute than gaming), on-die cache, provision for 72-bit ECC memory, etc.
What I would argue is that, having been burnt by a big-die-first strategy on 40nm with the exceedingly late-to-market GTX 480/470 (which never got a full 512-shader die), Nvidia probably erred on the side of caution by going with a mid-sized die on a new 28nm process, likely given TSMC's problems with (and eventual abandonment of) 32nm. It's hard enough ensuring a new architecture on a new node works out well without having to factor in possible process problems at your sole foundry partner. Why complicate matters with a huge die that would take a correspondingly long time to debug?
"3) Performance increase on average of 50-100% vs. previous high-end. GK104 is unlikely to beat GTX 580 by an average of 50%"
How often do you see a 100% increase in performance? Practically never. Making up arbitrary numbers and calling them fact doesn't make the point (however vague) you're trying to make valid.
8800 GTX/Ultra to 9800 GTX, 9800 GTX to GTX 280, GTX 285 to GTX 480... none represent a 100% increase in performance. You could possibly argue for 7900 GTX/GTO to G80 in some circumstances... but that rather goes against your "big die" hobbyhorse (G71 being 196mm²).
"They have no choice but to launch GK104 as a placeholder"
Wrong.
GK104 seems to have always been a part of the lineup.
"The fact that NV's GK106/107 are also no-shows just clearly goes to show NV isn't ready to launch the Kepler series"
You probably need to pay closer attention to what's going on.
GK107 has been in the wild somewhat longer than GK104 (I was going to link to the earlier video benchmarks that have been doing the rounds, but they've been pulled).
"28nm yield issues and capacity constraints are widely documented etc, etc..."
And I repeat: It.Hasn't.Stopped.Anyone.Releasing.A.Chip. Being constrained on wafer production and/or yield is one thing; being unable to produce a chip at all is entirely another. Considering TSMC's 28nm production (including low-power) is around 2% of their total wafer output, I'd be a little surprised if they could provide for the whole market. You might also want to consider that Nvidia surrendered a portion of its 28nm wafer starts. If GK110 were ready to go now, they would quite simply sacrifice GK104 and go with the high-value compute card. AMD sure as hell don't have more wafers being baked at TSMC than Nvidia, and it hasn't stopped people from being able to buy HD 7000 cards.
Everything points to a 23rd March launch for GK104, so Nvidia's AIBs obviously have boards up and ready to go.
"Nvidia simply couldn't deliver the real flagship and now consumers are going to be stuck with $550 cards that normally would only be $399 or so given their performance levels vs. previous high-end."
Does.Not.Compute.
Nvidia will price relative to performance. If performance is broadly equal to an AMD card then they will price accordingly, probably with a tariff on top allowing for the brand. Rory Read and AMD have set the price/performance bar as far as 28nm goes. Nvidia simply wouldn't sell a card that outperforms a card already on the market for less; it makes no more sense than AMD releasing the HD 7970 at the same (or lower) price as a GTX 580, over which it has a comfortable performance margin... and since people are buying HD 7970s at $549+, the market clearly exists for performance at that price. Q.E.D. The only way that changes is if one company decides to institute a price war and slashes prices. The consumer has determined what these cards will sell for; AMD and Nvidia will exploit that.
You might note that Nvidia's recent second-tier cards have had only one SLI connector, with a maximum of two cards supported in SLI (GTX 460, 560). The 680 features two SLI connectors, which indicates support for at least three-card, and probably four-card, SLI, as is usual for top-tier SKUs. When was the last time a top-tier Nvidia card debuted at a $399 MSRP?
BTW: a GK110 at the rumoured specs? No way it retails at $550 or anywhere close. The market (and signed contracts) for Quadro and Tesla versions of the GPU would dictate that few would see the light of day as gaming cards, and I really wouldn't be surprised by a $700+ price tag when they arrive.
I still haven't seen any convincing argument that GK110 was ever intended to be released as a 600-series card. From all accounts, the initial 28nm offerings were going to follow AMD/ATi's recent model (i.e. a mid-sized die with the top SKU being a dual card**), with GK110 to follow at a later date, probably as the GTX 780, to combat the HD 89xx cards.
EDIT: ** Dual-card GTX 690 in May according to 3DCenter, with GK110 somewhat later, indicating that it would be closer to being a 700-series part than a 600.