Also, Dirt Showdown is not a mysterious outlier. It's an AMD Gaming Evolved title. AMD is simply working more closely with developers to push a global lighting model and contact-hardening shadows via DirectCompute shaders.
Yup. And that's what they should have been pushing since GPGPU became a factor in consumer graphics. Nvidia 2006...AMD 2012. Better late than never.
If AMD puts even more $ behind DirectCompute for games
Yup. IF. A big proviso when it comes to AMD. Some of us remember ATi's GITG (Get in the Game) program, which also promised much...and promptly faded into obscurity. Hopefully AMD learn this time around...although it does bring up a couple of points:
1. AMD is haemorrhaging red ink. It will be interesting to see what kind of triage system they use for project funding, and
2. AMD's chief game dev point man recently departed for Intel, so your future world of AMD domination may not be so cut-and-dried.
BTW: All that's going to happen is that AMD releases AMD-centric titles and Nvidia does likewise. Borderlands 2 is due for release soon, is it not...with TXAA support for Nvidia cards - roughly 4x MSAA-class image quality for an FXAA/MLAA-level performance penalty...and as far as I'm aware, AMD cards still tank under extreme tessellation - you don't think that maybe Nvidia might exploit that weakness?
(or the OCaholic review you linked to earlier)
What all this ends up meaning is that eventually your choice of IHV might be predicated upon the game's sponsor and game engine - and if that's the case, then we as consumers lose. I'd also bear in mind that:
1. the image quality additions have to bring more to the game than the penalty for using them, and
2. the image quality additions need to be usable by cards down through the product stack if they do add substantially to gameplay.
Kepler will continue to suffer in all future titles which use DirectCompute for HDAO/global lighting, etc.
I doubt it. For some obscure reason, people think that Nvidia has abandoned compute because of GK104, when the strategy was obviously to learn from the big-die mistakes of Fermi (i.e. unnecessary double precision, 72-bit ECC memory support, large caches - none of which is required for gaming). AMD/ATi's success stemmed from getting the maximum number of usable GPUs per wafer. Nvidia is doing nothing more than branching its product line.
If you think that a GK110 card won't leave Tahiti/Sea Islands in the shade in both compute and gaming, I think you'll be sorely disappointed in Q1 2013 (all production from September into the new year will be Teslas for HPC projects such as ORNL's). Nvidia have been doing consumer compute since 2006 (G80) - I doubt that they have suddenly decided they don't want 85% of the professional graphics market.