All those specs are fake/made up. Whoever put them together must have been bored.

1) If the HD8970 uses 260W of TDP and AMD couldn't fit two HD7970 GHz Edition GPUs onto one card (heck, not even 925MHz HD7990s), there is no way they will pull off a 375W HD8990.

2) HD8970

- Memory bandwidth is wrong: 6000MHz effective @ 384-bit = 288GB/sec. They have it at 322GB/sec.
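For anyone who wants to check, the bandwidth math is just effective memory clock times bus width in bytes. A quick sketch (the clocks and bus widths are the ones from the leaked spec sheet):

```python
def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
    """GB/s = effective clock (MHz) * 1e6 transfers/s * (bus width / 8) bytes per transfer."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(bandwidth_gb_s(6000, 384))  # 288.0 GB/s, not the claimed 322
print(bandwidth_gb_s(5500, 384))  # 264.0 GB/s, not the claimed 300
```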

- Double precision is wrong: DP is 1/4 of SP in GCN. With 5.38 TFLOPS SP, DP has to be 1.34 TFLOPS; they have it as 1.6 TFLOPS. Impossible. Even if they somehow went to 1/3 of SP, the number would be 1.79. No matter how you slice it, theirs is made up.
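Same deal with the FLOPS math. A throwaway check, assuming GCN's 1/4 DP:SP rate (with 1/3 as the generous hypothetical):

```python
sp_tflops = 5.38  # single-precision throughput from the spec sheet

dp_quarter = sp_tflops / 4  # GCN's standard DP:SP ratio
dp_third = sp_tflops / 3    # generous hypothetical ratio

print(round(dp_quarter, 2))  # 1.34 TFLOPS, nowhere near the claimed 1.6
print(round(dp_third, 2))    # 1.79 TFLOPS, still not 1.6
```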

3) HD8950

- Memory bandwidth is wrong: 5500MHz effective @ 384-bit = 264GB/sec. They have 300GB/sec.

- TMUs are wrong: GCN pairs 4 TMUs with each compute unit, and each compute unit has 64 shaders. To get 2304 SPs you need 36 compute units, which means 144 TMUs. They have it as 140 TMUs.
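You can derive the TMU count straight from the shader count. A sketch using GCN's 64 shaders and 4 TMUs per compute unit:

```python
def gcn_units(shader_count):
    """Return (compute units, TMUs) implied by a GCN shader count."""
    cus, rem = divmod(shader_count, 64)  # 64 shaders per compute unit
    assert rem == 0, "GCN shader counts are multiples of 64"
    return cus, cus * 4  # 4 TMUs per compute unit

print(gcn_units(2304))  # (36, 144) -- the sheet says 140 TMUs
```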

4) Odd power consumption. There is a 50W difference between the HD8970 and HD8950 despite very small differences in specs, yet the same 50W gap exists between the HD8870 and HD8950 even though the latter has 50% more ROPs and huge advantages in memory bandwidth and shader count. That logically doesn't make any sense. You'd also end up with a huge performance gap between the 48-ROP and 32-ROP parts.

5) How can they only increase transistors from 4.3B to 5.1B and yet squeeze in 50% more ROPs, 25% more shaders and 25% more TMUs, with TDP going up only 10W? I'd believe it if they raised shaders and TMUs but kept ROPs at 32, or at most 40. If anything, I can see the extra transistors going toward improved rasterization and geometry engines in the architecture. 48 ROPs is probably not possible until 20nm.
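Just to put numbers on it, here's the claimed transistor growth next to the claimed ROP growth (all figures are from the rumored spec sheet, not anything official):

```python
# 4.3B -> 5.1B transistors and 32 -> 48 ROPs, per the rumor
transistor_growth = 5.1 / 4.3 - 1
rop_growth = 48 / 32 - 1

print(f"{transistor_growth:.0%} more transistors")  # 19% more transistors
print(f"{rop_growth:.0%} more ROPs")                # 50% more ROPs
```

A ~19% bigger die budget funding 50% more ROPs on top of 25% more shaders and TMUs is exactly the mismatch the point above describes.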

Overall, these specs are sloppy, contain mathematical mistakes, and don't adhere to GCN's architectural ratios. They're pure speculation; this many errors rules them out as credible.
