First spotted by German website 3DCenter.org, the listing says that Micron charges $11.69 per GB for 2,000 1GB modules of 14Gbps GDDR6, while it charges just $6.84 per GB for the equivalent 8Gbps GDDR5 modules. That's roughly 70% more money for 75% more bandwidth.
If the prices are accurate, then the RTX 2080 Ti's memory costs $129 while the 2080's and 2070's costs $94, which is $39 more than the GDDR5 in the GTX 1070 Ti. The cost of the GDDR5X modules used in the GTX 1080 Ti and GTX 1080 remains unknown, but given how those cards were priced, it likely wasn't much higher than GDDR5.
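The arithmetic behind those figures is simple: multiply the listed per-gigabyte price by each card's memory capacity. A minimal sketch, assuming the per-gigabyte prices scale linearly:

```python
# Back-of-the-envelope memory cost per card, assuming Micron's listed
# per-gigabyte prices apply linearly to each card's memory capacity.
GDDR6_PER_GB = 11.69  # 14Gbps GDDR6, per the listing
GDDR5_PER_GB = 6.84   # 8Gbps GDDR5, per the listing

rtx_2080_ti = 11 * GDDR6_PER_GB  # 11GB of GDDR6
rtx_2080 = 8 * GDDR6_PER_GB      # 8GB of GDDR6 (same for the 2070)
gtx_1070_ti = 8 * GDDR5_PER_GB   # 8GB of GDDR5

print(f"RTX 2080 Ti memory: ${rtx_2080_ti:.0f}")                   # ~$129
print(f"RTX 2080/2070 memory: ${rtx_2080:.0f}")                    # ~$94
print(f"Premium over GTX 1070 Ti: ${rtx_2080 - gtx_1070_ti:.0f}")  # ~$39
```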
The prices also help explain the confusing rumored RTX 2060 variants, which are expected to come in GDDR6 and GDDR5 options with 6GB, 4GB and 3GB of memory. Going down the chain, dropping from 6GB to 4GB could save $23, while dropping from 4GB to 3GB could save another $12. Switching to GDDR5 would save $29 at 6GB, $19 at 4GB and $14 at 3GB. Depending on how the variants end up priced, those savings could amount to 5-10% of a card's cost.
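The same linear pricing assumption yields the savings quoted above. A quick sketch of the rumored 2060 configurations (the capacities and memory types come from the rumors; the dollar figures are extrapolations from the listing):

```python
# Rough memory bill-of-materials for the rumored RTX 2060 variants,
# assuming per-gigabyte prices from the Micron listing scale linearly.
GDDR6_PER_GB = 11.69  # 14Gbps GDDR6
GDDR5_PER_GB = 6.84   # 8Gbps GDDR5

for gb in (6, 4, 3):
    gddr6_cost = gb * GDDR6_PER_GB
    gddr5_cost = gb * GDDR5_PER_GB
    saving = gddr6_cost - gddr5_cost  # what switching to GDDR5 saves
    print(f"{gb}GB: GDDR6 ${gddr6_cost:.2f}, GDDR5 ${gddr5_cost:.2f}, "
          f"GDDR5 saving ${saving:.2f}")
```

Note the per-gigabyte savings compound: every gigabyte swapped from GDDR6 to GDDR5 saves the $4.85 price gap between the two.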
For Nvidia to be buying GDDR6 at these prices, they must’ve been somewhat desperate. But if you think about it, with the benefits of the Turing architecture spent on ray tracing rather than CUDA core improvements, the RTX series had to look elsewhere for performance.
The RTX 2070 has 11% fewer TFLOPS than the GTX 1080 despite a similar price, so how does it perform 7% better on average? How does the RTX 2080 achieve GTX 1080 Ti-like performance while missing an entire GTX 1050's worth of cores? The speed improvements from GDDR6 are at least part of the answer.
According to the listing, Nvidia could have gone with 13Gbps GDDR6 that's only 7% slower but 21% cheaper. If they had, the RTX cards might have offered better value, although these days we're finally seeing them at their intended MSRPs. It does leave AMD an interesting opportunity, however. Once again, we're left with high hopes for Navi, which could be teased any minute now at CES.
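The 13Gbps trade-off can be checked the same way. The 21% price gap comes from the listing; the per-card saving below is my own extrapolation, again assuming linear per-gigabyte pricing:

```python
# Sketch of the 13Gbps vs 14Gbps GDDR6 trade-off. The 21% price gap is
# from the listing; the per-card saving is a linear extrapolation.
GDDR6_14_PER_GB = 11.69
GDDR6_13_PER_GB = GDDR6_14_PER_GB * (1 - 0.21)  # 21% cheaper per the listing

speed_penalty = (1 - 13 / 14) * 100                   # ~7% slower
saving_8gb = 8 * (GDDR6_14_PER_GB - GDDR6_13_PER_GB)  # per 8GB card

print(f"Speed penalty: {speed_penalty:.1f}%")    # ~7.1%
print(f"Saving on an 8GB card: ${saving_8gb:.0f}")  # ~$20
```

Roughly $20 saved per 2080- or 2070-class card, for a single-digit bandwidth hit.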