BoboOOZ
Well, to be honest, I got the 1750 MHz/150 W numbers not from any rigorous testing, but from playing around with different TDP and undervolting settings in my Radeon driver. I'm really curious to see where this goes in the next generation, both in and out of the consoles. AMD showed that the original 1560 MHz spec of the 5600XT was running close to the sweet spot of power efficiency, then pushed it up from there with the BIOS update that raised performance to 1750 MHz. TechPowerUp's power efficiency graphs are great for visualizing this, with the 5600XT on its original BIOS clearly the most efficient video card out there. I believe laptop GPUs run in this range as well, to get the most performance from the fewest watts.
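To make the sweet-spot argument a bit more concrete: dynamic power scales roughly with voltage squared times frequency, and higher clocks need higher voltage, so the last few hundred MHz get disproportionately expensive. Here's a quick back-of-the-envelope sketch; the voltages are illustrative guesses on my part, not measured 5600XT values:

```python
# Rough sketch of why 1560 MHz sat near the efficiency sweet spot.
# Dynamic power scales roughly as P ~ C * V^2 * f, and reaching higher
# clocks requires more voltage. The voltages below are illustrative
# guesses, NOT measured 5600XT values.

def relative_dynamic_power(voltage_v: float, freq_mhz: float) -> float:
    """Unitless V^2 * f figure for comparing two operating points."""
    return voltage_v ** 2 * freq_mhz

base  = relative_dynamic_power(0.95, 1560)  # hypothetical stock point
boost = relative_dynamic_power(1.05, 1750)  # hypothetical post-BIOS point

print(f"clock gain: {1750 / 1560 - 1:.1%}")   # ~12% more frequency
print(f"power gain: {boost / base - 1:.1%}")  # ~37% more dynamic power
```

Roughly a 12% clock bump costing something like 37% more dynamic power, which is exactly the kind of trade you see past the knee of the curve.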
That said, now that you mention more rigorous testing, the numbers seem to add up. My card is much more recent (bought only 3 months ago) and, more importantly, all of these Radeon chips are the same die with different levels of binning: A-level chips go to the 5700XT, B to the 5700, C to the 5600XT, and D to the 5600. So it would make sense that higher-quality silicon would have a curve shifted towards higher efficiency.
Just as an aside, the most efficient design AMD has come up with is the custom 5600M for Apple, which is in fact the A-quality die (5700XT, 40 CUs) but downclocked to only 1035 MHz and using HBM2 instead of GDDR6.
Yes, I'm really curious to see how this one goes, too. I'm pretty sure we'll see higher clocks from AMD this fall; it's already roughly their second generation on this node, and historically ATI/AMD engineers seem to be better at silicon-level technology, while Nvidia engineers seem to be better with architectures. Nvidia also might have the handicap of having delayed the choice of its node, playing Samsung and TSMC against each other to get a better price (and losing that bet).

I do the same for my GTX 1080 using Afterburner, and the fastest stable clock at peak efficiency is around 1911 MHz at 0.9 V (or 1923 MHz with a better cooler), but that's an optimal undervolt, which has to be dialed in on a per-card basis. Standard clocks at 0.9 V are 1733 MHz, right at the number you mention.
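For anyone wanting to hunt for their own card's sweet spot, the logic boils down to scanning the stable voltage/frequency points and maximizing performance per watt, where power is roughly dynamic (V² · f) plus a static leakage term; it's that static term that keeps the optimum from simply being the lowest voltage. A toy sketch with entirely made-up numbers (only the 0.9 V / 1911 MHz point comes from my card; the static term is tuned purely for illustration):

```python
# Toy scan for a per-card efficiency sweet spot, in the spirit of the
# Afterburner undervolting described above. Performance is taken as
# proportional to frequency; power as dynamic (V^2 * f) plus a static
# leakage term. All numbers are hypothetical except the 0.9 V / 1911 MHz
# point; the static term is tuned so the toy lands there.

CURVE = [  # (voltage in V, stable frequency in MHz) -- hypothetical
    (0.80, 1650),
    (0.85, 1790),
    (0.90, 1911),  # the 0.9 V / 1911 MHz point from my card
    (0.95, 1975),
    (1.00, 2025),
]
STATIC = 3500  # arbitrary leakage term on the same unitless scale

def perf_per_watt(v: float, f: float) -> float:
    """Unitless frequency-per-power figure for one operating point."""
    return f / (v * v * f + STATIC)

best_v, best_f = max(CURVE, key=lambda p: perf_per_watt(*p))
print(f"most efficient point: {best_v:.2f} V @ {best_f} MHz")
```

On a real card you'd replace the made-up curve with stable points found by trial and error, which is why this has to be done per card.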
It will be interesting to see whether clocks increase noticeably with Nvidia's node shrink and whether AMD's IPC and other improvements in Navi 2 can keep up. Intel has already given a hint that some companies do not see a clock speed improvement with a smaller node: their new 10 nm laptop CPUs have lower top speeds than their very mature, optimized 14 nm CPUs, though IPC improvements make up for the speed deficit.
One thing is sure, though: the barrier you're speaking about, which both AMD and Intel are facing, seems to be somewhere between 4 and 4.5 GHz, so I'd say GPUs still have some leeway. I imagine the problems are different in nature, mostly related to the huge size of the dies.