dividebyzero
can someone explain to me why nvidia's GTX 480 GDDR5 memory works at 1848MHz... and ATI's GDDR5 is quoted at 4x the clock speed of the memory? say the HD5850 has a memory clock of 1000MHz for an effective clock of 4000MHz... or the HD5870 with a 1200MHz memory clock for an effective clock of 4800MHz... so.... anyone?
Matthew quoted the GTX 480 memory as 1848MHz, which is its DDR (Double Data Rate) figure, i.e. double the 924MHz base clock. Doubling that again gives you its "effective" rate (i.e. 3696MHz), since GDDR5 transfers four bits per pin per base-clock cycle. That's lower than the HD5xxx series, but it sits on a wider memory bus, so peak bandwidth is actually fairly similar.
A basic equation for the GTX 480 would be:
924MHz clock x 4 (GDDR5 is quad pumped) x 384 bit bus / 8 bits (1 byte) = 177408 MB/sec
177408MB / 1024 (MB per GB) = 173.25GB/sec (177.4GB/sec if you divide by 1000 instead, which is how the spec sheets quote it)
and for the HD 5870...
1200MHz clock x 4 x 256 bit bus / 8 = 153600 MB/sec / 1024 = 150GB/sec
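If it helps, here's the same arithmetic as a quick Python sketch. The function name is just mine for illustration; the clocks and bus widths are the ones quoted above:

def memory_bandwidth_gbs(base_clock_mhz, bus_width_bits, pumps=4):
    # GDDR5 moves 4 bits per pin per base-clock cycle ("quad pumped"),
    # hence pumps=4. Divide by 8 to turn bits into bytes, then by 1024
    # to turn MB/sec into GB/sec.
    mb_per_sec = base_clock_mhz * pumps * bus_width_bits / 8
    return mb_per_sec / 1024

print(memory_bandwidth_gbs(924, 384))   # GTX 480: 173.25 GB/sec
print(memory_bandwidth_gbs(1200, 256))  # HD 5870: 150.0 GB/sec

Swap the 1024 for 1000 and you get the decimal figures the spec sheets use: 177.4GB/sec for the GTX 480 and 153.6GB/sec for the HD 5870.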