#### TheBigFatClown

##### Posts: 1,050 +445

I've been trying to understand memory timings for years, and while I sometimes fool myself into thinking I do, I really still need help understanding the practical, real-world differences.

So, I'll state a few things I believe to be true. A kit labelled for sale @ 3200MHz is really stating the "effective" transfer rate: DDR memory transfers data on both edges of the clock, so the actual clock is 1600MHz and each module does 3200 mega-transfers per second, regardless of how many channels you run. The more truthful representation of the bandwidth of a single module is the PC4-25600 description: 3200 MT/s x 8 bytes per transfer = 25.6 GB/sec. With that stated, the total theoretical bandwidth for a 3200MHz kit in dual-channel mode is 25.6 GB/sec x 2 channels = 51.2 GB/sec.

So, let's start with that number which is 51.2 GB/sec for a set of memory running at an "effective" speed of 3200MHz in dual-channel mode.
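To double-check that arithmetic, here's a quick back-of-the-envelope script (just the figures from above, nothing measured):

```python
# Theoretical peak bandwidth for a DDR4-3200 dual-channel setup.
transfer_rate_mts = 3200    # mega-transfers per second (the "effective" 3200MHz)
bytes_per_transfer = 8      # 64-bit memory bus per channel = 8 bytes
channels = 2                # dual-channel

per_module_gbs = transfer_rate_mts * bytes_per_transfer / 1000  # MB/s -> GB/s
total_gbs = per_module_gbs * channels

print(per_module_gbs)  # 25.6 -- the "PC4-25600" figure
print(total_gbs)       # 51.2 -- theoretical dual-channel peak
```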

That's the theoretical maximum for all memory kits running at a rated 3200MHz "effective" speed, **regardless of timings**.

But if the memory can only do work during a percentage of those clock cycles and has to wait the rest of the time, then we won't get 51.2 GB/sec in reality. It's similar to a Lamborghini that can do 120 MPH on a highway: if there are stoplights and the car has to stop, the maximum potential is lost. So my question is: what would be the "effective" bandwidth difference between two memory kits rated to run @ 3200MHz but with the different timings I listed in the subject? Again, they are: 14-14-14-34 vs 16-18-18-38. I've asked this question in chats, and I know the difference may be negligible in real life (or maybe it isn't), and people say it's no big deal. But how can I calculate the real numbers myself, just so I know for sure the difference in value between the two kits?
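One number I can work out myself is the first-word latency implied by the CAS figure, converted from clock cycles to nanoseconds using the real 1600MHz clock. This is only a rough sketch, since the other timings (tRCD, tRP, tRAS) matter depending on the access pattern, but it puts the two kits side by side:

```python
# Convert CAS latency from clock cycles to nanoseconds.
# DDR's real clock is half the "effective" transfer rate.
# Ignores tRCD/tRP/tRAS, so this is a simplification, not a benchmark.
def cas_latency_ns(cas_cycles: int, transfer_rate_mts: int) -> float:
    real_clock_mhz = transfer_rate_mts / 2          # 3200 MT/s -> 1600 MHz
    return cas_cycles / real_clock_mhz * 1000       # cycles / MHz -> ns

print(cas_latency_ns(14, 3200))  # 8.75 ns for the CL14 kit
print(cas_latency_ns(16, 3200))  # 10.0 ns for the CL16 kit
```

So the CL14 kit answers a column access about 1.25 ns sooner, roughly a 12% latency improvement, while peak bandwidth stays the same for both.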

Thanks for reading!

EDIT: One last point to be made. I care about this because my CPU (right now) is an AMD Ryzen 2200G with Vega 8 graphics where these memory speeds matter more than on an Intel system for gaming.

EDIT #2: So, if you lose 18 clock cycles per access versus 14, out of 3.2 billion cycles a second, that difference does seem like it could add up. I guess the best way to solve this puzzle is with benchmarks. Is AIDA64 still the best tool for this task? I can post my results for 3200MHz with 16-18-18-38 timings. I don't have a faster kit.

EDIT #3: I got 45,442 MB/s using the AIDA64 memory benchmark on my 16-18-18-38 kit. Seems pretty good. On the other hand, 45.4 / 51.2 works out to about 89% usable bandwidth, with the other 11% spent waiting.
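For anyone who wants to repeat the division (the measured figure is just my run above; yours will vary):

```python
# Efficiency = measured bandwidth / theoretical dual-channel peak.
measured_mbs = 45_442     # AIDA64 memory read result, MB/s
theoretical_mbs = 51_200  # DDR4-3200 dual-channel peak, MB/s

efficiency = measured_mbs / theoretical_mbs
print(round(efficiency * 100, 1))  # 88.8 -> roughly 89% of peak
```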

Last edited: Aug 19, 2020