Okay, so I find myself pondering memory in a rather unorthodox way. When overclocking system memory, you sometimes have to sacrifice some latency in exchange for a bit more bandwidth. Applying that same logic in reverse, I arrive at a hypothesis: underclocking in the interest of reduced latency.

Take DDR2 for example: great bandwidth, but the latency on it is poor. So what happens when you take DDR2 that's rated way faster than you need and clock it down to "normal" speeds, tightening the timings so the latency is closer to what regular DDR runs? I ask anyone more technically knowledgeable than myself: is this sound thinking, or is there something else at play that would negate this mechanic?
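For anyone who wants to sanity-check the idea with numbers: the CAS figure on a memory module is counted in clock cycles, not nanoseconds, so the absolute first-word latency is roughly CAS × cycle time, where the cycle time in ns is 2000 / (data rate in MT/s) for double-data-rate memory. A minimal sketch of that arithmetic (the speed/CAS pairings below are just illustrative examples, not measurements from any specific kit):

```python
def cas_latency_ns(cas_cycles: float, data_rate_mts: float) -> float:
    """Absolute CAS latency in nanoseconds.

    DDR transfers data twice per I/O clock, so the clock period in ns
    is 2000 / data_rate, with data_rate given in MT/s.
    """
    return cas_cycles * 2000.0 / data_rate_mts

# Illustrative timings (roughly typical for the era, not measured values):
print(cas_latency_ns(2, 400))  # DDR-400 at CL2  -> 10.0 ns
print(cas_latency_ns(5, 800))  # DDR2-800 at CL5 -> 12.5 ns
print(cas_latency_ns(3, 533))  # DDR2-800 run at 533 MT/s, CL3 tightened
```

The takeaway from the formula is that underclocking only pays off in absolute latency if the timings tighten faster than the clock drops; running the same CAS count at a lower clock actually makes latency worse.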