Overclocking and RAM sticks

Mister_K

Posts: 2,218   +900
Just a quick question.

Is it true that more DIMMs = a less stable machine / lower overclock?

So if you have, let's say, 4 slots and you fill them all up, would you get better performance and a more stable machine with only 2 of the 4 slots filled?

I heard this somewhere some time ago; not sure if it's true or just a random claim.
 
It used to be very true. Memory controllers used to be a lot more fickle than they are at present, and certain vendors' motherboards had OC'ing and stability issues when fully populated (Gigabyte and MSI's P45/X48/X38/P35 range seemed particularly afflicted).
Having said that, with the memory controller having moved onto the CPU on recent Intel platforms, better/cleaner power delivery, lower RAM operating voltages, and dedicated power regulation for the RAM, the issue isn't generally a factor any more.

So the answer is No...or Yes, depending upon what system you're running.
 
I remember on some of my older machines, if I added more than 1GB or so it would crash no matter what if I overclocked. Today it is much more forgiving, but I never recommend OC'ing the RAM itself anyway. Just buy the fastest native RAM your board can handle and overclock accordingly. It's much easier and a lot less likely to cause problems.
 
Yes, night and day when comparing older systems with the newer on-die memory controller. Many of my Gigabyte, DFI and MSI builds were very picky with RAM, and many became unstable very quickly with all four DIMMs populated, with quite a few pretty much refusing to run at all if I was OC'ing up into the 2.0-2.2v range.
Overclocking RAM used to be a sport in its own right: you could buy a relatively cheap DDR2-800 C4 kit and, with some tweaking, move up a speed bin quite easily. My Crucial Ballistix kits (the dual-sided modules) were nominal at 800 CL4, and they would stand 1150-1200 @ CL5 with a voltage nudge and a timing tweak - the same timings that Corsair's 1150 Dominator kit would hit, but without paying 3-4 times the price of the Corsairs. That's all a thing of the past for the most part, since RAM is more tightly binned these days - very seldom will you find a kit that will run at the next highest speed bin at standard voltage. Luckily, Intel CPUs don't seem particularly bandwidth sensitive for the most part.
 
You may be right, but the Kingston HyperX 1600 CL9 kit I just bought for $40 (KHX1600C9D3K2/8GX) (2 x 4GB) is rated at 1600 MHz @ 1.65 volts 9-9-9-27. I have mine overclocked to 1866 MHz @ 1.65 v 10-10-10-27 and it's running fast and stable (over 21 GB/s) and passes IntelBurnTest and Prime95.
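For anyone wondering where that 21 GB/s figure sits, here's a rough back-of-the-envelope peak bandwidth calc in Python (my own throwaway helper; it assumes a standard dual-channel DDR3 setup with 64-bit, i.e. 8-byte, channels):

# Theoretical peak bandwidth in GB/s (assumption: 64-bit channels = 8 bytes per transfer per channel)
def peak_bandwidth_gbs(data_rate_mts, channels=2, bytes_per_transfer=8):
    return data_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(1600))  # ~25.6 GB/s theoretical at stock DDR3-1600
print(peak_bandwidth_gbs(1866))  # ~29.9 GB/s theoretical at DDR3-1866

So a measured read speed of just over 21 GB/s at 1866 is in the right ballpark for a dual-channel setup.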
 
Sounds like you did OK, although in reality there is virtually no difference between the stock and your overclock.
Access time is key to the equation. Bandwidth in itself is basically meaningless. A quick and dirty calc is CAS latency (in clock cycles) divided by the real memory clock (half the DDR data rate):
Stock 1600C9: 9/800 * 1000 (convert to nanoseconds) = 11.25ns
OC 1866C10: 10/933 * 1000 = 10.72ns ...a 4.7% decrease in access time in a perfectly predictable memory stream. As the access pattern becomes less predictable, latency (ultimately the clock cycles of CAS) becomes the overriding factor, i.e. a real-world scenario. Overclocking RAM is less about increasing bandwidth than it is about lowering latency - the gain comes from reaching the next speed bin at the same timings, or retaining the same bandwidth at a lower latency. In your case, that would be 1600 @ 8-8-8-24 or 1866 @ 9-9-9-27.
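If anyone wants to plug in their own kit, the same quick-and-dirty calc looks like this in Python (the helper name is just mine; it assumes the real memory clock is half the DDR data rate):

# First-word access time: CAS cycles divided by the memory clock, converted to nanoseconds
def access_time_ns(cas_cycles, data_rate_mts):
    memory_clock_mhz = data_rate_mts / 2         # e.g. DDR3-1600 runs an 800 MHz clock
    return cas_cycles / memory_clock_mhz * 1000  # cycles / MHz = microseconds, * 1000 = ns

print(access_time_ns(9, 1600))   # stock 1600C9 -> 11.25 ns
print(access_time_ns(10, 1866))  # OC 1866C10   -> ~10.72 ns, the ~4.7% drop mentioned above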

The Crucial Ballistix example I used is a case where the lower latency makes the difference. Nominal DDR2-800 @ 4-4-4-12 becomes DDR2-1200 @ 5-5-5-18 (a jump of two speed bins vs a latency jump of one).
[Graph: Bit-tech Sandy Bridge RAM article]
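Running the Ballistix numbers through the same quick calc (my arithmetic, assuming effective clocks of 400 MHz for DDR2-800 and 600 MHz for DDR2-1200) shows why jumping two speed bins for one extra CAS cycle is still a net win:

# Same first-word latency calc, applied to the DDR2 kits above
print(4 / 400 * 1000)  # DDR2-800  CL4 -> 10.0 ns nominal
print(5 / 600 * 1000)  # DDR2-1200 CL5 -> ~8.33 ns after the OC, roughly a 17% cut in access time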
 
It's definitely not what it used to be... in terms of noticeable performance, anyway. Today's memory is very fast out of the box when you compare it to what we had 5-10 years ago. Like divide said, back then overclocking RAM was a sport, and now it is, for the most part, irrelevant. DDR3 is so cheap nowadays and 1600 MHz modules are pretty standard. I run a 6GB set of Mushkin Redline with timings of 6-8-6-24, and that's the default. I can run them up a little bit and bench to get higher numbers, but at the end of the day they are plenty fast and better left alone.
 
I think you're right, dbz, it isn't really faster in real world use even though the benchmarks show a 10% increase. I'm new to overclocking and learning more every day, thanks to you guys and others like you all over the world. Now that I have the latest hardware (P8Z77-V & i5-3570K) I am really able to do it instead of just reading about it. It's great to get more performance with knowledge instead of money. I just want the best overall experience without compromising stability, tranquility, reliability, or longevity.
 