Corsair: DDR5 memory will need better cooling as VRMs move to the module

Just remember, the VRMs for all 4 RAM slots are usually just a couple of MOSFETs on the motherboard. They also never have heatsinks on them because it's completely unnecessary.

Hopefully it won't be a big deal, but then again the stupid RGB on my current sticks raises the DIMM temps by like 5 degrees...
 
Is there a lot of marketing bullshit going around? Yes. That doesn't mean it's all the same or that everything is just bullshit. This is the classic oversimplification.

Megapixels, for instance, are the wrong metric for comparing camera quality. An 8.3MP sensor can already capture 4K images. What you want are bigger sensors that can capture more light.
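For reference, that 8.3MP figure is just the 4K UHD resolution multiplied out:

```python
# 4K UHD is 3840 x 2160 pixels, so that many photosites is already
# enough to capture it natively.
width, height = 3840, 2160
pixels = width * height              # 8,294,400
print(f"{pixels / 1e6:.1f} MP")      # -> 8.3 MP
```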

"108MP cameras" are just 12MP cameras where each pixel is sub-divided in 9 in order to lower noise through AI trickery.

And for SSDs, the 7 GB/s number is peak throughput. What you "feel" when using it day to day is the latency. Get an Optane SSD and you'll feel the difference.
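If you want to see the two numbers side by side on your own drive, here's a rough sketch, not a rigorous benchmark: the file path is a placeholder, and the OS page cache will flatter both numbers, so a real test would use fio with O_DIRECT:

```python
import os, random, time

PATH = "testfile.bin"            # placeholder: some existing multi-GB file
SIZE = os.path.getsize(PATH)
BLOCK = 4096

with open(PATH, "rb") as f:
    # Latency: 1000 random 4 KiB reads, one at a time (queue depth 1).
    t0 = time.perf_counter()
    for _ in range(1000):
        f.seek(random.randrange(0, SIZE - BLOCK))
        f.read(BLOCK)
    lat = (time.perf_counter() - t0) / 1000

    # Throughput: one big 256 MiB sequential read.
    f.seek(0)
    t0 = time.perf_counter()
    f.read(256 * 1024 * 1024)
    thr = 256 / (time.perf_counter() - t0)

print(f"avg random 4K latency: {lat * 1e6:.0f} us")
print(f"sequential throughput: {thr:.0f} MB/s")
```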

No, you won't "feel the difference" between higher throughput and lower throughput DRAM when browsing the web. It's not the bottleneck.

But it's enough to make a difference when you're pushing your DRAM to the max, be it in games, video editing or CAD. And it's the advances in DRAM that allowed us to see 16-core CPUs on a consumer platform, and even then, only just. Look how much L3 cache AMD needs to make it work, and how much extra money you're paying (in the increase to the CPU's die size) to have a CPU that won't get bottlenecked by the DRAM.
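You can see that cache/DRAM wall on your own machine with a crude test like this; assuming numpy is installed, and the buffer sizes are guesses you'd tune to your CPU's actual L3:

```python
import numpy as np, time

# Crude look at the cache/DRAM wall: summing a buffer that fits in L3
# vs one that spills to DRAM. Sizes are assumptions; adjust to your CPU.
def read_bandwidth(n_bytes, reps=50):
    a = np.ones(n_bytes // 8)            # float64 elements
    a.sum()                              # warm-up pass
    t0 = time.perf_counter()
    for _ in range(reps):
        a.sum()
    return n_bytes * reps / (time.perf_counter() - t0) / 1e9  # GB/s

print(f"8 MB (fits in L3):      {read_bandwidth(8 * 2**20):.1f} GB/s")
print(f"512 MB (spills to DRAM): {read_bandwidth(512 * 2**20):.1f} GB/s")
```
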
Well, you're both kind of not there.
Games aren't coded to take advantage of blisteringly fast RAM or SSDs; that's down to the current state of game engines and what counts as mainstream hardware, which is the bulk of their customers. Newer consoles will change this, with things like DirectStorage. There is a point of diminishing returns, however. It's very similar to high refresh rates when it comes to human perception: there is a point where milliseconds just don't matter to most users.
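
To put numbers on that refresh rate comparison, each step up buys fewer milliseconds than the last:

```python
# Frame-time deltas shrink as refresh rate climbs, which is where the
# diminishing returns come from.
for hz in (60, 120, 144, 240, 360):
    print(f"{hz:3d} Hz -> {1000 / hz:5.2f} ms per frame")
# 60 -> 120 Hz saves ~8.3 ms per frame; 240 -> 360 Hz saves ~1.4 ms.
```
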
Content creation suites (photo, video, audio and other productivity software) are programmed to take advantage of it, because the hardware normally used there is prosumer to enterprise level. There it always makes a difference, because time is money. The NVidia 2000 series changed a lot, whether people admit it or not: a $72k server workstation cluster was replaced by a pair of $16k GPUs allowing real-time ray tracing, and what took months to render now takes a week or less, which is significant. Amateurs using gamer hardware can easily get much better quality CGI out of sub-$5k systems, and that is mainly thanks to NVidia and AMD Ryzen processors and core scaling.


Your computer is only as fast as its slowest component. Shaving a few milliseconds off here and there is not going to be noticeable as far as Windows itself is concerned.
 
Heat sinks on RAM aren't exactly a new thing. My old build with my Phenom II X4 940 had 8GB of OCZ Reaper DDR2-800. I only got it because I was working at Tiger Direct at the time and it was on clearance. Those things had HUGE heat sinks on them. They're still being used in my mother's computer.

They kinda remind me of a cross between a clothes iron and a padlock.

The RAM I used for my FX-8350 was UMAX Cetus and it also had heat sinks (although they were a lot more conservative than the OCZ Reapers).

So it's not like adding heat sinks is a big deal. Hell, Team already has heatsinks on almost all of their RAM like Vector, Vulcan and Dark.

My DDR4-2400 for my R7-1700 and my DDR4-3200 for my R5-3600X look just like that; both are Team Dark series. Having heat sinks on RAM has been normal for me for over a decade already, so I seriously doubt that I'm going to lose sleep over having them on DDR5.
 
I use G.Skill Ripjaws V DDR4-4000, and when I switch XMP off in the BIOS, the memory speed drops from 4000MHz to the nominal 3200MHz, and benchmarks show a significant drop in performance. So, not a marketing cheat.
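For anyone who wants to reproduce that, here's a quick-and-dirty bandwidth check to run with XMP on and then off (a sketch assuming numpy; a large copy is mostly DRAM-bound, so the speed difference should show up in the number):

```python
import numpy as np, time

# Run once with XMP on and once with it off; a big copy is mostly
# DRAM-bound, so the 4000MHz vs 3200MHz gap should show in the result.
a = np.ones(256 * 2**20 // 8)    # 256 MB source buffer (float64)
b = np.empty_like(a)
np.copyto(b, a)                  # warm-up pass

t0 = time.perf_counter()
for _ in range(20):
    np.copyto(b, a)              # each pass reads 256 MB and writes 256 MB
dt = time.perf_counter() - t0
print(f"copy bandwidth: {20 * 2 * 256 / 1024 / dt:.1f} GB/s")
```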

benchmarks =/= user experience.
 