SK Hynix presents its first DDR5 chip promising major improvements over DDR4

Greg S

Something to look forward to: Competition between Samsung and SK Hynix over DDR5 memory is heating up. With the launch of faster and denser memory on the horizon, engineers are pushing clock speeds up and voltages down. The end result is desktop memory that will be a major step forward for the industry.

As the International Solid State Circuits Conference continues, memory maker SK Hynix has shared its plans for DDR5. Even though the DDR5 standard is still under development at the JEDEC standards organization, a real-world design will help push toward finalized specifications.

Actual finished products using DDR5 are expected to be available during the fourth quarter of this year, whether or not the standard is officially ratified. In Hynix's presentation, a 16Gb DDR5 SDRAM chip was showcased. The design is capable of running at 6.4Gb/s on each pin while operating at 1.1V.
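For a sense of scale, the per-pin figure can be turned into a per-module number; the sketch below assumes a standard 64-bit (non-ECC) DIMM data bus, which is my assumption rather than a figure from the presentation.

```python
# Per-pin data rate to per-module bandwidth, assuming a 64-bit DIMM data bus.
def module_gb_per_s(per_pin_gb_per_s, data_pins=64):
    return per_pin_gb_per_s * data_pins / 8  # bits -> bytes

print(module_gb_per_s(3.2))  # DDR4-3200: 25.6 GB/s per module
print(module_gb_per_s(6.4))  # SK Hynix's DDR5 design: 51.2 GB/s per module
```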

Despite Hynix's design choices that make such high clock speeds possible, Samsung still appears to be in the lead for raw performance. Samsung's 10nm LPDDR5 SDRAM can push data around at 7.5Gb/s at only 1.05V. However, Samsung declined to present any operational details of its memory offering, making it difficult to know what differences exist in silicon.

Regardless of which company wins the performance race in the next generation of memory, everyone will enjoy up to 50 percent more bandwidth. Density is also expected to be double that of DDR4. Desktop workstations reaching 1TB of memory are no longer completely out of the question, although the cost and practicality of such a setup will be limiting factors for most.
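As a back-of-the-envelope illustration of how doubled die density gets a workstation to 1TB, here is one possible configuration; the die size, rank count, and slot count are assumptions made purely for the arithmetic, not announced products.

```python
# Illustrative capacity math; all figures below are assumptions.
die_gb = 32 / 8            # a 32Gb die (double DDR4's common 16Gb) = 4 GB
chips_per_rank = 8         # a 64-bit DIMM built from x8 devices
ranks_per_dimm = 4         # assumed quad-rank / stacked configuration
dimm_gb = die_gb * chips_per_rank * ranks_per_dimm  # 128 GB per DIMM
slots = 8                  # typical workstation board
print(f"{dimm_gb * slots:.0f} GB total")             # 1024 GB, i.e. 1TB
```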

Hynix is predicting that DDR5 memory will account for 25 percent of market sales by 2021 and reach 44 percent by 2022. Pricing and availability of consumer memory modules are still unknown, but rest assured that large-capacity DIMMs with record-setting clock speeds are not going to be wallet-friendly initially.


 
I always think about the improvements a new generation of DRAM brings to integrated graphics. For as long as I can remember, integrated graphics were only ever just barely good enough for browsing or watching videos. Scrolling down graphics-heavy web pages and watching the machine struggle!

Times are changing. You can run a few undemanding modern games quite well at common resolutions with something like a Ryzen 2500U/2200G. Nearly everything more than 3-4 years old runs a treat. Intel are also talking up a big boost in integrated GPU performance, because they know RX Vega graphics are attractive.

We'll quickly hit the point where DDR4 doesn't really supply enough memory bandwidth to get improvements. That's probably why the Vega 10/11 iGPUs aren't as much faster than the Vega 8 version as you might expect.
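To put rough numbers on that, here's an illustrative comparison of the shared system memory bandwidth an iGPU has to work with versus a cheap discrete card; these are nominal peak figures I'm assuming, not benchmarks.

```python
# Shared memory bandwidth available to an iGPU vs. a budget discrete card.
def dual_channel_gb_per_s(data_rate_mt_s):
    return data_rate_mt_s * 2 * 64 / 8 / 1000  # two 64-bit channels

print(dual_channel_gb_per_s(3200))  # DDR4-3200: 51.2 GB/s, shared with the CPU
print(dual_channel_gb_per_s(6400))  # DDR5-6400: 102.4 GB/s
# A budget GTX 1050 already has roughly 112 GB/s of dedicated GDDR5 bandwidth.
```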
 
I spent $300 buying 32GB of DDR4.

I really saw no difference between 32GB of DDR3 and 32GB of DDR4. Beyond the fact that the motherboard demanded the newer memory, I truly saw no huge difference in my tasks. The biggest jumps in performance came from my upgrading from a Core i7 to a Core i9 Extreme and my move from HDDs to SSDs only.

My games don't demand as much compute power as my other tasks like VR or 4K rendering.

It will be a while before I build a new PC.
 
Don't forget that your computer (and almost anything else) is only as strong as its weakest link.
 
I disagree with that.

My CPU is a Core i9 ex. Arguably one of the best available.

My GPU is a 2080 Ti. Arguably the best available.

My RAM is just 32GB. 16 was all I needed and many would say 32 is unnecessary.

My SSD drives are far faster than any HDD.

These components can't be compared because they are all top of their game and don't do the same job.

If I had only 4GB of RAM, or a Core i3, or an HDD, or an AMD-anything...then you might be right.
 
Well, you misunderstood what I was trying to say, and/or I did not express myself well enough.

Anyways, nice PC. Arguably ;P
 
Yep swing and a miss
 
I notice they don't say anything about what latencies it'll have. It's all well and good saying it will have more bandwidth, but that's meaningless if the latencies are shite. I mean, would you want to pay $400 for 32GB of DDR-4000 with 45-56-56-80 timings?
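For context, absolute latency depends on both the CAS latency and the clock; here's a quick sketch of first-word latency in nanoseconds, using the hypothetical timings above and a common DDR4 kit for reference.

```python
# First-word CAS latency in nanoseconds. DDR transfers twice per clock,
# so one cycle takes 2000 / data_rate (MT/s) nanoseconds.
def cas_latency_ns(cl, data_rate_mt_s):
    return cl * 2000 / data_rate_mt_s

print(cas_latency_ns(16, 3200))  # common DDR4-3200 CL16: 10.0 ns
print(cas_latency_ns(45, 4000))  # the hypothetical DDR-4000 CL45: 22.5 ns
```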
 
Why is the writer writing gigaBITS instead of gigaBYTES?

He didn't specify which. Gb/s could be both.

Because he was highlighting the bandwidth *per pin*, not for the entire module. It's a pretty technical way to measure bandwidth and not something we usually see used at the "consumer" level. In fact, the first article I could find on it was from 2003 at EEtimes. I'll post it as a second comment just in case it gets flagged for having a link.
 