Benchmarks showcase impressive DDR5-4800 RAM performance

The timings will improve over time (I'll wait for the first gen to be replaced by something better).

I'm more excited about the mandatory on-die ECC support. It's better than nothing, but I would have liked the full ECC pipeline to be implemented, since on-die ECC doesn't protect against errors on the DDR channel.
 
CAS 40 at 4800 MT/s seems very high. That's worse than the highest-latency official JEDEC DDR4 bin (16 ns vs 15 ns - and with just XMP, you can get below 10 ns). If that's indicative of what the performance will be like at release, I would expect to see little to no benefit over DDR4 (and possibly even a performance regression, everything else being equal) except for iGPUs and some niche workloads.
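For anyone checking the math: absolute latency is CAS cycles divided by the memory clock, and the clock is half the transfer rate (DDR moves two transfers per clock). A quick sketch - the kits below are illustrative picks matching the figures in this thread:

```python
# Absolute CAS latency in ns: cycles / clock, where the I/O clock is
# half the transfer rate (DDR = two transfers per clock cycle).
def cas_latency_ns(cas_cycles: int, transfer_rate: int) -> float:
    clock_mhz = transfer_rate / 2        # e.g. 4800 MT/s -> 2400 MHz
    return cas_cycles / clock_mhz * 1e3  # cycles / MHz -> nanoseconds

print(cas_latency_ns(40, 4800))  # DDR5-4800 CL40         -> ~16.7 ns
print(cas_latency_ns(16, 2133))  # slowest JEDEC DDR4 bin -> ~15.0 ns
print(cas_latency_ns(14, 3600))  # common XMP B-die kit   -> ~7.8 ns
```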

This guy gets it. Look at the massive performance benefits of running lower-latency memory in modern games on both AMD and Intel. High-frequency memory is pointless if the latency sucks, which this clearly does!
 
But there's nothing at all to indicate the BIOS settings in the published image. Important fields are blanked out. An 800 MHz CPU clock (???), and who knows what else has been gimped in the CMOS settings...
AIDA simply doesn't recognize the CPU, as it is an unreleased product on an unreleased platform. The 800 MHz reading is the base or minimum clock of the CPU; when you launch the benchmark it reads the current CPU clock, which sounds about right at 800 MHz if the system was idle on launch, and that number doesn't change once the benchmark is run.

When you run the benchmark, the CPU will clock itself up as in any loaded situation; that will not be reflected in the initial hardware information that was pulled at launch.
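You can demonstrate that snapshot effect with a few lines of Python - a rough sketch assuming psutil is installed and the OS exposes a live frequency reading (on some platforms psutil only reports the nominal clock):

```python
import time
import psutil  # third-party: pip install psutil

# Snapshot at "launch" - on an idle system this is often near the minimum clock.
freq = psutil.cpu_freq()  # may be None on platforms without frequency info
print(f"at launch: {freq.current:.0f} MHz" if freq else "no frequency info")

# Burn CPU for ~2 s so the cores clock up from their idle state...
deadline = time.time() + 2
while time.time() < deadline:
    sum(i * i for i in range(10_000))

# ...then read again. A tool that samples the clock only once at startup
# would still display the idle number, much like the 800 MHz reading here.
freq = psutil.cpu_freq()
print(f"under load: {freq.current:.0f} MHz" if freq else "no frequency info")
```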

As far as I know, those numbers are irrelevant to the actual test results. The results are the results; having the fields populated would simply let us know what the hardware was running at to attain them.

Again, however, as this is unreleased hardware the performance might still not be representative; these are likely engineering samples, and features might be lacking or not fully functional.

My post was simply to state that, despite the title claiming "impressive DDR5-4800 RAM performance", I was distinctly unimpressed, considering my now 6-7-year-old hardware doubled its results across the board.
 
Mine, too. We have the same CPU, and probably the same motherboard. R5E.
 
But how much are modern applications actually handicapped by RAM speed? Zen 3 CPUs already have huge 32 MB caches, and I'd expect caches to get larger over time, so you wonder how often the RAM is even accessed. If the issue is loading graphics resources, then the bottleneck is more likely to be getting the data off the SSD. If the RAM isn't the bottleneck within a modern system, then speeding it up will obviously give no real improvement. It would be interesting to see where the bottleneck is in a modern PC running the latest applications (games or work) - is it CPU, GPU, SSD, RAM, PCIe, etc.? Does it differ between AMD and Intel? What about between Windows and Linux?
You vastly overestimate SSD performance... the idea with SSDs is to fill the RAM fast, not to run your application from them. So by definition, most decent applications will be using GB+ of RAM, which makes RAM a very heavily utilised component. A 32 MB cache is pitiful compared to most desktop applications.
 
No, I'm simply trying to work out why faster RAM is important to you. Applications can certainly be huge, but how much of an application needs to be in memory at any one time? How much of the program would actually benefit from faster RAM than we have now? GTA V is 95 GB and MS Flight Sim is 150 GB, but most of that space is taken up by graphics assets. These assets don't benefit from faster RAM, as they're just loaded from SSD to the GPU in the background as they're required. I suspect the parts of the application that need to be fast would probably fit easily into 32 MB. I guess we'll find out when they release this new RAM.
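If anyone wants to poke at this, a crude way to see where cache ends and RAM begins is to time passes over arrays of growing size - a rough numpy sketch (exact numbers depend entirely on the CPU; the jump in per-element time marks where the working set outgrows the last-level cache):

```python
import time
import numpy as np

# Time repeated full passes (sums) over arrays of growing size. While the
# array fits in cache, per-element time stays low; once the working set
# spills into DRAM, the per-element cost rises noticeably.
for size_mb in (1, 4, 16, 64, 256):
    a = np.ones(size_mb * 1024 * 1024 // 8)  # float64 -> 8 bytes per element
    a.sum()                                   # warm-up pass
    t0 = time.perf_counter()
    for _ in range(20):
        a.sum()
    elapsed = time.perf_counter() - t0
    print(f"{size_mb:4d} MB: {elapsed / 20 / a.size * 1e9:.3f} ns/element")
```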
 
Because you can see when apps and games are processor-limited. Increasing RAM bandwidth alleviates that bottleneck. SSDs rarely impact in-game FPS, right?

I don't understand how you can't conceive of increased effective CPU performance having benefits.
 
JEDEC have specified three variants (A, B, and C) for each DDR5 speed - A being the fastest-rated (tightest timings) and C the loosest.

DDR5-4800A is 34-34-34, B is 40-40-40, and C is 42-42-42; compared to something like this DDR4-4800 kit, which is 46-24-24, the A-spec stuff is pretty good.

The advantage DDR5 will have over DDR4 is the far higher achievable transfer rates (the current maximum spec is DDR5-6400).
That seems like a bad sign when it comes to latency. Even 34-34-34 at 4800 MT/s is only middle of the pack compared to the DDR4 standard (which goes all the way down to 15-15-15 at 2400 MT/s, with the loosest bin at 18-18-18 - I've picked 2400 because it requires the least math, but the latencies are comparable across the board). Of course, everyone also knows that the standard timings tend to be very loose (as I recall, XMP profiles of CAS 16 at 3000 MHz were fairly common soon after release), so we'll have to wait and see.
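Running the bins quoted above through the same cycles-to-nanoseconds conversion (a rough sketch; the DDR4 entries are the 2400 MT/s bins mentioned in the post):

```python
# label: (CAS cycles, transfer rate in MT/s)
bins = {
    "DDR5-4800A":     (34, 4800),
    "DDR5-4800B":     (40, 4800),
    "DDR5-4800C":     (42, 4800),
    "DDR4-2400 CL15": (15, 2400),
    "DDR4-2400 CL18": (18, 2400),
}
for name, (cl, mts) in bins.items():
    # absolute latency = CAS cycles / (half the transfer rate)
    print(f"{name}: {cl / (mts / 2) * 1000:.1f} ns")
# DDR5-4800A works out to ~14.2 ns - mid-pack next to DDR4's 12.5-15 ns bins.
```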
 
People don't get how important memory timings are....

Upgrading from a 1080ti to a 3080 gave me about 30 more fps average in Warzone.

Tweaking my memory timings from stock to 3600 MHz CL14-14-14 with Samsung B-die gave me 35 more fps average in Warzone.

In other words, for Warzone, memory timings made a bigger difference than a $700 graphics card upgrade!
 
You have to check what performance improvement the memory timing change gave with the 1080 Ti. I'd say you'd get nowhere near the 35 fps improvement. It came primarily from your 3080 suddenly becoming bottlenecked by your CPU/system memory.
 
You are running quad channel... I think theirs is a single channel. So x4 to get the new 2xxx chipset equivalent.
Good eye, I didn't notice the picture shows a single stick of RAM on that motherboard. Not that the benchmark tells us much anyway; if that's single-stick bandwidth, the performance is good, but the latency is still very bad.

4x the performance if and when Intel brings the HEDT platform back.
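For a rough sense of scale, peak theoretical bandwidth is just transfer rate x bus width x channel count - each channel moves 64 bits (8 bytes) per transfer (a DDR5 DIMM splits that into two 32-bit subchannels, but the total is the same). A quick sketch:

```python
# Theoretical peak bandwidth: MT/s x 8 bytes (64-bit channel) x channels.
def peak_gb_s(transfer_rate: int, channels: int) -> float:
    return transfer_rate * 8 * channels / 1000  # MB/s -> GB/s

print(peak_gb_s(4800, 1))  # single-channel DDR5-4800 ->  38.4 GB/s
print(peak_gb_s(4800, 2))  # dual-channel DDR5-4800   ->  76.8 GB/s
print(peak_gb_s(3200, 4))  # quad-channel DDR4-3200   -> 102.4 GB/s
```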
 
Yeah, that latency increase I'm really not impressed with either. Wonder how that will be addressed... I'm thinking latency will come down once better modules hit the ground.
 
EDO RAM, SDRAM, DDR1, RDRAM, DDR2, DDR3, DDR4, DDR5 - it only goes one way: up.
Smaller process nodes use less power and reach higher MHz/GHz each generation. At first EDO RAM was good enough; Windows 3.11 could handle old EDO RAM but was limited to 1-8 MB, while better Pentium CPUs ran 8-128 MB of RAM.
SDRAM was introduced alongside MMX technology, and CPU/GPU/FSB speeds climbed.
ATA was pushed aside by SATA 1 and 2.

Later, DDR1 would go up to 256 MB-8 GB,
DDR2 went higher, from about 2 GB to 32 GB,
DDR3 ran much faster than DDR2 and was implemented to keep pace with GPU/CPU chipsets,
and with DDR4, PCIe 1.0 GPUs could be used in PCIe 2.0 slots and PCIe 3.0 GPUs in PCIe 4.0, which is where we are now.
The extra bandwidth from DDR5 will feed PCIe 5.0 speeds.
For gaming it will never be the same again.
The speed of next-gen CPUs, GPUs, and sound cards can now scale better and faster.
Even CAD rendering would take less time than on DDR3/DDR4 at the same clock speeds.

New benchmark suites like Unigine and 3DMark must now support DDR5 speeds.

So if we're soon going to throw away DDR4 for faster DDR5, we should try it out first.
Games that ran poorly could now run buttery smooth with this upgrade.
Far Cry 3/4/5/6 with new game textures would rip through for better in-game performance.
Excel would be old but still supported.

 
You have to check what performance improvement the memory timing change gave with the 1080 Ti. I'd say you'd get nowhere near the 35 fps improvement. It came primarily from your 3080 suddenly becoming bottlenecked by your CPU/system memory.
Actually the big issue is that Warzone just isn't optimized for the 3000 series cards AND it is the most memory performance demanding game I've ever played. Plenty of people on the latest Intel 6 cores report similar performance gains with tighter memory timings. Look at benches of a 3060 vs 3080 in Warzone at 1080p.... there's barely any difference.
 
Sure, I can report plenty of failures with the same timings and the same graphics card, but in benchmark land hearsay is pointless.
A performance uptick in one section of a computer results in x% faster performance in a given application. Benchmarks can be that simple: when we speed something up, something runs faster, time after time. Repeatability in our computers is fundamental; if it only works once, the value is zilch.
The "1080 Ti + memory timings" run should show a performance increase if the timings made anything faster, or a decrease if they're slower. 35 fps sounds improbable. Optimization is a software wild card, but for benchmarking both sides must otherwise be equal.
 
It will be VERY expensive when it does come out, so it may be several years after that before it makes any economic sense to build a PC with it.

It's out now and it's by no means expensive. Your comment aged like a wet loaf of bread.
 
It isn't in the stores where I live (Calgary, Canada). Is it out where you are? How much? Are there motherboards, and are there CPUs for it? Is it for consumers, or for servers and such?
 
It isn't readily available based on the searching I've done so far, but it has been stated as being "available", so take that with a grain of salt in today's post-COVID world of electronics component shortages. Intel should have boards and processors supporting it later this year, but take that with as much salt as before. It is slated for general consumption, so adoption should be reasonable by late 2022, after AMD has released new hardware (AM5/Zen 4) that supports it. The point was that it's basically like previous DDR releases before it and will likely be in your next or subsequent major upgrade.
 