Ryzen 9 3950X vs. Core i9-9900KS Gaming, Feat. Tuned DDR4 Memory Performance

Julio Franco

All in all, it seems like a lot of messing around in the BIOS for not much reward.
Anything above 5% is significant, though. This is a great article. It shows the memory bottleneck in some titles through tighter timings, mostly on Ryzen, although we've seen similar gains in earlier articles covering tighter timings on the Ryzen 7 3800X and overclocking the Infinity Fabric.
 
Love the article, it puts some hard data to the assumptions and claims you hear; thanks for taking the time to run through all those games. I had an article idea for you: I saw that you tested Fortnite in DX12, and I think it would be interesting to see the performance uplift, if any, across a selection of Nvidia and AMD GPUs, comparing DX12 to DX11 performance. I've seen a few YouTube videos with side-by-side comparisons, but it's tough to tell whether the uplift is consistent or how large it is. Anyway, an idea in case you're looking for future tests.
 
You didn't tune the memory, you skewed the test in favour of the AMD platform and still didn't manage anything. It's still slower.

If you want to "tune the memory", do the same for both platforms.

AMD benefits mainly from lower latency and tighter timings; Intel benefits from higher memory clocks and thus increased bandwidth. So for Intel, 4400 MHz at CL18.

3600 MHz CL14 is 7.78 ns.
4400 MHz CL18 is 8.18 ns, yet Intel would perform better than on the CL14 kit because its cores are bandwidth starved.
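
For anyone who wants to check those numbers: CAS latency in nanoseconds is cycles x 2000 / transfer rate, since DDR moves data twice per clock. A minimal Python sketch (the function name is just for illustration):

def cas_latency_ns(transfer_rate_mts, cas_cycles):
    # DDR transfers twice per clock, so one cycle lasts
    # 2000 / transfer_rate nanoseconds.
    return cas_cycles * 2000 / transfer_rate_mts

print(round(cas_latency_ns(3600, 14), 3))  # 7.778 ns
print(round(cas_latency_ns(4400, 18), 3))  # 8.182 ns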
 
I game with Core i9...

But a warning to anyone who wants one: you'd better have a liquid cooler.

My stock Alienware AIO and my EVGA AIO both do a great job, but air coolers are too loud and don't cool well enough for me.
 
In my 25 years as a tech enthusiast, I've never seen memory latency benchmarks listed as "higher is better". Couldn't get past that and stopped reading.

Maybe something has changed and I'm missing it? But higher latency has always been bad. WTF?
 
The best DDR4 kit for Zen 2 would be a 3600 MHz dual-rank (2x16 GB) kit; overclock the memory and memory controller to 3800 MHz and it beats any 3600 MHz single-rank kit with tight timings any day. And who would buy a 3950X with only 16 GB of RAM, lol (you can't run tight timings with a 4x8 GB kit).
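
For context on the 3800 MHz figure: Zen 2 performs best when the Infinity Fabric clock (FCLK) stays coupled 1:1 with the memory controller clock (MCLK), which is half the DDR transfer rate, and FCLK typically tops out around 1900 MHz depending on the chip. A rough sketch of that check, with the cap treated as an assumption:

# Zen 2 runs best with FCLK == MCLK (1:1 mode); MCLK is half the
# DDR transfer rate. The FCLK ceiling is chip-dependent; 1900 MHz
# is a typical value, assumed here.
FCLK_CAP_MHZ = 1900

def runs_one_to_one(transfer_rate_mts):
    mclk = transfer_rate_mts / 2  # memory controller clock in MHz
    return mclk <= FCLK_CAP_MHZ

print(runs_one_to_one(3600))  # True  (MCLK 1800 MHz)
print(runs_one_to_one(3800))  # True  (MCLK 1900 MHz, right at the cap)
print(runs_one_to_one(4400))  # False (MCLK 2200 MHz drops to 2:1 mode)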
 
3600 MHz CL14 is 7.78 ns.
4400 MHz CL18 is 8.18 ns, yet Intel would perform better than on the CL14 kit because its cores are bandwidth starved.
I like TechSpot, and I have a lot of respect for the reviewer, great guy, but it's been a while since I've seen a fair article on TechSpot.
AMD's $400 GPU against Nvidia's old 2060 Super (which was a crappy GPU with an overpriced release), AMD's 16/32 3950X against Intel's 10/20 CPU, and now tweaked-timing AMD chips against basically untweaked Intel setups.
Let's see a review of a $400 8/16 3800X vs. a $450 8/16 9900K in games only, so we can see how, core for core, Intel's old stuff still wipes the floor with AMD's new stuff in games and still bests it in some benchmarks.
I like Ryzen, it's a stellar architecture with great latency for its speed and impressive multicore performance, but core for core, it's really only marginally better, sometimes slower in application benchmarks, and much slower in gaming.
 
Wow, just wow... All these cores (and this article) wasted on 1080p. If you have the money to buy the best CPU and RAM, why the H would you think anyone in this group even cares about 1080p? I mean, we aren't reviewing RX 580s, are we?
 
Consumers shouldn't have to do this. Maybe AMD will get it right with Zen 3. Maybe.

I've been tweaking memory for as long as I've been building computers, all the way back to Socket A AMD and earlier.

So for your standard novice computer user, I agree.

For a tech enthusiast, it's par for the course.


The best DDR4 kit for Zen 2 would be a 3600 MHz dual-rank (2x16 GB) kit; overclock the memory and memory controller to 3800 MHz and it beats any 3600 MHz single-rank kit with tight timings any day. And who would buy a 3950X with only 16 GB of RAM, lol (you can't run tight timings with a 4x8 GB kit).

 
Wow, just wow... All these cores (and this article) wasted on 1080p. If you have the money to buy the best CPU and RAM, why the H would you think anyone in this group even cares about 1080p? I mean, we aren't reviewing RX 580s, are we?
Because this is a C-P-U test, not a G-P-U test. You run at a lower resolution to make the CPU the bottleneck, so you can see how memory timings benefit CPU performance in a CPU-bound scenario, rather than waiting 10 years for a game to come out that will stress current-gen CPUs.

Reading comprehension is hard.
 
You didn't tune the memory, you skewed the test in favour of the AMD platform and still didn't manage anything. It's still slower.

If you want to "tune the memory", do the same for both platforms.

AMD benefits mainly from lower latency and tighter timings; Intel benefits from higher memory clocks and thus increased bandwidth. So for Intel, 4400 MHz at CL18.

3600 MHz CL14 is 7.78 ns.
4400 MHz CL18 is 8.18 ns, yet Intel would perform better than on the CL14 kit because its cores are bandwidth starved.


Both platforms are using tuned memory; pay more attention to the article. He didn't skew anything.
 
I like TechSpot, and I have a lot of respect for the reviewer, great guy, but it's been a while since I've seen a fair article on TechSpot.
AMD's $400 GPU against Nvidia's old 2060 Super (which was a crappy GPU with an overpriced release), AMD's 16/32 3950X against Intel's 10/20 CPU, and now tweaked-timing AMD chips against basically untweaked Intel setups.
Let's see a review of a $400 8/16 3800X vs. a $450 8/16 9900K in games only, so we can see how, core for core, Intel's old stuff still wipes the floor with AMD's new stuff in games and still bests it in some benchmarks.
I like Ryzen, it's a stellar architecture with great latency for its speed and impressive multicore performance, but core for core, it's really only marginally better, sometimes slower in application benchmarks, and much slower in gaming.
Just to make sure, this is an 8c/16t vs. a 16c/32t, correct? If so, is AMD really struggling this badly still? Just asking.
 