AMD Ryzen 9 3900X and Ryzen 7 3700X Review: Kings of Productivity

I'm sure we'll get more detailed analysis once the dust has settled, but for now I think it's pretty obvious which CPU has the best bang/$ in gaming: the i7-9700K. It costs slightly more than the 3700X but significantly less than the 3900X while performing better overall than both of them (in some cases quite significantly in terms of lowest FPS).

Hm, wouldn't the 3950X be very close to the 3700X with a slight OC? Not sure I would consider that a good choice considering the huge price difference and the need to actually disable half of the chip.

Yeah, but when Lisa Su said the 3950X was the best gaming CPU, people were like huh? I've been laughing to myself at the $750 price, waiting to see what the 9900K crowd would say. It was going to come out eventually in the benchmarks. But yeah, I THOUGHT that extra binning would make it the best gaming CPU. 4.7 boost > 4.6 boost. No, it's not economical, but if you have that best-of-the-best wallet, there you go.



p.s. then again I thought the 9900k would get barely toppled....

p.p.s. Were all the security patches applied?
 
I was all set to buy the chip until I saw the prices for the motherboards: they START at $200 and go up to $300-400. Did you see that in gaming the i5-9600K BEAT or tied these two chips? I saw it on sale the other day for $225, and it doesn't need a $300-400 motherboard. That is NOW the chip I want. That was at 1440p/2K.

X570 starts at $170. Otherwise you can pick up X370, X470, or B450 and they will all work.

I'm sure we'll get more detailed analysis once the dust has settled, but for now I think it's pretty obvious which CPU has the best bang/$ in gaming: the i7-9700K. It costs slightly more than the 3700X but significantly less than the 3900X while performing better overall than both of them (in some cases quite significantly in terms of lowest FPS).

Hm, wouldn't the 3950X be very close to the 3700X with a slight OC? Not sure I would consider that a good choice considering the huge price difference and the need to actually disable half of the chip.

Better when paired with a 2080 Ti at 1080p. If you don't have a 2080 Ti or don't play at 1080p, the two CPUs are functionally equal in games. Given that the 9700K isn't the top SKU, you have to figure the person does in fact have a budget. In that case, the included cooler of the 3700X is a bonus.
 
I like the power consumption figures and the performance is good, but one letdown is that it seems AMD still could not pass the 4.2 GHz barrier, and overclockability is still a joke for Ryzen. On the other hand, I saw some R5 3600X vs. i7-8700K benchmarks and the R5 performs close in games.
 
Intel is still faster in gaming for 2 reasons:

1. Intel CPUs have a slightly higher clock. Which wouldn't be that important, if it wasn't for this second point below...

2. Games are still very poorly programmed. Most of them don't use multiple threads. Yes, in the year 2019. That's how crappy they are. Reminds me of the days of DOS gaming, when some games didn't use the GPU. It didn't matter how much money you spent on the GPU when the stupid game used the CPU for rendering. We have a similar situation today with multi-threading. Hardware is getting more and more advanced, while programmers are crappier and lazier every day. Those who know how to parallelize a workload can ask for astronomical salaries.

But if you're gaming and AT THE SAME TIME doing something else in the background, AMD should be the winner. Let's say you're zipping/unzipping/backing up files in the background, or recompressing a video, while playing your favorite game in the foreground.

It would be cool to measure the gaming performance while capturing gameplay and compressing it to a video file. Or while streaming the gameplay on Twitch. That's my benchmarking suggestion for all the streamers out there.

Not really. Anyone who takes streaming seriously uses a two-PC setup, and always will, because you always lose performance when streaming from the same machine you're playing the game on. Plus, the streaming argument gets old when, with Nvidia's new NVENC encoder, you don't even need the CPU and still get performance very close to x264.

The new NVENC encoder is clearly the winner and the best approach for a single-PC streaming setup.
 
Yeah, but when Lisa Su said the 3950X was the best gaming CPU, people were like huh? I've been laughing to myself at the $750 price, waiting to see what the 9900K crowd would say. It was going to come out eventually in the benchmarks. But yeah, that extra binning will make it the best gaming CPU. 4.7 boost > 4.6 boost. No, it's not economical, but if you have that best-of-the-best wallet, there you go.


p.s. then again I thought the 9900k would get barely toppled....

p.p.s. Were all the security patches applied?

p.p.p.s. 9th-gen CPUs are not affected by the recent security flaws. Plus, here we go again with the security-problem hypocrisy. Do you use Android? You're not safe. Do you use iOS? You're not safe. Do you use Gmail or Google? You're not safe. Do you use DDR4 RAM? It can be exploited. Etc., etc.

Next argument please.
 
X570 starts at $170. Otherwise you can pick up X370, X470, or B450 and they will all work.



Better when paired with a 2080 Ti at 1080p. If you don't have a 2080 Ti or don't play at 1080p, the two CPUs are functionally equal in games. Given that the 9700K isn't the top SKU, you have to figure the person does in fact have a budget. In that case, the included cooler of the 3700X is a bonus.

If you don't have a 2080 Ti or don't play at 1080p, then get the 2700 instead, because it costs half the price and will have similar performance.
 
And these are the flagships. The cheaper offerings will only perform worse. Good try, AMD, but this wasn't the competition I was waiting for. What I wanted was something as fast as or faster than Intel in games, with acceptable overclocking, in a 6- or 8-core package.
 
@Steve

Do you have any CAS17 3733 MHz kits to test with? Just wondering what those results would look like, since that's what AMD recommended as the "performance sweet spot"; I don't expect it to be a whole lot more, though.

 
I am disappointed that Intel's grumblings about "real-world gaming" turned out to be right on the money. And these reviews didn't even include the i9-9980XE, which is available now from Intel, while the Ryzen competitor will only come out in September. So you can still get something from Intel that even does Cinebench better than anything from AMD. (For a much higher price, of course.)
But I'm not disappointed enough not to come to the conclusion that AMD is the best choice by far for nearly everyone looking for a new computer today.
AMD is lucky Intel's 10nm desktop chips with AVX-512 support aren't out yet. Since AMD claims to have designed these Ryzen chips for head-on competition with them, it was actually planning to bring to market something that was too little, too late. Luckily for them, that plan failed.
 
First of all, why in the HELL would you do a direct comparison of X570 with B450? Why would you not compare directly with X470, when X470 is already known to be the higher end offering currently? It's like comparing the quarter mile speed of a Mustang with a Pinto and then saying wow, what a car.

More importantly, this:

DDR4-3600 is the fastest spec memory AMD recommends using with 3rd-gen Ryzen as higher clocked memory will actually reduce performance, at least when clocked higher than 3733 as this changes the Infinity Fabric to a 2:1 mode rather than 1:1. Basically 2:1 sees the Infinity Fabric clocked at a quarter of the memory speed, while 1:1 is half.

Since AMD recommends DDR4-3600 for optimal performance and provided us with a CL16 kit, we tested it just to make sure we’re not hampering performance by using the CL14 3200 kit.

The good news is we’re not, here you’ll see virtually identical performance in Corona, WinRAR, Far Cry New Dawn, Assassin’s Creed Odyssey and World War Z from either RAM kit.

I guess my first question would be: why in the hell would AMD design an architecture that gets SLOWER if you increase the memory speed past a certain point? I see that in Paul Alcorn's review on Tom's Hardware he mentions there might be some adjustment of this ratio possible in the BIOS, but the TS article makes no mention of that as far as I can see, and it seems counterintuitive to design an architecture that loses performance with higher-clocked memory. I'm pretty sure none of the true enthusiasts are going to be inclined to use a platform that does that.

Granted, it's nice to see DDR4-3200 as the base speed, but this clarification makes the platform a lot less appealing to anybody who previously believed they were going to be able to use very high-speed memory on it, at any level other than trying to save money while getting fairly similar performance to an existing Intel platform. I totally get that this is a huge step forward for AMD, as it's been a very long time since they were competitive at all on single-core performance, but it just makes me shake my head. Seems like every time AMD does something right, they INTENTIONALLY do three other things wrong. I'm still waiting to find out what the other two are.
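The 1:1 vs 2:1 divider rule quoted above can be sketched numerically. This is a simplified illustration of the behavior the article describes, not AMD's actual clock logic, and the `infinity_fabric_clock` helper is hypothetical:

```python
def infinity_fabric_clock(mem_rate_mts):
    """Approximate Infinity Fabric clock (MHz) for a given DDR4 data rate
    (MT/s), per the 1:1 vs 2:1 behavior described in the review."""
    mem_clk = mem_rate_mts / 2      # DDR4: memory clock is half the data rate
    if mem_rate_mts <= 3733:
        return mem_clk              # 1:1 mode: fabric runs at the memory clock
    return mem_clk / 2              # 2:1 mode: fabric drops to half the memory clock

print(infinity_fabric_clock(3600))  # 1800.0 MHz fabric (1:1)
print(infinity_fabric_clock(4000))  # 1000.0 MHz fabric (2:1)
```

This is why DDR4-4000 can end up slower in practice than DDR4-3600: past 3733 the fabric clock falls off a cliff even though the memory itself is faster.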
 
I am disappointed that Intel's grumblings about "real-world gaming" turned out to be right on the money. And these reviews didn't even include the i9-9980XE, which is available now from Intel, while the Ryzen competitor will only come out in September. So you can still get something from Intel that even does Cinebench better than anything from AMD. (For a much higher price, of course.)
But I'm not disappointed enough not to come to the conclusion that AMD is the best choice by far for nearly everyone looking for a new computer today.
AMD is lucky Intel's 10nm desktop chips with AVX-512 support aren't out yet. Since AMD claims to have designed these Ryzen chips for head-on competition with them, it was actually planning to bring to market something that was too little, too late. Luckily for them, that plan failed.

Considering there are no new consumer desktop chips from Intel on the horizon that offer anything tangible for the next year AT LEAST, and the fact that neither Windows nor any of the Nvidia or AMD graphics drivers have been fully optimized for 3rd-gen Ryzen yet, much less the games themselves, we might see substantial gains in gaming performance AND productivity once some of that has trickled into the pipeline.
 
AMD, thank you for continuing to limit how much Intel can arbitrarily hold back from consumers "just because."

I fully credit AMD for being the sole reason there were six cores on my last Intel CPU, vs. what would have been four, and why there'll probably be 8+ on the next one.

That said I don't think I've ever actually made much use of those two extra cores. I do think it's great that the first core stays at 5.0 GHz while I'm gaming on it, as could the rest if background tasks ever so demanded, which they never do.

For anyone with the right workloads, these new CPUs sound fantastic.
 
Nothing new here. AMD is STILL playing the catch-up game. Pass.

Waiting with popcorn for the AMD warriors. Bring 'em on... **munch...munch**

Ah, yes, the Intel superiority crowd, enjoying their marginal lead in a select few games while basking in almost double the price (and no hyper-threading) and way more power consumption and heat.

Anyone thinking AMD was going to suddenly catch up to Intel after a decade behind is not being realistic.

The fact of the matter is AMD is trading blows with a company 10x its size, and delivering awesome value for customers. How anyone views that as a loss is beyond me.

Double the price? srsly?

i5 9400F is currently much cheaper than R5 3600 and gaming performance is on par
i5 9600K is a little cheaper than R5 3600X and gaming performance is slightly superior (more than slightly with overclocking factored in)
i7 9700K is priced between R7 3700X and R7 3800X and gaming performance is superior (especially with overclocking factored in)

Considering that AMD is already on 7nm (that's HALF Intel's feature size) but still struggling to beat the gaming performance of an architecture dating back to 2011, Zen 2 is pretty underwhelming.
(Power) efficiency and productivity performance are pretty good, though.
 
p.p.p.s. 9th-gen CPUs are not affected by the recent security flaws. Plus, here we go again with the security-problem hypocrisy. Do you use Android? You're not safe. Do you use iOS? You're not safe. Do you use Gmail or Google? You're not safe. Do you use DDR4 RAM? It can be exploited. Etc., etc.

Next argument please.
Maybe you should write a letter to Intel and Microsoft explaining to them why they wasted time and money shipping those mitigations as automatic updates?
https://www.pugetsystems.com/labs/a...9900K-in-Pix4D-Metashape-RealityCapture-1461/
 
Nothing new here. AMD is STILL playing the catch-up game. Pass.

Waiting with popcorn for the AMD warriors. Bring 'em on... **munch...munch**

Ah, yes, the Intel superiority crowd, enjoying their marginal lead in a select few games while basking in almost double the price (and no hyper-threading) and way more power consumption and heat.

Anyone thinking AMD was going to suddenly catch up to Intel after a decade behind is not being realistic.

The fact of the matter is AMD is trading blows with a company 10x its size, and delivering awesome value for customers. How anyone views that as a loss is beyond me.

I was afraid the new chips would fall behind Intel in the gaming benchmarks, and that we would have to listen to the Intel groupies rubbing it in. Well, I suppose it's good the Intel chips have something going for them anyways. Something to keep the conversation going.
 
while programmers are crappier and lazier every day.
You're getting a little carried away here. Go back to the start of the article and look at all those use cases where more cores easily lead to more performance. It's not that programmers don't know how to parallelize where it's clear that parallel processing is a good fit; much more often the issue is how suitable the task is. If you look at the tasks that benefit greatly, you'll often see they involve chunks of work that can be fully independent of each other and are not truly interactive with the user either (i.e., the video editor couldn't care less whether all X cores complete their transcoding within milliseconds of each other).

That's not to say that games are fully optimized; of course developers react to simple economics, such as being unlikely to spend extra money on unusual cases like 16-core performance. Their game needs to run on the mass-market dual-core CPU; they'll spend the effort to make sure that's true, and then probably stop there.
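The "fully independent chunks" point above is the classic embarrassingly-parallel pattern. A minimal sketch (the `transcode_chunk` function is a made-up stand-in for real work like encoding one segment of a video):

```python
from concurrent.futures import ProcessPoolExecutor

def transcode_chunk(chunk_id):
    # Stand-in for one independent slice of work; no chunk depends on
    # any other chunk's result, so there is no coordination cost.
    return sum(i * i for i in range(50_000)) + chunk_id

def transcode_all(n_chunks):
    # Independent chunks spread across however many cores are available,
    # which is why transcoding scales so well with core count.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(transcode_chunk, range(n_chunks)))
```

A game loop is the opposite case: its work items depend on each other frame by frame and must stay responsive to the user, which is why it parallelizes far less cleanly than this.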
 