Intel Core i7-8700K, i5-8600K, 8400 versus AMD Ryzen 7 1800X, R5 1600X, 1500X

So you’re suggesting that no one commenting here is stating that Ryzen is a better buy?

Oh, now you changed the question. A better buy? Yes, maybe it is, depending on whether you game or not. For gaming though, no it isn't. Do you disagree with any of that?

Also, 720p is the best indication of future performance out of all the tests.

It's the best indication of performance on current games on future graphics cards. It doesn't tell you anything about future games though.

Or are you suggesting that the slower chips at 720p today will actually end up being faster in future titles?

I'm not saying they will, I'm suggesting that they might. I already gave you some examples: i5 4670K vs i7 4930K. The i5 was better in 2012-2013 games; it isn't anymore. R5 1600X vs i5 7600K: the i5 is a lot faster in older games, but the gap has closed, or the R5 has surpassed it, in newer games.
 
The only thing you've "proven" is that you're still incapable of understanding why 720p benchmarks exist. Likewise, "lack of proof of the future is proof of a negative" is a logical fallacy, not some 'clever' argument.

Actually, I do. And I find them very interesting. But the conclusion I draw from a 720p benchmark is which CPU is faster in current games, not in future games that don't exist yet. Your point about headroom would be absolutely spot on if the faster CPU actually did have more headroom, for example lower CPU usage, i.e. the resources to push a GPU faster. But that's not the case.
The real observation is: "For over 20 years, people have benchmarked at lower-than-normal play resolutions to eliminate GPU bottlenecks. The difference in fps between that and normal resolution gives an idea of how much overhead you have when upgrading the GPU but keeping the same CPU 1-2 years later."
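To put numbers on that idea, here's a toy back-of-the-envelope sketch (all figures are hypothetical, purely for illustration):

```python
# Hypothetical example numbers, not measured results.
fps_720p = 140   # low-res run: GPU bottleneck removed, the CPU sets the ceiling
fps_1080p = 90   # normal-res run: the current graphics card is the limiter

headroom = fps_720p / fps_1080p - 1
print(f"~{headroom:.0%} headroom for a GPU upgrade on the same CPU")
# -> ~56%: in today's games, a faster card could lift frame rates toward
#    ~140 fps before this CPU itself becomes the bottleneck.
```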


That's true. The reason is, for over 20 years single thread was king, so the faster current CPU was going to be faster 3-4 years down the road. Multiple cores didn't mean much frankly, up until very recently.

And yes, that's a future prediction you absolutely can "take to the bank", because no game dev in their right mind is going to throw away 80-90% of their 2019 AAA sales just because someone bought a Threadripper and wants it to become the new minimum standard. That's not how the real world works at all, enthusiast PC hardware or not.

I also agree on that, but you need to realize you are making a prediction. That's not a hard fact, you are just guessing. Sure, you have some data (like current-gen consoles), but that data is subject to change. Also, not all games are multiplatform; some are PC exclusives.
 
Just out of curiosity, do the people who say that 720p performance is not an indicator of future performance think that the chips which perform worse at 720p now will perform better than their competition at 720p in the future?

Nope, if you actually bothered to read my posts you would realize that that's not what I'm saying. I'm saying I don't have a clue; it all depends on how much more threaded games will become. Two years ago four cores was king, now even the six-core 8400 gets 100% utilization in BF1 64-player multiplayer at about 100 fps. If the number of usable threads keeps increasing then yes, the "slower" CPUs with more resources will come out on top. You know, exactly like it happened in the past. i5 4690K vs i7 4930K, for example.

Of course 720p testing isn't 100% accurate for the future. But it does show how fast a game can run when the limiting factor is the CPU. It is the best possible way of determining how fast a CPU can run games. Typically, if chip A is faster than chip B at a given task in year X, then chip A will be faster than chip B in year X+3. It's not difficult logic.

I agree, it's the best metric for what you should buy right now for gaming. That's why I'm suggesting the 8400 over the R5 1600. I wish the cheaper mobos were out though, because right now it's kind of a toss-up.
 

Hard|OCP's low resolution testing for their 2500K and 2600K review back in 2011 showed the 2600K to be more than 20% faster than the 2500K. Meanwhile my own tests at 1080p and 1600p showed little to no difference at all.

https://www.hardocp.com/article/2011/01/03/intel_sandy_bridge_2600k_2500k_processors_review/4

Might be worth considering that.

They also showed the 2600K to be 54% faster than the Phenom II X6 1100T and we all know that's very true today....
https://images.hardocp.com/images/articles/1293839528CCXLXmKatJ_4_1.png

I'll admit, I messed up by not providing low res testing back then....
https://static.techspot.com/articles-info/353/bench/Gaming_03.png
 
Yes, your benchmarks were severely GPU bottlenecked. Thing is, of course the 2600K will always perform faster than the 2500K: they are the same processor at the same frequency, but the 2600K has more threads. On the other hand, the 4930K vs 4670K comparison I made is somewhat similar to the R5 1600 vs i5 7600K. The 4930K had less single-thread performance (due to lower clocks, I'm guessing) but way more threads.

So what I'm saying is, we don't know which is going to win 3 years from now: single-thread performance or number of threads. Sure, if the 8400 had the same multicore performance then obviously it would always be faster than the R5 1600. But that's not the case.

PS: Could you do a BF1 64-player multiplayer benchmark between the two? You know, 3200 MHz RAM, clock it to the 3.8 GHz that's 100% achievable by everyone, and give it a go. Both with a 1080 Ti and a Vega 64. I'm kinda curious, because I really think the 8400 will choke the GPU more than the 1600 will, based on some weird benchmarks I've seen on YouTube.
 
Intel CPUs will work at DDR4-4000 speeds with all DIMMs populated; they aren't nearly as sensitive.
I'm only quoting to get your attention. But FYI, slower chips give you higher frame rates in Civilization. Gamers Nexus tested the turn completion time and found that the chips that completed turns the quickest had the lowest average frame rate. This is because there isn't much going on visually whilst you're waiting for the turn to complete. As a lifelong Civ player I can vouch that faster turn completion is far better than a higher average frame rate.

I think this is quite a big spanner in the works of your testing, my apologies. Especially your dollars-per-frame assessment etc. To display results that show the best chips are the ones with the highest frame rates in this game is very misleading. You should use turn completion time; it's a far more reliable CPU testing methodology anyway. Also, your game average results will be quite heavily affected, as will your value charts, as Civ is one of the only titles that you state Ryzen "wins" at.
 
Thing is, of course the 2600K will always perform faster than the 2500K: they are the same processor at the same frequency, but the 2600K has more threads.

I'm going to nitpick here a bit. More threads does not always mean more performance (assuming everything else stays constant). Multithreading always comes with a cost, since the efficient distribution of tasks becomes more complicated, and there are cases where you may actually end up losing performance because of this. I know there are many cases where multithreading helps, but even in gaming it's not always the case. Take for example the Total War: Warhammer (DX12) results for the R3 1200 and R5 1400 from Techspot's 'Ryzen 3: The Ultimate Gaming Benchmark Guide'. The R5 got (avg/1%) 72 fps / 58 fps and the R3 got 69 fps / 58 fps. Considering that the R3 runs at a 100 MHz lower clock speed, the difference is negligible and there does not seem to be any benefit in having those extra threads.

When Ryzen came out, there were cases where disabling SMT improved gaming performance, which was then attributed to poor optimization for the new architecture. However, some people claim to have observed benefits in disabling HT on Intel CPUs as well (http://www.overclock.net/t/1588555/gaming-benchmarks-skylake-core-i7-hyperthreading-test), and I doubt the same optimization card can be played there, since Intel's architecture hasn't gone through any massive changes in recent years (AFAIK). It would actually be interesting if Techspot could do an in-depth article about the effect of SMT/HT in gaming. Comparing the i5-8600K and i7-8700K doesn't quite do the trick, since they have different amounts of cache. And while we already have some idea of the benefits of SMT in 4/4 vs 4/8 scenarios, this does not mean that the R5 1600 would necessarily benefit from SMT to the same extent. So maybe an i7-8700K "vs" R5 1600, both with multithreading on and disabled, 30 games tested with Vega 64 LC and GTX 1080 Ti... Is that Steve I see running for the hills?
 
This is to be expected. Four memory modules stresses the memory controller more than two modules, so it's not surprising that the amount of modules also affects the achievable overclocking results. In fact, for several AMD CPUs from FX to Sempron, even the maximum supported _stock_ memory speed has varied depending on the amount of modules used, as well as the amount of available slots and the rank of the modules. For example for the FX line the officially supported speeds are as follows:

2 slots available, 2 populated: 1866 MHz
4 slots available, 2 populated: 1600 MHz
4 slots available, 4 populated, single rank memory: 1600 MHz
4 slots available, 4 populated, dual rank memory (or both dual and single rank): 1333 MHz

I don't know if Intel CPUs have been similarly sensitive to the memory configuration, but it's hard to see any point in using 32 GB of memory. The best case scenario is that it has no effect, but the worst case scenario is that it affects the maximum stable OC you can achieve.
Do AMD players know of this shite? They do now. Any that read good reviews, that is.
Those are some serious limitations. Turned me off AMD 100% right there. My Rampage IV Extreme has 8x4 GB sticks of 2133 DDR3 set in XMP with tightened timings and a decent OC. Come to think of it, all my Intel rigs could max out the memory without issue, and overclock well. Even my old nForce 790i Ultra has all four slots filled, no issue. The old 790i Ultra even ran 4x4 GB sticks of RAM for 16 GB total (which didn't even exist upon release), linked and synced at over 1700 MHz. Intel didn't like that nForce chipset, or the nForce 200 chip that split up the PCIe lanes giving three full 16x slots for 3-way SLI goodness at full tilt, lol. Don't know why Intel didn't license that chip. Yes I do: they would never sell the bigger CPU with all the PCIe lanes then, would they?
 

I doubt many people are aware of it, but that's probably because it wasn't really an issue. Back then 8 GB of RAM was more than enough for gaming and with 2x4 GB people were likely able to overclock their memory to 2100 MHz or beyond. From what I've heard, AMD CPUs used to like tighter timings more than clock speed and going beyond 1600 MHz didn't really bring big gains, so I doubt the limitations were really observed in real-world use. At least with my Phenom II X6, the only differences I saw going from the stock 1333 MHz CL 10 to 1600 MHz CL 9 were in memory benchmarks. The bandwidth just isn't needed with the older hardware; the bottlenecks are elsewhere.

It's funny how the tables have in a way turned. Ryzen doesn't have differing official figures for different memory configurations (AFAIK), but now you can in some cases actually obtain real-world FPS gains when using RAM with high clock speed - at least if the CPU is paired with something like a GTX 1080. In addition, the timings seem to not matter that much. Would be nice to see more testing on this as well, though.
 

With my 3570K, I recently upgraded from two sticks of 1600 MHz to 2400 MHz and it's like night and day. I also get a jump in performance in Cinebench just by going from CAS 11 to CAS 10. So the memory sure does make a difference with the new memory controllers.
 
I'm only quoting to get your attention. But FYI, slower chips give you higher frame rates in Civilization. Gamers Nexus tested the turn completion time and found that the chips that completed turns the quickest had the lowest average frame rate. This is because there isn't much going on visually whilst you're waiting for the turn to complete. As a lifelong Civ player I can vouch that faster turn completion is far better than a higher average frame rate. I think this is quite a big spanner in the works of your testing, my apologies. Especially your dollars-per-frame assessment etc. To display results that show the best chips are the ones with the highest frame rates in this game is very misleading. You should use turn completion time; it's a far more reliable CPU testing methodology anyway. Also, your game average results will be quite heavily affected, as will your value charts, as Civ is one of the only titles that you state Ryzen "wins" at.

I'm not using the AI benchmark, I addressed this issue. It doesn't impact the results seen here for Civilization.

I'm only interested in average frame rate for that game as it points to how other well-made DX12 titles in the future might perform.
 
Curious. Perhaps you should explicitly state in your articles that whilst Ryzen may have better average frame rates, it actually has slower turn completion times and results in a worse end-user experience than Intel chips? Many people will look at your results and mistakenly think that Ryzen is a better chip to buy if they are playing Civ. This is misleading, and I know that you don't want to mislead your readership.

Furthermore, how exactly is measuring the frames per second of a game that is constantly waiting on the CPU to finish a good measure of DX12? I’m genuinely really struggling to understand that logic. Surely the turn completion time would be a better indicator of this?

https://www.gamersnexus.net/hwreviews/3086-intel-i5-8400-cpu-review-2666mhz-vs-3200mhz-gaming/page-4
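For what it's worth, the methodological difference is easy to sketch in code. A minimal toy sketch (hypothetical stand-in workload, not the actual Civ AI benchmark): time the work the player is actually waiting on, rather than counting the frames drawn while waiting.

```python
import time

def do_turn_work(units: int) -> int:
    # Stand-in for the per-turn AI/simulation work a turn-based game runs on the CPU.
    return sum(i * i for i in range(units))

start = time.perf_counter()
do_turn_work(2_000_000)                  # the part the player actually waits on
turn_time = time.perf_counter() - start
print(f"turn completed in {turn_time:.3f} s")  # the metric a Civ player feels

# Average fps is measured separately by the engine while this runs; a CPU that
# finishes the turn sooner has less idle "waiting" time in which to draw frames,
# which is how a faster chip can post a lower average frame rate here.
```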
 
So, now the Ryzen memory controller issue is out of the bag, so to speak, and a serious issue it is.
Will it be fixed with Ryzen 2? Are there similar issues with Threadripper's memory controller, not being able to load all memory slots with fast memory and maintain speeds and timings? Steve, I guess, is who I'm asking. I don't trust an answer from the fan club!
FLAME ON!
I mean, this itself is downright dishonest. How many people bought the marketing B.S., then bought Ryzen and a nice kit of fast LED memory, be it G.Skill or Corsair, only to discover that they are better off leaving two sticks of their premium kit in the package? I would dump the whole kit, cram and corruption for a proper system that's not lying to me blatantly. What junk!
And the shills know of this and still spout the same marketing BS. No shame in a shill!
FLAME OFF! (for now)
Now I think Steve should retest with all banks populated. Not his problem that the memory defaults to 1333 or whatever it defaults to. I want an honest, objective review to make an informed buying decision, as do all readers. That's why we are here. Ya got my blood to a boil, I'm making steam!
I'm having trouble getting the flames to subside!
But I'm so, so happy that I didn't buy a Ryzen setup. And I was so close.
 
It's slow at just 1080p for sure! Especially relative to the alternatives. Just in Project CARS 2, the 1500X is way slower than a mere 7600K, enough to make a very noticeable difference.

Neither of those games is particularly optimized for Intel, but the fact is, if you have Intel then you know you can't go wrong, as it'll be the target architecture. As long as it stays that way, it's just another reason for consumers not to change. On that note, I was looking at Zelda on CEMU today, and if you have an Intel CPU and Nvidia GPU it works a charm. Anything AMD, not so much.

There is plenty AMD can do, but blaming everybody else does nothing.

AMD has a great CPU architecture. It's not something AMD can fix if developers are lazy.

Over a year later again! Welcome to the party!
Ryzen 2 had better be faster than these current Intel chips, or we'll be doing this all over again, lol. :(

What do you figure Intel will be doing in the meantime? Sitting back on their laurels, waiting for it? I doubt it. They will be ready to counter again, rest assured. IMO, had to throw that in. No all-seeing third eye here!

I've just got to get me one of those crystal ball/time machine things you guys use to read those future benchmarks and make such wild statements. Can't be just opinions, can they? Keep the faith!

Things Intel has done after the Skylake launch (2015):

- Putting more cores into CPUs.
- More clock speed for CPUs.
-
-

I had to charge my phone, lol.

So, more crystal ball/time machine type predictions. Good stuff. Seems like you know what the game devs are thinking: finally gonna do some FREE optimizing on some games for AMD.

Phone?

Remember that Intel has more cores too now.
 
I'm going to nitpick here a bit. More threads does not always mean more performance (assuming everything else stays constant). Multithreading always comes with a cost, since the efficient distribution of tasks becomes more complicated, and there are cases where you may actually end up losing performance because of this. I know there are many cases where multithreading helps, but even in gaming it's not always the case. Take for example the Total War: Warhammer (DX12) results for the R3 1200 and R5 1400 from Techspot's 'Ryzen 3: The Ultimate Gaming Benchmark Guide'. The R5 got (avg/1%) 72 fps / 58 fps and the R3 got 69 fps / 58 fps. Considering that the R3 runs at a 100 MHz lower clock speed, the difference is negligible and there does not seem to be any benefit in having those extra threads.

When Ryzen came out, there were cases where disabling SMT improved gaming performance, which was then attributed to poor optimization for the new architecture. However, some people claim to have observed benefits in disabling HT on Intel CPUs as well (http://www.overclock.net/t/1588555/gaming-benchmarks-skylake-core-i7-hyperthreading-test), and I doubt the same optimization card can be played there, since Intel's architecture hasn't gone through any massive changes in recent years (AFAIK). It would actually be interesting if Techspot could do an in-depth article about the effect of SMT/HT in gaming. Comparing the i5-8600K and i7-8700K doesn't quite do the trick, since they have different amounts of cache. And while we already have some idea of the benefits of SMT in 4/4 vs 4/8 scenarios, this does not mean that the R5 1600 would necessarily benefit from SMT to the same extent. So maybe an i7-8700K "vs" R5 1600, both with multithreading on and disabled, 30 games tested with Vega 64 LC and GTX 1080 Ti... Is that Steve I see running for the hills?

SMT sometimes slows things down because the CPU must "guess" which threads get higher priority. A wrong guess means the "wrong" thread gets executed first, and that means lower performance. That's why SMT mainly helps in situations where it really doesn't matter which thread gets higher priority, as the main goal is to calculate as much as possible.

That also means the SMT advantage or disadvantage depends on the game used.

It is plain and obvious for everyone to see that Hardrest, Strawman and Puiu are the same sock puppet of the same paid AMD shill. They are in the comment section to do AMD's marketing and parrot the AMD marketing lines. Do NOT be fooled by their lies.

They are also disrespectful to Steve the author and directly hostile to anyone that posts anything they do NOT like. The mods should ban those sock puppet accounts.

Typical Intel fanboy.

So, now the Ryzen memory controller issue is out of the bag, so to speak, and a serious issue it is.
Will it be fixed with Ryzen 2? Are there similar issues with Threadripper's memory controller, not being able to load all memory slots with fast memory and maintain speeds and timings? Steve, I guess, is who I'm asking. I don't trust an answer from the fan club!

Threadripper uses the same cores as Ryzen, so the memory controller is also the same.
 
AMD has a great CPU architecture. It's not something AMD can fix if developers are lazy.



Things Intel has done after the Skylake launch (2015):

- Putting more cores into CPUs.
- More clock speed for CPUs.
-
-



Phone?

Remember that Intel has more cores too now.

Yeah, reading and posting on a very sarcastic Alcatel Idol 4S with Windows 10. I absolutely love it!
I'm at my acreage, no grid power for a PC. And it's hunting season and I finally got a moose license for my area, so nothing is safe, lol. :)
 
Great article and work. Too bad that games are not optimized for the Ryzen arch, but maybe in the future, with better/custom compilers and DX12, the situation will change. One thing is for sure: Ryzen CPUs are not inferior from a raw performance point of view, which compute tasks show; it's just that game companies have not had the time and opportunity/support from AMD to optimize their engines for Ryzen as well.
 
SMT sometimes slows things down because the CPU must "guess" which threads get higher priority. A wrong guess means the "wrong" thread gets executed first, and that means lower performance. That's why SMT mainly helps in situations where it really doesn't matter which thread gets higher priority, as the main goal is to calculate as much as possible.

It's a bit more complicated than that. Remember that multithreading does not (for the most part) increase the resources the CPU has for doing calculations; it's just a way of trying to use the same resources more efficiently. Sometimes it's not a matter of making a bad guess on which task to prioritize (or which CPU and thread to assign a task to); sometimes a lot of the tasks just cannot be performed efficiently in multiple threads, because they already take up so much of the CPU's resources. In these cases the CPU just wastes time trying to fit all the puzzle pieces together. Scalability also affects how much benefit SMT can bring. Even if the tasks themselves don't take enough resources for shared resources to start bottlenecking execution, the tasks can be dependent on one another, so that with more cores and threads in use the CPU spends more and more time waiting for other tasks to finish before moving on; and the more cores and threads you have, the more time the CPU will use in deciding how to distribute the tasks. And these are just very simplified examples.

In a nutshell, it's a complicated issue with a lot of variables. The result depends on the game, the operating system, other programs being run in the background, the CPU architecture, the CPU microcode and probably some other factors as well.
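As a deliberately pathological toy sketch of that coordination cost (not a real CPU benchmark): hand out trivially small tasks one at a time and the distribution overhead swamps any gain from extra workers.

```python
# Toy illustration of task-distribution overhead, not a real CPU benchmark.
import time
from multiprocessing import Pool

def tiny_task(x: int) -> int:
    return x * x  # so little work per task that hand-off costs dominate

if __name__ == "__main__":
    data = range(200_000)
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            # chunksize=1 forces one hand-off per task, maximizing overhead
            pool.map(tiny_task, data, chunksize=1)
        print(f"{workers} workers: {time.perf_counter() - start:.2f} s")
    # More workers don't help here (and can hurt): the time goes into
    # scheduling and inter-process traffic, not into useful computation.
```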
 
FLAME ON!
I mean, this itself is downright dishonest. How many people bought the marketing B.S., then bought Ryzen and a nice kit of fast LED memory, be it G.Skill or Corsair, only to discover that they are better off leaving two sticks of their premium kit in the package? I would dump the whole kit, cram and corruption for a proper system that's not lying to me blatantly. What junk!
And the shills know of this and still spout the same marketing BS. No shame in a shill!
FLAME OFF! (for now)

What marketing BS? First of all, the "issue" is that the amount of memory modules may affect the maximum stable memory overclock that can be achieved. The word "issue" is in quotes, since this is more or less a no-brainer for anyone who's done overclocking. If anything, it's a bit surprising that Intel's memory overclocks apparently do not depend on the amount of modules. Second, no overclock is ever guaranteed, even if you have just two modules and the same exact components as the source you're basing your buying decisions on. AMD, Intel, and the motherboard and memory manufacturers will never promise that you will achieve a certain overclocked speed with a given set of modules. So, if the customer ends up feeling cheated, they've likely just made an uninformed buying decision.

Now I think Steve should retest with all banks populated. Not his problem that the memory defaults to 1333

Memory never defaults to 1333 MHz on Ryzen. The fall-back RAM clock speed for Ryzen is 2133 MHz, which you can also get with two modules if you make a poor choice of RAM. Like I said before, there are no official maximum stock speeds for different memory configurations on Ryzen. There's only one maximum supported speed, which is 2667 MHz. The numbers I posted were for the FX CPUs (FX-8370, FX-6350 etc.).
 

Yeah, Intel's memory controller works, so it would make no sense to buy anything higher than 2666 MHz memory in any more than one or two sticks, as it would underclock itself.
Who mentioned overclocked speeds? I'm waiting for the overclocking portion of this review. Anxiously waiting.

I said 1333 or whatever it defaults to. 2133? Fine! Still broken! Gimped, or whatever you or AMD like to call it. Still, anyone who bought Ryzen or Threadripper for a workstation (i.e. productivity) and would like to use lots of fast memory, or even an enthusiast such as myself where money is not the driving factor, say a Gigabyte Aorus or an Asus Aura Sync motherboard in a nice windowed case, gets gimped when trying to use all memory slots.
3200 MHz memory running at 3200 MHz is not overclocked, it's at default. So 2666 is the max speed memory to buy for said setup if you wish to fully populate all slots, and then it will default to 2133? Just buy Intel and save the headaches for the fan club. It will never sell to me. I've been overclocking since my P3 and K6 266, both of which I still have.
Typical rationale, though.

So, some productivity benchmarks please, using large data sets, spreadsheets in Excel etc., with a 64-bit Office like Pro Plus that will use lots of fast memory.

I use the like tab for giving likes to deserving posts. There should also be a pity tab for such deserving posts as well, lol.
 
Conclusion: an interesting comparison covering all angles of cost and performance in order to evaluate what's only important to a pure gamer who plans to play relatively current/older-generation games that are not GPU bound, i.e. at 720p. Unfortunately very few folks fall into this category. Over the next year, we'll see games that will utilize multiple cores for various purposes other than simply running the game engine. Also, a key benefit of the Ryzen platform is being able to upgrade to Zen 2 in an affordable manner, which cannot be said for Intel's next 'lake' platform. Discarding the 720p results, Ryzen is a clear winner. Factoring in newer game titles, Ryzen is a winner again. Accounting for overall system performance outside of gaming and future-proofing your investment, Ryzen is the winner again.

11 pity for you guys.
Winner, winner, winner. Steve didn't mention it by the graphs, so I'll put it here.
HIGHER IS BETTER! Got that?
Correction: it is there, just very small.
Sorry, didn't see all the likes for this comment until I reread them a few times, lol.
Another reader with a crystal ball, all-seeing third eye.
Yes, just discard the 720p results, they don't mean anything (facepalm). FUTURE PROOFING, yet again? Why bring Zen 2 so soon if Ryzen is so future proof? You've been beating that dead horse pretty hard, haven't you? It's DEAD already!

In an affordable manner? What does that mean?

That there will be NO new features coming with Zen 2, other than a new CPU. Hope they fix the memory controller.

Are they gonna fix the broken memory controller?
I would like Steve to add input here. I value his expertise, and I'm sure he's quite happy with his new Threadripper being unable to utilize all the slots on his mobo with his premium kit of memory. Haven't seen DBZ around for a while; I'm sure he has some valuable input as well!
 
Who mentioned overclocked speeds?

Steve did. "Ryzen still has issues when all DIMMs are populated and as a result can't run at 3200 speeds." That 3200 MHz is an overclocked speed. It's not at all impossible to run four memory modules at 2667 MHz, as long as you buy based on the motherboard's memory QVL (which you should do in any case, also with Intel).

3200 MHz memory running at 3200 MHz is not overclocked, it's at default.

Not quite. The modules themselves are not overclocked beyond their capabilities, but it's still an overclock for the memory controller. Furthermore, most of those memory ratings are for Intel platforms. Just take a look at some AM4 motherboard's memory QVL and you'll see how well those Intel specs hold up. Spoiler: they're a poor indicator of actual performance.

So 2666 is the max speed memory to buy for said setup if you wish to fully populate all slots, and then it will default to 2133?

Max stock speed is 2667 whether you have two or four modules. You might still be able to overclock RAM in both cases. The actual speeds you will reach - or default to - will depend on the CPU, the motherboard and the memory.

Just buy Intel and save the headaches for the fan club. It will never sell to me.

Intel's maximum stock speed for Coffee Lake CPUs is 2666 MHz, by the way. Anything above that is an overclock for the memory controller. For Kaby Lake it was 2133/2400 MHz. As with AMD's CPUs, no overclock speed is guaranteed and depending on motherboard choice, you may also be limited in your options for overclocking. In other words you still need to do your homework, even if in general memory support is better with Intel and that stock speed can be considered quite conservative.
 
Intel's memory controller goes all the way to 4000 MHz and beyond, by the way, and also has XMP capability, so it's just a click in the BIOS to find out if memory will run at those speeds. Yes, the QVL is important if you're not sure which memory to buy, or are trying to cheap out with sale items.
One word: "CORSAIR"! Another word: "DOMINATOR".
My go-to memory source. XMP supported, overclocking bliss.
I've only had to view the list when I bought some refurb or clearance-priced memory. OCZ Titanium Alpha and the Spec Ops DDR2 wouldn't work on a few Gigabyte boards back in the day without some tweaking: set up with one stick, then add the others. Was quite a pain; clear CMOS, reset to defaults, do it again, do it again.

Disclaimer, quoted from the read:
"Please note none of the CPUs were overclocked, but we do plan to do an overclocked version of this test soon."

Be ready with the comments.
 