Core i5-8400 vs. Overclocked Ryzen 5 1600

Intel made a nice move under $80 with the Kaby Lake Pentium and then regretted it because it was selling better than it was supposed to.
AMD can't do much before the AM3+, FM2+, and Bristol Ridge stock goes away. They don't have the money anyway to push too many new models into the market at once. I only wonder if they will create single-CCX Ryzen dies for that market, or if they will just do what they were doing on the FM2 platform: disabling the GPU part of an APU and calling the final product "Athlon".

Disabling the GPU part would likely be the easiest way for them to make dual cores. They wouldn't even have to make a new chip.
 
Once again you don't know what you're talking about. The Civ results are not upside down; this test has nothing to do with AI turns.
How rude. I very much do know what I am talking about. Care to explain the methodology behind the testing? That is all I am asking for. One thing is for sure: your graphs mislead. Intel CPUs are better for end users who are looking for the best Civ experience.

I can get behind a different testing methodology, but I don't see it here. All I am asking for is some clarity; can you provide this, please?
 
How rude. I very much do know what I am talking about. Care to explain the methodology behind the testing? That is all I am asking for. One thing is for sure: your graphs mislead. Intel CPUs are better for end users who are looking for the best Civ experience.

I can get behind a different testing methodology, but I don't see it here. All I am asking for is some clarity; can you provide this, please?

Why do you deserve/need to have it explained to you twice?

https://www.techspot.com/community/...00x-r5-1600x-1500x.241676/page-6#post-1642758

https://www.techspot.com/community/...00x-r5-1600x-1500x.241676/page-6#post-1642780

https://www.techspot.com/community/...00x-r5-1600x-1500x.241676/page-7#post-1642956
 
Because "all old games will run at 60fps if new ones do" is a complete fallacy at odds with observable reality (usually repeated by those who don't actually play such games). Examples : Morrowind (with MXE Graphical Extender that significantly increases view distance) or Oblivion (with mods like Unique Landscapes, especially around the "Aspen Wood" area) will stutter a lot more than Doom (2016) or Skyrim (2012) on the same modern CPU unless fed enough high 1T performance. Other games such as Neverwinter Nights with large mods, or even Age of Empires 2 HD with its unit cap significantly raised from classic 200 to over +1000 on giant maps can totally saturate one core. So no, you cannot blindly extrapolate guesses about performance of older games based purely on release year.

LOL@mods.
Please stop wasting my time okay? Thanks!
 
LOL@mods. Please stop wasting my time okay? Thanks!
Sorry, I don't understand what point you're trying to make. Plenty of older games still struggle on modern CPUs because they max out just 1-2 cores due to the lack of threading in older engines, and plenty of people mod their games. And many more people in general want to see a wider choice of games tested than the usual "12x AAAs of the year" that every other site does over and over, hence my suggestion that the next budget retro gaming rig article on lower-end chips perhaps include more older games, purely to see if, e.g., the 1T performance gap for the R3 1300X vs. i3-8100 is still as wide as in, say, Cinebench or LAME, or perhaps narrows due to the nature of game code. Purely as one of those articles that makes Techspot memorable for doing unusual / different stuff. If this doesn't interest you, then simply ignore it and move on. No one's forcing you to argue about games you don't play. Likewise for "wasting your time": you're the one replying to me, not the other way around...
 
You haven't actually explained this at all; those links are just me asking for an explanation.

I don't understand your hostility. I'm just seeking clarification.

Please explain why showing the average frame rate for Civilization VI is indicative of future DX12 performance. I'm asking in light of Gamers Nexus testing demonstrating that when a CPU takes longer to complete a turn, the average frame rate goes up because there is little visual activity on the screen. They themselves have stated that weaker CPUs tend to give the highest FPS for this reason. Is that not an accurate statement from Gamers Nexus? Personally I think it is. I just fall down where you tell us that these average frame rate numbers show that the prospective chips will perform better in the future in DX12.

I understand that Ryzen performs better with natively supported DX12, but surely Gamers Nexus' point about turn completion time is still accurate. Maybe you could provide turn completion times with a 1080 Ti and Vega 64? I would be surprised if the results match what you have posted here, but then again I'm not a hardware tester. It might well be the case that with Vega, Ryzen floats above Intel in the turn completion times. I'd like to know, please. Gamers Nexus came to the conclusion that the CPU performance in this game was more dependent on frequency than core count. Does this change with Vega?

https://www.gamersnexus.net/hwrevie...vs-ryzen-streaming-gaming-overclocking/page-5

“Starting with Civ VI, we used the AI benchmark to test the time required to compute AI turns, as FPS is useless here. The turn time is about the same at 1440p as it is at 1080p, though we did test both. AVG FPS actually goes up for worse CPUs, because the time spent sitting idle on the screen is longer, as it takes longer for the game to calculate a turn. This makes FPS an unusable metric for this particular AI benchmark.” - direct quote from the Gamers Nexus Coffee Lake review linked above.
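To put rough numbers on that effect, here is a toy Python model; every figure in it is invented purely to illustrate the arithmetic, not taken from any actual benchmark:

def avg_fps(turn_seconds, active_fps=60, idle_fps=200, active_seconds=10):
    # Average FPS over an active phase plus an idle AI-turn phase.
    # The near-static screen during the AI turn renders very fast,
    # so longer turns pour cheap frames into the average.
    frames = active_seconds * active_fps + turn_seconds * idle_fps
    return frames / (active_seconds + turn_seconds)

print(round(avg_fps(turn_seconds=5)))   # faster CPU: ~107 avg FPS
print(round(avg_fps(turn_seconds=15)))  # slower CPU: ~144 avg FPS, i.e. higher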

Can I get a bit less hostility, mate? I'm not trying to catch you out. You obviously put a lot of work into this, but this is quite a contradictory result compared to other testers and also clearly an anomaly in your own testing. Surely clarity is of the utmost importance here; if either testing methodology is inaccurate, we have the right to know as potential consumers and the readership of your tech website.

Thanks.
 
Sorry, I don't understand what point you're trying to make. Plenty of older games still struggle on modern CPUs because they max out just 1-2 cores due to the lack of threading in older engines, and plenty of people mod their games. And many more people in general want to see a wider choice of games tested than the usual "12x AAAs of the year" that every other site does over and over, hence my suggestion that the next budget retro gaming rig article on lower-end chips perhaps include more older games, purely to see if, e.g., the 1T performance gap for the R3 1300X vs. i3-8100 is still as wide as in, say, Cinebench or LAME, or perhaps narrows due to the nature of game code. Purely as one of those articles that makes Techspot memorable for doing unusual / different stuff. If this doesn't interest you, then simply ignore it and move on. No one's forcing you to argue about games you don't play. Likewise for "wasting your time": you're the one replying to me, not the other way around...
Well, you can set CPU affinity to use however many cores you want (sketch below). And I find it hard to believe that any older game will struggle more on a modern CPU than an old one. Single-core performance by Intel is better than ever. This honestly sounds like you're trying to make this all about YOU. When and how did you conduct the poll that led you to state that "more people in general want to see a wider choice of games tested"? Can't you fill in the blanks and assume that if every game tested on a rig runs well, the rest will, too? This isn't rocket science.
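For instance, a minimal sketch using the third-party psutil package (the executable name is hypothetical, and cpu_affinity is only supported on Windows/Linux):

import psutil

# Find the running game process by name (hypothetical executable).
game = next(p for p in psutil.process_iter(['name'])
            if p.info['name'] == 'oblivion.exe')

game.cpu_affinity([0, 1])    # pin scheduling to logical cores 0 and 1
print(game.cpu_affinity())   # -> [0, 1]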

Techspot is also a business, so they need to put out articles that will interest the majority, not the minority. Most of us aren't playing old games. It's the same reason you won't see MotorTrend go back and review the 1985 Mazda RX7. There's just no point.
 
I understand that Ryzen performs better with natively supported DX12, but surely Gamers Nexus' point about turn completion time is still accurate. Maybe you could provide turn completion times with a 1080 Ti and Vega 64?

I'm sure it's accurate, but it's also irrelevant here because it's not the same benchmark. Steve uses the GPU benchmark, not the turn-based AI benchmark, and if you watch the Hardware Unboxed video (from 3:13 onwards), Steve explicitly states this, as well as the fact that the test completion time for the GPU test is the same with a Pentium G4560 and an i7-8700K.
 
And I find it hard to believe that any older game will struggle more on a modern CPU than an old one. Single-core performance by Intel is better than ever.
I never said old games would run worse on new hardware than on old. I said there are quite a few older games that run worse than newer games on newer hardware (which is true; see Oblivion vs. Skyrim, etc.). I just wondered how much scaling in those kinds of games would differ vs. Cinebench 1T benchmarks, as I've seen some highly variable results on YouTube.

When and how did you conduct the poll that led you to state that "more people in general want to see a wider choice of games tested"?
By reading the comments sections of new hardware benchmarks on 20 sites that test exactly the same 8-12 games over and over... In fact, a while back Techspot did a "30-game battle" which was met with positive results (as was TPU's in previous generations), so the proof is already there. Do I need a "poll" to argue over how cr*p moving in the opposite direction, like Hexus testing just 2 games, would be too?

This honestly sounds like you're trying to make this all about YOU. Techspot is also a business, so they need to put out articles that will interest the majority, not the minority.
I think you need to calm down and stop trying to read more into something than was written. I simply suggested Techspot do something unusual for a one-off future budget / retro gaming rig article. If that doesn't interest you, then simply ignore it. If Steve isn't interested, then he can ignore it too. No one suggested "replacing" anything mainstream. But as a flip side to "putting out articles that will interest the majority", Techspot's strength is also doing unusual benchmarks / features that can't be found elsewhere, which keeps the site fresh and avoids going stale (as many other tech sites have over the years). Hardly an "outrageous" observation...
 
What about machines that pull dual gaming + productivity/rendering duties? Ryzen's specs (12 threads) suggest it might be better for this scenario.
I agree. I'm a multitasking user. I need all I can get, and the Intel 8400 has no Hyper-Threading.
 
Intel always comes out best in these tests, with AMD always seeming secondary. AMD + HDMI (digital sound) vs. Intel + HDMI (digital sound): no issues with Windows 10 on Intel, but AMD + HDMI has audio issues, and the only way around that was to go AMD + S/PDIF. Anyway, great review. AMD needs to work a bit harder bus-wise to reduce the overhead and bottleneck, but again the test results show which CPU is quickest. I think no more AMDs for me; I've been a huge fan, but my other Intel ix still has no HDMI audio issues with Windows 10. Go figure.
 
I know the article is primarily the 1600 vs. 8400, but then you do a cost breakdown of all the platforms including an overclocked 1600, yet the 7700K is at stock speeds. There should be a platform cost for the 7700K @ 5 GHz.

The Core i7-7700K is now a horrible buy and one of the biggest shaftings of 2017. We don't need to compare its value at 5 GHz; it's no good.

Then why is it even there? I got one last January, and I saved hundreds on RAM and video cards vs. what they are priced now. We're all getting the shaft here. At the time it seemed like a deal after seeing the 6700K priced so high for so long. Yeah, I'm a little butthurt. Oh well.
 
What some AMD fanboys don't get is that this is a CPU benchmark, not a GPU benchmark. At high resolutions the GPU will run at 100% while the CPU idles, and the more powerful the CPU, the more idling there will be, because the GPU can't draw more frames per second. At low resolutions the GPU load drops and the CPUs work as hard as they can, which shows what they are capable of. Comments like "benchmark at 4K" are just ridiculous, because the GPU will be the bottleneck. By contrast, at 720p you can roughly gauge how more powerful future generations of GPUs will run current games with the tested CPUs. Basically, a 1080 Ti at 720p today = some GTX 3070ti at 1440p in two years.
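As a back-of-the-envelope sketch of that bottleneck logic (all numbers invented purely for illustration):

def effective_fps(cpu_fps_ceiling, gpu_fps_ceiling):
    # The slower stage of the pipeline caps the delivered frame rate.
    return min(cpu_fps_ceiling, gpu_fps_ceiling)

# At 4K the GPU ceiling is low, so two very different CPUs look identical:
print(effective_fps(150, 60), effective_fps(110, 60))    # 60 60
# At 720p the GPU ceiling is high, and the CPU difference shows:
print(effective_fps(150, 300), effective_fps(110, 300))  # 150 110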
 