Ryzen 7 2700X vs. Core i7-8700K: 35 Game Benchmark

Wait, what?! You need a 1080 Ti or SLI just to run 120 or 144 Hz?

I have a 1060 3GB with an i5-6600K and run at 144 Hz no problem. GTA, The Witcher 3, WoW, all perform fantastic.

At what resolution and quality settings?

And that's for GTA and The Witcher, not WoW; WoW is an ancient game that will give you high fps on anything recent.
 
Wait, what?! You need a 1080 Ti or SLI just to run 120 or 144 Hz?

I have a 1060 3GB with an i5-6600K and run at 144 Hz no problem. GTA, The Witcher 3, WoW, all perform fantastic.
Unless you are running on some ultra-low settings, no, you don't get 144 fps in these games.
 
No proof required, honestly. I've got an 8700K and a 1080 Ti, and even this setup struggles to hit 144 fps in AAA games. His GPU gets about 75 to 80 fps in The Witcher 3 at uber ultra low ;)

I know because I had one.

Hehe, I know, but I want to see him post more nonsense; he thinks he is dealing with a bunch of noobs on this forum :)
 
I would get Intel just out of habit. I need a new one because my last build was 5 years ago. But I love that AMD is closing the gap and creating heated competition for Intel.

I was going to get an 8700K until the 9700K was officially announced this week, so I guess I'll wait a little while longer.
The 9700K is just a Coffee Lake refresh. No 10 nm.
 
You would probably be running 1080p if you have a 240 Hz monitor for FPS gaming. That refresh rate is very hard to drive at higher resolutions.
That's the joy of computers, we all have our own unique needs - high res gaming, lower res and high refresh rate gaming, VR, streaming, video encoding, budget, fanboyism or a dislike of monopolies, product compatibility, future upgrade plans or various mixtures of all. Reviews like this (and the ensuing comments) might help us decide which way to go. Neither choice is definitively right because our input conditions vary so widely. The bottom line is, if you only want to game right now on a PC, based on games currently available right now, and no other factors matter to you, then you should probably go for the Intel chip. That's the recommendation provided. You decide if it's right for you.
The recommendation is pointless to all those with higher-resolution monitors or 60 Hz monitors. In those cases, which probably represent more than 90% of all users, it's a way of spending money for no reason.
 
AMD Ryzen 2700X for me for the following reasons:

- I do not overclock because I need to keep my computer running at full load for very long stretches. I need maximum stability and longevity for my system. I will buy a big aftermarket cooler regardless, so the bundled one has no value for me.
- I am into gaming, but my monitor is a Dell SE2717H, a 1080p FreeSync monitor that maxes out at 75 Hz, and my next video card will be strong enough to drive all games at 60-75 fps at this resolution. At 75 fps there is almost zero difference between Intel and AMD. If you are gaming at 1440p or 4K, the bottleneck will also be the GPU.
- I do a lot of things with my computer, including 3D rendering, video production and encoding. For these tasks, which I sometimes run at the same time as games, I prefer to have 8 cores.
- The 720p resolution included in this benchmark for the sake of exposing the CPU's strength in gaming is totally irrelevant in the real world, because no one games at 720p today.
- All the tests here are made with a GeForce GTX 1080 Ti. Seriously, how many can afford such an expensive high-end card? Use a lower-tier card and games will quickly become GPU bound, rendering the differences between CPUs totally irrelevant.
- Both Intel and Nvidia have a way of doing business that I personally dislike profoundly. So far, AMD has started or backed many open initiatives like FreeSync, Mantle (the basis of Vulkan) and others, and because of this I am more likely to support AMD than the other two when the price/quality ratio is about the same. I do not care about corporate fanboy disputes; I know for a fact that all corporations, including AMD, have only one rule: profit at all costs, even at the price of destroying the world. It's just that AMD has managed to look more friendly and fair in my eyes.
 
We seem to be receiving the message quite clearly from the reviewers. If you're building a workhorse, say something to crunch multiple big spreadsheets at the same time, then Ryzen is probably better suited to your needs. If you want to build a beast of a home gaming system, then Intel is definitely the better buy.

Simple, really. Of course there will be scenarios where users are on a tight budget and might want to save a few dollars by opting for the cheaper of these two regardless. But I don't think most users buying flagship CPUs will be that concerned about saving a few dollars, since far better value is found further down the product stack.


I'm a gamer and I bought a 2700X, an X470 Gaming 7 and 16GB of G.Skill Flare X memory. The reason being that 10 fps isn't worth losing future socket support. I came from a 4790K. I'd rather keep my motherboard for 1-2 years and get a 7nm Zen 3 than have to stay with the 8700K for 2-3 years, then sell the mobo and CPU and so forth AND spend more money.

It was just a no-brainer. The days of choosing an E8400 Wolfdale vs. a Phenom, or a 4790K vs. a Phenom, are over... For 10 years it was clear you should buy Intel, but now that AMD has pretty much identical performance, the small difference in IPC on the 8700K isn't worth your brand new Z370 board not supporting the next CPU, lol. That's stupid.

Especially if you're gaming at 2560x1440 like me.

You don't need to upgrade that often for gaming; even Bloomfield on PCIe 2.0 can slug it out with the newest CPUs at 1080p/1440p with a GTX 980 Ti/1070; the difference is typically less than 10 fps. By the time you actually need to replace your Ryzen CPU, that platform might be gone.
 
Personally I would have tested both CPUs at the same clock speeds, granted the i7 is a 6-core and the AMD is an 8-core. Still, for games I think AMD has it made: their FPS was more consistent across all games as the resolution changed, and they also took less of an FPS hit than the i7.

Who is going to play games at 720p? This is 2018, not 2002...

Also, with the AMD, if you wanted to do streaming or even content creation, you can, easily. Whereas the i7, I'll assume, will have some issues in multi-threaded apps even with its Hyper-Threading. They're still good regardless, just not as quick perhaps.

You're also not forced to change out your motherboard every time Intel decides to come out with a new chip; that's just BS and an extra expense when upgrading a current system. At least with AMD's AM4 socket we have some longevity, which means less $$$ down the road, which everyone likes.

Now don't get me wrong, Intel's always been great with games etc., but they have also always been way overpriced for what you get. Take for example the i7-5960X: they're still $900+ at the moment, whereas the 2700X from AMD is roughly $330-$350. That's a huge difference where bang for your buck is concerned.

So in reality, yes, the 8700K is good for gaming, but the 2700X is good for both gaming and multitasking, and they're priced relatively the same. So it really depends on what you need the most.
 
I have barely enough energy to post my experience with this question as seeing this argument over and over makes me very tired... so if you're interested here goes:

I have an AMD 1950X and an Intel 8700K.
When I first set up my current gaming machine, I had it running with the
  • 1950X (@ 4 GHz) with an
  • ASRock Fatal1ty X399 Professional Gaming sTR4 AMD X399 mobo, an
  • EVGA 1080 Ti FTW3, a
  • Samsung 1TB 960 Pro and
  • 32GB of G.SKILL TridentZ RGB Series (4 x 8GB) DDR4 3600 playing games at
  • 2560x1440 @ 60-120 Hz (I switch refresh depending on how I feel).
All of my games played pretty much perfectly - or at least as expected, since I'm a hardcore realist. I didn't expect this rig to play newer AAA games with max settings at 120 fps - so I'd drop my refresh to 90 or 75 or 60 depending on where I wanted to run V-Sync. Sometimes I just used Adaptive V-Sync... depends on the game and how I was feeling that day. The problems arose when I did something heavily single-threaded - think emulation. Newer forms of game emulation NEVER played well. You would almost expect that emulation on a host running at 4 GHz would give you at least 80% of the performance of a host at 5 GHz, but that's just not how it ever worked. It was more like 50% of the performance. Also, when playing games at 2560x1440@60 and trying to record at reasonably good quality settings at the same resolution, the recordings would consistently drop frames. I always had to resize down to get smooth 60 fps recordings or streams. Many of you are saying that all of these threads will cause you to excel at recording while playing...

Due to having these issues (and my file/Plex server dying) I decided to move my 1950X and X399 mobo + RAM over to that machine and get the 8700K for my primary gaming machine. The new setup for my gaming machine ended up like this:
  • 8700K (@4.7 GHz on all cores) with an
  • ASUS ROG Strix Z370-E Gaming, an
  • EVGA 1080 Ti FTW3, a
  • Samsung 1TB 960 Pro and
  • 16GB of G.SKILL TridentZ RGB Series (2 x 8GB) DDR4 3200 CL14 playing games at
  • 2560x1440 @ 60-120 Hz (I switch refresh depending on how I feel).
You'll notice that the only things that really changed are the processor, mobo, and RAM. I've gone with a relatively mild overclock of 4.7 on all cores since I didn't delid and I want to keep my temps pretty low, which they are. I noticed a pretty stark difference on this rig compared to when it was running Threadripper...

First, my gaming performance was not noticeably different. We all know that as you run at higher resolutions and refresh rates the GPU becomes the bottleneck - hence the 720p benchmarks when showing differences in CPU performance. True, no one with these procs will ever play at that resolution, but why bother with these CPU benchmarks if you're just going to make the GPU the limiting factor? People complaining about this tire me... these types of benchmarks are not really there to show you how fast you can play a current game at 720p, but more to give you an idea of the differences in CPU performance and how that might affect future games.
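Roughly speaking, the frame rate you actually see is capped by whichever side is slower, and dropping the resolution mostly raises the GPU-side ceiling. A toy sketch with made-up numbers (nothing here is measured data from the review) shows why the 720p runs expose the CPU:

```python
# Toy bottleneck model with made-up numbers (illustrative only, not measured data):
# the delivered frame rate is roughly min(CPU-limited fps, GPU-limited fps).

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The frame rate you see is capped by the slower of the two limits."""
    return min(cpu_fps, gpu_fps)

cpu_limit = 160  # hypothetical fps a given CPU can feed; roughly resolution-independent
gpu_limits = {"720p": 300, "1440p": 110, "4K": 55}  # hypothetical GPU ceilings per resolution

for res, gpu_fps in gpu_limits.items():
    print(res, delivered_fps(cpu_limit, gpu_fps))
# 720p  -> 160 (CPU-bound: CPU differences are visible)
# 1440p -> 110 (GPU-bound: CPU differences mostly hidden)
# 4K    ->  55 (GPU-bound)
```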

Second, as I mentioned earlier, emulation saw an almost 100% increase in performance. It is a DRASTIC difference and changed it from almost unplayable to almost perfect.

Third, recording was suddenly super smooth. 2560x1440@60 recordings didn't drop frames anymore. My Twitch streams were staying at 60 fps. My 8700K is doing a much better job of giving me good recordings and streams.

So I've decided to leave things this way. I'm running my VMs and Plex / File services on my 1950X and it's an overall much better experience. Hope this info helps someone trying to make a decision based on specific needs.

FYI - the apps I typically use on my PC are:
  • FFXIV
  • FFXV
  • The Witcher 3
  • Battlefield 1
  • Tekken 7
  • Cuphead
  • CEMU Emulator
  • VMWare Workstation Pro
  • OBS
 
It's not because the CPU is bad, or even about single-thread performance. It's because you are using crappy emulator software.

The same applies to streaming. If you cannot stream with a 1950X but can stream with an 8700K, then again you are using very crappy streaming software, no excuses.

Top-notch hardware works best with good software too. The i7-8700K architecture is essentially a tweaked Pentium Pro from 1995, so it works better with 10-year-old software. That's also the reason it took relatively little effort for AMD to get very close with an architecture that was rushed to market. Zen 2 is what Ryzen would have been if it hadn't been time-constrained.
 
Glad you have the time and expertise to write all of your absolutely perfect software. The rest of us plebs have to resort to using the software written by other developers and use whatever hardware we have access to and can afford.

I never called the 1950X a bad processor - in fact I love it especially for what I'm using it for now. It just has its own set of strengths and weaknesses. The 1950X will do a superior job compared to the 8700K for its current set of tasks. Be careful not to let your biases shine through.
 
Glad you have the time and expertise to write all of your absolutely perfect software. The rest of us plebs have to resort to using the software written by other developers and use whatever hardware we have access to and can afford.

I never called the 1950X a bad processor - in fact I love it especially for what I'm using it for now. It just has its own set of strengths and weaknesses. The 1950X will do a superior job compared to the 8700K for its current set of tasks. Be careful not to let your biases shine through.

Or you could accept that the software is crap and wait until something better arrives, or use alternative software? It's usually only about laziness; a very good example is Dark Souls on PC.

There are no known weaknesses in the 1950X architecture for emulation software. If the 1950X is slow when running emulation software, then the emulation software is crap. Simple. Same with streaming. There's no proof that the Ryzen architecture is bad for streaming, so again it's the software's fault. There are many streaming programs available, so if one doesn't work, use another.

So if you say that the 1950X has weaknesses, then you should also point to something specific about the architecture. Otherwise it's a software weakness, not a CPU weakness: it's possible to write software that works badly on every current AND future CPU.
 
It's also possible for something to work better than your beloved AMD proc at specific tasks - everything has a weakness. The jobs I'm throwing at these machines are not your mundane everyday needs but more demanding ones. What I am showing is that when performing these sorts of jobs (recording 1080p30 is nothing compared to recording 2560x1440@60 with identical quality settings) on commonly used recording and streaming software, or using CEMU to render titles at higher resolutions (2560x1440) and with plugins, the 8700K has proven to be the better performer for me, everything else being equal. Sometimes you reach a point where you just don't have enough single-threaded performance to do something well, and if you believe that every task or process can be written to perfectly take advantage of multithreading, you're in for a rude awakening.
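That last point is basically Amdahl's law: the serial fraction of a job caps what extra cores can do for you. A back-of-the-envelope sketch, with the parallel fractions assumed purely for illustration:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# where p is the parallelizable fraction of the work and n is the core count.
# The serial remainder (1 - p) caps the benefit of extra cores, which is why
# raw single-thread speed still matters for things like emulation.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):  # assumed parallel fractions, purely illustrative
    print(f"p={p}: {amdahl_speedup(p, 16):.2f}x on 16 cores")
# p=0.5 -> ~1.88x   p=0.9 -> ~6.40x   p=0.99 -> ~13.91x
```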
 
Demanding ones? Streaming is not demanding, and it can very easily utilize multiple cores. If a 1950X is "too slow" for streaming where an 8700K is "fast enough", then you have a quite serious software problem. There's no way a 1950X can be slower than an 8700K on multi-core software. And if it is, then you have a software problem, not a CPU problem.

CEMU appears to be a Nintendo emulator. Pure software emulation requires quite a lot of CPU power. It also seems to be fairly new, non-commercial software, so it's very probably NOT optimized for features like the NUMA layout Threadripper offers. So again, a software problem.
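For what "optimized for NUMA" would even mean here: at a minimum, keeping a process on one node's cores so its threads stay close to their memory. A rough Linux-only sketch (the node number is an assumption for illustration; dedicated tools like numactl handle this properly):

```python
# Rough Linux-only sketch: pin the current process to one NUMA node's CPUs
# so threads and the memory they touch stay on the same die. The node number
# is an assumption for illustration; real tools like numactl do this properly.
import os

def cpus_of_node(node: int) -> set:
    """Parse /sys cpulist entries like '0-7,16-23' into a set of CPU ids."""
    cpus = set()
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = map(int, part.split("-"))
                cpus.update(range(lo, hi + 1))
            else:
                cpus.add(int(part))
    return cpus

os.sched_setaffinity(0, cpus_of_node(0))  # keep this process on node 0 only
```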

You essentially said that the 1950X has about 50% of the single-thread performance of the 8700K. We all know that's BS. Your software just sucks. I also use bad software, a game that uses an abandoned SDK. That piece of crap barely holds together, but I accept that the software is simply crap and don't blame the CPU for it.

There are very rare cases where something genuinely must be single-threaded. More often it's about laziness or an SDK problem. There are many examples of this, like 7-Zip, which can compress using at least 16 threads, but whose decompression is single-threaded. Because it is.
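To make the 7-Zip point concrete: compression parallelizes easily because the input can be cut into independent chunks, while decompressing one dependent stream has to walk the data in order. A minimal sketch using gzip chunks instead of 7-Zip's LZMA, purely to show the shape of the problem:

```python
# Compressing independent chunks fans out across cores easily; decompressing a
# single dependent stream is inherently one-step-after-another. gzip stands in
# for 7-Zip's LZMA here purely for illustration.
import gzip
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    return gzip.compress(chunk)

def compress_parallel(data: bytes, chunk_size: int = 1 << 20) -> list:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        return list(pool.map(compress_chunk, chunks))

def decompress_serial(compressed_chunks: list) -> bytes:
    # These chunks happen to be independent, but a real single-stream archive
    # forces exactly this kind of sequential walk through the data.
    return b"".join(gzip.decompress(c) for c in compressed_chunks)

if __name__ == "__main__":
    blob = b"example data " * 100_000
    assert decompress_serial(compress_parallel(blob)) == blob
```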
 
If emulation software runs better on an 8700K than a 1950X then it's the 1950X that's lacking, not the software. I mean, why should the developer spend the money adapting their software to something that a minuscule percentage of the market has?

The same goes for any software that doesn't run better on a 16-core Threadripper. Devs aren't going to optimise for NUMA when less than 1 in 20 of their users will benefit. Devs are usually not in the business of wasting money.
 
If emulation software runs better on an 8700K than a 1950X then it's the 1950X that's lacking, not the software. I mean, why should the developer spend the money adapting their software to something that a minuscule percentage of the market has?

The same goes for any software that doesn't run better on a 16-core Threadripper. Devs aren't going to optimise for NUMA when less than 1 in 20 of their users will benefit. Devs are usually not in the business of wasting money.

So because a software developer has no interest in supporting CPU features, the CPU is to blame and not the software? Using that logic, we should still be using only 16-bit software :p
 
So because a software developer has no interest in supporting CPU features, the CPU is to blame and not the software? Using that logic, we should still be using only 16-bit software :p
Not what I was saying. Can you tell me why a software developer would spend money and time supporting features that 95% of their potential customers don’t have?
 
I'd get the 2700X regardless because:
- You have to be extremely lucky to get your 8700K to 5 GHz without delidding and water cooling, while the 2700X performance shown is practically guaranteed even with the boxed cooler. In fact, stock performance with XFR2 is known to give better results than overclocking to 4.2 GHz, because it can reach 4.35 GHz on the cores it needs (feel free to test it, Steve).

You should not speak to things you don't have first-hand experience with. Nearly every 8700K can reach 5.0 GHz on air without issue. Using an Arctic Freezer 33 (single fan) in a large, high-airflow case, at 5 GHz mine wouldn't even break 80 degrees under a full-load stress test. In a VERY restrictive case (Lian Li PC-V320A) with just one intake fan and one exhaust fan, using a Scythe Kabuto 3 cooler, it would break 90 degrees in stress tests but barely hit 80 in real-world high-load scenarios. Not ideal temps, but perfectly safe and reasonable. With a $40-50 investment (i.e., barely more than either cooler I used) in a 120/140mm AIO, the temps would have been great even in the Lian Li. And that is all without delidding or using expensive thermal paste. These are very standard results. Hitting 5.0 on an 8700K is cake. Hitting 5.1 is pretty hard on air (mine would not without crashing) and hitting 5.2 on air is nearly impossible (mine wouldn't boot at all), but 5.0 is not a problem.
 
You should not speak to things you don't have first-hand experience with. Nearly every 8700K can reach 5.0 GHz on air without issue. Using an Arctic Freezer 33 (single fan) in a large, high-airflow case, at 5 GHz mine wouldn't even break 80 degrees under a full-load stress test. In a VERY restrictive case (Lian Li PC-V320A) with just one intake fan and one exhaust fan, using a Scythe Kabuto 3 cooler, it would break 90 degrees in stress tests but barely hit 80 in real-world high-load scenarios. Not ideal temps, but perfectly safe and reasonable. With a $40-50 investment (i.e., barely more than either cooler I used) in a 120/140mm AIO, the temps would have been great even in the Lian Li. And that is all without delidding or using expensive thermal paste. These are very standard results. Hitting 5.0 on an 8700K is cake. Hitting 5.1 is pretty hard on air (mine would not without crashing) and hitting 5.2 on air is nearly impossible (mine wouldn't boot at all), but 5.0 is not a problem.

You must have the most golden chip ever then, because every review seems to tell the opposite story.

https://www.tomshardware.com/reviews/intel-coffee-lake-i7-8700k-cpu,5252-12.html

Tom's Hardware's 8700K was peaking over 90°C at only 4.9 GHz with an absolute top-end 420mm AIO. In no universe is the average 8700K getting to 5 GHz on cheap cooling without a delid or a golden chip.

Even TechSpot itself recommends a delid when pushing for those higher clocks.

"Once overclocked to 5.2 GHz, we reached within six degrees of the TjMAX while running the CPU stress test, peaking at 97 degrees briefly. Obviously a delid would help tremendously for those seeking extreme overclocks, and/or a more extreme cooling solution."

https://www.techspot.com/review/1497-intel-core-i7-8700k/page4.html

And once again, that was also on top-end cooling. In your own words...

You should not speak to things you don't have first-hand experience with.

You cannot claim to have experience with all 8700Ks when you've only got one, nor should you force your opinion onto others, especially against the suggestions of the many available professional reviews.
 
It's fine, mate. The failure rate doesn't really go up significantly until you hit 100°C.

Just like the video states, there will always be fringe cases when working with high temps.
And the CPU's temps are not the only thing you should worry about; your mobo and the components around your CPU (like the VRMs in some cases) matter too.
You are also forgetting something: over time, dust and the aging of the thermal paste can and will raise your CPU's temps by a few degrees. And your room's temperature will not always be low. You need a good temperature buffer when you OC your CPU. Running at or very close to Tj Max is not recommended at all, and you should not tell anyone that it is "fine".
 