AMD Ryzen 7 5800X vs. Intel Core i7-11700K: 32 Game CPU Battle

I cannot afford either one and can only dream about them. Further, a 30-series GPU of any sort might as well be on the moon for me. I am the man on the street. So in this respect it's back to the "folding tables" at the local computer show, looking for unboxed hardware from 2-3 generations ago. Reality bites, but I must say that reading all of these fine commentaries here does complement my living the dream... at least for now.
 
I think it's you who has failed to grasp what the problem is here. Games perform differently with different drivers; Steve himself showcased this not that long ago, showing the Nvidia driver to have a higher CPU overhead than the Radeon driver. This means we would likely see bigger differences between these CPUs if you used an Nvidia card. And since roughly 90% of the market uses Nvidia, it would have been more representative to use an Nvidia card.

It’s very simple to understand.
You're wrong.
 
No, you don't know benchmarking 101. Hopefully Steve will come along and annihilate your enormous ignorance.
I am definitely right. Steve himself proved the CPU overhead is higher with Nvidia. But he doesn’t like to use their products because he has a public beef with Nvidia. He knows what he’s doing.

You, however, haven't got a clue.
 
These benchmarks are not fair, imo. The Intel chip used here is highly overclockable, and they used it at stock settings. If it were OCed, I'm sure it would beat the 5800X in more games, if not all of them. Techspot, please redo these benchmarks! Who's with me?
 
I am definitely right. Steve himself proved the CPU overhead is higher with Nvidia. But he doesn’t like to use their products because he has a public beef with Nvidia. He knows what he’s doing.

You, however, haven't got a clue.
You're wrong.
 
These benchmarks are not fair, imo. The Intel chip used here is highly overclockable, and they used it at stock settings. If it were OCed, I'm sure it would beat the 5800X in more games, if not all of them. Techspot, please redo these benchmarks! Who's with me?
I'm not with you.
 
Are you asking about what software I use to monitor temperatures? I use AMD Ryzen Master, HWMonitor, RealTempGT and MSI Afterburner during gaming.
Yes. You do understand that different CPUs have different sensors and different ways of reporting temperatures. In other words, temperatures are not comparable between AMD and Intel CPUs.
 
As someone who used to own a Ryzen 7 5800X, I can say the heat issue is really because too much power is fed to the single chiplet/CCX. And being a chiplet, the surface area is too small to cool effectively compared to a monolithic chip like Comet Lake/Rocket Lake.

If you are savvy enough to OC the Intel chip, messing around with the PBO settings is probably easier. Just by reducing the PPT from 140W to 115W, I saw temperatures drop from 90°C looping Cinebench R20 to 78°C, with an Arctic Liquid Freezer II 360 and a room temperature of slightly over 27°C.
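To put rough numbers on that, here's a quick sketch using only the figures from my own run above, so treat it as illustrative rather than a rule:

```python
# Rough numbers from my run above; a different chip, cooler and room will differ.
ppt_stock, ppt_reduced = 140, 115      # package power limits in watts
temp_stock, temp_reduced = 90, 78      # degrees C, Cinebench R20 loop
room = 27                              # degrees C ambient

power_cut = 1 - ppt_reduced / ppt_stock                      # ~18% less package power
rise_cut = 1 - (temp_reduced - room) / (temp_stock - room)   # ~19% less rise over ambient

print(f"PPT cut: {power_cut:.0%}, rise-over-ambient cut: {rise_cut:.0%}")
```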

I am aware of the reason behind the 5800X's temperatures. I was using it with a Gigabyte X470 Aorus Gaming 7 WiFi, and at the time the Curve Optimizer hadn't been rolled out for 400 series boards. I had an opportunity to sell my board, CPU and RAM for a really favourable price and get a new Intel bundle, so I did it. I really like the 10850K, and I'm planning on putting it under a 3 x 360mm custom loop in the next few weeks and trying for 5.2GHz 😅😅
 
Yes. You do understand that different CPUs have different sensors and different ways of reporting temperatures. In other words, temperatures are not comparable between AMD and Intel CPUs.

Are you saying that 1 degree on an AMD CPU is not the same as 1 degree on an Intel CPU? 😅😅 I think you might be wrong on this one :p
 
These benchmarks are not fair, imo. The Intel chip used here is highly overclockable, and they used it at stock settings. If it were OCed, I'm sure it would beat the 5800X in more games, if not all of them. Techspot, please redo these benchmarks! Who's with me?
Nobody is with you 😂😂 The 5800X can be overclocked too, remember, plus both of these CPUs run pretty much maxed out straight out of the box.
 
My next build will be with DDR5 RAM and whatever generation of Intel Core is available at the time. Probably 13th or 14th gen.

Intel for gaming and content creation.

AMD for benchmarkers and apps I don't use.

So, without knowing anything about what Zen 4 or Zen 5 will be like, because that's what will be around against Intel 13th and 14th gen, you make the claim that Intel will be the machine for gaming and content creation. Funny statement when right now AMD CLEARLY wins at content creation core for core, and in gaming they're about the same.

If Intel is putting out 13th or 14th gen on their 10nm node then maybe they won't get destroyed in perf/watt, but AMD will be on TSMC 5nm (and I really don't need to hear about naming comparisons because that's not the point; TSMC 5nm is going to be comparable to Intel 7nm, according to Intel). TSMC 5nm is showing itself to be very power efficient even at higher frequencies, so I would assume that where AMD is at about 120W right now when pushing a CPU, they'll get about the same performance at about 80-90W, and that's a pretty big deal for people who use their systems all the time and put heavy loads on them a few times a day.

And for gamers, having a CPU run at about 50-60W is HUGE. It means a quiet system with a CPU that stays fairly cool. I base these numbers on the current Zen 3 5600X, which is what many gamers right now want to buy, and when you push it, it only runs at about 70W.
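Purely to show the arithmetic behind my guess (the 80-90W and 50-60W figures are my own assumptions, not measurements), a quick sketch:

```python
# My speculation expressed as arithmetic; none of these wattages are measured data.
zen3_heavy_w = 120          # roughly what a pushed Zen 3 chip draws today
zen4_heavy_w = 85           # my assumed 80-90W range on TSMC 5nm, midpoint
zen3_gaming_w = 70          # about what a pushed 5600X draws in games
zen4_gaming_w = 55          # my assumed 50-60W range, midpoint

print(f"Heavy loads: ~{zen3_heavy_w / zen4_heavy_w:.2f}x perf/watt at the same performance")
print(f"Gaming:      ~{zen3_gaming_w / zen4_gaming_w:.2f}x perf/watt at the same performance")
```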
 
A large chunk of your audience probably has Nvidia GPUs, so how do these trends compare with a 3080 or a 3090?

Is Ryzen better with AMD GPUs? Or does Intel close the gap when Nvidia is inside? Or does the 5800X win regardless of the GPU?

From the charts of average performance, it seems that Intel has largely closed the gap and that the 11700K is a decent alternative. This is a different conclusion from Gamer's Nexus, who called the 11700K a waste of sand. How can it be a waste if it's roughly tied with the innovative Zen 3 chip in 75 percent of the games tested, when the same wasn't true for 10th gen?

Would also be good to see a comparison between the 11700k and the 10700k. Thanks for the hard work!!

"A large chunk of your audience probably have nvidia gpus, so how do these trends compare with a 3080 or a 3090?"

That's not the point of running a test setup. Testing GPUs is different from testing CPUs, which is different from testing other components in a system, so you create a build that you feel will show the most differences even when the parts are similar. For instance, you can toss out 4K numbers because gaming at 4K creates a GPU-bound system. There simply isn't a GPU powerful enough to show a difference between similar CPUs when running at 4K.
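A simple way to picture that: the delivered frame rate is roughly capped by whichever part is slower, so a GPU-bound resolution hides any CPU difference. A minimal sketch with made-up numbers:

```python
# Made-up frame-rate caps, just to show why a GPU-bound resolution hides CPU differences.
def delivered_fps(cpu_cap, gpu_cap):
    # A game can't run faster than its slowest stage.
    return min(cpu_cap, gpu_cap)

cpu_a, cpu_b = 220, 200          # hypothetical CPU-limited frame rates
gpu_1080p, gpu_4k = 300, 110     # hypothetical GPU-limited frame rates

print(delivered_fps(cpu_a, gpu_1080p), delivered_fps(cpu_b, gpu_1080p))  # 220 vs 200: CPUs differ
print(delivered_fps(cpu_a, gpu_4k), delivered_fps(cpu_b, gpu_4k))        # 110 vs 110: no visible difference
```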

"Is Ryzen better with AMD GPUs? Or does Intel close the gap when nvidia is inside? Or does the 5800x win regardless of the GPU?"

There is no evidence that shows this to be true. You can watch EVERY review that exists and there's no data suggesting AMD CPUs are better with AMD GPUs or that Intel CPUs are better with Nvidia GPUs. The ONLY thing that's been shown is that in a handful of games, when enabling SAM (which he didn't do here), Zen 3 CPUs with an RDNA 2 GPU AND a 500 series chipset MB give better results than SAM running with an Nvidia GPU on ANY system. The reason this question keeps getting asked is that reviewers tend to ignore it. However, there have been one or two videos over the past couple of years from different YouTubers showing that an AMD GPU has no advantage going into an AMD system and that an Nvidia GPU doesn't run better because it's in an Intel system. It's all about the CPU and the platform and how well they perform in general.

"From the charts of the average performance, seems that Intel has largely closed the gap, and that the 11700k is a decent alternative. This is a different conclusion than Gamer’s Nexus who called the 11700k a waste of sand. How can it be a waste if it’s roughly tied with the innovative Zen3 chip in 75 percent of games tested, when the same wasn’t true for 10th gen?"

Watch the GN review again to see why he came to that conclusion. What you just read here was a GAMING test between 2 CPUs. The video you are referring to was a review of the 11700K itself, in which case there are multiple things to consider, not just games.

"Would also be good to see a comparison between the 11700k and the 10700k. Thanks for the hard work!!"

Huh? Go watch ANY full review of the 11700K. Every single one I watched compared it to the 10700K, which is why Steve referred to the drop in performance in certain situations from 10th gen to 11th gen. The reason he knew that is because they tested 11th gen against 10th gen and against AMD CPUs.

In other words, why are you asking questions you should already know the answers to?
 
Cost per frame?
any virtualization tests so we can see the multi-core performance?

Most reviewers do cost-per-frame analysis on GPU reviews. This was a head-to-head between 2 CPUs, and you're never going to get that kind of info in a head-to-head. Go watch the many GPU reviews Hardware Unboxed has released on YouTube if you want cost-per-frame data, or watch the same reviews by GN. They both do cost-per-frame analysis on GPU reviews, where it makes sense to do it.
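That said, cost per frame is easy enough to work out yourself from any average-FPS chart. A quick sketch with hypothetical prices and frame rates (plug in real numbers from whatever review you're reading):

```python
# Hypothetical prices (USD) and average FPS, only to show the arithmetic
# reviewers use on GPU reviews; plug in real numbers from any chart.
parts = {
    "CPU A": {"price": 450, "avg_fps": 200},
    "CPU B": {"price": 400, "avg_fps": 190},
}

for name, d in parts.items():
    print(f"{name}: ${d['price'] / d['avg_fps']:.2f} per frame")
```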

"any virtualization tests so we can see the multi-core performance?"

Pretty much no. I don't think I've seen a single reviewer run VM tests on a desktop CPU, and I believe the reason is that they're too unpredictable: you get too much variation from one pass to another. You might be able to find some VM testing with HEDT CPUs on other channels.

For multi-core testing, reviewers like using video editing software, especially the applications that do tile rendering, because that software will ACTUALLY run the CPU at 100% (all cores running at full speed). Go watch any CPU review by GN or Hardware Unboxed on YouTube; all the data is there.
 
These benchmarks are not fair, imo. The Intel chip used here is highly overclockable, and they used it at stock settings. If it were OCed, I'm sure it would beat the 5800X in more games, if not all of them. Techspot, please redo these benchmarks! Who's with me?

You think the 5800X isn't also an overclockable CPU?
 
5800X for gaming? Still no.
You can get a 10700 for $399 CAD and a Z590 for $220 in Canada, for example, or a B560 for $140-ish. The 5800X alone is $609.

Edit: I paid $399 CAD for my 5600X and $1,059 CAD for the 3080.
More than 8 cores won't help you in games either, so there goes your upgrade path if it's a gaming machine. For gaming get a 5600X and leave the 5800X alone unless it's HEAVILY discounted and you just have to have 8 cores.

My 5600X is pushing an Asus TUF 3080 to higher FPS averages in Warzone than most streamers get with a 3090, because I'm really good at memory overclocking. CL14 3600 running 1:1 with the fabric clock is crazy awesome for Warzone.
 
These benchmarks are not fair, imo. The Intel chip used here is highly overclockable, and they used it at stock settings. If it were OCed, I'm sure it would beat the 5800X in more games, if not all of them. Techspot, please redo these benchmarks! Who's with me?

People who make a living doing reviews have to have standardized ways of testing components. Some will throw in OC'd setups alongside "stock" setups. But your claim that the 11700K has a lot of headroom isn't really true. Both Intel and AMD systems now do the same thing, using algorithms to control their boost behavior, so you're wrong: a stock setup actually lets the CPU use its boost algorithms, and both Intel and AMD now extract most of what you can possibly get from a stock config. There are situations, BTW, where running a straight all-core OC is worse than running with the standard boost behavior, because you are ALWAYS generating extra heat and you lose the headroom for one or two cores to boost higher than what an all-core OC can reach. So more and more testers don't bother with OC'd systems anymore, and if they do, they show a few different sets of results, including OC'd results from the components they're comparing.

No, I don't care to see OC'd results, because I let the CPU and the MB control the boost behavior, and because of that my room doesn't get hot when gaming, except from the GPU, and there's nothing you can do about that for now.
 
How did you come to that conclusion, when the 10700K is performing better than the 11700K in most games (and matching or surpassing the 5800X in a few cases)?

"How did you come to that conclusion, when the 10700k is performing better than the 11700k in most games (and matching or surpassing the 5800x in a few cases)?"

Not in most games, just some games. Steve pointed this out. And no, the 5800X beats Intel 10th gen at gaming too, unless you're talking about a pushed 10900K, in which case the 5800X performs about the same as the 10900K in gaming. In fact, that's what reviewers were waiting to see: whether Intel Rocket Lake would take back the gaming crown from AMD. Clearly it didn't.

If you use the system for heavy workloads other than gaming, though, such as what I do, the 5800X wipes the floor with the 10700K, 10700 or 11700K, and uses less power while doing so. On top of that, Intel 10th gen doesn't have platform capabilities that even Zen 2 has; Rocket Lake, on the other hand, does. I run a dual-OS system with an X570 MB, where Ubuntu is loaded on the NVMe that connects directly to the CPU, and Win10, which I only use for gaming, is loaded on an NVMe RAID of two WD 1TB SN750s, giving me a 2TB NVMe RAID for Win10 and the games I run. With this config, the Samsung 980 Pro that Ubuntu is loaded on can run at speeds up to 8GB/s, and the Win10 RAID can also run at 8GB/s. There isn't an Intel 10th gen setup that can do that. Rocket Lake can, but not 10th gen.
 
Nice to see I made the right call picking up a 5800X in December. I was a little worried that the 11700K would be faster. You do get Thunderbolt and an iGPU on the Intel side, but it runs a lot hotter.

It's definitely odd to use a 6900 XT to test these CPUs; Radeon has about a 10% market share, so the vast majority of people would be using different drivers from what's being used here. But this is probably just a symptom of Steve's well-documented beef with Nvidia.

Also, it's savage that the 6900 XT can't deliver Metro Exodus Enhanced Edition at 4K60. A 3070 appears to manage just fine according to other sources, and a 2070 Super does it with DLSS 2.1. If only Techspot would test it properly. But no, we will probably get another boring office laptop review instead.

Steve CLEARLY stated why he used the 6900 XT: it performs the best at 1080p, no matter WHAT system you put it in. So, to show the biggest delta, you use components that create the smallest bottleneck, and that's what the 6900 XT at 1080p does. This is a test rig for testing CPUs, and it's actually the first one he's used in a long time that makes sense. He used the best GPU for testing at 1080p, which EVERY reviewer will tell you gives the best information about CPUs because your best GPUs aren't the limiting factor at 1080p, and he FINALLY moved up to 3800 CL14 memory, which gives you the lowest-latency setup you can get, so it's the ideal configuration on both platforms for testing a CPU. I didn't like it when he changed from a 2080 Ti to an AMD 5700 XT, because that wasn't ideal for a CPU test system.

A test setup doesn't have to take into consideration what most people are buying. Most people aren't buying an Nvidia 3080 or 3090. They're going to buy a 3070 or 3060 or 3060 Ti. A smaller percentage will buy a 3080 or wait for a 3080 Ti. But if you want to go on percentages, hell, he could test with an RX 580 because a HELL of a lot of people bought those. But you wouldn't see any difference between the CPUs.

I also bought a 5800X because I don't need a 12-core CPU, but there are times when I need more than 6 cores. I multi-task with my systems all the time, and when titles come out on the newer Unity engine and on UE5, I think they're going to use more cores. I plan to own my X570/5800X setup for at least 5 years, probably more like 6 or 7.

For gaming, the system is good enough to last years, and the bigger changes that will improve gaming still need to come from the GPU if you've moved on from 1080p. I have a 2K monitor for gaming right now, but I plan to buy a 65" OLED in about 3 years, when I figure the prices will be better, and it's going to take a LOT more power from a GPU than anything currently out to push most games up to 120fps at 4K, especially if you want to use ray tracing. We're about two generations of GPUs away from that capability, unless you kill quality settings, but doing that kind of defeats the purpose of 4K, so I might as well just run the 4K TV at 1080p. The 5800X is suitable for any of these setups, and will be for many years.
 
I think it's you who has failed to grasp what the problem is here. Games perform differently with different drivers; Steve himself showcased this not that long ago, showing the Nvidia driver to have a higher CPU overhead than the Radeon driver. This means we would likely see bigger differences between these CPUs if you used an Nvidia card. And since roughly 90% of the market uses Nvidia, it would have been more representative to use an Nvidia card.

It’s very simple to understand.

You're creating an idea in your head that doesn't map well onto what you're trying to compare. He's testing one CPU against another CPU. An ideal test setup is one that gets rid of other constraints, because ANY constraint in the system will act to slow down the total system. You want the CPU to run with minimal constraints from the rest of the system.

From all the reviews I watched of Ampere and RDNA 2, when the best of these are put up against each other in game testing at 1080p, the 6900 XT gives the best results. Therefore, testing at 1080p with the 6900 XT gives the least constraint from the graphics subsystem. Even if you run Nvidia, though, testing at 1080p gives the least constraint from the graphics subsystem and the biggest delta to make comparisons with. He also FINALLY dropped the notion that testing with 3200MT/s CL14 memory doesn't affect the delta for CPU testing, when other reviewers have clearly shown it's a constraint. So he FINALLY put in the right memory for these newer systems and stepped up to 3800MT/s CL14. It benefits both Intel and AMD, but for AMD it lets the interconnect run at about its peak, while 3200MT/s slows it down. It's the difference between running the interconnect at 1600MHz or 1900MHz, and anyone with a brain could tell you that's going to make a difference, especially if you're trying to reduce latency, or remove constraints from a test setup.
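To spell out where the 1600MHz vs 1900MHz figures come from: with memory and Infinity Fabric running 1:1, the fabric clock is half the DDR4 data rate. A quick sketch:

```python
# DDR4 transfers twice per clock, so the memory clock (and the fabric clock
# when running 1:1) is half the rated data rate.
def fclk_at_1_to_1(ddr4_data_rate):
    return ddr4_data_rate / 2

for kit in (3200, 3600, 3800):
    print(f"DDR4-{kit}: FCLK {fclk_at_1_to_1(kit):.0f} MHz at 1:1")
# DDR4-3200 -> 1600 MHz, DDR4-3800 -> 1900 MHz
```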

Why you feel it would be better to put in a GPU that runs slower, when there is SO MUCH historical testing data showing you get a smaller delta when you do that, is beyond me.

Steve's test setup is now ideal for all current-gen hardware, although a Samsung 980 Pro thrown in as the NVMe drive would, once again, reduce constraints. It has no bearing on frame rates in any game he tested, though; if he were doing a complete set of productivity tests, drives can start to affect the numbers. It wouldn't be the case in most testing, but at some point it would make a difference.

BTW, 90% of consumers don't buy the best Nvidia GPU on the market, and Nvidia doesn't sell 90% of the discrete GPUs that consumers put into systems. Their numbers look large because the data that's often used to represent market share comes from Steam, or at least that's what Nvidia likes to point to. The problem with that is it doesn't exclude countries where most people don't own a PC and instead play games on a PC at an internet café. I could walk around the many internet cafés where my wife is from and take pics to show how that works. Steam's data is based on user accounts, and if in Asia, Africa and South America most of those shared machines have an inexpensive Nvidia GPU, that skews the data, since each user of the same machine shows up as another instance of that GPU. In other words, Nvidia's quoted market share is bogus. Based on sales data, AMD has had a lot more than 10% of the market over the last few years.
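To illustrate that argument with made-up numbers (this assumes, as I'm claiming above, that each account on a shared machine counts as another survey entry):

```python
# Toy numbers only, illustrating my claim above that shared cafe machines
# inflate per-account survey results; none of this is real survey data.
home_nvidia, home_amd = 50, 50     # home PCs, one owner each
cafe_pcs = 20                      # cafe PCs, all with a cheap Nvidia card
users_per_cafe_pc = 15             # accounts logging in on each cafe machine

machine_share = (home_nvidia + cafe_pcs) / (home_nvidia + home_amd + cafe_pcs)
entry_share = (home_nvidia + cafe_pcs * users_per_cafe_pc) / \
              (home_nvidia + home_amd + cafe_pcs * users_per_cafe_pc)

print(f"Nvidia share of physical machines: {machine_share:.0%}")  # ~58%
print(f"Nvidia share of survey entries:    {entry_share:.0%}")    # ~88%
```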

But if you want to test based on what most people own, then you'd need to test with a low-end GPU, which would give you essentially ZERO delta between these new-gen CPUs.
 
Are you saying that 1 degree on an AMD CPU is not the same as 1 degree on an Intel CPU? 😅😅 I think you might be wrong on this one :p
No, I'm right. First, the temperature that software reports is not necessarily right, even when using official tools. Like this:

The primary temperature reporting sensor of the AMD Ryzen™ processor is a sensor called “T Control,” or tCTL for short. The tCTL sensor is derived from the junction (Tj) temperature—the interface point between the die and heatspreader—but it may be offset on certain CPU models so that all models on the AM4 Platform have the same maximum tCTL value. This approach ensures that all AMD Ryzen™ processors have a consistent fan policy.

Specifically, the AMD Ryzen™ 7 1700X and 1800X carry a +20°C offset between the tCTL° (reported) temperature and the actual Tj° temperature. In the short term, users of the AMD Ryzen™ 1700X and 1800X can simply subtract 20°C to determine the true junction temperature of their processor. No arithmetic is required for the Ryzen 7 1700. Long term, we expect temperature monitoring software to better understand our tCTL offsets to report the junction temperature automatically.
No, you cannot know what the actual temperature is, even when staying within the same generation of AMD Ryzen CPUs.

For the Ryzen 5000 series, AMD does NOT specify an exact TJMax temperature. Since the reported CPU temperature is effectively "TJMax" - "distance to TJMax" + "possible offsets", there is no way of telling a Ryzen 5000's temperature without knowing exactly what TJMax AND the offsets are. Remember that the offset need not be linear. Those values are unknown to the public, so you cannot know the exact temperature of a Ryzen 5000 CPU unless you have some very inside information available.

Now add Intel to the mix and the comparison becomes impossible.
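Putting that reporting chain into a few lines makes the problem obvious; the +20°C tCTL offset for the 1700X/1800X quoted above is the only documented example, and that's exactly the point (this is just a sketch, not how any monitoring tool actually works):

```python
# Sketch of the reporting chain described above. The +20 C offset for the
# 1700X/1800X is the documented example; for newer parts the TJMax and offset
# values simply aren't public, which is exactly the problem.
KNOWN_TCTL_OFFSETS = {"Ryzen 7 1700X": 20, "Ryzen 7 1800X": 20, "Ryzen 7 1700": 0}

def junction_temp(reported_tctl, model):
    offset = KNOWN_TCTL_OFFSETS.get(model)
    if offset is None:
        raise ValueError(f"No published offset for {model}; Tj cannot be derived")
    return reported_tctl - offset

print(junction_temp(95, "Ryzen 7 1800X"))   # 75 C actual junction temperature
print(junction_temp(75, "Ryzen 7 1700"))    # 75 C, no offset on this model
# junction_temp(80, "Ryzen 7 5800X")        # -> raises: offset/TJMax unknown
```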
 
No, I'm right. First, the temperature that software reports is not necessarily right, even when using official tools. Like this:


No, you cannot know what the actual temperature is, even when staying within the same generation of AMD Ryzen CPUs.

For the Ryzen 5000 series, AMD does NOT specify an exact TJMax temperature. Since the reported CPU temperature is effectively "TJMax" - "distance to TJMax" + "possible offsets", there is no way of telling a Ryzen 5000's temperature without knowing exactly what TJMax AND the offsets are. Remember that the offset need not be linear. Those values are unknown to the public, so you cannot know the exact temperature of a Ryzen 5000 CPU unless you have some very inside information available.

Now add Intel to the mix and the comparison becomes impossible.

AMD removed that offset reporting not long after Ryzen 1000 came out, due to it causing problems for people who tried to measure temperatures...

On top of that, I've used a 2700X and a 5800X, and my wife is currently using a 3700X with a Corsair Capellix 360mm AIO running at 800rpm; her CPU temperature in demanding games with PBO enabled is 54°C. Either I had a really bad 5800X or they are just hot by nature due to the small die area :-)
 