Making a Fast Quad-Core Gaming CPU

Always the same benchmark games that I don’t even play. But that’s okay. I’m pretty sure DCS World would run complex missions soooo much better if I had a 20GHz dual-core CPU instead of my current 5GHz 8-core setup.
 
Very interesting analysis, thanks.

So since you decided to briefly revisit the overhead issue, why not add a few tests on older/weaker CPUs when reviewing lower-end GPUs like the 3060, 6600 XT and 3060 Ti?
That would surely be as useful as testing the effect of the x8 lane limitation for owners of PCIe 3 systems (and I'm not being sarcastic here, that was interesting information).

I am sure there are many 8400, 2600, etc. owners who would love to know which GPU upgrade would be worth it for their particular system. Even I, with my 2700X, am not sure which GPU upgrade would give me the best results for the money spent once GPU prices reach an acceptable level.
 
It's been long observed, going back to Haswell or so, that the 6MB vs 8MB cache on i3s doesn't make that much difference in a lot of games, especially given that low-end $70 CPUs tend to be matched with 60-75Hz monitors and sub-$300 (normally) GPUs, where the "we tested with a 6900 XT / RTX 3090 to highlight the difference" effect gets masked anyway. So, e.g., for Cyberpunk, 85/112, 80/109, 76/106 and 64/100 (1% low / avg) all end up looking like 60/60 or 75/75 anyway, with budget FreeSync 60-75Hz monitors (no LFC) usually handling variable refresh rates down to about 40-48fps, which is below the 1% low in every game tested here. And again, not all budget gamers play AAA titles, so other games are affected even less.
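For anyone wondering why those budget panels lose VRR below roughly 48fps: LFC (Low Framerate Compensation) can only engage when the panel's maximum refresh is at least about double its minimum, so the driver can repeat each frame and keep the effective rate inside the VRR window. A minimal sketch of that rule of thumb (the 2x factor is the commonly cited threshold; treat the exact number, and the function itself, as illustrative):

```python
def supports_lfc(vrr_min_hz: float, vrr_max_hz: float) -> bool:
    """LFC rule of thumb: the max refresh must be at least ~2x the min,
    so a sub-minimum frame rate can be doubled back into the VRR window."""
    return vrr_max_hz >= 2.0 * vrr_min_hz

# Typical budget 48-75Hz FreeSync panel: 75 < 2 * 48 -> no LFC,
# so VRR simply disengages below 48fps.
print(supports_lfc(48, 75))   # False
# A 48-144Hz panel can show 40fps as 80Hz (each frame drawn twice).
print(supports_lfc(48, 144))  # True
```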

Edit: I'd also second Irata's request for another CPU vs GPU bottleneck roundup. While it's understandable to test every CPU with an RTX 3090 and every GPU with an i9 / Ryzen 9 to eliminate bottlenecking on a technical level, "we took a $70 CPU and paired it with a $1,500 GPU" doesn't make a particularly good buyer's guide for those who never buy top-end CPUs or GPUs anyway.
 
Who would think of doing this? How about testing all these "efficient" new 180W/200W/300W GPUs at just 115W and seeing how they compare to the number one GPU on Steam over the last 5 years? Yeah, no, let's NOT do that!
 
Always the same benchmark games that I don’t even play. But that’s okay. I’m pretty sure DCS World would run complex missions soooo much better if I had a 20GHz dual-core CPU instead of my current 5GHz 8-core setup.


I play DCS World in VR.
More of everything is necessary for that game:

64GB DDR4
SSD
Core i9
3090

The more you throw at it, the better.
 
Wow, this article is some definitive proof that you need at least 6 cores to play games made in the last 5 years, regardless of L3 cache. Maybe not 8 cores yet, but 4 is definitely out. Thanks, guys, for the thorough analysis!

My 4770K Haswell can play any game made in the past 5 years, and I definitely do not need a 6-core CPU to do so.

Who games at 150 FPS anyway? Only a small minority does. I don't even have a monitor capable of 100+ FPS, nor do I want to buy one.

Linus Sebastian, using the Steam Survey, pointed out that 43% of the Steam user base is still on 4-core CPUs. It is obviously absurd to think that game developers would launch games playable only on 6-core CPUs and up, ignoring a huge chunk of the user base who are still on 4C/8T.

As Linus says in the video, the OP's tests and the machines used concern such a minuscule portion of the global player base as to be, in essence, 1% tests for 1% machines.

If you consider for a moment that the GTX 1060, 1050 Ti and GPUs of similar capability still top the Steam Survey lists, you will understand why 150+ FPS gaming concerns, for the most part, the 1 percenters.

 
Who would think of doing this? How about testing all these "efficient" new 180W/200W/300W GPUs at just 115W and seeing how they compare to the number one GPU on Steam over the last 5 years? Yeah, no, let's NOT do that!
I have done something like that: the GTX 1060 6GB gets rekt by an undervolted 3070 at 130W when tested with a latest-gen CPU. The differences at 1440p, and especially 4K, are vast.
 
My 4770K Haswell can play any game made in the past 5 years, and I definitely do not need a 6-core CPU to do so.

Who games at 150 FPS anyway? Only a small minority does. I don't even have a monitor capable of 100+ FPS, nor do I want to buy one.

Linus Sebastian, using the Steam Survey, pointed out that 43% of the Steam user base is still on 4-core CPUs. It is obviously absurd to think that game developers would launch games playable only on 6-core CPUs and up, ignoring a huge chunk of the user base who are still on 4C/8T.

As Linus says in the video, the OP's tests and the machines used concern such a minuscule portion of the global player base as to be, in essence, 1% tests for 1% machines.

If you consider for a moment that the GTX 1060, 1050 Ti and GPUs of similar capability still top the Steam Survey lists, you will understand why 150+ FPS gaming concerns, for the most part, the 1 percenters.


I guess that would mean you don't play Shadow of the Tomb Raider (2018) or Final Fantasy XV (2016). I tried both on my i7-3770K and the games would freeze and lag like crazy. It wasn't until I switched to my 6-core i7-4930K workstation that I realized these two games were constantly using 80%+ of the 6-core CPU in Task Manager while running. Techspot's analysis confirms it was because of having only 4 cores.

It would seem you play very different games than the ones I have played in the last 5 years, and thus have a very different perception of "can play any game in the past 5 years".
 
Very interesting analysis, thanks.

So since you decided to briefly revisit the overhead issue, why not add a few tests on older/weaker CPUs when reviewing lower-end GPUs like the 3060, 6600 XT and 3060 Ti?
That would surely be as useful as testing the effect of the x8 lane limitation for owners of PCIe 3 systems (and I'm not being sarcastic here, that was interesting information).

I am sure there are many 8400, 2600, etc. owners who would love to know which GPU upgrade would be worth it for their particular system. Even I, with my 2700X, am not sure which GPU upgrade would give me the best results for the money spent once GPU prices reach an acceptable level.
Unfortunately, reviewers tend to be more academic than helpful: a 5950X with an entry-level GPU, or a 3090 with an entry-level CPU, is pointless and lazy. The testing is also not very representative; for the F1 games they never use real gameplay, and for Battlefield no one tests online play, where the difference is huge.
To be fair to Hardware Unboxed, they do have some scaling content, but most of it is purely academic, which contradicts their recent comments targeting Digital Foundry.
 
I have done something like that: the GTX 1060 6GB gets rekt by an undervolted 3070 at 130W when tested with a latest-gen CPU. The differences at 1440p, and especially 4K, are vast.
Show me this test? And at 115W max... not 130W with 200W+ spikes (the 1060 runs at 75W with 100W spikes).
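For what it's worth, the hard cap being asked for here is just a driver-enforced power limit (the same thing `nvidia-smi -pl 115` sets), which is not the same as an undervolt: a cap lets the card pick whatever clocks fit under the budget, while an undervolt shifts the voltage/frequency curve. A rough sketch of setting it programmatically through NVML's Python bindings; GPU index 0 and the 115W target are assumptions for illustration:

```python
# pip install nvidia-ml-py
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Driver-enforced bounds, in milliwatts; you cannot cap below the floor.
min_mw, max_mw = nvmlDeviceGetPowerManagementLimitConstraints(handle)
target_mw = 115_000  # 115W
if min_mw <= target_mw <= max_mw:
    nvmlDeviceSetPowerManagementLimit(handle, target_mw)  # needs admin/root
else:
    print(f"115W is outside this card's {min_mw // 1000}-{max_mw // 1000}W range")

nvmlShutdown()
```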
 
I guess that would mean you don't play Shadow of the Tomb Raider (2018) or Final Fantasy XV (2016). I tried both on my i7-3770K and the games would freeze and lag like crazy. It wasn't until I switched to my 6-core i7-4930K workstation that I realized these two games were constantly using 80%+ of the 6-core CPU in Task Manager while running. Techspot's analysis confirms it was because of having only 4 cores.

It would seem you play very different games than the ones I have played in the last 5 years, and thus have a very different perception of "can play any game in the past 5 years".


There are tons of videos of people running Shadow of the Tomb Raider at 1080p ultra just fine.

This guy here did not even seriously overclock his 4770K to run the game; he did a moderate 4.1GHz OC. My own 4770K runs at 4.4GHz.

The thing is, hardware companies depend on people like you, who are not knowledgeable about PC hardware, to rip you off and sell you planned-obsolescence hardware which you do not need.

As for FFXV, if my 4770K cannot run a 2016 game maxed out without even trying too hard, I will delete my account, leave the world and go take monastic vows.
 

There are tons of videos of people running Shadow of the Tomb Raider at 1080p ultra just fine.

This guy here did not even seriously overclock his 4770K to run the game; he did a moderate 4.1GHz OC. My own 4770K runs at 4.4GHz.

The thing is, hardware companies depend on people like you, who are not knowledgeable about PC hardware, to rip you off and sell you planned-obsolescence hardware which you do not need.

As for FFXV, if my 4770K cannot run a 2016 game maxed out without even trying too hard, I will delete my account, leave the world and go take monastic vows.
Ah okay, I see you are one of those who pride themselves on using really old hardware to run modern software to prove they are "better" and "more knowledgeable about PC hardware". I guess you can tell me all about the ISA bus that connected the Intel 8088 CPU too, then. Maybe the 8087 math coprocessor while you are at it. Why bother coming to Techspot, then, if all you do is tell them their analysis is wrong and say the opposite of what everyone else says?

I just summarized their analysis. It shows that 4 cores clearly lag far behind 6 cores in the games they tested, even with copious amounts of L3 cache, and those games are from the last 5 years. Their article shows that 8 cores are probably still not needed. Take it up with Techspot if you have a problem with their analysis.
 
Steve is dropping the ball HARD here. Mustache kid did what I'm saying, but with Nvidia and crappy laptop GPUs... (useless). I want to see how ANY 7nm Navi performs on PC at 115W (85W could be fun too) vs. Ampere on the abysmal Samsung node... but instead I'm getting 180-200W! That's efficient, baby :/
 

There are tons of videos of people running Shadow of the Tomb Raider at 1080p ultra just fine.

This guy here did not even seriously overclock his 4770K to run the game; he did a moderate 4.1GHz OC. My own 4770K runs at 4.4GHz.

The thing is, hardware companies depend on people like you, who are not knowledgeable about PC hardware, to rip you off and sell you planned-obsolescence hardware which you do not need.

As for FFXV, if my 4770K cannot run a 2016 game maxed out without even trying too hard, I will delete my account, leave the world and go take monastic vows.
I have an overclocked 4790K (4.7GHz) with 2133 CL9 memory and it doesn’t always hit 60fps in some games. Far Cry 5 hovers around 50-70, for example. It’s definitely time to upgrade. I’m waiting for Intel’s DDR5 parts, and then my quad-core is history. Or probably on eBay, as for some reason people still pay quite a lot of money for one.
 

There are tons of videos of people running Shadow of the Tomb Raider at 1080p ultra just fine.

This guy here did not even seriously overclock his 4770K to run the game; he did a moderate 4.1GHz OC. My own 4770K runs at 4.4GHz.

The thing is, hardware companies depend on people like you, who are not knowledgeable about PC hardware, to rip you off and sell you planned-obsolescence hardware which you do not need.

As for FFXV, if my 4770K cannot run a 2016 game maxed out without even trying too hard, I will delete my account, leave the world and go take monastic vows.

In NPC-heavy areas his 1% lows were consistently in the 30 FPS range with the CPU hitting 100%, so the GPU was up to the task but the CPU was not. I've played that same area with an i7-4790 (non-K) and a 1050 Ti at 1080p on lowest settings plus shadows and TAA so the graphics look passable, and I get decent framerates in the 50s, but in those same NPC-heavy areas it dips into the 40s at 100% CPU.

For 60fps in SotTR you need a much faster 4C/8T than Haswell, maybe even faster than Broadwell and its L4 cache. I wonder if an OC'd 7700K or R3 3300X can keep SotTR above 60fps in those scenarios.
 
In NPC-heavy areas his 1% lows were consistently in the 30 FPS range with the CPU hitting 100%, so the GPU was up to the task but the CPU was not. I've played that same area with an i7-4790 (non-K) and a 1050 Ti at 1080p on lowest settings plus shadows and TAA so the graphics look passable, and I get decent framerates in the 50s, but in those same NPC-heavy areas it dips into the 40s at 100% CPU.

For 60fps in SotTR you need a much faster 4C/8T than Haswell, maybe even faster than Broadwell and its L4 cache. I wonder if an OC'd 7700K or R3 3300X can keep SotTR above 60fps in those scenarios.
This is exactly what I saw when playing Shadow of the Tomb Raider and Final Fantasy XV. The sudden dips in FPS made the games really difficult to play whenever more CPU-intensive things such as NPCs appeared. I even delidded and used liquid metal to overclock my i7-3770K from 3.9GHz to 4.6GHz, but it’s the lack of cores spiking CPU utilization to 100% that is the issue, just like you said.

Thanks for confirming what I was seeing as well, appreciate it.
 
In NPC-heavy areas his 1% lows were consistently in the 30 FPS range with the CPU hitting 100%, so the GPU was up to the task but the CPU was not. I've played that same area with an i7-4790 (non-K) and a 1050 Ti at 1080p on lowest settings plus shadows and TAA so the graphics look passable, and I get decent framerates in the 50s, but in those same NPC-heavy areas it dips into the 40s at 100% CPU.

For 60fps in SotTR you need a much faster 4C/8T than Haswell, maybe even faster than Broadwell and its L4 cache. I wonder if an OC'd 7700K or R3 3300X can keep SotTR above 60fps in those scenarios.

That's not what we are seeing in the video you quote and I posted.

I literally posted a video that shows the CPU doing fine in NPC-heavy areas, and you proceed to throw all that in the trashcan by setting up a huge strawman based on your anecdotal experience.

I watched the FPS counter carefully; it never falls below 81 FPS in the NPC-crowded area. Apparently the "31 FPS" mentioned as the 1% low is a fluke, because you never see it in the FPS counter in the video and the game never stutters. The lowest FPS in this video is in the river scene, when the counter visibly drops to 73.

That's really good for a 4.1GHz Haswell.
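One thing worth untangling in this back-and-forth: a 1% low is computed from the frametime log, not read off the on-screen counter. A counter that updates once per second averages dozens of frames, so a handful of 30ms+ spikes can pull the 1% low into the 30s without the counter ever displaying a low number. A minimal sketch of one common definition (averaging the slowest 1% of frames; tools differ, and some report the 99th-percentile frametime instead):

```python
import numpy as np

def fps_stats(frametimes_ms: np.ndarray) -> tuple[float, float]:
    """Return (average FPS, 1% low FPS) from per-frame times in ms."""
    avg_fps = 1000.0 / frametimes_ms.mean()
    n_worst = max(1, len(frametimes_ms) // 100)   # slowest 1% of frames
    worst = np.sort(frametimes_ms)[-n_worst:]
    low_1pct = 1000.0 / worst.mean()
    return avg_fps, low_1pct

# 6000 smooth ~10ms frames plus 60 stutter spikes at ~32ms: the on-screen
# counter would read ~98 FPS the whole time, yet the 1% low is ~31 FPS.
times = np.concatenate([np.full(6000, 10.0), np.full(60, 32.0)])
print(fps_stats(times))  # (~98, ~31)
```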
 
I guess that would mean you don't play Shadow of the Tomb Raider (2018) or Final Fantasy XV (2016). I tried both on my i7-3770K and the games would freeze and lag like crazy. It wasn't until I switched to my 6-core i7-4930K workstation that I realized these two games were constantly using 80%+ of the 6-core CPU in Task Manager while running. Techspot's analysis confirms it was because of having only 4 cores.

It would seem you play very different games than the ones I have played in the last 5 years, and thus have a very different perception of "can play any game in the past 5 years".
I played Tomb Raider 2018 on my i7-2600K @ 5.1GHz just fine, though if memory serves it was in DX12 mode that it worked best. I also played the latest PC version of Final Fantasy and it ran decently enough on the system.
 
Steve is dropping the ball HARD here. Mustache kid did what I'm saying, but with Nvidia and crappy laptop GPUs... (useless). I want to see how ANY 7nm Navi performs on PC at 115W (85W could be fun too) vs. Ampere on the abysmal Samsung node... but instead I'm getting 180-200W! That's efficient, baby :/
Why are you so obsessed with this 115W cap on a DESKTOP? In a laptop I can relate somewhat, but what is the significance for a "regular" gaming PC? You mentioned the 1060 as an example, but even from that generation the 1070 and 1080 used around 150-170W (I'm a 1070 owner; mine draws up to 150W by factory default, which I can raise by 20% if I fancy).

I'm obsessed with a silent PC, and therefore I undervolt my components when I need to, but I wouldn't demand Steve run tests for my particular use case (because I assume I'm a very small minority). I can't imagine the masses throttling (and hard at that!) their mid- or high-end GPUs to 115W... for... what exactly, again? I might be missing something here; help me out...
 
That's not what we are seeing in the video you quote and I posted.

I literally posted a video that shows the CPU doing fine in NPC-heavy areas, and you proceed to throw all that in the trashcan by setting up a huge strawman based on your anecdotal experience.

I watched the FPS counter carefully; it never falls below 81 FPS in the NPC-crowded area. Apparently the "31 FPS" mentioned as the 1% low is a fluke, because you never see it in the FPS counter in the video and the game never stutters. The lowest FPS in this video is in the river scene, when the counter visibly drops to 73.

That's really good for a 4.1GHz Haswell.

I agree that focusing on the 1% lows here is not accurate; I assumed the reviewer was controlling for that, but he's not a pro at this, so that's fine.

However, his numbers are quite suspect, as the 1% lows we can see in the video are the same as TS's here, but with a notably slower CPU and a weaker GPU:

[Attachment: TR.png]
 
Why are you so obsessed with this 115W cap on a DESKTOP? In a laptop I can relate somewhat, but what is the significance for a "regular" gaming PC? You mentioned the 1060 as an example, but even from that generation the 1070 and 1080 used around 150-170W (I'm a 1070 owner; mine draws up to 150W by factory default, which I can raise by 20% if I fancy).

I'm obsessed with a silent PC, and therefore I undervolt my components when I need to, but I wouldn't demand Steve run tests for my particular use case (because I assume I'm a very small minority). I can't imagine the masses throttling (and hard at that!) their mid- or high-end GPUs to 115W... for... what exactly, again? I might be missing something here; help me out...
Simple: the same reason I passed on the 1070, the sucky 2000 series, the 3000 series, etc., and don't touch ANY CPU that goes past 65W... (I got a 5600X instead of the 5800X). Same reason I don't touch any useless RGB, etc. Gamers just seem to have lost track of all STANDARDS (along with all self-respect).

120W is more than enough power! The 1060 still performs FINE TO THIS DAY with that TGP... but since 2016 neither AMD nor Nvidia has released any GPU with that same level of performance per TGP relative to previous gens (I know there's also the 1660 Ti, Super, etc.), and to make things WORSE, 99% of GPUs are OC "gamerz" versions with even sillier TGPs.
 
I play DCS World in VR.
More of everything is necessary for that game:

64GB DDR4
SSD
Core i9
3090

The more you throw at it, the better.
I tried VR, and even with a system very similar to yours it’s full of compromises and poor performance. There’s no hardware on the market I can upgrade to that will fix it, since I’m pretty much sitting on the best hardware DCS can take advantage of right now.

Yeah, sitting in the cockpit is cool and all, but when you have back issues (checking your six in VR is no fun), wonky eyes (nearsighted, astigmatism, presbyopia, floaters, photophobia, slight flicker sensitivity, ghosting/blooming due to dry eyes and a damaged cornea, 73mm IPD) and end up with all-day brain fog after playing VR for an hour, VR stops being fun once the initial wow factor is gone.

I opted for a 49” 32:9 super-ultrawide instead. A 3090 is perfect for driving 5120x1440. Is it the same as VR? No, and it never will be, but it’s a hell of a lot more immersive than 16:9, and you don’t have to deal with the issues of a triple-screen setup. I can play almost all day without getting fatigued, and it enhances most of my non-VR titles. It’s a good compromise for me. Final Fantasy XIV is amazing at 32:9 because it lets you customize your UI. It’s surprising how many games work at 32:9, and if not, most will at least do 21:9.

Not running VR in DCS also means I can use external MFDs and make better use of Stream Decks, button boxes and control panels. At this stage, I’m guessing all my controllers, peripherals and chair with mounts are actually worth more than my PC and monitor, lol.
 