Recent benchmark submission shows Intel's upcoming Core i3 will have Hyper-Threading

Well, you also have the issue of lazy devs using threads but not utilizing them fully. We more or less need to brute-force our way around developer laziness.
The obvious response to such a statement (and this isn't directed at yourself; it's a remark that repeatedly comes up) is to ask the question 'what is the evidence that a group of developers is being lazy?' Perhaps it's better to word it as 'what is the evidence to suggest that a game would perform better if the engine used parallel processing?'

The simple answer is that there is no such evidence, unless the developers in question have openly stated the full processing sequence in their game engine and/or broken down the respective performance bottlenecks and what they did in response.

However, we can look at benchmark results from a game where the number of available cores is restricted. For example:


At first glance, the results for Assassin's Creed Odyssey would strongly suggest that multithreading offers significant performance benefits. But first note the system specs and game settings being used, specifically that the GPU is an overclocked RTX 2080 and the resolution is 1080p. These two alone put the performance bottleneck almost entirely on the CPU, so any minor increase in performance will be clearly highlighted.

Then note that the game is fundamentally created for the PS4 and XB1 platforms; the CPU in both systems is essentially a dual-CPU setup, with 4 cores in each:

Annotated-Poly-PS4.jpg


Cross-CPU reads/writes are fraught with latency problems and need to be avoided at all costs, so games for these consoles that use multithreading need to control what is done in parallel in such a way as to keep performance-critical threads (e.g. animation, physics, networking, culling) on one CPU and the rest (general OS, I/O, audio) on another (the sketch below shows the general idea).
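
To make that concrete, here's a minimal sketch of pinning thread groups to separate core clusters, using the Linux affinity API. It's purely illustrative - console SDKs expose their own affinity controls, and the thread roles and core ranges here are invented:

```cpp
// Minimal sketch: pinning thread groups to separate core clusters on
// Linux. Illustrative only - console SDKs have their own affinity
// APIs, and the core ranges here are invented.
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <cstdio>

// Pin the calling thread to the cores in [first, last].
static void pin_to_cores(int first, int last) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int core = first; core <= last; ++core)
        CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    // Performance-critical work stays on cores 0-3 (cluster 0)...
    std::thread sim([] {
        pin_to_cores(0, 3);
        std::puts("animation/physics/culling pinned to cluster 0");
    });
    // ...while background work lives on cores 4-7 (cluster 1), so the
    // two groups never bounce shared data across the cluster boundary.
    std::thread background([] {
        pin_to_cores(4, 7);
        std::puts("OS/I/O/audio work pinned to cluster 1");
    });
    sim.join();
    background.join();
}
```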

But no matter which multithreading approach the engine uses (synchronous parallel compute, asynchronous parallel compute, synchronous parallel data objects), there is always a hitch somewhere, either in the form of a data dependency (i.e. the result of one thread depends on another thread having finished its work first) or thread stalls (i.e. the engine can't move on until all threads in flight are processed). The toy example below shows both.
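
This is a toy demonstration only, assuming nothing about any real engine - raw C++ futures stand in for a proper job system:

```cpp
// Toy demonstration of a data dependency and a thread stall using raw
// C++ futures (a stand-in for a real engine's job system).
#include <future>
#include <vector>
#include <cstdio>

int main() {
    // Data dependency: skinning can't start until animation has
    // produced its result, so one task waits on another.
    std::future<int> animation = std::async(std::launch::async, [] {
        return 42;                       // e.g. computed bone poses
    });
    int poses = animation.get();         // blocks until animation is done
    std::future<int> skinning = std::async(std::launch::async, [poses] {
        return poses * 2;                // consumes the animation result
    });

    // Thread stall: the frame can't be submitted until *every* task in
    // flight has finished, so the slowest one sets the pace.
    std::vector<std::future<int>> tasks;
    for (int i = 0; i < 8; ++i)
        tasks.emplace_back(std::async(std::launch::async, [i] { return i; }));

    int frame_work = skinning.get();
    for (auto& t : tasks)
        frame_work += t.get();           // sync point: wait for all of them
    std::printf("frame ready: %d\n", frame_work);
}
```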

This may go some way to explaining why the performance increase in the video above, especially in the 0.1% low values, is so significant when going from 4 to 6 cores, and then again from 6 to 8 - the game engine is fundamentally designed to operate on a system with two 4-core CPUs. Does this make Ubisoft's developers lazy for not creating an entirely new engine just for the PC platform?

Possibly, but it's really down to cost. The Anvil engine, like Unreal or Unity, is an economical approach to creating a game system for 8 different platforms. For Odyssey, a relatively large team of programmers (roughly 100) worked on the engine as a whole, but this is just a tiny part of the 10 development and publishing organisations used to create the title.

Every engine development team is faced with the same challenges - there's only so much time and money with which to work, and that always means certain design choices have to be made. In the case of cross-platform games, it's quicker and easier to push the performance bottleneck onto the GPU than it is to create one engine specifically for the PC and another for consoles.

For a PC-only game, such as Total War: Three Kingdoms, it's a different story, especially if the developers aim to support as broad a range of system requirements as possible. For that particular Total War game, the minimum CPU is a dual-core processor, with the clear caveat that the frame rate will be around 30 fps on average, with low settings used throughout. The recommended CPU for the best overall experience is a 6-core, 12-thread processor - not 8 cores or more, which may seem surprising given that it's a 2019 release. That's not a sign of laziness, but one of design and cost constraints.

Evernessince said:
Features like ray tracing require a lot of CPU overhead and that's why Battlefield recommends an 8 core CPU when RTX is enabled.
This doesn't explain why Remedy recommends a 4 to 6 core processor for Control, nor why 4A Games recommends a 4 core one for Metro Exodus.

If you look at this analysis of BFV's ray tracing performance, you can see that BFV may just be a one-off case:

2018-12-06-image.png

2018-12-06-image-2.png


If you look at the post-patch Ultra results, comparing 1080p to 1440p (a ~78% increase in pixels to be processed), the 1% low drops from 62 to 59 fps and the average from 83 to 60 fps - drops of 5% and 28% respectively.
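
For anyone wanting to check the arithmetic, a trivial snippet reproduces those figures:

```cpp
// Sanity-checking the numbers quoted above.
#include <cstdio>

int main() {
    double px1080 = 1920.0 * 1080.0;     // 2,073,600 pixels
    double px1440 = 2560.0 * 1440.0;     // 3,686,400 pixels
    std::printf("pixel increase: %.1f%%\n", (px1440 / px1080 - 1) * 100); // ~77.8%
    std::printf("1%% low drop:    %.1f%%\n", (1 - 59.0 / 62.0) * 100);    // ~4.8%
    std::printf("average drop:   %.1f%%\n", (1 - 60.0 / 83.0) * 100);     // ~27.7%
}
```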

Compare that to Metro Exodus:

2019-02-15-image.png

2019-02-15-image-2.png


For the same GPU, the drops are 22% (1% low) and 29% (average); this means that the use of ray tracing in Metro is far more GPU-dependent than it is in Battlefield V, even though the Metro test was done with a CPU that has 2 more cores. So the use of RT isn't necessarily heavily dependent on the CPU.
 
With next-generation consoles having 8-core processors, I doubt 4 cores will be enough in the foreseeable future.

Dual-core i3s still hold their own for casual gaming - comparable to current consoles due to their generally higher clock speeds. Unless you're running cutting-edge stuff, these new quads will perform incredibly well for the money.

They'll be especially great in small form factor machines, where they can serve a myriad of uses, from workstations to small retro gaming machines to media servers.
 
Mainstream is $200, which is where 6 cores currently sit. Next generation they will likely be at $100-150 (although you can already snag a 2600 or 1600 at these prices).

Features like ray tracing require a lot of CPU overhead and that's why Battlefield recommends an 8 core CPU when RTX is enabled. There are plenty of features that are run on the CPU. Games run things like AI, scripts, pathing, the game engine itself, etc. on the CPU. The GPU may be excellent at rendering the game but ultimately it is the CPU that is setting everything up in the background. This is why GPUs are co-processors.
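
A heavily simplified sketch of that relationship - every name in it is made up, but it shows the CPU deciding what the frame contains before the GPU ever gets involved:

```cpp
// Heavily simplified sketch of the CPU/GPU split described above.
// Every name here is invented; the point is the order of operations.
#include <vector>
#include <cstdio>

struct DrawCommand { int meshId; float x, y; };

// CPU-side work: AI, scripts and pathing decide what every entity does
// this frame, and only then is the scene described to the renderer.
std::vector<DrawCommand> simulate_frame() {
    std::vector<DrawCommand> commands;
    for (int npc = 0; npc < 4; ++npc) {
        float x = npc * 1.5f;            // pathing: advance along a route
        float y = 0.0f;                  // AI/script results would land here
        commands.push_back({npc, x, y});
    }
    return commands;
}

// Stand-in for the graphics API: the GPU renders whatever the CPU set
// up - it never decides on its own what the frame contains.
void submit_to_gpu(const std::vector<DrawCommand>& commands) {
    for (const auto& c : commands)
        std::printf("draw mesh %d at (%.1f, %.1f)\n", c.meshId, c.x, c.y);
}

int main() {
    for (int frame = 0; frame < 2; ++frame)
        submit_to_gpu(simulate_frame());
}
```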

Want more complicated games with advanced weather systems, dynamic wildlife with hunter-prey relationships, and fully dynamic game environments where there is more than just the static 3D models and terrain we have now? As I see it, video games haven't even scratched the surface of what's possible, and for better games we certainly need to utilize the advantages of both components.

People aren't spending more on these higher core count CPUs; they are priced in the same bracket you could have paid for a mainstream processor over the past decade. In fact they are a bit less, given the way Intel had inflated prices a few years back.

Vulkan and DX12 reduce the overhead of the APIs themselves and remove bottlenecks, but at the cost of requiring more work from devs. As you can see from the current crop of DX12 games, they are not a silver bullet. Even if we do someday reach a point where DX12 and Vulkan consistently provide those benefits, it only provides more opportunity to use that extra CPU power to make the game more compelling.

GPUs are not a cure-all that can do everything a CPU can. It would be wise to utilize each to its own advantages.

The mainstream average PC gamer is on 4 cores or fewer; looking at the Steam survey, 51% are on 4 cores and almost 25% on 2 cores. Also, over 41% are on 3GB or less VRAM; integrated Intel graphics make up 20% of GPUs; the GTX/GT 520/650/660/710/730/750/750Ti/950/960M/1050/1050Ti make up another ~25%; then 1060s at 14%. 85% are on 1080p or lower, with about 22% below 1080p.

Average gamers are not chasing resolution/FPS or the latest and greatest.
 
IPC and high MHz are just as important as cores. Overclock that four-core, eight-thread processor and it's still quite viable. But with 6+ core CPUs becoming more reasonable in cost, it makes sense to opt for one of them when you upgrade.
 
The mainstream average PC gamer is on 4 cores or fewer; looking at the Steam survey, 51% are on 4 cores and almost 25% on 2 cores. Also, over 41% are on 3GB or less VRAM; integrated Intel graphics make up 20% of GPUs; the GTX/GT 520/650/660/710/730/750/750Ti/950/960M/1050/1050Ti make up another ~25%; then 1060s at 14%. 85% are on 1080p or lower, with about 22% below 1080p.

Average gamers are not chasing resolution/FPS or the latest and greatest.

It raises a question: what is "mainstream"?

I don't want to be rude, but it's probably worth saying that 2c, or 2c/4t, or even 4c/4t is not the mainstream; it's kind of a swamp. It is a direct consequence of that period of stagnation from Sandy Bridge (early 2011) until Kaby Lake (early 2017).

The most sensible part of the meaning of the word "stream" is "movement". A "stream" is something that is moving more quickly than its surroundings and is somehow leading, or showing the direction, in which the whole bunch of things will turn.

I'm not an AMD fan, but I think they are that "stream" of the PC CPU market now (and have been for a while, probably 2 years). The mainstream is what is sold in the greatest quantity of units. The Steam survey just can't show us this data because it doesn't have it.

Btw, MindFactory's stats, which are regularly cited here on TS, are probably not that relevant even in Germany. According to Statista, Amazon was some 20x bigger in 2018 in $ terms than MF as an e-tailer. But if we consider MF's stats, the best-selling CPU is a 6c/12t part.
 
The mainstream average PC gamer is on 4 cores or fewer; looking at the Steam survey, 51% are on 4 cores and almost 25% on 2 cores. Also, over 41% are on 3GB or less VRAM; integrated Intel graphics make up 20% of GPUs; the GTX/GT 520/650/660/710/730/750/750Ti/950/960M/1050/1050Ti make up another ~25%; then 1060s at 14%. 85% are on 1080p or lower, with about 22% below 1080p.

Average gamers are not chasing resolution/FPS or the latest and greatest.

Did you read what you just posted?

If 20% of Steam survey respondents are using integrated graphics, then they clearly aren't the "mainstream average PC gamer" you claimed.

Your comment says it all. The thing is, the Steam survey includes laptops and net cafes in its results. These are not your average PC gamer. Only God knows how they gather or purge their results either, as Steam is 100% opaque about its testing methodology.

The fact that 28.15% of people who took the survey use a monitor with a resolution BELOW 1080p says that these results CLEARLY do not represent the average gamer. Are you going to sit here and convince me that resolutions below 1080p are mainstream now "cuz steam survey"?

I remember when I had 3GB of VRAM with my 7970, in 2011.
 
This doesn't explain why Remedy recommends a 4 to 6 core processor for Control, nor why 4A Games recommends a 4 core one for Metro Exodus.

If you look at this analysis of BFV's ray tracing performance, you can see that BFV may just be a one-off case:

2018-12-06-image.png

2018-12-06-image-2.png


If you look at the post-patch Ultra results, comparing 1080p to 1440p (a ~78% increase in pixels to be processed), the 1% low drops from 62 to 59 fps and the average from 83 to 60 fps - drops of 5% and 28% respectively.

Compare that to Metro Exodus:

2019-02-15-image.png

2019-02-15-image-2.png


For the same GPU, the drops are 22% (1% low) and 29% (average); this means that the use of ray tracing in Metro is far more GPU-dependent than it is in Battlefield V, even though the Metro test was done with a CPU that has 2 more cores. So the use of RT isn't necessarily heavily dependent on the CPU.

A 4 to 6 core recommendation sounds just about right actually, given that 4 is the low end and 6 is the high end. BF has always been a CPU-intensive series due to the number of players and the size of the maps. It makes sense that a game set on a smaller scale would have lower requirements.

I don't believe those Metro Exodus graphs show much in the way of CPU performance either. It was being tested with a 9900K; unless you peg the cores the game is running on at 100%, you aren't going to see a hit to 1% lows. After all, those tests were designed to test the GPU, not the impact of ray tracing on the CPU.
 
BFV is heavily dependent on the CPU's attributes, especially in large multiplayer setups (just look at the difference between the i5 processors). The CPU hit from using Direct3D DXR comes mostly from the performance penalty of doing pipeline state switches (graphics to compute, graphics to DXR, etc.). Now, the linked tests were done in DX11, so no use of DXR occurred; however, if the game is already very sensitive to core count, frequency, etc., then using DXR will only exacerbate the issue. I'm not convinced it's a useful example for looking at the CPU cost of using ray tracing.
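
To illustrate why those switches matter, here's a hedged sketch of the common mitigation: sorting draw calls by pipeline state so the expensive switch happens once per group rather than once per draw. The Pipeline/Draw types are invented - this is not Direct3D code:

```cpp
// Invented Pipeline/Draw types - not Direct3D code. Shows the usual
// mitigation for state-switch cost: sort draws by pipeline so the
// expensive switch happens once per group, not once per draw.
#include <algorithm>
#include <vector>
#include <cstdio>

enum class Pipeline { Graphics, Compute, RayTracing };

struct Draw { Pipeline pso; int id; };

void submit(std::vector<Draw> draws) {
    // Group draws by pipeline; real renderers sort by a key that also
    // encodes shaders, textures, and so on.
    std::stable_sort(draws.begin(), draws.end(),
                     [](const Draw& a, const Draw& b) { return a.pso < b.pso; });

    int switches = 0;
    const Pipeline* current = nullptr;
    for (const Draw& d : draws) {
        if (!current || d.pso != *current) {
            current = &d.pso;    // SetPipelineState-style call: this is
            ++switches;          // the CPU-expensive part being counted
        }
        std::printf("draw %d\n", d.id);
    }
    std::printf("pipeline switches: %d\n", switches);
}

int main() {
    // Interleaved graphics/compute/DXR work: unsorted, this would
    // switch pipelines on nearly every draw.
    submit({{Pipeline::Graphics, 0}, {Pipeline::RayTracing, 1},
            {Pipeline::Graphics, 2}, {Pipeline::Compute, 3},
            {Pipeline::RayTracing, 4}, {Pipeline::Graphics, 5}});
}
```

Submitted in their original order, those six draws would switch state six times; sorted, they need only three switches, one per pipeline.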

One can look at these results for Control, though:


You can see that the CPU % values are generally lower when comparing High with Medium DXR to just High graphics settings, for obvious reasons: the performance bottleneck is GPU based.
 
I'm glad to know there are more PC Gamers than a single console platform. With that said you invalidated your own comment.
...what? Even if there are more PC gamers in terms of numbers, PC gaming represents less revenue.
PC hasn't been the market leader for gaming in over a decade. While the revenue numbers are closer than I'd believed ($38.3B for console, $33.4B for PC as of 2018; https://www.gamesindustry.biz/artic...ndustry-biz-presents-the-year-in-numbers-2018), the fact of the matter is that major AAA developers prioritize consoles over PC when developing a game.
So the argument that devs will target 4/8 CPUs because they are more popular on Steam still doesn't make sense, because for 99% of studios, PC is an afterthought, not a focus.
 
4/8 is enough; I have a quad-core hyper-threaded CPU and I can play both of those games at 60fps just fine. In fact it's quite obvious that my GPU is the limiting factor in my system.

Some games can deliver better performance with 6/12 than they can on 4/8 but that doesn’t mean 4/8 isn’t enough.

Games aren’t complicated, if a game requires a heavy core count CPU I would say it’s badly optimised.

Imho, the opposite is true. If games are able to take advantage of more modern multi core architectures then I feel they are in fact more optimized / better coded.

For older game engines 4C/8T is certainly enough, but be aware that four cores is now the bottom of the barrel.
 
Btw, MindFactory's stats, which are regularly cited here on TS, are probably not that relevant even in Germany. According to Statista, Amazon was some 20x bigger in 2018 in $ terms than MF as an e-tailer. But if we consider MF's stats, the best-selling CPU is a 6c/12t part.

I'd bet that Amazon's sales numbers include all kinds of electronics, i.e. TVs, smartphones, tablets, vacuum cleaners... it would be much more interesting to see what they sell PC-component-wise (especially CPUs) to compare their numbers to Mindfactory's.
 
The most sensible part of the meaning of the word "stream" is "movement". A "stream" is something that is moving more quickly than its surroundings and is somehow leading, or showing the direction, in which the whole bunch of things will turn.

Desperately grasping at straws there, aren't you? Mainstream concepts are "things accepted by the majority of people" as per Webster's definition, corresponding to the previous post. Your "stream" theory is incorrect. Steam's largest segment of CPU gamers is quad-core users at 52%, followed by dual cores at 24% and hex cores at 18%. If those facts somehow offend you... that is a you problem. They are just facts, not a recommendation for which CPU you should go out and buy, which leads me to...

Btw, MindFactory's stats, which are regularly cited here on TS, are probably not that relevant even in Germany. According to Statista, Amazon was some 20x bigger in 2018 in $ terms than MF as an e-tailer. But if we consider MF's stats, the best-selling CPU is a 6c/12t part.

So the current CPU offerings are the most bought today, as opposed to the previous generation's dual and quad cores? Very interesting. Do you think minivans are still as popular today as they were in the '80s, or perhaps the current crop of SUVs is outselling them?
 
Imho, the opposite is true. If games are able to take advantage of more modern multi core architectures then I feel they are in fact more optimized / better coded.

For older game engines 4C/8T is certainly enough, but be aware that four cores is now the bottom of the barrel.

According to PC Gamer, the 7700K (4c/8T) outperforms the Ryzen 3600 at stock Intel settings (and you can easily OC the 7700K) in the PC Gamer suite, and matches or outperforms it in every major game released this year: Metro Exodus, Rage 2, Borderlands 3, etc. I guess Ryzen is the "bottom of the barrel", to use your terms...

I personally see two CPUs giving you over 80FPS in the 97th percentile, but hey, you can't be a fanboy and justify your purchase without ripping the competition, can you?


t2Xqo6UaW5VPxoLHusTEnJ-650-80.png
 
What about the i5 CPUs? They once had HT, then it was removed; will they enable it again?
It was i7s with HT, unless you are talking about the few mobile i5 CPUs.

With HT vulnerabilities and increasing core count, Intel dropped HT in everything except i9s. I don't see them bringing HT back, but I could be mistaken.
 
My take is that if you're serious about gaming, then you'll want to meet or exceed the performance levels of the console market. PlayStation consoles are usually the more powerful of each generation.

Wikipedia:
https://en.wikipedia.org/wiki/PlayStation_4_technical_specifications
PlayStation 4 Pro
The upgraded 'PS4 Pro' (originally codenamed 'Neo',[17][18] product code CUH-7000) uses a more powerful APU initially built with a 16 nm FinFET process from TSMC. While the number of logical processor cores (8) remained the same, CPU clock speed was increased from 1.6 GHz to 2.13 GHz (a 33.1% improvement in CPU core clockrate), but with the underlying architecture unchanged. The number of graphics Compute Units on the APU was doubled to 36 Graphics Core Next (GCN) Compute Units (from 18), with a clock speed increase to 911 MHz (from 800 MHz), resulting in a theoretical single precision floating point performance metric of 4.19 TeraFLOPs. Compared to the original PS4 GPU, this is a 2.27X increase in single precision FLOPs. Improvements in GPU 16-bit variable float calculations derived from the newer AMD Vega architecture result in the PS4 Pro having a theoretical half precision floating point performance of 8.39 TeraFLOPs.
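
For reference, that 4.19 TFLOPs figure falls straight out of the standard GCN arithmetic (64 shaders per CU and 2 FLOPs per clock for fused multiply-add, both well-documented GCN characteristics):

```cpp
// Checking the quoted figure: 36 CUs x 64 shaders per CU x 2 FLOPs per
// clock (fused multiply-add) x 911 MHz.
#include <cstdio>

int main() {
    double fp32 = 36.0 * 64.0 * 2.0 * 911e6;
    std::printf("FP32: %.2f TFLOPs\n", fp32 / 1e12);       // ~4.20
    std::printf("FP16: %.2f TFLOPs\n", 2.0 * fp32 / 1e12); // ~8.40 (double-rate)
}
```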

Digital Trends
PS5
PlayStation 5 specs

In the interview with Wired, Mark Cerny revealed that the PS5’s CPU and GPU are AMD chips that will be able to support 3D audio, 8K graphics, and ray tracing, a feature currently found on very powerful PCs. The CPU will be an eight-core chip based on the Ryzen line and use Zen 2 microarchitecture. The GPU will be based on the Radeon Navi line.

Despite specs that significantly outperform current-generation consoles, a recent rumor speculates that the PS5 will still have less power than recent high-end PC graphics cards. Hardware leaker Komachi (thanks TechRadar) suggested that the PS5’s AMD Oberon APU (which combines CPU and GPU) will run at 2GHz. While that more than doubles the speed of the PS4 Pro and outpaces the Nvidia RTX 2070 Super, it’s not as powerful as the Nvidia RTX 2080 line.
 
It was i7s with HT, unless you are talking about the few mobile i5 CPUs.

With HT vulnerabilities and increasing core count, Intel dropped HT in everything except i9s. I don't see them bringing HT back, but I could be mistaken.
With the ease of increasing core counts, is HT even necessary? Somehow, though, I would think having an extra thread within the same core would have better latency than two cores with one thread each.
 
Desperately grasping at straws there, aren't you? Mainstream concepts are "things accepted by the majority of people" as per Webster's definition, corresponding to the previous post. Your "stream" theory is incorrect. Steam's largest segment of CPU gamers is quad-core users at 52%, followed by dual cores at 24% and hex cores at 18%. If those facts somehow offend you... that is a you problem. They are just facts, not a recommendation for which CPU you should go out and buy, which leads me to...

For those who seem to be hungry for definitions:
 
According to PC Gamer, the 7700K (4c/8T) outperforms the Ryzen 3600 at stock Intel settings (and you can easily OC the 7700K) in the PC Gamer suite, and matches or outperforms it in every major game released this year: Metro Exodus, Rage 2, Borderlands 3, etc. I guess Ryzen is the "bottom of the barrel", to use your terms...

I personally see two CPUs giving you over 80FPS in the 97th percentile, but hey, you can't be a fanboy and justify your purchase without ripping the competition, can you?

The pic you chose yourself just shows that 4c/4t is at the bottom of the barrel. A much higher clocked, latest ring-bus 4c/8t is still another story.
 
What about the i5 CPUs? They once had HT, then it was removed; will they enable it again?

Supposedly in the Comet Lake desktop (10 series) parts, but it's still unconfirmed at this point. The Comet Lake mobile i5s do have HT enabled, for what that's worth.
 
Imho, the opposite is true. If games are able to take advantage of more modern multi core architectures then I feel they are in fact more optimized / better coded.

For older game engines 4C/8T is certainly enough, but be aware that four cores is now the bottom of the barrel.
There is a difference between a game that can take advantage of more cores and a game that needs them. A well-optimised game should deliver a good, playable experience on as few cores as possible whilst being able to deliver more performance with more cores. If a game in 2019 requires, say, 6 or more cores, and it runs badly if you don't have them, then unless it's doing something compute-wise that would justify requiring those extra cores, I would say it's a badly optimised game.

The more expensive a graphics card is, the more cores it has, generally speaking. But we would say a game that requires an expensive card is badly optimised. I would say the same for a CPU.
 
There is a difference between a game that can take advantage of more cores and a game that needs them. A well-optimised game should deliver a good, playable experience on as few cores as possible whilst being able to deliver more performance with more cores. If a game in 2019 requires, say, 6 or more cores, and it runs badly if you don't have them, then unless it's doing something compute-wise that would justify requiring those extra cores, I would say it's a badly optimised game.

The more expensive a graphics card is, the more cores it has, generally speaking. But we would say a game that requires an expensive card is badly optimised. I would say the same for a CPU.

Very well said - I totally agree.

My point was more that games that cannot take advantage of more cores (i.e. do not scale with them or offer additional/better features) are the ones that are poorly written. But of course it also goes the other way around. :)
 