AMD Ryzen 3 3300X and Ryzen 3 3100 Review

This is a great gaming CPU, especially as we all know it's going to be years before we need a minimum of 6 cores for a good experience.

It's good that it beat a 7700K (or almost matched it, whatever). But if you have a 7700K, or even most quad-core Intel chips from the last decade, there still isn't really a chip you can buy that provides a significant upgrade in gaming performance. I think a lot of gamers are in this position. I'm on a 4790K OC'd to 4.8 GHz. Even if I buy an R9 3950X, it's not going to be more than, what, 10-20% faster than my 4790K at gaming, if that? And it costs a bomb.

As soon as there's a chip out there that can beat something like a 7700K by 50%+ in games, I think a lot of people will start upgrading. I would; for me, extra cores don't mean much. I want extra gaming performance.
It really does depend on what games you play, at what resolution, and what your FPS target is. At FHD/60 FPS, I think there is very little chance you'll hit the limit (I am in that category too: I used a 2600K@4.5 and it felt fine, and only upgraded for better future-proofing and because I tend to use my PC for other things besides gaming... and now I like my 3600 :D)
However, if you want higher FPS (say, rocking a 120Hz monitor), especially if you want to keep even the 1% lows above that rate, you will hit that limit more often and will have better results with more than 8 threads (there's a quick sketch below of how 1% lows are computed). Or, if you plan to game at 1440p or even 4K, god forbid aiming for high refresh rates on top, you most definitely will hit that wall.
All I'm saying is that there is a viable use case for many-core CPUs, even just for gaming, right now, and maybe you just don't belong in that category (I know I don't... but there are many who do).
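For anyone unsure what the "1% lows" above actually measure, here is a minimal sketch of one common way to compute them from logged frame times. The numbers are invented, and note some tools instead report the 99th-percentile frame time converted to FPS:

```python
# Minimal sketch: deriving average FPS and "1% lows" from per-frame times.
# frame_times_ms is hypothetical logged data (the kind of thing frame-time
# capture tools record); every value here is made up for illustration.

def fps_stats(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    # "1% low" here = average FPS over the slowest 1% of frames
    slowest = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    low_1pct = 1000.0 * len(slowest) / sum(slowest)
    return avg_fps, low_1pct

# Mostly ~7 ms frames (~143 FPS) with occasional 20 ms stutter spikes:
frames = [7.0] * 990 + [20.0] * 10
avg, low = fps_stats(frames)
print(f"avg: {avg:.0f} FPS, 1% low: {low:.0f} FPS")  # avg ~140, 1% low 50
```

The point of the example: an average near 140 FPS can hide stutters that drag the 1% lows down to 50 FPS, which is exactly what you feel on a 120Hz monitor.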
 
It seems unfortunate, IMO, that these procs, along with AMD's other procs, are billed as "unlocked" when that means little to nothing these days. This kind of hubris infected sIntel - mediocre performance gains generation to generation while they were at the top of the heap - and it seems that sIntel now has a long climb out of the hubris rut they are in.

I just hope that AMD takes this kind of marketing tactic no further. Otherwise, the overconfidence might set in and sIntel might just creep up on AMD again and surprise them.

We need competition in the marketplace, not unfounded hubris.
 
All we need now is for Intel to release the i3-10100 and i5-10400F at lower-than-announced prices. The domino effect of that will be something to watch.
 
The Ryzen 3 3300X is so awesome.

We are talking about top performance, beating a 7700K@4.5 GHz in Cinebench R20 single-threaded, PCIe 4.0 motherboards, and all that for ca. $200.

Chipzilla is really feelin' the heat.
It's good, but the desktop APUs are going to be better. I was looking at the 2600 for £120, but that seems to have disappeared off the shelves.
Hopefully B450 ITX will drop in price by 10-30%, yeah!
 
Hard to disagree with you, as much as it is hard not to praise AMD's improvement. At the end of the day, customers get very good value chips without artificial locks, chips that will go as high as the architecture and silicon allow. Without them, we would still be paying $200+ for a bare 4 cores, and the multithreaded 4-core would be offered as a high-end premium product starting at $300. Since Sandy Bridge we faced just more of the same for more and more cash; the mid-range consumer CPU offering was hardly progressing. IMO we have a competitive market now.
 
It's been proven that there is a small handful of games that benefit from 6 cores over 4. But in those games a quad-core at 5 GHz is faster than a 6-core at 4 GHz. There are no games that really benefit from 8 cores over 6.

As for gaming at high resolutions, well, that doesn't need more than 4 cores at all. In fact, the opposite: the game will be limited by the GPU, not the CPU, so as long as your CPU can keep up, you don't get any benefit from a more powerful CPU. It's when you have a low resolution and a powerful GPU that the CPU becomes the limiting factor (see the toy model just below).
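To make the bottleneck argument concrete, here is a toy model, my own simplification with invented per-frame costs rather than benchmark data: the frame rate is set by whichever of the CPU or GPU takes longer per frame, which is why resolution moves the bottleneck around.

```python
# Toy model (invented numbers, purely illustrative): per-frame CPU cost is
# roughly fixed (game logic, draw-call submission), while GPU cost scales
# with resolution. The slower of the two dictates the frame rate.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

CPU_MS = 8.0  # hypothetical CPU cost per frame => ~125 FPS ceiling

for res, gpu_ms in [("4K", 16.0), ("1440p", 9.0), ("1080p", 5.0)]:
    limiter = "GPU" if gpu_ms > CPU_MS else "CPU"
    print(f"{res:>6}: {fps(CPU_MS, gpu_ms):5.1f} FPS ({limiter}-bound)")

# 4K is GPU-bound (62.5 FPS): a faster CPU changes nothing there.
# 1080p is CPU-bound (125 FPS): only a faster CPU raises the frame rate.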

I do think multi-core will come to games, but it's a slow process. I believe it will take around 4-5 years before having fewer than 6 cores is not enough. I don't think it's worth spending loads on an 8-core now to "future-proof", because by the time we need 8 cores, something like an i3 would do better.

Of course, heavy core counts are utilised much more outside of gaming.
 
I certainly agree that AMD significantly improved their CPUs. It is great to see this - I have been an AMD supporter for a long time, and their lack of competition was what enabled sIntel's hubris.

However, I look back at the early days of overclocking, when it was possible to get a 50% or greater increase in clock speed with air coolers (there was only one CPU over the years that I overclocked - so perhaps I should not be saying anything). These days, whether it's sIntel or AMD, and without water cooling or extreme cooling measures, what do overclockers get - 10% at most?

My gripe is that these days, whether AMD or sIntel, "unlocked" seems relatively meaningless in the context of the "good old days", when some overclocks reached 50%. For instance (and I know I am dating myself), there was an sIntel proc a long time ago that ran at 300 MHz out of the box, and most of them could be overclocked to 450 MHz on air - which was not guaranteed, of course.

Personally, I think it's time for "unlocked" to be put to bed for good as a marketing term. However, I am not advocating for locked processors to return to the market - just get rid of the locked/unlocked marketing terms without preventing overclocking for those who want to give it a go. For the current gen of AMD procs, locked/unlocked is essentially meaningless, though as far as price goes, it typically adds cost that is borne by the consumer.
 
I hope people aren't going to buy these thinking they'll be set for gaming over the next 3-5 years. Once the new consoles turn up and devs start optimising for the 7 cores / 14 threads they will have to play with, 4 cores / 8 threads is going to come up well short of what's needed to avoid horrendous 1% low framerate figures.
I don't think anyone's buying a low-budget CPU thinking they'll game on it 5 years later. Even those with high-end components will upgrade in 5 years or less, because they constantly want "better".
 
It looks like tweaking the RAM can give you some nice gains; just using the DRAM calculator is about a 10% FPS gain.
 
The Celeron 300A was exceptional in this regard, even with its stock heatsink. Today is more or less the same as 10 or 20 years ago, but you have to buy K-parts from Intel to go from ~4 GHz stock to ~5 GHz OC'd. That's surely more than 10%, and it's not just a marketing term but a different product segment. TSMC's nodes are different, and that translates into the typical ~10% OC headroom for Ryzens (quick arithmetic below).
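Putting rough numbers on those comparisons (just a sketch; the Ryzen figure is the ~10% headroom claimed above, not a measurement):

```python
# Quick arithmetic on the overclocks mentioned in this thread:
def oc_gain_pct(stock_ghz, oc_ghz):
    return 100.0 * (oc_ghz - stock_ghz) / stock_ghz

print(f"Celeron 300A, 300 -> 450 MHz:  {oc_gain_pct(0.300, 0.450):.0f}%")  # 50%
print(f"Intel K-part, ~4 -> ~5 GHz:    {oc_gain_pct(4.0, 5.0):.0f}%")      # 25%
print(f"Typical Ryzen, ~4 -> ~4.4 GHz: {oc_gain_pct(4.0, 4.4):.0f}%")      # ~10%
```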
 
4 cores will be fine for gaming for 3-5 years. But what we will see is more games actually benefiting from more than 4 cores.

If anything, DX12 has reduced the CPU overhead in games. There simply isn't enough for a CPU to do to need 8 cores in most games.

This is a very good thing for consumers; we don't want to have to buy expensive high-core-count CPUs just to play games!
 
I never OC and know stuff-all about OCing, but even Intel basically stopped at 5 GHz - they had plans for 7 GHz or something to that effect - but the power draw, yield and heat became too costly.

Basically, until we move on from silicon, we have to get smarter and cleverer with shrinkage/cores/instructions/memory and software to get speed increases. There is more to be gained from those methods than from trying to, say, push all cores beyond 5 GHz - and even with that limit, the CPUs in 20 years will blow the socks off the current ones, even if we are still on silicon.
 
Adding to my comment - our brains burn huge amounts of fuel and they are probably thermally constrained as well. But we get our power from an incredible amount of connections (synapses) and variable firing - so we need to integrate neural-network chips onto our motherboards. Or, even freakier, have a wetware section in our computer case with attached cleaners, feeders and a heart.
 
4 cores / 8 threads will be OK for a while, but plain 4 cores is dead already; BF V multiplayer proved it.
 
I am not even angry that I bought a 2500X.
What a time to be able to get affordable CPUs
that handle almost anything great, from games to heavy productivity apps.
 
Here is the Gamers Nexus "10 Years of Intel CPUs Benchmarked: i7-930, 2600K, 4790K, & Everything Since (2020)" video from a few days ago. Your 4790K and a 6700K are there (with OCs of 4.8 GHz), and they can't keep up with a stock 3700X in a lot of games. The 4790K falls well short, and even the 6700K suffers in newer demanding games.

This is why I upgraded my 4c/8t i7 to a 3700X last year: I was getting lots of stutters and FPS drops in newer demanding games. 4c/8t CPUs are not good enough going forward anymore, not for the next 3 years, and I doubt 2 years. Newer games are using more CPU power. DX12 and Vulkan do not lower CPU requirements, they just make better use of the CPU, and when games want 6c/12t under a DX12/Vulkan API, a 4c/8t CPU will suffer. And games are using more and more, and next-gen consoles will cement this.

Once next-gen GPUs start dropping (in combination with next-gen consoles), requirements will go up in the next year or two. 6c/12t really is the new minimum a person would want now. 4c/8t will get you by for a little while, but they are well short on cores and threads, and it shows in newer demanding games. And these are games that are not even targeting next-gen consoles yet. Games are in a weird middle space now between generations: one foot in lower core counts, one foot in higher core counts. But with next-gen consoles looming, both feet will be firmly planted in higher-core-count games as the generation progresses.

Some games are absolutely using more cores and threads for a performance advantage: fewer stutters, better smoothness and higher frames across the board. And the next-gen consoles haven't even dropped yet. But make no mistake, 4c/8t CPUs are fast going the way of the dodo if you want to hold a great framerate in all games all the time.
 
"Falls well short" is a bit of an exaggeration. It's definitely slower, but not by enough to be worth spending hundreds of dollars on an upgrade. I don't get any stuttering or any issues whatsoever. I imagine you probably had another issue with your i7 if you were getting that.

You are actually quite vastly incorrect in your claim that 4c/8t is only valid for another year or so. I can absolutely guarantee you that it's going to take more than a year or so for games to start requiring more than 4 cores. It's a delusion to believe that all of a sudden games are going to start needing all the cores on a big expensive multi-core CPU overnight or very soon. DX12 actually reduced CPU overhead, meaning users could use a dual-core i3 where they previously needed a quad-core i5, for example.

And this is a good thing. We want more things like DX12 enabling lower-specced hardware, and the very last thing we want is to have to buy expensive multi-core CPUs to get the most out of a graphics card. The only people who benefit from that are the multi-billion-dollar corporations who sell the chips. The software developers don't benefit, and we lose out because we have to spend money to upgrade.

I'm going to set a reminder for 12 months on this comment thread so I can prove to you how wrong you are.

By the time games start needing more than 4 cores (5 years, I reckon), today's 6- and 8-core parts will be outdone by budget components.
 
What I meant is that if you look at those Gamers Nexus benchmarks, an overclocked 6700K/4790K falls short of a 60 FPS minimum in some games, down to 40 FPS sometimes. That is well short. That is what will happen more and more going forward. They can't support a top-of-the-line GPU anymore; they are a bottleneck. That is why I had to upgrade my CPU to a 3700X: my 2080 Super suffered on my i7. And if you look at the Gamers Nexus benchmarks I showed you, you see they suffer there as well. If you want to use a top-end GPU and match next-gen consoles (or beat them), you won't be doing it with a 4c/8t CPU and an older GPU.
You can get away with it, but it is not going to be a great experience, unless you are happy with lower FPS and weaker GPUs.

I want to match and beat next-gen consoles. I am not happy with less than 60 FPS (and I really want double that) or an unstable FPS. DX12 will let games make better use of your CPU, but when games are built around low-level APIs and the 8c/16t CPUs in next-gen consoles, a low-level API won't save you; you still only have half the cores and threads. Even clocked to 4.8 GHz it is not enough, and the Gamers Nexus benchmarks show this clearly. DX12 will only help if the games are not built to use the 8-core CPUs on next-gen consoles. DX12 won't help when you are still short that many cores and threads.

But yes, you might get away with a lower-end GPU and an unstable FPS below 60, but I am not okay with that. That is why I had to upgrade completely; it was not a fun experience to bottleneck a 2080 Super. It won't be all games, though. The demanding AAA games are the ones I am talking about here, the ones most people will want to play.

And with the next-gen RTX 3000 GPUs and RDNA 2 coming, CPU power will indeed need to be higher to cope with them. My 2080 Super was already bottlenecked by my i7 (badly in some games), and an RTX 3000 GPU will not run any better on a 4c/8t CPU; it will run worse.
 
And I should have been more clear. The games in those GN benchmarks that are dropping so low on their minimums are being run at medium settings on a 2080 Ti. So when you actually turn the settings higher, FPS drops accordingly for the average, 1% and 0.1% lows. The quad-cores are already under 60 FPS on their minimums, or close to it, and when you turn settings up they drop even lower (on all CPUs, of course, even higher-core-count ones). Those are the FPS he is getting at 1080p on a 2080 Ti at medium settings.

So if you want to run at higher settings than medium (I do) and keep your minimums above 60 FPS (without stutters and FPS drops), then a higher-core-count CPU becomes a must to achieve that more readily. And we actually need next-gen GPUs (RTX 3000 and RDNA 2) to really keep things solid in future, more complex games.

DX12 is great, and it does help with CPU load balancing across all cores and threads (in most DX12 games; there are one or two wonky ones), but that is with current game-engine complexity. Next-gen games (thanks to next-gen consoles) will increase massively in complexity as the generation progresses. It won't take 2 or 3 years before we start seeing games taking advantage of those 8-core CPUs and powerful GPUs. Game devs are going to be making games that blow us away (and push the consoles hard) so that we buy those games.

That is always what happens: consoles set the bar for the gaming generation. The difference is that this time the next-gen consoles are very powerful, with powerful CPUs. And once those new demanding games arrive (we will see some within a year; not all of them, but there will be some) that take advantage of that power, we won't match their experience or FPS with weaker hardware. DX12 won't allow a 4c/8t CPU to match the FPS and quality settings of the consoles (which the games are targeting). Game complexity will increase tremendously this generation; DX12 will allow a higher-core-count CPU to match or beat an 8-core console, but not a 4c/8t. With 4c/8t, you are just going to be clinging on to whatever FPS you can manage, lower than 60 FPS no doubt, and at lower settings.

Just look at the latest Unreal Engine PS5 demo. That runs at 30 FPS. If they need an 8c/16t CPU (plus an RDNA 2 GPU and a super-fast SSD) to achieve 30 FPS in that, a 4c/8t CPU will struggle to hit 30 FPS. So when a next-gen game targets 30 or 60 FPS, those are the games that will hit 4c/8t CPUs the hardest. The next-gen games that target 120 FPS (if any) will be easier to run on a lower-end CPU, but still won't match a higher-core-count CPU.

You can't match or better console hardware performance (consoles are also incredibly low-level API machines) with a DX12 API on PC and a 4c/8t CPU, in those demanding games that are coming. And they will start coming within a year; not all of them, but the ones most people will want to play at console-matching settings and FPS.

But yes, I absolutely agree with you about upgrading: don't upgrade now if you already own a 4c/8t i7 and are happy with it. Only upgrade when it becomes an issue for you (same as always). But if you are purchasing a new CPU today, then buy a 6c/12t CPU minimum; save up if you have to. Or just wait, like you said, and buy an even better CPU that is even cheaper in the future; I absolutely agree with that advice. Nothing wrong with waiting if you are still fine or can deal in the meantime. That is what I did; I only upgraded because my 2080 Super was bottlenecked badly in the newer demanding games I play on my i7 build.
 
I don't think you know how games utilise system resources. Dropping the graphics settings generally doesn't change performance if you are CPU-limited, because the CPU isn't the one processing the graphics. This includes the minimums.

My 6-year-old 4790K easily averages over 60 FPS in any game. In fact, it's enough to push over 100 FPS in a lot of games.
 
Your 4790K at 4.8 GHz falls well short of 60 FPS (or sits right on the line at 62 FPS in SOTTR, which is DX12 by the way, at medium settings, not even max); the 4790K's minimums in some of those Gamers Nexus benchmarks are already at 60 FPS or lower. Gamers Nexus knows better than you. I don't think you understand how CPU utilization works.

There are settings that do influence CPU usage (depending on the game and the setting, of course; it is not every game or every setting, but the CPU does have to prepare everything for the GPU, obviously). You clearly don't know as much as you think you do. At this stage, everyone knows better than you do.

And with your 4790K already falling well short of, or just barely making, 60 FPS minimums in some games, future demanding games won't run any better. SOTTR is DX12 and shows your CPU just making 62 FPS minimums; to think future demanding games will run any better at max settings is delusional. Next.
 
And here you go:
A very rough rule of thumb:
Anything that puts more stuff on the screen hits both CPU and GPU: LOD/draw distance, particles, model quality, etc.

Anything that makes that stuff look nicer is primarily GPU: resolution, AA, texture quality/filtering, shader quality, lighting quality, etc. ... 100% GPU.
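As a hypothetical illustration of that rule of thumb (every coefficient below is invented purely for illustration): model the per-frame CPU cost as scaling with object count, and the GPU cost as scaling with both object count and shaded pixels:

```python
# Hypothetical per-frame cost model of the rule of thumb above;
# all coefficients are made up, only the shape of the result matters.

def frame_cost_ms(objects, pixels, shade_cost):
    cpu = 2.0 + objects * 0.002                  # logic + per-object draw submission
    gpu = objects * 0.001 + pixels * shade_cost  # raster + shading work
    return cpu, gpu

base     = frame_cost_ms(3000, 1920 * 1080, 2e-6)  # 1080p, medium shaders
crowded  = frame_cost_ms(9000, 1920 * 1080, 2e-6)  # 3x draw distance/LOD
prettier = frame_cost_ms(3000, 3840 * 2160, 4e-6)  # 4K + nicer shaders

for name, (cpu, gpu) in [("base", base), ("more stuff", crowded), ("looks nicer", prettier)]:
    print(f"{name:>11}: cpu {cpu:5.1f} ms, gpu {gpu:5.1f} ms")

# "more stuff" raises both CPU and GPU time; "looks nicer" raises GPU time only.
```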

You obviously have no idea how system resources work. Like I said, the CPU has to prepare data for the GPU first; the GPU doesn't just magically render all the data from the HDD/SSD on its own, obviously. Game data goes through the CPU first, and CPU load does increase with the settings I listed above, duh.

And like I said, it is not every setting or every game (some ports don't give us many options), but there are still games where it does. You clearly don't know as much as you think you do.
 
And yes, I also agree that when a CPU is completely overwhelmed, dropping settings will have no effect whatsoever. But I am not saying that will happen to your 4790K just yet; it will be dropped settings and/or a lower locked FPS at first.

But of course, the time will come when your CPU is completely overwhelmed, same as every CPU. I am just not sure when that will happen, so I am playing it safe and saying dropped settings and/or a locked lower FPS to avoid stutters and FPS drops, at first.

And once again, I am not talking about every game (and not current-gen, less demanding games), but about demanding next-gen games that will be more demanding than SOTTR, even with DX12. SOTTR is also DX12, and your 4.8 GHz 4790K has minimums of 62 FPS in it (at medium settings, not even max); we are talking stable 0.1% and 1% lows here, so get it straight.
 
Clearly I have a far better idea than you. Pushing up graphical settings generally doesn't affect CPU performance, mate. Some settings do, things like grass and physics effects, but purely visual differences don't. Yes, increasing line of sight can, but not by a huge amount. You are overvaluing the impact on the CPU: if you jack up all the settings from zero to max, it has a very minimal impact on a CPU-limited minimum frame rate, and resolution changes have no impact at all. In fact, as games and APIs move forward, the reliance on the CPU diminishes; in some ways we need weaker CPUs to game today than we did previously - GTA V is a good example of that.

I definitely know better than you. I suggest you go have a look at some more reviews and benchmarks. You'll get there eventually; we were all new once, I guess.
 
Lol. Okay, genius who doesn't understand how a game engine works or how a CPU delivers info to the GPU. Draw distance, LOD, etc. all affect CPU performance.

You are the guy who boasts that Intel makes the best gaming CPUs while you are only running a little 4790K. You are biased and too cheap to buy a new PC. Enjoy your 4790K; I dare you to keep it for 4 years without touching a single graphics setting or locking your FPS lower when you have to.

Toodles.
 