How CPU Cores & Cache Impact Gaming Performance

I’m a Digital Audio Workstation guy, and Intel CPUs were king until AMD stole the show with more cache and higher IPC.

I bet their new Alder Lake chips have addressed this, but I finally bailed on my i7-4790K rigs and went with the 5600G/5700G, which are much better. They should be, since the seven-year gap was a time of indecision for me.

Intel Tiger Lake with a large cache would have kept me in Intel’s camp, but they were struggling with everything, so AMD won.
 
How about this: take all the power-hungry AMD and Nvidia GPUs, undervolt them to a 120 W max TGP, and pit them against the king, the GTX 1060. I want to see how that goes. I'm tired of these "efficient" 180 W and 350 W GPUs.
 

Have a look at some of the laptop GPU reviews; what you're describing is pretty much what's done there. One caveat: the laptop 3080 uses the same core configuration as the desktop 3070 Ti, so you'd need a chart to keep the core configs the same (or close when they differ). Laptops also have variable max power, so you'd need thorough reviews like the ones from JarrodsTech, who measures these wattages, to make sure they stay within the 120 W target you want.

Suffice it to say that when these laptop GPUs are power-limited to 120 W, they are still notably faster than the 1060.
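Comparing capped GPUs like this boils down to a performance-per-watt question. A minimal sketch of the bookkeeping, using made-up frame rates (none of these numbers are measured results):

```python
# Illustrative perf-per-watt comparison at a fixed power cap.
# All fps figures below are placeholders, not benchmark data.

def perf_per_watt(fps: float, watts: float) -> float:
    """Average frame rate divided by board power draw."""
    return fps / watts

# hypothetical average fps at a 120 W cap in the same test scene
gpus = {
    "GTX 1060 (120 W)": 60.0,
    "RTX 3060 Laptop (120 W)": 95.0,
    "RX 6600 (120 W)": 100.0,
}

for name, fps in gpus.items():
    print(f"{name}: {perf_per_watt(fps, 120.0):.2f} fps/W")
```

At the same cap, fps/W and raw fps rank the cards identically, which is what makes an equal-power shootout a fair efficiency test.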
 
Not that much faster, in my opinion; the 3060's performance at 130 W is kinda meh. I'm more interested in seeing how the 6600 and 7600 perform at 120 W.
 
I kinda laugh a lot at the nonsense posted all the time about needing this or that many cores.
Even now, I don't really need more than a dual core for my games.
I do own a faster PC, but honestly it only needs to work when I start up my virtual machines; games are a joke for it.
Game-wise, of my 16 threads I see 15 doing nothing, aside from three sitting at 0.8% on Windows background processes, which are mostly Microsoft's spyware anyway.
The games I have are admittedly not new at all, because the new games are all crap to me.
But that's also because I simply hate playing online.
If a game is online-only, I don't want it, period.
The only game I'm looking forward to is the successor to Horizon Zero Dawn, but there's no news on it so far.
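That "almost everything idle" picture is easy to sanity-check: the overall CPU figure a task manager shows is roughly busy threads divided by total hardware threads. A quick sketch (the thread counts are illustrative):

```python
def overall_cpu_percent(busy_threads: int, total_threads: int,
                        per_thread_load: float = 100.0) -> float:
    """Overall utilization a task manager reports when only a few
    threads are actually loaded."""
    return busy_threads * per_thread_load / total_threads

# A game saturating 1 of 16 hardware threads reads as ~6% overall,
# which is why a big chip can look nearly idle while gaming.
print(overall_cpu_percent(1, 16))   # 6.25
print(overall_cpu_percent(2, 16))   # 12.5
```

So a low overall percentage doesn't mean the game is light; it can mean one or two threads are pegged while the rest sleep.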

Anyway, almost no game makes my 9900KF work at more than 5% the whole time.
And the GPU often drops into its 400 MHz idle state because it's so underloaded.
So no, I don't have a single game that actually makes them work for the money.
When I start up my virtual servers and other tech toys, then I make it work hard for its money. The only step I'm planning soon is a switch to the AMD WX platform for the eight-channel memory and the insane number of PCIe lanes, which would make it possible for me to run 8 NVMe SSDs in RAID 10 with ease. :D
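The lane budget behind a build like that can be sketched with round numbers. Assuming four lanes per drive and roughly 2 GB/s of usable throughput per PCIe 4.0 lane (an approximation, not a spec-exact figure):

```python
# Rough lane/bandwidth budget for 8 NVMe drives in RAID 10.

LANES_PER_DRIVE = 4
GB_S_PER_LANE = 2.0   # PCIe 4.0, approximate usable throughput

drives = 8
lanes_needed = drives * LANES_PER_DRIVE        # 32 lanes for storage alone
raw_bandwidth = lanes_needed * GB_S_PER_LANE   # ~64 GB/s aggregate

# RAID 10 stripes across half the drives and mirrors to the other half,
# so theoretical sequential writes top out around half the aggregate.
stripe_write = raw_bandwidth / 2

print(lanes_needed, raw_bandwidth, stripe_write)
```

Thirty-two lanes for storage alone is exactly why this needs a workstation platform; a mainstream desktop socket simply doesn't expose that many CPU lanes.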
 
The 6700 runs very well according to my friends, but none of them are using a 6600, so I can't report how those do in games.
 
Back on the topic of the post: caching on both CPUs and GPUs has a huge impact on performance; the downside is still the high price of very fast cache.
One thing we've learned so far is that too much cache brings diminishing returns as well, which is why hardware designers try to find the right fit for a given workload. On pro cards a huge cache can be beneficial, but as always, a cache miss makes things slow, so the prefetch and prediction logic has to be well tuned to keep the miss penalty manageable.
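The diminishing-returns point can be made concrete with the textbook average memory access time formula, AMAT = hit time + miss rate × miss penalty. The numbers below are purely illustrative, not measurements of any real part:

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: every access pays the hit time,
    and misses additionally pay the penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

penalty = 100.0  # ns to main memory (assumed)

# Illustrative: each cache doubling trims the miss rate less in absolute
# terms, while the larger array adds a little hit latency.
configs = [
    # (size_mb, hit_ns, miss_rate)
    (4,  1.0, 0.040),
    (8,  1.2, 0.028),
    (16, 1.5, 0.020),
    (32, 1.9, 0.014),
]
for size, hit, mr in configs:
    print(f"{size:>2} MB cache: AMAT = {amat(hit, mr, penalty):.2f} ns")
```

With these numbers each doubling buys less (5.0 → 4.0 → 3.5 → 3.3 ns), which is the diminishing-returns curve in miniature: past some size, a bigger, slower cache stops paying for itself.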
 
I had a PC with an i7 CPU and a 4 GB GPU, but in the end it wasn't the CPU or the GPU that let it down; it was the 8 GB of DDR3 RAM and the motherboard that killed it.

I've upgraded my whole setup massively now, and I do see a huge difference in gaming quality; gaming on ultra at 165 Hz beats my old setup a thousand times over. I only decided to get my current setup because the DDR3 RAM, and a motherboard that wouldn't accept DDR4, let me down. If my motherboard had accepted a RAM upgrade, I probably would never have bought the system I have now. The cache can make a huge difference to gaming too.
 
Everyone talks about how modern AAA games, which are what's commonly tested, need big hardware and 6+ cores, and how older and indie games run great even on a dual core.

But then there are games like Planetside 2 or Arma 3 that need lots of CPU power but not many cores: they barely scale beyond 4 threads, yet hammer those threads so hard that most, if not all, quad cores can't maintain 60 fps, while a GTX 1050 can run them maxed out and still be CPU-limited, and something like a 1660 Super or RX 5600 XT is plenty for "4K ultra". Many MMOs are in the same situation: CPU-heavy, but not well multithreaded.
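That scaling pattern is just Amdahl's law: if only part of each frame's work can run in parallel, extra threads stop helping quickly. A sketch with an assumed 75% parallel fraction (the fraction is hypothetical, not measured from either game):

```python
def speedup(parallel_fraction: float, threads: int) -> float:
    """Amdahl's law: speedup over one thread when only part of the
    per-frame work can be spread across threads."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

# With a 75% parallel frame loop, going past 4 threads barely helps:
for n in (2, 4, 8, 16):
    print(f"{n:>2} threads: {speedup(0.75, n):.2f}x")
```

The curve flattens hard (about 1.6x at 2 threads, 2.3x at 4, under 3.4x even at 16), which matches the "hammers four threads, ignores the rest" behavior these games show.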
 