Spider-Man Remastered CPU Benchmark

Wowzers. The gap between the 2600X and the following generation, e.g. the 3600, is enormous. Highly unusual results in a modern game; it's like I'm reading a chart from 1997 or something.
 
"as the Core i3-10105F -- which is basically a 7700K, which is basically a 6700K -- wasn't very good."

Maybe I'm just old, or my idea of "wasn't very good" is different, but it was doing around 85 FPS average with very high quality settings at 1440p and 1% lows above 60 FPS. I'd personally find that enjoyable performance in most single-player games.
 
With the kind of GPU the 10105F is more likely to be paired with at 1440p, those 1% lows will drop below 60 fps. That leaves the 4C/8T minimum at the 3300X and 12100, and maybe the 5300G in this chart. The 3100 might be OK as well, and if Intel had made a 4C/8T 11th Gen part, that should have been good enough. Many other games still play well with last decade's quad cores, but not this one.
 
With the kind of GPU the 10105F is more likely to be paired with at 1440p, those 1% lows will drop below 60 fps.
 
Now I'm really gonna check my system out; the drops I get in this game are really gonna annoy me after reading this.
 
Great article! Interesting and fun :)

Quote from the article:
" Throwing more cores at the problem isn't the solution, which is a bit surprising given the game will spread the load quite well across even 12 cores"

Maybe I'm reaching here, but could we say that this game is badly optimized for multithreading?
Being able to utilize all cores/threads at 90%+ doesn't necessarily mean all threads are actually doing something useful :)

During performance testing of CAE/CAD software, I've seen cases where CPU usage was higher, yet the test took longer to complete.
In one example, a process configured to use only 1 core/thread finished quicker than with 8C/16T. Another was quicker at 4C/8T than at 12C/24T.
In all these tests the CPU usage was 90%+ across the available threads. Sometimes the difference in time wasn't much, but once you take efficiency or power usage into account, it becomes a different story:
10 seconds at 90% CPU utilization on 4 cores versus 10 seconds at 90% CPU utilization on 12 cores is a big difference.
(For a game on its own this probably doesn't matter too much, but other applications running in the background could be affected...)

I'm wondering if something similar is going on in this game?
My guess is that if the 12900K or 5950X were set to a lower (but reasonable) core configuration, performance would stay the same but power usage would be lower?
(Freeing up resources for other tasks at the same time...)
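The CAE/CAD behaviour described above has a classic explanation in Amdahl's law: if any fixed fraction of the work is serial, extra cores give rapidly diminishing returns even while every core reports high utilization. Here's a minimal Python sketch (the 20% serial fraction is a made-up number purely for illustration, not something measured from the game):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Theoretical best-case speedup when `serial_fraction` of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Assume (purely for illustration) that 20% of each frame's CPU work
# is serial; the remaining 80% scales perfectly across cores.
for n in (1, 4, 8, 12):
    print(f"{n:2d} cores -> {amdahl_speedup(0.2, n):.2f}x speedup")
```

With those assumptions, 4 cores reach about 2.5x and 12 cores only about 3.75x over a single core, even though every core can still sit near 100% busy waiting on the serial portion. That's consistent with the idea that trimming the 12900K/5950X down to a smaller core configuration could cost little performance while saving power.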
 
Oh boy, apparently Zen 4 V-Cache models are showing a much bigger leap in performance over Zen 4 than the Zen 3 5800X3D did over the 5800X, and the clock speeds of the V-Cache models will be at worst 100 MHz lower, possibly the same. Combined with higher clocks and DDR5, the 7800X3D will be amazingly good. Also, the V-Cache models have reportedly been brought forward to late Q1 / early Q2 2023. The V-Cache in the 5800X3D was effectively a beta; Zen 4 gets second-gen V-Cache that can handle higher voltages.
 
Being able to utilize all cores/threads at 90%+ doesn't necessarily mean all threads are actually doing something useful :)
In one example, a process configured to use only 1 core/thread finished quicker than with 8C/16T. Another was quicker at 4C/8T than at 12C/24T.

Exactly. The OS and drivers may try to use all cores but fail to do so effectively:

1) If the code is badly optimized, some cores may have to wait for another to finish before they can keep processing, which means more data being shuffled between cores and more heat/bandwidth used.

2) When working single-core, most CPUs achieve higher clock rates. If you unnecessarily use more cores while the code is optimized for a single core, you end up with less performance overall because the CPU runs at lower clock speeds.
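Points 1) and 2) can be put into a toy model. All the numbers below are invented purely for illustration (they don't describe any real CPU): boost clock drops as more cores are loaded, so if the extra cores contribute little useful work, spreading the load actually loses performance:

```python
def boost_ghz(active_cores: int) -> float:
    """Assumed boost behaviour: 5.0 GHz single-core, minus 0.1 GHz per
    extra active core, floored at a 3.8 GHz all-core clock."""
    return max(3.8, 5.0 - 0.1 * (active_cores - 1))

def effective_throughput(active_cores: int, scaling: float) -> float:
    """Useful work per unit time when only `scaling` (0..1) of each
    extra core's effort is useful (the rest is waiting or duplicated
    work, even though the core still shows as busy)."""
    useful_cores = 1 + scaling * (active_cores - 1)
    return useful_cores * boost_ghz(active_cores)

# Code optimized for a single core (scaling = 0): one fast core beats
# eight slower ones; with perfect scaling, the eight cores win easily.
print(effective_throughput(1, 0.0))  # single core at full boost
print(effective_throughput(8, 0.0))  # eight busy cores, no extra work done
print(effective_throughput(8, 1.0))  # eight cores scaling perfectly
```

With zero useful scaling, one core at 5.0 GHz outruns eight cores throttled to 4.3 GHz, which is exactly the point 2) scenario above.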
 
The minimums for both my CPU and GPU are over 60fps at 4K very high settings so it's all good for me. The rest of the information is extraneous (for me anyway) at this point. :laughing:
 
This article is an amazing wealth of knowledge. Thank you so much for including small details like the differences between CAS latencies and ranks. Things like that are what set you guys above the rest. Something this has made immediately clear to me is that there are a lot of CPUs that are a clear bottleneck for GPUs like this, and that will only get worse when the 4000 series launches. It might finally be time for me to upgrade, after I read your reviews of course.
 
The lowest number for the R5-3600 is an 82 fps 1% low. My R5-3600X will be a few fps above that, which means that I'll NEVER see stuttering at any resolution. I'm good! :laughing:
 
Should have tested the 7700K overclocked.
That's not a dead horse.
Also, there are no 6C/6T processors, like the 8600K or 9600K. Those are pretty competent overclocked.
 