285K Benchmark

I compared the benchmark results for Assassin's Creed Mirage, specifically the Intel Core Ultra 285K results. Using the same highest settings at FHD resolution, my setup with the RTX 4080 recorded 199 FPS.
Had I used the same RTX 4090, my system would likely have achieved scores close to or equal to TechSpot's 9800X3D result. For further verification, I downclocked the DDR5-6400 memory, but the score still remained as high as 190 FPS.
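To put a number on that, here is a quick sketch of the arithmetic in Python, using only the two figures I measured above:

```python
# Rough check of how much the memory downclock actually cost,
# using only the two FPS figures measured above.
stock_fps = 199.0        # DDR5-6400, Assassin's Creed Mirage, FHD, highest settings
downclocked_fps = 190.0  # same run with the memory downclocked

drop_pct = (stock_fps - downclocked_fps) / stock_fps * 100
print(f"FPS lost to the memory downclock: {drop_pct:.1f}%")  # roughly 4.5%
```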

It is also worth noting that my system is running Windows 11 version 24H2 (build 26100.2605). In theory, 24H2 should perform worse, but in my environment it runs stably and without problems.

The core question arises here: is it really possible to achieve such a low score with a 285K + RTX 4090 configuration? Were the benchmarks intentionally tweaked to make the 9800X3D look better?

It seems unlikely that the RTX 4080 would outperform the more powerful RTX 4090 under the same conditions, as that would undermine the fundamental premise of GPU scaling. Given the upcoming RTX 5000 series, this implies the following strange scenario:

Attachments
  • 3993194_m.jpg
  • 3993198_m.jpg
  • 5p6og3w.jpeg
  • i0Fp93h.jpeg

I conducted performance measurements on Horizon Zero Dawn this time as well.

At first glance, the results seemed reasonable. The average frame rate was '224' with a 285K CPU and RTX 4080, while the reference system (9800X3D + RTX 4090) achieved '284.' Since the reference system uses an RTX 4090, it's natural for the scores to be higher considering the performance of the 9800X3D.
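For a rough sense of that gap, here is the ratio worked out, again using only the two averages quoted above:

```python
# Gap between the reference result and mine in Horizon Zero Dawn,
# using only the two averages quoted above.
my_avg_fps = 224.0         # 285K + RTX 4080, highest settings
reference_avg_fps = 284.0  # 9800X3D + RTX 4090 (TechSpot)

uplift = reference_avg_fps / my_avg_fps
print(f"Reference is {uplift:.2f}x my result (about {(uplift - 1) * 100:.0f}% higher)")
# -> roughly 1.27x, i.e. about 27% higher
```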

However, the issue again lies in the fact that the reference system uses an 'RTX 4090.'
Even in this game, the GPU performance 'balance' seems to be broken. I also ran my tests on the 'highest settings,' though if the preset had been adjusted slightly on their side, I would have no way to tell.

With a 285K CPU and an RTX 4090, the GPU performance should have scaled better, so the results don’t make much sense. Interestingly, given the similarities in the benchmark numbers, I can’t help but suspect that the RTX 4090 was deliberately underclocked or replaced with a lower-tier GPU during the 285K tests.

Intel's DDR5 configurations are listed as either 7200 or 8200 MT/s, so I ran my tests with DDR5-8000 CUDIMMs for comparison. I don't think such a small memory speed difference would cause significant performance deviations.

If there are users out there who 'trust' these results, I would argue that the results' credibility is questionable.

Another question arises: are the reference system's numbers based on the 'overall average' or the 'CPU average'?
If it’s the CPU average, the 285K + RTX 4080 setup is reported to achieve a score of '232'—which is higher than the 285K + RTX 4090 score.
This leads to an even more puzzling situation.

Looking at the specs, the RTX 4090's CUDA core count (16384) is roughly 1.7 times that of the RTX 4080 (9728).
In actual games and benchmarks, the RTX 4090 generally scores about 1.3 to 1.5 times as high as the RTX 4080.
If the scores scaled appropriately based on this ratio, then using a conservative multiplier of 1.3 on the overall average of '224' would yield an RTX 4090 score of close to 290.
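Here is that projection sketched out in Python. The 1.3 to 1.5 range is my own ballpark drawn from published reviews, not an official figure:

```python
# Project where an RTX 4090 "should" land if it scaled from my RTX 4080 result
# by the typical 1.3x-1.5x factor seen in reviews (my own ballpark, not official).
rtx4080_avg_fps = 224.0            # my 285K + RTX 4080 overall average
scaling_factors = (1.3, 1.4, 1.5)  # conservative / middle / optimistic

for factor in scaling_factors:
    projected = rtx4080_avg_fps * factor
    print(f"x{factor}: projected RTX 4090 average of about {projected:.0f} FPS")
# Even the conservative 1.3x case lands around 291 FPS, already above the
# 284 FPS posted by the 9800X3D + RTX 4090 reference system.
```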

This discrepancy doesn’t add up.
 

Attachments
  • タイトルなし3.jpg
  • タイトルなし2.jpg
  • タイトルなし.jpg