You can't extrapolate case-specific data from a single data point. You need at least a second data point to establish a difference and an inferred trajectory, along which you can estimate results for systems and use cases that weren't included in the base testing.

I don't see a bunch of people complaining that the wrong CPU or brand was handed the crown. To me this discussion is mostly about the lack of additional information around "and how much of that potential difference could I expect to realize for my actual use case?"
I must be pretty effing stupid, because I have no idea how to extrapolate from 1080p/4090/low results to 1440p/3080/high going just on paper. Is there a formula for that? I don't believe the answer is as simple as "none, because you're entirely GPU limited," nor as dramatic as the single-case charts might suggest. Where exactly it lands in between is not an equation I can solve for myself.
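The closest thing to a formula I've seen is the rough bottleneck approximation: your achieved frame rate is capped by whichever side is slower, fps ≈ min(CPU-limited fps, GPU-limited fps). Here's a minimal sketch of that idea, with made-up numbers just to show the shape of the estimate (none of these come from the review):

```python
# Rough bottleneck model: achieved fps is capped by whichever of the CPU or
# GPU is slower for a given game, resolution, and quality preset.
# All numbers here are made-up placeholders, not review data.

def estimate_fps(cpu_limited_fps: float, gpu_limited_fps: float) -> float:
    """Crude estimate: you can't exceed either the CPU or the GPU ceiling."""
    return min(cpu_limited_fps, gpu_limited_fps)

# Hypothetical example: a 1080p/low test shows the CPU can push ~240 fps,
# but a 3080 at 1440p/high might only manage ~110 fps in the same title.
cpu_ceiling = 240.0   # hypothetical CPU-limited result from a low-res test
gpu_ceiling = 110.0   # hypothetical GPU-limited result for your card/settings

print(estimate_fps(cpu_ceiling, gpu_ceiling))  # -> 110.0, fully GPU-bound here
```

The catch is that the GPU ceiling for your exact card, resolution, and settings is precisely the number a single 1080p/4090/low chart doesn't give you.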
Also, I feel important reviews like this get read not only by regulars who follow all this as a hobby, but also by the drop-in crowd who show up every few years when it's time to buy. They're not stupid, but they could use the context and background that regulars might take for granted.
That's the point being made when people say 1080p benchmarks on their own are useless to people wondering if they should upgrade. To be able to extrapolate 1440p and 4K performance, there needs to be testing at those resolutions using the same hardware and quality settings as the low-res test, so the performance drop-off can be seen. People can then compare the observed GPU-limited drop-off to their own situation and infer whether it would be better or worse for them. It will almost always be much worse, unless someone has the same system the test data came from, or is using the data in the future once newer, more powerful GPUs are available.
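Building on the same min() approximation above, here's a minimal sketch of that comparison, again with hypothetical numbers standing in for the multi-resolution results a fuller review would provide:

```python
# Sketch of the comparison described above. All figures are hypothetical
# placeholders for illustration, not measured results.
#
# Idea: the low-res run exposes each CPU's ceiling; the higher-res runs on the
# same test GPU show where the GPU limit starts to swallow that difference.
# If your own GPU is weaker than the test GPU, your ceiling sits even lower,
# so the realized CPU difference shrinks further.

test_results = {
    # (cpu, resolution): average fps on the review's test GPU (hypothetical)
    ("cpu_a", "1080p"): 240, ("cpu_a", "1440p"): 160, ("cpu_a", "4k"): 95,
    ("cpu_b", "1080p"): 200, ("cpu_b", "1440p"): 155, ("cpu_b", "4k"): 95,
}

def realized_gain(resolution: str, your_gpu_ceiling: float) -> float:
    """Estimated fps gain of cpu_a over cpu_b once your GPU limit is applied."""
    a = min(test_results[("cpu_a", resolution)], your_gpu_ceiling)
    b = min(test_results[("cpu_b", resolution)], your_gpu_ceiling)
    return a - b

print(realized_gain("1080p", your_gpu_ceiling=300))  # 40 fps: CPU-bound, full gap
print(realized_gain("1440p", your_gpu_ceiling=120))  # 0 fps: your GPU caps both CPUs
```

The low-res gap shows the CPUs' full difference; once your own GPU ceiling drops below both CPU ceilings, the realized gain collapses toward zero, which is exactly the "how much would I actually see?" question.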