Ryzen CPU prices fall as third-generation chips draw closer

9900K = 2700X

This is clearly wrong. Not in single-core, not in multi-core, not in applications, compute, etc.; no benchmarks back this claim. And it should be obvious:
https://cpu.userbenchmark.com/Compare/Intel-Core-i9-9900K-vs-AMD-Ryzen-7-2700X/4028vs3958

Or in more detail:
https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-8.html

And Techspot's analysis:
https://www.techspot.com/review/1730-intel-core-i9-9900k-core-i7-9700k/page6.html
"A lot of you are gamers and Intel has been touting the 9900K as the world's best gaming CPU, which it technically is"

"Then if we look at application performance it’s still hard to justify buying either of these new 8-core processors. For the most part, the 9700K is slower than the 2700X, while the 9900K is up to 30% faster when overclocked, so that’s pretty impressive, less so for the price of admission."

The 9900K may not be good value, and we sure hope the 3700X will make it even less so, but it is still the beast right now.
 

:facepalm:

Citing userbenchmark as a source. Enough said.

FYI no one here is saying the 9900K isn't a beast.
 

Over 81 million data sets from all the various users, providing a massive database of crowd-sourced data. Yep, enough said; large datasets are far more reliable than any subjective opinions.

Care to provide evidence where the aggregated results provided by Userbenchmark are clearly, without a doubt (modifiers/qualifiers/etc.), wrong? If anything is wrong, it is usually an individual's build, someone's broken components, or a messed-up software install.
 

Incorrectly gathered and inaccurate data.

Short single-run tests less than a minute long, with no clear testing methodology and no isolation of variables like background processes. Userbenchmark tests violate every rule a benchmark must follow to be considered accurate.


Care to provide evidence where the aggregated results provided by Userbenchmark are clearly, without a doubt (modifiers/qualifiers/etc.), wrong? If anything is wrong, it is usually an individual's build, someone's broken components, or a messed-up software install.

Userbenchmark is a synthetic benchmark that gives a points value based on its own system. It doesn't output raw performance numbers like FPS that can be compared to other tests. How in the heck does one compare 1308 multi-threaded points to another benchmark? You don't, because those points pertain specifically to the 10-second multi-threaded test in Userbenchmark only.

And you make a good point with your last sentence: there is zero control over the rest of a user's build. What is stopping me from benching a 2080 Ti with a G3220 and 4GB of RAM? Nothing.

In addition to that form of manipulation, what is stopping companies from simply setting lower fixed clock speeds of competitor's products and running userbenchmark? Or running something in the background? You'd have to be a fool to believe something like userbenchmark can't be manipulated when companies like Google and Amazon are having issues keeping fake reviews off.
 
Incorrectly gathered and inaccurate data.

Short single-run tests less than a minute long, with no clear testing methodology and no isolation of variables like background processes. Userbenchmark tests violate every rule a benchmark must follow to be considered accurate.

So like, how CPUs are actually used in the real world?

Userbenchmark is a synthetic benchmark that gives a points value based on its own system. It doesn't output raw performance numbers like FPS that can be compared to other tests. How in the heck does one compare 1308 multi-threaded points to another benchmark? You don't, because those points pertain specifically to the 10-second multi-threaded test in Userbenchmark only.

So, like, umm... Cinebench, another industry standard?

Granted, the boost clocks of modern hardware mean that longer benchmarks are better unless only peak performance is desired. Intel's chips peak boost for up to 8 seconds before TDP throttling, if following Intel's specifications (which they're not required to do). Some hold high boosts for 20 seconds, and will throttle fully after about 2 minutes. I do agree that longer tests are better.
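
To make the duration point concrete, here's a rough sketch (assumed, illustrative numbers based on the boost behavior described above: 5.0 GHz boost for 8 seconds, then a 4.7 GHz sustained clock) of how the average clock a benchmark sees depends on test length:

```python
def avg_clock(test_seconds, boost_s=8, boost_ghz=5.0, sustained_ghz=4.7):
    # Average clock over the test: boost phase first, sustained clocks after.
    boosted = min(test_seconds, boost_s)
    return (boosted * boost_ghz + (test_seconds - boosted) * sustained_ghz) / test_seconds

print(avg_clock(10))   # ~4.94 GHz: a 10 s test is almost all boost
print(avg_clock(120))  # ~4.72 GHz: a 2 min test mostly measures sustained clocks
```

A short test like Userbenchmark's sits near the first number; Cinebench and longer workloads land closer to the second.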

Games are patched frequently, so comparing FPS across different articles isn't really a good measure. Games behave differently from each other, even when using the same game engine. Every single benchmark (including games) uses "its own system". Calculating a million digits of pi isn't something the average user does each day, yet it's still used as a benchmark. The goal for any benchmark is to establish relative performance for consistent workloads. As long as the system is consistent in its scoring procedure, it's a good benchmark. In terms of relevance, a multitude of benchmarks should be used when comparing hardware, everyone knows this.

And you make a good point with you last sentence, there is zero control over the rest of a user's build. What is stopping me from benching a 2080 Ti with a G3220 and 4GB of RAM? Nothing.

In addition to that form of manipulation, what is stopping companies from simply setting lower fixed clock speeds of competitor's products and running userbenchmark? Or running something in the background? You'd have to be a fool to believe something like userbenchmark can't be manipulated when companies like Google and Amazon are having issues keeping fake reviews off.

Not impossible, but improbable. Most actual users wouldn't do silly pairings, they just want to see how their current rig performs. If anything, performance will likely be skewed to Intel's favor due to MCE and other motherboard defaults that run computers out of spec by default. It's how most users use their PCs...

As for one company submitting a bunch of fake test results, it's certainly possible, but I don't think most people consult Userbenchmark for buying advice; it's just a small piece of the puzzle. It's probably a better use of budget to bribe companies with a larger audience to review or promote their products. Is the average person more likely to buy something because of Steve's recommendation or because of a Userbenchmark score? For most people, I believe it's the former.
 
...
Incorrectly gathered and inaccurate data.
...

Show me actual evidence that this actually impacted the aggregated results. All I hear is a lot of hating and not one lick of actual demonstration of where it has gotten things completely wrong.

The whole point is getting the signal out of the noise. The law of averages works in favor of correct/accurate/repeatable/consistent results. 81 million data sets from benches will yield fairly reliable aggregate results. This kind of statistical sampling is both accurate and correct, plus it is quick and easy for everyone: less than 5 minutes for any one person.

There is no way any single site, however thorough, can spend even half a million minutes testing things, much less 81 million (by the way, 81 million minutes comes to 154 years). This is the power of distributed compute and a massive number of cores spread across the world; this is how you use moar cores.
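
The arithmetic in the parenthetical checks out, for what it's worth:

```python
minutes = 81_000_000
years = minutes / (60 * 24 * 365)  # minutes per year, ignoring leap years
print(round(years))  # 154
```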
 
So like, how CPUs are actually used in the real world?

Taking up a variable amount of the component you are testing will lead to worthless results. If you are using 20% of your CPU while running the benchmark it completely defeats the purpose of running it in the first place. You aren't gauging your PC's performance accurately, you are getting a false sense of your PC's performance. If you wanted to benchmark real world performance, you'd have to do so in a manner that is consistent. Running random background tasks, changing clock speeds, ram speed, and many other variables is not "real world" performance, it's ignorance.
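
The effect being described is easy to sketch (hypothetical numbers): in a fixed-length test, a background task stealing CPU time lowers the reported score by roughly the stolen fraction:

```python
def measured_score(true_score, background_cpu_fraction):
    # A fixed-time benchmark only gets the CPU cycles left over
    # after background tasks take their share.
    return true_score * (1 - background_cpu_fraction)

print(measured_score(1000, 0.0))  # 1000.0 on an idle system
print(measured_score(1000, 0.2))  # 800.0 with 20% stolen by, say, a sync job
```

Both numbers get submitted to the database as if they measured the same chip.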

So, like, umm... Cinebench, another industry standard?

No, nothing like Cinebench. The only trait they share is giving points as an output, everything else is different. Cinebench does not run for 5 seconds. Cinebench does not pretend to give you an overall reflection of performance (which it doesn't). Cinebench has been vetted over and over again and the testing method is clear. Not to mention, Cinebench does not take everyone's results and submit them online as if they are accurate scores of a person's PC parts. There is a reason we have reviewers, a majority of PC gamers haven't a clue how to properly test their parts.

The goal for any benchmark is to establish relative performance for consistent workloads. As long as the system is consistent in its scoring procedure, it's a good benchmark. In terms of relevance, a multitude of benchmarks should be used when comparing hardware, everyone knows this.

The methodology must also be consistent, and the tests need to be of sufficient length. If people are running Userbenchmark while Windows is taking 20% of the CPU syncing files on Dropbox, that is not an accurate score for that CPU, yet Userbenchmark will register it anyway. Have you not heard of the scientific method and how isolating variables gives accurate results?

Yes, many benchmarks should be used, but it's worthless to include ones like Userbenchmark for the many reasons I have already stated. It's less than pointless to include bad data.

Show me actual evidence that this actually impacted the aggregated results. All I hear is a lot of hating and not one lick of actual demonstration of where it has gotten things completely wrong.

The whole point is getting the signal out of the noise. The law of averages works in favor of correct/accurate/repeatable/consistent results. 81 million data sets from benches will yield fairly reliable aggregate results. This kind of statistical sampling is both accurate and correct, plus it is quick and easy for everyone: less than 5 minutes for any one person.

There is no way any single site, however thorough, can spend even half a million minutes testing things, much less 81 million (by the way, 81 million minutes comes to 154 years). This is the power of distributed compute and a massive number of cores spread across the world; this is how you use moar cores.

Are you suggesting that I submit evidence to defend against your claim which was submitted without evidence? :joy:

Waste of my time but I'll humor you.

https://cpu.userbenchmark.com/Compare/Intel-Core-i9-9900K-vs-AMD-Ryzen-7-2700X/4028vs3958

https://www.guru3d.com/articles-pages/intel-core-i9-9900k-processor-review,7.html

Single-threaded performance in the Guru3D Cinebench results has the 9900K only 7% ahead of the 2700X, a far cry from the 22% advantage Userbenchmark claims. Userbenchmark is the only one I know of that gives it such a large advantage in single thread, contrary to every other review on the internet. When the average is 12%, Userbenchmark's result would be removed as an outlier by most reviewers, as it clearly isn't accurate.

https://www.techspot.com/review/1730-intel-core-i9-9900k-core-i7-9700k/

Comparing the multi-threaded performance that userbench got vs the multi-thread performance that techspot achieved.

Userbenchmark has the 9900K 16% above the 2700X at stock. Techspot has the 9900K only 12% ahead, a difference of 4%, which is outside the margin of error (3%).

Guru3D has that same 12% as TechSpot

https://www.guru3d.com/articles-pages/intel-core-i9-9900k-processor-review,7.html

What I also don't get is how the 9900K has a faster "Quad Core" speed than multi-core. The 9900K doesn't boost all four cores to max; it boosts one core to max speed and then steps down as more cores engage. The 2700X does the same, only it's more aggressive and is able to maintain higher all-core clocks. Both CPUs should retain relatively the same performance difference between quad-core and all-core performance, yet Userbenchmark shows a whopping 8% difference between them.

From what I'm seeing, the results on Userbenchmark can be very inaccurate and are commonly outside the margin of error of what most reviewers obtained. To be expected, given the lack of restrictions on results. Garbage in, garbage out.
 
Taking up a variable amount of the component you are testing will lead to worthless results. If you are using 20% of your CPU while running the benchmark it completely defeats the purpose of running it in the first place. You aren't gauging your PC's performance accurately, you are getting a false sense of your PC's performance. If you wanted to benchmark real world performance, you'd have to do so in a manner that is consistent. Running random background tasks, changing clock speeds, ram speed, and many other variables is not "real world" performance, it's ignorance.

No, nothing like Cinebench. The only trait they share is giving points as an output, everything else is different. Cinebench does not run for 5 seconds. Cinebench does not pretend to give you an overall reflection of performance (which it doesn't). Cinebench has been vetted over and over again and the testing method is clear. Not to mention, Cinebench does not take everyone's results and submit them online as if they are accurate scores of a person's PC parts. There is a reason we have reviewers, a majority of PC gamers haven't a clue how to properly test their parts.

The methodology must also be consistent, and the tests need to be of sufficient length. If people are running Userbenchmark while Windows is taking 20% of the CPU syncing files on Dropbox, that is not an accurate score for that CPU, yet Userbenchmark will register it anyway. Have you not heard of the scientific method and how isolating variables gives accurate results?

Yes, many benchmarks should be used, but it's worthless to include ones like Userbenchmark for the many reasons I have already stated. It's less than pointless to include bad data.

All data has bias and variance. Bias in this case is the methodology of testing (crunching digits of pi vs rendering images/3D). Variance is the differences between the systems (RAM speed, background processes, etc.). With enough samples, you can still generalize about performance.
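
A quick Monte Carlo sketch of that bias/variance distinction (all numbers hypothetical): averaging many samples shrinks the variance of the estimate, but any methodological bias survives intact:

```python
import random
import statistics

random.seed(0)
TRUE_PERF = 100.0  # hypothetical "true" performance
BIAS = -5.0        # methodology bias: survives any amount of averaging

def one_run():
    # Per-system variance: RAM speed, background processes, etc.
    return TRUE_PERF + BIAS + random.gauss(0, 10)

mean_10 = statistics.mean(one_run() for _ in range(10))
mean_100k = statistics.mean(one_run() for _ in range(100_000))
print(mean_10)    # noisy: can land several points away from 95
print(mean_100k)  # converges near 95.0, not 100.0
```

More samples get you a tighter estimate of the biased value, which is exactly why "with enough samples you can generalize" and "the methodology still matters" are both true.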

If the average Ryzen setup doesn't load XMP profiles but the Intel one does, it's indicative of how most users will use the system.

The only real problem I have is with the short duration. The way modern chips boost, the entire benchmark can be completed during the boost phase, which is representative only of peak performance. Cinebench is influenced by boost clocks as well, as stated in the article you linked.

Isolation is great, but there's still bias (compared to real-world results) in reviewers' results. One instance was seen when the 8700K released: there was a whole debate about who was using MCE (on by default with a whole lot of motherboards) and who wasn't, with results varying by up to 14%. Reviewers are better due to reduced variance (reducing error), but that doesn't invalidate Userbenchmark.

If we were talking about PassMark, I'd be much more hesitant. I've seen benchmarks that seemed to place the A10 series too high compared to the 2200GE. It was consistent, but the results and rankings weren't relevant/representative of other workloads. I don't remember if it was on the CPU or GPU front, though.

Are you suggesting that I submit evidence to defend against your claim which was submitted without evidence? :joy:

Waste of my time but I'll humor you.

https://cpu.userbenchmark.com/Compare/Intel-Core-i9-9900K-vs-AMD-Ryzen-7-2700X/4028vs3958

https://www.guru3d.com/articles-pages/intel-core-i9-9900k-processor-review,7.html

Single-threaded performance in the Guru3D Cinebench results has the 9900K only 7% ahead of the 2700X, a far cry from the 22% advantage Userbenchmark claims. Userbenchmark is the only one I know of that gives it such a large advantage in single thread, contrary to every other review on the internet. When the average is 12%, Userbenchmark's result would be removed as an outlier by most reviewers, as it clearly isn't accurate.

https://www.techspot.com/review/1730-intel-core-i9-9900k-core-i7-9700k/

Comparing the multi-threaded performance that userbench got vs the multi-thread performance that techspot achieved.

Userbenchmark has the 9900K 16% above the 2700X at stock. Techspot has the 9900K only 12% ahead, a difference of 4%, which is outside the margin of error (3%).

Guru3D has that same 12% as TechSpot

https://www.guru3d.com/articles-pages/intel-core-i9-9900k-processor-review,7.html

What I also don't get is how the 9900K has a faster "Quad Core" speed than multi-core. The 9900K doesn't boost all four cores to max; it boosts one core to max speed and then steps down as more cores engage. The 2700X does the same, only it's more aggressive and is able to maintain higher all-core clocks. Both CPUs should retain relatively the same performance difference between quad-core and all-core performance, yet Userbenchmark shows a whopping 8% difference between them.

From what I'm seeing, the results on Userbenchmark can be very inaccurate and are commonly outside the margin of error of what most reviewers obtained. To be expected, given the lack of restrictions on results. Garbage in, garbage out.

Maybe I'm reading the wrong chart on Guru3D. 216 for the i9 9900K vs 180 for the R7 2700X gives a performance advantage of 20%. As for the quad-core percentage seeming high, I'm guessing it's MCE. Supposedly the 9900K drops to 4.8 GHz with 4 threads loaded. Precision Boost 2.0 (on the 2700X) is supposed to drop from 4.3 to 4.2. If Intel can sustain 5 GHz on MCE and Ryzen drops to 4.2, it's a 19% clock speed difference. From there, it's a slight difference in IPC (which I often see quoted as 3% when clocks are equal, though it varies widely by benchmark). If MCE weren't a thing, it would be 4.8 (Intel) / 4.2 (AMD) * 1.03 (IPC), meaning ~17%, much closer to the multi-core score.
https://www.anandtech.com/show/13591/the-intel-core-i9-9900k-at-95w-fixing-the-power-for-sff
https://images.anandtech.com/doci/12625/2nd Gen AMD Ryzen Desktop Processor-page-022.jpg

The 3% IPC figure comes from Cinebench with clocks equalized: 174 single-thread on the 8700K vs 168 on the 2700X.
https://www.techspot.com/article/1616-4ghz-ryzen-2nd-gen-vs-core-8th-gen/

Most motherboards enable MCE by default. When they were investigating the performance variance with the 9900K, there were only 2 of 10+ reviewers that had MCE off by default.
 
Show me actual evidence that this actually impacted the aggregated results. All I hear is a lot of hating and not one lick of actual demonstration of where it has gotten things completely wrong.

The whole point is getting the signal out of the noise. The law of averages works in favor of correct/accurate/repeatable/consistent results. 81 million data sets from benches will yield fairly reliable aggregate results. This kind of statistical sampling is both accurate and correct, plus it is quick and easy for everyone: less than 5 minutes for any one person.

There is no way any single site, however thorough, can spend even half a million minutes testing things, much less 81 million (by the way, 81 million minutes comes to 154 years). This is the power of distributed compute and a massive number of cores spread across the world; this is how you use moar cores.

A network, Tor for access to tons of different IPs (or a botnet), and spoofed data (so it doesn't have to run a full test); but it's all much more of a hassle than it's worth, imo. The site also shows distribution curves for benchmarks, so insidious tampering would be observable, but I never check that, and I doubt others do. If it's possible to spoof data, I'd imagine the main limitation is going to be the latency of refreshing IPs via the Tor network (I haven't used Tor, but I know it routes through three other locations). If your internet is decent, running a few PCs would drastically cut times. This is all assuming that Tor doesn't ban overactive IPs.

There's 227K benchmarks for the i9. There's 103K for the 2700X. If I wanted to drop the 9900K's score, I'd need to A) do so in a way that shifts the entire distribution so it doesn't look suspicious, and B) only be able to fudge down by about 10% maximum (the worst benchmarks show ~93% of stock performance). To bring the overall score down by 5% would require an additional 227K data points if I didn't care about the distribution and went with 90% performance; if one cares about the distribution, probably about 4x as many data points. Really, such tainting would have to start at processor launch to be feasible and less obvious. There's still the fact that it would produce a wider distribution too. It's a lot of effort compared to just paying off reviewers and using neat tricks like MCE and benchmarks/compilers that prefer your own brand.
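
The back-of-the-envelope math there holds up: with ~227K genuine results, pulling the average down by 5% using fakes at 90% of stock performance takes roughly another 227K submissions:

```python
real_n, real_mean = 227_000, 1.00   # genuine results, normalized stock score
fake_n, fake_score = 227_000, 0.90  # fakes at 90% of stock performance

# Weighted average of genuine and fake submissions
new_mean = (real_n * real_mean + fake_n * fake_score) / (real_n + fake_n)
print(round(new_mean, 4))  # 0.95: the aggregate drops by exactly 5%
```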

The preferred method would be compromising the database. Faster and cheaper if successful.
 
All data has bias and variance. Bias in this case is the methodology of testing (crunching digits of pi vs rendering images/3D). Variance is the differences between the systems (RAM speed, background processes, etc.). With enough samples, you can still generalize about performance.

If the average Ryzen setup doesn't load XMP profiles but the Intel one does, it's indicative of how most users will use the system.

The only real problem I have is with the short duration. The way modern chips boost, the entire benchmark can be completed during the boost phase, which is representative only of peak performance. Cinebench is influenced by boost clocks as well, as stated in the article you linked.

Isolation is great, but there's still bias (compared to real-world results) in reviewers' results. One instance was seen when the 8700K released: there was a whole debate about who was using MCE (on by default with a whole lot of motherboards) and who wasn't, with results varying by up to 14%. Reviewers are better due to reduced variance (reducing error), but that doesn't invalidate Userbenchmark.

If we were talking about PassMark, I'd be much more hesitant. I've seen benchmarks that seemed to place the A10 series too high compared to the 2200GE. It was consistent, but the results and rankings weren't relevant/representative of other workloads. I don't remember if it was on the CPU or GPU front, though.

:facepalm:

No, there's a difference between 3% variance the average reviewer has compared to god knows how much with userbenchmark. That's the last I'm saying on this topic. I don't need to play verbal pong with a person who doesn't understand the basics of benchmarking and why isolating variables is important.

Maybe I'm reading the wrong chart on Guru3D. 216 for the i9 9900K vs 180 for the R7 2700X gives a performance advantage of 20%.

You are reading it wrong. If it were 20%, the math would be 216 * 0.2 = 43.2. 43.2 (20% of 216) + 180 = 223.2. The 9900K only scored 216.

My original number of 7% was incorrect as well, it's actually 16% rounded. Still a very significant difference from the numbers userbenchmark is giving.
 
You are reading it wrong. If it were 20%, the math would be 216 * 0.2 = 43.2. 43.2 (20% of 216) + 180 = 223.2. The 9900K only scored 216.

My original number of 7% was incorrect as well, it's actually 16% rounded. Still a very significant difference from the numbers userbenchmark is giving.

Increase or decrease / original. In this case, it's using the weaker chip as the baseline original. So it's 180 (original) + 180*.2 (increase in performance relative to original) = 216. The I9 is 20% faster than the 2700X.
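
The disagreement above is just about which number serves as the baseline. A tiny worked example with the Guru3D single-thread scores from earlier in the thread:

```python
i9, r7 = 216, 180

# Percent difference relative to the slower chip (the usual convention):
print(round((i9 - r7) / r7 * 100))  # 20 -> "the i9 is 20% faster than the R7"

# Using the faster chip as the baseline instead gives a smaller figure,
# which is where the 216 * 0.2 calculation went astray:
print(round((i9 - r7) / i9 * 100))  # 17 -> "the R7 is ~17% slower than the i9"
```

Same two scores, two different percentages; both are "correct" for their own baseline, which is why stating the baseline matters.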

I9 2044 vs R7 1828
2044 / 1828 = 11%
The i9 is 11% faster than the R7 according to the Guru3D article. Userbenchmark says 16% difference. I'd blame MCE again.

The first article puts MCE at 4.7ghz for 8 cores (assuming cooling can keep up). The base clock of 4.3 ghz on the R7 is a bit misleading. It's really the single core boost clock from what I understand with XFR taking it further. Most people get ~3.9-4ghz (4.1 according to AMD's slides) when the CPU is fully stressed. Accounting for a 3.5% difference in IPC, it's 4.7/4.1*1.035 = 18%. That's best case scenario for Intel. That doesn't account for thermal throttling on either as well as the 20% of mobos that don't enable MCE by default. It also doesn't account for manual overclocking.

16% doesn't seem absurd to me. It's just that Intel is being presented in the best light possible due to MCE, which some reviewers disable.
 