"5% gains for double the price to go the Intel way. You, sir, are crazy."

This is just not true.
My 9900K runs at 5.2 GHz and has been doing so since late 2018.
I can say for sure that the difference is way more than 5% when you're not GPU bound. I've had a Ryzen 3700X testbench, which had a 2700X before it, both using 3200/C14 memory; they ran 4.2 GHz (2700X) and 4.3 GHz (3700X) all-core. I'm a 1440p/165Hz user and I aim for 120 fps minimum in all games. Ryzen simply makes this much harder to obtain. The 3000 series does way better than 1000/2000, though.

I don't think the 4000 series is going to bring much, either. Going from 12nm GloFo to 7nm TSMC was the "big jump": you saw 100% more cores and slightly better clockspeeds. You won't see more cores on the 4000 series, and I don't think clockspeeds will increase much - just look at the XT models, which, binned or not, max out at pretty much the same clockspeed.

Intel has an edge in gaming thanks to the ringbus. Intel consumer chips beat Intel HEDT chips in gaming because of this, too. A 6-10 core chip using a ring bus with high clockspeeds is simply as good as it gets for gaming, emulation and most programs.
In some games the difference is more like 10-25% when you're using a high-end GPU and aiming for high fps. Every single fps matters when you're trying to max out a high refresh rate monitor.
The problem with AMD is that SOME games simply run badly. Some run decently, but SOMETIMES you'll end up playing a game where Intel is simply miles better.
It's the same with AMD GPUs. In some games the performance simply isn't there, and Nvidia performs much better in comparison - especially in lesser-known titles or early access titles. Compare Bannerlord, for example, on an Intel/Nvidia rig vs an AMD/AMD rig. You'll see what I mean. Night and day difference.
I also use my PC for emulation, and I can safely say that Intel performs much better in most (or.. pretty much all) emulators. Many of these still rely on single-thread perf and/or demand high clockspeeds on 2-4 cores, and this won't change anytime soon. In CEMU, emulating Zelda BOTW, my 9900K beat the 3700X by like 75-125% depending on the area. The 3700X barely held 60 fps with dips to the low 30s at times; the 9900K did 120-160 fps most of the time and barely dipped below 80 - the lowest minimum fps was like 75. API didn't matter. Tweaking didn't matter.
Besides, Intel is not double the price. An 8700K from 2017 with an OC is going to beat a 3950X with an OC in 99% of games. The i5-10600K beats the 3950X in pretty much all games and is much cheaper. It will also beat the 3700X while costing less - same price if you need a cooler (and then you can OC to ~5 GHz all-core and beat it by even more).
Ryzen is good value, and I use it in my server now, but I'm so tired of seeing people claim that the two are on par in games, or "only a few percent slower", because they are not - unless you are a 100% GPU-bound gamer using 60 Hz, maybe. Then the CPU won't matter much. Obviously.
For high refresh rate gamers, CPU brand still matters. I hope Ryzen 4000 will catch up, but Intel's new arch is coming too, with an IPC increase. Even if Rocket Lake "only" gets 8C/16T with 25% improved IPC, AMD is in for a hard time when it comes to gaming, emulation and the "regular" workloads that most people actually do at home.
Don't get me wrong, I love that AMD is competitive again. This is just my own experience, though.
The consumer market is a small market, really, in terms of money. AMD is way more competitive in HEDT/enterprise, where the money actually is. So this is good.
Consumers mostly care about bang for buck or top-end performance for gaming, and Intel is still competitive here. Intel is in much bigger trouble when it comes to HEDT and enterprise.