Intel Core i9-11900K Review: Not a Great Flagship CPU

Well, since I put together a new system over a year ago with a 12-core 3900X, I can't justify running out and buying an i7-11700K now (rather than the i9-11900K, since it seems to be almost as good at a significant savings).
But locally, my computer store has plenty of those in stock, whereas the Ryzen 5000 processors from AMD are just not available. So if someone needed to build a new system right now, they might not have a choice.
Given how this chip does when benchmarked with y-cruncher, though, it may not be the flop that everyone is painting it as. Games do use a lot of floating-point, and they're going to be re-written to take advantage of AVX-512, and then this chip is going to look like it has almost twice the performance that it appears to have now.
So I think the time of Intel building "genuinely good stuff" has already arrived, despite the fact that this is yet another 14nm+++... chip!
I still think that people should buy AMD, but AMD has to do its part. Everyone was expecting that the Ryzen 3000 chips would be up against 10nm chips with AVX-512 from Intel, and against that kind of competition, AMD's 3000 series would have been in a similar position to the 1000 and 2000 series - good enough to be interesting, but still arguably well behind Intel.
It's only because of Intel's 10nm debacle - and thus because of sheer luck - that instead the 3000 was close enough to Intel for there to be no significant difference, and the 5000 was clearly ahead of Intel.
Given the competition AMD was anticipating from Intel, if it intended to be competitive, it should have included AVX-512 with the 3000 series (if that was feasible; maybe the cost of that in terms of die size and thermals would have been ruinous, and if so, of course they made the right decision to exclude it). Remember, both the 1000 and 2000 series, despite big improvements from the Bulldozer years, were still behind Intel in vector width.
So this fall, or next year, or whenever the Ryzen 6000 chips come out, they had better have AVX-512 in them as well. You can't expect people to buy AMD just for the sake of preserving competition if AMD doesn't have a competitive product.
I think that AVX-512 is a make-or-break feature. I could be wrong, since not many people agree with this. And the Ryzen 6000 chips may be too late in their design cycle anyways, although I really don't think AMD should have needed the i9-11900K to tell them to do what they knew (or should have known) they needed to do back when the Ryzen 3000 came out.
It's going to take time to update software to use AVX-512 anyways.
But if the Ryzen 7000 chips in late 2022 or early 2023 don't have AVX-512, that will be, in my opinion, inexcusable.
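Tangent for anyone curious what "updating software to use AVX-512" actually looks like in practice: below is a minimal, self-contained sketch (my own toy example, nothing from the article) of the same multiply-add loop written as plain scalar code and as AVX-512 intrinsics that chew through 16 floats per instruction. The function names are made up, and the build line assumes GCC or Clang with -mavx512f; it will only run on a CPU that actually has AVX-512.

```cpp
// Toy illustration of what "rewriting for AVX-512" means: the same
// a[i] = a[i] * s + b[i] loop, scalar vs. 16-wide AVX-512.
// Build (assumption): g++ -O2 -mavx512f avx512_demo.cpp
#include <immintrin.h>
#include <cstddef>
#include <cstdio>

// Plain scalar version: one float per iteration.
void axpy_scalar(float* a, const float* b, float s, size_t n) {
    for (size_t i = 0; i < n; ++i)
        a[i] = a[i] * s + b[i];
}

// AVX-512 version: 16 floats per iteration with a fused multiply-add,
// plus a masked tail so n doesn't have to be a multiple of 16.
void axpy_avx512(float* a, const float* b, float s, size_t n) {
    const __m512 vs = _mm512_set1_ps(s);
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(a + i, _mm512_fmadd_ps(va, vs, vb));
    }
    if (i < n) {                          // remainder, handled with a mask
        __mmask16 m = (__mmask16)((1u << (n - i)) - 1u);
        __m512 va = _mm512_maskz_loadu_ps(m, a + i);
        __m512 vb = _mm512_maskz_loadu_ps(m, b + i);
        _mm512_mask_storeu_ps(a + i, m, _mm512_fmadd_ps(va, vs, vb));
    }
}

int main() {
    float a[37], b[37], c[37];
    for (int i = 0; i < 37; ++i) { a[i] = c[i] = float(i); b[i] = 1.0f; }
    axpy_scalar(c, b, 2.0f, 37);
    axpy_avx512(a, b, 2.0f, 37);
    // Both should print 73.0 (36 * 2 + 1).
    std::printf("scalar a[36] = %.1f, avx512 a[36] = %.1f\n", c[36], a[36]);
}
```

The point isn't this particular loop; it's that every hot loop in a game or library needs this kind of treatment (or a compiler and runtime that do it for you) before AVX-512 shows up in benchmarks of real software.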
Games do use floating point, but AVX is, as the name says, vector instructions. And GPUs (in the same time frame) are or will be much better at vector instructions than AVX is or will be.
To put it very shortly and very oversimplified: an AVX-512 unit is a low-latency version of a GPU's vector calculation units. Since games don't exactly have a problem with high vector-calculation latency, it's very unlikely that AVX-512 will ever get much support in games. AVX-512 has its uses in cases where moving the data off the CPU and back takes so much time that it's faster to just do the calculation on the CPU.
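To put rough numbers on that trade-off, here's a back-of-envelope sketch. Every figure in it is an assumption picked for illustration (PCIe throughput, GPU and CPU FLOP rates, how much math is done per byte), not a measurement: for a "light" kernel, the round trip over the bus costs far more than just doing the math with AVX-512 on data that is already in RAM.

```cpp
// Back-of-envelope sketch of "ship it to the GPU or keep it on the CPU?".
// All numbers below are assumptions for illustration, not measurements.
#include <cstdio>

int main() {
    const double bytes            = 64.0 * 1024 * 1024; // 64 MiB of input data
    const double pcie_bytes_per_s = 12e9;   // roughly PCIe 3.0 x16, effective
    const double gpu_flops        = 10e12;  // ~10 TFLOP/s FP32 consumer GPU
    const double cpu_flops        = 1e12;   // ~1 TFLOP/s FP32 with AVX-512
    const double flops_per_byte   = 2.0;    // a fairly "light" kernel

    const double work  = bytes * flops_per_byte;
    const double t_gpu = 2.0 * (bytes / pcie_bytes_per_s)  // copy over and back
                       + work / gpu_flops;                 // plus GPU compute
    const double t_cpu = work / cpu_flops;                 // data already in RAM

    std::printf("GPU incl. transfers: %6.2f ms\n", t_gpu * 1e3);  // ~11.2 ms
    std::printf("CPU with AVX-512:    %6.2f ms\n", t_cpu * 1e3);  // ~0.13 ms
}
```

Crank flops_per_byte up a couple of orders of magnitude and the GPU side starts winning, which is basically the whole argument in one knob.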

Since AMD would rather have vector calculations done on the GPU than on the CPU, it's not very surprising that AMD is reluctant to support AVX-512. I expect AMD will support it eventually, but as we already know, Intel's own support for AVX-512 is somewhat, yeah, well...

[Attached image: luTt7Zf.png (a Venn diagram of Intel's fragmented AVX-512 support)]


Expect more to come.
 
Since AMD would rather have vector calculations done on the GPU than on the CPU, it's not very surprising that AMD is reluctant to support AVX-512. I expect AMD will support it eventually, but as we already know, Intel's own support for AVX-512 is somewhat, yeah, well...

Yes, Intel is exploring new frontiers in Venn diagrams.
Different CPUs do support different levels of vector extensions. But when it comes to graphics cards, as they're a component separate from the CPU, what is available is going to be quite variable, although I suppose one just has to support both AMD and Nvidia, and there's OpenCL as well as OpenGL.
Still, generally the FP64 support of consumer graphics cards is quite limited, and as you note, it takes time to send data to, and get data from, a graphics card. So better capabilities on the CPU have their place.
In fact, what I'm hoping to see from the industry someday is a consumer product something like the SX-Aurora TSUBASA to provide really hefty floating-point power at the CPU end.
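On the "different CPUs support different levels of vector extensions" point: in practice that means anyone shipping a single binary ends up writing runtime dispatch along the lines of the sketch below (GCC/Clang builtins; the kernel functions are just hypothetical stand-ins), which is part of why each new extension takes so long to show up in shipping software.

```cpp
// Sketch of the runtime dispatch that the patchwork of x86 vector
// extensions forces on anyone shipping one binary. The kernels here are
// hypothetical placeholders; only the feature checks are real builtins.
#include <cstdio>

static void kernel_scalar()  { std::puts("using plain scalar code"); }
static void kernel_avx2()    { std::puts("using the AVX2 path"); }
static void kernel_avx512()  { std::puts("using the AVX-512 path"); }

int main() {
    __builtin_cpu_init();  // harmless here; only strictly needed in very early code
    if (__builtin_cpu_supports("avx512f"))
        kernel_avx512();
    else if (__builtin_cpu_supports("avx2"))
        kernel_avx2();
    else
        kernel_scalar();
}
```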
 
The punchy title of this article is a distinct improvement over the mealy-mouthed leader on the Hardware Unboxed video.

Steve Walton's not mealy mouthed. He's an excellent writer and speaker. If you don't like him, then go elsewhere.
 
Steve Walton's not mealy mouthed. He's an excellent writer and speaker. If you don't like him, then go elsewhere.
Guess I needed the /s ... for HU Patrons (as I am) who might read a little too quickly.
 