Intel's decision to disable AVX-512 on certain platforms was indeed more about architectural consistency and design complexity than purely power consumption. On hybrid parts, the P-cores supported AVX-512 but the E-cores did not, and having different instruction sets on the same package complicates everything from scheduling to cache coherency. Disabling AVX-512 across the board simplifies validation, reduces performance inconsistencies, and avoids support headaches.
Regarding Zen5 and AVX-512:
You're absolutely right that Zen5 changes the game. AMD's AVX-512 implementation is much more efficient than the early Intel ones (Skylake-X era), which were notorious for downclocking under AVX-512 load. Those older experiences may have scared off developers, but Zen5 offers a far better balance between performance gains and power consumption.
It's a bit unfair, though, to call developers "stupid and lazy". Many developers simply haven't had a good reason to adopt AVX-512 yet, because most consumer-level applications don't demand it. This could change with Zen5, especially for workloads where AVX-512 shines: scientific computing, video processing, AI inference, and cryptography.
The real opportunity here is to educate and encourage developers to revisit AVX-512, especially now that it’s practical and more consistent across platforms like Zen5. Once tooling, libraries, and compilers catch up and make it easier to adopt, we'll likely see much broader usage.
Encouragement is key. Will it happen? Time will tell; for most consumer applications, probably not.