I feel like I'm arguing with a character in an early Cronenberg film. There is such a hard limit, enforced by the PWM, which doesn't require calculation and thus is never exceeded.
There are other, lower limits, however, based on which portions of the chip are active and what load(s) they're running. Unavoidably, a dynamic response like this *does* require a calculation. You can argue that Intel shouldn't design chips this way, but then you're simply arguing for a return to the fixed-power, fixed-clock-rate CPUs of 30 years ago. No thanks. And it isn't just Intel taking this approach; AMD and every other CPU designer does the same.
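To make the distinction concrete, here's a purely hypothetical toy model of a dynamic limit; none of the names or figures come from Intel, they just illustrate why the effective budget has to be *computed* from runtime state rather than being a single fixed ceiling:

```python
# Hypothetical toy model -- made-up coefficients, not Intel's actual algorithm.
def dynamic_power_limit_w(active_p_cores: int, active_e_cores: int,
                          avx_active: bool, package_temp_c: float) -> float:
    base = 35.0                               # idle/uncore budget (invented figure)
    budget = base + active_p_cores * 18.0 + active_e_cores * 6.0
    if avx_active:
        budget *= 0.85                        # derate for heavy vector loads
    if package_temp_c > 85.0:
        budget *= 0.90                        # less headroom when running hot
    return min(budget, 253.0)                 # but never above the fixed package cap

print(dynamic_power_limit_w(8, 16, avx_active=True, package_temp_c=90.0))  # ~210 W
```

The fixed cap in the last line is the part a regulator can enforce blindly; everything above it only exists as the result of a calculation.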
You finally understood that there are limits, and now you say those limits are not enforced because they require calculation. Again, the calculation doesn't matter if there is a limit. No matter what result the calculation gives, if that result cannot exceed a certain value, it doesn't really matter.
Now you're just being silly. Intel's microcode is thousands of lines of code with hundreds of execution paths -- and each single path also depends on hundreds of register values and other state. A single 64-bit register alone can hold 18 million million million values; the total number of possible scenarios is larger than the number of atoms in the observable universe.
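Some back-of-the-envelope arithmetic, purely to illustrate the scale (the five-register figure is an arbitrary assumption, not a real microcode number):

```python
# Scale check with illustrative numbers only.
values_per_reg = 2 ** 64            # ~1.8e19 distinct values in one 64-bit register
atoms_in_universe = 10 ** 80        # commonly cited rough estimate

# Assume, just for illustration, that a path's behaviour depends on the
# combined state of only 5 such registers:
combined_states = values_per_reg ** 5

print(f"{values_per_reg:.2e} values in a single register")      # ~1.84e+19
print(f"{combined_states:.2e} combined states")                  # ~2.14e+96
print(combined_states > atoms_in_universe)                       # True
```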
We are talking about voltages here, not arbitrary code execution. What do we actually need to consider when deciding how much voltage is needed?
- CPU clock speed, i.e. the multiplier. That gives at most 60 values.
- Temperature, say in 1-degree intervals. That gives at most 100 values.
- CPU load, say in 1-percent intervals. That gives at most 100 values.
- Consider these for at most 8 P-cores and 4 clusters of 4 cores; that multiplies the count, but it still doesn't get very big.
- Something else, like amperage.
We are nowhere near even 32-bit integer territory here. Every possible scenario can easily be calculated because there simply aren't that many possibilities, as the rough sketch below shows. And once every calculation has been done, it's pretty much impossible for any scenario to produce too high a value.
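A sketch of what such an exhaustive sweep could look like. The bucket sizes follow the list above, and requested_voltage() is a made-up placeholder for whatever the microcode actually computes, so treat this as an illustration of the count, not of Intel's real V/F tables:

```python
from itertools import product

# Bucket sizes taken from the list above (assumptions, not Intel's real tables).
MULTIPLIERS = range(8, 68)       # ~60 possible clock multipliers
TEMPS_C     = range(0, 100)      # 1-degree steps
LOADS_PCT   = range(0, 101)      # 1-percent steps
CORE_GROUPS = range(1, 13)       # 8 P-cores + 4 E-core clusters, 12 "groups"

VOLTAGE_CEILING = 1.55           # hypothetical safe limit, in volts

def requested_voltage(mult, temp, load, group):
    """Placeholder V/F model standing in for the real microcode calculation."""
    return 0.60 + mult * 0.012 + temp * 0.0004 + load * 0.0008

total = violations = 0
for mult, temp, load, group in product(MULTIPLIERS, TEMPS_C, LOADS_PCT, CORE_GROUPS):
    total += 1
    if requested_voltage(mult, temp, load, group) > VOLTAGE_CEILING:
        violations += 1

print(f"{total:,} scenarios checked, {violations:,} exceed the ceiling")
# 7,272,000 scenarios -- trivial to sweep exhaustively on any modern machine
```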
To put it another way, how did Intel figure out there was a bug? They had to, what, calculate a huge number of values to notice that some calculation gives too big a value? That's about the only way to notice it.
The mere fact that countless millions of these CPUs run code day in and day out, for months and years on end without experiencing the problem is definitive proof it's not a scenario that crops up often. Unless, of course, you're running the Unreal Engine.
Right. Now you are claiming some CPUs don't break because they never experience this "rare scenario"? In other words, if this scenario happens even once, the CPU is broken? Those whose CPUs haven't failed yet simply have never experienced this "rare" scenario, EVER? That's just BS.
Again, some CPU units can handle a certain voltage while other units cannot. It's very evident that Intel thought all CPUs could cope with certain voltage limits, but now it's clear some cannot. Because Intel cannot admit the mistake (the limit was too high), they just call it a bug.
Intel has already lied about the Raptor Lake issues (claiming motherboard makers were the ones to blame), but now Intel admits it's indeed their own fault. Because Intel has already lied, only *****s consider Intel trustworthy on this issue anymore.