Most power supplies won't support Haswell's C6/C7 low-power states

Shawn Knight

Staff member

Intel’s Haswell processor is just around the corner, and if you are planning to pick up some new silicon, you’ll likely want to keep reading. According to a report from VR-Zone, a number of power supplies currently on the market don’t support Haswell’s C6/C7 low-power states because they are unable to deliver less than 0.05 amps on the 12V2 rail.

According to an Intel document viewed by the gang at The Tech Report, Haswell’s C6/C7 states require a minimum load of 0.05 amps on the 12V2 line. Older or budget power supplies simply aren’t capable of delivering such a small amount of juice. As a result, users with incompatible power supplies may experience stability problems or, worse, the system may shut down completely if the PSU’s under- or over-voltage protection kicks in.
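For scale, the 0.05 A floor works out to a tiny amount of power. A quick back-of-the-envelope check using the figures quoted above:

```python
# Minimum load the C6/C7 states can present on the 12V2 (CPU) rail,
# using the numbers from the report: P = V * I.
rail_voltage = 12.0  # volts on the 12V2 rail
min_current = 0.05   # amps, the floor many PSUs reportedly can't go under

min_power_watts = rail_voltage * min_current
print(f"{min_power_watts:.1f} W")  # prints "0.6 W"
```

In other words, some supplies become unstable when the CPU asks for less than about 0.6 W from its dedicated rail.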

The problem will be difficult to diagnose, as most power supply makers don’t advertise a unit’s 12V2 rail specifications. Intel’s Reseller Center website does have a list of power supplies that can be sorted by minimum 12V2 load. At present, only 23 power supplies appear to be compatible with the new C6/C7 states: 19 from Corsair, three from InWin and one from Seasonic.

Corsair’s Robert Pearce said he expects most motherboard vendors to disable C6/C7 by default in the BIOS, as there are simply too many power supplies on the market that don’t support the power-saving feature. He further noted that Corsair is working to ensure all of its power supplies are C6/C7 compliant.


 
Wow. Only one from Seasonic. I thought Corsair bought from Seasonic. Why do low-power states even matter for high-end PSUs? It isn't like gamers are gonna be letting their CPUs go into low-power states lol.
 
Well, hopefully future budget models will support the specification, and for less demanding tasks everyone appreciates a lower electricity bill. Like some radical new features, only a few products (or none) support it at launch. Think about the migration from real-mode memory access to protected mode introduced by the Intel 286: no OS or program was designed to make use of it at launch, and now we can't imagine a program running in real mode on a PC.
 
Not sure what to think of this...

Honestly that's a strange thought; it's normally the opposite problem (not enough power), and now it's too much power. That's actually kind of funny and ironic in a way. Either way, I'm sure most of the mainstream ones are fine or will turn out to be fine; it's just those off-brand ones to worry about.
 
I don't get what the point of the low-power state for desktops is, though...
 

Yeah, I agree. In most cases on the upper end of the spectrum we want more power, and I doubt the effects of this are really important, because how often will the CPU be under low enough stress for this to matter? That's just my honest opinion; if I'm not using my PC for anything, it's off most of the time.
 
I think it is because Intel wants desktops to have batteries so they can be mobile (all-in-ones). But at the same time, how can you fit a full-size PSU in there lol.
 
Yeah, I agree. In most cases on the upper end of the spectrum we want more power, and I doubt the effects of this are really important, because how often will the CPU be under low enough stress for this to matter? That's just my honest opinion; if I'm not using my PC for anything, it's off most of the time.

No, we want more computing power, not more [electric] power consumption. Every iteration tries to reduce energy consumption while increasing computing power; that's why there is research into alternative technologies like analog, quantum, [you name it] processors and single-electronics (one electron in a very small parasitic capacitor sensed HIGH; none sensed LOW).

Unless you use something like solar energy or another alternative, you're most probably wasting fossil fuel and polluting indirectly. There are times of light work for the CPU, idle times (you go to the restroom), and brief periods (ticks) that put the CPU into a low-power state for milliseconds (even seconds on Linux, OS X and Windows 8), and the less power you draw in that low state, the better, if you have some ecological awareness.

Most of my classmates don't care about energy consumption while at college, because there they don't pay the electricity bill like at home (their parents do, and later reprimand them), but I try to care as much as I do at home. Even 50 mA is a relatively high current for a modern low-power state, and it still has to be lowered over time. There's no actual need to draw all the possible power all the time; at some point most people do very light tasks.
 
Well, there really isn't any point in reducing power consumption on the desktop end other than to reduce heat. I wanna see how Haswell turns out with the increased TDP, which will equate to more heat.
 
But then there is no need to separate the desktop and portable markets if CPUs can be optimized to power them both. Every generation stands a chance of hitting hurdles; this issue is no different than any other hurdle.
 
True. It is just more economical also.
 
No, we want more computing power, not more [electric] power consumption. Every iteration tries to reduce energy consumption while increasing computing power; that's why there is research into alternative technologies like analog, quantum, [you name it] processors and single-electronics (one electron in a very small parasitic capacitor sensed HIGH; none sensed LOW).

Unless you use something like solar energy or another alternative, you're most probably wasting fossil fuel and polluting indirectly. There are times of light work for the CPU, idle times (you go to the restroom), and brief periods (ticks) that put the CPU into a low-power state for milliseconds (even seconds on Linux, OS X and Windows 8), and the less power you draw in that low state, the better, if you have some ecological awareness.

Most of my classmates don't care about energy consumption while at college, because there they don't pay the electricity bill like at home (their parents do, and later reprimand them), but I try to care as much as I do at home. Even 50 mA is a relatively high current for a modern low-power state, and it still has to be lowered over time. There's no actual need to draw all the possible power all the time; at some point most people do very light tasks.


Umm, less power usage is nice, yes; we essentially do want less power usage, but we want less under load, not at idle. The point of a low idle state fades, because why is the computer even on if it's drawing that little power? I don't feel the circumstances for this low power usage are going to be that helpful. I want lower power to begin with, but overall I will put more power into it to gain that overclock. The point is, I would think focusing on under-load power is more important than idle (yes, when not doing much on the computer it's nice not to burn lots of electricity, but this sounds like it's more for when the computer is sitting doing nothing).
 
Well, there really isn't any point in reducing power consumption on the desktop end other than to reduce heat. I wanna see how Haswell turns out with the increased TDP, which will equate to more heat.
Business, I guess. People leave work computers on... going to meetings, overnight, etc. For times when people aren't physically at the machine and it doesn't have idle-load tasks like being a web server, etc.
 
Umm, less power usage is nice, yes; we essentially do want less power usage, but we want less under load, not at idle. The point of a low idle state fades, because why is the computer even on if it's drawing that little power? I don't feel the circumstances for this low power usage are going to be that helpful. I want lower power to begin with, but overall I will put more power into it to gain that overclock. The point is, I would think focusing on under-load power is more important than idle (yes, when not doing much on the computer it's nice not to burn lots of electricity, but this sounds like it's more for when the computer is sitting doing nothing).

I totally agree and get your point, but by definition most systems have "ticks" (even the 'tickless' ones) that, without you noticing, take the CPU to a low-power state, not off; when scheduled, it wakes up and does the scheduled tasks, then low state, wake up, low state, etc. From milliseconds up to even a second on tickless systems, the CPU is in a low state. For load there are the other techniques, like shrinking the transistors, suspending cores when not all are needed, etc. I have a Core i7 [in a notebook] and not all [recent] games push it to its limit; when working, neither do most scenarios, except for deliberate stress tests or long iterative mathematical/physics computations, which make up a small proportion of the total time.
 
Dumb idea. Not using the energy isn't necessarily saving; the energy we don't use gets wasted in the transformer relays or sent to ground.

Now, if you want to save $$$, remove that floppy drive and that optical drive and use an SSD; also, removing those three extra HDDs from your PC can cut up to half of your energy bill (source: my own bills).
 
Implementing the same technology across all processors is not a dumb idea. Pointing a finger at power inefficiency elsewhere while making a negative statement against Intel for making their CPUs more efficient is two-faced.

Not everyone is going to have a power-hungry setup, and some have already done what you suggest. This doesn't mean we can't strive to reduce power usage elsewhere.
 
Good thing I always go with AMD. With Intel you always have to buy something new for your build.
 
If the point of a build is not buying something new, why would you be fussing about Intel's new CPU? It's a new build, so expect to buy new things!
 
Implementing the same technology across all processors is not a dumb idea. Pointing a finger at power inefficiency elsewhere while making a negative statement against Intel for making their CPUs more efficient is two-faced.

Not everyone is going to have a power-hungry setup, and some have already done what you suggest. This doesn't mean we can't strive to reduce power usage elsewhere.

So you say we should have the same technology, even though not everyone is going to have a low-power rig?

Get real. At my job we always use a dedicated PC to save space on backups (~25%); why would we want a Haswell CPU in it?

The point I made was that if you want to save energy there are far better ways than buying a new CPU; we have Atoms for power saving. Besides, a CPU at low usage doesn't really consume that much compared to other devices in the PC.

From my experience, the standard PC draws about 0.8 amps; my high-end PC uses ~2 at rest and ~3.4 while gaming, and my dedicated work PC uses about 1.9 with an i7 930 at full load. What people should do to save energy is stop buying OEM PCs that come with CPUs that are under constant load for simple tasks.

For example: a dual-core Celeron WILL use more energy than a 3rd-gen i7, because the Celeron WILL be under constant high load while the i7 will do the same work with its pinky.

I despise this kind of "green" technology that is supposed to save the environment, because most power plants work by boiling water and superheating the steam; water will always boil at 100 °C and the steam will always be superheated to the plant's standards, so the same amount of fuel needs to be used. And since a power plant can't store that energy, it either gets wasted in the city's ground relays or is sold to another state/city. In the end it's all about the money.
 
There's nothing to stop someone changing the C-state type in the BIOS though, right? Like to C4 or C2. So really not much of a problem.
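On Linux, a similar cap can be approximated without touching the BIOS: the kernel exposes idle states under /sys/devices/system/cpu/cpuN/cpuidle/state*/ with `name` and `disable` files. Below is a minimal sketch of picking which states to switch off; the sample state list is hypothetical, since actual names vary by driver and CPU.

```python
# Sketch: given cpuidle state names, pick the ones deeper than a chosen
# cap (e.g. C4) so they could be disabled by writing 1 to each state's
# sysfs "disable" file. The sample list below is hypothetical; real
# names come from /sys/devices/system/cpu/cpu0/cpuidle/state*/name.

def states_to_disable(state_names, deepest_allowed):
    """Return the C-states deeper than `deepest_allowed`."""
    def depth(name):
        # Extract the numeric part of names like "C1", "C1E", "C7s".
        digits = "".join(ch for ch in name[1:] if ch.isdigit())
        return int(digits) if digits else 0

    cap = depth(deepest_allowed)
    return [n for n in state_names if n.startswith("C") and depth(n) > cap]

names = ["POLL", "C1", "C1E", "C3", "C6", "C7s"]
print(states_to_disable(names, "C4"))  # prints "['C6', 'C7s']"
```

Capping at C4 this way would sidestep the 12V2 minimum-load issue on an incompatible PSU while keeping the shallower power-saving states.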
 
I never thought I would see someone fight so hard against advancement in CPU tech.

Cota, power is only generated if needed. If more power is needed than a plant can generate, another plant will be brought online. If power drops to a suitable level, plants will be taken offline. The load on the plant dictates how much power is produced; there is no waste from lack of usage. The power supply in your PC works the same way: increase the workload and it will supply more. The name of the game is to reduce load wherever possible, which is what you are complaining about here.

Comparing a Celeron (always based on older tech) to an i7 (newer tech)? OMG. I'm not even gonna go there; that's apples to oranges.
 
I find it interesting that for the 12V minimum load it says "Yes" for compliant units while the others are "N/A". Why not "No"? Or better yet, why not a numerical minimum current rating? I don't have a whole lot of confidence in using this tool to start a scare campaign on non-compliance, because it is missing some pertinent info.

P.S. Running all Corsair PSUs here, and it looks like I won't have a problem with the new states.
 
I never thought I would see someone fight so hard against advancement in CPU tech.

Cota, power is only generated if needed. If more power is needed than a plant can generate, another plant will be brought online. If power drops to a suitable level, plants will be taken offline. The load on the plant dictates how much power is produced; there is no waste from lack of usage. The power supply in your PC works the same way: increase the workload and it will supply more. The name of the game is to reduce load wherever possible, which is what you are complaining about here.

Comparing a Celeron (always based on older tech) to an i7 (newer tech)? OMG. I'm not even gonna go there; that's apples to oranges.

I get the whole being-a-good-steward-of-the-environment thing, but seriously, reducing your CPU's power usage is going to have a negligible effect at best. Power plants are not finely tuned analog instruments that produce exactly the amount of power needed to run the grid; they make more than is necessary to absorb any sudden increases in demand, so there is always waste. You would do more to fix the wasted-power situation than anything else by replacing transmission lines with something like copper or silver, which have far lower electrical resistance than the aluminum used now... but as stated above, it all comes down to economics.

Given our current technology, if you want to cut emissions from electrical generation, push for repealing Jimmy Carter's executive order banning the reprocessing of nuclear waste, and also push to replace coal and natural gas power plants with nuclear reactors.
 
Power plants are not finely tuned analog instruments that produce exactly the amount of power needed to run the grid.
You are right, because power plants cannot produce power that is not being consumed by some type of load. It's the load that dictates how much power is being produced; whether it be a storage device or whatnot, without a load there is no power. You see, power is the product of voltage times current. If there is no current, then the voltage is multiplied by zero, therefore giving no power output. The only way for a power plant to produce power is to introduce a load.

Now can we please get back to the topic? The topic, which is small-scale compared to the power plant: why do some of you think going green is a bad idea? For the life of me I can't understand the negativity this topic has generated. Perhaps that's because I've only been reading forums for a few years and have not witnessed the other tech hurdles the PC industry has overcome. Even though I've not read about them, I can probably name a couple dozen.
 
Wait... Just to confirm, we really have people in this thread arguing AGAINST lower power states?
Really??

Surely a slight advancement in low-power states can only be a good thing? What if I want to build a small home server that is on 24/7? Haswell looks perfect for that: overnight, when the server is doing nothing, it will enter a lower power state and draw very little power.

I am so confused. Making an electric car costs more in wasted energy and materials than buying a second-hand petrol-powered car, so I can see the argument there, but what on earth are we arguing about with processor states that take less power? This can only be a good thing?
 