Correction: PCIe 4.0 won't support up to 300 watts of slot power

Scorpus

Staff member

Earlier this week we reported on the upcoming PCI Express 4.0 standard, with information from Tom's Hardware suggesting the updated specification would support up to 300 watts of power through the PCIe slot. It turns out this information was inaccurate.

Upping the amount of slot power from 75 watts to 300 watts would have made PCIe power cables redundant for most high-end graphics cards. A powerful GPU like the Nvidia GeForce GTX 1080, which consumes around 180 watts at load, would have been able to draw all this power through the motherboard, rather than through a combination of external cables and slot power.

Unfortunately, a spokesperson for the PCI Special Interest Group (PCI-SIG) incorrectly stated to Tom's Hardware that slot power limits would be raised in the PCIe 4.0 standard. This is not the case: PCIe 4.0 will still have a 75 watt limit on slot power, the same as previous iterations of the spec.

The increased power limit instead refers to the total power draw of expansion cards. PCIe 3.0 cards were limited to a total power draw of 300 watts (75 watts from the motherboard slot, and 225 watts from external PCIe power cables). The PCIe 4.0 specification will raise this limit above 300 watts, allowing expansion cards to draw more than 225 watts from external cables.
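As a quick sanity check, the PCIe 3.0 budget described above can be sketched in a few lines (the wattage figures are the spec limits quoted in this article; the variable names are just for illustration):

```python
# PCIe 3.0 power budget, per the limits quoted above.
# Names are illustrative, not taken from any spec document.
SLOT_POWER_W = 75    # maximum draw through the x16 slot itself
AUX_6PIN_W = 75      # one 6-pin auxiliary PCIe power cable
AUX_8PIN_W = 150     # one 8-pin auxiliary PCIe power cable

# A fully loaded PCIe 3.0 card: slot + 6-pin + 8-pin
total_w = SLOT_POWER_W + AUX_6PIN_W + AUX_8PIN_W
print(total_w)  # 300 watts, the PCIe 3.0 ceiling
```

PCIe 4.0 keeps the 75 watt slot term of that sum fixed and raises only the auxiliary-cable side.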

Having a graphics card that draws all of its power through the PCIe slot would be fantastic, as it would reduce cable clutter in the case. However, the increased power flow through the motherboard would present design challenges for motherboard manufacturers. For the foreseeable future, powerful graphics cards will still require external PCIe power cables.


 
It's long past time that we got a whole new mainboard architecture where every card, drive and internal peripheral uses the same slot. And that slot should be a tool-free, simple insert-and-power-up design, like an 8-track tape.
 
It's long past time that we got a whole new mainboard architecture where every card, drive and internal peripheral uses the same slot. And that slot should be a tool-free, simple insert-and-power-up design, like an 8-track tape.
A more unified approach is still a long way off, I bet. The ATX form factor could benefit from quite a few revisions, but those wouldn't take off, just like BTX didn't.
 
Well, it would be pointless to raise the PCIe lane power capacity, as you would just move the PCIe power connector from the card to the motherboard, whether by connecting PCIe power connectors directly to the motherboard or by creating a 40-pin power connector. Either way, you still need the same number of cables to supply the same amount of power.
 
It's long past time that we got a whole new mainboard architecture where every card, drive and internal peripheral uses the same slot. And that slot should be a tool-free, simple insert-and-power-up design, like an 8-track tape.

This is going to hit the market in a Mac 15 years from now and everyone is going to say the idea is stupid.
 
I'd like for the motherboard to have an ATX connector on both ends. This way the wires from the PSU can be shortened. The last PSU I bought literally had three times more wire than needed to reach the motherboard. And the irony with the PSU was it being modular to reduce the unneeded wiring. If they are going to make a change, they should make one that would actually make a difference on a grander scale.
 
Honestly, this isn't a terrible loss. The added cost of manufacturing the motherboard would outweigh the benefits for the few people who would actually take advantage of the added copper in the motherboard. The external power source is meant to reduce production cost on the motherboard side, which makes OEM systems cheaper to sell to the masses, who would never consider throwing in a 300 watt dedicated GPU. And as already stated, the motherboard would still need to be powered by additional auxiliary power connectors to cover the added power draw of the expansion slots. Imagine a triple-slot motherboard required to pump out 900 watts to three possible GPUs... fire hazard, anyone?

Admittedly, GPUs are becoming more and more efficient as the technology progresses, so 300 watts in itself is a tad excessive, and triple or quadruple GPU rigs are soon to be a thing of the past, so perhaps upping the slot power to something in the realm of 150 watts could have been feasible and beneficial to all but the highest-end cards. Honestly, however, I have no problem with the current PCIe power cable standard either; routing cables is in itself an art form. Don't forget the GPU manufacturers would be alienating all previous generations if they were to rely solely on the motherboard to supply the juice, so I think there would be a five-year transition period before it was standard anyway.
 
I really think it makes more sense to power Graphics cards directly from the power supply.

I upgraded to an EVGA 1000W from the 400W base power supply so I could run my Core i7 6950, three Titan X 12GB cards, my hard drive, my SSD and everything else. (I do a lot of 4K video editing.)

It really doesn't make sense to add all that copper to the motherboard and increase the price.

Imagine building a rig with 3 GPU cards and trusting all that power input to the motherboard?

That's definitely a fire hazard.

I'd also worry the board might burn out.

If my power supply stops working, I just get a new Power supply.

I think it makes more sense to keep the motherboard as affordable and safe as possible.
 
I really think it makes more sense to power Graphics cards directly from the power supply.

Imagine building a rig with 3 GPU cards and trusting all that power input to the motherboard?

That's definitely a fire hazard.

I imagine such a mobo would have a purpose-built cooling solution just for this reason. It would also have a massive price tag to go along with it.
 
I imagine such a mobo would have a purpose-built cooling solution just for this reason. It would also have a massive price tag to go along with it.
and that's my point. Sure it can be done...but the costs would make it not worth it.

Especially when you have to deal with overclockers who want liquid cooling solutions.
 
and that's my point. Sure it can be done...but the costs would make it not worth it.

Especially when you have to deal with overclockers who want liquid cooling solutions.

A liquid cooling tile would be the only way to make it work, so the OCs would certainly be satisfied. I can't imagine the size of an air cooled solution.

The novelty of the idea is starting to grow on me.
 
Honestly, this isn't a terrible loss. The added cost of manufacturing the motherboard would outweigh the benefits for the few people who would actually take advantage of the added copper in the motherboard. The external power source is meant to reduce production cost on the motherboard side, which makes OEM systems cheaper to sell to the masses, who would never consider throwing in a 300 watt dedicated GPU. And as already stated, the motherboard would still need to be powered by additional auxiliary power connectors to cover the added power draw of the expansion slots. Imagine a triple-slot motherboard required to pump out 900 watts to three possible GPUs... fire hazard, anyone?

Admittedly, GPUs are becoming more and more efficient as the technology progresses, so 300 watts in itself is a tad excessive, and triple or quadruple GPU rigs are soon to be a thing of the past, so perhaps upping the slot power to something in the realm of 150 watts could have been feasible and beneficial to all but the highest-end cards. Honestly, however, I have no problem with the current PCIe power cable standard either; routing cables is in itself an art form. Don't forget the GPU manufacturers would be alienating all previous generations if they were to rely solely on the motherboard to supply the juice, so I think there would be a five-year transition period before it was standard anyway.

I would actually disagree with the idea that GPUs will use less energy. They are more efficient, but they are also getting bigger every other year. In fact, I believe AMD is working on a scalable architecture that will allow them to link multiple dies together to make one single big card out of multiple cores (acting as one big core). This could allow for massive 500W single cards.
 
The PCB traces to the PCIe lanes would likely have to be so heavy that they would need to be separated as much as possible from the rest of the mobo circuitry. When you come right down to it, it's still way easier to run a wire direct from the PSU, the same as we do right now.

Oh, I can hear all you chronic whiners out there: "Does this mean I'm going to have to pick up that big heavy wire and shove like hell to plug it in, the same as I do now, for the foreseeable future...?" :'( *nerd*

Yeah well, if you don't like it, hire somebody to do it for you..
 
Didn't I say earlier, in the first post, that it couldn't provide that amount of power? It's just a PCI Express connector; it's not like it has capacitors connected to it to offer such wattage.
 
What exactly is it you think capacitors have to do with supplying wattage?
Capacitors and MOSFETs store energy and provide a gateway for power to move from board to component, so something that draws more wattage needs caps to transmit power.
 
Capacitors and MOSFETs store energy and provide a gateway for power to move from board to component, so something that draws more wattage needs caps to transmit power.
Where the hell did you read that? A gateway, seriously, a gateway! It's power that has no need for conversion; passive is fine, which is exactly what the GPU connection from the PSU does. And no, capacitors and MOSFETs don't store energy; they regulate and stabilize.
 
And no, capacitors and MOSFETs don't store energy; they regulate and stabilize.
Well Cliff, you got 2 out of 3. Capacitors DO store electricity, and that's the mechanism by which they regulate, filter, and stabilize.

In a PSU, the capacitors are "charged" up to the average voltage of the power line they're on. In the case of the rectified output of the transformer, that voltage is nothing but a string of sine-wave humps. In a "full-wave rectifier", the negative portion of the AC wave is flipped "upside down" to positive. So, the cap "inhales" that choppy power and releases it when the voltage drops, smoothing the waveform toward flat-line DC. What is left after that process, you'll hear referred to as "AC line ripple".

The residual ripple in the wave is kind of what you hear in an audio amp turned up in the absence of a signal: resistance causes the "hiss" you hear, and the AC ripple causes hum. (Some hum is also induced from surrounding electrical fields, "EMI" if you will.)

So, even rectified and filtered power from the PSU/line isn't pure enough (still too much ripple) for serious CPU-ing, hence the need for the VRM (voltage regulator module) which surrounds and feeds the CPU. The MOSFET acts like a spigot, responding to the rise and fall in voltage by limiting the peaks and "opening up" to allow more pass-through when the supply voltage drops. The caps do store electricity, but they don't "supply" it; the PSU takes care of that. The caps in the VRM provide additional filtering, flattening the DC voltage to almost a dead straight line. So, in short: MOSFET = voltage control, caps = final filtering out of any residual AC ripple.

The classic use of large capacitors is in audio amplifiers. Since music is basically one big choppy AC wave, the peaks would draw down the PSU voltage, causing the amp to distort or "clip". The caps release their stored charge to prevent the supply voltage from dropping. After the signal ebbs, the caps recharge and the cycle begins again.

Another classic use of capacitors as storage devices is with hard-to-start electric motors, such as those in air and A/C compressors. Without the cap in the AC line, those types of motors would never start; they'd just sit there and blow fuses, due to the stall condition creating massive inrush current. Don't forget, a stalled electric motor can draw a virtually unlimited amount of power, up to the point where it destroys itself.
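The smoothing effect described above can be sketched with the standard back-of-the-envelope ripple formula; the component values below are made up for illustration, not taken from any particular PSU:

```python
# Back-of-the-envelope ripple for a full-wave rectifier:
#   V_ripple ~ I_load / (f_ripple * C), where f_ripple = 2 * f_line
# (full-wave rectification doubles the ripple frequency)
f_line = 60.0            # Hz, mains frequency
f_ripple = 2 * f_line    # 120 Hz ripple after full-wave rectification
i_load = 2.0             # amps drawn by the load (illustrative)
c = 4700e-6              # farads of filter capacitance (4700 uF, illustrative)

v_ripple = i_load / (f_ripple * c)
print(round(v_ripple, 2))  # roughly 3.55 V peak-to-peak
```

Bigger caps or a lighter load shrink that ripple, which is exactly why the VRM's extra capacitance flattens the line further.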

Capacitors and MOSFETs store energy and provide a gateway for power to move from board to component, so something that draws more wattage needs caps to transmit power.
Who writes this nonsense for you? It's like the prosaic "fractured fairy tales of electronics 101". A "Metal Oxide Semiconductor Field Effect Transistor" certainly does not "store" electricity. And this may be a semantic point, but capacitors "store and release electricity", they don't "supply" it. Which is why the "S" in "PSU" stands for "supply".
 