PCIe 4.0 will make auxiliary power cables for GPUs obsolete

Will be interesting to see if Intel's new chipsets coming in 2017 will integrate PCIe 4.0... would be nice... Or maybe AMD could steal a march and include it with their new Zen chipset...
 
> Will be interesting to see if Intel's new chipsets coming in 2017 will integrate PCIe 4.0... would be nice... Or maybe AMD could steal a march and include it with their new Zen chipset...

It would be smart thinking on AMD's part if they did, because the Q1 2017 timing does line up.
 
This idea that PCIe 4.0 would support 300 W graphics cards is not realistic. That would put a strain on motherboard manufacturers, because they would have to implement a MOSFET array the way it's done for CPUs. This would drive up the cost of motherboards and put more components on the board that are prone to failure.
 
> This idea that PCIe 4.0 would support 300 W graphics cards is not realistic. That would put a strain on motherboard manufacturers, because they would have to implement a MOSFET array the way it's done for CPUs. This would drive up the cost of motherboards and put more components on the board that are prone to failure.

I'm guessing that the PCIe spec isn't being developed in isolation from the input of motherboard manufacturers. Plus, I'm sure there were similar concerns with every spec released.
 
> This idea that PCIe 4.0 would support 300 W graphics cards is not realistic. That would put a strain on motherboard manufacturers, because they would have to implement a MOSFET array the way it's done for CPUs. This would drive up the cost of motherboards and put more components on the board that are prone to failure.

Why? The CPU needs those FET arrays to regulate 12 V down to more like 1.2 V!

PCIe power would still be delivered to the card at 12 V - no voltage regulator circuits needed on the motherboard.
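A quick back-of-the-envelope sketch (my own illustrative numbers, not from the article) makes the distinction concrete: the FET arrays exist because CPU core voltage is low, not because the wattage is high.

```python
# Ohm's law arithmetic: I = P / V, ignoring conversion losses.
def current_amps(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

# A hypothetical 300 W card fed straight from the 12 V rail:
slot_current = current_amps(300, 12)    # 25 A through the slot
# The same 300 W delivered at a CPU-like 1.2 V core voltage:
core_current = current_amps(300, 1.2)   # 250 A -- hence the VRM phases

print(f"12 V feed:  {slot_current:.0f} A")
print(f"1.2 V rail: {core_current:.0f} A")
```

Passing 12 V through to the slot leaves the step-down conversion on the card, exactly where it is today; only the 25 A delivery path moves onto the board.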
 
> This idea that PCIe 4.0 would support 300 W graphics cards is not realistic. That would put a strain on motherboard manufacturers, because they would have to implement a MOSFET array the way it's done for CPUs. This would drive up the cost of motherboards and put more components on the board that are prone to failure.
Really? You know something that all those very intelligent people designing this new standard don't?
Want to provide some evidence for this?
Whenever something new comes out, there's always someone saying "it'll never fly".... Well, sometimes it does fly :)
 
My guess would be that the GPU connectors will be adapted and moved to the motherboard, initially. Then the PSU connection will be altered to accommodate the extra 1,500 W the PCIe slots may or may not need. If you ask me, things are fine the way they are. Power delivery is fine for the vast majority; no need to change anything.
My thoughts, too. The power still has to come from somewhere. The article implies that the power will be delivered through the connector, which may mean adding extra traces or using some that are currently unused - if any. Still, the extra traces implied by this article have to handle the power that dedicated wires are handling now. If you add an extra power layer and an extra ground layer, it might be relatively easy from a motherboard maker's standpoint, but those extra layers cost money.
 
> Why? The CPU needs those FET arrays to regulate 12 V down to more like 1.2 V!
>
> PCIe power would still be delivered to the card at 12 V - no voltage regulator circuits needed on the motherboard.
I forgot to mention the added capacitors a motherboard would need in conjunction with the PCI Express controller to deliver 300 W.
 
> I forgot to mention the added capacitors a motherboard would need in conjunction with the PCI Express controller to deliver 300 W.
If there is no voltage regulation between the PSU and the card slot, there will be no need for additional capacitors. The voltage regulation is for the motherboard components, not the card slots. Cards have their own voltage requirements and therefore their own voltage regulation circuits.
 
I'm having a little trouble with this. Back in the '90s I used to build a lot of speakers for friends and family, as I was the "nerd" who knew how to calculate crossovers and impedance corrections. For powerful speakers, I never soldered crossovers onto circuit boards. Even though I could make the traces on the board as wide as I wanted, I always hard-wired crossover components, as there was a hell of a lot of resistance and loss in the traces compared to hard wiring.

I mean, my crossovers were normally made to a 50-100 W rating. I tried a couple of circuit boards for 300+ W speakers and found they got terribly hot - even though the traces were up to 10 mm wide. There simply wasn't enough copper to carry the power needed.

When I look at a modern motherboard and see the thin traces, I would be very worried about stability and endurance at the kinds of wattages PCIe 4.0 seems to promise.
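To put rough numbers on that worry (assumed illustrative dimensions, not measured from any board), the resistive loss in a trace follows directly from R = ρL / (w·t), here with a standard 1 oz (35 µm) copper layer:

```python
# Estimate resistive heating in a rectangular PCB trace.
# All dimensions below are illustrative assumptions, not from a real board.
RHO_CU = 1.68e-8  # resistivity of copper at room temperature, ohm*m

def trace_power_loss(current_a, length_m, width_m, thickness_m=35e-6):
    """I^2 * R dissipation, with R = rho * length / (width * thickness)."""
    resistance = RHO_CU * length_m / (width_m * thickness_m)
    return current_a ** 2 * resistance

# 25 A (300 W at 12 V) over 10 cm of a 10 mm wide, 1 oz copper trace:
loss = trace_power_loss(25, 0.10, 0.010)
print(f"{loss:.1f} W dissipated in the trace")  # prints "3.0 W dissipated in the trace"
```

Even a generously wide trace dissipates a few watts at that current, which is why boards carrying high currents use multiple layers, copper pours, or thicker copper rather than a single thin trace.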

In my own Skylake i7-6700 system (65 W TDP), I use a Gigabyte GeForce GTX 750 Ti Windforce edition. And even though the Maxwell GPU is rated at a TDP that can be delivered through the PCIe slot, I specifically chose a graphics card that has a 6-pin power connector. I do not want my motherboard supplying more than what the CPU, the other onboard chips, and the RAM need.

Perhaps I'm just cautious, but that's me in a nutshell...
 
Do you really think that all of the people designing this new standard haven't thought of these "problems"?

Let's give them some credit, and assume they HAVE... then we can wait for it to actually be implemented on a motherboard and slam them then....
 
And produced the same way all cheaply made products are manufactured. You would be hard-pressed to find a product that is made the way it should be - that is, unless you want to pay five times the price for simple modifications that greatly enhance durability.
 
> Do you really think that all of the people designing this new standard haven't thought of these "problems"?
>
> Let's give them some credit, and assume they HAVE... then we can wait for it to actually be implemented on a motherboard and slam them then....

I'm not slamming anybody. Just expressing my concerns, with my own experience as a reference.

And the laws of physics are hard to bend. More power requires more copper to carry it.

Yes, they might have solved the issue some way, but in my opinion, a power cable from the PSU to the graphics card is a safe and proven way of delivering the required power.
 
> I'm not slamming anybody. Just expressing my concerns, with my own experience as a reference.
>
> And the laws of physics are hard to bend. More power requires more copper to carry it.
>
> Yes, they might have solved the issue some way, but in my opinion, a power cable from the PSU to the graphics card is a safe and proven way of delivering the required power.
Yes... a power cable is a really nice thing... and for anything requiring more than 300-500 W, one will still be needed... but let's keep an open mind as to whether it's needed for less than that... a bunch of REALLY smart people think it won't be... I'm inclined to have a bit more faith in them...
 
I don't like the sound of this.

Non-backwards-compatible cards still using the exact same slot... as far as I know, that hasn't been done before. PCI -> AGP -> PCI-E, and it's been the same ever since. If new tech using an updated version of that standard isn't compatible, why is it still the same standard?

USB 3.1 keeps the existing connectors; USB 3.1 'Type-C' uses an entirely new connector and cable. As if that wasn't going to be enough of a problem.

I'd also rather have dedicated power cables running to high-draw things instead of having it pull an enormous amount of power through the motherboard.

I can only wonder what this will do to multi-GPU setups..
 
Wow, 300-500 W.

What is that going to do to the power consumption of the motherboard itself?

I'm sure it's going to depend on a few factors, but I doubt it's going to draw much more power than it needs - as in, I doubt the total draw will be much higher than that of current motherboards and video cards combined.

That being said, I would be surprised if it isn't just a bit less efficient than plugging the video card directly into the power supply, but I'm not an electrical engineer, so I'm not going to make any statements on that.
 