Next-gen power connector for GPUs can handle up to 600W

mongeese

Why it matters: Modern GPUs have pushed the venerable 8-pin Molex connector to its limits. Nvidia ditched it for this generation’s Founders Edition cards, which use a proprietary 12-pin connector, while AIBs choose between overstuffing hungry cards with three connectors or starving them with two.

A new power connector, called 12VHPWR, solves that problem by delivering as much power as four 8-pin Molex connectors -- 600W. It’s part of the PCIe 5.0 standard, and it could start appearing on GPUs next year.

Some manufacturers have already listed their 12VHPWR plugs and cables, hence the pictures. But the credit for most of the details goes to Igor’s Lab, which learned about the connector from power supply manufacturers and AIBs. The latter were reportedly relieved to be moving on from the old 8-pins.

As its name suggests, the 12VHPWR connector has 12 power pins. They’re 3 mm wide, instead of 4.2 mm like Molex’s. It also has four tiny contacts on the underside that carry sideband signals, and a latch on top that secures the plug in the socket.

Each power pin can carry at least 9.2 A at 12 V. With six of the 12 pins supplying 12 V (the other six are ground returns), the connector can deliver a total of 55.2 A, or 662.4W. But it can’t be paired with GPUs that draw more than 600W, because the spec requires roughly 10% headroom.
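
As a quick sanity check on those numbers, here is a minimal sketch (assuming six of the 12 pins supply +12 V and the other six are ground returns, per the figures above):

```python
# Rough 12VHPWR power-budget check using the figures above.
# Assumes six +12 V supply pins (the other six are ground returns), each rated 9.2 A.
PINS_12V = 6
AMPS_PER_PIN = 9.2
VOLTS = 12.0

max_current = PINS_12V * AMPS_PER_PIN   # 55.2 A
max_power = max_current * VOLTS         # 662.4 W
spec_limit = 600                        # W, the pairing limit mentioned above

headroom = 1 - spec_limit / max_power   # ~0.094, i.e. roughly 10%

print(f"Max current: {max_current:.1f} A")
print(f"Max power:   {max_power:.1f} W")
print(f"The 600W limit leaves ~{headroom:.1%} headroom")
```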

Impressively, at 19 mm wide, it’s only a fraction larger than a single 8-pin connector. It does require beefier cables, though, made out of more premium materials.

At the moment, there aren’t any GPUs that require the full 600W. The first GPU that’s rumored to use the 12VHPWR connector is the (unconfirmed) RTX 3090 Ti, which might draw 450W.

But the connector might be getting used to its fullest sooner than you’d expect. AMD and Nvidia’s next generation of data center GPUs are believed to use power-hungry chiplet designs and to be twice as large as their current offerings, which already consume 250-300W.

Masthead credit: Vagelis Lnz


 
The 3090 Ti will start this trend, but both Lovelace and RDNA3 will have higher TDP.

This is what I said in a YT comment:
3090 Ti reasons to exist:
1. For bigger Nvidia margins.
2. Jensen's epeen is hurting because of the 6900 XTX(H), which is unofficially spanking the 3090 too many times for comfort.
3. Preparing/conditioning us for Lovelace, which will have at least a 450W, more likely 500W+, TDP.
4. Too many fools with more money than sense? (actually that's not a question)
 
Not sure if consumers really need it at this point other than for convenience. And no, the 3090ti and similar cards are not what I consider "consumer" products simply because of the ridiculous requirements all around: the price, the heat and power requirements, the super tiny benefits over a sensible product like a 3070ti or a 3080, etc.

For data centers this is OK: yes, it's a lot of power, but don't forget what we've learned from looking at Nvidia GPU paravirtualization. A single 3090 Ti-class GPU in a data center can potentially serve as many as eight separate customers if they're on something modest like a 1080p 60 FPS plan and aren't all running a maxed-out Cyberpunk 2077 all the time. If at least half of those customers just launch something simple like Fortnite, those resources get reallocated dynamically to someone else without issues.

So a single power-hungry card is still way, way better than eight separate 1650s if you think about it in those terms.
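
A rough per-user comparison, with illustrative wattages only (the 450W figure is the rumored 3090 Ti draw from the article; the GTX 1650's ~75W board power is an assumption here):

```python
# Per-user power: one shared high-end card vs. eight dedicated low-end cards.
# All wattages are illustrative assumptions, not measurements.
big_card_watts = 450      # rumored 3090 Ti-class draw
users_per_big_card = 8    # the vGPU sharing scenario described above
small_card_watts = 75     # typical GTX 1650 board power (assumed)

print(f"Shared big card:  ~{big_card_watts / users_per_big_card:.0f} W per user")
print(f"Dedicated 1650s:   {small_card_watts} W per user")
```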
 
Throw a CPU, monitor, speakers and other peripherals on top of that and you're not far off tripping the breaker on some household circuits. That's a lot of juice to play Doom. I'd agree with a lot of the comments here - it's getting out of hand.
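
For a sense of scale, here's a sketch with made-up component figures against a common 15 A, 120 V household circuit (circuit limits vary by country):

```python
# Whole-system draw vs. a typical household circuit.
# Component figures are rough assumptions for illustration only.
system_watts = {
    "GPU": 600,          # full 12VHPWR budget
    "CPU": 250,
    "rest of PC": 100,   # board, RAM, drives, fans, PSU losses
    "monitor": 60,
    "speakers etc.": 40,
}
total = sum(system_watts.values())
circuit_limit = 15 * 120   # a 15 A, 120 V circuit = 1800 W

print(f"Estimated system draw: {total} W")
print(f"Circuit limit:         {circuit_limit} W ({total / circuit_limit:.0%} used)")
```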
 
Basically Moore's law is dead, and the only way to keep up the appearance of Moore's law is to double down on power usage and bigger, more expensive coolers.
 
Do I want a 600w gpu? No.

Do I want a power connection capable of providing as much power as I'll ever use or consider using, so I never have to worry about the cord being a bottleneck? Yup, don't see a problem.
 
Is NVIDIA hitting a wall on performance/power consumption? For how long they've been in business, with large profits and all that talent, this is just sad! I could understand if they were small like AMD, but it's like the CEO of NVIDIA doesn't care anymore.

I was thinking COVID might be one of the reasons. Is demand so high that workers are too tired? Are they overworking people because gamers want the cards but miners grab them first? I understand nothing is foolproof; a good example is how a PCM can protect the engine, but if the car is a manual rather than an automatic, the PCM can't stop you from shifting into the wrong gear.
 
To Nvidia, holding the crown means holding mindshare, hence the sky's-the-limit TDP and winning at any cost. AMD is following the trend; can't have Intel eat its cake!
 
... I thought the world was moving toward saving energy...
For me, the video card market stopped a couple of years ago. I won't spend more than 250 euros on a video card. Nor on a monitor, of course.
 
To Nvidia, holding the crown means holding mindshare, hence the sky's-the-limit TDP and winning at any cost. AMD is following the trend; can't have Intel eat its cake!

Very true, now AMD will be very lax when it comes to power consumption. The bar has been lowered and 450W will be the norm lol.
 
Do you guys even like tech???

This boost in power is rumored to come with a 2.5-3x jump in performance.
Add in MCMs and it kinda makes sense. What also makes sense is 600W being used for tiers higher than the ones we have now. So we might get Titan Supers and new Ti models?

*shrugs*
 
LOL so many people salty over bigger GPUs. If it angers you so much then buy a laptop TDP GPU and be happy, or go outside and touch grass.
Seems like nVidia isn't concerned about energy efficiency of their products.
Nobody in the high end gives a rat's arse about "muh efficiency". Also, power draw is not the same as efficiency: if a 600W GPU does 3x the work of a 400W GPU, then the 600W model is far more efficient.
 
if a 600W GPU does 3x the work of a 400W GPU, then the 600W model is far more efficient.
400W vs 600W... with 3x the fps? Awesome, if it were true. Give me a 100W video card with the same "3x" technology; it would be enough for my needs as a gamer.

PS: "Efficiency" is also a technology improvement. So the point is: how many fps can you reach at a given power consumption?

To the two previous commenters: if you are so excited about raw power and big numbers only... why don't you go and buy a supercomputer? With a few million dollars you can play Doom Eternal at an incredible frame rate :)
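
Plugging the hypothetical numbers from this exchange into that perf-per-watt question (purely illustrative figures, not benchmarks):

```python
# Perf per watt with the hypothetical figures from this thread (relative performance units).
def perf_per_watt(relative_perf, watts):
    return relative_perf / watts

card_400w = perf_per_watt(1.0, 400)   # baseline card
card_600w = perf_per_watt(3.0, 600)   # "3x the work" at 600W

print(f"400W card: {card_400w:.4f} perf/W")
print(f"600W card: {card_600w:.4f} perf/W ({card_600w / card_400w:.1f}x the efficiency)")
```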
 
They're right, though. Efficiency is the ratio of useful work performed by a machine to the total energy expended.

It is possible for a "bigger" system to be more efficient than a "smaller" one, despite using more power, because it does proportionately more work. Perhaps the base cost of the support system can be amortised over a larger number of cores (see CPU 'uncore' overheads for an equivalent concept).

In reality, this probably means they're running the chips well past the sweet spot of the efficiency curve, especially for consumer GPUs. It's likely that a 100W version would be more efficient. But I could see situations where having a large, power-hungry cache would be beneficial only at a certain size. And if you want true 4K (or 8K?) gaming at a minimum acceptable performance level, that 100W (or 200W) GPU may be useless for the purpose.

Personally I'll likely settle for the next round of APUs with RDNA2. I'm happy with light gaming and a 1440p desktop. I don't need a space heater; my room gets warm enough!
 
Seems like nVidia isn't concerned about energy efficiency of their products.
Energy efficiency is relative to output performance. What's the bang for buck on those thermals, exactly?

1080 Ti: 11.3 TFLOPS at 250W TDP.
3090: 35.6 TFLOPS at 350W TDP.

I think it's pretty obvious that the efficiency between generations has massively improved. They are making larger dies because they can and people want the perf.
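
Using the figures quoted above, performance per watt more than doubles between those two cards:

```python
# FP32 throughput per watt, using the numbers quoted above.
gtx_1080_ti = 11.3 / 250   # ~0.045 TFLOPS per watt
rtx_3090 = 35.6 / 350      # ~0.102 TFLOPS per watt

print(f"1080 Ti: {gtx_1080_ti:.3f} TFLOPS/W")
print(f"3090:    {rtx_3090:.3f} TFLOPS/W ({rtx_3090 / gtx_1080_ti:.2f}x improvement)")
```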
 