New 12V specification could mean the end of melting RTX 4090 power connectors

Because modern process nodes are capable of building GPUs far more complex than they were a decade ago.

There is no arbitrary watt limit for GPUs.

Building GPUs on newer nodes can mean two very different things:
- you can somewhat increase the speed (frequency and transistor count) while decreasing energy consumption,

- or you can ignore energy consumption entirely and push speed and transistor count as far as the technology allows. Basically, speed above all.
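The two strategies above can be sketched with the standard dynamic-power relation P ≈ C·V²·f (capacitance grows with transistor count). The scaling factors and the 300 W baseline below are illustrative assumptions, not vendor data:

```python
# Illustrative sketch (assumed numbers, not vendor data): dynamic power
# scales roughly as P ~ C * V^2 * f, where C grows with transistor count.
# A node shrink can be spent on efficiency or on raw speed.

def dynamic_power(cap_rel, volt_rel, freq_rel, base_watts=300.0):
    """Relative dynamic power versus an assumed 300 W baseline GPU."""
    return base_watts * cap_rel * volt_rel**2 * freq_rel

# Strategy A: modest gains, lower voltage -> power goes down
a = dynamic_power(cap_rel=1.3, volt_rel=0.8, freq_rel=1.1)

# Strategy B: "speed above all" -- more transistors, same voltage, higher clock
b = dynamic_power(cap_rel=2.0, volt_rel=1.0, freq_rel=1.3)

print(f"efficiency-first: {a:.0f} W, speed-first: {b:.0f} W")
```

With these made-up factors, the efficiency-first design lands below the baseline while the speed-first design more than doubles it, which is the tradeoff the post describes.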

Now, the desktop space (= gamers) has shown Nvidia, AMD and Intel that the most important thing is framerates. If the card pulls 1 kW and costs 2000€, that is secondary and it will still sell. So why focus on energy consumption if the buyers really don't care? Those doing mining, AI, or military work, who use these cards for the heavy lifting, don't care about energy consumption either.

Conclusion: brands follow the money, and if those with the money only care about speed, then that's it. The connector was meant as a quick patch, and the (bad) result is there for all to see. The engineers, the certification association and the brands are all guilty of caring only about money and speed. If they had tested it properly, this would never have happened, as they would also have kept tight control over what the Chinese connector manufacturers were (not) doing.

Dangerous...
 
While the previous power connector had a maximum rating of 600W, the limit on the new 12V-2x6 power connector is 675W (600W for the connector and an additional 75W from the PCIe slot).

What's the difference here? The previous 12VHPWR was 600W alone, the new 2x6 is 600W alone, and in both cases the PCIe slot could provide the additional 75W. In the new standard they just stated the obvious, or am I missing something here?
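For reference, here is the arithmetic behind the 675 W headline figure, broken down to the per-pin level. The six-power-pin count is how the 12VHPWR/12V-2x6 connectors are laid out; the even current sharing is an idealizing assumption:

```python
# Sketch of where the 675 W figure comes from and what the 600 W
# connector rating means per pin (assumes current shared evenly).
RAIL_VOLTS = 12.0
CONNECTOR_WATTS = 600.0
SLOT_WATTS = 75.0      # PCIe slot contribution
POWER_PINS = 6         # 12V pins on a 12VHPWR / 12V-2x6 connector

total_watts = CONNECTOR_WATTS + SLOT_WATTS   # the 675 W headline figure
amps_total = CONNECTOR_WATTS / RAIL_VOLTS    # 50 A through the cable
amps_per_pin = amps_total / POWER_PINS       # ~8.33 A per pin if shared evenly

print(total_watts, amps_total, round(amps_per_pin, 2))
```

That 50 A has to split across just six small pins, which is why contact quality matters so much at this power level.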
 
In those days, no one probably ever imagined that one GPU would encroach on the 1 kW power-draw realm.
Two HD6990 cards = 10.56 billion transistors' worth of GPUs, 8 GB of 5 Gbps GDDR5, and a combined peak TDP of 725 W
One 4090 card = 76.3 billion transistors, 24 GB of 21 Gbps GDDR6X, and a peak TDP of 450W

No, I don't imagine anyone back then could have foreseen the level of progress GPUs have made.
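Taking the figures quoted above at face value (as stated in the post, not re-verified), the jump in transistors per watt is easy to quantify:

```python
# Rough density comparison from the figures quoted above
# (transistor counts and TDPs as stated in the post, not re-verified).
hd6990_pair = {"transistors": 10.56e9, "tdp_w": 725}
rtx4090     = {"transistors": 76.3e9,  "tdp_w": 450}

def mtrans_per_watt(card):
    """Millions of transistors per watt of TDP."""
    return card["transistors"] / 1e6 / card["tdp_w"]

old = mtrans_per_watt(hd6990_pair)   # ~14.6 M transistors per watt
new = mtrans_per_watt(rtx4090)       # ~169.6 M transistors per watt
print(f"{new / old:.1f}x more transistors per watt")
```

Roughly an order of magnitude more transistors per watt in about a decade, on these numbers.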
 
The 4090 FE spiked around 500W and the Asus OC around 550W. Also, looking at 3090 Ti spikes and knowing that no one reported a burned connector makes me wonder what changed with the 12VHPWR connector or adapter!

[Attachment: power-spikes.png]
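Using the spike figures quoted in the post, here is how much of the 600 W connector rating those transients consume:

```python
# Sketch: margin a transient spike leaves against the 600 W connector
# rating. Spike figures are the ones quoted in the post above.
RATING_W = 600.0

def headroom(spike_w, rating_w=RATING_W):
    """Fraction of the connector rating consumed by a transient spike."""
    return spike_w / rating_w

for name, spike in [("4090 FE", 500), ("Asus OC", 550)]:
    print(f"{name}: {headroom(spike):.0%} of the 600 W rating")
```

Even the bigger spike stays under the rating on paper, which supports the point that the failures likely came from the connector itself rather than simple overload.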
 
Welcome to the new fashion in ATROCIOUS dark-pattern design from Nvidia.
DLSS EXCLUSIVITY: DLSS 1 and DLSS 2 only for RTX 2000/3000 series, DLSS 3 only for RTX 4000 series.
Prepare for DLSS 4, exclusive to the Nvidia 5000 or 6000 series.

The 16-pin power connector is also ATROCIOUS dark-pattern design from Nvidia:
VERSION 1 of the 16-pin power connector designed by Nvidia: burns only Nvidia 4090 connectors; 3090 connectors are fine.
VERSION 2 of the 16-pin power connector designed by Nvidia: burns only Nvidia 5090 connectors; 4090 connectors are fine, minus those already burned by Version 1.
 
Building GPUs on newer nodes can mean two very different things:
- you can somewhat increase the speed (frequency and transistor count) while decreasing energy consumption,

- or you can ignore energy consumption entirely and push speed and transistor count as far as the technology allows. Basically, speed above all.

Now, the desktop space (= gamers) has shown Nvidia, AMD and Intel that the most important thing is framerates. If the card pulls 1 kW and costs 2000€, that is secondary and it will still sell. So why focus on energy consumption if the buyers really don't care? Those doing mining, AI, or military work, who use these cards for the heavy lifting, don't care about energy consumption either.

Conclusion: brands follow the money, and if those with the money only care about speed, then that's it. The connector was meant as a quick patch, and the (bad) result is there for all to see. The engineers, the certification association and the brands are all guilty of caring only about money and speed. If they had tested it properly, this would never have happened, as they would also have kept tight control over what the Chinese connector manufacturers were (not) doing.

Dangerous...
Yes, believe it or not, when people upgrade, they want more performance! It's shocking, I know.

Also, Ada is significantly more energy efficient than Ampere was, both overall and per frame.

If you want to pearl-clutch about "muh power use", then you shouldn't be buying high-tech processors to play vidya in the first place.
 
What's the difference here? The previous 12VHPWR was 600W alone, the new 2x6 is 600W alone, and in both cases the PCIe slot could provide the additional 75W. In the new standard they just stated the obvious, or am I missing something here?
It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.
 
It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.
Yep, it seems that if users do not fully insert the new connector, at least some pins will not make contact and the card will not power on at all. With Nvidia's previous, disastrously failed design, a connector that was not fully inserted would still power on, but over less contact surface, which generated heat and led to the entire Nvidia 4090 burning mess.
PCI-SIG had a "hard" choice to make: 1. Continue to demand that users insert the 16-pin power connector the Nvidia way, which means that if it burned, it was the user's fault in Nvidia's twisted, perverted vision. 2. Redesign the 16-pin power connector properly.
It seems they chose the proper method, so they released the new ATX 3.1 spec and the PCI Express 6.0 spec. This also means that current owners of the Nvidia 4090 were left behind and screwed by Nvidia again.
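The "less contact surface generates heat" mechanism is just I²R heating: if a half-seated connector leaves only some pins carrying the load, per-pin current rises and the dissipated heat rises with its square. The contact resistance and pin counts below are assumed, simplified numbers for illustration:

```python
# Physics sketch (assumed, simplified numbers): with a partially seated
# connector, fewer pins carry the current, and resistive loss per
# contact rises with the square of the current (P = I^2 * R).
CABLE_AMPS = 50.0          # 600 W / 12 V
CONTACT_OHMS = 0.005       # assumed per-pin contact resistance

def heat_per_pin(pins_making_contact, amps=CABLE_AMPS, r=CONTACT_OHMS):
    """Watts dissipated in each contact, assuming even current sharing."""
    i_per_pin = amps / pins_making_contact
    return i_per_pin**2 * r

full = heat_per_pin(6)     # all six power pins seated
bad  = heat_per_pin(3)     # half-seated: three pins carry everything

print(f"{full:.2f} W/pin fully seated vs {bad:.2f} W/pin half seated")
```

Halving the number of contacts quadruples the heat per contact, which is why a connector that powers on while badly seated is so much more dangerous than one that refuses to power on at all.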
 
It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.
I meant to ask about the wording of the maximum power in the new standard: it is a known fact that the PCIe bus can provide 75W of power to cards (that's why low-end cards don't require extra power cables), but for the new 12V-2x6 they say it can provide 675W (600W from the PSU cable and 75W from the bus), which seems unnecessary to spell out, as this is the default behaviour anyway.
 