> Just visited my YT archives. Who remembers quad CrossFire and quad SLI drawing almost 1000 watts of power? This was one of my favorite channels.

In those days, no one probably ever imagined that a single GPU would encroach on the 1 kW power-draw realm.
Because modern process nodes make it possible to build GPUs far more complex than they were a decade ago.
There is no arbitrary watt limit for GPUs.
While the previous power connector had a maximum rating of 600 W, the limit on the new 12V-2x6 power connector is 675 W (600 W through the connector plus an additional 75 W from the PCIe slot).
> In those days, no one probably ever imagined that a single GPU would encroach on the 1 kW power-draw realm.

Two HD 6990 cards = 10.56 billion transistors worth of GPUs, 8 GB of 5 Gbps GDDR5, and a combined peak TDP of 725 W.
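For anyone who wants to check those numbers, here's a quick sketch; the per-GPU figures (Cayman's ~2.64 billion transistors, 2 GB of GDDR5 per GPU) are from memory, so treat them as approximate:

```python
# Sanity-checking the quad-CrossFire figures above. Per-GPU numbers are
# Cayman's specs from memory -- treat them as approximate.

TRANSISTORS_PER_GPU = 2.64e9  # Cayman die, ~2.64 billion transistors
MEM_PER_GPU_GB = 2            # each HD 6990 pairs 2 GB of GDDR5 with each GPU
GPUS_PER_CARD = 2             # the HD 6990 is a dual-GPU card
CARDS = 2                     # two cards in CrossFire

gpus = GPUS_PER_CARD * CARDS
print(f"{gpus * TRANSISTORS_PER_GPU / 1e9:.2f} billion transistors")  # 10.56
print(f"{gpus * MEM_PER_GPU_GB} GB of GDDR5 total")                   # 8
```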
Yes, believe it or not, when people upgrade they want more performance! It's shocking, I know.

Building GPUs on newer nodes can mean two completely different things:
- you can increase the speed somewhat (frequency and transistor count) while also decreasing energy consumption
- or you can forget about energy consumption and push speed and transistor count as far as the technology allows. Basically, speed above all.

Now, the desktop space (i.e. gamers) has shown Nvidia, AMD, and Intel that the most important thing is framerate. If the card pulls 1 kW and costs €2000, that is secondary; it will still sell. So why focus on energy consumption if the buyers really don't care? Those using these cards for the heavy lifting in mining, AI, or military applications don't care about energy consumption either.

Conclusion: brands follow the money, and if those with the money only care about speed, that's that. The connector was meant as a quick patch, and the (bad) result speaks for itself. The engineers, the certification association, and the brands are all guilty of caring only about money and speed. Had they tested it properly, this would never have happened, as they would also have kept tight control over what the Chinese connector manufacturers are (not) doing.
Dangerous...
> I'm preparing my PSU for next gen GPU's

RTX4000900-Ti
> I'm preparing my PSU for next gen GPU's

I got a good laugh -- unfortunately you're probably right!
> What's the difference here? The previous 12VHPWR was 600 W alone, and the new 2x6 is 600 W alone, and the PCIe slot in both cases could provide the additional 75 W. In the new standard they just mention the obvious, or am I missing something here?

It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.
> It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.

Yep, it seems that if the new connector is not fully inserted, at least some pins will make no contact and the card will not power on at all. With Nvidia's previous, disastrously failed design, a connector that was not fully inserted would still power on, but through a smaller contact area, which generated heat and led to the whole RTX 4090 burning mess.
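To see why a smaller contact area turns into a fire hazard, here's a back-of-envelope I²R sketch; the contact-resistance values and the "half the pins engage" scenario are illustrative assumptions, not measured figures:

```python
# Back-of-envelope I^2*R estimate of why a poorly seated 12V connector
# overheats. The contact resistances below are illustrative assumptions,
# not measured values.

TOTAL_POWER_W = 600.0  # power delivered over the connector
VOLTAGE_V = 12.0       # supply rail
PINS = 6               # current-carrying 12 V pins

def contact_heat(pins_making_contact: int, contact_resistance_ohm: float) -> float:
    """Heat (watts) dissipated in a single pin's contact interface."""
    total_current = TOTAL_POWER_W / VOLTAGE_V      # 50 A total
    per_pin = total_current / pins_making_contact  # current per engaged pin
    return per_pin ** 2 * contact_resistance_ohm   # P = I^2 * R

# Fully seated: all 6 pins share the load through a low-resistance contact.
print(f"good seat: {contact_heat(PINS, 0.001):.2f} W per pin")  # ~0.07 W
# Partially seated: say only 3 pins engage, each through a degraded contact.
print(f"poor seat: {contact_heat(3, 0.020):.2f} W per pin")     # ~5.56 W
```

Under those assumptions that's roughly 80 times more heat, concentrated in a contact patch that was never meant to dissipate it.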
> It looks like the new one shifts the contact points deeper onto the pins, reducing the likelihood of a pin making poor initial contact, while still maintaining mechanical compatibility with the existing connectors.

I meant to ask about the wording of the maximum power in the new standard: it is a known fact that the PCIe bus can provide 75 W to a card (that's how low-end cards get away without extra power cables), but for the new 12V-2x6 they say it can provide 675 W (600 W from the PSU cable and 75 W from the bus), which seems unnecessary to state, as this is the default behaviour anyway.