Nvidia RTX 5090 graphics card power cable melts at both ends, bulge spotted at PSU side

midian182

Facepalm: Imagine getting ahead of the scalpers and securing a $3,000+ RTX 5090, only for the connectors on the card and cable to melt. There has been yet another one of these incidents reported, with both ends of the 12VHPWR, the card, and the PSU all showing burn damage.

The latest case, reported by a Redditor called Roachard, involved an MSI RTX 5090 Gaming Trio OC. He bought the card from Best Buy about a month ago for just over $3,000 and paired it with a Corsair SF1000L on an Asus Strix B650E-I motherboard. That PSU is an 80 Plus Platinum, ATX 3.0-compliant power supply that sells for $230 on Amazon – not a cheap, underpowered model, basically.

Roachard says the cable was plugged directly into the PSU. No extensions were used and he wasn't overclocking the RTX 5090.

He initially thought the damage was only on the GPU side – both the cable and card connectors – but there was also damage to the PSU end. There's a worryingly large bulge in the connector plugged into the power supply where the plastic has melted, next to a cable that has burned and turned white (masthead image).

In February, there were reports of another cable melting while being used with an RTX 5090. It also damaged both the card (below) and PSU, but it's believed the issue was partly caused by the cable being a third-party model from Moddiy, even though the website describes it as an ATX 3.0, PCIe 5.0, 16-pin to 16-pin model supporting up to 600W with the newer 12V-2x6 design.

Roachard says that he used the original 12VHPWR rated for 600W that came with the PSU. He assumed that avoiding third-party cables meant there would be no issues, but apparently not.

There have been several cases of RTX 5090 cables/connectors and PSUs melting, and at least one involving an RTX 5080. While there have only been a handful of these instances, they bring to mind the similar, more widespread issues with the RTX 4090. Most of those were due to the 12VHPWR cable not being fully inserted because it was too stiff, which led to the updated 12V-2x6 connector design.

Earlier this year, overclocker Der8auer replicated the setup of one of these RTX 5090 melting incidents using a Corsair 12VHPWR cable. The cable's connectors reached 150°C on the PSU side and close to 90°C on the GPU side. The problem was an uneven distribution of power: two wires designed to carry 5 to 6 amps were carrying more than 20 amps each, while other wires carried as little as 2 amps.
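To put that imbalance in perspective, the heat generated in each conductor scales with the square of the current it carries. The short Python sketch below is only an illustration: the per-wire resistance is an assumed value for roughly two feet of 16 AWG copper, and the imbalanced currents are chosen to resemble, not reproduce, the figures Der8auer reported.

```python
# Rough sketch of why uneven current sharing matters: resistive heating in a
# wire scales with the square of the current (P = I^2 * R). The wire length
# and per-wire resistance below are assumed values for illustration only.

WIRE_RESISTANCE_OHMS = 0.008   # ~0.6 m of 16 AWG copper at roughly 13 mOhm/m

def wire_heat_watts(current_amps: float, resistance_ohms: float = WIRE_RESISTANCE_OHMS) -> float:
    """Power dissipated as heat in a single conductor."""
    return current_amps ** 2 * resistance_ohms

# Even split: 600 W / 12 V = 50 A spread across six 12 V wires.
balanced = [50 / 6] * 6
# Lopsided split resembling the reported measurements: two hot wires, the rest starved.
imbalanced = [22, 21, 2, 2, 1.5, 1.5]

for label, currents in (("balanced", balanced), ("imbalanced", imbalanced)):
    heat = [wire_heat_watts(i) for i in currents]
    print(f"{label:10s} total {sum(currents):.1f} A, "
          f"hottest wire dissipates {max(heat):.2f} W as heat")
```

With the same 50 A total, the hottest wire in the lopsided case dissipates several times the heat of any wire in the balanced case, which is why a few conductors and pins can cook while the rest stay cool.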

 
Jayz2cents ran voltage tests on a series of cables in a recent YouTube video. He found that where pins were slightly pushed in, the voltage on those cables was low. He used a special microscope to magnify and identify problems. Some cables were consistently in spec; many others were not. An interesting watch.
 
Maybe video cards shouldn't be 600W power monsters. Maybe scale it back a little bit.

While I agree that a 600W powerhouse seems stupid, this issue could be avoided if it were engineered properly.

Nvidia doesn't care. They're not going to spend any money on pinpointing and fixing the issue. Even if a class action lawsuit were slapped on them, the few-million-dollar fine they might be hit with is just laughable. The lawyers would end up with 30-40% of the payout and the balance would be split between thousands of everyday consumers, who would get $10 back.
 
I've wondered whether we would see an uptick in "stuff" failures after the pandemic and crypto 2.0, thanks to manufacturers using lower-grade components to get around supply chain issues and fulfill orders (stuff with 50K MTBF instead of 200K MTBF, etc.), and then just sticking with those lower-quality components when they realized no one complained / the components still failed outside of warranty.
 
Looks like Darwin's natural selection is starting to hit electronics. There's a limit to how big something can grow, after which it scales down or dies.
 
Undersized wires plus bad connections which oxidize and become resistive.
One has to remember that 600W at 12V means 50 amps of current. Those wires and their connectors look rather puny for the current they are supposed to carry.

I'm not sure who is certifying those cables/connectors...

But hey, who needs regulation? It stifles innovation, right?
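The arithmetic behind that 50 A figure is worth spelling out. The sketch below is only a back-of-the-envelope check: the ~9 A "comfortable" per-pin figure is taken from a comment further down in this thread, not from an official connector datasheet.

```python
# Quick check of the current implied by a 600 W draw on a 12 V rail, and the
# share each of the six 12 V pins must carry if the load is split evenly.
# The 9 A per-pin figure is a claim from a later comment, not a datasheet value.

POWER_W = 600.0
RAIL_V = 12.0
NUM_12V_PINS = 6
ASSUMED_SAFE_AMPS_PER_PIN = 9.0

total_amps = POWER_W / RAIL_V              # 50 A total
per_pin_amps = total_amps / NUM_12V_PINS   # ~8.3 A per pin if perfectly balanced

print(f"total current: {total_amps:.1f} A")
print(f"per-pin current (even split): {per_pin_amps:.2f} A")
print(f"margin vs assumed safe per-pin limit: "
      f"{ASSUMED_SAFE_AMPS_PER_PIN - per_pin_amps:.2f} A")
```

Even with a perfectly even split, each pin sits within about 0.7 A of that assumed limit, which is why any imbalance pushes individual pins well past it.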
 
Here's another data point: I had a custom-made cable melt, but only on the PSU side – plain old 8-pin PCIe, a 3080 Ti and a Super Flower Platinum. The cable was of poor quality, I guess, but maybe power-hog GPUs and overall poor engineering are to blame, not only 12VHPWR?
 
Undersized wires plus bad connections which oxidize and become resistive.
One has to remember that 600W at 12V means 50 amps of peak current. Those wires and their connectors look rather puny for the current they are supposed to carry.

I'm not sure who is certifying those cables/connectors...

But hey, who needs regulation? It stifles innovation, right?
Spot on with the right response. Those wires and connectors aren't suitable for carrying that much current; it's simple school physics. A thinner wire has greater resistance, and the heat it dissipates grows with the square of the current flowing through it. But the problem is that instead of computers becoming more efficient than they were in the 90s, companies are making ever more power-hungry PC components to extract more performance. This isn't an ideal time to make CPUs and GPUs more power-hungry, as the world is dealing with global warming, but our politicians and governments don't care; they give a free hand to those private companies in a capitalist economy. And consumers are also having a bad experience, dealing with failing hardware and rejected warranty claims after paying a hefty amount of money.
 
Maybe video cards shouldn't be 600W power monsters. Maybe scale it back a little bit.
It's not just the 600W figure alone. Gamers Nexus discussed how 12VHPWR doesn't effectively spread the load across its pins, overloading a select few while idling the rest. Some vendors' versions did correct the issue, but without much publicity.
That leads us to the headroom issue: an 8-pin PCIe connector could deliver up to 300W without failure, which is 150W above its rated capacity, while 12VHPWR has a theoretical limit only about 80W above its 600W spec.
Undervolt your cards, ladies.
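One way to visualize the headroom argument is to compare each connector's claimed ceiling to its rated load. The inputs below are the figures from the comment above, not official ratings, so treat the output as an illustration of the commenter's point rather than a specification.

```python
# Headroom comparison using the figures claimed in the comment above:
# an 8-pin PCIe connector rated 150 W reportedly tolerating ~300 W,
# versus 12VHPWR rated 600 W with only ~80 W of theoretical headroom.
# These inputs are the commenter's claims, not datasheet values.

connectors = {
    "8-pin PCIe": {"rated_w": 150, "ceiling_w": 300},
    "12VHPWR":    {"rated_w": 600, "ceiling_w": 680},
}

for name, c in connectors.items():
    margin = c["ceiling_w"] - c["rated_w"]
    factor = c["ceiling_w"] / c["rated_w"]
    print(f"{name:10s} rated {c['rated_w']} W, ceiling ~{c['ceiling_w']} W "
          f"-> {margin} W headroom ({factor:.2f}x safety factor)")
```

On those numbers, the old connector carries a 2x safety factor while the new one carries roughly 1.13x, which is the gap the comment is pointing at.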
 
This is entirely the fault of poorly made cables that don't meet spec, pure and simple. There is no other reason, whatsoever. Just look at HDMI, even major cable producers have perhaps at best a 50/50 shot of their cables meeting the actual spec.

You can hate Nvidia all you want, but they have nothing to do with it.
 
Spot on with the right response. Those wires and connectors aren't suitable for carrying that much current; it's simple school physics. A thinner wire has greater resistance, and the heat it dissipates grows with the square of the current flowing through it. But the problem is that instead of computers becoming more efficient than they were in the 90s, companies are making ever more power-hungry PC components to extract more performance. This isn't an ideal time to make CPUs and GPUs more power-hungry, as the world is dealing with global warming, but our politicians and governments don't care; they give a free hand to those private companies in a capitalist economy. And consumers are also having a bad experience, dealing with failing hardware and rejected warranty claims after paying a hefty amount of money.

First of all, not capitalism. If it were, someone would have taken Nvidia's designs and improved on them. Except, that's illegal in our current society. We live in a corporatist society in America, not a capitalist one.

Also, the overall failure rate from these major electronics manufacturers is less than 1%, and nobody is getting a valid warranty claim denied. In all my life, every time I've submitted a warranty claim, with any company, it was honored. I even had one company in recent years replace my stuff under warranty after I admitted it was my own fault – I was merely inquiring whether they had a paid repair program.
 
Maybe video cards shouldn't be 600W power monsters. Maybe scale it back a little bit.
Interestingly, AMD's 500+W GPUs didn't melt cables. Hmmm......

600W is nothing in our modern world. Basic Level 2 charging of your EV on a basic home charger is 20x higher than this card's draw.
This is entirely the fault of poorly made cables that don't meet spec, pure and simple. There is no other reason, whatsoever. Just look at HDMI, even major cable producers have perhaps at best a 50/50 shot of their cables meeting the actual spec.

You can hate Nvidia all you want, but they have nothing to do with it.
That's not accurate. The standard clearly has issues with power balancing; a proper standard would not allow a system to go three times over the per-pin power draw limit without throwing error codes.
Undersized wires plus bad connections which oxidize and become resistive.
One has to remember that 600W at 12V means 50 amps of peak current. Those wires and their connectors look rather puny for the current they are supposed to carry.

I'm not sure who is certifying those cables/connectors...

But hey, who needs regulation? It stifles innovation, right?
It's within specification for the connector. 50A divided by 6 pins is 8.3 amps, and the cables can handle 9 without issue. The issue is the cards drawing more than 9 amps per pin – a design flaw that Nvidia has continuously ignored.
Alternatively, where I live energy is expensive. While I could probably afford to purchase one of these or similar cards, actually using them would result in crazy energy bills.
Major nope.
I'll happily enjoy my older card w/ slightly less eye candy.
Even in Germany, which has the highest energy cost in the EU, the difference in power use generates bills that are a mere fraction of the cost of the card itself. If $50 a year in extra electricity cost scares you off from a $2,000+ GPU, you couldn't comfortably afford such a luxury device in the first place.
 
Nvidia should've supplemented the power connector with an additional standard 8-pin – those can carry close to 300W and would've made sure this problem couldn't occur.
As for 5090s – the only «safe» option to buy at the moment is the ROG Astral with pin sensors, or a ROG Thor III with pin sensors on the GPU side.
There are plug-in «cards» you can insert between the connector and the cable that have pin sensors – but who knows, maybe they end up causing issues instead of fixing them.
 
Does his PC have dandruff?

Now for a totally pointless comment: this problem will never affect me, as I have no desire for such a stupidly large power hog in my house just to play a game.

How green does my grass need to be, how big a TV? Will it make me happier, or will I cower in fear as the next biggest thing hits the market?

I get that for some people gaming at high specs is a big part of their lives, but I'm sure many do it just because, and for the ego boost.

Get out and watch people mountain biking, hiking, doing photography – notice that the people who seem to do it the most don't have the latest and greatest, but know how to use and maintain their gear, which looks well used. E.g. a lot of those cyclist guys, when you chat with them, can strip and rebuild a bike in a few hours.
 
It's within specification for the connector. 50A divided by 6 pins is 8.3 amps, and the cables can handle 9 without issue. The issue is the cards drawing more than 9 amps per pin – a design flaw that Nvidia has continuously ignored.
Heh, the best cables I've seen are 16 AWG. A simple calculation for a solid 16 AWG wire shows that if it is fed 12V at 8A at the PSU end, it will deliver about 11.9V at the other end over a standard 3 ft run. However, those wires are not single solid-core conductors but stranded, so their resistance is a bit higher and the voltage drops further. That makes the card pull slightly more current to compensate – that's just physics – and that is at least part of your design flaw.

I'm currently holding one of those 12VHPWR cables proudly displaying a 600W label, and it is made of 16 AWG wires, which are inadequate for this sort of current draw; their safety factor is simply too low. To be more precise, if they were a few inches longer, we would have more than a 1% voltage drop. Sure, such a wire will take a 9A draw individually, but not without heating up.

And then you look at the size of the actual connector contacts – the bits that have to transmit those 8 amps – and they are simply too small. They barely have enough contact area for that current if everything is ideal and the contact is almost perfect. Reality is different: contacts may not be perfectly clean, or may have a tiny bit of oxidation on them, which is not uncommon, so they heat up as the current draw increases, causing a bit more oxidation, and so on, until the temperature caused by the rising contact resistance melts the connector.

As a senior automation engineer, I'm quite familiar with power requirements and how power distribution works. When connectors melt, it is not due to short-term peak current draw but because the sustained (RMS) draw is beyond what the connector can handle at its duty cycle. The only logical conclusion one can draw is that the wires and connectors are undersized. The hard evidence points that way.
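A quick sanity check on the voltage-drop part of that argument, as a sketch. The 16 AWG resistance per foot and the 3 ft length match the comment's assumptions; the connector contact resistance is an invented round number added purely to show how a single pin dissipates heat.

```python
# Voltage drop along a 16 AWG conductor at the currents discussed above.
# Resistance per foot (~4 mOhm/ft for 16 AWG copper) and the 3 ft length match
# the comment's assumptions; the lumped contact resistance is a made-up value
# used only to illustrate per-pin heating.

R_PER_FOOT_OHMS = 0.004   # ~16 AWG copper
LENGTH_FT = 3.0
CURRENT_A = 8.0
SUPPLY_V = 12.0
CONTACT_R_OHMS = 0.005    # assumed 5 mOhm of contact resistance in one pin

wire_r = R_PER_FOOT_OHMS * LENGTH_FT
v_drop_wire = CURRENT_A * wire_r                  # V = I * R along the wire
v_drop_contact = CURRENT_A * CONTACT_R_OHMS       # drop across the contact
contact_heat_w = CURRENT_A ** 2 * CONTACT_R_OHMS  # heat dissipated in that pin

print(f"wire drop:    {v_drop_wire:.3f} V -> {SUPPLY_V - v_drop_wire:.2f} V delivered")
print(f"contact drop: {v_drop_contact:.3f} V, dissipating {contact_heat_w:.2f} W in one pin")
```

With those inputs the wire alone drops the rail to roughly 11.9V, in line with the comment, and any extra milliohms of oxidized contact turn directly into watts of heat concentrated in a single pin.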
 
Jayz2cents ran voltage tests on a series of cables in a recent YouTube video. He found that where pins were slightly pushed in, the voltage on those cables was low. He used a special microscope to magnify and identify problems. Some cables were consistently in spec; many others were not. An interesting watch.

JZ2C's testing is about as methodical as throwing bones and summoning familiars. There are much better sources, such as GN or der8auer.
 
That was NOT an Nvidia cable; it was a cable from an ATX 3.1 power supply, and that standard is specced by Intel, which has sole control of the ATX power supply standard.

Until Intel changes THEIR ATX 3.x standard, this problem will persist, and there is really nothing Nvidia can do about it. Nvidia adding another 12V-2x6 connector isn't going to help when most power supplies only have one 12V-2x6 port, because then that port will just burn up. The only ATX 3.x power supplies I've seen with two 12V-2x6 connectors are the expensive 1500W models; everything below them just has one.
 
Interestingly, AMD's 500+W GPUs didn't melt cables. Hmmm......

600W is nothing in our modern world. Basic Level 2 charging of your EV on a basic home charger is 20x higher than this card's draw.

That's not accurate. The standard clearly has issues with power balancing; a proper standard would not allow a system to go three times over the per-pin power draw limit without throwing error codes.

It's within specification for the connector. 50A divided by 6 pins is 8.3 amps, and the cables can handle 9 without issue. The issue is the cards drawing more than 9 amps per pin – a design flaw that Nvidia has continuously ignored.

Even in Germany, which has the highest energy cost in the EU, the difference in power use generates bills that are a mere fraction of the cost of the card itself. If $50 a year in extra electricity cost scares you off from a $2,000+ GPU, you couldn't comfortably afford such a luxury device in the first place.
The standard is actually set by INTEL, not Nvidia, and is part of Intel's ATX 3.x power supply standard. It'll never get fixed as long as people continue to blame Nvidia instead of Intel, who actually sets and controls the ATX power supply standards. Intel has no incentive to change a thing as long as someone else is getting blamed for THEIR screwup.
 
Jayz2cents ran voltage tests on a series of cables in a recent YouTube video. He found that where pins were slightly pushed in, the voltage on those cables was low. He used a special microscope to magnify and identify problems. Some cables were consistently in spec; many others were not. An interesting watch.
The problem with Jay's method is that just plugging it in and out is NOT a cycle. A cycle is plugging it in, running it under stress for a few hours, unplugging the cable and letting it cool off. THAT is one cycle, and to do a second cycle you repeat the process. Each time you do this, you monitor the voltage drop across the connector, which tells you the resistance of the connection. Once that measured voltage drop hits a certain level, that is considered your failure point, which is well before the connector starts to melt. It's the resistance of the connection that causes it to heat. For instance, if you have 100W going in but only 90W coming out, then you have 10W of heating in the connector, and as the temperature rises the resistance tends to increase and the losses become even greater, because it's a positive feedback loop.
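For what it's worth, the voltage-drop-to-resistance step described above is a one-line calculation. The sketch below uses made-up measurement values purely to show the arithmetic behind the feedback loop the comment describes, not numbers from Jay's video.

```python
# Derive connection resistance and heating from a measured voltage drop, as
# the comment above describes. The measured values here are hypothetical,
# chosen only to illustrate the calculation.

current_a = 45.0   # hypothetical sustained current through the connector
v_drop = 0.22      # hypothetical voltage measured across the connector

resistance_ohms = v_drop / current_a   # Ohm's law: R = V / I
heat_w = v_drop * current_a            # power lost in the connector: P = V * I

print(f"connection resistance: {resistance_ohms * 1000:.1f} mOhm")
print(f"heat dissipated in connector: {heat_w:.1f} W")
# As the connector heats, copper resistance rises (~0.4% per degree C), so the
# drop and the heating both grow -- the positive feedback loop described above.
```

With those example inputs you get roughly 5 mOhm of connection resistance and about 10W of heat trapped in the connector, matching the 100W-in/90W-out scenario in the comment.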
 