RTX 5090 cable melts while playing Wuchang: Fallen Feathers, despite proper seating claims

midian182

What just happened? We've seen plenty of melted GPU and cable connectors involving Nvidia graphics cards, though question marks hang over the legitimacy of some. The latest incident, however, involving an RTX 5090 and Wuchang: Fallen Feathers, happened to the editor-in-chief of a gaming publication, and he documented everything in detail.

John Papadopoulos of DSO Gaming writes that he was playing the new Soulslike Wuchang: Fallen Feathers yesterday when he noticed an unusual smell. Failing to discover its source, he decided to open his PC to check whether it had fallen victim to the dreaded cable-melting issues plaguing so many Nvidia card owners.

Unfortunately for Papadopoulos, he saw that the 12V-2x6 power cable connected to his RTX 5090 was burned and emitting smoke.

Being aware of the many reported melting incidents, Papadopoulos said he always makes sure to fully seat the power connector in the graphics card. As proof, he provided a picture of the connector plugged into the card, taken before the incident.

While some people have pointed to what looks like a tiny gap between the card and connector at the top, the editor insists the cable was pushed in as far as it would go. For further evidence of this, he says that the RTX 5090 ran for 20 minutes at 100% usage with no issues.

Papadopoulos also points to the fact that there was no damage on the top row of connectors, which he said would have burned if the cable wasn't fully seated – the damage is at the bottom row on both the cable connector and card socket.

Papadopoulos said that for a test, he removed the RTX 5090, plugged it back in using the same burned power cable, and ran Wuchang for 20 minutes. The PC was stable with no smoke coming from the connector.

There's an admission that this could still have been caused by user error – Papadopoulos says he doesn't remove the cable from the RTX 5090 when taking the card out of his PC case, so it could have come loose at some point. Others might argue that a cable "fully plugged in" should not work itself loose.

This is far from the first melting RTX 5090/cable that we've seen. One of the earliest was reported in February, but it's believed that an unofficial cable was the cause. There were two cases in April, including one involving MSI's "foolproof" yellow-tipped 12V-2x6 cable. And another incident involving MSI's colorful cable was reported in May.

Image credit: DSO Gaming, John Papadopoulos

 
Still ZERO evidence about who/what is at fault... but Nvidia haters will continue to point at the MINUSCULE number of incidents as something statistically significant...
 
None of these are user error. A badly seated connector makes no difference. The whole seating argument is a red herring. It's deflection, like blaming the messenger.
The whole point of a connector is to eliminate user error and increase safety. If the user is able to incorrectly seat the connector then it has still failed its job.
 
If we blame the user, will Nvidia lower prices? If not, it's 100% Nvidia's fault, lol... Serious question though: many 5090s (like the 4090 before it) have ended up in AI startups as cheap entry-level cards. Has this been a problem for them, and if not, why? Have we seen any reports of melting cables when a server-class power supply has been used?
 
I've tended to blame the plugs, as in one manufacturer is making rubbish female pins ... but then this was posted - https://www.techpowerup.com/forums/...for-asus-rog-astral-rtx-5090d-appears.339197/

Maybe there is an argument that the GPU goes batty and just draws far too much power. They obviously are able to, now that I've seen that. If so, then I guess Nvidia would indeed be to blame.

PS: It would also explain why the problems abruptly started with the RTX 4090 and not the RTX 3090.
 
I found that my power supply's included 12V-2x6 cable, used on my 5090, heated up to the point that I couldn't physically hold it near the connector at the graphics card end or at the power supply end. The PSU is ATX 3.1, 1200W, and officially supports the GPU, but the cable got frighteningly hot, to the point that on one test run I was convinced it had to have melted. Thankfully it had not.

However, Gigabyte (the card is a Master Ice 32G) provided a 12V-2x6 adapter fed by four 8-pin PCIe connectors, each supplying three runs of almost comically thicker cable into the smaller pin layout of the 12V-2x6 connector. Since switching to it, the cable has run no hotter than the ambient air around it. After several weeks of monitoring GPU wattage and torture testing (~630W reported sustained loads), it has never become hot again. I've finally settled into feeling confident it will not ever melt.

So I can believe many 5090s are running their cables at the thermal limit at all times, and spikes finally do them in.
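For a rough sense of why the adapter runs so much cooler, here's some back-of-the-envelope Python. The ~630 W figure is my sustained test load from above; the six 12V pins of a 12V-2x6 and the three 12V pins per 8-pin connector are the usual spec numbers, and the perfectly even current split is an idealized assumption:

```python
# Back-of-the-envelope comparison (idealized, assumes an even current split):
# per-pin current on a single 12V-2x6 cable vs. the 4x 8-pin adapter
# described above, at the ~630 W sustained load reported there.

LOAD_W = 630.0        # sustained board draw from my testing (assumption)
VOLTS = 12.0          # both connector types deliver +12 V

total_amps = LOAD_W / VOLTS                 # ~52.5 A in total

# 12V-2x6: six +12 V pins carry the whole load
per_pin_12v2x6 = total_amps / 6             # ~8.8 A per pin

# Adapter: four 8-pin connectors, three +12 V pins each = twelve pins
per_pin_adapter = total_amps / (4 * 3)      # ~4.4 A per pin

print(f"Total current:             {total_amps:.1f} A")
print(f"12V-2x6, per pin:          {per_pin_12v2x6:.1f} A")
print(f"4x 8-pin adapter, per pin: {per_pin_adapter:.1f} A")
```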
 
It's simply a bad design, period. If the connector lets an end user insert it improperly, it's the design. If the connector lets the GPU pull too much power, it's the design at fault. And so on and so on. I normally buy one of the 80 series cards. With my last build I bought a 4070 that didn't use the new connector. I have bad luck when I gamble, and there's no way I'd gamble something like 1,500CN on a product proven to fail. Even if it is a small percentage...
 
Maybe they should stick with 3 sets of 8 pin connectors!😲
Especially when you consider that Rumiko above has pointed out that using a Gigabyte-provided 12V-2x6 adapter fed by four standard 8-pin PCIe connectors eliminated the problem. No matter how much NVIDIA tries to push the narrative of user error, the real issue is their absolute failure at basic electrical engineering. They probably used AI to come up with everything, which would explain it.
 
I’m no electronics engineer, but I still say those pins are too small for the amount of power going through them…
The pins are not too small. By spec, they support 9 amps each but only carry about 8.3 amps at full tilt at 600 sustained watts.

The issue here is that these cards have no proper load balancing built in and can end up pulling all 50+ amps through just one or two pins, drastically overloading them.

Even if it was using old 8 pin connectors, loading that much amperage would melt them too. This is a "we cheapened our card design" problem.
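To put rough numbers on that (simple arithmetic, not measurements; the 9 A rating and 600 W figure are the ones above, and the two-pin case is the worst-case scenario I'm describing):

```python
# Illustrative arithmetic only: per-pin current when the load is shared
# evenly vs. when it collapses onto just two pins.

PIN_RATING_A = 9.0     # per-pin rating cited above
BOARD_POWER_W = 600.0  # sustained board power
VOLTS = 12.0
PINS_12V = 6           # +12 V pins in a 12V-2x6 connector

total_amps = BOARD_POWER_W / VOLTS          # 50 A

even_split = total_amps / PINS_12V          # ~8.3 A per pin: within rating, barely
two_pin_case = total_amps / 2               # 25 A per pin if only two pins conduct

print(f"Even split: {even_split:.1f} A per pin (rated {PIN_RATING_A:.0f} A)")
print(f"Two pins:   {two_pin_case:.1f} A per pin, "
      f"{two_pin_case / PIN_RATING_A:.1f}x the rating")
```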
Maybe they should stick with 3 sets of 8 pin connectors!😲
That wouldn't be enough; you'd need at least four to properly feed a 5090. The design is 20 years old, and we needed some kind of update.
I've tended to blame the plugs, as in one manufacturer is making rubbish female pins ... but then this was posted - https://www.techpowerup.com/forums/...for-asus-rog-astral-rtx-5090d-appears.339197/

Maybe there is an argument that the GPU goes batty and just draws far too much power. They obviously are able to, now that I've seen that. If so, then I guess Nvidia would indeed be to blame.

PS: It would also explain why the problems abruptly started with the RTX 4090 and not the RTX 3090.
The cards need power balancing/negotiation hardware built in. USB-C doesn't have any of these issues, despite pushing high voltages and currents through much smaller pins. The difference is that the device and power supply must negotiate what they and the cable can do, and if that negotiation is lost, it reverts to a low-power state.

There is no reason such tech shouldn't be in the 12V-2x6 connector. In fact it absolutely should be, given the lack of headroom these pins have.
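Purely to illustrate what I mean by negotiation, a toy sketch in Python; the names, limits, and fallback value are invented for the example, not taken from any real spec:

```python
# Toy model of USB-PD-style negotiation applied to a GPU power connector.
# The sink (GPU) requests a power contract; the source (PSU) grants only
# what it and the cable are rated for; with no valid contract, everything
# falls back to a safe low-power state. All numbers here are invented.

SAFE_FALLBACK_W = 150.0   # assumed "no contract" limit

def negotiate(requested_w: float, source_limit_w: float, cable_limit_w: float) -> float:
    """Return the granted power contract in watts."""
    granted = min(requested_w, source_limit_w, cable_limit_w)
    if granted < requested_w:
        print(f"Request for {requested_w:.0f} W refused; granting {granted:.0f} W")
    return granted

def contract_lost() -> float:
    """If negotiation is lost mid-session, revert to the low-power state."""
    print(f"Contract lost, reverting to {SAFE_FALLBACK_W:.0f} W")
    return SAFE_FALLBACK_W

# Example: the GPU asks for 600 W over a cable the source only trusts for 450 W.
contract = negotiate(requested_w=600, source_limit_w=1000, cable_limit_w=450)
print(f"Operating contract: {contract:.0f} W")
```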
 
Even if it was using old 8 pin connectors, loading that much amperage would melt them too.
Exactly. So either the 600W plugs aren't doing the job that they're spec'd for, or the GPU is drawing more than rated. And it looks like either is possible.

All other arguments are red herrings. Negotiating for a power rating won't help if the firmware is buggy and the GPU just overloads anyway. Balancing is no fix: if some pins are lacking, then you've got less power than rated or negotiated. That's a fail.

PS: The only way a GPU could handle a bad plug is to drop to a lower power rating until the plug gets replaced with one that isn't faulty. That's not a negotiation with the power supply, that's just self limiting.
 
I sidestep this issue by simply not using 600 watt GPUs. My 12-pins are working great. We live in a society that chose to mine virtual currency instead of sustainable energy though, so obviously I'm an outlier.
 
Exactly. So either the 600W plugs aren't doing the job that they're spec'd for, or the GPU is drawing more than rated. And it looks like either is possible.
Incorrect, the plug IS doing its job fine; there is no plug on earth that can handle 3x its spec without issues. These pins are designed for 9 amps. Pushing 40+ amps is completely out of spec. Even if they were designed for 15 amps you would be running into problems.
All other arguments are red herrings. Negotiating for a power rating won't help if the firmware is buggy and the GPU just overloads anyway.
Pot, meet kettle. Also wrong. Negotiation would prevent the PSU from allowing over-spec amperage through a pin, regardless of what the GPU demands.

Again, see how USB-C works and how we don't see phones getting 120 watts shoved into a 10-watt port.
Balancing is no fix - If some pins are lacking then you've got less power than rated or negotiated. That's a fail.
Balancing IS a fix: it prevents the cable from melting, and the GPU can then send a system message that the plug is not making good contact (see the sketch below).
PS: The only way a GPU could handle a bad plug is to drop to a lower power rating until the plug gets replaced with one that isn't faulty. That's not a negotiation with the power supply, that's just self limiting.
It is negotiation; you don't seem to understand the fundamentals of how negotiation works and how it prevents overheating in systems with limited headroom.

You want a system that can maintain maximum draw regardless of how many pins are connected... sorry, that is not how electronics work.
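To make the "balance, then warn" idea concrete, here is a hypothetical sketch in Python; the per-pin sensing, thresholds, and throttle step are all invented for illustration and don't describe any shipping card's firmware:

```python
# Hypothetical per-pin monitoring loop (illustration only).
# Assumes the board can read per-pin current (e.g. via shunt resistors) and
# can lower its own power limit; neither is claimed of any real card here.

PIN_LIMIT_A = 9.0          # per-pin rating discussed above
THROTTLE_STEP_W = 50.0     # invented throttle step

def check_pins(pin_currents_a: list, power_limit_w: float) -> float:
    """Throttle and warn if any pin carries more than its rating."""
    worst = max(pin_currents_a)
    if worst > PIN_LIMIT_A:
        power_limit_w -= THROTTLE_STEP_W
        print(f"WARNING: pin at {worst:.1f} A exceeds {PIN_LIMIT_A:.0f} A; "
              f"likely a bad contact. Lowering limit to {power_limit_w:.0f} W.")
    return power_limit_w

# Example: four pins share the load, two barely conduct (invented readings, ~50 A total).
readings = [11.8, 11.5, 11.9, 11.6, 1.7, 1.5]
limit = check_pins(readings, power_limit_w=600.0)
```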

I sidestep this issue by simply not using 600 watt GPUs. My 12-pins are working great. We live in a society that chose to mine virtual currency instead of sustainable energy though, so obviously I'm an outlier.
Well, aside from ignoring all the solar and wind that has been built, there is only one sustainable baseload power source available, and environmental groups will sue endlessly to stop it from being built because green glowing rocks scary.

Meanwhile China is building dozens of Gen IV+Breeder designs, so the AI companies will eventually find themselves going overseas for that sweet cheap power.
 
That wouldn't be enough, you'd need at least 4 to properly feed a 5090. The design is 20 years old, we needed some kind of update.
What does the age of the design have to do with anything? It worked fine for 20 years, and things only went wrong when someone tried to update it. Just because something is new doesn't mean it's good. Just because something is old doesn't mean it's bad.

I've seen some cable management jobs with multiple 8-pin connectors that make the inside of a PC look like a racing engine.
 
Like a broken record. Nvidia loves one-time-use graphics cards. Enthusiast PC gaming in 2025: bring your own thermal camera. 😱🤦‍♀️
 

I used to joke that these 600W cards should be plugged directly into the wall. Maybe that's what should be happening now.

Of course they should be plugged into the wall. When we need kilowatts and multiple amps of power for a consumer graphics card, not to mention workarounds and parts so they don't sag and break under their own weight, the industry (and its consoomers) obviously have gone too far.

The external power bricks will of course drive up costs, but that's obviously no concern for people willing to dish out money for 90 series cards.
 