Melting RTX 4090 cables could be much more common than previously reported

Well, I'm inclined to believe it, because from the very start I thought pumping more power through fewer pins was just a bad idea. We went from 300 watts over 16 pins to 600 watts over 12 pins.
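
A quick back-of-the-envelope of that claim. This is a minimal sketch, not a measurement: the 150 W and 600 W figures are the nominal connector ratings, and only the +12 V pins are counted as current-carrying.

```python
# Per-pin current: two 8-pin PCIe connectors vs one 12VHPWR.
# Nominal ratings and pin counts, not measured values.

V = 12.0  # both connector types deliver +12 V

old_watts, old_12v_pins = 2 * 150, 2 * 3  # two 8-pin plugs, 3 +12V pins each
new_watts, new_12v_pins = 600, 6          # one 12VHPWR, 6 +12V pins

print(f"2x 8-pin: {old_watts / V / old_12v_pins:.1f} A per +12V pin")  # ~4.2 A
print(f"12VHPWR : {new_watts / V / new_12v_pins:.1f} A per +12V pin")  # ~8.3 A
```

Same six +12 V conductors either way, but roughly twice the current through each, on physically smaller terminals.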

But this will be fairly easy to confirm or deny.

Simply dismissing it outright is foolish. The claims of melted connectors never went away, even after 2 revisions. So if this guy has a repair shop and this is what he's getting, we should at least hear what he has to say.

If nVidia did create a faulty connector, made a standard around it, and it cost consumers millions in damages while potentially increasing the risk of fire in people's homes, they deserve to be held accountable. On the other side, if this guy is using sensationalism to generate money with social media, he deserves to be made a fool of. But NorthridgeFix didn't just make this claim the other day; they have been posting about getting hundreds of fried 4090s for nearly 9 months now.

But with all that said, nVidia has done enough dumb things around the 12-pin connector that I would actually like some investigation into this. One reason I heard that may answer the 3090ti/4090 question is that the 4090 has been prone to transient spikes so severe that they shut power supplies off. The power problems of the 4090 have been making headlines during its entire production.
Can you link me one incident of fire caused by the connector? Why do we have to make stuff up in every nvidia thread? You realize the connector is NOT made by nvidia? They are following the PCI-SIG standard; it has nothing to do with nvidia. For God's sake....

And no, the 4090 has no power spikes, not anywhere near the levels of the 3090ti, which can hit over 1 kW. But that doesn't matter anyway, because power spikes don't cause connectors to melt.

Basically everything you said in that post was just flat out wrong, lol.
 
Can you link me one incident of fire caused by the connector? Why do we have to make stuff up in every nvidia thread?
LMFAO do you need it explained to you why melting connectors and electrical shorts can cause fires?
You realize the connector is NOT made by nvidia? They are following the PCI-SIG standard; it has nothing to do with nvidia. For God's sake....

"NVIDIA states in the video that this 12-pin design is of its own creation"


And no, the 4090 has no power spikes, not anywhere near the levels of the 3090ti, which can hit over 1 kW. But that doesn't matter anyway, because power spikes don't cause connectors to melt.

Basically everything you said in that post was just flat out wrong, lol.
My man has never heard of a fuse before.
 
Can you link me one incident of fire caused by the connector? Why do we have to make stuff up in every nvidia thread? You realize the connector is NOT made by nvidia? They are following the PCI-SIG standard; it has nothing to do with nvidia. For God's sake....

And no, the 4090 has no power spikes, not anywhere near the levels of the 3090ti, which can hit over 1 kW. But that doesn't matter anyway, because power spikes don't cause connectors to melt.

Basically everything you said in that post was just flat out wrong, lol.

MSI has even used the transient power spikes of the 4090 in their marketing material, stating that the 4090 can have spikes of up to 1350 watts over the 12-pin connector.

But don't take my word for it; I posted the link to their marketing material on their webpage.
 
I've been saying for a long time that a huge function of a connector is to eliminate user error. If it fails in that function, then it fails as a connector. That's the most basic argument I use. The other one I use is that it seems like a stupid idea to put more than double the power through the connector while also making the contact area smaller.

I fail to see how either of those arguments are invalid.

Now someone else brought up something interesting earlier. Why wasn't this an issue on the 30 series? If nVidia changed the manufacturing process of the connector, leading to a higher rate of user error, then it is absolutely their fault.

The connector eliminates user error by making sure you don't plug the wrong pins into the wrong contacts. If you can't be bothered to ensure your connector is secure, that's on you. The vast majority of 4090 owners seem to be doing it correctly, since they're not seeing failures.
 
The connector eliminates user error by making sure you don't plug the wrong pins into the wrong contacts. If you can't be bothered to ensure your connector is secure, that's on you. The vast majority of 4090 owners seem to be doing it correctly, since they're not seeing failures.
If the connector allows full operation when not fully seated, it is not preventing user error. Stop trying to shift blame from the company with a $1.2 trillion market cap onto the consumer.
 
The connector eliminates user error by making sure you don't plug the wrong pins into the wrong contacts. If you can't be bothered to ensure your connector is secure, that's on you. The vast majority of 4090 owners seem to be doing it correctly, since they're not seeing failures.
There is this thing called "tolerance" in manufacturing. This problem didn't seem to exist in 30 series connectors, showing that nVidia is capable of building the 12-pin connector without issue. However, when they released the connector again, it somehow didn't fit the 4090 as perfectly to spec. So when nVidia changes the acceptable tolerance on a power connector installed on $1500+ graphics cards and it fails, it somehow becomes user error in installation?
 

MSI has even used the transient power spikes of the 4090 in their marketing material, stating that the 4090 can have spikes of up to 1350 watts over the 12-pin connector.

But don't take my word for it; I posted the link to their marketing material on their webpage.
Oh, a PSU manufacturer that just released uber-expensive ATX 3 power supplies is telling you how much you really, really need them? LOLk.

According to hwbusters, you know, the no. 1 expert on these kinds of things, the 4090 spikes up to 618 W. That's peanuts compared to both the 3090 and the 3090ti. But it's still irrelevant, because spikes do NOT melt cables. A spike lasts nanoseconds; that's nowhere near enough time to raise the cable's temperature enough to melt it.
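
For reference, a rough order-of-magnitude check of the spike-duration point. Every figure below (per-pin current, contact resistance, terminal mass, spike length) is an illustrative assumption, not a measurement:

```python
# How much can one brief transient heat a connector terminal?
# All values below are assumptions chosen for illustration.

I_spike   = 1350 / 12 / 6   # ~18.8 A per +12V pin during a 1350 W excursion
R_contact = 0.005           # assumed healthy contact resistance, ohms
mc        = 0.0005 * 385    # thermal mass: ~0.5 g copper terminal, 385 J/(kg*K)
t_spike   = 100e-6          # a generous 100-microsecond transient

E = I_spike**2 * R_contact * t_spike  # joules dumped into one contact
print(f"one spike warms the terminal by ~{E / mc * 1e3:.2f} mK")  # ~0.91 mK
```

A sustained load through a bad contact is a different story; see the sketch further down the thread.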

Edit 1: Also, I think MSI should sue you. They did not say that the 4090 does spike up to 1350 watts. You OBVIOUSLY, in order to avoid admitting you were wrong, made stuff up. What MSI said is that a 4090 can have a spike of up to 1350 W and their new PSU can handle it. That's a totally different statement.
 
If the connector allows full operation when not fully seated, it is not preventing user error. Stop trying to shift blame from the company with a $1.2 trillion market cap onto the consumer.
On first read, your point seems correct, but when you realize that a lot of / most connectors actually allow full operation even when not fully seated, we realize that this is down to the usual "nvidia - let's hate on it". And again, the connector is NOT nvidia's. It's the PCI-SIG standard. EVERYONE would be using it. AMD would be using it too, but they had finalized their cards before it became the standard.

Just yesterday, funnily enough, I was using my laptop while the charger was half plugged into the socket. A couple of months ago I burned the charging port of my phone because of poor contact. So, that's something very, very, very common. As far as I know, with anything computer related, only HDMI and DisplayPort do not allow full operation when not fully seated. Everything else, it's up to you.
 
On first read, your point seems correct, but when you realize that a lot of / most connectors actually allow full operation even when not fully seated, we realize that this is down to the usual "nvidia - let's hate on it". And again, the connector is NOT nvidia's. It's the PCI-SIG standard. EVERYONE would be using it. AMD would be using it too, but they had finalized their cards before it became the standard.
Hmmm.... no. Most connectors will work, but won't deliver FULL POWER when not seated correctly, which would avoid this issue.

And again, nvidia designed it. I already linked proof. You can deny reality all you want; it just makes you look the fool.
Just yesterday, funnily enough, I was using my laptop while the charger was half plugged into the socket. A couple of months ago I burned the charging port of my phone because of poor contact. So, that's something very, very, very common. As far as I know, with anything computer related, only HDMI and DisplayPort do not allow full operation when not fully seated. Everything else, it's up to you.
Ironically, you prove my point: that laptop didn't burn down because that connector is designed for use in non-optimal scenarios. Interesting. So why doesn't nvidia's connector do that?
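
The physics underneath this exchange, as a minimal sketch: heating inside a contact goes with I²R, and a partially seated pin presents a higher contact resistance, so a connector that is fine when fully seated can cook itself at full load. The resistance values here are purely assumed for illustration:

```python
# I^2 * R heating in one connector contact, healthy vs degraded seating.
# Both resistance values are assumptions for illustration only.

I_pin = 600 / 12 / 6  # ~8.3 A per +12V pin at the rated 600 W

for label, R in (("healthy contact", 0.005), ("degraded contact", 0.050)):
    P = I_pin**2 * R  # watts dissipated continuously inside the contact
    print(f"{label} ({R * 1e3:.0f} mohm): ~{P:.2f} W per pin")
```

A tenfold rise in contact resistance turns roughly a third of a watt into several watts, concentrated in a plastic housing with almost no airflow, which is the regime where housings soften and melt.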
 
Hmmm.... no. Most connectors will work, but won't deliver FULL POWER when not seated correctly, which would avoid this issue.

And again, nvidia designed it. I already linked proof. You can deny reality all you want; it just makes you look the fool.
Your so-called proof was talking about... a different connector. :laughing:

Ironically, you prove my point: that laptop didn't burn down because that connector is designed for use in non-optimal scenarios. Interesting. So why doesn't nvidia's connector do that?
The laptop didn't burn down because I noticed and fixed it. I don't know what would or wouldn't have happened if I hadn't.

You are acting like the issue is a fact, when all the "evidence" we have is someone with a youtube channel claiming he receives a hundred cards every month. You realize how insane that is? If he said 5 cards I might be inclined to believe him, but 100 cards a month just cannot be true. What percentage of people with melted connectors send their cards to this guy? 1%? 5%? 10%? So there are 1k 4090s melting every month? Yeah, okay buddy.
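
The extrapolation being argued over here is a single division, and it is very sensitive to the assumed capture fraction. The intake figure and percentages below are just the thread's own hypotheticals, not data:

```python
# Implied total failures if one shop sees some fraction of them.
# shop_intake and the capture fractions are hypotheticals from the thread.

shop_intake = 100  # melted 4090s the shop claims to receive per month

for capture in (0.01, 0.05, 0.10):
    print(f"shop sees {capture:.0%} of failures -> "
          f"~{shop_intake / capture:,.0f} melted 4090s/month overall")
```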
 
Edit 1: Also, I think MSI should sue you. They did not say that the 4090 does spike up to 1350 watts. You OBVIOUSLY, in order to avoid admitting you were wrong, made stuff up. What MSI said is that a 4090 can have a spike of up to 1350 W and their new PSU can handle it. That's a totally different statement.
So I can see you actually read my source; it's nice to see you aren't 100% an nVidia shill. I left that little part in there on purpose to see if you thoroughly read sourced links, although I doubt you'll believe that.

But you made an interesting point for me, and I was hoping you would. There is a reason I selected MSI: they produce both 4090s using the 12-pin connector and ATX 3.0 power supplies.
Oh, a PSU manufacturer that just released uber-expensive ATX 3 power supplies is telling you how much you really, really need them? LOLk.
If the power supply manufacturer is telling you how much you need them in order to sell more, wouldn't it stand to reason that a GPU manufacturer would shift the blame for a hardware fault somewhere else so as not to impact sales?

And you still have not been able to answer why the 3090ti is not a victim of "user error", or even of the hardware faults we see from the 4090, but the 4090 is. User error should be statistically identical across products. The fact that this does not happen on the 30 series shows us that it is not user error. I would like to remind you that you brought up that the 3090ti is more power hungry and suffers from more extreme transient spikes, but does not suffer from "user error".

What explanation do you have for this, aside from "lol, k" and cheers-ing yourself for what you feel is a job well done defending nVidia?

See, my issue has never been with nVidia over the 12vhpwr connector, although I continue to think it's a stupid feature no one asked for. My issue is with the connector itself. nVidia does not own the injection molding factories where the connector is made. The difference between the connectors would not be their fault. Injection molds degrade with continued use. The plant making the connectors for nVidia could be using the same molds from the 30 series.

It might not have seemed like an issue when first releasing them, but the continued use of old injection molds is likely the root cause of why these things are failing. There are many issues that stack on top of this, like putting 2.6x the power through a smaller connector, or user error. However, these problems only came about after continued use and degradation of the molds.

If nVidia wants the 12vhpwr connector to be successful, it has to hold it to very high manufacturing standards. The 4090 shows that they did not hold their connector to high standards and simply blamed the users.
 
And you still have not been able to answer why the 3090ti is not a victim of "user error", or even of the hardware faults we see from the 4090, but the 4090 is. User error should be statistically identical across products. The fact that this does not happen on the 30 series shows us that it is not user error. I would like to remind you that you brought up that the 3090ti is more power hungry and suffers from more extreme transient spikes, but does not suffer from "user error".

What explanation do you have for this, aside from "lol, k" and cheers-ing yourself for what you feel is a job well done defending nVidia?
I don't need an explanation for something that does not happen. Just because that random dude says 4090s are melting by the hundreds doesn't make it true. What explanation do you have for that severe 1060 melting issue? I get 1000 of them every month :laughing:
 
We are even later into the 3090ti lifecycle; where are all the damaged connectors there? Apparently the connector isn't trash.
The 4090 does have approximately 100 watts more power draw (355 for the 3090ti and 450 for the 4090). And some go higher. I think it's the MSI Suprim X 4090 that peaks at 500 watts.
 
The 4090 does have approximately 100 watts more power draw (355 for the 3090ti and 450 for the 4090). And some go higher. I think it's the MSI Suprim X 4090 that peaks at 500 watts.
Nope. That's the FE model of 3090. Custom 3090tis went way higher than 355.
 