Nvidia investigating cases of melting RTX 4090 power cables, RDNA 3 won't use 12VHPWR

Mwaahahahah, another one for the "they don't make them like they used to" pile.

There is a good reason why electrical loads need good surface area and contact pressure.

Amps is watts divided by volts (so volts × amps equals watts).

At 120V, 600 watts is 5 amps.
As graphics cards run on 12V, it's 50 amps, which needs a thick pipe to supply.
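For anyone who wants to plug the numbers in themselves, here's a minimal sketch of that I = P / V arithmetic; the 600 W figure is just the illustrative load from the comment above, not a measurement.

```python
# Minimal sanity check of I = P / V, using an illustrative 600 W load.
def amps(watts: float, volts: float) -> float:
    return watts / volts

print(amps(600, 120))  # 5.0  -> amps drawn from a 120 V wall outlet
print(amps(600, 12))   # 50.0 -> amps delivered on the 12 V side
```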

For 120V you forgot to add PSU efficiency: for 80 Plus add an extra 20%, and for 80 Plus Platinum add an extra 10%.
But as everyone else here is saying, it's not the power cord from the socket to the PSU that melts. It's the 12V side, and there 600W means 50A.
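And a rough sketch of that efficiency adjustment, assuming ballpark efficiencies of about 82% for a basic 80 Plus unit and 90% for 80 Plus Platinum; real figures vary with load and model.

```python
# Rough illustration: the wall supplies the DC load plus the PSU's conversion
# losses. Efficiency values below are ballpark assumptions, not unit ratings.
def wall_draw(dc_watts: float, efficiency: float) -> float:
    return dc_watts / efficiency

print(round(wall_draw(600, 0.82)))  # ~732 W at the wall for a basic 80 Plus PSU
print(round(wall_draw(600, 0.90)))  # ~667 W at the wall for an 80 Plus Platinum PSU
print(600 / 12)                     # 50.0 A on the 12 V side either way
```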

That is one shlt connector, and if the issue escalates, GTA modders will replace the C4 with the 4090. Like they did with the Note 7.
 
You bought a $1500 card without research or knowledge about the card. Info has been around for over a month, maybe much longer on the rumor side. Not the smartest thing on your part. Don't blame a company for your mistake. If your card has no issues, just be cautious is all, or use an ATX 3.0 PSU, which has been the recommendation from the beginning.
"Purchase shaming?" That's a thing now?
 
What's a little puzzling here is that I don't recall (but I'm more than happy to be shown otherwise!) the 3090 Ti, which also used the 12VHPWR connector (via a three-8-pin socket adapter), showing any signs of burning out connectors. That's a 450W card too.
 
What's a little puzzling here is that I don't recall (but I'm more than happy to be shown otherwise!) the 3090 Ti, which also used the 12VHPWR connector (via a three-8-pin socket adapter), showing any signs of burning out connectors. That's a 450W card too.
Some of the cards can reach 600 watts with overclocking.
 
Never fear, Techspot is here to protect poor and defenseless Nvidia!

Man, why are media outlets pushing Nvidia cr@p so hard?

That sentence will apply to EVERY SINGLE HALO PRODUCT!

Yet, here we are…


Ok, rant over. I am not an electrical engineer, but I had concerns about that connector from the get-go.

Convenient? Yes, but I can't shake the feeling that it's simply too much power going through very few, very thin cables…
Ehh. I've seen higher voltage & current over smaller cables.

Heating under electrical current has to do with the surface area of the conductor. Larger surface area = less heating, because electricity travels over the surface of a conductor, not through its interior. This is why stranded wires can handle a higher current than solid-core wires of a similar gauge; the individual strands have a higher combined surface area than a single solid-core wire.

So, in this case, with the heating happening on the GPU side and not the PSU side (PSU-side heating would be the result of trying to draw more current than the PSU could provide), I would expect the issue to be mechanical. The connector pins either have an insufficient surface area (unlikely, imo; too obvious an oversight), or they aren't properly inserting into, or remaining inserted in, the GPU's socket, reducing their contact area and increasing their temperature. From there, as the connector heats, its resistance rises, resulting in more heating, which raises the resistance further, leading to even more heating, and so on until the whole thing fails.

I'll bet that nVidia and/or AIBs cheaped out on the connector of the included adapter cable, and either the pins are too loose, don't align properly, require too much insertion force, and/or the overall connection (clip and plastic housing combined) doesn't have enough retention force. It's also possible that PSU manufacturers cheaped out on this new connector too, because of a "no one will use this" attitude from management.
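To put rough numbers on the feedback loop described in this post, here's a back-of-the-envelope sketch; the per-contact resistance values are picked purely for illustration and aren't taken from any connector datasheet.

```python
# Heat in a contact is P = I^2 * R, so any rise in contact resistance from a
# loose or partially seated pin shows up directly as extra wattage in the plug.
# Resistance values below are illustrative assumptions, not connector specs.
PINS = 6                            # 12 V current-carrying pins in a 12VHPWR plug
amps_per_pin = (600 / 12) / PINS    # ~8.3 A per pin at the 600 W rating

for contact_milliohms in (5, 20, 50):
    watts = amps_per_pin ** 2 * (contact_milliohms / 1000)
    print(f"{contact_milliohms} mOhm per contact -> {watts:.2f} W of heat in that pin")
# ~0.35 W per pin is manageable; several watts inside a small plastic housing is not.
```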
 
You bought a $1500 card without research or knowledge about the card. Info has been around for over a month, maybe much longer on the rumor side. Not the smartest thing on your part. Don't blame a company for your mistake. If your card has no issues, just be cautious is all, or use an ATX 3.0 PSU, which has been the recommendation from the beginning.

You definitely sound like one of those amazing people who love to blame the victim or a person who got raped because of their clothes!!
 
Some of the cards can reach 600 watts with overclocking.
That's not the point here, as the two initial reports of the 12VHPWR connector melting (here and here) concern cards with TDPs in the same ballpark as the 3090 Ti's (i.e. 450W or higher). There weren't, as far as I'm aware, the same kinds of issues with that card's power connector.

This sounds like it has more to do with manufacturing QC than with too much current going through the connector.
 
I really don't see the point of these oddball connectors. It's just another proprietary thing from nVidia to make the noobs think that it's special. It's just one more non-standard thing that isn't readily available on the open market. Why the hell would nVidia not just use the already established PCI-Express supplementary power connectors that have been around for years?

I'm glad that AMD is just using the tried and true standard ATX PCI-Express supplementary connectors because they work, everyone already knows how to use them, and it's one less thing to go missing since they're permanently attached to ATX-standard PSUs. Those connectors were made properly so they don't melt, and all I could do was shake my head when I first saw these ridiculous connectors on Ampere. I remember thinking, "Ok, so Ampere draws more juice than Turing. Why don't they just add a third PCI-Express power socket on the card? I guess that would be too simple and smart for the corporate knuckleheads at nVidia," and I believe that I was probably right.

I really don't think that we'll be seeing melted PCI-Express connectors anytime soon so anyone who buys a Radeon will be safe from this. You know, some people like to talk smack about ATi's driver history but they've NEVER had something like this happen.
 
It's just another proprietary thing from nVidia to make the noobs think that it's special
It’s not proprietary to Nvidia, though. They may well have pushed for its inclusion with PCI-SIG, but it’s not exclusive to them.

The PCI-SIG specification states a peak sustained draw of 55A, so it absolutely shouldn’t be melting with 450W (38A) cards, irrespective of the manufacturer.
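As a quick check on those figures, the sketch below derives a per-pin number from the quoted 55 A peak spread across the connector's six 12 V contacts; the per-pin value is a derivation, not a line quoted from the spec itself.

```python
# Per-pin current under the quoted 55 A peak, versus what 450 W and 600 W cards
# actually ask for across the six 12 V contacts of a 12VHPWR plug.
PINS = 6
print(55 / PINS)        # ~9.17 A allowed per pin at the quoted peak
print(450 / 12 / PINS)  # 6.25 A per pin for a 450 W card
print(600 / 12 / PINS)  # ~8.33 A per pin even at the connector's 600 W ceiling
```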
 
Way back in the Socket A days, a friend of mine decided to change the thermal paste. When he was reinserting the cooler into those damn clamps, one didn't lock all the way up.
He started the PC and had to go out; someone was at the front door. Guess what: in 5 minutes his room was on fire. After all the investigation, the police and firefighters concluded the PC was the cause.
That was back in the days when AMD CPUs didn't have thermal shutdown like they do now.
 
All graphics cards draw current at 12V, be it through the PCIe slot or the power connectors. So 600W total is indeed 50A of current (600/12 = 50).
And once again my lack of technological depth creeps into my life.
I always thought stated CPU/GPU wattage was at the wall.

When you (anyone) have the time, I'm curious how the industry actually measures this.
Let's say I turn on a desktop and load it down, CPU and GPU, with all the subsystems associated with it. Checking the draw at the wall reveals the system is drawing 600 watts total. That is 5 amps from the wall (2.5 amps on a 240-volt circuit).

Amperage = Watts / Volts
 
And once again my lack of technological depth creeps into my life.
I always thought stated CPU/GPU wattage was at the wall.

When you (anyone) have the time, I'm curious how the industry actually measures this.
Let's say I turn on a desktop and load it down, CPU and GPU, with all the subsystems associated with it. Checking the draw at the wall reveals the system is drawing 600 watts total. That is 5 amps from the wall (2.5 amps on a 240-volt circuit).

Amperage = Watts / Volts
You're correct in looking at wall voltage and currents - for total system load and heat going into your room.

But within the PC, the PSU converts the AC wall power to DC and steps its voltage way down, to 12V and below, where semiconductor junctions are far, far happier. So, on-board wattages will be at those voltages.

Personally, I am and have for a long time been astonished at the level of current managed in these PCBs and devices. For $200 I can buy an umpteen-layer mobo switching literally hundreds of amps at multiple voltages to multiple loads... wow.
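Working the question's 600 W-at-the-wall example through under an assumed, roughly typical 90% PSU efficiency, just to show where the numbers land; the real figure depends on the unit and the load.

```python
# A 600 W reading at the wall is AC input, so the DC the components actually
# receive is that figure minus conversion losses. 90% efficiency is an assumption.
wall_watts = 600
efficiency = 0.90
dc_watts = wall_watts * efficiency   # ~540 W available on the DC rails

print(wall_watts / 120)  # 5.0 A from a 120 V outlet
print(wall_watts / 240)  # 2.5 A from a 240 V outlet
print(dc_watts / 12)     # 45.0 A if all of that DC were delivered at 12 V
```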
 
It’s not proprietary to Nvidia, though. They may well have pushed for its inclusion with PCI-SIG, but it’s not exclusive to them.
Well, it may as well be because Radeons are just using the normal connectors. I don't really understand the logic behind it because normal PCI-Express supplementary power connectors work just fine so this is just a gimmick to me.
The PCI-SIG specification states a peak sustained draw of 55A, so it absolutely shouldn’t be melting with 450W (38A) cards, irrespective of the manufacturer.
I agree with you there..... but nevertheless, here we are. :laughing:
 
I don't really understand the logic behind it because normal PCI-Express supplementary power connectors work just fine so this is just a gimmick to me.
Having a single connector that allows up to 600W, plus offers additional sensing and signalling pins, in a format that’s way more compact than three 8 pin PCIe connectors is definitely of interest to the GPU industry. It’s cheaper, for one. 🤣
 
Having a single connector that allows up to 600W, plus offers additional sensing and signalling pins, in a format that’s way more compact than three 8 pin PCIe connectors is definitely of interest to the GPU industry. It’s cheaper, for one. 🤣

It would take four of the 8-pin PCIe power connectors for the 600-watt power limit cards. That's why they switched to the newer connector.
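As a rough tally of the connector counts being compared here, using the official 150 W rating per 8-pin PCIe plug and assuming the PCIe slot's 75 W is counted separately; whether a given card actually leans on the slot that way is an assumption.

```python
# Count how many 8-pin PCIe plugs (150 W each) a card would need, assuming the
# slot's 75 W is used first. Both figures are the official ratings.
import math

def eight_pins_needed(card_watts: float, slot_watts: float = 75) -> int:
    return math.ceil(max(card_watts - slot_watts, 0) / 150)

print(eight_pins_needed(450))  # 3 connectors for a 450 W card
print(eight_pins_needed(600))  # 4 connectors at the 600 W limit, vs. one 12VHPWR
```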
 
It would take four of the 8-pin PCIe power connectors for the 600-watt power limit cards. That's why they switched to the newer connector.
Correct. I’d only mentioned three because that’s how many Nvidia used, via the 12VHPWR adapter, for the 3090 Ti (and probably for the 4090 too, but I’ve not checked).
 
Good thing it didn't need a large part of that 800 watts.
Probably a bit overblown by the users though.
It's kinda hard to overblow something like this. That's like saying that the Gigabyte PSUs that poofed were overblown by Steve Burke. If the connector melted, it melted. It might not be a fire hazard but it's still pretty damn bad, especially considering that it cost US$1,600 for this melting privilege! :laughing:
The Nvidia 4090 wanted the crown regardless of power and price; now they've got the burning crown for sure.
They need to lower the power and the price, or add a fire extinguisher to every 4090 box. :)
What is it about GeForce cards that start with the number 4 after the letter prefixes? :laughing:

I won't buy any card that pulls more than 300W when gaming, or that costs more than $700. And it will be Team Red when I do upgrade. But if I had been considering a 4090 this would stop me cold. From the facts so far, it's a serious problem, involves major finger-pointing, and hard to be sure of a solid fix. I wouldn't want to buy into that situation, even potentially.
Can't say that I blame you. To pay US$1,600 for the privilege of melting a piece of your card would be a tough pill to swallow for me too.
I gotta say, I recently bought a 4090 and this whole fiasco is making me regret my decision...
You bought a halo card from nVidia mere weeks before the launch of the new Radeon cards? Methinks you're going to regret a lot more on November 3. A $1,600 impulse purchase is just insane unless you're rich enough that you don't need a job. :laughing:
Without research or knowledge. Of course. Let me bring up my Excel sheet right now.
What he means is that there were whispers all over the internet about this problem occurring with one of the prototypes. I'm guessing that you're new to this and everyone's entitled to royally screw up the first time. What's important now is that you don't repeat this mistake and above all else, don't listen to what other people say about things. Watch reviews on YouTube on channels like Hardware Unboxed, Gamers Nexus and Paul's Hardware.

And above all else, never buy a product, especially a halo product, when another company is so close to launching their competing product. At worst, you're able to make an informed decision and at best, you get a much better deal. Either way, it's just a matter of self-control. In fact, it's usually a better idea to buy the top model of the last generation when the new generation comes out because the release of the new generation pushes the prices of the older generation into the basement.
Bad connector. Too small of pins; they cheaped out trying to keep costs down. There's a reason the 8-pins are so big when they officially only put out 150 watts.

Three 8-pins could have easily fed the 4090; they were fine for the hungrier 295X2.
I agree. I don't know what nVidia was smoking when they came up with this idea. I'd say I want some but I don't ever want to become that stupid! :laughing:
With a 7900X and a 4090 total system power draw can definitely hit 800W.
Just imagine what an i9-13900K would do!
 
"Purchase shaming?" That's a thing now?
No, it's properly referred to as "Frankly Speaking" or "A Reality Check".

If someone does something galactically dumb, are you going to cheer them on for fear of "purchase shaming"? I assume that he's new to this and made a beginner's mistake, but it was still a bad decision. People generally don't get to make a bad decision and then complain about it to others without having to eat their fair share of crow in the process. Note that the only person who agrees with you is the person who committed this titanic blunder. Everyone else just read your post, thought "Pfft! Yeah, OK there, buddy!" and moved on. I figured that you were at least entitled to an explanation, so here I am giving it.

Welcome to the world, try not to burn yourself. :laughing:
I'll bet that nVidia and/or AIBs cheaped out on the connector of the included adapter cable, and either the pins are too loose, don't align properly, require too much insertion force, and/or the overall connection (clip and plastic housing combined) doesn't have enough retention force. It's also possible that PSU manufacturers cheaped out on this new connector too, because of a "no one will use this" attitude from management.
Not exactly what one would expect from one of the most expensive video cards ever released, eh? :laughing:
Way back in the Socket A days, a friend of mine decided to change the thermal paste. When he was reinserting the cooler into those damn clamps, one didn't lock all the way up.
He started the PC and had to go out; someone was at the front door. Guess what: in 5 minutes his room was on fire. After all the investigation, the police and firefighters concluded the PC was the cause.
That was back in the days when AMD CPUs didn't have thermal shutdown like they do now.
It's just mind-boggling the things that weren't considered imperative back then, eh? 🙄
And once again my lack of technological depth creeps into my life.
I always thought stated CPU/GPU wattage was at the wall.

When you (anyone) have the time, I'm curious how the industry actually measures this.
Let's say I turn on a desktop and load it down, CPU and GPU, with all the subsystems associated with it. Checking the draw at the wall reveals the system is drawing 600 watts total. That is 5 amps from the wall (2.5 amps on a 240-volt circuit).

Amperage = Watts / Volts
Gamers Nexus has some fancy equipment for just that purpose but I have no idea how it works. All I know is that it cost them thousands.
It would take four of the 8-pin PCIe power connectors for the 600-watt power limit cards. That's why they switched to the newer connector.
They should have just used the four PCI-Express connectors. That would've been A LOT simpler (and evidently, more reliable too).
 
You definitely sound like one of those amazing people who love to blame the victim or a person who got raped because of their clothes!!
Wow, you are a genius, using rape as a comparison for product knowledge.
It takes people minutes to find basic info. It's not rocket science.
 
"Those looking to upgrade without replacing their ATX 2.0 PSUs can use adapters bundled with the cards connecting three 8-pins or four 8-pins to one 12VHPWR cable."

Question: Unless I'm very mistaken, doesn't a single 8-pin have a 150W limit? Wouldn't that imply a converter cable that uses just three 8-pin connectors would result in at least one of those 8-pins exceeding that limit (if total draw of the GPU is >450W)?
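A rough way to answer that, assuming the adapter shares the load evenly across its 8-pin legs and the slot supplies its usual 75 W; real adapters may not balance the current that neatly, and the official 150 W per-connector figure is generally considered conservative relative to what the pins can physically carry.

```python
# Per-connector load when a GPU's draw is split across the legs of an adapter,
# assuming an even split and that the PCIe slot contributes its usual 75 W.
def watts_per_leg(gpu_watts: float, legs: int, slot_watts: float = 75) -> float:
    return max(gpu_watts - slot_watts, 0) / legs

print(watts_per_leg(450, 3))  # 125.0  -> within the 150 W per-connector spec
print(watts_per_leg(600, 3))  # 175.0  -> over the official 150 W rating
print(watts_per_leg(600, 4))  # 131.25 -> back within spec with the four-leg adapter
```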
 