Nvidia shares RTX 3000 graphics card details ahead of August 31 unveiling

6 pin and 8 pin connectors are already smaller than a single slot cooler they were designed to fit into, so why is footprint even an issue?
IMO, it was more about increasing total power ... while the engineers were at it, they decided to decrease the footprint a bit as well, in the hopes of reducing some of the customer pushback they expected.
 
The graphics card looks like it takes up 3 slots and is far taller than previous cards, so footprint is a non-issue. 6 pin and 8 pin connectors are already smaller than a single slot cooler they were designed to fit into, so why is footprint even an issue?
Footprint on the PCB - if you look at the first video image I posted, you can see that the connector sits on the 'V' at the end of the PCB. It doesn't look like there would be room for two 8 pin connectors.
 
Footprint on the PCB - if you look at the first video image I posted, you can see that the connector sits on the 'V' at the end of the PCB. It doesn't look like there would be room for two 8 pin connectors.
What did they do to decrease the footprint? They're still 12V, so it needs to provide the same amount of current. To make the math easy: if you need a total cross-sectional area of 10mm^2 of wire to provide 30 amps (360W @ 12V), splitting it up over 12 wires instead of 8 isn't going to decrease the overall volume of wire required. Aside from a single image and a claim of "making it smaller while also carrying more power", they didn't really explain anything about it. That said, I did skip through it quickly trying to find anything about them talking about the connector. I actually probably spent more time skipping around than I would have if I had just watched it the whole way through.
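As a rough sketch of that point (the 3 A/mm^2 current density is an assumed figure, picked only so the numbers line up with the 10mm^2 example above, not a value from any spec):

```python
# Total copper cross-section is set by the current, not by how many wires carry it.
# The 3 A/mm^2 current density is an illustrative assumption, not a spec value.
CURRENT_DENSITY_A_PER_MM2 = 3.0

def total_cross_section_mm2(power_w, voltage_v=12.0):
    """Total conductor cross-section needed to deliver power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a / CURRENT_DENSITY_A_PER_MM2

def per_wire_mm2(power_w, n_wires, voltage_v=12.0):
    """Cross-section of each wire when the load is split evenly across n_wires."""
    return total_cross_section_mm2(power_w, voltage_v) / n_wires

for n_wires in (8, 12):
    print(f"{n_wires} wires: {per_wire_mm2(360, n_wires):.2f} mm^2 each, "
          f"{total_cross_section_mm2(360):.1f} mm^2 of copper in total")
# Both cases need the same ~10 mm^2 of copper overall; only the per-wire split changes.
```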

This is also coming from the company that just went on about thermodynamics:
"As Nvidia thermal architect David Haley recounts, the first law of thermodynamics states that energy cannot be created or destroyed in a closed system."

Does physics only apply when nVidia wants it to? Also, I looked up the adapter online: it uses a 2x 8 pin to 12 pin adapter, at least that's what AnandTech said. They designed and patented their 12 pin connector, so they're likely to charge PSU manufacturers to put it on because "mUh NvIdIa". There will start being "GeForce Ready" or "RTX Ready" power supplies, which is nothing but marketing that they can charge other companies for: "give us money and you can put a sticker on your product".

nVidia makes this stupid card with a stupid cooler, and then they have to design an entire power connector around it to fit. The fact that the card is the largest PCIe card I've ever seen and they have to make a whole new power connector just to make it fit, or whatever, really makes me question the hype of the 3000 series. It's bigger in every dimension than the last cards - height, width and depth - at least that's what I get from the pictures. They need to make a special connector because two 8 pins won't fit?

Going into the connectors, PCI-E power connector math really doesn't make any sense. A 6 pin provides 75 watts and an 8 pin provides 150W? I don't understand how adding another negative and positive wire adds 75 watts if a 6 pin is 3 negative and 3 positive. If we want to do some math, each wire can only carry 25 watts (or ~2A @ 12V). I don't understand how going to 8 pins magically makes the same gauge wire capable of ~3 amps. I will say that I've salvaged wire from PSUs to use in other things and pushed it well past the 20A @ 12V mark on a single wire.
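For the record, here is what the spec ratings work out to per pin if (as gets pointed out later in this thread) both connectors only carry three +12V pins; the figures are just the 75W/150W ratings divided out:

```python
# Per-pin current implied by the PCIe connector ratings, assuming three +12V
# pins in both the 6 pin and 8 pin (the extra 8 pin wires being ground/sense).
RATINGS_W = {"6 pin": 75, "8 pin": 150}
TWELVE_VOLT_PINS = 3
VOLTAGE_V = 12.0

for name, watts in RATINGS_W.items():
    amps_per_pin = watts / (VOLTAGE_V * TWELVE_VOLT_PINS)
    print(f"{name}: {watts}W -> {amps_per_pin:.2f}A per +12V pin")
# 6 pin: ~2.08A per pin, 8 pin: ~4.17A per pin - the spec rating roughly doubles
# per pin even though the wire gauge is usually the same, which is the oddity
# being complained about above.
```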

This whole thing is the GPU equivalent of taking the headphone jack off the iPhone to make it thinner, except the card just gets bigger anyway.
 
Yep, it's the reason Fermi still sold well despite being extremely hot. Nvidia just has a massive brand power advantage.
You forget two very important factors:
1) Fermi held the crown. The 6970 couldn't touch the 580's performance, even OC vs OC. AMD fell asleep at the wheel and panicked when Nvidia fixed Thermi.
2) AMD's Terascale drivers were utter trash. Unspeakable garbage. Back then and today, their Terascale drivers suck arse, and there is a reason AMD got a reputation of making WORSE drivers than ATI.

Fermi OTOH was rock stable.

Fermi didn't just have brand name. It was faster and more stable. AMD had power efficiency on their side but nothing else.
 
What did they do to decrease the footprint? They're still 12V, so it needs to provide the same amount of current.
Footprint, as in the physical area it takes up on the PCB - a smaller, narrow 12 pin connector, mounted vertically, covers less of the board than two PCIe 8 pin connectors side-by-side.

Going into the connectors, PCI-E power connector math really doesn't make any sense. A 6 pin provides 75 watts and an 8 pin provides 150W? I don't understand how adding another negative and positive wire adds 75 watts if a 6 pin is 3 negative and 3 positive.
The extra wires in the 8 pin are just additional grounds, not negatives/positives. It just provides better line stability at higher current draws.
 
The extra wires in the 8 pin are just additional grounds, not negatives/positives. It just provides better line stability at higher current draws.
Could have sworn it was 4x4, but I'm human and I've been mistaken before.

EDIT:
When I was talking about wire area, I was talking about traces on the PCB, but I guess you're talking about connector footprint on the PCB. I do still think it's silly.
 
Could have sworn it was 4x4, but I'm human and I've been mistaken before.

EDIT:
When I was talking about wire area, I was talking about traces on the PCB, but I guess you're talking about connector footprint on the PCB. I do still think it's silly.
4x4 is CPU, not PCIe.
 
Oh well I just hope there are waterblock designs coming out ASAP, can't move back to air cooled GPU now ~_~.
 
4x4 is CPU, not PCIe.
What are you talking about? There's no context to what you're saying. Are you talking about 4 positive and 4 negative, or are you talking about a connector type? Because my car is 4x4, so maybe you're talking about that.
 
You forget two very important factors:
1) Fermi held the crown. The 6970 couldn't touch the 580's performance, even OC vs OC. AMD fell asleep at the wheel and panicked when Nvidia fixed Thermi.
2) AMD's Terascale drivers were utter trash. Unspeakable garbage. Back then and today, their Terascale drivers suck arse, and there is a reason AMD got a reputation of making WORSE drivers than ATI.

Fermi OTOH was rock stable.

Fermi didn't just have brand name. It was faster and more stable. AMD had power efficiency on their side but nothing else.

Fermi competed against Terascale 2: https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/8.html

The 580 is not original Fermi; it's a revision of Fermi which released a whole 8 months later, and that's considering the 480 was already 6 months behind the 5870. You definitely have your GPU generations mixed up. Given that you were replying to my mention of Fermi, and I was clearly referring to FERMI, the subject is the GTX 480, not the 580 or any later generation AMD card.

The GTX 480 held the crown for single GPU performance; AMD held the overall performance crown: https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/30.html

You could get an AMD dual GPU card or SLI/Crossfire and the power consumption would be on par with a single 480 while providing more performance. SLI / Crossfire was supported in more games back then and was much more common for enthusiasts so the comparison is fair given the context.

It beat the 5870 (AMD's top single GPU card) by about 15% while drawing roughly 150% of the power. There is over a 100W difference between the two, which is absolutely ridiculous. Not to mention the 480 released a full 6 months after the 5870.

Reviews around the internet seem to echo something along the same line: https://www.guru3d.com/articles_pages/radeon_hd_5870_review_test,27.html

Mind you, it was AMD that first came out with multi-monitor gaming, called Eyefinity, which Nvidia copied and renamed Nvidia Surround. Nvidia's version was not bug free when it first released either.

I can't speak on AMD 6000 series drivers as I've never had a card from that generation. Just a heads up though, the 6000 series was Terascale 3. I can say that the drivers for the 5000 and 4000 series were pretty dang good, and reviews seem to echo this sentiment. Rory Read took over AMD in 2011, and that coincides with the company's slip in both the CPU and GPU markets. He pushed AMD to produce smaller GPUs and did not want them to spend money on large GPUs.
 
Ground is negative. It's not an earthed AC ground.
Yes, it's not an actual earthed connection. The remark was to indicate that the only difference between an 8 pin and a 6 pin PCIe power connector is that the extra lines aren't additional +12V, just two more GND.

It's interesting to note that Seasonic's PSU adapter cable physically connects two 8 pin PCIe connectors to Nvidia's 12 pin one, but electrically it's two 6 pins:

[Image: Seasonic's 2x 8 pin PCIe to 12 pin adapter cable]

This would suggest that, for Seasonic's PSUs at least, the modular outputs can take the same current draw regardless of whether a 6 pin-powered or 8 pin-powered graphics card is using them. Which raises the possibility, albeit a rather unlikely one, that the NV12P system is rated for more than 300W.
 
Nvidia is clearly gimping the RAM on these cards. Going from an 8GB standard to a 10GB standard instead of 16GB is probably a decision they made to give people more reason to upgrade again before the PS5 era ends. Since PC exclusives don't happen anymore, a gaming PC only needs to be as good as a console, but if they don't make any cards with at least as much VRAM as the consoles will have, these cards won't run certain games well. Nvidia is playing a shitty game here and I hope people see it for what it is and skip these cards. The 1080 Ti has 11GB but the 3080 will have less? Clearly planned obsolescence.
Especially as RAM prices are crashing.

Oddly though I like their idea of designing the card around cooling as it means they can get the max performance out of the silicon. This reminds me of what Mark Cerny has done with the PS5 in regards to designing it around a maximum power draw.
 
Performance has gone up mostly through component scaling. There doesn’t seem to be much, if any, increase in the clocks - time will tell on that one, of course. Consumer grade Ampere may well be Nvidia’s FX moment again but even if it’s not, I suspect many people’s expectations are not going to be met.

Reading around the web shows that too many people are hoping for huge performance gains, in the region of 50% or more. That's just not going to happen for these products - the chip required would be the size of the GA100. If the likes of the 3090 is, all things considered, 30% better than the 2080 Ti, then despite the decent improvement (how often does one see a CPU improve by 30% with each generation?), Nvidia is likely to get panned for it.
Also Nvidia will want to profit from these cards big time. So I think they'll make smaller dies (in mm²) than the Turing GPUs. I think they'll be aiming for Pascal regions of die sizes, which worked out very profitable for them. So if they can pump more wattage into the GPU and get more performance without increasing the die size, then it's a winner for them, even if consumers are suddenly having to pay for more energy usage.
 
Also Nvidia will want to profit from these cards big time. So I think they'll make smaller dies (in mm²) than the Turing GPUs. I think they'll be aiming for Pascal regions of die sizes, which worked out very profitable for them.
The general overview of the rumour mill on the die size is putting the GA102 at roughly 80% of the size of the TU102 - if true, this would give around a 30% increase in dies per wafer over Turing, but still be 30% lower than what Pascal achieved (just considering the largest chips for each architecture).
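As a back-of-the-envelope check on that 30% figure, here's a sketch using the usual dies-per-wafer approximation on a 300mm wafer; the ~754 mm² TU102 area is public, while the GA102 area here is simply the rumoured 80% of it:

```python
import math

WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2, wafer_d=WAFER_DIAMETER_MM):
    """First-order approximation; ignores scribe lines and defect density."""
    wafer_area = math.pi * (wafer_d / 2) ** 2
    edge_loss = math.pi * wafer_d / math.sqrt(2 * die_area_mm2)
    return wafer_area / die_area_mm2 - edge_loss

tu102_mm2 = 754.0              # Turing TU102 die area
ga102_mm2 = 0.8 * tu102_mm2    # rumoured ~80% of TU102

tu = dies_per_wafer(tu102_mm2)
ga = dies_per_wafer(ga102_mm2)
print(f"TU102: ~{tu:.0f} candidate dies/wafer")
print(f"GA102: ~{ga:.0f} candidate dies/wafer")
print(f"Increase: ~{(ga / tu - 1) * 100:.0f}%")   # works out to roughly 30%
```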
 
This would suggest that, for Seasonic's PSUs at least, the modular outputs can take the same current draw regardless of whether a 6 pin-powered or 8 pin-powered graphics card is using them... Which raises the possibility...that the NV12P system is rated for more than 300W.
I don't follow that logic. First of all, an 8pin connector is 150w; even if one assumes no difference between the dual-six and dual-eight wattage, that's still only 300w, not more than that.

Secondly, another possibility is that NVidia needed more than 150w from the connector, but not double that. The actual cable draw could be anywhere from 150 to 300 watts. Certainly no more. Add in the 75w from the bus and you're at 375w max for the card itself.

Thirdly, a PSU's 12v rails -- Seasonic or otherwise -- allow the same current draw, no matter what is plugged into them. For a PSU with a single 12v rail, all three of an 8- (or 6-) pin adaptor's 12v lines are going to be fed off that same rail.
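A minimal sketch of the budget arithmetic from the first two points, using the standard spec ratings (75W from the slot, 75W per 6 pin, 150W per 8 pin):

```python
# Spec-rated power ceiling for a card: PCIe slot plus auxiliary connectors.
SLOT_W = 75
CONNECTOR_W = {"6 pin": 75, "8 pin": 150}

def max_board_power(*connectors):
    """Board power ceiling implied by the slot plus the listed connectors."""
    return SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(max_board_power("8 pin", "8 pin"))   # 375W - the familiar 2x 8 pin ceiling
print(max_board_power("6 pin", "6 pin"))   # 225W - what a literal dual 6 pin wiring would imply
```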
 
I like their idea of designing the card around cooling as it means they can get the max performance out of the silicon.
When you're designing cards with these sorts of power draws, cooling is an idea that's forced upon you, whether you like it or not.

At the rate NVidia is going, the 5000 series of cards will abandon the PC power supply entirely, and come with a 120 volt AC wall plug.
 
I don't follow that logic. First of all, an 8pin connector is 150w; even if one assumes no difference between the dual-six and dual-eight wattage, that's still only 300w, not more than that.

Secondly, another possibility is that NVidia needed more than 150w from the connector, but not double that. The actual cable draw could be anywhere from 150 to 300 watts. Certainly no more. Add in the 75w from the bus and you're at 375w max for the card itself.
The 150W limit is only there for achieving PCI Express compliance and there's no requirement for any GPU vendor to adhere to this. For example, AMD's Radeon Vega FE LC could easily exceed the two 8 pin + PCIe slot 375W rating, especially when overclocked.

Nvidia's connector is currently proprietary (although it has apparently been submitted for PCIe approval) which provides the scope for a higher limit. Given that they've gone with this design to get around the space requirements for two 8 pin connectors, it would be somewhat short-sighted to create something that has no benefits other than simple PCB footprint reduction.

Looking at the Seasonic adapter, it would appear the NV12 pin connector uses a total of six 12V pins. If one assumes that they retain the present compliance limit for +12V current (a little over 4 A), then yes - it won't support more than 300W. But let's say it's a 5 or 6 A limit, then you're looking at a higher limit.
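To put numbers on that: six +12V pins at a few assumed per-pin current limits (the ~4.17A figure corresponds to the "little over 4 A" compliance limit mentioned above; the 5A and 6A values are purely hypothetical):

```python
# Power available from six +12V pins at different assumed per-pin current limits.
TWELVE_VOLT_PINS = 6
VOLTAGE_V = 12.0

for amps_per_pin in (4.17, 5.0, 6.0):
    watts = TWELVE_VOLT_PINS * VOLTAGE_V * amps_per_pin
    print(f"{amps_per_pin:.2f}A per pin -> ~{watts:.0f}W")
# ~300W at the current compliance limit, 360W at 5A, 432W at 6A.
```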

Edit: I should also apologise for an earlier mistake. The additional wires in the 8-pin vs 6-pin aren't both grounds; one of them is a simple high/low check to indicate whether it's 75 or 150W.
 
The 150W limit is only there for achieving PCI Express compliance...it would appear the NV12 pin connector uses a total of six 12V pins. If one assumes that they retain the present compliance limit for +12V current (a little over 4 A), then yes - it won't support more than 300W. But let's say it's a 5 or 6 A limit, then you're looking at a higher limit.
All true ... but I read your original statement as concluding that, since the cable is electrically equivalent to a dual 6-pin rather than a dual 8-pin, it allows the possibility of exceeding a 300w draw.

The cable may or may not exceed 300w, but I don't see the relevance of the first fact to establishing the potential for the second.
 
All true ... but I read your original statement as concluding that, since the cable is electrically equivalent to a dual 6-pin rather than a dual 8-pin, it allows the possibility of exceeding a 300w draw.

The cable may or may not exceed 300w, but I don't see the relevance of the first fact to establishing the potential for the second.
Yes, apologies for that mess. In my head, I was trying (and miserably failing) to remark that while it appears to be wired as two 6-pin PCIe connectors slapped together, it couldn't be that way as it wouldn't provide enough power. In my defence, I had barely drunk any coffee at that point :)

Having drunk some more now, and nosed around the web, there's some evidence to suggest that the new connector does have scope for >300W power.

Of course, the current connectors themselves can cope with higher current draws than the PCIe rating, and none of this automatically means that any of the forthcoming Ampere cards have TDPs north of 375 W. But it is interesting that Nvidia seems to be pushing this change.
 
In my defence, I had barely drunk any coffee at that point :)
No worries; I honestly thought I might be missing something in the initial argument.

But it is interesting that Nvidia seems to be pushing this change.
Now that their market cap exceeds Intel's, it wouldn't surprise me to see them leading a redesign of the entire ATX standard to be more GPU-centric, rather than CPU-centric.
 