Nvidia's GeForce RTX 3090, RTX 3080, and 3070 specs have been leaked

Is it common knowledge what TGP is? I'd never heard this acronym before.
Searching Wikipedia brings up a few likely candidates, e.g. Tokyo Ghetto *****.

Well, I guess it could be what this article claims too, but I have my doubts.
https://www.geeks3d.com/20190613/graphics-cards-power-tdp-tgp/

TGP = 'Total Graphics Power'

"The power that a power supply should provide to the graphics subsystem, an add-in-card in most cases. The application used to define TGP and TDP is a stressful “real world” application."

In other words, how much power the card will require from the PSU in a realistic, maximum scenario.
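
For anyone trying to turn a TGP number into a PSU choice, here's a minimal back-of-the-envelope sketch in Python (the component wattages and the 30% headroom are my own illustrative assumptions, not anything official):

```python
# Rough PSU sizing from a card's TGP. All figures here are
# illustrative assumptions, not official recommendations.
def recommended_psu_watts(gpu_tgp, cpu_tdp, other=75, headroom=0.30):
    """Sum the worst-case component draws, then add headroom so the
    PSU stays in its efficient load range and can absorb transients."""
    return (gpu_tgp + cpu_tdp + other) * (1 + headroom)

# Example: a hypothetical 350W-TGP card alongside a 125W CPU.
print(recommended_psu_watts(350, 125))  # -> 715.0, i.e. a ~750W unit
```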
 
The 24GB one seems fine but presumably will be very expensive. Everyone else gets screwed with 10GB in 2020, which is less than the 1080 Ti from two generations back... The only change will be the new price... I suspect not many will be buying these cards.
 
Yes, MS Flight Sim 2020 in 8K with TAA uses all 24GB of an RTX Titan. There are scenarios already in gaming that will fill up 24GB. The same resolution will run a good 25-30% faster on the 3090 than on the RTX Titan, theoretically...
Uhm, not really. There is a difference between what it will reserve versus what it actually uses; it appears as if it's using all of it, but it's not.
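
That reserve-versus-use distinction is also why monitoring tools can mislead: NVML, which most of them sit on top of, reports allocated VRAM, not what's actively needed. A quick sketch using the pynvml bindings (assumes an NVIDIA GPU and driver are present):

```python
# Sketch: query how much VRAM is *allocated*, which is what monitoring
# tools report - a game reserving memory for caching shows up here even
# if it would run fine with less. Requires the pynvml package.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"total:     {mem.total / 2**30:.1f} GiB")
print(f"allocated: {mem.used / 2**30:.1f} GiB")

pynvml.nvmlShutdown()
```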
 
I'm really interested in seeing what ray-tracing performance these cards will have now that the tech will be out of "beta" (and I would like to thank the RTX 2000 series beta testers for their sacrifice :D ).
Jensen stated the 3060 has double the RTX performance of the 2080 Ti, if that helps. What that actually means, though, and how it translates to resolution and frame rates, may be a different story. Either way, people jump the gun; synthetics are one thing, but we need to wait for actual performance numbers from the games themselves to see if the price is warranted. Currently the bottleneck is the monitor market anyway.
 
The 24GB one seems fine but presumably will be very expensive. Everyone else gets screwed with 10GB in 2020, which is less than the 1080 Ti from two generations back... The only change will be the new price... I suspect not many will be buying these cards.
Depends on how NVcache is used as well.
 
Oh Cyberpunk 2077 will fully utilize even the RTX 3090 alright.
There is no such thing as enough performance :), the more performance the better.
Now the only question is whether gamers can afford those extravagant GPUs... and looking at Nvidia's financial situation, the answer is probably yes.
I would probably say yes, Cyberpunk will be the poster child to show how and why you should upgrade. For some it will justify the price; as for others, well, judging by comments across the net, many had already made up their minds beforehand.
 
but but but you won't have ray tracing!! It's apparently gonna be ubiquitous with the PS5 & Xbox.

*sigh*

No it won't. Just because their GPUs can physically do ray tracing doesn't mean games will take advantage of it in any meaningful way.

No different from the ability to output @ 120Hz. Hell, Sony's first exclusive (Ratchet & Clank) for the PS5 is apparently running at 30 FPS because of all the other graphical effects they are using.
 

TGP = 'Total Graphics Power'

"The power that a power supply should provide to the graphics subsystem, an add-in-card in most cases. The application used to define TGP and TDP is a stressful “real world” application."

In other words, how much power the card will require from the PSU in a realistic, maximum scenario.
The forum censored my post, I feel violated :D
It would have been funnier with some context, so here you go :D (screenshot attached)
 
*sigh*

No it won't. Just because their GPUs can physically do ray tracing doesn't mean games will take advantage of it in any meaningful way.

No different from the ability to output @ 120Hz. Hell, Sony's first exclusive (Ratchet & Clank) for the PS5 is apparently running at 30 FPS because of all the other graphical effects they are using.
It seems Ratchet & Clank has a 60 FPS mode with upscaling and a native 4K 30 FPS mode. It's good that they are at least offering the option of 60 FPS, considering how good it looks.
 
Want to bet on it?
The devil is in the details. I predict both that the initial supply will sell out and that the total quantity will be a tiny fraction of the total video card market, meaning you could both be right.
 
I'm guessing that the high-end chip is designed using TSMC's Process Design Kit (PDK), probably by the same group that designed GA100. Mid-range and entry-level chips, on the other hand, are designed using Samsung's PDK. I hope Nvidia has better luck than Qualcomm in dealing with Samsung's FinFET and EUV.
 

TGP = 'Total Graphics Power'

"The power that a power supply should provide to the graphics subsystem, an add-in-card in most cases. The application used to define TGP and TDP is a stressful “real world” application."

In other words, how much power the card will require from the PSU in a realistic, maximum scenario.

So from what I understand, TGP is a conservative power figure that the graphics card will never exceed under stressful testing conditions (maximum power draw), while TDP is only an average figure during real-world usage (meaning the GPU can sometimes exceed this figure).

Taking the 2080 Ti for example: the spec says the 2080 Ti Founders Edition has a TDP of 260W; however, it can draw close to 290W at peak:
(chart: measured peak power draw)


That means the RTX 3090 will rarely use 350W, if it even reaches that at all; Nvidia is just being cautious with their power usage figures...
 
So from what I understand, TGP is a conservative power figure that the graphics card will never exceed under stressful testing conditions (maximum power draw), while TDP is only an average figure during real-world usage (meaning the GPU can sometimes exceed this figure).
TDP is about heat transfer and this will always be less than the electrical power consumed (one can’t make energy from nowhere), hence TGP is more useful for determining PSU requirements - it’s also directly measurable via onboard sensors on the card’s +12V lines.

But hitting TGP isn’t going to be a rare thing at all - graphics cards routinely reach their power limit, if they’re not thermally throttled. My 2080 Super, for example, always hits its 250 W TGP, even under normal gaming.
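
Those same sensors are easy to poll yourself. Here's a small sketch via the pynvml bindings (NVML reports power in milliwatts, and the enforced limit is effectively the card's TGP ceiling):

```python
# Sketch: sample the card's reported power draw against its enforced
# power limit, using NVML's onboard sensors. Values come back in mW.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000

for _ in range(5):                                  # five 1-second samples
    draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000
    print(f"{draw_w:6.1f} W of {limit_w:.0f} W limit")
    time.sleep(1)

pynvml.nvmlShutdown()
```

Run it while gaming and you'll typically see the draw pinned right at the limit, as described above.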
 
TDP is about heat transfer and this will always be less than the electrical power consumed (one can’t make energy from nowhere)
Isn't the correct statement that TDP will always be equal to the electrical power consumed? If a card is drawing 250 watts at a moment in time, it is generating 250 watts of waste heat.
 
Maybe someone already commented on this, but is there any point to having 24GB of VRAM? If that's one of the things making the price so high on the 3090, it would be silly of me to pay for something that's essentially useless. Though I wonder if the 10GB on the 3080 is potentially too little in a year or two's time? The Resident Evil games required 13GB, if I remember rightly, at 4K with ultra settings, although the game ran fine even with 11GB.

Davinci Resolve eats VRAM for breakfast.
 
Isn't the correct statement that TDP will always be equal to the electrical power consumed? If a card is drawing 250 watts at a moment in time, it is generating 250 watts of waste heat.

Well, TDP is the term meant for cooler design. A cooler designed for a 125W TDP CPU can tolerate 250W of heat for a short time; that's why Intel uses low TDP figures for CPUs which are capable of drawing much more power than the stated TDP (the 10900K has a TDP of 125W but a PL2 of 250W for 56 seconds).
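
Roughly, Intel's boost budget works by comparing an exponentially weighted moving average of power draw against PL1: the chip may draw up to PL2 until that average reaches PL1, with tau setting how long that takes. A toy simulation of the idea (the numbers mirror the 10900K example; the real algorithm and its starting state are more involved):

```python
# Toy model of Intel's PL1/PL2 turbo budget: the CPU may draw up to PL2
# until an exponentially weighted moving average of power reaches PL1.
# Numbers mirror the 10900K example; the real algorithm is more involved.
PL1, PL2, TAU, DT = 125.0, 250.0, 56.0, 1.0   # watts, watts, seconds, step

avg, t = 0.0, 0.0                             # start from an idle average
while avg < PL1:
    alpha = DT / TAU
    avg = (1 - alpha) * avg + alpha * PL2     # boosting at PL2 fills budget
    t += DT
print(f"boost at {PL2:.0f} W lasted ~{t:.0f} s before dropping to {PL1:.0f} W")
```

Starting from idle, this toy version sustains PL2 for roughly 40 seconds; the actual duration depends on the load history baked into the average.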

Meanwhile, Nvidia and AMD use TDP, but I think TGP is the more correct term, since their video cards never exceed the stated TDP (except for power spikes that last a few milliseconds).

TDP is about heat transfer and this will always be less than the electrical power consumed (one can’t make energy from nowhere), hence TGP is more useful for determining PSU requirements - it’s also directly measurable via onboard sensors on the card’s +12V lines.

But hitting TGP isn’t going to be a rare thing at all - graphics cards routinely reach their power limit, if they’re not thermally throttled. My 2080 Super, for example, always hits its 250 W TGP, even under normal gaming.

Well, different vendors have different TGP values for their own models; for example, the Aorus 2080 Super has a max TGP of 350W, and you will never see the card use that much power in real-world usage.

The 2080 Ti FE has a max TGP of 320W, not that much lower than the rumored 350W of the RTX 3090. I would think 350W is the upper TGP limit of the RTX 3090 FE.
 
Isn't the correct statement that TDP will always be equal to the electrical power consumed? If a card is drawing 250 watts at a moment in time, it is generating 250 watts of waste heat.
No, because some of that drawn energy gets stored in magnetic fields (in the inductors) and used to transfer charge across electric potentials (in the transistors and diodes). The large majority of it does end up as heat though.

Intel uses low TDP figures for CPUs which are capable of drawing much more power than the stated TDP (the 10900K has a TDP of 125W but a PL2 of 250W for 56 seconds).
Both PL1 and PL2 are 'TDP' values - Intel uses the term to mean power consumption rather than heat transfer (something that can be verified with their own Power Gadget).
 
TDP and TGP have never been very accurate values; in reality they only give approximate thermal dissipation/power consumption. Even when comparing values between different products from the same manufacturer, there are usually many oddities. And comparing TDP/TGP between products from different manufacturers is usually a total waste of time.

So at this stage, the TGP values only tell us that "the cards will consume quite a lot", but comparing them directly against the RTX 2080 series is just pointless.
 
No, because some of that drawn energy gets stored in magnetic fields (in the inductors)....
True, but it's not stored there permanently. An RLC circuit continually oscillates ... at any given moment it may be storing or releasing energy, but integrated over any significant time period, the net is zero.

Stated in more concrete terms: if you run your video card at 125W for 2 hours (7200s), the total amount of heat produced will be 900,000 joules ... minus a few spare joules which may or may not be stored in the Ls and Cs (and, if we assume an arbitrary period, were there beforehand anyway).

...and used to transfer charge across electric potentials (in the transistors and diodes).
When you transfer charge across a transistor or diode junction, you do so against a resistance. That generates an amount of heat exactly equal to the work done (the energy consumed) in transferring the charge.
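
For anyone who wants the bookkeeping spelled out: over one complete cycle of period T, the reactive elements hand back exactly what they stored, while the resistance only ever dissipates (a standard result, written here in LaTeX):

```latex
% Energy over one full cycle: inductor and capacitor store and return,
% resistance dissipates.
\int_0^T v_L\, i \,dt = \Bigl[\tfrac{1}{2} L i^2\Bigr]_0^T = 0,
\qquad
\int_0^T v_C\, i \,dt = \Bigl[\tfrac{1}{2} C v_C^2\Bigr]_0^T = 0,
\qquad
\int_0^T i^2 R \,dt > 0 .
```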
 
True, but it's not stored there permanently. An RLC circuit continually oscillates ... at any given moment it may be storing or releasing energy, but integrated over any significant time period, the net is zero.
The net for the component itself may be zero, but the environmental losses aren't (i.e. the e/m fields aren't fully constrained). They're not significant, though, of course.

When you transfer charge across a transistor or diode junction, you do so against a resistance. That generates an amount of heat exactly equal to the work done (the energy consumed) in transferring the charge.
I obviously over-simplified matters, because with MOSFETs you also have a number of capacitive losses. But even if they, and the resistive losses, didn't exist, work is still done in transferring charge. Hence, the total energy required is the potential change + resistive + capacitive losses, but it's only the resistive loss that really becomes heat. But of course, even this is far too simplistic a view. This paper provides far more depth on the matter (and it's a very good read):


The total heat transferred from an entire graphics card will always be less than the total energy being transferred to it - what it won't be is notably less. I've done demonstrations for students of thermodynamics, using TRTI on power amps, and there's never a direct 1:1 ratio between the measured energy consumption and the calculated heat loss (within the granularity of the system).
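
For reference (my own summary, not from the paper linked above), the textbook first-order breakdown of CMOS power puts names on those loss terms:

```latex
% First-order CMOS power model: switching (capacitive) power, plus
% short-circuit and leakage terms. alpha = activity factor, C = switched
% capacitance, f = clock frequency.
P_{\text{total}} = \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{switching}}
\;+\; \underbrace{V_{dd}\, I_{\text{sc}}}_{\text{short-circuit}}
\;+\; \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{leakage}} .
```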
 