Nvidia's GeForce RTX 3090, RTX 3080, and 3070 specs have been leaked

The arithmetic mean for the component may be average, but the environmental losses aren't (i.e. the e/m fields aren't fully constrained).
They're not fully constrained to the inductor, or even to the card itself, true. But they still eventually interact with something, and become waste heat. You point out they're not significant (I believe the FCC limits emissions to the level of a few microvolts/meter) but even ignoring those limits, the work-energy equation still must balance.

Hence, the total energy required is the potential change + resistive + capacitive losses, but it's only the resistive loss that really becomes heat. But of course, even this is far too simplistic a view. This paper provides far more depth on the matter (and it's a very good read):
I believe you're misinterpreting statements in that paper such as "The [classical] expression above will overestimate the total power dissipated in a quasi-ballistic device..." Certainly, classical-scale calculations of both resistance and charge flow become tricky at the nanoscale, and you cannot treat the gate of a nanoscale device as a closed system and blindly apply Joule's Law. But I didn't see anything in that paper contradicting that law, and in fact it specifically states it should be applied "with care". The calculated power dissipation may indeed change ... but of the power that is dissipated, it all eventually becomes heat. That's inherent in conservation of energy.

Let's take capacitive losses, for instance. An ideal capacitor has none. But in a real-world capacitor, current leads voltage by slightly less than 90 degrees, and a small loss occurs. That loss becomes heat ... which is why an overdriven capacitor can heat up enough to explode (that's why can-type caps all have the 'X' scored atop them: to keep them from becoming little bombs in a worst-case scenario). These losses, like the work done against any non-conservative force, are ultimately dissipated ... in this case, as heat.
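Just to put a rough number on how small (but real) that loss is, here's a back-of-envelope sketch; the ripple current and ESR figures are made-up illustrative values, not measurements of any actual part:

```python
# Back-of-envelope loss in a non-ideal capacitor. Both numbers below are
# illustrative assumptions, not measurements of any real component.
i_ripple_rms = 0.5   # A, assumed RMS ripple current through the cap
esr = 0.01           # ohm, assumed equivalent series resistance

# The non-ideal (non-conservative) part of the behaviour shows up as an
# I^2 * R loss in the ESR, and that loss ends up as heat in the package.
p_loss = i_ripple_rms ** 2 * esr
print(f"Dissipated in the capacitor: {p_loss * 1e3:.2f} mW")  # ~2.50 mW
```

Tiny, but strictly greater than zero, and all of it ends up as heat.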

Now let's take potential change. An increase becomes stored potential energy (hence the name), not heat, true. But a real-world circuit switching in the gigahertz range is changing state several billion times a second. Half of that time, the potential at any arbitrary point in the circuit is increasing; the other half, it's decreasing. Unless you integrate over a half-cycle or a shorter period, your average potential change is zero.
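Here's a toy numeric check of that point, assuming a purely illustrative 1 GHz triangle-wave node voltage (the frequency and swing are arbitrary):

```python
import numpy as np

# Toy check: the net potential change over a whole switching cycle is zero,
# so only the dissipative terms survive the average. All values are arbitrary.
f = 1e9                                 # Hz, assumed switching frequency
period = 1.0 / f
t = np.linspace(0.0, period, 10001)
# Node voltage rises 0 -> 0.9 V over the first half-cycle, falls back over the second.
v = 0.9 * (1.0 - 2.0 * np.abs(t / period - 0.5))

half = len(t) // 2
print(f"net dV over the first half-cycle: {v[half] - v[0]:+.3f} V")  # +0.900 V
print(f"net dV over the full cycle:       {v[-1] - v[0]:+.3f} V")    # +0.000 V
```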
I've done demonstrations for students of thermodynamics, using TRTI on power amps, and there's never a direct 1:1 ratio between the measured energy consumption and the calculated heat loss (within the granularity of the system).
If you're measuring a closed system, it has to be. I too have taught thermodynamics, and I hope I wasn't teaching it wrong. :) Where else does that energy have to go? Now if you're referring to a non-shielded circuit radiating a significant amount of its consumed power as RF, then obviously if you measure only the heat produced by the device itself, you'll find a shortfall. But that's not a closed system ... and I don't believe it's applicable to graphics cards in any case.
 
the work-energy equation still must balance.
Absolutely! It's basically TGP = TDP + a tiny bit of non-heat loss :)

but of the power that is dissipated, it all eventually becomes heat. That's inherent in conservation of energy.
Again, agreed! Losses to an e/m field, for example, do ultimately end up as heat, which leads on to:

Now if you're referring to a non-shielded circuit radiating a significant amount of its consumed power as RF, then obviously if you measure only the heat produced by the device itself, you'll find a shortfall. But that's not a closed system ... and I don't believe it's applicable to graphics cards in any case.
A PC isn't a thermodynamically closed system. It would be interesting to know just how much e/m radiation graphics cards do emit. As you've pointed out, legally it can't be very much, but there is some.
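For a sense of scale, here's a rough upper bound on that radiated power, assuming the field at 3 m sits right at an FCC Class B style radiated-emissions limit of around 200 µV/m (that figure, and the isotropic-radiation assumption, are just for illustration):

```python
import math

# Rough upper bound on radiated RF power from a card that just meets a
# radiated-emissions limit. Both numbers are assumptions chosen for scale.
e_field = 200e-6   # V/m, assumed field strength at the measurement distance
distance = 3.0     # m, a typical test distance for this kind of limit

power_density = e_field ** 2 / 377.0                       # W/m^2 (377 ohm free space)
p_radiated = power_density * 4 * math.pi * distance ** 2   # assume isotropic radiation
print(f"Radiated power: ~{p_radiated * 1e9:.0f} nW")       # roughly 12 nW
```

Nanowatts against a card drawing hundreds of watts, so TDP and TGP are effectively identical at any precision we could ever measure.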

And this is all that I was ultimately implying when I originally said "TDP is about heat transfer and this will always be less than the electrical power consumed" - I'm definitely not trying to argue that it's significantly less. It would be an interesting exercise to mount a card in a sealed volume of gas and record temperatures to estimate the actual heat transfer, although this method would miss the heat lost via conduction to the motherboard.
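If anyone wants to sanity-check the sealed-enclosure idea, here's a minimal sketch of the arithmetic; the enclosure volume, temperature rise, and run time are all assumed numbers, and it ignores the heat soaked up by the card and the walls, so it only gives a lower bound:

```python
# Very rough sketch of the sealed-enclosure calorimetry idea: infer average
# heat output from the temperature rise of a known mass of gas. Every number
# here is an assumption chosen only to illustrate the method.
air_mass = 0.36    # kg, roughly 0.3 m^3 of air at ~1.2 kg/m^3 (assumed volume)
cv_air = 718.0     # J/(kg*K), specific heat of air at constant volume
delta_t = 12.0     # K, assumed measured temperature rise of the gas
elapsed = 10.0     # s, assumed run time

q_absorbed = air_mass * cv_air * delta_t     # J absorbed by the gas
avg_heat_output = q_absorbed / elapsed       # W, lower bound on the heat output
print(f"{q_absorbed / 1000:.1f} kJ absorbed -> ~{avg_heat_output:.0f} W average")
```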

I think we thermonerds have taken this rather too far off-topic :neutral:
 
Yes, MS Flight Sim 2020 in 8K with TAA uses all 24 GB of an RTX Titan. There are already scenarios in gaming that will fill up 24 GB. The same resolution will run a good 25-30% faster on a 3090 than on an RTX Titan, theoretically...
Maybe someone already commented on this, but is there any point to having 24 GB of VRAM? If that's one of the things making the price so high on the 3090, it would be silly of me to pay for something that's essentially useless. Though I wonder if the 10 GB on the 3080 is potentially too little in a year or two's time? The Resident Evil games required 13 GB at 4K with ultra settings, if I remember rightly, although the game ran fine even with 11 GB.
Don't worry, Nvidia will release a 3080 Super with 12 or 16 GB VRAM in 2021, for only 20% more money.
 
I agree. This shocked me, too. I was fully expecting native DP 2.0 support. The only explanation I have is that the DP 2.0 spec was released in June 2019, so NVIDIA probably did not have enough time to support it. Maybe the RTX 3000 lineup specs had been finalized prior to June 2019, or so soon afterwards that they just could not include it.
Nah, it must be trolling from Videocardz. No way that in the second half of 2020 these new video cards will offer support for just DP 1.4a, with its ancient 8b/10b encoding and no support for 144Hz@4K, while at the same time supporting HDMI 2.1. Unless... Nvidia is planning to release the "Super" 3000 series cards in 2021, with "next-gen" DP 2.0 support. :)
 
I hope there is some gain in IPC, because these specs don't look like a huge improvement over Turing.

If we ignore the new RT cores and tensor cores, and assume that there is no IPC difference, then the RTX 3070 looks like a higher-clocked version of the RTX 2080. It will probably perform about the same as the RTX 2080 Super, or maybe slightly faster. Again, I'm assuming there is no IPC difference and ignoring ray tracing performance.

The RTX 3080 looks like an overclocked RTX 2080 Ti: the same number of CUDA cores, and memory bandwidth is higher on the 3080 despite the smaller bus width (because of GDDR6X).
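For what it's worth, a quick sketch of that bandwidth arithmetic; the 3080 numbers are the leaked/assumed 19 Gbps GDDR6X on a 320-bit bus, while the 2080 Ti's 14 Gbps GDDR6 on a 352-bit bus is its known spec:

```python
# Peak memory bandwidth comparison. The 3080 figures are the leaked / assumed
# values (19 Gbps GDDR6X, 320-bit bus); the 2080 Ti's are its known spec.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin rate / 8."""
    return bus_width_bits * data_rate_gbps / 8.0

print(f"RTX 2080 Ti: {bandwidth_gb_s(352, 14.0):.0f} GB/s")  # 616 GB/s
print(f"RTX 3080:    {bandwidth_gb_s(320, 19.0):.0f} GB/s")  # 760 GB/s
```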

It does not sound really impressive. The RTX 3080 will probably be priced between the RTX 2080 Ti and the RTX 2080, and the RTX 3070 will most likely be priced between the RTX 2070 and the RTX 2080.

Performance per dollar is still better on Ampere, but it's not a generational leap. Let's hope it has some IPC gains and a big improvement in ray tracing to justify the price.

I also feel that 24 GB is overkill on the RTX 3090. They could release a cheaper 12 GB version to fill the gap between the 10 GB RTX 3080 and the 24 GB RTX 3090.

I feel GPU improvement has been slowing down in the last few years. Even on AMD's side, we have not seen any big improvement at the same price point like Polaris and Pascal delivered. After many years we got the RX 5500 for the same price as Polaris, which is a joke. We are getting only small improvements, even after many years.
 
The 3080 should comfortably beat the 2080 Ti.

Same number of CUDA cores, better IPC, higher clocks, and more memory bandwidth.

The IPC will make the difference: the gains between Pascal and Turing were pretty good on a mild process improvement, so this should be significant. E.g. a GTX 1660 Ti configuration versus a GTX 1070.

The 3070 could be within a stone's throw of the 2080 Ti. Nvidia just needs to not charge an arm and a leg for it.
 
Oh Cyberpunk 2077 will fully utilize even the RTX 3090 alright.
There is no such thing as enough performance :), the more performance the better.
Now the only question is whether gamers can afford those extravagant GPUs... and looking at Nvidia's financial situation, the answer is probably yes.
Well, the high-end cards were never for the average gamer. They were always for the rich gamers... it just turns out that there are way more rich gamers than Nvidia realized, and that they were willing to pay more, hence why each generation we see much higher prices for the high-end GPUs. I mean, really, if you're "rich", $1000 vs $1300 is nothing, but to Nvidia that $300 makes a massive difference to their total revenue.
 
Jensen stated the 3060 has double the RTX performance of the 2080 Ti, if that helps. What that actually means, though, and how it translates to resolution and frame rates, may be a different story. Either way, people jump the gun: synthetics are one thing, but we need to wait for actual performance numbers from the games themselves to see if the price is warranted. Currently the bottleneck is the monitor market anyway.
Well, never mind.
The 3080 is 60-98% faster than a 2080 in raster and RT according to Digital Foundry, and a Doom Eternal benchmark revealed today showed a 2080 Ti getting destroyed, consistently trailing by 30 to 60 frames in gameplay...
Nvidia just blew a powder keg.
 