Nvidia's GeForce RTX 3090, RTX 3080, and RTX 3070 specs have been leaked

By Polycount
Rumor mill: Nvidia's highly-anticipated GeForce event is just around the corner. It's scheduled for September 1, and if the teases and leaks we've seen so far are accurate, it will finally bring the grand reveal of Nvidia's RTX 30-series GPUs. However, it seems we won't have to wait until the event to know what to expect from Nvidia's first three consumer-focused Ampere GPUs: their specifications have just been leaked.

Update (Sept 1): The official RTX 3000 series announcement is live; read it here.

Update (Aug 31): There are less than 24 hours to go before Nvidia makes the official debut of Ampere on consumer RTX graphics cards, but the leaks have kept coming. Slides and photos from manufacturer Gainward have all but confirmed the rumored specs we published over the weekend.

They show the same CUDA core counts (5248 cores on the RTX 3090, 4352 cores on the RTX 3080), clock speeds, memory capacities, and bandwidth figures as the table below. It's likely Nvidia will only announce the high-end RTX 3090 and 3080 tomorrow, with the RTX 3070 to be unveiled at a later date.

The original leak, which comes courtesy of anonymous sources who spoke to Videocardz, claims Nvidia is preparing to launch three Ampere GPUs in September. These cards will be the GeForce RTX 3090, the GeForce RTX 3080, and the GeForce RTX 3070. Each GPU has reportedly been built using 7nm fabrication tech, and all three will support PCIe 4.0 out of the box.

The cards will introduce 2nd-gen RT cores and 3rd-gen Tensor cores, according to Videocardz, and they will likely ship with DisplayPort 1.4a and HDMI 2.1 connectivity.

In terms of nitty-gritty specifications, we'll start with the RTX 3070. If this leak is true, the card should ship with 8GB of GDDR6 VRAM clocked at 16 Gbps across a 256-bit memory bus. The TGP should be about 220W, though the CUDA core count and boost clock speed are unknown at the moment.

Moving on to the RTX 3080, we're looking at 10GB of GDDR6X VRAM (clocked at 19 Gbps), a 320-bit memory bus, 4352 CUDA cores, a boost clock of 1710 MHz, and a TGP of 320W. Videocardz says a second variant of the 3080 with twice the memory (20GB in total) is also in development, but it was unable to determine when the card might release.

The final leaked SKU, the RTX 3090, is an absolute monster. It's expected to have 5248 CUDA cores, a boost clock of 1695 MHz, a whopping 24GB of GDDR6X VRAM clocked at 19.5 Gbps, a 384-bit memory bus, and a TGP of 350W. Memory bandwidth for the 3070, 3080, and 3090 should be 512 GB/s, 760 GB/s, and 936 GB/s, respectively.
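If the leaked figures are accurate, those bandwidth numbers fall straight out of the memory speeds and bus widths above. Here's a quick back-of-the-envelope check in Python, a sketch using only the rumored specs:

```python
# Peak memory bandwidth (GB/s) = data rate per pin (Gbps) * bus width (bits) / 8 bits per byte
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# (data rate, bus width) pairs from the leak
rumored = {
    "RTX 3070": (16.0, 256),   # GDDR6
    "RTX 3080": (19.0, 320),   # GDDR6X
    "RTX 3090": (19.5, 384),   # GDDR6X
}

for card, (rate, bus) in rumored.items():
    print(f"{card}: {bandwidth_gb_s(rate, bus):.0f} GB/s")
# RTX 3070: 512 GB/s
# RTX 3080: 760 GB/s
# RTX 3090: 936 GB/s -- all three match the leaked bandwidth figures
```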

Strangely, Videocardz says the 3080 and 3090 will both require dual 8-pin power connectors instead of a single 12-pin connector. This seems to be at odds with the Ampere engineering video we covered a few days ago, in which Nvidia discussed its decision to transition toward 12-pin connectors for modern GPUs.

That inconsistency aside, the rest of this leak sounds fairly credible to us. Of course, we still recommend taking this information with a grain of salt -- rumors are rumors, and it's always best to wait for official confirmation before getting too invested in pre-launch spec details.

 
Just to put the numbers into perspective, the purported 3080 has 41% more CUDA cores than the 2080 Super. Memory bandwidth is also getting a bump.

The 3070 looks to have similar memory bandwidth to the 2070, only getting a small bump.

Neither GPU is getting an increase in total VRAM.
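For what it's worth, those deltas check out against Turing's published specs (RTX 2080 Super: 3072 CUDA cores, 496 GB/s; RTX 2070: 448 GB/s) -- a rough sanity check:

```python
# Percentage-change helper for comparing leaked Ampere specs against Turing
def delta(new: float, old: float) -> str:
    return f"{(new / old - 1) * 100:+.1f}%"

print("3080 vs 2080 Super, CUDA cores:", delta(4352, 3072))  # +41.7%
print("3080 vs 2080 Super, bandwidth: ", delta(760, 496))    # +53.2%
print("3070 vs 2070, bandwidth:       ", delta(512, 448))    # +14.3% -- the "small bump"
```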
 
In other words, if you're in the mid-range market you really have nothing to look forward to.
The memory bandwidth may not be increasing much, but the CUDA cores are reportedly much improved ... potentially enough to make ray-tracing actually viable.
 
"Strangely, Videocardz says the 3080 and 3090 will both require dual 8-pin power connectors instead of a single 12-pin connector. This seems to be at odds with the Ampere engineering video we covered a few days ago, in which Nvidia discussed its decision to transition toward 12-pin connectors for modern GPUs."
Their information seems to come from a board partner -- note the table's caveat, "Data based on custom non-overclocked models." I don't know if they updated the post or what, but it now says this:
"Custom boards are powered by dual 8-pin power connectors which are definitely required since the card has a TGP of 350W."
Most custom boards will probably stick with standard 8-pin connectors, since they don't have the space constraints that would make switching to the 12-pin worthwhile.
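The "definitely required" part is easy to verify: within the PCIe spec, each 8-pin connector is rated for 150W and the slot itself supplies up to 75W, so a single 8-pin tops out well below a 350W TGP. A minimal sketch of the budget:

```python
# PCIe power budget: 75 W from the slot plus 150 W per 8-pin connector
SLOT_W, EIGHT_PIN_W, TGP_W = 75, 150, 350  # 350 W is the rumored RTX 3090 TGP

for n in (1, 2):
    budget = SLOT_W + n * EIGHT_PIN_W
    verdict = "OK" if budget >= TGP_W else "insufficient"
    print(f"{n}x 8-pin: {budget} W available vs {TGP_W} W TGP -> {verdict}")
# 1x 8-pin: 225 W -> insufficient
# 2x 8-pin: 375 W -> OK, with only ~25 W of headroom
```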
 
I don't care about all the technical aspects.

I need that 3090 just to pump my YouTube channel numbers.

The ultimate 3 year flex.
 
Maybe someone already commented on this, but is there any point to having 24GB of RAM? If that's one of the things making the price so high on the 3090, it would be silly of me to pay for something that's essentially useless. Though I wonder if the 10GB on the 3080 is potentially too little in a year or two's time? The Resident Evil games required 13GB at 4K with ultra settings, if I remember rightly, although they ran fine even with 11GB.
 
I'm trying to find a way to rationalize getting the 3090 that doesn't involve a huge opportunity cost (since you could get a 3080 and a PS5 for the same price as a 3090).

What I've come up with is this: a 2080 Ti can currently be sold for 70 to 80 percent of its original value. The 2080 can't, however, because it's been usurped by the Super, so you'd be lucky to get 60% of your money back. The same will likely happen again: a $1500 card will probably be worth $1100-1200 by the time the next generation rolls around, while a $900 card will likely be worth $500. So the cost of ownership in lost value is similar for both cards, and the 3090 could even be the better value.
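Running those numbers (the prices and resale values are the guesses above, not confirmed figures):

```python
# Cost of ownership = purchase price - estimated resale value at next-gen launch
scenarios = {
    "hypothetical $1500 3090": (1500, 1150),  # midpoint of the $1100-1200 resale guess
    "hypothetical $900 3080":  (900, 500),
}

for card, (price, resale) in scenarios.items():
    loss = price - resale
    print(f"{card}: ${loss} lost ({loss / price:.0%} of purchase price)")
# $1500 card: $350 lost (23%); $900 card: $400 lost (44%)
# Similar absolute depreciation, so the pricier card isn't necessarily worse to own.
```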
 
Maybe someone already commented on this, but is there any point to having 24GB of RAM? ...
Yes, MS Flight Sim 2020 in 8K with TAA uses all 24GB of an RTX Titan. There are already scenarios in gaming that will fill up 24GB. That same resolution should run a good 25-30% faster on the 3090 than the RTX Titan, theoretically...
 
Maybe someone already commented on this, but is there any point to having 24GB of RAM? ...
The issue is that these new games ship with high-end 4K textures that demand low-latency RAM. You end up needing a faster CPU, a larger-capacity SSD, and more VRAM in your GPU for new titles like MSFS 2020 and Crysis Remastered.
 
Yes, MS Flight Sim 2020 in 8K with TAA uses all 24GB of an RTX Titan. ...
I'm unlikely to attempt playing at 8K for many years. But it sounds like you're saying that even at 4K, 10GB is potentially already on the low side?
 
Leaked Time Spy Extreme scores indicate the RTX 3090 is 57% faster than the 2080 Ti. We're looking at a transition similar to Maxwell-to-Pascal, where the new xx80 was 30% faster than the previous king and the xx80 Ti (now the xx90) was ~60% faster.
 
With 24GB of VRAM I can finally play Mario! Jokes aside, there are a few things I'd have liked to see -- DP 2.0, for example. Why isn't that available on such a top-tier card?

Let's hope they bring some excitement to the otherwise lackluster table. I doubt we'll see anything capable of utilising such a card for 3-5 years yet -- unless we count Flight Simulator, lol.
 
Are you sh-ing me?! No DP 2.0? That would be taking the p#ss.
I agree. This shocked me, too. I was fully expecting native DP 2.0 support. The only explanation I have is that the DP 2.0 spec was released in June 2019, so NVIDIA probably did not have enough time to support it. Maybe the RTX 3000 lineup's specs were finalized prior to June 2019, or so soon afterwards that they just could not include it.
 
I am more concerned about the dissipated heat than anything else. Summer temperatures have been rising where I live, and my PC turns my room into an oven in less than 30 minutes (and I am talking about an RX 5700, which is nothing compared to those monsters). If this keeps up with newer cards, I might end up buying a console instead.
 
With 24GB of VRAM I can finally play Mario! ...

Oh, Cyberpunk 2077 will fully utilize even the RTX 3090, alright.
There's no such thing as enough performance :) -- the more performance, the better.
Now the only question is whether gamers can afford those extravagant GPUs... and looking at Nvidia's financial situation, the answer is probably yes.
 
Let's hope they bring some excitement to the otherwise lackluster table
Let's assume for the moment that (a) the figures in the table are totally genuine and (b) the fundamental elements of the Ampere architecture are no different to Turing's (leaving aside changes to the tensor and RT cores).

5248 CUDA cores, running at 1695 MHz, would be a 32% FP32 throughput increase over the 2080 Ti. If the core count is representative of a full GA102 chip, then this would indicate that it has 82 SMs, giving 328 TMUs in total - the texturing rate would also be 32% better. The 384-bit bus, with its 12 memory controllers, looks to be no different, other than using GDDR6X, so it's probably still going with 96 ROPs: the pixel rate is only 10% better.

That means any application that's shader-bound is likely to be around 32% better, although a lot depends on the increase in L2 cache and changes to the internal memory structure - any improvement here will result in shader-bound situations gaining more than 32%.

Local memory bandwidth, though, is approximately 52% better than a standard 2080 Ti's, which is an enormous improvement and will help out in almost every situation. And with more SMs, the vertex setup rate of the GA102 will be better than the TU102's too.

So the potential improvements, all things considered, are arguably not lacklustre. Of course, if all true, the cost for this is a 40% increase in TDP...
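For anyone who wants to reproduce the arithmetic, here are the 32% and 52% figures worked out (FP32 throughput = cores x 2 FLOPs per clock x clock speed; bandwidth = data rate x bus width / 8), using the leaked GA102 numbers against the 2080 Ti's reference specs (4352 cores, 1545 MHz boost, 14 Gbps GDDR6 on a 352-bit bus):

```python
# FP32 throughput in TFLOPS: cores * 2 FLOPs/clock (one FMA) * clock in MHz / 1e6
def fp32_tflops(cores: int, clock_mhz: int) -> float:
    return cores * 2 * clock_mhz / 1e6

# Peak bandwidth in GB/s: data rate (Gbps) * bus width (bits) / 8
def bandwidth(rate_gbps: float, bus_bits: int) -> float:
    return rate_gbps * bus_bits / 8

ga102 = fp32_tflops(5248, 1695)   # leaked RTX 3090: ~17.8 TFLOPS
tu102 = fp32_tflops(4352, 1545)   # RTX 2080 Ti reference: ~13.4 TFLOPS

print(f"FP32 throughput: +{(ga102 / tu102 - 1) * 100:.0f}%")                        # +32%
print(f"Bandwidth: +{(bandwidth(19.5, 384) / bandwidth(14, 352) - 1) * 100:.0f}%")  # +52%
```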
 
It wouldn't surprise me if this was a soft launch on the 1st to coincide with 21 years of Nvidia tech, with actual availability around October/November. Followed by massive stock shortages, of course.
 
"In other words, if you're in the mid-range market you really have nothing to look forward to."
OK, I'll be sticking with my GTX 1060 or going with something from the AMD camp.
The new RT and Tensor core architecture is apparently very significant -- so much so that I've heard to expect a lot of 20-series cards to be dropped onto the second-hand market.
 