Nvidia's GeForce RTX 3080 and 3090 could enter mass production in August

I'll finally be upgrading my 980, so it will be a pretty big jump for me. If both teams fail to impress this fall, maybe I'll just settle for a PS5 instead and turn in my PCMR card.
 
You can... but you shouldn’t... and most don’t... if your GPU is $1,000, you’d be a fool to buy a cheap motherboard/CPU... and even then, you still haven’t accounted for the HD (SSD), monitor, speakers, case... that adds a few hundred more... $2,000 is about the bare minimum for a $1000 GPU...
Most don't because when they are buying a $1,000 graphics card, money is of no concern; that's why they don't. But why shouldn't you? A B450 Tomahawk Max with a 3600 and 16 GB of DDR4-3200 will have exactly the same performance at 4K as any other combination out there. Or even at 1440p, for that matter. That's a total of about €350.

You can spend €50 on an HDD, in case you need one (I don't have one), €65 on a 500 GB SSD, €60 on a Seasonic or EVGA PSU, and as much as you want on a case, probably something around €50 to €100. So you are around the €1,600 mark.
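For what it's worth, here's a rough tally of that parts list as a quick Python sketch (prices are just the approximate figures quoted above, in euros, with the case taken as the midpoint of the 50-100 range; monitor and peripherals deliberately left out):

```python
# Rough build-cost tally using the approximate prices quoted above (EUR).
parts = {
    "GPU (the hypothetical 1K flagship)": 1000,
    "B450 Tomahawk Max + Ryzen 5 3600 + 16 GB DDR4-3200": 350,
    "500 GB SSD": 65,
    "HDD (optional)": 50,
    "Seasonic/EVGA PSU": 60,
    "Case (midpoint of the 50-100 range)": 75,
}

for name, price in parts.items():
    print(f"{name:<52} {price:>5}")
print(f"{'Total':<52} {sum(parts.values()):>5}")  # ~1600, i.e. 'around the 1600 mark'
```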
 
How about a monitor... if you're gaming at 4k, you're gonna need a pricy one... and speakers...and a mouse... keyboard... possibly a headset...

They all add up...

Also... the 3900 WILL give better gaming performance than the 3600... marginal... but it's there....

The golden rule of computers has always been you're limited by your slowest part... if you're spending $1000 on a GPU, you better make sure there aren't any bottlenecks elsewhere...
 
I didn’t say 75% across the board, but it will be more like an easy 50% on average for the 3090 vs. the 2080 Ti. Hell, the 3080 will be on par with, if not faster than, the 2080 Ti, and if you do ray tracing it will crush the 2080 Ti.
You will get about 25% over the 2080 Ti.
If you look at the speed increases of the last few years, that will be about right.
You may get more with third-party cards, up to 40%, but not 75%, no chance.
If they did that, they would have to slash all their existing lines.
 
Here's something I wonder about early vs late production:

I have a 1060 6GB Gaming X, so a high-end 1060 model, bought early in the release cycle. That GPU doesn't tolerate undervolting well and really only likes to run stably at stock voltages. It OCs OK with additional voltage, but it's no golden sample.

I have a 1050 Ti bought later in the production cycle and a 1080 bought at the very end, after the 2000 series was released. Those undervolt like champs, which is necessary as the 1050 Ti is a slot-power-only model, so lower power means more performance. The 1080 is a PNY with a *crap* cooler, so the undervolt also helps a lot there to keep temps under 75 °C at 1923 MHz.

I wonder if early adopters get slightly cruddier silicon like my 1060, before gradual process improvements are made, or if I just happened to get a 1060 which won't undervolt because it's an outlier.

FWIW, I have a very early 1660 Super which also undervolts like a champ, but that card came out after a year of Turing, so it's not like the process was immature.
It makes sense for the production process to improve over time, and Nvidia could even release Super versions with better specs thanks to that.
But the silicon lottery is still a lottery, so you could still get a poor piece of silicon at a later stage.
 
I am not of this opinion; all the good leaks point to the fact that Nvidia took the AMD threat very seriously and that we'll see a larger gap this time. It will be larger in rasterized gaming, and even larger in RT gaming.

Which "good leaks" are you referring to as I'm not aware of any leaks that would classify as "good" right now.
 
Which "good leaks" are you referring to as I'm not aware of any leaks that would classify as "good" right now.
Right now there's a lot of misinformation going around; it feels like AMD and Nvidia themselves are participating in spreading it in order to keep the competition in the dark.

A month ago, things were calmer and the leaks coming out were less chaotic. A good source for leaks is Moore's Law Is Dead; then there are Adored, Coreteks, and Igor's Lab.

But you must make your own analysis and corroborate the information in order to draw a relatively accurate conclusion.
 

Yeah, I agree on that, but from what I've gathered the only credible info we have so far is the GPU shroud (which doesn't tell us much in regards to performance), the fact that it's going to be on 7nm with TSMC and Samsung sharing the load, and some indication that there will be an x90 part (a tier that hasn't been used since the old dual-GPU days).

Just seeing the x90 naming makes me not put much credence in this information, as it's been sort of a meme for many generations now, with people always claiming that an x90 part will be announced.

The fact that they've introduced a brand-new (and very popular!) moniker this generation (Super) only makes me even more skeptical of an x90 part.

IMO the only real info we've got is about their A100 datacenter GPU, yet the datacenter and consumer GPUs have diverged so much in the past 5 years that they share little in common in terms of architecture.

I recall when a lot of people were super excited about the performance increase offered by the Volta arch, with a ton of "leaks" saying this and that, but in the end it was exclusively for the enterprise.
 
How about a monitor... if you're gaming at 4k, you're gonna need a pricy one... and speakers...and a mouse... keyboard... possibly a headset...

They all add up...

Also... the 3900 WILL give better gaming performance than the 3600... marginal... but it's there....

The golden rule of computers has always been you're limited by your slowest part... if you're spending $1000 on a GPU, you better make sure there aren't any bottlenecks elsewhere...
And a chair, a desk, a house to put them in, etcetera. OK... you are right.
 
Well, there is a lot of deduction in what I think I know, but let me elaborate briefly.
Yeah, I agree on that, but from what I've gathered the only credible info we have so far is the GPU shroud (which doesn't tell us much in regards to performance), the fact that it's going to be on 7nm with TSMC and Samsung sharing the load, and some indication that there will be an x90 part (a tier that hasn't been used since the old dual-GPU days).
We know they are considering that shroud; that doesn't mean it's the only one, or that it will be the final choice.
Just seeing the x90 naming makes me not put much credence in this information, as it's been sort of a meme for many generations now, with people always claiming that an x90 part will be announced.

The fact that they've introduced a brand-new (and very popular!) moniker this generation (Super) only makes me even more skeptical of an x90 part.
Nvidia isn't shy about having many SKUs and overly complicated segmentation; they do that all the time. In the current generation there are four different flavours of the 1650 (non-Super) and different flavours of the 2060, not to mention mobile, where they have a plethora of similar SKUs. A bit confusing maybe, but it doesn't seem to affect their sales, so I don't think that would deter them at all from launching a new one. A new tier makes a lot of sense if they are unsure they will have the performance crown. They will hold off on launching the Titan until they see AMD's cards and are sure it can't be beaten.
IMO the only real info we've got is about their A100 datacenter GPU, yet the datacenter and consumer GPUs have diverged so much in the past 5 years that they share little in common in terms of architecture.

I recall when a lot of people were super excited about the performance increase offered by the Volta arch, with a ton of "leaks" saying this and that, but in the end it was exclusively for the enterprise.
At that time, they had no competition. Right now, we know for a fact that the RDNA2 GPU in the PS5 will boost to 2.3 GHz in a console (limited TDP, limited power). That is impressive, and Nvidia isn't ignoring that.
So we know they're coming with something powerful; we know Nvidia likes large die sizes; the exact CU count doesn't matter that much.
The rest is just details. Some things are still subject to change, like the naming scheme and pricing, but those can be left for the very last moment. And Nvidia is an extremely adaptable company; they are capable of launching a new SKU for an existing architecture in weeks, if not less.
 
I'm upgrading from an MSI GTX 670 Power Edition I got back in 2013... so I can expect, like what, a 500000% gain in performance? :D
This is the beauty of holding out so long. I was still able to play many games in the last 3 years at midrange-high settings at manageable fps of 30-50. I didn't need 60fps at Ultra high to have an enjoyable experience. Especially because I played older titles or indie ones that weren't as performance intensive.
Now the upgrade will truly feel epic, and worth the money. On top of that I saved thousands of dollars over the years not doing incremental upgrades.
 
You may think you’re being sarcastic... but I’ll accept it since you clearly have nothing new to argue...
What's there to argue? If you are literally building from scratch, you do indeed need a monitor, alongside a desk to put it on, a chair to sit on, and probably a house with electricity.

It's generally accepted that when people say they built an XXX € PC, they are referring to the tower, the same way that when someone says he bought a €400 console, he doesn't include the TV. Frankly, you weren't including a monitor in your original argument, because then your figures don't make sense. How exactly are you going to fit an expensive mobo/CPU/RAM/case/PSU AND a monitor into $1,000? Unless you were talking about a cheap monitor.
 
Squid Surprise and Strawman, you've both made your points and your off topic argument is getting repetitive. If you want to continue your personal discussion, please do so via PM. Thank you.
 
Nvidia doesn't need to deliver anything. They've been at the top of the performance spectrum for the past 3 generations with little to no real competition from AMD.
It's AMD that has to deliver.

There are 2 areas that make Nvidia a lot of money:
1. enterprise GPUs (AI, automation & GPGPU-accelerated workloads)
2. enthusiast GPUs

Why do you think they've been gradually increasing enthusiast prices with each gen now for the past 3 gens?
They have no competition there so they can.

Mark my words:
- the 3080 will be similar to the 2080 Ti in 3D perf, and they will boast about superior RT performance (like that even matters with how many games have it and their **** implementation), while the price will be just slightly lower (think $50 off)
- the 3080 Ti/3090 will likely be $1,400-$1,500, with at most 25% more perf than the 2080 Ti in regular 3D workloads
- by the time AMD comes around with their GPUs, they will have the $600-$1,000 segment covered with other models
- I wouldn't be surprised if they introduce a $2,000 Titan at some point
- all the typical performance improvements that you would have seen from a 5-7nm fab process will be spent on RT, so even if AMD is competitive with them in terms of perf, you will still think "hmm, but RT is way better on Nvidia"
- and let's say that, by some miracle, AMD really does come up with something amazing... they can just reduce prices and everyone will still stay with Nvidia
- given how entrenched they are, it would take AMD two generations of kicking their asses for them to get to a point where they "need to deliver a decent bump"

Those two areas (Enterprise & Enthusiast) are divergent.

And over the last 10 years, nVidia has been moving away from gaming towards enterprise, and thus that is where they make most of their money. But mainstream dGPU is where nVidia loses to AMD's new RDNA (which is a 100% gaming uarch).


So, nVidia has no choice but to launch a full enterprise die as a gaming dGPU just to compete with a more efficient RDNA2 gaming architecture. As such, nVidia's mainstream ($299-$799) dGPU cards are coming from Samsung's 8nm node in spring 2021 and will be great values.

Again, these 7nm "Ampere" cards will be $1,500-$2,300, like the Volta Titan was. It's just that nVidia is going to market the SKUs a bit differently... until they see AMD's pricing.
 
But experts on the Internet told me those cards will be out by the end of July and that I should hold off. Who should I believe here?
 
At that time, they had no competition. Right now, we know for a fact that the RDNA2 GPU in the PS5 will boost to 2.3 GHz in a console (limited TDP, limited power). That is impressive, and Nvidia isn't ignoring that.
Boosting to 2.3 GHz on 7nm seems to me more like what's expected than a sign of competition.
We've yet to see any real-world performance numbers on RDNA2, so at most they are being cautious about AMD's Big Navi.

And don't get me wrong, I would love to see AMD come out with a banger.
Those two areas (Enterprise & Enthusiast) are divergent.

And over the last 10 years, nVidia has been moving away from gaming towards enterprise, and thus that is where they make most of their money. But mainstream dGPU is where nVidia loses to AMD's new RDNA (which is a 100% gaming uarch).

Nvidia hasn't been moving away from gaming at all - no clue where you got that from.
As I've pointed out, along with enterprise, enthusiast GPUs are one area that is making them $$$.
They've certainly invested heavily in the enterprise space in the past 10 years, and it is their highest source of income, but they have also shown no signs of moving away from gaming.

Their RT push, Ansel, DLSS, GeForce Experience, GeForce Now, ReShade integration etc. are all things that point to them continuing to treat the gaming segment as a valued source of income.
So, nVidia has no choice but to launch a full enterprise die as a gaming dGPU just to compete with a more efficient RDNA2 gaming architecture. As such, nVidia's mainstream ($299-$799) dGPU cards are coming from Samsung's 8nm node in spring 2021 and will be great values.

Again, these 7nm "Ampere" cards will be $1,500-$2,300, like the Volta Titan was. It's just that nVidia is going to market the SKUs a bit differently... until they see AMD's pricing.
See, people said the same thing about Volta, that they "have to launch it due to competition" but that wasn't true then and it's not any different now.

Unlike previous Titan cards, Titan V was directly aimed at workstations in the AI & learning space, much like Titan RTX is today, so basically as an entry-point into their more expensive Quadro lineup.

I doubt they'll go over $1500 with the normal top-end lineup (so a 3080/90 Ti).
 
Boosting to 2.3 GHz on 7nm seems to me more like what's expected than a sign of competition.
We've yet to see any real-world performance numbers on RDNA2, so at most they are being cautious about AMD's Big Navi.

And don't get me wrong, I would love to see AMD come out with a banger.

Well, Navi 10 was also 7nm and it didn't boost to 2 GHz; it goes to 2.3 GHz only with extreme overclocking and a 450 W TDP.

Doing that while still staying inside the efficiency window is quite a feat.

The latest leaked Nvidia benchmarks show frequencies below 2 GHz, which wouldn't be an absolute surprise, given that Nvidia is only now arriving on this fresh node.
 
My bad, I wasn't aware that Navi 10 was already on 7nm.

That's actually pretty interesting, but there are AIB partner versions of the 5700 XT that can boost to 2 GHz or more out of the box, the Gigabyte Aorus 5700 XT being one of them.

The architecture differences between AMD and Nvidia make it so that we can't compare clock to clock and come to any sensible conclusions.

If you look at the GHz difference between RDNA1 and the info we have for RDNA2, that would be a 15% increase in clock speed over the current top SKU.

So even if they were not to make any architectural changes, it would be at least 15% more perf just from that, though it could also be that they reduced ROPs or made a similar arch change to reach a higher clock speed (it's been done before by Nvidia), so even that isn't really 100% accurate.
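To put rough numbers on that clock argument, here's a tiny sketch; the ~1.95 GHz baseline boost for the current top RDNA1 SKU is my assumption, while the 2.23 GHz figure is the PS5 boost clock mentioned earlier in the thread:

```python
# Back-of-the-envelope: clock uplift alone, assuming performance scales
# roughly linearly with frequency and nothing else changes.
rdna1_boost_ghz = 1.95  # assumed typical boost clock of the current top RDNA1 SKU
rdna2_boost_ghz = 2.23  # PS5 RDNA2 boost clock cited in this thread

uplift = rdna2_boost_ghz / rdna1_boost_ghz - 1
print(f"Performance uplift from clock speed alone: {uplift:.1%}")  # roughly 15%
```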

Personally, I would like to see RDNA2 at least 25% better than RDNA1, though that would barely get it to trade blows with Nvidia's current top SKUs, let alone Ampere.

I am aware that they've claimed that RDNA2 would "target [...] to drive another 50%", but I just can't see AMD making such huge gains with the RDNA2 arch alone, given that they are already on 7nm with RDNA1 and we know that RDNA2 will be on a refined 7nm node.

If they do pull it off and show the world that they can make such a huge leap by just an arch change over the course of a little over a year (RDNA1 was launched in July 2019) then I would happily support them and switch to team red, just like I did on the CPU front.
 
My bad, I wasn't aware that Navi 10 was already on 7nm.

That's actually pretty interesting, but there are AIB partner versions of the 5700 XT that can boost to 2 GHz or more out of the box, the Gigabyte Aorus 5700 XT being one of them.
Indeed, but in doing so it has a 250 W TDP for a 250 sq mm die (a full die would use around 500 W). It's way above the efficiency sweet spot, which is at about 150 W and 1750 MHz for Navi 10.
If the APU inside a console can boost to 2.23 GHz, this means IMO that the efficiency sweet spot has moved to at least 2 GHz.
Personally, I would like to see RDNA2 at least 25% better than RDNA1, though that would barely get it to trade blows with Nvidia's current top SKUs, let alone Ampere.

I am aware that they've claimed that RDNA2 would "target [...] to drive another 50%", but I just can't see AMD making such huge gains with the RDNA2 arch alone, given that they are already on 7nm with RDNA1 and we know that RDNA2 will be on a refined 7nm node.
AMD claimed a 50% increase in performance per watt, which is definitely achievable, given that with the 5700 XT they essentially sold a factory-overclocked card (thus very inefficient) and they have had a lot of time to tame the 7nm process.
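Just to spell out what a +50% performance-per-watt claim implies, here's a minimal sketch; the ~225 W board-power figure for a 5700 XT-class baseline is my assumption, purely for illustration:

```python
# What a claimed +50% performance per watt means, all else being equal.
baseline_power_w = 225.0   # assumed board power of a 5700 XT-class card
perf_per_watt_gain = 1.5   # AMD's claimed 50% perf/W improvement

perf_at_same_power = 1.0 * perf_per_watt_gain                # same power budget -> 1.5x performance
power_at_same_perf = baseline_power_w / perf_per_watt_gain   # same performance -> ~150 W

print(f"Same {baseline_power_w:.0f} W budget: {perf_at_same_power:.1f}x the performance")
print(f"Same performance: about {power_at_same_perf:.0f} W")
```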
If they do pull it off and show the world that they can make such a huge leap by just an arch change over the course of a little over a year (RDNA1 was launched in July 2019) then I would happily support them and switch to team red, just like I did on the CPU front.
It is perfectly possible; you know, ATI/AMD has held the performance crown in the past, albeit quite a while ago.
However, no matter what happens with pure rasterized performance, Nvidia is trying to switch the public's focus to ray-traced performance, where (they think, and I tend to agree) they have the upper hand, so the battle might move more towards RT/DLSS performance than pure rasterized gaming.

About the support part: when I vote with my wallet, I tend to vote based on the company's history, and given Nvidia's history of anticompetitive and anticonsumer practices, it's an easy decision. But that is a completely personal choice.
 
Indeed, but in doing so it has a 250 W TDP for a 250 sq mm die (a full die would use around 500 W). It's way above the efficiency sweet spot, which is at about 150 W and 1750 MHz for Navi 10.
If the APU inside a console can boost to 2.23 GHz, this means IMO that the efficiency sweet spot has moved to at least 2 GHz.

I'm really curious to see where this goes in the next gen, in and out of the consoles. AMD showed that the original 1560 MHz spec of the 5600 XT was running close to the sweet spot of power efficiency, and then pushed it up from there with the BIOS update that raised performance to 1750 MHz. TechPowerUp's power efficiency graphs are great for visualizing this, with the 5600 XT's original BIOS clearly the most efficient video card out there. I believe laptop GPUs run in this range as well to get the most performance from the fewest watts.

I do the same for my GTX 1080 using Afterburner, and peak efficiency is around 0.9 V at 1911 MHz (or 1923 MHz with a better cooler), but that's optimally undervolted, which needs to be done on a per-card basis. The standard clock at 0.9 V is 1733 MHz, right at the number you mention.
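As a rough illustration of what that per-card tuning buys, using only the clock figures above and assuming performance scales roughly linearly with clock at a fixed 0.9 V:

```python
# GTX 1080 undervolting example from this post: at a fixed 0.9 V, a tuned
# voltage/frequency curve holds 1911 MHz versus the stock 1733 MHz.
stock_mhz_at_0v9 = 1733   # standard clock at 0.9 V
tuned_mhz_at_0v9 = 1911   # optimally undervolted clock at 0.9 V (per-card result)

gain = tuned_mhz_at_0v9 / stock_mhz_at_0v9 - 1
print(f"Extra clock at the same voltage: {gain:.1%}")  # ~10% more speed for roughly the same power
```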

It will be interesting to see if clocks do increase noticeably with Nvidia's node shrink and whether AMD's IPC and other improvements in Navi 2 can keep up. Intel has already hinted that some companies do not see a clock speed improvement with a smaller node, as their new laptop CPUs on 10nm have lower top speeds than their very mature and optimized 14nm CPUs, though IPC improvements make up for the speed deficit.
 