Nvidia GeForce RTX 3080 Review: Ampere Arrives!

To see what the generation-over-generation technical advance is, his approach is, IMHO, correct:
- You need to compare the 2080 Ti to the 3080, as both use the same die class (102).
- The 2080 OC has the same power consumption as the stock 3080 FE, which lets you see the performance delta at the same power consumption (see the sketch below).
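To make that second point concrete, here's a minimal sketch of the arithmetic. The fps and wattage figures are placeholders, not review data; the point is that when power is held equal, the raw fps delta and the perf-per-watt delta are the same number, which is exactly why this comparison isolates the architectural gain.

```python
# Hypothetical sketch: quantifying a gen-over-gen uplift at matched power.
# The fps and wattage figures below are placeholders, not measured data --
# plug in the numbers from an actual review.

def perf_per_watt(avg_fps: float, board_power_w: float) -> float:
    """Average frames per second delivered per watt of board power."""
    return avg_fps / board_power_w

# Example: an overclocked Turing card vs. a stock 3080 FE, both drawing ~320 W.
turing_oc = {"avg_fps": 100.0, "power_w": 320.0}   # placeholder values
ampere_fe = {"avg_fps": 130.0, "power_w": 320.0}   # placeholder values

uplift = ampere_fe["avg_fps"] / turing_oc["avg_fps"] - 1.0
ppw_gain = (perf_per_watt(ampere_fe["avg_fps"], ampere_fe["power_w"])
            / perf_per_watt(turing_oc["avg_fps"], turing_oc["power_w"]) - 1.0)

print(f"Performance uplift at equal power: {uplift:.1%}")
print(f"Perf-per-watt gain:               {ppw_gain:.1%}")
```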

Take the reviews and imagine just two non-technical things to be different:
- The 2080 Ti was sold at the previous Ti price point of $699
- The 3080 was instead called the 3080 Ti

What do you think the reviews' conclusion would have been in that case?

Yes, the bang for buck has improved noticeably going from Turing to Ampere, but let's not forget that pricing made it very poor going from the 1080 Ti to the 2080 Ti. And nVidia calling the 3080 what it does is just naming; it doesn't change the fact that the 3080 and 3090 use the same die class as the 2080 Ti and the Titan. The 2080 and 2080 Super were on the 104 die that the 3070 is using now.

I'm good with the technical comparison; I do those all the time. My favorite is comparing the 1080 to the 2070 Super, as they have the same count of everything except memory bandwidth. I wish someone would compare the 1080 with its memory at 12 Gbps (mine can do this) and the 2070 Super at 12 Gbps (I don't know if you can underclock the memory that much), at the same CUDA core clock, to see what the actual improvement in CUDA IPC was from Pascal to Turing.
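For what it's worth, here's a minimal sketch of how that test could be reduced to a single number once someone runs it. The fps values are made up and the matched clock is an assumption; only the normalization, fps per (core x GHz), matters as a crude per-clock throughput proxy.

```python
# Hedged sketch of the IPC comparison described above: GTX 1080 vs. RTX 2070 Super
# with core clocks and memory speeds matched. The fps numbers are placeholders.

def fps_per_core_ghz(avg_fps: float, cuda_cores: int, core_clock_ghz: float) -> float:
    """Crude throughput-per-clock proxy: fps normalized by shader count and clock."""
    return avg_fps / (cuda_cores * core_clock_ghz)

# Both cards have 2560 CUDA cores; assume both are locked to the same core clock
# and the same effective memory speed for the test (fps values are placeholders).
pascal_1080  = fps_per_core_ghz(avg_fps=90.0,  cuda_cores=2560, core_clock_ghz=1.8)
turing_2070s = fps_per_core_ghz(avg_fps=100.0, cuda_cores=2560, core_clock_ghz=1.8)

print(f"Per-clock uplift, Pascal -> Turing: {turing_2070s / pascal_1080 - 1.0:.1%}")
```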

But the *Tweet* from Yuri was pure clickbait.
 
To see what the generation-over-generation technical advance is, his approach is, IMHO, correct:
- You need to compare the 2080 Ti to the 3080, as both use the same die class (102).
- The 2080 OC has the same power consumption as the stock 3080 FE, which lets you see the performance delta at the same power consumption.

Take the reviews and imagine just two non-technical things to be different:
- The 2080 Ti was sold at the previous Ti price point of $699
- The 3080 was instead called the 3080 Ti

What do you think the reviews' conclusion would have been in that case?
I agree with you. I had a post that also said the 3080 should be compared to the 2080 Ti, but your reasons are much better than mine were. My reasoning was to ignore the muddying of the waters through nVidia's branding structure (to Ti or not to Ti), but having the same die class is a rock-solid technical reason. Nobody can really argue with that.
Yes, the bang for buck has improved noticeably going from Turing to Ampere, but let's not forget that pricing made it very poor going from the 1080 Ti to the 2080 Ti. And nVidia calling the 3080 what it does is just naming; it doesn't change the fact that the 3080 and 3090 use the same die class as the 2080 Ti and the Titan. The 2080 and 2080 Super were on the 104 die that the 3070 is using now.
I couldn't agree more. The bang for your buck has improved, but the bar was set so low that it's like saying "Yay! He doesn't murder people anymore!" (of course, that's an exaggeration meant to convey the point, not to actually describe Turing). This is why I looked at the branding setup and realised that nVidia was deliberately not lining them up properly: they wanted to cause confusion when comparing Ampere to previous architectures. The pricing and performance structures don't lie. Part numbers and suffixes do.
 
I agree with you 100%; there is a $1,000 USD tier that some people are willing to pay. However, I won't be saying good things about it, and I'll be damned if I ever celebrate this horrid fact. I point things like this out to people because I can see that nVidia's marketing is specifically designed to distract people from what is actually happening by dangling (admittedly) very pretty things in front of their noses. I suppose that I just care enough to give them the slap in the face that brings them back down to Earth. LOL

It is true that nVidia's business practices are not to be commended, but some years ago 90% of GPUs were not performing well (in terms of the desirable 60fps). Today you can get that performance for less money. It's up to AMD now.
 
YUP. The 3080 destroys the 2080Ti at $500 less. What a RIPOFF!! I can see why you're upset. O_o

Speaking of sheep, I'm glad I'm not an AMD fanboy who has been eternally waiting for that nVidia killer GPU that hasn't happened in years. Keep the night light on, boys! Jebus is coming!

Sounds a bit preemptive of you, considering that AMD's Big Navi 6000 series is due very soon. Juicy, interesting times. This could be it. But it's not so much about being the top-dog performer nowadays, if you've not noticed. It's about being able to shift the maximum number of products. Hence Nvidia reducing pricing for the 3000 series. They were fearful of a repeat of the 2000 series' low sales. With Big Navi on the horizon they need large numbers of sales from fanbois like yourself. AMD has built a good reputation for delivering high end at lower pricing than its rivals now. They've got the trust back. And many fanbois and clever people (IMO) will be waiting to see how Big Navi compares to the NV 3000 series. Surely that's the wise play here?

It is true that nVidia's business practices are not to be commended, but some years ago 90% of GPUs were not performing well (in terms of the desirable 60fps). Today you can get that performance for less money. It's up to AMD now.

What? What years ago? How many? X number of years ago NO cards could do 60FPS because they hadn't got that far along yet. So 60FPS was not desirable then; it wasn't possible. 30FPS was the desirable standard in the early 2000s. 90% of what GPUs? You're chatting pure sh**e!
 
I can't take anything away from the 3080 (if you can get it at MSRP).

I've taken the time to read through this whole discussion... and I have to say that people are going to have to wait to see what Navi 2 brings. RDNA is a new, 100% gaming architecture and will be more compact and efficient ("IPC") at what it does than Ampere's uArch.

I'm giving my RTX 2080 to a friend and picking up two cards... I'll wait until both whitepapers are out!


 
I'm one who asked, so I'll answer. I asked because my understanding of what drives VRAM usage is hazy. With no actual knowledge, my intuition is that if this card is making the leap from 1440p to 4K (so ~2.25x the pixels processed), might the VRAM usage not also require the same 2.25x adjustment? And yes, I understand the current (soon to be last) generation of games ran fine on 4-8 GB. But the next generation is upon us and I have no information about what its games will require. I see the consoles have 16 GB of RAM total, some of which has to go to the OS and the app, but which in theory could allocate more than 10 GB to VRAM purposes, I believe. So I'm asking.

As to a study or serious prediction, that's the whole point. If there's one out there that says yea or nay either way, I haven't seen it, but I'd like to before making my purchase. I think the people who would know for sure are probably the people with design oversight of games targeted to launch in the next 2-4 years. I'd love to know what their design assumptions are for available VRAM.

Thanks for the input.

I definitely see your point and it makes sense.
 
You clearly can't read; there is a 0% difference between the 3950X and the 10900K at 4K. Look at the benchmarks above! Trolls.

Which is expected; at that point, the GPU matters a ton more than the CPU, and the differences in relative CPU power will be heavily compressed.

Run at 1080p or 1440p, and the difference between the two becomes apparent.
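A toy model makes the compression obvious. The per-frame times below are made up, not benchmark data; the only point is that delivered fps is roughly bounded by whichever of the CPU or GPU takes longer per frame.

```python
# Rough illustration of why CPU differences compress at 4K: the delivered frame rate
# is approximately limited by whichever of the CPU or GPU takes longer per frame.
# All frame times below are illustrative placeholders, not benchmark data.

def delivered_fps(cpu_ms: float, gpu_ms: float) -> float:
    """Approximate fps when the slower of the two pipelines sets the pace."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_fast, cpu_slow = 5.0, 6.0    # two CPUs, ~20% apart in per-frame cost
gpu_1080p, gpu_4k = 4.0, 14.0    # the same GPU needs ~3.5x longer per frame at 4K

for res, gpu_ms in (("1080p", gpu_1080p), ("4K", gpu_4k)):
    fast = delivered_fps(cpu_fast, gpu_ms)
    slow = delivered_fps(cpu_slow, gpu_ms)
    print(f"{res}: fast CPU {fast:.0f} fps, slow CPU {slow:.0f} fps "
          f"({fast / slow - 1.0:.0%} gap)")
```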
 
When a graphics card is doing all of the work to render a frame, VRAM will be filled with various blocks of data - some of it is the raw building blocks of the 3D scene itself (i.e. information about shapes, textures to cover them with, and the list of instructions of what to do) and some of it is 'working data.'

In the case of the former, this doesn't change with resolution: it's fixed in size. However, working data does vary and by quite a lot. For example, a modern game like Doom Eternal or Shadow of the Tomb Raider will create several versions of the frame, in different formats, to use in the creation of the visual effects seen on the monitor. These might be the same size as the final, completed frame, but they will always scale with the resolution.

So going from 1440p to 4K doesn't involve a full 2.25 times more memory, as a good chunk of it will be the assets for the frame. Only the temporary stuff gets scaled - some of it by 2.25 times, the rest by a set fraction of that.
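A back-of-the-envelope sketch of that split, with all sizes purely assumed for illustration (they aren't figures from any real game):

```python
# Back-of-the-envelope sketch of the point above: only the resolution-dependent
# buffers scale with pixel count, so total VRAM grows by far less than 2.25x.
# The sizes below are illustrative assumptions, not measurements from any real game.

def vram_estimate_gb(width: int, height: int,
                     fixed_assets_gb: float = 5.0,       # textures, geometry, shaders (assumed)
                     bytes_per_pixel_buffers: int = 64   # G-buffer, depth, post-process targets (assumed)
                     ) -> float:
    buffers_gb = width * height * bytes_per_pixel_buffers / 1024**3
    return fixed_assets_gb + buffers_gb

qhd = vram_estimate_gb(2560, 1440)
uhd = vram_estimate_gb(3840, 2160)
print(f"1440p: ~{qhd:.2f} GB, 4K: ~{uhd:.2f} GB, "
      f"ratio {uhd / qhd:.2f}x (vs 2.25x more pixels)")
```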

Pretty much this; VRAM usage isn't going to scale linearly with resolution.

My best guess is NVIDIA decided to keep prices down by cutting back on VRAM a bit, since VRAM is relatively expensive compared to the rest of the card. Long term it could become a limitation, but in the 1-2 year term 10GB should be more than enough to handle most anything thrown at it.
 
It is true that nVidia's business practices are not to be commended, but some years ago 90% of GPUs were not performing well (in terms of the desirable 60fps). Today you can get that performance for less money. It's up to AMD now.
Actually, 60fps is a very recent thing, well, recent depending on how old you are. Go back ten years and 40fps was considered amazing. Everything is relative and is based on comparisons with similar tech of the day. Consoles put out 30fps and were considered perfect so anything that could do 30fps+ smoothly was also considered perfect. They didn't run badly, they ran as they were designed to based on the technology at the time.

Yes, today you can get 60fps for less money than in the past, that's one of the wonders of tech that nVidia has been trying to circumvent since the RTX 20 series. One thing that blows my mind is when I think back to 2008 and remember that the GTX 295 cost over $1000CAD. That's when I worked at Tiger Direct and I wouldn't pay that for a video card TODAY, let alone twelve years ago. Hell, I remember being appalled at seeing the GTX 260 for $500CAD but was saved by the release of the ATi Radeon HD 4870 which slightly exceeded the GTX 260 for only $350CAD.

As you say, it's up to AMD now. Personally, I'm not all that concerned because I have an RX 5700 XT and it does everything that I want it to. I'm only interested in RDNA2 because I want competition in the marketplace.

Kind of like bringing balance to the force, the way that Ryzen did.....?
 
Unavailable in Europe. Vapourware. Launched early to get a lead on AMD. Not much of a plan if you have none to sell. I'm waiting to see what AMD are going to offer. Just built a 3700X on a Strix X570, an upgrade from a Core 2 Quad running at 3.6 on water. My RX 480 8GB is running BF5 at 75fps average at 1080p ultra. A 2K monitor and card together are the next purchase. I can wait.
 
Actually, 60fps is a very recent thing, well, recent depending on how old you are. Go back ten years and 40fps was considered amazing. Everything is relative and is based on comparisons with similar tech of the day. Consoles put out 30fps and were considered perfect so anything that could do 30fps+ smoothly was also considered perfect. They didn't run badly, they ran as they were designed to based on the technology at the time.

Yes, today you can get 60fps for less money than in the past, that's one of the wonders of tech that nVidia has been trying to circumvent since the RTX 20 series. One thing that blows my mind is when I think back to 2008 and remember that the GTX 295 cost over $1000CAD. That's when I worked at Tiger Direct and I wouldn't pay that for a video card TODAY, let alone twelve years ago. Hell, I remember being appalled at seeing the GTX 260 for $500CAD but was saved by the release of the ATi Radeon HD 4870 which slightly exceeded the GTX 260 for only $350CAD.

As you say, it's up to AMD now. Personally, I'm not all that concerned because I have an RX 5700 XT and it does everything that I want it to. I'm only interested in RDNA2 because I want competition in the marketplace.

Kind of like bringing balance to the force, the way that Ryzen did.....?

60FPS was never acceptable for first-person shooters or other online games. It has always been acceptable for desktop and arcade games, which have no relation to screen refresh.

My CRT was @ 100Hz~144Hz+... and my ancient Overlord LCD was over 120Hz...



Consoles were attached to TVs in the past; that is why their Hz didn't matter.
 
60FPS was never acceptable for first-person shooters or other online games. It has always been acceptable for desktop and arcade games, which have no relation to screen refresh.

My CRT was @ 100Hz~144Hz+... and my ancient Overlord LCD was over 120Hz...



Consoles were attached to TVs in the past; that is why their Hz didn't matter.
I don't know exactly what you're referring to. CRTs? 60fps was never acceptable for first-person shooters? OK, clearly I'm going to have to dig up some graphs, because what you're talking about is completely wrong. I'll post a bunch of cards that people are familiar with, and I'll make sure that there are FPS games in the graphs. Yes, CRTs were faster, but video cards and CPUs weren't fast enough to make use of them.
This is from AnandTech in 2008:
[AnandTech benchmark charts, 2008]

2009:

[AnandTech benchmark charts, 2009]

In 2008, reaching 60fps required cards that were extremely high-end, and even those weren't up to the task all the time. Back then, if you could hit 40fps you were thought to be doing well. A CRT didn't help, because what good is a high-refresh monitor when the PC itself wasn't capable of 60fps? You see that HD 4870 X2? It had an MSRP of over $500, and that was in 2009! It's only in the last 7-8 years that we've managed to reliably hit 60fps on average, and even then you often needed VERY EXPENSIVE hardware to do it. PC gaming goes back over 30 years, so yeah, it's pretty recent.
 
You clearly can't read; there is a 0% difference between the 3950X and the 10900K at 4K. Look at the benchmarks above! Trolls.

TechSpot, one of the most popular games in the world is CoD Warzone and you guys don't include benchmarks for it. It's literally the only game a lot of us play and you ignore it completely. I also don't understand the lack of 1080p testing. You reviewed a 360Hz 1080p monitor yesterday. I have a 280Hz 1080p monitor and desperately want to see benches! I get that some of your editors don't care about high-refresh gaming, but a LOT of us do!
CoD what?? Sorry, we stopped caring about P2W games a long time ago.
 
60FPS was never acceptable for first-person shooters or other online games. It has always been acceptable for desktop and arcade games, which have no relation to screen refresh.

My CRT was @ 100Hz~144Hz+... and my ancient Overlord LCD was over 120Hz...



Consoles were attached to TVs in the past; that is why their Hz didn't matter.
"Unacceptable" as it might be, most of us have played, even online shooters, on a 60 Hz monitor for more than 10 years. Of course, people with 120 Hz monitors did better, but they also had 20 ms ping while being like 10 km from the server while I still have 60-70ms ping in their shitty P2W online games even now on a fiber-optics connection and I'm still hundreds of km from their bloody servers. So what's you flicking point?
 
Unavailable in Europe. Vapourware. Launched early to get a lead on AMD. Not much of a plan if you have none to sell. I'm waiting to see what AMD are going to offer. Just built a 3700X on a Strix X570, an upgrade from a Core 2 Quad running at 3.6 on water. My RX 480 8GB is running BF5 at 75fps average at 1080p ultra. A 2K monitor and card together are the next purchase. I can wait.
Available on eBay for 3x the price or more, from all kinds of "trustworthy" sellers. Just buy it!
 
I am under the impression that the new-generation 3000 base model's performance per watt will be very similar to the 2000 series. It's a nice improvement in terms of raw performance vs. price for NVIDIA, the first in a while.
 