Nvidia RTX 3080 Gaming Performance at 1440p: CPU or Architecture Bottleneck?

Good job with the article, Steve!
The inclusion of 720p results was great to see.

This was very informative, as this generation of GPUs is starting to show there's more to it than just the old "is it CPU or GPU bottlenecked?" question.

Hopefully, other reviewers take note of this and start to update their testing methodologies to avoid such scenarios (as much as possible!).

So basically we're seeing:
- a higher dependence on a good scheduler/driver to take advantage of all those SMs
- DX12 & Vulkan help remove software bottlenecks and paint a clearer picture in the CPU vs GPU bottleneck debate
- current-gen CPUs are less likely to be the bottleneck than older software is
 
It'd be an interesting choice if another card, designed and optimized around 1440p, offered much better value and/or performance for gamers who don't expect to go 4K for at least the next couple of years.

I'm probably one of those 1440p gamers - at least my desktop monitors are - but then again there's always the theoretical possibility I might want to hook my PC up to a 4K TV...
 
Without GPU utilization metrics ... the data from this review isn't enough to conclude whether it's the CPU or the GPU architecture. Please consider monitoring GPU usage during these benchmarks and then scaling CPU speeds up and down to home in on CPU vs GPU bottlenecks...
I would really like to hear why you think this given the information presented in this article.

Whether the GPU ends up getting 100% utilization or not doesn't really show anything that FPS doesn't: a game that has an inherent software bottleneck (like FS 2020) will scale poorly when compared to a game that uses the newer 3D APIs (like F1 2020).

So even if a single game doesn't drive the GPU to 100% (or drives a single CPU core to 100%) that doesn't mean that you're CPU bottlenecked when other games can do it - it just means that you're software bottlenecked.

Does that make sense?
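That said, if anyone does want utilization data sitting next to the FPS numbers, here's a minimal sketch that polls nvidia-smi once per second during a benchmark pass - the query fields are standard nvidia-smi counters, while the duration, interval and output filename are just placeholder values:

```python
# Minimal sketch: log GPU utilization once per second while a benchmark runs,
# so the FPS numbers can be cross-checked against how busy the GPU actually was.
# Assumes nvidia-smi is on the PATH; duration, interval and filename are placeholders.
import csv
import subprocess
import time

def log_gpu_utilization(duration_s=120, interval_s=1, out_path="gpu_util.csv"):
    query = [
        "nvidia-smi",
        "--query-gpu=timestamp,utilization.gpu,clocks.sm,power.draw",
        "--format=csv,noheader,nounits",
    ]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "gpu_util_pct", "sm_clock_mhz", "power_w"])
        end = time.time() + duration_s
        while time.time() < end:
            line = subprocess.check_output(query, text=True).strip()
            writer.writerow([field.strip() for field in line.split(",")])
            time.sleep(interval_s)

if __name__ == "__main__":
    log_gpu_utilization()  # start just before the in-game benchmark pass
```

If utilization sits well below ~99% while no single CPU core is pegged either, that points at the kind of software bottleneck described above rather than a CPU or GPU limit.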
 
FYI, Digital Foundry found that the more demanding the graphics settings, the bigger the gap between the 2080 Ti and the 3080. You scaled from 4K down to 720p on mixed high or very high settings. They scaled at 4K from high to max settings, and even ray tracing with or without DLSS, where DLSS also showed smaller gaps depending on the base resolution.
The more GPU-bound you are, the bigger the benefit.
Will more mature drivers change the scaling at lower resolutions, or will the change be more or less linear throughout, as someone mentioned?

Also, is anyone getting restock notifications in their inbox?
 
It'd be an interesting choice if another card, designed and optimized around 1440p, offered much better value and/or performance for gamers who don't expect to go 4K for at least the next couple of years.

I'm probably one of those 1440p gamers - at least my desktop monitors are - but then again there's always the theoretical possibility I might want to hook my PC up to a 4K TV...
Yeah, I have a 65-inch 4K TV I got for about $350. I really want to try it out. The Mandalorian looks epic on it, despite it being a cheap 4K TV.
 
No, the problem is an architectural bottleneck.

The 3070 will not be on par with the 2080 Ti. It will likely land slightly above the 2080. When Nvidia said the 3070 will be on par with the 2080 Ti, it meant in ray traced scenarios only.

According to Nvidia's benchmarks it will be at least as fast as the RTX 2080 Ti. Just a couple of weeks until we find out.
 
Yeah, I have a 65-inch 4K TV I got for about $350. I really want to try it out. The Mandalorian looks epic on it, despite it being a cheap 4K TV.
Today there's the 48-inch LG CX OLED TV, which offers more than a gaming monitor does compared to anything out there of similar quality in the $1,499 price range or even higher. Hence its relevance to the topic.
And while you can hook your TV up to a PC to watch movies, you can actually do that without a PC as well through the smart hub.
 
I think nvidia just got greedy. I think what they are calling the 3080 was originally going to be the 3070 and the 3090 was going to be the 3080. I think someone got greedy and said, "We can't produce a Titan class chip on Samsung's tech so why don't we rebrand the 3080 as the 3090 and charge Titan prices and call it a Titan class chip." Then they proceeded to rebrand the 3070 as the 3080 to fill that gap.

When you think about it the 3090 uses way too much power for it to be a Titan so we know it isn't one of those.

I think if the renaming hadn't happened, it would have put the 3070 at 2080 Ti speeds, which would have reflected well on their hardware. Instead they renamed, which makes it look like there weren't any performance benefits to upgrading to the 3080 - but hey, check out that new GPU they released... isn't it bad *** for a grand more?
 
My go-to game is still Assassin's Creed Odyssey, and the 3080 is not that impressive in that game. :/
 
This is interesting. Obviously these games run at great framerates @ 1440p with the RTX 3080, but it seems like Nvidia's choice was to target 4K with Ampere. Owning a 1440p monitor @ 144 Hz, and having no intention to upgrade to a 4K monitor anytime soon, this has me wondering whether there is any point in Ampere for me.

AMD's top-tier card is reported to have 80 CUs and run at frequencies higher than 2 GHz. If that is true, AMD's top card will probably have 20.5-22.5 TFLOPS (rough math below), which is right around the same as the RTX 3070. However, if AMD's architecture is better suited for 1440p, equivalent performance at that resolution might be gained by a 60 CU (15-17 TFLOPS) card. AMD might be a much better option for sub-4K gaming, especially from a frames-per-dollar perspective. Of course this assumes that AMD's 80 CU card is priced competitively with the RTX 3070 and not the 3080, and that a 60 CU card is then priced to match up with the coming 3060.

As a console gamer for years, having recently built my very first gaming PC, I must say that 4K really isn't that important to me. I also realize that the PS5 and XSX are not serious 4K machines; many of their games will have framerates locked at 30 fps at 4K resolution. Native 4K at 60 fps will be rare. Most 60 fps games on these systems, I imagine, will use dynamic resolution. So I am more than satisfied with a PC that can achieve 2K @ 60fps with high or ultra settings in next-gen games; I just don't think the 4K @ 60fps experience is worth the additional cost yet. While you can get cheap 4K televisions, 4K monitors are still very expensive, and if you are going to spend that much on a monitor you really need the $700 RTX 3080 to make sure you are taking full advantage of it. I guess if you have that kind of money to spend on these machines, more power to you, but I have a family haha.
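For reference, the TFLOPS estimates above fall straight out of the usual shaders × clock back-of-envelope math. A minimal sketch, assuming RDNA-style CUs with 64 shaders each and 2 FLOPs per shader per clock (FMA):

```python
# Back-of-envelope FP32 throughput, assuming 64 shaders per CU and
# 2 FLOPs per shader per clock. Clock speeds are the rumored figures.
def tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

print(tflops(80, 2.0), tflops(80, 2.2))  # ~20.5 to ~22.5 TFLOPS
print(tflops(60, 2.0), tflops(60, 2.2))  # ~15.4 to ~16.9 TFLOPS
```

Raw TFLOPS don't translate directly into FPS across architectures, of course, so this is only a ballpark for comparing the rumored configurations.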
 
The problems Ampere has are impossible to overcome with updates alone. Ampere will likely gain a similar amount of performance from updates as Turing did, which was a small amount.

I upgraded from a 1080 Ti to a 2080 back when it was new. I remember all the reviews showing the cards being neck and neck, with just a slight favor towards the 2080. Looking at newer titles and benchmarks at the 3080 launch, there is closer to a 20% difference between the cards. Improving 20% with two years of driver updates isn't that bad.

There are plenty of improvements that still need to be taken advantage of by developers, so it is quite possible to see even more of a difference - things like changes to speed up RTX/DLSS that are not implemented in ANY game right now.
 
I think nvidia just got greedy. I think what they are calling the 3080 was originally going to be the 3070 and the 3090 was going to be the 3080. I think someone got greedy and said, "We can't produce a Titan class chip on Samsung's tech so why don't we rebrand the 3080 as the 3090 and charge Titan prices and call it a Titan class chip." Then they proceeded to rebrand the 3070 as the 3080 to fill that gap.

When you think about it the 3090 uses way too much power for it to be a Titan so we know it isn't one of those.

I think if the renaming hadn't happened, it would have put the 3070 at 2080 Ti speeds, which would have reflected well on their hardware. Instead they renamed, which makes it look like there weren't any performance benefits to upgrading to the 3080 - but hey, check out that new GPU they released... isn't it bad *** for a grand more?

For some reason, Nvidia is getting rid of all their video card brands: Titan, Tesla, and Quadro are gone, and rumor says Tegra is going bye-bye too. Only GeForce will remain.

I think the RTX 3090 is more like giving a piece of the pie to OEMs, as the Titan was exclusive to select Nvidia partners. I'm sure OEMs love being able to offer a $1,500+ card.
 
I upgraded from a 1080 Ti to a 2080 back when it was new. I remember all the reviews showing the cards being neck and neck, with just a slight favor towards the 2080. Looking at newer titles and benchmarks at the 3080 launch, there is closer to a 20% difference between the cards. Improving 20% with two years of driver updates isn't that bad.

There are plenty of improvements that still need to be taken advantage of by developers, so it is quite possible to see even more of a difference - things like changes to speed up RTX/DLSS that are not implemented in ANY game right now.

I don't know what reviews you are looking at but from what I'm seeing, there is ZERO improvement:


The 2080 went from 201 FPS at 1440p ultra to 201 FPS at 1440p ultra in R6:Siege

Please provide a source for that 20% figure.
 
Sorry, but I don't get this article. 1440p is faster than 2160p, and slower than 1080p as expected. So what's the problem again? Oh, seems there really is no problem after all.
 
I'll have to say that I'm confused. Since the 3080 performed significantly better than the 2080 Ti on some titles at 1080p, I would have thought that this showed it couldn't be the architecture of the GPU at that resolution, but had to be a CPU bottleneck, because different games might be less CPU-intensive.
 
Your conclusion is consistent with your initial statement.

The point is that when lowering the resolution on the 2080 Ti from 2160p to 1440p you get 77% more frames and from 2160p to 1080p you get 135% more frames. This card gives decent FPS gains when lowering the resolution to where many people play their games at high refresh rates.

Lowering the resolution on the 3080 from 2160p to 1440p gets you 68% more frames and from 2160p to 1080p you get 110% more frames. This is not giving as good an FPS boost at lower resolutions as the previous gen card does, and is especially lackluster in games like WWZ, Metro Exodus, Hitman 2, and FS'20.

So this card has less value for 1080p HiRR players in those games than its excellent 4K numbers would suggest. And its value is somewhat diminished at 1440p HiRR, though overall it's pretty good there.
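For clarity, those percentages are just the relative FPS gain over the 4K baseline. A minimal sketch of the arithmetic - the FPS values here are hypothetical placeholders chosen to reproduce the 2080 Ti example, not the review's actual averages:

```python
# Percent FPS gained when dropping from 4K to a lower resolution.
def scaling_gain(fps_low_res, fps_4k):
    return (fps_low_res / fps_4k - 1) * 100

# Hypothetical averages, picked so the output matches the 77% / 135% example.
fps_4k, fps_1440p, fps_1080p = 60, 106, 141

print(f"{scaling_gain(fps_1440p, fps_4k):.0f}% more frames at 1440p")  # 77%
print(f"{scaling_gain(fps_1080p, fps_4k):.0f}% more frames at 1080p")  # 135%
```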
 
Just stop with this old-hat 4K (invisible gaming) malarkey.
Nvidia marketing said these were 8K (invisible gaming) gaming cards.
And why isn't the tech press calling out Nvidia for the 'scalpers' BS, when it is patently clear these products were made available by Nvidia to the general public in EXTREMELY limited numbers?
The various AIBs have said as much.
 
If you manually set the CPU at three frequencies (e.g. 4.5 GHz, 3 GHz, 2.25 GHz), you could get a better idea of the possible FPS per CPU GHz. If you halve the CPU clock speed (all-core), you could estimate the maximum frame rate expected at the max clock speed (double the half-speed FPS). Use CPU & GPU monitoring to see % usage and other counters.

It's interesting to see that when you double the pixels per frame, the frame rate does not halve.
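A rough sketch of how that halve-the-clock estimate could be read, with hypothetical FPS values (this simply follows the logic above and assumes the game really is CPU-bound at the reduced clock):

```python
# If a game is purely CPU-bound, halving the all-core clock should roughly
# halve the frame rate, so 2x the half-clock FPS approximates the CPU-imposed
# ceiling at full clock. The FPS numbers below are hypothetical examples.
def cpu_bound_ceiling(fps_at_half_clock):
    return 2 * fps_at_half_clock

measured_full_clock_fps = 140   # e.g. at 4.5 GHz (hypothetical)
measured_half_clock_fps = 95    # e.g. at 2.25 GHz (hypothetical)

ceiling = cpu_bound_ceiling(measured_half_clock_fps)  # 190 FPS
if measured_full_clock_fps < 0.9 * ceiling:
    print("Full-clock FPS sits well below the CPU ceiling -> GPU or software bound")
else:
    print("Full-clock FPS tracks the CPU ceiling -> likely CPU bound")
```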
 
Who exactly is dumb enough to buy a 3080 and game at 1440p?!
3xxx are for 4K gaming.
Well, considering that there are not many 4K monitors that go above 60 Hz, and the ones you do find are very pricey, I'm guessing a lot of people who want 120-144 Hz gaming at 1440p.
 
The AMD Radeon RX 6000 Big Navi (which model?) early benchmark shows it trailing the GeForce RTX 3080, but by how much? Looking at the benchmarks, the RTX 3080 is at 76 FPS in Gears 5 @ 4K ultra settings (this review), compared to the 73 FPS AMD presented for the RX 6000 series.

The footnote from the presentation is interesting:

RX-532: Testing done by AMD performance labs 09/26/20 on a system configuration with a new AMD graphics card, graphics driver 2009241322_20.45. Ryzen 9 5900X CPU, 16GB DDR-3200MHz, engineering motherboard and bios, on Win10 Pro x64 19041.508. Games tested at 4K as follows: Borderlands 3 (DX12 Badass). Call of Duty: Modern Warfare (DX 12, Ultra). Performance may vary. GPU Confidential. RX-532

Notice they are only using DDR4-3200, which means the TechSpot benchmarks (Steve from Hardware Unboxed), run with a Core i9-10900K and DDR4-3200, make for a good comparison relative to other reviews.
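To put a rough number on "by how much", using just that one quoted data point:

```python
# Quick relative comparison of the single Gears 5 @ 4K data point quoted above.
rtx_3080_fps = 76   # TechSpot review, Gears 5, 4K ultra
rx_6000_fps = 73    # AMD's teaser number

deficit_pct = (rtx_3080_fps - rx_6000_fps) / rtx_3080_fps * 100
print(f"RX 6000 trails the RTX 3080 by about {deficit_pct:.1f}% here")  # ~3.9%
```

Different test scenes, drivers, and settings make this a very loose comparison, of course.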
 
As a current Intel user, I'm going on the record here saying that at 1440p with all settings cranked in a game, the difference between AMD and Intel is nil. Yes, you can benchmark a small difference, but at this res, with maxed quality settings like ray tracing already eating up GPU performance - which even my vaunted 3090 experiences to some degree (very minor, the 3090 eats 1440p ray tracing for breakfast mostly, but it isn't the same as no ray tracing at all) - you will not feel any difference in gaming between AMD and Intel CPUs.

NEITHER drops enough for a loss of smoothness to be felt in gameplay if you have a solid monitor with adaptive sync. Honestly, we live in an era of "you can't go wrong" in gaming performance; why is that so hard for people to accept and be happy about? I love my Intel machine, but I will build an AMD machine for somebody in a heartbeat just as quickly.

The only people I REALLY recommend choose Intel over AMD are the competitive, low-res, high-refresh gamers where Intel's single-threaded advantage shows up. That may also be about to change; we will see how Zen 3 plays out, and recommendations will adapt as new information emerges. THAT is how it's done.

It's okay to be a fan of one or both companies, but failing to acknowledge that we live in a time when pretty much either choice results in a wicked fast gaming machine is just living in the past and refusing to live in the now.
 