Nvidia RTX 3080 Gaming Performance at 1440p: CPU or Architecture Bottleneck?

I think it's a combination of the CPU and the GPU architecture, as shown here, but it's also the game and driver software at this point. The drivers need a bit more maturity, I suspect.

I think as more forward-looking software with more modern game engines arrives in the next year or two, the 3080 will stretch its legs a bit better at 1440p.

It's typical of this scenario that the newer GPU begins to pull out more of a lead in newer games, even if it is slight.
 
It will be interesting to see how Ryzen 5000 changes this; we could well see up to 25% improvements in titles like FS2020.
 
Since the term 'architecture' applies equally to CPUs, the title's phrasing of "CPU or architecture bottleneck" bugs the hell out of me. "CPU or GPU bottleneck", perhaps, or at least "CPU or Ampere architecture".

It's a bit of a comprehension fail if you can't work out that the architecture in question is that of the RTX 3080... given the title.
 
What about ultrawide? I have a 3440x1440 screen with my 3080, and everything I have tried, oddly apart from Q2 RTX, gets well over 60 fps, and over 100 in most RTX games. I'd be interested to see how it would do in your benchmarks with your R9; I just have a Ryzen 7, so I can't really compare like for like.
 
It's a bit of a comprehension fail if you can't work out that the architecture in question is that of the RTX 3080... given the title.
Sure, but basic journalistic standards say headlines should be clear and precise rather than comprehension exercises for the reader. Also, it's a bit more than just a language issue, as you are implying that any potential GPU bottleneck must lie with the Ampere architecture itself rather than with the drivers, VRAM bandwidth, etc.
 
Some of that sounds like immature drivers... maybe there will be a follow-up article in several months or so.
 
I think it's a combination of the CPU and the GPU architecture, as shown here, but it's also the game and driver software at this point. The drivers need a bit more maturity, I suspect.

I think as more forward-looking software with more modern game engines arrives in the next year or two, the 3080 will stretch its legs a bit better at 1440p.

It's typical of this scenario that the newer GPU begins to pull out more of a lead in newer games, even if it is slight.
Absolutely. The 3080 is so powerful that at lower resolutions software can’t keep up.

Game engines don’t scale infinitely. GTA V, for instance, becomes more inconsistent the faster you push it, and above 120 FPS it becomes wholly unreliable for benchmarking. Most games and their engines are not meant to run at 150+ FPS, and something like AI code or asset loading will get in the way.

Next-gen games will take better advantage, but this generation has seen demands on hardware slow down. It’s getting prohibitively expensive to push graphics further. Look how well the 290X has held on over seven years already.
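
To put rough numbers on that (my own back-of-the-envelope, not from the article): the same small stall from AI or asset streaming eats a much bigger share of the frame budget the higher the frame rate, which is part of why benchmark runs get noisier up there.

```python
# Back-of-the-envelope: how much of the per-frame budget a 1 ms hitch
# (AI tick, asset streaming, etc.) consumes at various frame rates.
for fps in (60, 120, 150, 240):
    budget_ms = 1000.0 / fps        # time budget per frame in milliseconds
    hitch_share = 1.0 / budget_ms   # fraction of that budget a 1 ms stall takes
    print(f"{fps:3d} FPS -> {budget_ms:5.2f} ms budget; a 1 ms stall is {hitch_share:5.1%} of it")
```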
 
You tested it with a 3950X, which is a massive bottleneck at 1080p and 1440p. Every site that tested AMD and Intel CPUs side by side is showing that.

Stop rationalizing your bad choice and spreading malarkey.
 
Absolutely. The 3080 is so powerful that at lower resolutions software can’t keep up.

Game engines don’t scale infinitely. GTA V, for instance, becomes more inconsistent the faster you push it, and above 120 FPS it becomes wholly unreliable for benchmarking. Most games and their engines are not meant to run at 150+ FPS, and something like AI code or asset loading will get in the way.

Next-gen games will take better advantage, but this generation has seen demands on hardware slow down. It’s getting prohibitively expensive to push graphics further. Look how well the 290X has held on over seven years already.

Absolute nonsense. Very few engines have physics or game logic tied to the frame rate; that was decoupled back in the DX8/DX9 era, many years ago, and where it is still a thing, it is a deliberate design choice.

The vast majority of games have no problems with high frame rates.
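
For anyone curious, the usual way that decoupling is done is a fixed-timestep loop with an accumulator, so simulation runs at a constant rate while rendering runs as fast as the GPU allows. A rough Python sketch of the pattern (the update/render functions are placeholders of my own, not from any particular engine):

```python
import time

SIM_DT = 1.0 / 60.0  # fixed simulation step: physics/AI always advance at 60 Hz

def update(dt):
    # advance physics, AI and game logic by a fixed dt (placeholder)
    pass

def render(alpha):
    # draw the scene, interpolating between the last two sim states (placeholder)
    pass

def game_loop():
    previous = time.perf_counter()
    accumulator = 0.0
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now

        # run as many fixed simulation steps as the elapsed time requires
        while accumulator >= SIM_DT:
            update(SIM_DT)
            accumulator -= SIM_DT

        # render as often as possible; frame rate is independent of sim rate
        render(accumulator / SIM_DT)
```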
 
Missed opportunity. While the article raises the question of a CPU bottleneck versus a lack of architectural optimization, it gives very little attention to driver-level optimization, which is still immature for Ampere compared with Turing's two years of driver work and the noticeable performance growth since Turing's launch.

While I understand the article's primary focus was detecting CPU bottlenecking and its effects, not Turing's driver optimization maturity versus Ampere's lack thereof, it is still an important factor and one not to be underestimated in the final assessment.
 
You tested it with a 3950X, which is a massive bottleneck at 1080p and 1440p. Every site that tested AMD and Intel CPUs side by side is showing that.

Stop rationalizing your bad choice and spreading malarkey.
Dude, all of the testing in this article was performed with an i9-10900K. And they did comparisons of the 3080 on AMD and Intel platforms before. Stop rationalizing your fanboyism and spreading malarkey.
 
So I guess this is how Big Navi will beat the RTX 3080 at 1440p and below.

I'm curious whether the RTX 3070 will show the same behavior. Given that Nvidia's benchmarks show it on par with or faster than an RTX 2080 Ti at 1440p, I'm going to say maybe not.

 
You tested it with a 3950X, which is a massive bottleneck at 1080p and 1440p. Every site that tested AMD and Intel CPUs side by side is showing that.

Stop rationalizing your bad choice and spreading malarkey.

Yes. They tested with a 3950X initially, which is why the 10900K used here was choking the 3080 so badly.

Smh.

Thanks for proving that reading comprehension is not one of your strong points.
 
"Our overall outlook for the RTX 3080 has not changed, and that means the GPU is on average about 70% faster than the 2080 at 4K"

Stop using this misleading comparison. For all intents and purposes, the 3080 is the Ti model this time around. There's a mere 10% difference between it and the 3090, and the 3080 is using the big die.

If your intent was to compare price to price, don't. Turing in general was terrible value all around. Might as well say 70% over a 1080 Ti: the same damn card at the same price, only a generation older. Just shows you how bad of a value Turing was.
 
I think it's a combination of the CPU and the GPU architecture, as shown here, but it's also the game and driver software at this point. The drivers need a bit more maturity, I suspect.

I think as more forward-looking software with more modern game engines arrives in the next year or two, the 3080 will stretch its legs a bit better at 1440p.

It's typical of this scenario that the newer GPU begins to pull out more of a lead in newer games, even if it is slight.

No, the problem is an architectural bottleneck.

The problems Ampere has are impossible to overcome with updates alone. Ampere will likely gain a similar amount of performance from updates as Turing did, which was a small amount.

Absolutely. The 3080 is so powerful that at lower resolutions software can’t keep up.

Game engines don’t scale infinitely. GTA V, for instance, becomes more inconsistent the faster you push it, and above 120 FPS it becomes wholly unreliable for benchmarking. Most games and their engines are not meant to run at 150+ FPS, and something like AI code or asset loading will get in the way.

Next-gen games will take better advantage, but this generation has seen demands on hardware slow down. It’s getting prohibitively expensive to push graphics further. Look how well the 290X has held on over seven years already.

GTA V and Bethesda games have issues with high frame rates. One was designed for consoles and the other is still using an engine from the '90s. The vast majority of games have no issues with high frame rates.

"Absolutely. The 3080 is so powerful that at lower resolutions software can’t keep up."

This comment makes very little sense and is disproven by the article, which shows games that are bottlenecked by the GPU at 1080p and even more so at 1440p.

So I guess this is how Big Navi will beat the RTX 3080 at 1440p and below.

I'm curious whether the RTX 3070 will show the same behavior. Given that Nvidia's benchmarks show it on par with or faster than an RTX 2080 Ti at 1440p, I'm going to say maybe not.

The 3070 will not be on par with the 2080 Ti. It will likely land slightly above the 2080. When Nvidia said the 3070 would be on par with the 2080 Ti, it meant in ray-traced scenarios only.

So I guess Nvidia Ampere is just like AMD Vega.

Correct. The architecture is designed in such a way that the only workloads in which you can get full occupancy of all the cores are professional ones.
 
Without GPU utilization metrics, the data from this review isn't enough to conclude whether it's the CPU or the GPU architecture. Please consider monitoring GPU usage during these benchmarks and then scaling CPU speeds up and down to home in on CPU vs. GPU bottlenecks...
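
For example, something as simple as this logged alongside each benchmark pass would show whether the card is actually saturated. This is just a rough sketch assuming the pynvml bindings for NVML are installed; the one-second sampling interval and the output format are arbitrary choices of mine, not anything from the article:

```python
# Minimal GPU utilization logger using NVML via the pynvml bindings.
# Run it alongside a benchmark pass; sustained utilization well below ~95-99%
# usually points at a CPU (or engine) limit rather than the GPU.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"{time.strftime('%H:%M:%S')}  GPU: {util.gpu:3d}%  memory controller: {util.memory:3d}%")
        time.sleep(1.0)  # 1 s sampling interval (arbitrary)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```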
 