How Does the GTX 1080 Ti Stack Up in 2019? 39 Game Benchmark

A 2070 can't outperform a 1080 Ti at 4K ultra (insane where possible), as it takes around 10GB of VRAM to run it at this level. The 2070 only has 8GB of VRAM. Consequently, only the 1080 Ti and 2080 Ti can actually run this game at max settings, as they are the only two cards with enough VRAM.
 
Okay, I have to say this out loud. I recently spent $1,080 on an MSI GeForce 1080 Ti 11GB Duke GPU. I know, but the card is a beast and it was the only one available at the time. The guy was basically price gouging. I wanted the card because I also bought a 144Hz 1440p monitor with G-Sync. Actually got a good deal on that: 27" 1440p 144Hz G-Sync for only $450 on Amazon. It is huge! I literally just bought this card like 3 months ago. I don't remember exactly when, but I do remember RTX cards being stocked in Best Buy by then.

I stopped gaming like 5 years ago and just started back up. I originally spent $900 to build a PC. It wasn't bad. The GPU was a 1050 Ti. The CPU is an 8th-gen i5 that overclocks to 4.6 GHz (I will be upgrading that soon to an i7 or better). Okay, well, I made some upgrades because I got back into gaming.

I got another 16GB of Corsair memory for a total of 32GB. This was fairly cheap, so no big problem. When I bought the memory at Best Buy I noticed they had this whole new stock of RTX cards. I remembered reading about them. I picked one out for like $600 or something; I forget which one. Anyway, I took it home and it would not fit into the PCIe x16 slot. I really am an *****. The problem, I later realized, was that I didn't take the protective cap off the PCIe connector on the GPU, which made it seem like it required some newer type of slot. I was in such a hurry to get the card into the computer that I didn't look. Also, the box said it required a double-width slot, which I mistook to mean it needed some sort of double PCIe x16 slot. I know that sounds stupid, but it just meant the card takes up two spaces on the backplate. I KNOW. This led me to look online for another GPU. I had the money; it really wasn't a big deal. I figured I wanted to get something good. I found a 1080 Ti that was new for like $500-600. Then I saw this huge 1080 Ti 11GB "Duke" MSI card. Only 1 left, from 1 seller. Well, I bought it. I noticed that after it was delivered he posted another listing for about $100 more that also said 1 left. He is just a price-raising f#$k, but if I am willing to pay for it, I guess more power to him. Anyway, Wolfenstein runs at over 200FPS at high/ultra settings.

Anyway, I love my computer now and I love that extremely huge 1080 Ti card. My motherboard is micro-ATX (I was not thinking ahead) and the card literally extends past the mobo by about 2 1/2 inches. It is that large. So, basically, my CPU and motherboard can be upgraded and I would get some better performance. My memory could also go up to a higher MHz, as the mobo supports it. Sadly, I have to wait for those things because I just had to drop $5,700 on dental work. That, along with the 1080 Ti and monitor, was all my savings.

Okay, I had to get that out there because part of me is a little pissed off that I had to pay that much for the card, but the other part of me is happy because the card is literally a beast. I have been playing all the new and old titles on high/ultra. Every game looks great. The card never goes above 59°C. It idles at 29/30 depending on whether I run my fan or not. It is actually strange that people are still buying that same Duke model GPU on Amazon for that crazy price, like the 1080 Ti is going out of style. Now I have my old 1050 Ti sitting around doing nothing. I think I am more pissed that I spent $900 originally to build my PC and then went on to buy a GPU for just over $1,000 along with a $450 monitor. I probably could have gotten an RTX card... like a nice one, for $1,000? I don't know. I have not been paying attention since I bought the new GPU and monitor. Just gaming at 1440p.
 
“Half” so right where Moore’s Law would predict. Oh wait, Moore’s Law would put half at the same performance, not slower.

There is nothing complex about the 5700 series that requires tons of engineering. Let me qualify that: the 5700 does take a ton of extra cooling, and considering AMD's history with hot cards, that is saying something. There won't be any small-form-factor 5700 cards. Or medium ones. AIBs are slow-rolling because margins are too thin.

By your logic, AMD hasn't had a "flagship" card in quite a while. In the real world, the best of your product line is, by default, your flagship. Saying "it's not their best card because they named it mid-range" is like saying "I have a girlfriend, but she lives in Canada, you don't know her."

1. Did you even read the power consumption numbers in this article for the 5700 and 5700 XT? Have you even seen a single aftermarket card review of either the 5700 or 5700 XT? You clearly have not, as the cards run cool even on the cheaper AIB solutions.

2. Nothing complex? Says the guy who can't even read the power consumption part. Right, it's only an entirely new uArch :laughing:

3. Wrong. Vega 64 was a flagship and so was the Radeon VII. Your analogy is completely nonsensical. It doesn't take a genius to figure out that a card with mid-range pricing and a mid-range die size isn't a flagship. All the facts are there; you choose to ignore them. There isn't a single factor that points to it being a flagship, so you have to fabricate your own reasoning.
 
Why did they choose to put the flagship 1080ti up against the 2070 and not the 2080ti? Isn't that kinda comparing apples to oranges? Of course a top of the line graphics card is going to edge out a next gen medium range card.
 
A 2070 can't outperform a 1080 Ti at 4K ultra (insane where possible), as it takes around 10GB of VRAM to run it at this level. The 2070 only has 8GB of VRAM. Consequently, only the 1080 Ti and 2080 Ti can actually run this game at max settings, as they are the only two cards with enough VRAM.
It's not a local memory issue - it's a ROP limitation. An RTX 2070 produces a peak fill rate 75% that of a GTX 1080 Ti, and at 4K, a game's performance is far more affected by fill rate than by running short of local memory. This isn't to say that having 11 GiB of RAM compared to 8 or 6 GiB isn't an advantage; it's just that it's a smaller advantage compared to having 88 ROPs @ 1.5 GHz versus 64 ROPs @ 1.7 GHz.

It's worth noting that VRAM measurements by logging software such as Afterburner aren't necessarily indicating the amount of local memory being used. You can see this in benchmarks such as this one:


If you look at the games where a built-in benchmark is used, e.g. Shadow of the Tomb Raider, you can see that each separate test has a notably different value for the VRAM being 'used'. Given that this is exactly the same graphics workload each time, there's no reason for a different amount of data to be stored on the graphics card just because a different number of CPU cores is being used - it's the same meshes, textures, render targets, etc. The game could be determining how best to manage the resource pool based on the hardware detected, and it's possible that SotTR is assuming a 2-core CPU won't have the resources to manage the 'full' amount.
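
For anyone curious where that 75% figure comes from: theoretical peak pixel fill rate is just ROP count multiplied by clock speed, so the exact ratio depends on which boost clocks you plug in. Here's a rough back-of-the-envelope sketch (my own illustration, using reference boost clocks rather than the rounded figures above):

[code]
// Rough sketch only: theoretical peak pixel fill rate = ROPs x clock.
// The clock values below are the reference boost clocks I'm assuming;
// real cards boost differently, so treat the ratio as approximate.
#include <cstdio>

double peakFillRateGPixels(int rops, double clockGHz)
{
    return rops * clockGHz;   // gigapixels per second
}

int main()
{
    const double gtx1080ti = peakFillRateGPixels(88, 1.582); // ~139 GP/s
    const double rtx2070   = peakFillRateGPixels(64, 1.620); // ~104 GP/s
    std::printf("RTX 2070 fill rate is roughly %.0f%% of a GTX 1080 Ti's\n",
                100.0 * rtx2070 / gtx1080ti);  // ~74%, close to the 75% quoted above
    return 0;
}
[/code]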
 
I find this very frustrating. I saw this headline and saved the article to read on my lunch break, since I have a 1080Ti and have been thinking to myself: "hmm, I wonder if I should consider upgrading my GPU before the next generation rolls around". This article seemed like EXACTLY what I needed to know.

WRONG. Because you only compare it to TWO FREAKING CARDS, we have no idea how it "stacks up" in 2019 - you've not shown us anything close to the comparable options that a 1080 Ti owner would consider. 'Cause I can assure you that it would not be either of the cards you chose.

This article is for new builds shopping within a VERY specific budget range. Totally misleading.
 
The Radeon 5700 XT is a clear winner here. It's slightly slower in most of the tests, but considering it's 40% cheaper... there's no competition. Unless you love throwing money into the wind, in which case, why don't you just send your cash surplus to me?
 
...you only compare it to TWO FREAKING CARDS, we have no idea how it "stacks up" in 2019, because you've not shown us anything close to the comparable options that a 1080Ti owner would consider
What would a GTX 1080 Ti owner consider? Just graphics cards released in 2019? If so, then here are those releases:

GeForce RTX 2080 Super
GeForce RTX 2070 Super
GeForce RTX 2060 Super
GeForce RTX 2060
GeForce GTX 1660 Ti
GeForce GTX 1660
GeForce GTX 1650 Ti
GeForce GTX 1650
Radeon VII
Radeon RX 5700 XT 50th Anniversary
Radeon RX 5700 XT
Radeon RX 5700
Radeon RX 640
Radeon RX 560 XT
Radeon RX 540 XT
Radeon 630
Radeon 550X
Radeon 540X
Radeon 540

The cards pitched against the 1080 Ti were the 2070 Super and the RX 5700 XT; so other than the 2080 Super and the Radeon VII, all of the others are less powerful than the tested pair. Want to know how a 1080 Ti compares to the other two? Then use these reviews:

https://www.techspot.com/review/1881-geforce-rtx-2080-super/
https://www.techspot.com/review/1789-amd-radeon-vii/

Now if one is looking to compare the 1080 Ti against graphics cards available now but released in the last two years, then just look up the reviews for the GeForce RTX 2080 Ti, 2080, 2070, Radeon RX 590 and Radeon RX 580:

https://www.techspot.com/review/1701-geforce-rtx-2080/
https://www.techspot.com/review/1727-nvidia-geforce-rtx-2070/
https://www.techspot.com/review/1781-geforce-rtx-2060-mega-benchmark/
https://www.techspot.com/review/1747-amd-radeon-rx-590/

So using these we can see that the RX 590 and RX 580 are slower than an RTX 2070, which in turn is a little bit slower than an RTX 2080, and that card is pretty much the same (overall) as a 1080 Ti (a 2080 Super is a little faster still than the older 2080). That means we can deduce that for an Nvidia-based upgrade, the choices are a 2080 Super (for around 5% better performance) or a 2080 Ti (for around 22% better performance). For an AMD-based upgrade, the tests show that a 1080 Ti is faster than an RX 5700 XT and about the same as a Radeon VII. Naturally, all of this is system and game/settings dependent.

Now one may want to factor power/heat, noise, cost, and additional features into the upgrade consideration, but those don't really affect the decision (a slower graphics card that offers, for example, DLSS support isn't much of an 'upgrade'). Long story short: there's only one upgrade that a 1080 Ti owner would consider, and that's a 2080 Ti. One might be tempted by the RX 5700 XT simply because it's 'new', but this particular article clearly dismisses that idea.
 
“Half” so right where Moore’s Law would predict. Oh wait, Moore’s Law would put half at the same performance, not slower.

There is nothing complex about the 5700 series that requires tons of engineering. Let me qualify that: the 5700 does take a ton of extra cooling, and considering AMD's history with hot cards, that is saying something. There won't be any small-form-factor 5700 cards. Or medium ones. AIBs are slow-rolling because margins are too thin.

By your logic, AMD hasn't had a "flagship" card in quite a while. In the real world, the best of your product line is, by default, your flagship. Saying "it's not their best card because they named it mid-range" is like saying "I have a girlfriend, but she lives in Canada, you don't know her."

1. Did you even read the power consumption numbers in this article for the 5700 and 5700 XT? Have you even seen a single aftermarket card review of either the 5700 or 5700 XT? You clearly have not, as the cards run cool even on the cheaper AIB solutions.

2. Nothing complex? Says the guy who can't even read the power consumption part. Right, it's only an entirely new uArch :laughing:

3. Wrong. Vega 64 was a flagship and so was the Radeon VII. Your analogy is completely nonsensical. It doesn't take a genius to figure out that a card with mid-range pricing and a mid-range die size isn't a flagship. All the facts are there; you choose to ignore them. There isn't a single factor that points to it being a flagship, so you have to fabricate your own reasoning.


1 - I trust Tom's numbers on power more than Hardware Unboxed's. Tom's has the XT at 217 W with a peak of 243 W. Yeah, those AIB cards use HUGE heatsinks and fans. The Sapphire version has a huge overhang in addition to being long as hell. As I said, there won't be any SFF cards because they need tons of cooling.

2 - Hey *******, the internal architecture has ZERO to do with the VRM, DRAM, and caps or the related cooling. Oh wait, MSI couldn't put full heat pads on their DRAM, so maybe it is hard when you are trying to shave pennies. AIBs are not making the chips; AMD makes them and delivers them, and the AIBs plug them into the card and build the cooling.

3. All 7 people who bought the Vega 64 love you for that comment. There is a reason you can buy the Radeon VII in reference form only. Oh, and they are both EOLed, so back to my point: the 5700 XT is their flagship now.
 
I think I am more pissed that I spent $900 originally to build my PC and then went on to buy a GPU for just over $1,000 along with a $450 monitor. I probably could have gotten an RTX card... like a nice one, for $1,000? I don't know. I have not been paying attention since I bought the new GPU and monitor. Just gaming at 1440p.

Oh yeah, that $1,000 could have been spent much more wisely. But hey man, don't sweat it. It's over and you have a beast for a long while.
 
1 - I trust Tom's numbers on power more than Hardware Unboxed's. Tom's has the XT at 217 W with a peak of 243 W. Yeah, those AIB cards use HUGE heatsinks and fans. The Sapphire version has a huge overhang in addition to being long as hell. As I said, there won't be any SFF cards because they need tons of cooling.

2 - Hey *******, the internal architecture has ZERO to do with the VRM, DRAM, and caps or the related cooling. Oh wait, MSI couldn't put full heat pads on their DRAM, so maybe it is hard when you are trying to shave pennies. AIBs are not making the chips; AMD makes them and delivers them, and the AIBs plug them into the card and build the cooling.

3. All 7 people who bought the Vega 64 love you for that comment. There is a reason you can buy the Radeon VII in reference form only. Oh, and they are both EOLed, so back to my point: the 5700 XT is their flagship now.

1. These must be the numbers you are referencing:

https://www.tomshardware.com/review...2060-super-geforce-rtx-2070-super,6207-6.html

The 2070 Super consumes 226 W in their testing, which is close enough to TechSpot's own testing. They did use different games to measure power draw, so it's reasonable to think that both results are valid. Given that the cards are trading blows depending on the game, you have to assume they are in fact very close to each other in consumption. I don't see how one could "trust" Tom's numbers over TechSpot's either. Tom's, the same company that made the infamous "just buy it!" article?

2. You said this?

"MSI couldn't put full heat pads"

:laughing: EVGA forgot to put any pads at all on their Nvidia 10xx-series SC and FTW models as well. No, the single straw you are grasping at represents nothing other than a single company messing up on a single model. Ditto goes for EVGA. Your insistence on going to maximum hyperbole is at least providing entertainment.

3. I repeat: it doesn't take a genius to figure out that a card with mid-range pricing and a mid-range die size isn't a flagship. All the facts are there; you choose to ignore them. You continue to submit your opinion without providing any supporting argument or facts.
 
Hehe, sorry but I cannot resist: why are we comparing last year's flagship to this year's mid-range again? 8-D If anything, one would expect a flagship-to-flagship comparison, unless you guys are assuming a flagship owner won't be looking to upgrade to this year's flagship for some reason. If he splashed out last year, then more likely than not he will be looking to splash out again ;-) but what do I know ;-)
It's probably to show how much GPUs have progressed. Today's upper-midrange options (2070, 5700 XT) at ~$400-$450 MSRP are equal to last generation's top-tier option (1080 Ti) at $700-$750 MSRP.

It's also a decent price-for-price comparison of getting an older used top-tier GPU vs getting a new current-gen upper-mid-tier GPU. Since they are all going for around $400-$500, this is useful for people with $500 comparing used vs new GPUs.

How much they have "progressed"? You must be new to this. The 2070 Super is equal to the 2080, which started priced at $700, the same as I paid for my 1080 Ti over a year earlier.

Compare that to the 980 Ti to 1080 Ti replacement cycle. The 1080 Ti was massively faster and had a $30-50 higher MSRP on release. The only card faster than my 1080 Ti when Turing was released was the 2080 Ti, at a $500 to $600 price *increase*!

The *only* reason we have the 2070 Super at the $500 price point is because AMD released Navi. Notice how AMD has nothing to compete with the 1080 Ti, so it's still priced with that huge premium?

There won't be a competitively priced replacement for the 1080 Ti until Intel or AMD come out with a card that can compete with Nvidia's 2080 Ti. And within days of AMD or Intel announcing a $700 GPU that competes with the 2080 Ti, Nvidia will announce a 2080 Ti Super priced at $700-800.
 
How much they have "progressed"? You must be new to this. The 2070 Super is equal to the 2080, which started priced at $700, the same as I paid for my 1080 Ti over a year earlier.

Compare that to the 980 Ti to 1080 Ti replacement cycle. The 1080 Ti was massively faster and had a $30-50 higher MSRP on release. The only card faster than my 1080 Ti when Turing was released was the 2080 Ti, at a $500 to $600 price *increase*!

The *only* reason we have the 2070 Super at the $500 price point is because AMD released Navi. Notice how AMD has nothing to compete with the 1080 Ti, so it's still priced with that huge premium?

There won't be a competitively priced replacement for the 1080 Ti until Intel or AMD come out with a card that can compete with Nvidia's 2080 Ti. And within days of AMD or Intel announcing a $700 GPU that competes with the 2080 Ti, Nvidia will announce a 2080 Ti Super priced at $700-800.

That's not even including the fact that Pascal stayed around for longer than normal either. As you mentioned, the kick to the face was that the new cards didn't even provide more performance at the same price. You had to spend more just to get a faster card than the one you bought 3 years ago.
 
How much they have "progressed"? You must be new to this. The 2070 Super is equal to the 2080, which started priced at $700, the same as I paid for my 1080 Ti over a year earlier. Compare that to the 980 Ti to 1080 Ti replacement cycle. The 1080 Ti was massively faster and had a $30-50 higher MSRP on release. The only card faster than my 1080 Ti when Turing was released was the 2080 Ti, at a $500 to $600 price *increase*!

You must not have actually read my post. The 2070 Super is roughly equal to the GTX 1080 Ti, and the 2070 Super's MSRP is $500. The 1080 Ti's MSRP is $700. You can say that isn't very much progress, but it's still some progress and thus provides a legitimate reason for why TechSpot is comparing them. Nowhere did I claim it was amazing progress. I only said it was some progress that at least warranted making a comparison article about.

The *only* reason we have the 2070 Super at the $500 price point is because AMD released Navi. Notice how AMD has nothing to compete with the 1080 Ti, so it's still priced with that huge premium?

What you're talking about is completely outside the scope of my comment and discussion. People were wondering why Techspot decided to compare the mid tier 2070 Super and the mid tier 5700XT with the upper tier 1080Ti. I hypothesized it could be to show generational progress of today's mid range cards VS last gen's upper range cards.

Nowhere did I talk about why Nvidia released something at a certain price point or whether AMD has nothing to compete with the upper tier Nvidia cards.

There won't be a competitively priced replacement for the 1080 Ti until Intel or AMD come out with a card that can compete with Nvidia's 2080 Ti. And within days of AMD or Intel announcing a $700 GPU that competes with the 2080 Ti, Nvidia will announce a 2080 Ti Super priced at $700-800.

There already is a competitively priced replacement for the GTX 1080 Ti. The article clearly showed the 2070 Super is roughly comparable to the 1080 Ti and comes in at $500 MSRP, while the 5700 XT is about 9% slower and comes in at $400 MSRP. The 2070S is clearly a replacement for the 1080 Ti's performance level and comes in $200 cheaper than the 1080 Ti's $700 MSRP.

They're not competitors to the RTX 2080 or 2080S because they're not priced at $700. The RTX 2080S is the replacement for the 1080 Ti's price range, as it comes in at $700 and is about 15% faster than the 1080 Ti. We can claim that it's not very much progress, but it is still some progress and thus warrants a comparison article by TechSpot.

Whether something has made enough progress to warrant a comparison article by TechSpot is not the same as your opinion on whether that progress is significant.
 
Here's what is not hoopla.

G-sync is superior to Freesync.
PhysX (albeit different tech is out now... my point is, this was just another feature Nvidia had and AMD did not).
AMD doesn't offer ray tracing support (yet; again, another tech brought to life by Nvidia for AMD to copy, just like G-Sync).
AMD has more bugs and stability issues across the board, from old to new games.
AMD has more issues with sound and video. A recent poll on overclock.net had overwhelming results showing AMD users reporting various little hiccups, latency and stability issues, among other things, compared to folks on Nvidia cards.
There is a reason Nvidia GPUs, from the low to mid to high range, absolutely wipe the floor with AMD in the Steam survey results. They are superior in every way. And it's about more than just raw FPS.


AMD has fewer features that matter, and the comparable features are always months or years behind, or not as polished.
https://www.techradar.com/news/comp...idia-who-makes-the-best-graphics-cards-699480

Still, GeForce Experience boasts the game optimization features we’re all crazy for. So when you don’t know what settings are best for your computer in The Witcher 3, Nvidia takes care of the heavy lifting for you.

AMD users can download and install Raptr's Gaming Evolved tool to optimize their gaming experience. However, the add-on is less than ideal considering its biggest rival's audience can accomplish nearly everything from within GeForce Experience. That includes using Nvidia Ansel to take way cool in-game photos at resolutions exceeding 63K (16 times what a 4K monitor can display).

Nvidia also has a leg up when it comes to streaming games, whether it's to another gaming PC with at least a Maxwell-based GPU or to the company's self-made tablets and set-top box. Not to mention, Nvidia also has a cloud-based gaming service called GeForce Now available to Windows 10 and macOS users.

And, of course, you can't talk about Nvidia in 2019 without mentioning ray tracing. When Team Green announced its Turing line of graphics cards, it made huge claims about revolutionizing gaming with real-time ray-traced lighting, shadows and reflections. Games with these features have been out for a while now, and while they certainly look great, these effects drain performance, even from cards designed for them.


Yeeeessss they do.:joy:

Because it delivers driver updates and optimizes games in addition to letting you broadcast gameplay and capture screenshots as well as videos directly from its easy-to-use interface, Nvidia GeForce Experience is posited as the one PC gaming application to rule them all.

Meanwhile, AMD's newly announced Radeon Software Adrenalin 2019 Edition aims to overtake Nvidia's solution. The latest update is stacked with features, including automatic overclocking (that doesn't need tensor cores) and streaming games to your mobile device.
Ohh look, AMD is copying Nvidia's GeForce Experience.
Shocker.
They must be copying/replicating/reacting to Nvidia's move once again because their drivers are packed with MORE features by far.:p
Hey, AMD has come a long way, I will admit.
But when it comes to the overall gaming experience, if you're gaming on AMD, you're playing second fiddle. It is what it is.
I have a 1080 Ti (actually 3 of them) and I have to tell you, your post is just wrong. Ray tracing has existed for like 30 years; Nvidia didn't invent it. Nvidia drivers and software are just as buggy. The GeForce control panel is laggy and a mess, and you saw what happened with the recent drivers for Forza, where all of the RTX cards were lagging behind AMD.

Nvidia sells more for lots of reasons. Lower power consumption means more OEMs put them in prebuilts, and don't forget the mining craze that left only the 1050 / 1050 Ti and 1060 3GB on the market for the last 3 years.
 
What would a GTX 1080 Ti owner consider? Just graphics cards released in 2019? If so, then here are those releases:

GeForce RTX 2080 Super
GeForce RTX 2070 Super
GeForce RTX 2060 Super
GeForce RTX 2060
GeForce GTX 1660 Ti
GeForce GTX 1660
GeForce GTX 1650 Ti
GeForce GTX 1650
Radeon VII
Radeon RX 5700 XT 50th Anniversary
Radeon RX 5700 XT
Radeon RX 5700
Radeon RX 640
Radeon RX 560 XT
Radeon RX 540 XT
Radeon 630
Radeon 550X
Radeon 540X
Radeon 540

The cards pitched against the 1080 Ti were the 2070 Super and the RX 5700 XT; so other than the 2080 Super and the Radeon VII, all of the others are less powerful than the tested pair. Want to know how a 1080 Ti compares to the other two? Then use these reviews:

https://www.techspot.com/review/1881-geforce-rtx-2080-super/
https://www.techspot.com/review/1789-amd-radeon-vii/

Now if one is looking to compare the 1080 Ti against graphics cards available now but released in the last two years, then just look up the reviews for the GeForce RTX 2080 Ti, 2080, 2070, Radeon RX 590 and Radeon RX 580:

https://www.techspot.com/review/1701-geforce-rtx-2080/
https://www.techspot.com/review/1727-nvidia-geforce-rtx-2070/
https://www.techspot.com/review/1781-geforce-rtx-2060-mega-benchmark/
https://www.techspot.com/review/1747-amd-radeon-rx-590/

So using these we can see that the RX 590 and RX 580 are slower than an RTX 2070, which in turn is a little bit slower than an RTX 2080, and that card is pretty much the same (overall) as a 1080 Ti (a 2080 Super is a little faster still than the older 2080). That means we can deduce that for an Nvidia-based upgrade, the choices are a 2080 Super (for around 5% better performance) or a 2080 Ti (for around 22% better performance). For an AMD-based upgrade, the tests show that a 1080 Ti is faster than an RX 5700 XT and about the same as a Radeon VII. Naturally, all of this is system and game/settings dependent.

Now one may want to factor power/heat, noise, cost, and additional features into the upgrade consideration, but those don't really affect the decision (a slower graphics card that offers, for example, DLSS support isn't much of an 'upgrade'). Long story short: there's only one upgrade that a 1080 Ti owner would consider, and that's a 2080 Ti. One might be tempted by the RX 5700 XT simply because it's 'new', but this particular article clearly dismisses that idea.

Exactly. What a long winded way to agree with my original point...

The 2080Ti is obviously the only upgrade option..

So if you're going to have an article titled "How the 1080 Ti stacks up in 2019" and you don't include its ONLY real upgrade option as a comparison... that's pretty much a wasted article, because you're only showing me how it stacks up against price competitors, which is only a tiny part of the story. Especially for people who tend to buy at the top of the product stack.
 
This summer, I played through Crysis, Crysis Warhead and Crysis 2 on a Core i7 laptop with a GTX 1060 and 16GB of DDR4 RAM.

I was able to play at the highest settings at 1080p, and the games looked absolutely marvelous in every conceivable way.

Unless you are going for ultra-high 4K settings, the 1050 Ti and 1060 are all you really need for gaming nowadays, but the advertisers out there want to convince you that unless you're playing a game at the highest settings and watching your FPS to make sure it doesn't drop below 60, your computer is "inferior" and needs to be upgraded.

I bought a 2080 Ti FTW3 for my computer mostly for future-proofing purposes, but I've realised that most developers are targeting low-end systems with low-end CPUs and GPUs. Steam shows that most users have a 1060 or 1050 Ti (which is mostly due to the lower price of entry of off-the-shelf gaming PCs during the past 3 years, when cryptocurrency inflated the price of hardware). I could have gotten away with just a 2080, or just about anything less, to game at 1440p on my 34" curved Alienware gaming monitor.

The 1080 Ti is still the ultimate powerhouse of the last generation. Even the 1080 in my newer laptop was powerful enough to run most games at full quality.

What really annoys me, however, is that I feel the lowest-end RTX card should have been more powerful than the 1080 Ti, especially when you want to justify these prices.
---------------------------------------------------------------------------------------------------------------------------------------------------------
Wholeheartedly DITTO!
 
It's not a local memory issue - it's a ROP limitation. An RTX 2070 produces a peak fill rate 75% that of a GTX 1080 Ti, and at 4K, a game's performance is far more affected by fill rate than by running short of local memory. This isn't to say that having 11 GiB of RAM compared to 8 or 6 GiB isn't an advantage; it's just that it's a smaller advantage compared to having 88 ROPs @ 1.5 GHz versus 64 ROPs @ 1.7 GHz.

It's worth noting that VRAM measurements by logging software such as Afterburner aren't necessarily indicating the amount of local memory being used. You can see this in benchmarks such as this one:


If you look at the games where a built-in benchmark is used, e.g. Shadow of the Tomb Raider, you can see that each separate test has a notably different value for the VRAM being 'used'. Given that this is exactly the same graphics workload each time, there's no reason for a different amount of data to be stored on the graphics card just because a different number of CPU cores is being used - it's the same meshes, textures, render targets, etc. The game could be determining how best to manage the resource pool based on the hardware detected, and it's possible that SotTR is assuming a 2-core CPU won't have the resources to manage the 'full' amount.
So you are saying that a gpu with only 8gb of vram can comfortably run a game that requires 10gb vram?
I was under the impression that overloading the vram causes memory stuttering which obviously has severe implications for gaming performance when this happens.
I would say that running a gpu at least 25% over capacity is going to cause many performance issues.
 
Since purchasing my GTX 1080 Ti for $550 on eBay last year during Black Friday, and after reading this article in its entirety, I can say without a doubt that I'm still satisfied with the purchase I made, with no regrets.
 
I bought my 1080Ti just as the prices started to settle down in the UK (August 2018) and I haven't played a single game that runs badly at completely maxed settings. I have absolutely no regrets.

I will say I haven't played any games that have ray tracing in them (Metro went to the Epic Games Store so I didn't bother, BF V is a train wreck so I didn't bother, and for Lara Croft I'm waiting for it to drop in price).

I'm in the middle of downloading Gears 5, which looks phenomenal from what I've read. It'll run happily at maxed-out settings at 1440p. Still no reason to even look for an upgrade.
 
So you are saying that a gpu with only 8gb of vram can comfortably run a game that requires 10gb vram?
I was under the impression that overloading the vram causes memory stuttering which obviously has severe implications for gaming performance when this happens.
I would say that running a gpu at least 25% over capacity is going to cause many performance issues.
From the benchmarks I've seen, just because a game "can" use a lot of VRAM doesn't mean it actually needs that much VRAM to run well. I don't recall if it was an AC or Far Cry benchmark, but a game consuming like 5-6GB on an 8GB card got basically the same minimum and average fps as when it was consuming less VRAM (e.g. 4GB) on a lower-VRAM card of the same model at the same settings.
 
So you are saying that a gpu with only 8gb of vram can comfortably run a game that requires 10gb vram?
I was under the impression that overloading the vram causes memory stuttering which obviously has severe implications for gaming performance when this happens.
I would say that running a gpu at least 25% over capacity is going to cause many performance issues.
There are several ways that a rendering engine can be programmed to handle buffers - the blocks of memory that store all of the data required to render a frame (command, vertex, index, texture, mesh, etc.). If we take Direct3D 12, the first thing that needs to be programmed is allocating memory space for the resources, but before this happens, the programmer really needs to know how big those resources are going to be. In some cases, this is quite easy to determine (e.g. the texture buffers), whereas others are essentially unknown at that time (e.g. the size of the command list). The GPU drivers will tell Direct3D how much local memory is available, but this obviously can't be hard-coded into the engine because of the sheer number of different GPUs available. So the programmer needs to consider how to handle memory allocations if they exceed the amount of local memory.

Now this is where things vary enormously, depending on what graphics API is being used, what operating system, the hardware in question, and the game's rendering engine. Done properly, the whole setup can smoothly move chunks of memory back and forth between GPU local memory and system RAM, all behind the scenes. However, the movement of the data may cause a huge performance stall if the engine moves a block out of local memory that the very next instruction requires. One really good way to hide this is to have the GPU doing a lot of pixel processing or compute work that takes numerous cycles of processing time - this essentially covers up the time spent shifting memory blocks about.

But there's no point in doing this if the GPU just isn't up to the job of handling a lot of pixel processing and compute work; e.g. giving a GTX 1050 a workload that an RTX 2080 Ti could handle would certainly create lots of time to move memory blocks about before they're needed, but the overall performance would be so low that nothing has been gained by doing this. This is why local memory sizes are matched to the respective capabilities of the GPU. Theoretically, it's possible to make a chip like a GTX 1050 but with the same amount of RAM as a Titan V (these days, the memory crossbar is independent of the ROPs), and then you'd never run into memory allocation problems. But the chip's performance would make the whole exercise entirely moot.

So to answer your initial question of "are you saying that a GPU with only 8GB of VRAM can comfortably run a game that requires 10GB of VRAM?", the answer is yes, but with the caveats mentioned above. In general, while graphics APIs aren't particularly good at memory management, the issue can be masked by games piling on the pixel processing and compute work, as long as it makes sense to do so.
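
To make the Direct3D 12 side of this a little more concrete, here's a minimal sketch (my own illustration, not from the article or any particular engine) of how an application can ask the OS how much local memory it is actually allowed to use before deciding what to keep resident - exceeding that budget is what triggers the paging described above:

[code]
// Minimal sketch, assuming a Windows/Direct3D 12 environment and an existing
// IDXGIAdapter3* for the GPU in question. Function name is illustrative.
#include <dxgi1_4.h>
#include <cstdio>

void PrintLocalMemoryBudget(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (SUCCEEDED(adapter->QueryVideoMemoryInfo(
            0,                                 // GPU node index (single-GPU case)
            DXGI_MEMORY_SEGMENT_GROUP_LOCAL,   // on-board VRAM, not system RAM
            &info)))
    {
        // Budget is what the OS currently grants the process; CurrentUsage is
        // what it has resident. Going over budget forces data to be paged out,
        // which is exactly the stall an engine tries to hide behind heavy
        // pixel/compute work.
        std::printf("Local memory budget: %llu MiB, currently used: %llu MiB\n",
                    static_cast<unsigned long long>(info.Budget) >> 20,
                    static_cast<unsigned long long>(info.CurrentUsage) >> 20);
    }
}
[/code]

An engine would typically re-check that budget every frame or so and grow or shrink its resource pools accordingly, which is also why the 'VRAM used' figures reported by logging tools can move around even for identical workloads.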
 
It's not a local memory issue - it's a ROP limitation. An RTX 2070 produces a peak fill rate 75% that of a GTX 1080 Ti, and at 4K, a game's performance is far more affected by fill rate than by running short of local memory. This isn't to say that having 11 GiB of RAM compared to 8 or 6 GiB isn't an advantage; it's just that it's a smaller advantage compared to having 88 ROPs @ 1.5 GHz versus 64 ROPs @ 1.7 GHz.

It's worth noting that VRAM measurements by logging software such as Afterburner aren't necessarily indicating the amount of local memory being used. You can see this in benchmarks such as this one:


If you look at the games where a built-in benchmark is used, e.g. Shadow of the Tomb Raider, you can see that each separate test has a notably different value for the VRAM being 'used'. Given that this is exactly the same graphics workload each time, there's no reason for a different amount of data to be stored on the graphics card just because a different number of CPU cores is being used - it's the same meshes, textures, render targets, etc. The game could be determining how best to manage the resource pool based on the hardware detected, and it's possible that SotTR is assuming a 2-core CPU won't have the resources to manage the 'full' amount.
So you are saying that a gpu with only 8gb of vram can comfortably run a game that requires 10gb vram?
I was under the impression that overloading the vram causes memory stuttering which obviously has severe implications for gaming performance when this happens.
I would say that running a gpu at least 25% over capacity is going to cause many performance issues.
There is no game that requires 10GB of VRAM. Probably not even 8. If we stick to 1080p especially, 6GB is plenty for all games. You need to go all the way down to 4GB to see performance regression due to VRAM limitations.
 
The GTX 1080 Ti held onto its value very strongly, so in that sense, if you bought one over two years ago you are still probably pretty pleased about it. It's not like you really have anywhere else to trade up to, except to a very expensive 2080 Ti. I wouldn't be at all surprised to see people hang onto these cards for another year.

We're onto 3-year life cycles for many GPUs these days before true replacements arrive. A far cry from the olden days, when your card was replaced by something much faster in typically less than two years. Usually to play games like Far Cry!

I've owned one since release day and it's still going strong. There is no reason to upgrade it at this point in time. Performance is similar to the 2080, and the 2080 Ti only offers a bit of a jump, less if you turn on RTX features.

The 1080 Ti still has a good year in it as a top-end card IMO; and until Nvidia releases something much better at a similar price point, it might last a while longer.
 