GeForce RTX 4070 Ti vs. GeForce RTX 3080: 50 Game Benchmark

DLSS 2/FSR 2 often beats a lot of built-in AA, including TAA, and gives more performance on top. It depends on the implementation.

And no, it does not make the image blurry. If it does, it's a bad implementation, or you have to swap the DLL file. I use DLSS and FSR whenever I can, unless it's crap. Mostly it's good.
Just because you can't see the blurriness doesn't mean others won't.
 
Great article, Steve.

I would have liked to have seen the results using the 12GB variant of the 3080, since your review said it was nearly the same as a 3080 Ti and, oddly, surpassed it in a few games. With a 384-bit bus and more cores than the 3080 10GB, I wonder how much of a difference that would have made versus the 4070 Ti.

But I suppose, with the title of that review being "Shameless Cash Grab", Nvidia asked for it LOL 😁

I realize it's a somewhat rare model but as an owner of one, I have no regrets that I didn't wait. I've been enjoying it for a year.
 
"Still, many gamers did purchase a 10GB GeForce RTX 3080, and many are probably considering the upgrade to an RTX 40 series product, but of course, the problem is that pricing... it's gone up, by a lot! The GeForce RTX 4070 Ti starts at $800 (MSRP), with most models going for $850 or more at retail. Worse still, if you want the more desirable RTX 4080, those cost $1,200 -- they are supposed to cost that, but in reality they are closer to $1,300 -- ouch."

I'm surprised more people don't just save gaming for consoles.

A couple of points there...

Re the 3080: not me, no, never... IMO, even in the first half of 2021 that 10GB (even of GDDR6X) was never cutting it across the board in AAA gaming (keeping in mind optimisation can be all over the place these days) at 3440x1440, let alone 4K, even if only expected to last into the current gen. My old 1070 had been maxing its 8GB (admittedly slower GDDR5) at 1080p for some time by then. The 6800 XT was the only answer, and tbf it did kind of help that even in that time of crazy prices it was 'only' £1200 (for a premium Sapphire Nitro+ SE) and available immediately, against the 3080 ranging from £1800 for reference cards up to £2400 for the premium ones... and for many folks I know ordering at the same time, no availability/cards in their hands until closer to Christmas that year.
As for RT, well, on top of the anaemic VRAM cap, Ampere was also short on RT resources, as evidenced by DLSS being required to make good the big fps drop when using it. And RT still isn't a thing I feel FOMO over as much as poorly optimised games (an increasing standard, unfortunately) not having FSR.

Re saving gaming for consoles... Well, I've said a good few times that when consoles feature certain PC exclusives that I have favoured for a long time (even at somewhat lower settings and fps than I'm used to on PC), then I'd look twice. Not that consoles are 'bad' per se, I just know that I'd miss my TW, CoH and a bunch of other games/genres, just like I did when I moved to PS4/XB1 for 6 months a few years ago. As for this 4070 Ti, it's not enough of a jump over a 3080 to be an upgrade for a 6800 XT. I'd settle for at least 20GB going forward (given my 6800 XT has used 13-14GB at most to date) for significantly better (+33% or more) fps than I have rn and a good run at 4K. That means a 7900 XT at the least, but preferably a 7900 XTX, if I upgrade this gen; tbh my 6800 XT is still killing it, so it'll probably be an RDNA4 upgrade.
 
I too would have liked seeing the 3080 12GB model shown on the charts. As others have pointed out, the benchmarks typically show a 3-5% difference vs. the 10GB model, but I do wonder how the added VRAM and memory bandwidth would compare in some of the 4K results vs. the 4070 Ti (where VRAM becomes the limiting factor).

I purchased an MSI Gaming Z in September for $760 and haven't looked back. I could have waited for the 4070 Ti, but it's not like Nvidia gave away the added performance for free. As for power and heat, that is a clear positive for the newer cards. However, I was able to pretty easily undervolt without lowering clocks and significantly reduce heat output, so I'm satisfied.

When one considers that the die size of the 4070 Ti is smaller and the bus width is halved (384-bit vs. 192-bit), Nvidia has basically made a cost-reduced version of a 3080-class GPU at an approximately similar price/performance ratio. This goes beyond stagnation - Nvidia is cashing in. AMD also seems content to hose people this generation.
 
Just because you can't see the blurriness doesn't mean others won't.
I can easily see blurriness - 20/20 vision - and I hate it; that's why I tweak DLSS and the sharpness filter. Again, you have zero actual knowledge about DLSS if you don't know how to optimize it and fix blurriness in a game with a bad implementation.

DLSS 1 was crap; DLSS 2 is amazing. DLSS 3 is amazing too, for some people, just not me.
 
Another excellent comparison that makes you think carefully about upgrade paths.
I’m an old fart now, but since the early 80s I've "always" built using current top-shelf hardware. Yet last fall was the first time I did it differently, buying a USED 5950X & 3090 system. I still made it my own afterwards, but I just can't justify the current market.
 
Considering that many games nowadays use cartoon / low poly / flat shading graphics, both cards may be overkill for a huge number of players.
 
Newer gens are supposed to be better, and they are, just way more expensive. And Nvidia doesn't want to release any mid-range cards. There is no 3060 Ti equivalent in the 40 series, and there won't be one any time soon. If the rumors are true, the 4070 is going to be $750, so not really the same class. One wonders what a 4060 Ti would sell for, $650?

 
Considering that many games nowadays use cartoon / low poly / flat shading graphics, both cards may be overkill for a huge number of players.

Counter-Strike and Eve Online definitely require me to buy an entire render farm full of 4090s. Especially now that they announced CS2 :)
 
Well, at least nVidia is being consistent with the (lack of) VRAM being included with their cards. The RTX 4070 Ti has 19% more performance than the RTX 3080 and 20% more VRAM to go with it.

A card that potent with 12GB of VRAM, what could go wrong? :laughing:
 
I too would have liked seeing the 3080 12GB model shown on the charts. As others have pointed out, the benchmarks typically show a 3-5% difference vs. the 10GB model, but I do wonder how the added VRAM and memory bandwidth would compare in some of the 4K results vs. the 4070 Ti (where VRAM becomes the limiting factor).

I purchased an MSI Gaming Z in September for $760 and haven't looked back. I could have waited for the 4070 Ti, but it's not like Nvidia gave away the added performance for free. As for power and heat, that is a clear positive for the newer cards. However, I was able to pretty easily undervolt without lowering clocks and significantly reduce heat output, so I'm satisfied.

When one considers that the die size of the 4070 Ti is smaller and the bus width is halved (384-bit vs. 192-bit), Nvidia has basically made a cost-reduced version of a 3080-class GPU at an approximately similar price/performance ratio. This goes beyond stagnation - Nvidia is cashing in. AMD also seems content to hose people this generation.

Just search for the TechSpot 4070 Ti review; they include the 3080 12GB in the charts. The 4070 Ti is about 20 fps faster at 1440p and 7 fps faster at 4K across the 16 games tested.
 
Well, at least nVidia is being consistent with the (lack of) VRAM being included with their cards. The RTX 4070 Ti has 19% more performance than the RTX 3080 and 20% more VRAM to go with it.

A card that potent with 12GB of VRAM, what could go wrong? :laughing:

They could add an M.2 connector so that the RAM can be expanded with an SSD :)
 
Personally, I would get 16GB minimum for 4K gaming, meaning a 4080 and up, or a 7900 XT as a bare minimum. Last-gen cards with 16GB might work, but the GPU seems too weak to utilize the VRAM anyway; a 3090 Ti and 6950 XT can easily be brought to their knees in demanding games at 4K when the fps goal is more than 30-60 fps. Personally, I want at least 100 fps when gaming on PC.
I agree with you but there's more to it than just that. One HUGE gobbler of VRAM is maximum quality textures, something that makes about 100x more difference than ray-tracing. Those textures can overflow a card's frame buffer without regard to resolution or the actual potency of the GPU. In the long-term, someone with an RTX 3070 Ti will be stuck at 1080p because by then, games will require more than 8GB of VRAM for 1440p with more than just the bare minimum of settings and (what will be considered by them to be) low-quality textures.

For some games, even those that aren't outrageously demanding on the GPU (like Far Cry 6), either a card has enough VRAM for the full experience or it doesn't. A lot of people remember the HD texture pack in Far Cry 6. That pack requires a video card to have a minimum frame buffer of 11GB in order to use it. That annoyed the hell out of me because when Far Cry 6 came out, I was still using my RX 5700 XT as my main card. I had thought that surely 8GB wouldn't be a hindrance on a card like the RX 5700 XT because I was only gaming at 1080p or (at most) 1440p but... here we are. As annoyed as I was by that, I would've been absolutely livid if I had paid five times the amount that I paid for the RX 5700 XT for an RTX 3080 (<$500CAD (pre-COVID) vs. ~$2500CAD during COVID) and couldn't use Far Cry 6's HD textures from day one.

Sometimes it doesn't even matter how potent the card is because some games have these special things in them that require lots of VRAM to use. Things that are completely unaffected by the GPU's power or the graphics settings being used. Far Cry 6 is not an aberration, it is merely a harbinger of what is to come.
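To put rough numbers on the texture point (just a back-of-the-envelope sketch in Python, not figures from any particular game):

# Back-of-the-envelope texture memory estimate (illustrative only).
def texture_mib(width, height, bytes_per_texel, mip_overhead=1.33):
    # full mip chain adds roughly a third on top of the base level
    return width * height * bytes_per_texel * mip_overhead / 2**20

# A 4096x4096 texture, uncompressed RGBA8 (4 bytes/texel) vs. block-compressed BC7 (1 byte/texel)
uncompressed = texture_mib(4096, 4096, 4)   # ~85 MiB with mips
compressed   = texture_mib(4096, 4096, 1)   # ~21 MiB with mips

# A few hundred unique 4K materials (albedo + normal + roughness, etc.), even compressed,
# already land in the multi-GiB range before geometry, buffers, RT structures, etc.
print(f"{uncompressed:.0f} MiB uncompressed, {compressed:.0f} MiB compressed per 4K texture")
print(f"300 compressed 4K textures ~ {300 * compressed / 1024:.1f} GiB")

That's why an HD texture pack can demand 11GB regardless of what resolution you render at.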
The 4070 Ti and 3090 Ti perform almost identically at 1440p, but when you move to 4K, the 3090 Ti tends to perform 10% better. This is probably a result of bandwidth bottlenecking, and OCing the memory on the 4070 Ti should help some.
Yep, but it's also the fact that the RTX 3090/Ti has literally double the 4070 Ti's VRAM. The RTX 3090 and 3090 Ti both have 24GB of VRAM. You see, nVidia did the RTX 3090 cards the right way but nothing else. Like, seriously, jumping from level-8 to level-9 shouldn't result in a 14GB VRAM increase. That should've been a big red flag to people looking at the RTX 3080 and below (except for the inexplicable 12GB RTX 3060).
I still think the 4070 Ti is for 1440p high-refresh gaming rather than 4K. 12GB of VRAM won't age well here. Games have already started to use way more memory (both system RAM and VRAM), that is, games that only come to PS5 and XSX. "Next-gen" games bump PC requirements up fast; now 16GB of RAM is the bare minimum, but 32GB is preferred and delivers higher minimum fps in a lot of games now, plus the system is generally much smoother without using the pagefile all the time.
The RTX 4070 Ti has only 1GB more than the nVidia flagship of three generations ago despite being 108% faster. Hell, it only matches the RX 6700 XT of last-generation despite being ~75% faster. The fact that nobody noticed or talked about this was mystifying to me. Then when I brought it up, there were actually people who denied that it would make a difference (The Dunning-Kruger Effect in operation). This was very telling to me about how little people actually know about tech (even people on tech sites).

To his credit, Steve Walton did mention it in his review of the RTX 3080 but then he quickly dismissed it with some nonsense about PCI-Express v4.0:
"Doesn't necessarily future proof the GPU however, as Nvidia paired it with 10GB of VRAM which might prove insufficient in a year or two, though PCIe 4.0 will be a lot more useful in mitigating those performance loses in modern platforms."
- No, it won't and it hasn't. I don't know what Steve's talking about here, because connectivity speed stops being a factor when you run out of memory. System RAM can start swapping to disk, but video cards can't, and so games either have FPS cliff-dives or crash altogether.

I don't know why Steve thinks that PCI-Express v4.0 will make any difference. This makes about as much sense as saying:
"Doesn't necessarily future proof the GPU however, as AMD paired it with 4GB of VRAM which might prove insufficient in a year or two, though the fact that it's HBM with a 4096-bit memory bus will be a lot more useful in mitigating those performance loses in modern platforms."

If you run out of VRAM, it doesn't matter how fast it is or how fast PCI-Express v4.0 is because you're all out of VRAM. I would take more RAM that is slower than less RAM that is faster 7 days a week and twice on Sunday.

I don't know why Steve said this, but it's obviously wrong. PCI-Express v4.0 hasn't mitigated anything with regard to insufficient VRAM. This doesn't surprise me, because there has never been a version of PCI-Express that could mitigate something like this, and PCI-Express v4.0 is no different from any previous version except that it's faster.
 
I agree with you but there's more to it than just that. One HUGE gobbler of VRAM is maximum quality textures, something that makes about 100x more difference than ray-tracing. Those textures can overflow a card's frame buffer without regard to resolution or the actual potency of the GPU. In the long-term, someone with an RTX 3070 Ti will be stuck at 1080p because by then, games will require more than 8GB of VRAM for 1440p with more than just the bare minimum of settings and (what will be considered by them to be) low-quality textures.

For some games, even those that aren't outrageously demanding on the GPU (like Far Cry 6), either a card has enough VRAM for the full experience or it doesn't. A lot of people remember the HD texture pack in Far Cry 6. That pack requires a video card to have a minimum frame buffer of 11GB in order to use it. That annoyed the hell out of me because when Far Cry 6 came out, I was still using my RX 5700 XT as my main card. I had thought that surely 8GB wouldn't be a hindrance on a card like the RX 5700 XT because I was only gaming at 1080p or (at most) 1440p but... here we are. As annoyed as I was by that, I would've been absolutely livid if I had paid five times the amount that I paid for the RX 5700 XT for an RTX 3080 (<$500CAD (pre-COVID) vs. ~$2500CAD during COVID) and couldn't use Far Cry 6's HD textures from day one.

Sometimes it doesn't even matter how potent the card is because some games have these special things in them that require lots of VRAM to use. Things that are completely unaffected by the GPU's power or the graphics settings being used. Far Cry 6 is not an aberration, it is merely a harbinger of what is to come.

Yep, but it's also the fact that the RTX 3090/Ti has literally double the 4070 Ti's VRAM. The RTX 3090 and 3090 Ti both have 24GB of VRAM. You see, nVidia did the RTX 3090 cards the right way but nothing else. Like, seriously, jumping from level-8 to level-9 shouldn't result in a 14GB VRAM increase. That should've been a big red flag to people looking at the RTX 3080 and below (except for the inexplicable 12GB RTX 3060).

The RTX 4070 Ti has only 1GB more than the nVidia flagship of three generations ago despite being 108% faster. Hell, it only matches the RX 6700 XT of last-generation despite being ~75% faster. The fact that nobody noticed or talked about this was mystifying to me. Then when I brought it up, there were actually people who denied that it would make a difference (The Dunning-Kruger Effect in operation). This was very telling to me about how little people actually know about tech (even people on tech sites).

To his credit, Steve Walton did mention it in his review of the RTX 3080 but then he quickly dismissed it with some nonsense about PCI-Express v4.0:
"Doesn't necessarily future proof the GPU however, as Nvidia paired it with 10GB of VRAM which might prove insufficient in a year or two, though PCIe 4.0 will be a lot more useful in mitigating those performance loses in modern platforms."
- No, it won't and it hasn't. I don't know what Steve's talking about here, because connectivity speed stops being a factor when you run out of memory. System RAM can start swapping to disk, but video cards can't, and so games either have FPS cliff-dives or crash altogether.

I don't know why Steve thinks that PCI-Express v4.0 will make any difference. This makes about as much sense as saying:
"Doesn't necessarily future proof the GPU however, as AMD paired it with 4GB of VRAM which might prove insufficient in a year or two, though the fact that it's HBM with a 4096-bit memory bus will be a lot more useful in mitigating those performance loses in modern platforms."

If you run out of VRAM, it doesn't matter how fast it is or how fast PCI-Express v4.0 is because you're all out of VRAM. I would take more RAM that is slower than less RAM that is faster 7 days a week and twice on Sunday.

I don't know why Steve said this, but it's obviously wrong. PCI-Express v4.0 hasn't mitigated anything with regard to insufficient VRAM. This doesn't surprise me, because there has never been a version of PCI-Express that could mitigate something like this, and PCI-Express v4.0 is no different from any previous version except that it's faster.

I think VRAM is highly overrated in general. Typically, when the VRAM amount becomes an issue, the GPU itself is too weak anyway. Most often, the medium preset doesn't look noticeably worse than high either. Ultra sucks in general (often forced motion blur, DOF, etc.).

PS5 and XSX have 16GB of shared RAM, and pretty much no games utilize more than 4-6GB of it for video memory. This will be reflected in PC gaming requirements as well, and there are 5-6 years left of this generation. We have seen a CPU requirement bump on PC because of current-gen consoles, but VRAM requirements did not change much.

I think it will take many years before 8GB becomes a problem for 1440p gaming. The most demanding games today with ultra settings (no RT) barely hit 5-6GB of usage at 1440p. Many gamers hover around the 4GB mark. However, some engines allocate 80-90% of VRAM regardless of the amount.

When you remove all the useless crap like motion blur, DOF and whatnot, VRAM usage is lowered as well. Also, DirectStorage and RTX IO can do wonders here: lower VRAM usage while still using high-res textures with no texture pop-in, plus better loading times. DLSS/FSR also exist to bring VRAM usage down further, if you actually need to, while retaining the highest-quality textures. A lot of games will utilize these techs going forward.

Let's be honest, who is going to think their 3070 8GB or 6800 16GB will max out demanding new games in 2025-2026? With lowered settings, they will probably run fine.

Back when the Fury X launched with 4GB, it was already on the low side; 6-8GB cards existed. The 980 Ti aged much better than the Fury line because of its 6GB. It was a very stupid decision from AMD to go with only 4GB, but HBM was too expensive and they had to settle. Fury and Fury X aged like milk, some of the worst cards AMD ever put out.

I generally think VRAM amount is highly overrated. The 24GB on the 3090, 3090 Ti, 4090 and 7900 XTX is not going to benefit performance in any games, for example. By the time games can actually utilize 24GB, the GPU will be severely outdated and too slow anyway.

The 3080 Ti performs pretty much on par with the 3090 at 4K, 12GB vs 24GB of VRAM. I have not seen a single game that runs better on the 3090 because of the double VRAM, and today I would not even classify the 3080 Ti and 3090 as good 4K cards (without DLSS/FSR).
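If anyone wants to sanity-check the allocation-vs-need point on their own machine, here's a minimal sketch using the nvidia-ml-py package (imported as pynvml); note that NVML reports VRAM the driver has allocated, not what a game strictly needs:

import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # NVML reports *allocated* VRAM, which is why engines that grab
        # 80-90% of the card up front look like they "need" that much.
        print(f"VRAM allocated: {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB")
        time.sleep(2)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()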
 
I think VRAM is highly overrated in general. Typically, when the VRAM amount becomes an issue, the GPU itself is too weak anyway. Most often, the medium preset doesn't look noticeably worse than high either. Ultra sucks in general (often forced motion blur, DOF, etc.).

PS5 and XSX have 16GB of shared RAM, and pretty much no games utilize more than 4-6GB of it for video memory. This will be reflected in PC gaming requirements as well, and there are 5-6 years left of this generation. We have seen a CPU requirement bump on PC because of current-gen consoles, but VRAM requirements did not change much.

I think it will take many years before 8GB becomes a problem for 1440p gaming. The most demanding games today with ultra settings (no RT) barely hit 5-6GB of usage at 1440p. Many gamers hover around the 4GB mark. However, some engines allocate 80-90% of VRAM regardless of the amount.
I'm afraid that you're just plain wrong when you say that:
When you remove all the useless crap like motion blur, DOF and whatnot, VRAM usage is lowered as well. Also, DirectStorage and RTX IO can do wonders here: lower VRAM usage while still using high-res textures with no texture pop-in, plus better loading times. DLSS/FSR also exist to bring VRAM usage down further, if you actually need to, while retaining the highest-quality textures. A lot of games will utilize these techs going forward.

Let's be honest, who is going to think their 3070 8GB or 6800 16GB will max out demanding new games in 2025-2026? With lowered settings, they will probably run fine.
If we're being honest, those TechTubers have already defeated your arguments for me.
Back when the Fury X launched with 4GB, it was already on the low side; 6-8GB cards existed. The 980 Ti aged much better than the Fury line because of its 6GB. It was a very stupid decision from AMD to go with only 4GB, but HBM was too expensive and they had to settle. Fury and Fury X aged like milk, some of the worst cards AMD ever put out.
Now you're defeating your own argument about VRAM size. You've never owned an R9 Fury, I'm guessing, because if you had, you wouldn't be saying that. Remember that the Furies were flagship-level cards whose longevity was aided by AMD's fine-wine drivers and the fact that 4GB of HBM was better than 4GB of GDDR5. The 8GB cards we're talking about now are mid-to-high-end and will age even worse than the Furies did.
I generally think VRAM amount is highly overrated. The 24GB on the 3090, 3090 Ti, 4090 and 7900 XTX is not going to benefit performance in any games, for example. By the time games can actually utilize 24GB, the GPU will be severely outdated and too slow anyway.

The 3080 Ti performs pretty much on par with the 3090 at 4K, 12GB vs 24GB of VRAM. I have not seen a single game that runs better on the 3090 because of the double VRAM, and today I would not even classify the 3080 Ti and 3090 as good 4K cards (without DLSS/FSR).
I agree with this, but the reason those halo-level cards have so much VRAM is that, without a Titan, they're what prosumers will use for workstation-level tasks. For their use cases, that VRAM is vital. For gaming, I agree that even a card like the RTX 4090 would be just fine with 20GB.

Of course, we have to put things in perspective because, IIRC, you bought an RTX 3070 (or was it a 3070 Ti, I can't remember exactly) for the one situation in which they excel: high-FPS 1080p gaming. In your use case, sure, the VRAM won't limit the card because you're only playing at 1080p, which won't use much VRAM, while the high-FPS aspect is well suited to the power of your high-end GPU. The thing is, most people buy high-end level-7 cards for 1440p gaming, and for them, this will suck.

I seem to remember saying that you made a wise decision with your choice because your kind of gaming is ideal for these cards. If high-FPS 1080p were a more typical form of gaming (it's not exactly rare, but not exactly the norm either), then these cards wouldn't be considered so bad. Just remember, though, these are only the wisest choices when it comes to GeForce cards, because they're still a bad value, only getting worse as you go up the stack. Things may be different where you are, but in the USA (and I'm not even in the USA, but US prices are still a good guideline) this is how they stack up in order from weakest to strongest (with a rough perf-per-dollar sketch after the list):

RX 6700 10GB - $300 (100% performance)
RTX 3060 Ti 8GB - $410 (105% performance)
RX 6700 XT 12GB - $350 (107% performance)
RTX 3070 8GB - $515 (123% performance)
RTX 3070 Ti 8GB - $640 (132% performance)
RX 6800 16GB - $485 (136% performance)
RX 6800 XT 16GB - $565 (154% performance)
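And here's the rough perf-per-dollar sketch I mentioned, using nothing but the prices and performance percentages listed above:

# Relative performance per dollar, normalized to the RX 6700 10GB.
# Prices and performance figures are simply the ones listed above.
cards = {
    "RX 6700 10GB":    (300, 100),
    "RTX 3060 Ti 8GB": (410, 105),
    "RX 6700 XT 12GB": (350, 107),
    "RTX 3070 8GB":    (515, 123),
    "RTX 3070 Ti 8GB": (640, 132),
    "RX 6800 16GB":    (485, 136),
    "RX 6800 XT 16GB": (565, 154),
}

baseline = 100 / 300  # RX 6700's performance per dollar
for name, (price, perf) in cards.items():
    value = (perf / price) / baseline
    print(f"{name:16s} {value:.2f}x the value of the RX 6700")

The GeForce cards come out at roughly 0.6-0.8x the value of the baseline Radeon, while the 6800/6800 XT stay above 0.8x despite being the fastest cards on the list.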

Now, there is not a chance in hell that anyone can say it's worth buying nVidia with these amounts of VRAM, performance and cost. Who in their right mind would choose to pay an extra $75 over the price of an RX 6800 XT, the rival of the RTX 3080, for reduced performance and half the VRAM? The answer is nobody.

This is the crux of the problem: you're paying MORE for LESS performance and LESS VRAM. If you were paying less (or even the same), then it would actually make sense. Keep in mind that I'm not doing this for me; I already made the wisest choice with the RX 6800 XT, and its 16GB buffer means I don't even have to think about it.

I know that you think VRAM size is overrated, but I think that paying more for less performance and less VRAM is even more overrated! :laughing:
 
I think VRAM is highly overrated in general. Typically, when the VRAM amount becomes an issue, the GPU itself is too weak anyway. Most often, the medium preset doesn't look noticeably worse than high either. Ultra sucks in general (often forced motion blur, DOF, etc.).

PS5 and XSX have 16GB of shared RAM, and pretty much no games utilize more than 4-6GB of it for video memory. This will be reflected in PC gaming requirements as well, and there are 5-6 years left of this generation. We have seen a CPU requirement bump on PC because of current-gen consoles, but VRAM requirements did not change much.

I think it will take many years before 8GB becomes a problem for 1440p gaming. The most demanding games today with ultra settings (no RT) barely hit 5-6GB of usage at 1440p. Many gamers hover around the 4GB mark. However, some engines allocate 80-90% of VRAM regardless of the amount.

When you remove all the useless crap like motion blur, DOF and whatnot, VRAM usage is lowered as well. Also, DirectStorage and RTX IO can do wonders here: lower VRAM usage while still using high-res textures with no texture pop-in, plus better loading times. DLSS/FSR also exist to bring VRAM usage down further, if you actually need to, while retaining the highest-quality textures. A lot of games will utilize these techs going forward.

Let's be honest, who is going to think their 3070 8GB or 6800 16GB will max out demanding new games in 2025-2026? With lowered settings, they will probably run fine.

Back when the Fury X launched with 4GB, it was already on the low side; 6-8GB cards existed. The 980 Ti aged much better than the Fury line because of its 6GB. It was a very stupid decision from AMD to go with only 4GB, but HBM was too expensive and they had to settle. Fury and Fury X aged like milk, some of the worst cards AMD ever put out.

I generally think VRAM amount is highly overrated. The 24GB on the 3090, 3090 Ti, 4090 and 7900 XTX is not going to benefit performance in any games, for example. By the time games can actually utilize 24GB, the GPU will be severely outdated and too slow anyway.

The 3080 Ti performs pretty much on par with the 3090 at 4K, 12GB vs 24GB of VRAM. I have not seen a single game that runs better on the 3090 because of the double VRAM, and today I would not even classify the 3080 Ti and 3090 as good 4K cards (without DLSS/FSR).
It turns out that I was wrong about something. I had agreed with you that it will probably be years before 8GB starts causing issues at 1080p. Well, that's unfortunately not the case:
 
It turns out that I was wrong about something. I had agreed with you that it will probably be years before 8GB starts causing issues at 1080p. Well, that's unfortunately not the case:
At the rate these game developers are putting out unoptimized games, well, yeah, it'll be a problem for people in the very near future.

Or the people who think you need to play a game on ultra because PC Master Race, otherwise the game SUCKS! Yeah, those people will also have issues.
 
Hmmmm, a new 4070 Ti @ $850 or a used 3080 @ $450? Considering the 4070 Ti is on average 19% faster, this is a very easy decision for most gamers.
 
What games are you playing that require 16GB of system RAM?

Hogwarts Legacy, FS 2020, etc.

Next Gen games are coming; 16GB is the new 8GB.

Heavily modded games might also require even more than 16 GB.

From personal experience, a large metropolis in Cities Skylines with lots of mods might use up to 32 GB of RAM (which means you really need at least 40 GB of actual system RAM to leave some for the OS and be comfortable). Euro Truck Simulator 2 with lots of mods can also use more than 16 GB of RAM.
 
It turns out that I was wrong about something. I had agreed with you that it will probably be years before 8GB starts causing issues at 1080p. Well, that's unfortunately not the case:
You couldn't pick a more horribly optimized game if you tried.
 