Personally, I would get 16GB minimum for 4K gaming, meaning a 4080 and up, or a 7900 XT as the bare minimum. Last-gen cards with 16GB might work, but the GPU itself seems too weak to make use of the VRAM anyway; a 3090 Ti or a 6950 XT can easily be brought to its knees in demanding games at 4K once the fps goal is higher than 30-60 fps. Personally, I want at least 100 fps when gaming on PC.
I agree with you, but there's more to it than just that. One HUGE gobbler of VRAM is maximum-quality textures, something that makes about 100x more difference than ray tracing. Those textures can overflow a card's frame buffer regardless of resolution or the actual potency of the GPU. In the long term, someone with an RTX 3070 Ti will be stuck at 1080p, because by then games will require more than 8GB of VRAM for 1440p at anything beyond bare-minimum settings and what will, by then, be considered low-quality textures.
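To put rough numbers on that (a back-of-the-envelope sketch; the formats and sizes are illustrative assumptions, not any particular game's actual assets): texture memory scales with texture resolution and format, not with render resolution, which is why an HD texture pack can blow past 8GB even at 1080p.

```python
def texture_mib(width: int, height: int, bytes_per_texel: float, mips: bool = True) -> float:
    """Approximate VRAM footprint of one texture, in MiB."""
    base = width * height * bytes_per_texel
    # A full mip chain adds roughly one third on top of the base level.
    total = base * 4 / 3 if mips else base
    return total / 2**20

# RGBA8 is 4 bytes/texel uncompressed; BC7 block compression works out to 1 byte/texel.
print(f"4096x4096 RGBA8: {texture_mib(4096, 4096, 4):6.1f} MiB")  # ~85.3 MiB
print(f"4096x4096 BC7  : {texture_mib(4096, 4096, 1):6.1f} MiB")  # ~21.3 MiB
print(f"2048x2048 BC7  : {texture_mib(2048, 2048, 1):6.1f} MiB")  # ~ 5.3 MiB
```

Multiply even the compressed figure by the thousands of materials a modern open-world game streams in, and it's easy to see how a texture pack alone can add several GB, no matter what resolution you render at.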
For some games, even ones that aren't outrageously demanding on the GPU (like Far Cry 6), either a card has enough VRAM for the full experience or it doesn't. A lot of people remember the HD texture pack in Far Cry 6. That pack requires a video card to have a minimum frame buffer of 11GB in order to use it. That annoyed the hell out of me because when Far Cry 6 came out, I was still using my RX 5700 XT as my main card. I had thought that surely 8GB wouldn't be a hindrance on a card like the RX 5700 XT because I was only gaming at 1080p or (at most) 1440p, but... here we are. As annoyed as I was by that, I would've been absolutely livid if I had paid five times what my RX 5700 XT cost for an RTX 3080 (under $500 CAD pre-COVID vs. roughly $2,500 CAD during COVID) and still couldn't use Far Cry 6's HD textures from day one.
Sometimes it doesn't even matter how potent the card is, because some games have optional features, like HD texture packs, that require lots of VRAM to use; features that are completely unaffected by the GPU's power or the graphics settings being used. Far Cry 6 is not an aberration; it is merely a harbinger of what is to come.
The 4070 Ti and 3090 Ti perform almost identically at 1440p, but when you move to 4K, the 3090 Ti tends to perform about 10% better. This is probably a result of a memory-bandwidth bottleneck, so overclocking the memory on the 4070 Ti should help some.
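The raw numbers support that (a quick sketch using the stock reference specs; the +10% OC figure is just a hypothetical for illustration):

```python
# Peak theoretical memory bandwidth: data rate (Gbps per pin) * bus width (bits) / 8.
def mem_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(f"RTX 4070 Ti (21 Gbps, 192-bit): {mem_bandwidth_gb_s(21, 192):.0f} GB/s")  # 504
print(f"RTX 3090 Ti (21 Gbps, 384-bit): {mem_bandwidth_gb_s(21, 384):.0f} GB/s")  # 1008

# A hypothetical ~10% memory OC (21 -> 23 Gbps) narrows the 2x gap, but nowhere near closes it.
print(f"RTX 4070 Ti @ 23 Gbps OC      : {mem_bandwidth_gb_s(23, 192):.0f} GB/s")  # 552
```

The 4070 Ti leans on its big L2 cache to compensate for the narrow bus, and cache hit rates tend to drop as resolution rises, which would fit the pattern of it falling behind specifically at 4K.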
Yep, but it's also the fact that the RTX 3090/Ti has literally double the 4070 Ti's VRAM; the RTX 3090 and 3090 Ti both have 24GB. You see, nVidia did the RTX 3090 cards the right way, but nothing else in the stack. Like, seriously, jumping from level 8 to level 9 (3080 to 3090) shouldn't result in a 14GB VRAM increase. That should've been a big red flag to people looking at the RTX 3080 and below (except for the inexplicable 12GB RTX 3060).
I still think the 4070 Ti is for 1440p high-refresh gaming rather than 4K; 12GB of VRAM won't age well there. Games have already started to use way more memory (both system RAM and VRAM), specifically the ones that only come to PS5 and XSX. These "next-gen" games bump PC requirements up fast: 16GB of RAM is now the bare minimum, but 32GB is preferred and delivers higher minimum fps in a lot of games now, plus the system is generally much smoother when it isn't hitting the pagefile all the time.
The RTX 4070 Ti has only 1GB more than the nVidia flagship of three generations ago despite being 108% faster. Hell, it only matches last generation's RX 6700 XT despite being ~75% faster. The fact that nobody noticed or talked about this was mystifying to me. Then, when I brought it up, there were actually people who denied that it would make a difference (the Dunning-Kruger effect in operation). That was very telling about how little people actually know about tech (even people on tech sites).
To his credit, Steve Walton did mention it in his review of the RTX 3080 but then he quickly dismissed it with some nonsense about PCI-Express v4.0:
"Doesn't necessarily future proof the GPU however, as Nvidia paired it with 10GB of VRAM which might prove insufficient in a year or two, though PCIe 4.0 will be a lot more useful in mitigating those performance loses in modern platforms."
- No it won't, and it hasn't. I don't know what Steve's talking about here, because connectivity speed stops being a factor when you run out of memory. System RAM can start swapping to disk, but video cards can't, so games either take fps cliff-dives or crash altogether.
I don't know why Steve thinks that PCI-Express v4.0 will make any difference. This makes about as much sense as saying:
"Doesn't necessarily future proof the GPU however, as AMD paired it with 4GB of VRAM which might prove insufficient in a year or two, though the fact that it's HBM with a 4096-bit memory bus will be a lot more useful in mitigating those performance loses in modern platforms."
If you run out of VRAM, it doesn't matter how fast the VRAM is or how fast PCI-Express v4.0 is, because you're still out of VRAM. I would take more VRAM that is slower over less VRAM that is faster seven days a week and twice on Sunday.
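The bandwidth gap makes the point on its own (rough one-direction numbers; I'm using the 10GB RTX 3080's stock specs since that's the card Steve was reviewing):

```python
# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding, 16 lanes, one direction.
pcie4_x16_gb_s = 16 * (128 / 130) * 16 / 8  # ~31.5 GB/s

# RTX 3080 (10GB): 19 Gbps GDDR6X on a 320-bit bus.
vram_3080_gb_s = 19 * 320 / 8               # 760 GB/s

print(f"PCIe 4.0 x16    : {pcie4_x16_gb_s:5.1f} GB/s")
print(f"RTX 3080 GDDR6X : {vram_3080_gb_s:5.1f} GB/s")
print(f"Spilled-over data moves at ~{pcie4_x16_gb_s / vram_3080_gb_s:.0%} of VRAM speed")
```

Anything that overflows into system RAM is being fetched at roughly 1/24th of local VRAM speed, which is exactly why the symptom is a cliff-dive rather than a gentle slowdown.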
I don't know why Steve said this, but it's obviously wrong. PCI-Express v4.0 hasn't mitigated anything with regard to insufficient VRAM. That doesn't surprise me, because no version of PCI-Express has ever been able to mitigate something like this, and v4.0 is no different from any previous version of PCI-Express except that it's faster.