If you look at the 4K results for Turing it's pretty clear that its VRAM is being overwhelmed in multiple titles (go check the updated results that Steve released). People aren't making a big fuss about it yet since we don't know the compression improvements of the new cards, but if they don't improve on Turing then they'll have the same problems. The bandwidth didn't increase by that much, especially for the 3070.

Check out some of the coverage of this "issue" on Gamers Nexus. The "VRAM usage" line you see in your software of choice isn't the true use of VRAM, and the game is using less than that (the tuning software can't actually tell what's merely allocated and what's actually in use).
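As a rough illustration of that allocated-vs-used gap, here's a minimal sketch using PyTorch's CUDA memory counters (not the same counters Afterburner or GPU-Z read, but the same idea: the process reserves more VRAM than it is actively using, and external tools see the bigger number):

```python
import torch

# Grab a pool of VRAM up front, the way game engines often allocate early.
pool = [torch.empty(64, 1024, 1024, device="cuda") for _ in range(4)]  # ~1 GiB total

# Free most of it from the program's point of view...
del pool[1:]
torch.cuda.synchronize()

# ...but the caching allocator keeps the memory reserved, so external tools
# (nvidia-smi, Afterburner, etc.) still report roughly the full footprint.
print(f"actively used:  {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
print(f"still reserved: {torch.cuda.memory_reserved() / 2**30:.2f} GiB")
```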
This "not enough vram" has been a consistent refrain for years now and it never really pans out. Plus you have the problem of market penetration. AMD accounts for a couple percent of GPU market share. Current games in development are targeting the majority of the market which is 6GB of VRAM. Soon to be 8GB of VRAM. By the time 10GB of VRAM isn't "enough" (ie, that it impacts framerates or the quality of the gaming) we will be talking about the 5xxx series cards (or even later).
I'm excited to see what AMD brings to the table, especially on the CPU side. But Big Navi will not be competitive with the 3080 in any way. And the really annoying part for me is that even if it were competitive on price and silicon, it still wouldn't be competitive at the driver level. They have gotten much better over the past 2-3 years, but they're still lagging far behind the level of game and driver support Nvidia brings to the table.
Nvidia was too focused on AI and enterprise, and Turing was an embarrassing misstep for them. I don't expect that to happen twice.