Higher-res textures consume more of the buffer for storage and more of the bandwidth when used, so when the bus is overwhelmed, the FPS will drop.
You have no idea what you're talking about.
Textures cannot be loaded into VRAM any faster than the storage medium they're stored on can deliver them. Even if you have a PCIe 4.0 SSD that can read files at 7 GB/s, that's still only 2.4% of the bandwidth of the 4060 Ti's VRAM. And it's physically impossible to load textures any faster than that.
Though in practice it's even less than that, because loading textures is not a sequential operation, and even the very best SSDs in existence today still only do somewhere between 1 GB/s and 2 GB/s in high-queue-depth random reads, which is at most 0.7% of the 4060 Ti's bandwidth.
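A quick sanity check of those percentages (this assumes 288 GB/s for the 4060 Ti's VRAM bandwidth, which is the figure the fractions above imply):

```python
# Back-of-envelope check of the SSD-vs-VRAM bandwidth fractions.
# Assumed figure: RTX 4060 Ti VRAM bandwidth of 288 GB/s.
vram_bw = 288.0   # GB/s
ssd_seq = 7.0     # GB/s, top PCIe 4.0 SSD sequential read
ssd_rand = 2.0    # GB/s, optimistic high-queue-depth random read

print(f"sequential: {ssd_seq / vram_bw:.1%}")   # ~2.4%
print(f"random:     {ssd_rand / vram_bw:.1%}")  # ~0.7%
```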
Loading textures has negligible bandwidth requirements for GPUs. The reason they have memory with such high bandwidth has nothing whatsoever to do with textures.
Otherwise, why would anyone build faster memory, why would you overclock it?
Because faster memory speeds up other things that have nothing to do with textures, such as compute operations and shaders.
Same goes for the idea of using large caches to alleviate the need for bandwidth in RDNA and Ada Lovelace. Those caches aren't for textures, you'd fit an insignificant amount of textures in them. Those caches are for data the GPU is doing compute tasks with.
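To put the cache point in numbers, here's a rough sketch. The 32 MB figure is the L2 size on Ada cards like the 4060 Ti; the texture size assumes a single uncompressed 4K RGBA8 texture with a full mip chain:

```python
# How many textures would actually fit in a GPU's last-level cache?
l2_cache_mb = 32                    # MB, Ada Lovelace (e.g. 4060 Ti) L2
tex_bytes = 4096 * 4096 * 4         # one uncompressed 4K RGBA8 texture
tex_with_mips = tex_bytes * 4 / 3   # a full mip chain adds ~33%
tex_mb = tex_with_mips / (1024 ** 2)

print(f"one 4K texture: {tex_mb:.1f} MB")                  # ~85.3 MB
print(f"textures per cache: {l2_cache_mb / tex_mb:.2f}")   # less than one
```

Even a single uncompressed 4K texture is more than twice the size of the whole cache, which is why those caches can't meaningfully be "for textures".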
You see steeper drops when VRAM size is insufficient because fetching data from the PC's main memory through PCIe incurs a large latency penalty.
Latency has nothing whatsoever to do with bandwidth, those are two completely separate specs. You can have memory setups that are high bandwidth and high latency, and you can have memory setups that are low bandwidth and low latency (or any other combination of the two). They are not correlated.
If, instead of DDR, PCs used GDDR RAM sticks with hundreds of GB/s of bandwidth, you'd still see the same impact on GPUs when they run out of VRAM, because that bandwidth does absolutely nothing to mitigate the latency penalty of having to access the system RAM through PCIe.
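The latency-versus-bandwidth point can be sketched with a simple transfer-time model, time = latency + size / bandwidth. All numbers below are illustrative, not measured platform specs:

```python
# Simple model: time to fetch one block = latency + size / bandwidth.
# Illustrative numbers only; real PCIe/DRAM latencies vary by platform.
def fetch_time_us(size_kb, latency_us, bandwidth_gbs):
    return latency_us + (size_kb * 1024) / (bandwidth_gbs * 1e9) * 1e6

# A small 4 KB fetch with ~1 us round-trip latency, over two links:
slow_link = fetch_time_us(4, latency_us=1.0, bandwidth_gbs=32)   # PCIe-class bandwidth
fast_link = fetch_time_us(4, latency_us=1.0, bandwidth_gbs=500)  # GDDR-class bandwidth, same latency

print(f"{slow_link:.3f} us vs {fast_link:.3f} us")
# Both are dominated by the 1 us latency term: raising bandwidth
# more than 15x barely changes the small-fetch time at all.
```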
I think you are confusing anisotropic filtering with mipmap:
No, I'm not. You completely misunderstood this part.
I'm saying that loading textures into VRAM has insignificant bandwidth requirements (because, again, they cannot be loaded any faster than your SSD can transfer them), and that anisotropic filtering, which is a different thing entirely (a resampling operation applied to textures that are already resident in VRAM), has very small bandwidth requirements and a minuscule impact on framerate.
15 years ago I had an HD 4850 with 63 GB/s of GDDR3 bandwidth, and on that GPU 16x AF was already essentially free.
Shaders also read textures as you can see in the unity optimization guide:
"It’s recommended that you manually optimize your shaders to reduce calculations and texture reads"
Shaders can read texture data (like they can read any other kind of data stored in VRAM), but the number of shaders that do this is insignificant and it doesn't affect actual in-game performance, as you can see from the myriad of game benchmarks showing that changing texture settings in a game has zero impact on framerate (so long as you have enough VRAM).
Texture settings don't affect framerate so long as the card is not bandwidth starved
Texture settings don't affect framerate AT ALL.
If you disagree with this statement, you're welcome to provide the link to a benchmark that proves it wrong, by showing a game where turning settings from medium to high, for example, lowers the framerate.
Again, the ONLY thing that matters for texture settings is how much VRAM you have. If you don't run out of VRAM, texture settings have zero impact on performance. Bandwidth doesn't matter for texture settings, because the bandwidth requirement for loading textures is minuscule, a fraction of a percent of the bandwidth the GPU's VRAM has.
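The "only VRAM capacity matters" point comes down to footprint arithmetic: texture settings change how big the resident texture pool is, not how fast the GPU renders. The numbers below are illustrative, assuming BC7-style block compression at roughly 1 byte per texel and a hypothetical pool of 300 resident textures:

```python
# Texture quality settings change how much VRAM the texture pool occupies,
# not how fast the GPU renders -- until the pool no longer fits.
def texture_mb(resolution, bytes_per_texel=1.0):   # BC7 ~ 1 byte/texel
    base = resolution * resolution * bytes_per_texel
    return base * 4 / 3 / (1024 ** 2)              # + ~33% for mipmaps

pool = 300  # hypothetical number of resident textures
for res, label in [(2048, "medium (2K)"), (4096, "high (4K)")]:
    print(f"{label}: {pool * texture_mb(res) / 1024:.1f} GB texture pool")
# The medium pool fits comfortably in 8 GB alongside everything else;
# the high pool may not, and only then does the setting cost framerate.
```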
While I agree that the price difference between the 8 GB and 16 GB versions is too big, I believe this is a temporary situation and could not have been predicted when the GPU was developed. Also, since the trend is for wafer prices to increase and for RAM cells not to shrink enough to compensate, in the long run memory prices will increase, and we will not see density increases until they find a way to scale the capacitor in the cell. Until then, manufacturers will probably try to limit the amount of memory they ship.
Again, no idea what you're talking about. DRAM prices have been on a downward slope for years, and the manufacturing processes have evolved just fine. There was zero indication in the past that DRAM prices would go up, and there are zero indications that DRAM prices will go up today.
You're the one who seems to be confusing DRAM (the type of memory that is used as DDR, GDDR and others) with SRAM (the pools of memory that go inside chips themselves and are used as registers and cache). SRAM is built in the same manufacturing processes as the chips themselves (TSMC N7, TSMC N5, Samsung 8 nm, Intel 7 and so on), and that's the type of memory that has been scaling poorly on the latest processes. But that has nothing whatsoever to do with DRAM manufacturing and the GDDR market.