Nvidia GeForce RTX 4060 Ti 16GB Reviewed, Benchmarked

What is this consooomer mindset? It's available, thus it has value?

Bruh no. The 6500 and 6400 were objectively bad values, no matter how you twist it.

AMD got "hate" because they released what amounts to a 1650, four years later, at a 33% HIGHER price, with a gimped PCIe bus on top of it, and with no real place in the stack. Sorry, but being the underdog is no excuse to release half-baked products. Do better.
Simply put, judging how good a product is can be done in either absolute or relative terms. In absolute terms, the 6500XT was not good. But on relative value, well. Your choices are the 6500XT or some utter crap like the GT730. From those options, the 6500XT is simply miles better. Best option in the under-$300 category, as Techspot stated.

Your comment just proves how some people cannot see the big picture. The 6500XT launch was easily the most important GPU launch of this decade so far. Why? It ended the worst GPU shortage. First, scalpers no longer had a chance to buy up all the cards. Second, since low-end GPUs were available, there was no longer a need to buy mid-range or high-end products just because nothing else was available.

Whatever AMD does seems to be wrong.
No, crap is still crap; you are convincing yourself that it doesn't smell so you can justify consooming the product instead of just... not doing that.
Seems hard to understand that in many cases an overpriced, crappy GPU is much better than no GPU at all.

Incorrect. I refused to lower my standards. Vote with your dollars and don’t reward companies’ crap products. Plain and simple.
Always easy to say when you have choices. But in a situation where you have to buy a new GPU (no, waiting is not an option) and the 6500XT is the best option, you quickly change your mind. That's why the 6500XT sold well.

Also, feel free to say what AMD could have done better. Or Nvidia. So many complain about the 6500XT while not offering any better "what AMD should have made" alternative.
 
Cheeses Crust, at 1440p it's trailed by the $330 RX 6700 XT and obliterated by the $460 RX 6800. That's some "fine wine" by AMD.
At a big stretch, frame generation should probably be its only redeeming feature, as I wouldn't try ray tracing in this performance range.
 
I remember when the 6500XT was the only card available to purchase; those were some salty times, with even HUB and TS recommending the card (not because it was good). It was available because nobody wanted that trash. Like now, when cards are available across the stack but nobody wants them or can't buy at current prices. I was talking the other day with some friends working for a major retailer here. Looks like the 6600 and 6650 XT were the top volume sellers, followed by the 4070 (non-Ti), the 6700 XT and the 3060 12GB.
 
I remember when the 6500XT was the only card available to purchase; those were some salty times, with even HUB and TS recommending the card (not because it was good). It was available because nobody wanted that trash.
Yeah right https://www.hardwaretimes.com/amds-radeon-rx-6500-xt-is-the-bestselling-gpu-in-germany/

It was available because it was made on 6nm, about the only node that had some capacity available (that probably delayed the Zen 4 and/or RDNA3 launch a bit, however). Unlike this 4060 Ti anomaly, which can be considered a useless release, the 6500XT was really needed.
 
I remember when the 6500XT was the only card available to purchase; those were some salty times, with even HUB and TS recommending the card (not because it was good). It was available because nobody wanted that trash. Like now, when cards are available across the stack but nobody wants them or can't buy at current prices. I was talking the other day with some friends working for a major retailer here. Looks like the 6600 and 6650 XT were the top volume sellers, followed by the 4070 (non-Ti), the 6700 XT and the 3060 12GB.

YOU TAKE THAT BACK! We never recommended that trash lol :D
 
Yeah right https://www.hardwaretimes.com/amds-radeon-rx-6500-xt-is-the-bestselling-gpu-in-germany/

It was available because it was made on 6nm, about the only node that had some capacity available (that probably delayed the Zen 4 and/or RDNA3 launch a bit, however). Unlike this 4060 Ti anomaly, which can be considered a useless release, the 6500XT was really needed.
There isn't really separate capacity for 6nm vs 7nm at TSMC. N6 is part of TSMC's 7nm process family; it is produced in the same fabs, like their Fab 15B. TSMC plans and reports capacity for the whole process family together (https://www.anandtech.com/show/16732/tsmc-manufacturing-update), and N6, as an evolutionary improvement upon N7, has been seen to replace N7 in fab output over time, not supplement it. Even fab utilization rates are reported together for the entire process family (https://www.moomoo.com/community/feed/110026264674309).
 
Cheeses Crust, at 1440p it's trailed by the $330 RX 6700 XT and obliterated by the $460 RX 6800. That's some "fine wine" by AMD.
At a big stretch, frame generation should probably be its only redeeming feature, as I wouldn't try ray tracing in this performance range.

It was as low as $430 (6800 XT $480, 6950 XT $570), but stock is running dry.
 
Simply put, judging how good a product is can be done in either absolute or relative terms. In absolute terms, the 6500XT was not good. But on relative value, well. Your choices are the 6500XT or some utter crap like the GT730. From those options, the 6500XT is simply miles better. Best option in the under-$300 category, as Techspot stated.

Your comment just proves how some people cannot see the big picture. The 6500XT launch was easily the most important GPU launch of this decade so far. Why? It ended the worst GPU shortage. First, scalpers no longer had a chance to buy up all the cards. Second, since low-end GPUs were available, there was no longer a need to buy mid-range or high-end products just because nothing else was available.

Whatever AMD does seems to be wrong.

Seems hard to understand that in many cases an overpriced, crappy GPU is much better than no GPU at all.


Always easy to say when you have choices. But in a situation where you have to buy a new GPU (no, waiting is not an option) and the 6500XT is the best option, you quickly change your mind. That's why the 6500XT sold well.

Also, feel free to say what AMD could have done better. Or Nvidia. So many complain about the 6500XT while not offering any better "what AMD should have made" alternative.
I mean I literally said “It would have to have at least eight lanes, hardware decode, and 3 display output” but I guess you missed that part. This publication held the same opinion as a matter of fact.

My guy, just admit the 6500 was a mistake and basically a giant middle finger to the entry level segment. It was unfortunately a sign of things to come in the following generation (which we have just come to realize this year), but, at the time it was unprecedentedly awful which is why it got a 20/100. Companies release turds every once in a while… nothing to get defensive about. Lisa Su still makes billions for AMD.

But most importantly, quit consooming. Sometimes the only winning move is not to play.

Peace…
 
The 6500 XT was unforgivably awful. It deserved the score it got.

“Best value” my rear end. You had to have a current-gen platform to even get the most out of it (largely defeating the purpose of its existence), and it was severely stripped down in functionality due to it being a repurposed laptop GPU. It being the “best value” was largely thanks to the GPU shortage, somehow managing to stay on shelves despite said shortage (I wonder why?). In reality, even at $200 MSRP, it was twice as expensive as it should ever have been. It was also no faster than its predecessor!

The words "value" and "6500XT" should never be used in the same sentence. The card is a steaming POS that I would buy for $99, at most. It has the performance one would expect from a 6400, and prices are just ludicrous for this turkey.
 
There isn't really separate capacity for 6nm vs 7nm at TSMC. N6 is part of TSMC's 7nm process family; it is produced in the same fabs, like their Fab 15B. TSMC plans and reports capacity for the whole process family together (https://www.anandtech.com/show/16732/tsmc-manufacturing-update), and N6, as an evolutionary improvement upon N7, has been seen to replace N7 in fab output over time, not supplement it. Even fab utilization rates are reported together for the entire process family (https://www.moomoo.com/community/feed/110026264674309).
Of course TSMC 6nm and 7nm are different capacities. They are different processes, and putting a 7nm chip on 6nm requires a redesign (and the other way around). For the 6500XT, AMD used 6nm because it was designed for 6nm, and that gave more capacity for GPUs. Of course, it also meant AMD could make less of something else on 6nm, but it gave more GPU capacity when it was needed. Similarly, every IO die on 5nm Ryzens and Epycs is 6nm, not 7nm.

I mean I literally said “It would have to have at least eight lanes, hardware decode, and 3 display output” but I guess you missed that part. This publication held the same opinion as a matter of fact.
It was a low-end laptop chip; it makes no sense to have those features on a chip like that. Techspot didn't understand that the 6500XT was not meant to be a desktop chip at all, so complaining about missing features is pointless. They just cannot admit missing that.
My guy, just admit the 6500 was a mistake and basically a giant middle finger to the entry level segment. It was unfortunately a sign of things to come in the following generation (which we have just come to realize this year), but, at the time it was unprecedentedly awful which is why it got a 20/100. Companies release turds every once in a while… nothing to get defensive about. Lisa Su still makes billions for AMD.

But most importantly, quit consooming. Sometimes the only winning move is not to play.

Peace…
Easily the most important GPU launch of the decade, perhaps the most important GPU launch of all time, is a miss? The 6500XT ended the worst GPU shortage because it was 1. available and 2. not suitable for cryptomining, and both 1 and 2 meant scalpers could no longer buy everything.

Basically, everyone complains about the GPU shortage. Then, when AMD does something that solves the problem, that is wrong. This is all about seeing the big picture, not just going "bad specs, AMD sucks LOL".
The words "value" and "6500XT" should never be used in the same sentence. The card is a steaming POS that I would buy for $99, at most. It has the performance one would expect from a 6400, and prices are just ludicrous for this turkey.
Once again, when the 6500XT launched, it had very good value. To determine value, you have to check what is available and at what price. Just admit it finally. Even Techspot had to admit that under $300 it was the best choice: https://www.techspot.com/bestof/gpu-2022/

In other words, Techspot recommends a card they gave 20/100, effectively admitting themselves that they messed up the score :D
 
I would buy the 4060 Ti 16GB for 300-350€ in a heartbeat; there are no bad products, just bad prices. I have my Series X for modern titles; paying another console's price for basically similar performance (before even counting the other PC components) is insane, and this Nvidia strategy (average mainstream GPU price equal to a modern console's price) is destined to fail badly.

The upselling in this generation is crazy as well: 350-370€ for 12GB cards (and even those are last generation), then a huge gap up to almost 600€ for the 4060 Ti 16GB, and then it's straight to the 7900 XT at 850€ as the next 16GB card in line. Everything else is sold out here and nothing new has been released.
 
"Ultra" settings for things like geometry, shadows, shader effects, post-processing and so on are usually inefficient in terms of how much visual improvement they bring compared to how much performance they cost.
The same does not apply to textures. Higher resolution textures have no effect whatsoever on FPS: so long as you have enough VRAM for them, you can turn the setting to "ultra" and it won't affect your performance at all. For textures, all that matters is the amount of VRAM you have.
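For a sense of the memory amounts involved, here's a rough back-of-envelope sketch (the numbers are assumptions for illustration: a 4096x4096 texture, uncompressed RGBA8 versus an 8-bits-per-texel block-compressed format, with a full mip chain):

```python
# Back-of-envelope VRAM footprint of a single texture. Purely illustrative:
# real games stream textures and mix many formats and sizes per material.
def texture_mb(width, height, bits_per_texel, with_mips=True):
    base_bytes = width * height * bits_per_texel / 8
    # A full mip chain adds roughly one third on top of the base level
    # (1 + 1/4 + 1/16 + ... -> 4/3).
    total = base_bytes * (4 / 3 if with_mips else 1)
    return total / (1024 ** 2)

print(texture_mb(4096, 4096, 32))  # ~85 MB: uncompressed RGBA8 4K texture
print(texture_mb(4096, 4096, 8))   # ~21 MB: BC7-style block compression
```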
Higher-res textures consume more of the buffer for storage and more of the bandwidth when used, so when the bus is overwhelmed, the FPS will drop. Otherwise, why would anyone build faster memory, and why would you overclock it? Both size and speed are essential, and when either of them is insufficient, performance will drop. You see steeper drops when VRAM size is insufficient because fetching data from the PC's main memory through PCIe incurs a large latency penalty.

They do. If your card doesn't have enough VRAM, you can turn your texture settings down. The issue here is that consoles have 16 GB of shared memory, with 10+ GB of it usable as VRAM, and 8 GB cards will simply not be able to match the visual quality of the consoles (as far as textures go). It's pathetic that Nvidia is launching $300, $400 GPUs that can't match the quality settings consoles use.
I totally agree, and I believe that a 4060 Ti with a 192-bit bus and 12GB of memory would have been a lot better; the 8GB version should not have been built.

Your understanding is wrong then. Texture quality settings are not bandwidth intensive, and texture filtering settings have a very modest bandwidth requirement (we got past the point where anisotropic filtering is "free" performance-wise on PC over 15 years ago). If a card has 16 GB of VRAM, it can benefit from it to use high-res textures, regardless of how narrow its bus is.
I think you are confusing anisotropic filtering with mipmapping:
- anisotropic filtering differs from the texture quality setting. Rather than switching out one texture for a higher-resolution version, it modifies the appearance of the texture to account for viewing angle;
- mipmaps are smaller, pre-filtered versions of a texture image, representing different levels of detail (LOD) of the texture. They are stored in sequences of progressively smaller textures called mipmap chains, with each level half the width and height of the previous one. Mipmapping increases cache efficiency, reducing bandwidth usage and increasing application performance (see the quick sketch below).
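As a quick illustration of that mip chain overhead (an assumed example: a 4096x4096 RGBA8 texture at four bytes per texel; real engines use compressed formats, but the ratio is the same):

```python
# Mipmap chain for a 4096x4096 RGBA8 texture (assumed example). Each level
# halves the width and height of the previous one, down to 1x1.
def mip_chain(width, height, bytes_per_texel=4):
    levels = []
    while True:
        levels.append(width * height * bytes_per_texel)
        if width == 1 and height == 1:
            break
        width, height = max(1, width // 2), max(1, height // 2)
    return levels

chain = mip_chain(4096, 4096)
print(len(chain))              # 13 levels (4096, 2048, ..., 1)
print(sum(chain) / chain[0])   # ~1.333: the whole chain costs only ~4/3
                               # of the base level's memory
```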

Yes, because shaders (especially pixel shaders) now have to work with even more data and even larger framebuffers. It's not because of textures.
Shaders also read textures, as you can see in the Unity optimization guide:
"It’s recommended that you manually optimize your shaders to reduce calculations and texture reads"

The 3060 is perfectly capable of handling ultra settings for textures, because 1) texture settings don't affect framerate, and 2) it has enough VRAM to do it.
Texture settings don't affect framerate so long as the card is not bandwidth-starved (so it can transfer the bigger textures) and has enough memory to fit them. These things go hand in hand.

DRAM prices right now are the lowest they have been in ages. Recent estimates are that 16 Gigabit GDDR6 modules are at around $6 to $7 each, meaning 8 GB worth of modules costs less than $30 to AIB partners.
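To spell out the module math (using the $6-7 per 16 Gb module estimate above, which is a spot-price estimate, not an official price list):

```python
# Module math for the estimate above. The $6-7 per 16 Gb GDDR6 module figure
# is the estimate quoted here, not an official price.
GBIT_PER_MODULE = 16

def bom_estimate(total_gb, low=6, high=7):
    modules = total_gb * 8 // GBIT_PER_MODULE   # GB -> gigabits -> modules
    return modules, modules * low, modules * high

print(bom_estimate(8))    # (4, 24, 28): 8 GB is roughly $24-28 of modules
print(bom_estimate(16))   # (8, 48, 56): doubling to 16 GB adds ~$25-30
```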
While I agree that the price difference between the 8 and 16GB versions is too big, I believe this is a temporary situation and could not have been predicted when the GPU was developed. Also, since the trend is for wafer prices to increase and the RAM cell not to shrink enough to compensate, in the long run memory prices will increase, and we will not see density increases until they find a way to scale the capacitor in the cell. Until then, manufacturers will probably try to limit the amount of memory they ship.
 
What I don't understand is: if 8GB is not enough for a modern GPU, why is the 16GB card only a few FPS faster? Doesn't the fact that it's almost the same speed mean that modern cards don't need more than 8GB of memory (at least up to 1440p)? (Just curious rather than trying to start an argument.)
 
What I don't understand is: if 8GB is not enough for a modern GPU, why is the 16GB card only a few FPS faster? Doesn't the fact that it's almost the same speed mean that modern cards don't need more than 8GB of memory (at least up to 1440p)? (Just curious rather than trying to start an argument.)

Just read the article or watch the review on YouTube. The 1% lows increased dramatically, meaning some games go from an unplayable, stuttering 15 FPS to a smooth, playable experience.
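If it helps, here's a small sketch of how 1% lows are commonly derived from frame-time captures (methodologies vary between reviewers; this is just one common approach):

```python
# One common way "1% lows" are computed from a frame-time capture: convert
# each frame time to FPS and average the slowest 1% of frames.
def fps_stats(frame_times_ms):
    fps = sorted(1000.0 / t for t in frame_times_ms)
    worst = fps[:max(1, len(fps) // 100)]        # slowest 1% of frames
    return sum(fps) / len(fps), sum(worst) / len(worst)

# 99 smooth ~60 FPS frames plus a single 70 ms stutter:
avg, low_1pct = fps_stats([16.7] * 99 + [70.0])
print(round(avg), round(low_1pct))   # average barely moves (~59), 1% low ~14
```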
 
"As for the 16GB RTX 4060 Ti, obviously you shouldn't buy it at the current price – for $400 maybe, but we don't expect it to hit that price any time soon."
My two cents: it's going to hit that price soon. They can keep pushing this bullsh1t pricing, but when nobody buys, they'll start losing big. Supply and demand, you mfers...

Unless they bet on shifting all production to AI for now and only have vaporware for gamers. Hoping that gamers will embrace them when the AI bandwagon slows down.
 
Higher-res textures consume more of the buffer for storage and more of the bandwidth when used, so when the bus is overwhelmed, the FPS will drop.
You have no idea what you're talking about.
Textures cannot be loaded into VRAM any faster than the storage medium they're stored on can deliver them. Even if you have a PCIe 4.0 SSD that can load files at 7 GB/s, that's still only 2.4% of the bandwidth of the 4060 Ti's VRAM. And it's physically impossible to load textures any faster than that.
Though in practice it's even less than that, because loading textures is not a sequential operation, and even the very best SSDs in existence today still only do somewhere between 1 GB/s and 2 GB/s in high-queue-depth random reads, which is at most 0.7% of the 4060 Ti's bandwidth.
Loading textures has negligible bandwidth requirements for GPUs. The reason they have memory with such high bandwidth has nothing whatsoever to do with textures.
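To make those percentages explicit (same figures as above, nothing new assumed beyond the ~288 GB/s and the SSD speeds already quoted):

```python
# The ratios described above: ~288 GB/s of VRAM bandwidth on the 4060 Ti
# versus what an SSD can actually feed it.
VRAM_BW_GBS = 288.0
ssd_cases = {"PCIe 4.0 sequential peak": 7.0,
             "fast random reads": 2.0,
             "slow random reads": 1.0}
for label, bw in ssd_cases.items():
    print(f"{label}: {bw / VRAM_BW_GBS:.1%} of VRAM bandwidth")
# PCIe 4.0 sequential peak: 2.4% / fast random reads: 0.7% / slow: 0.3%
```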

Otherwise, why would anyone build faster memory, and why would you overclock it?
Because faster memory speeds up other things that have nothing to do with textures, such as compute operations and shaders.
Same goes for the idea of using large caches to alleviate the need for bandwidth in RDNA and Ada Lovelace. Those caches aren't for textures, you'd fit an insignificant amount of textures in them. Those caches are for data the GPU is doing compute tasks with.

You see steeper drops when VRAM size is insufficient because fetching data from the PC's main memory through PCIe incurs a large latency penalty.
Latency has nothing whatsoever to do with bandwidth; those are two completely separate specs. You can have memory setups that are high bandwidth and high latency, and you can have memory setups that are low bandwidth and low latency (or any other combination of the two). They are not correlated.
If, instead of DDR, PCs used GDDR RAM sticks with hundreds of GB/s of bandwidth, you'd still see the same impact on GPUs when they run out of VRAM, because that bandwidth does absolutely nothing to mitigate the latency penalty of having to access the system RAM through PCIe.
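A toy model of that point; every number here is an assumption chosen purely for illustration, not a measurement of any specific hardware:

```python
# Toy model: time to service a miss = latency + size / bandwidth. All numbers
# are assumptions chosen for illustration, not measurements of real hardware.
def access_time_us(size_kb, latency_us, bandwidth_gb_s):
    transfer_us = (size_kb / (1024 * 1024)) / bandwidth_gb_s * 1_000_000
    return latency_us + transfer_us

# A small 64 KB fetch that has spilled over PCIe into system RAM:
print(access_time_us(64, latency_us=1.0, bandwidth_gb_s=32))   # ~2.9 us
print(access_time_us(64, latency_us=1.0, bandwidth_gb_s=500))  # ~1.1 us
# 15x more bandwidth barely helps, because the round-trip latency dominates.
```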

I think you are confusing anisotropic filtering with mipmap:
No, I'm not. You completely misunderstood this part.
I'm saying that loading textures into VRAM has insignificant bandwidth requirements (because, again, they cannot be loaded any faster than your SSD can transfer them), and that anisotropic filtering (which is a different thing; it's a resampling operation done to textures that are already loaded into VRAM) has very small bandwidth requirements and a minuscule impact on framerate.
15 years ago I had an HD 4850 with 63 GB/s of GDDR3, and on that GPU 16x AF was already essentially free.

Shaders also read textures as you can see in the unity optimization guide:
"It’s recommended that you manually optimize your shaders to reduce calculations and texture reads"
Shaders can read texture data (like they can read any other kind of data stored in the VRAM), but the number of shaders that do this is insignificant, and it doesn't affect actual in-game performance, as you can see from the myriad game benchmarks showing that changing texture settings in a game has zero impact on framerate (so long as you have enough VRAM).

Texture settings don't affect framerate so long as the card is not bandwidth-starved
Texture settings don't affect framerate AT ALL.
If you disagree with this statement, you're welcome to provide a link to a benchmark that proves it wrong, by showing a game where turning texture settings from medium to high, for example, lowers the framerate.
Again, the ONLY thing that matters for texture settings is how much VRAM you have. If you don't run out of VRAM, texture settings have zero impact on performance. Bandwidth doesn't matter for texture settings, because the bandwidth requirement for loading textures is minuscule, a fraction of a percent of the bandwidth the GPU's VRAM has.

While I agree that the price difference between the 8 and 16GB versions is too big, I believe this is a temporary situation and could not have been predicted when the GPU was developed. Also, since the trend is for wafer prices to increase and the RAM cell not to shrink enough to compensate, in the long run memory prices will increase, and we will not see density increases until they find a way to scale the capacitor in the cell. Until then, manufacturers will probably try to limit the amount of memory they ship.
Again, no idea what you're talking about. DRAM prices have been on a downward slope for years, and the manufacturing processes have evolved just fine. There was zero indication that DRAM prices would go up in the past, and there are zero indications that DRAM prices will go up today.
You're the one who seems to be confusing DRAM (the type of memory that is used as DDR, GDDR and others) with SRAM (the pools of memory that go inside chips themselves and are used as registers and cache). SRAM is built in the same manufacturing processes as the chips themselves (TSMC N7, TSMC N5, Samsung 8 nm, Intel 7 and so on), and that's the type of memory that has been scaling poorly on the latest processes. But that has nothing whatsoever to do with DRAM manufacturing and the GDDR market.
 
I would like to see more attention drawn to the 6950 XT; seeing a card nipping at the heels of the 7900 XT while costing $579 is something to appreciate.
 
Of course TSMC 6nm and 7nm are different capacities. They are different processes, and putting a 7nm chip on 6nm requires a redesign (and the other way around). For the 6500XT, AMD used 6nm because it was designed for 6nm, and that gave more capacity for GPUs. Of course, it also meant AMD could make less of something else on 6nm, but it gave more GPU capacity when it was needed. Similarly, every IO die on 5nm Ryzens and Epycs is 6nm, not 7nm.


It was a low-end laptop chip; it makes no sense to have those features on a chip like that. Techspot didn't understand that the 6500XT was not meant to be a desktop chip at all, so complaining about missing features is pointless. They just cannot admit missing that.

Easily the most important GPU launch of the decade, perhaps the most important GPU launch of all time, is a miss? The 6500XT ended the worst GPU shortage because it was 1. available and 2. not suitable for cryptomining, and both 1 and 2 meant scalpers could no longer buy everything.

Basically, everyone complains about the GPU shortage. Then, when AMD does something that solves the problem, that is wrong. This is all about seeing the big picture, not just going "bad specs, AMD sucks LOL".

Once again, when the 6500XT launched, it had very good value. To determine value, you have to check what is available and at what price. Just admit it finally. Even Techspot had to admit that under $300 it was the best choice: https://www.techspot.com/bestof/gpu-2022/

In other words, Techspot recommends a card they gave 20/100, effectively admitting themselves that they messed up the score :D
No, going from 7nm to 6nm does not require a redesign; those two processes use the same design rules and allow for IP reuse and straight porting from N7 to N6 (like Sony did with the PS5). The production process is altered slightly, but it fundamentally uses the same fabs and machines, only removing a couple of manufacturing steps by replacing them with light EUV usage. EUV has already been used in the 7nm family since N7+, though, so the fabs already have the required EUV tools anyway.
See https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_7nm, specifically: "N6 manufacturing process delivers 18% higher logic density over the N7 process. At the same time, its design rules are fully compatible with TSMC's proven N7 technology, allowing its comprehensive design ecosystem to be reused."
 
You have no idea what you're talking about.
Textures cannot be loaded into VRAM any faster than the storage medium they're stored on can deliver them. Even if you have a PCIe 4.0 SSD that can load files at 7 GB/s, that's still only 2.4% of the bandwidth of the 4060 Ti's VRAM. And it's physically impossible to load textures any faster than that.
Though in practice it's even less than that, because loading textures is not a sequential operation, and even the very best SSDs in existence today still only do somewhere between 1 GB/s and 2 GB/s in high-queue-depth random reads, which is at most 0.7% of the 4060 Ti's bandwidth.
Loading textures has negligible bandwidth requirements for GPUs. The reason they have memory with such high bandwidth has nothing whatsoever to do with textures.


Because faster memory speeds up other things that have nothing to do with textures, such as compute operations and shaders.
Same goes for the idea of using large caches to alleviate the need for bandwidth in RDNA and Ada Lovelace. Those caches aren't for textures, you'd fit an insignificant amount of textures in them. Those caches are for data the GPU is doing compute tasks with.
You keep telling me I have no idea what I'm talking about so often that I almost start to believe you. OK, I am no authority on the matter. But how about Nvidia:
"Textures consume the largest amount of the available memory and bandwidth in many applications. One of the best places to look for improvements when short on memory or bandwidth to optimize texture size, format and usage. "
You can read more about it here, if you are interested https://developer.nvidia.com/docs/d...mmon/topics/graphics_content/Textures124.html
Latency has nothing whatsoever to do with bandwidth; those are two completely separate specs. You can have memory setups that are high bandwidth and high latency, and you can have memory setups that are low bandwidth and low latency (or any other combination of the two). They are not correlated.
If, instead of DDR, PCs used GDDR RAM sticks with hundreds of GB/s of bandwidth, you'd still see the same impact on GPUs when they run out of VRAM, because that bandwidth does absolutely nothing to mitigate the latency penalty of having to access the system RAM through PCIe.
Of course it is not directly tied, but once you have a cache miss, you need to fetch from main RAM or the SSD and wait for the data to arrive.
No, I'm not. You completely misunderstood this part.
I'm saying that loading textures into VRAM has insignificant bandwidth requirements (because, again, they cannot be loaded any faster than your SSD can transfer them), and that anisotropic filtering (which is a different thing; it's a resampling operation done to textures that are already loaded into VRAM) has very small bandwidth requirements and a minuscule impact on framerate.
15 years ago I had an HD 4850 with 63 GB/s of GDDR3, and on that GPU 16x AF was already essentially free.
Yes, loading textures into VRAM has little effect on bandwidth. Using the textures from VRAM, on the other hand, necessitates transferring them to the GPU via the memory bus, and as Nvidia states, that is one of the main things that consumes bandwidth.
Shaders can read texture data (like they can read any other kind of data stored in the VRAM), but the number of shaders that do this is insignificant, and it doesn't affect actual in-game performance, as you can see from the myriad game benchmarks showing that changing texture settings in a game has zero impact on framerate (so long as you have enough VRAM).


Texture settings don't affect framerate AT ALL.
If you disagree with this statement, you're welcome to provide a link to a benchmark that proves it wrong, by showing a game where turning texture settings from medium to high, for example, lowers the framerate.
Again, the ONLY thing that matters for texture settings is how much VRAM you have. If you don't run out of VRAM, texture settings have zero impact on performance. Bandwidth doesn't matter for texture settings, because the bandwidth requirement for loading textures is minuscule, a fraction of a percent of the bandwidth the GPU's VRAM has.
Please read the article on textures in the Nvidia link above.
Again, no idea what you're talking about. DRAM prices have been on a downward slope for years, and the manufacturing processes have evolved just fine. There was zero indication that DRAM prices would go up in the past, and there are zero indications that DRAM prices will go up today.
You're the one who seems to be confusing DRAM (the type of memory that is used as DDR, GDDR and others) with SRAM (the pools of memory that go inside chips themselves and are used as registers and cache). SRAM is built in the same manufacturing processes as the chips themselves (TSMC N7, TSMC N5, Samsung 8 nm, Intel 7 and so on), and that's the type of memory that has been scaling poorly on the latest processes. But that has nothing whatsoever to do with DRAM manufacturing and the GDDR market.
I am aware that SRAM also has scaling issues, though they managed to scale it further because an SRAM cell is made of six transistors that do scale with the process. The DRAM cell, on the other hand, employs a 1-transistor/1-capacitor (1T1C) design; with one transistor that scales and a capacitor that scales poorly, it is even worse off than SRAM. Again, don't believe me, read it from authoritative sources:
AnandTech: "DRAM density growth as a whole has been slowing over the years due to scaling issues, and GDDR7 will not be immune to that." You can see that because the chips released over the past years have mostly stayed at 2GB since the GDDR5 era and will stay the same for GDDR7. See here, and you might see why we do not have 4x more memory on current cards than we had in 2016, when the 1080 was released with 8GB of RAM.
 
It was a low-end laptop chip; it makes no sense to have those features on a chip like that. Techspot didn't understand that the 6500XT was not meant to be a desktop chip at all, so complaining about missing features is pointless. They just cannot admit missing that.

Easily the most important GPU launch of the decade, perhaps the most important GPU launch of all time, is a miss? The 6500XT ended the worst GPU shortage because it was 1. available and 2. not suitable for cryptomining, and both 1 and 2 meant scalpers could no longer buy everything.

Basically, everyone complains about the GPU shortage. Then, when AMD does something that solves the problem, that is wrong. This is all about seeing the big picture, not just going "bad specs, AMD sucks LOL".

Once again, when the 6500XT launched, it had very good value. To determine value, you have to check what is available and at what price. Just admit it finally. Even Techspot had to admit that under $300 it was the best choice: https://www.techspot.com/bestof/gpu-2022/

In other words, Techspot recommends a card they gave 20/100, effectively admitting themselves that they messed up the score :D


I’m convinced you’re just trolling at this point, or delusional.

The fact is, what ended the GPU shortage was not the turd 6500XT; it was the crypto crash of 2022, which cratered crypto-mining demand and flooded the market with tons of used inventory that rapidly undercut the sudden flood of new inventory.

If the 6500XT was listed above MSRP (as you mentioned earlier) that demonstrates that it did NOT end the GPU shortage; it was actually a PART of said shortage! 🤦‍♂️

If you still wish to desperately cling to your stance, please explain how a product launched at the BOTTOM of the market, which was clearly not purchased in any significant quantities (see below), somehow “ended” the GPU shortage.

You can’t, because the reality is, people still weren’t able to buy what they REALLY wanted higher up the stack. Where exactly does the 6500XT fall on the Steam Hardware Survey? Currently, less than 0.25%. I don’t even think that’s top 50. In fact, it currently runs neck and neck with the 5x more expensive 6900XT! Not a good showing for what is SUPPOSED to be a cheap, “great”, widely available product. Usually the mid/low end cards are a manufacturer’s best selling product (see 1650/3060 at the top of the chart). I’d bet that if you go through the archives, it will be painfully clear that this card was never purchased or used in any significant quantities. But you’re welcome to try.

 
"Textures consume the largest amount of the available memory and bandwidth in many applications. One of the best places to look for improvements when short on memory or bandwidth to optimize texture size, format and usage. "
You can read more about it here, if you are interested https://developer.nvidia.com/docs/d...mmon/topics/graphics_content/Textures124.html
You're getting too hung up on unimportant wording. Maybe it does in "many applications", but it absolutely does not in games.
Textures are stored on an SSD, and you can only get textures out of a (PCIe 4.0) SSD at a maximum of 7.5 GB/s (but realistically less than half of that). If textures are to consume most of the 4060 Ti's 288 GB/s of bandwidth, where are those hundreds of GBs of texture files coming from?

Of course it is not directly tied, but once you have a cache miss, you need to fetch from main RAM or the SSD and wait for the data to arrive.
No, if you have a cache miss, you have to fetch data from the VRAM. Only when you're out of VRAM do you go to RAM and the SSD.

Yes, loading textures into VRAM has little effect on bandwidth. Using the textures from VRAM, on the other hand, necessitates transferring them to the GPU via the memory bus, and as Nvidia states, that is one of the main things that consumes bandwidth.
That's not how any of this works.
If textures were consuming a large portion of the bandwidth of a GPU, then turning texture settings up would cause framerates to drop as less bandwidth would be available to other tasks in the graphics pipeline. Similarly, turning texture settings down would increase framerate, as it would make more bandwidth available for other tasks and speed them up.
But your claim is not compatible with reality. When we look at benchmarks, we see that changing texture settings has essentially zero impact on framerate. That's because textures consume an insignificant portion of a GPU's bandwidth.

See here, and you might see why we do not have 4x more memory on current cards than we had in 2016, when the 1080 was released with 8GB of RAM.
While DRAM density has been slow to improve, that doesn't mean 16 Gb modules didn't get significantly cheaper. AMD has been selling 12 GB cards for ~$300, 16 GB cards for around ~$450, and 20 GB cards for ~$750. Intel also sold 16 GB cards for ~$350. Even Nvidia themselves had the $330 RTX 3060 with 12 GB, which is now $280. Selling GPUs with adequate amounts of VRAM is perfectly economical. It's Nvidia specifically who intentionally decided to milk their customers, this isn't the result of "market forces" pushing VRAM prices up (because VRAM prices aren't up, they're the lowest they've been in years).

As opposed to SRAM, which didn't get cheaper because the manufacturing process as a whole got more expensive. For DRAM, if you want more memory you can just get more modules (and they're cheap). For SRAM that option doesn't exist.
 
No, going from 7nm to 6nm does not require a redesign; those two processes use the same design rules and allow for IP reuse and straight porting from N7 to N6 (like Sony did with the PS5). The production process is altered slightly, but it fundamentally uses the same fabs and machines, only removing a couple of manufacturing steps by replacing them with light EUV usage. EUV has already been used in the 7nm family since N7+, though, so the fabs already have the required EUV tools anyway.
See https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_7nm, specifically: "N6 manufacturing process delivers 18% higher logic density over the N7 process. At the same time, its design rules are fully compatible with TSMC's proven N7 technology, allowing its comprehensive design ecosystem to be reused."
While the processes are quite similar, it still requires some work to switch production from 7nm to 6nm. Even if AMD could use an existing 7nm design on 6nm, the die area reduction would be zero. Since 6nm is likely more expensive, that would hardly make any sense. AMD did do that with the GF 12nm Ryzen 2000 series, though.

Anyway, easier and less work in this case does not equal no work at all.
 
While the processes are quite similar, it still requires some work to switch production from 7nm to 6nm. Even if AMD could use an existing 7nm design on 6nm, the die area reduction would be zero. Since 6nm is likely more expensive, that would hardly make any sense. AMD did do that with the GF 12nm Ryzen 2000 series, though.

Anyway, easier and less work in this case does not equal no work at all.
Yes, there is some minor work required, but the engineering and costs involved are minimal. The 7nm design can be used almost as is, with a simple conversion process that requires minimal work, according to https://www.angstronomics.com/p/ps5-refresh-oberon-plus and https://www.anandtech.com/show/14290/tsmc-most-7nm-clients-will-transit-to-6nm. TSMC itself seems to think the work and costs involved are so low that almost everyone will move from 7nm to 6nm.
Regarding die size reduction: as the PS5 example shows and TSMC itself advertises, moving a design from N7 to N6 can reduce die size by 15%, so no, it's far from zero. 15% smaller dies give you around 18% more dies per wafer (see the quick math below). N6 manufacturing is actually slightly less complicated and uses only one more EUV layer, while reusing the same fabs and tools, so any wafer cost increase is probably minimal. Due to the increased dies per wafer, cost per die is almost certainly lower; otherwise Sony would have no reason to migrate the PS5 to 6nm without changing the overall design in any way.
So, yeah, it isn't a free lunch, but it's an easy and cheap way to reduce manufacturing costs and cut power usage slightly, which is why a lot of TSMC clients migrated to it.
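The ~18% figure is just the area arithmetic (a ballpark that ignores wafer-edge effects and defect yield):

```python
# Dies-per-wafer gain from a 15% die shrink: a ballpark that ignores
# wafer-edge effects and defect yield.
shrink = 0.15
gain = 1 / (1 - shrink) - 1
print(f"{gain:.1%}")   # 17.6% -> roughly 18% more candidate dies per wafer
```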
 