An upcoming budget Radeon GPU will be restricted to PCIe 4.0 x4 bandwidth, but what does that mean for gaming performance? Here's a brief investigation into PCIe performance with 4GB and 8GB GPU models.
Even if everything did work fine, or they gave it x8 or x16, it's apparently only ~26% faster than the RX 570, which launched nearly five years ago at around the same price. I remember several people telling me this wasn't going to be a big deal, even when I pointed out that cutting it to both x4 AND PCIe 3.0 would cripple this card. Now we can say for certain that I was right to be cautious: nobody should be buying this turd at release.
So the question is, who did AMD create this train wreck of a card for?
I don't buy the argument that limiting VRAM usage in games to 4GB is hard or impossible. Just dropping the top mip level off textures cuts the VRAM they use by roughly 75% (each mip level is a quarter the size of the one above it), and textures usually constitute the majority of VRAM usage.
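A back-of-envelope sketch of that claim. The exact percentage depends on what you count against, but with full mip chains, discarding the top level works out to about a 75% saving per texture. This assumes an uncompressed 4K RGBA8 texture purely for illustration; real games use compressed formats, but the ratio is the same:

```python
# Rough sketch of how much VRAM dropping the top mip level saves.
# Hypothetical example: a 4096x4096 RGBA8 texture (4 bytes/texel)
# with a full mip chain down to 1x1.

def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Total bytes for a texture plus all of its mip levels."""
    total = 0
    while True:
        total += width * height * bytes_per_texel
        if width == 1 and height == 1:
            break
        width = max(1, width // 2)
        height = max(1, height // 2)
    return total

full = mip_chain_bytes(4096, 4096)     # full-resolution chain
dropped = mip_chain_bytes(2048, 2048)  # top level discarded
print(f"full chain:      {full / 2**20:.1f} MiB")
print(f"top mip dropped: {dropped / 2**20:.1f} MiB")
print(f"savings:         {1 - dropped / full:.0%}")
```

Since each mip is a quarter of the one above, the whole chain is only ~4/3 of the base level; lop off the base and you keep roughly a quarter of the original footprint.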
8x I can understand but dropping to 4x is crazy.
Is there a PCIe lane shortage too???
It really is a ridiculously crippled design decision. Even the £99 GTX 1050 (non Ti) 2GB had 16 lanes (link) so it can't be "cost cutting"...
That's why I cannot emphasise enough that AMD have gimped this card to the point of making it redundant. Sure, gamers can buy it because of good availability, but that's only because nobody else is buying it. It is poor for gamers, especially if it costs anything above MSRP. It is useless for miners because of the 4GB VRAM. It is less desirable for HTPC users because it lacks an up-to-date video decoder. It is too expensive for driving multiple displays for work, since I can get a cheap GTX 1050 and it works fine. So the question is, who did AMD create this train wreck of a card for? I sound harsh, but these are the facts.

If you're looking for a cheap card, you might as well consider a GTX 1650. It doesn't have an up-to-date video decoder either, but it's got an encoder, and it has a PCIe 3.0 x16 connection. It's also probably cheaper than this RX 6500/6400, at least from what I've seen from time to time, and it runs most games well enough at 1080p on mid/low graphics settings.
4GB VRAM is definitely pretty limited for a modern GPU, and PCIe 3.0 x4 will impact performance somewhat even when not exceeding 4GB of VRAM usage, but a lot of games will check the amount of VRAM available and adjust the default settings appropriately, even if that means low-quality texture settings. I haven't done the testing, but I would be surprised if there are many games that can't be adjusted to run in 4GB.

The reviews tested Cyberpunk 2077 on the medium preset with medium textures, and PCIe 3.0 x4 still took a performance hit.
Far Cry 6 was tested on the medium preset, and PCIe 4.0 x4 took a big performance hit (especially in the 1% lows), while PCIe 3.0 x4 took an even bigger one.
So medium settings might not fix the problem in some games (especially if you're on a PCIe 3.0 motherboard).
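For context, the raw link numbers make the gap obvious. A quick sketch of theoretical peak bandwidth per link (transfer rate per lane times 128b/130b encoding efficiency times lane count; real transfers see protocol overhead on top of this):

```python
# Back-of-envelope PCIe link bandwidth. Theoretical peak only.

GTS = {"3.0": 8.0, "4.0": 16.0}  # giga-transfers per second per lane

def link_gbps(gen, lanes):
    """Peak one-way bandwidth in GB/s for a PCIe link."""
    return GTS[gen] * (128 / 130) * lanes / 8  # bits -> bytes

for gen, lanes in [("3.0", 16), ("4.0", 4), ("3.0", 4)]:
    print(f"PCIe {gen} x{lanes}: {link_gbps(gen, lanes):.2f} GB/s")
```

That puts PCIe 3.0 x4 at roughly a quarter of what an ordinary 3.0 x16 card gets, and a 4.0 x4 card dropped into a 3.0 board loses half again.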
In my opinion, there is nothing wrong with 4GB VRAM cards, especially if the price is right. But when you have limited VRAM, it is common sense that there is no avoiding the spillover into system RAM, and that is where a faster bus becomes meaningful. In this case you have the combination of limited VRAM and a PCIe x4 link (I still cannot believe AMD cut costs to this extent), which can be disastrous. The performance hits in the review above highlight the issue clearly. No matter how good the card is on paper, you will run into the PCIe bottleneck very quickly. AMD is advertising this for use with FSR, and I can see the reason for doing so.

What we learned: do not ever buy a 4GB card. Honestly can't believe it's a thing in 2022 on a card with an RRP of US$200. Even worse, the 64-bit memory bus of the 6500 XT is a sad, pathetic joke IMO. I'm amazed the 6400 doesn't have a 32-bit bus, 2GB, and 4MB of Infinity Cache.
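To put a number on why spillover on a narrow link is disastrous: any texture data pushed out to system RAM has to cross the PCIe bus when it's needed. A rough sketch, assuming a hypothetical 256 MB of spilled textures touched in one frame and the theoretical peak rates of ~3.94 GB/s (PCIe 3.0 x4) versus ~15.75 GB/s (PCIe 3.0 x16):

```python
# Rough feel for the cost of VRAM spillover across the PCIe bus.
# The 256 MB spill size is an illustrative assumption, not a measurement.

FRAME_BUDGET_MS = 1000 / 60  # 60 fps target, ~16.7 ms per frame

def transfer_ms(megabytes, link_gbps):
    """Milliseconds to move `megabytes` of data over a link of `link_gbps` GB/s."""
    return megabytes / 1024 / link_gbps * 1000

for name, gbps in [("PCIe 3.0 x4", 3.94), ("PCIe 3.0 x16", 15.75)]:
    ms = transfer_ms(256, gbps)
    print(f"{name}: {ms:.1f} ms ({ms / FRAME_BUDGET_MS:.1f}x a 60 fps frame)")
```

On the x4 link the transfer alone eats several entire frame budgets, which is exactly the kind of 1% low collapse the reviews showed.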