AMD Radeon RX 5500 XT 8GB Review: Navi at $200

This is just terrible. You get an extra 2GB of memory, but it's slower than the 1660 Super at the same cost per frame. And you have AMD's broken Wattman and drivers to deal with. Sad. I really wanted a new RX 570: the same best-in-class performance per dollar, but faster than the RX 590. As an AMD fan, I'm looking forward to Intel's GPUs now? They should sell off Radeon at this point if they can't do better. Just like we wanted a 7nm GTX 1080 but got the crap Turing instead, now we get a worse RX 5500 XT instead of a 7nm RX 590.
 
Thanks for the review! Looks like this card is competing in the $125-150 market with a $200 price; as a cheap gamer, that's my range. I picked up my RX 480 for $150 about 2-3 years ago and it's not much slower. Nothing compelling here with the RX 5500!
 
I am totally disappointed. If the 8 GB version were priced at $169 and the 4 GB at, say, $139, it would be great; however, AMD insists on losing to Nvidia. I really wonder who is responsible for AMD's graphics department. I hope the ATI days come back.
 
Just as crappy as the GTX 1660 and only ~$30 less ... The 8 GB version should be $170, then it would be compelling ... as is, it may as well be just yet another Nvidia card slotted in between 10 million others ...
 
It's all about the pricing. It's a really tight, competitive segment, and what you should buy often comes down to what deals are available at that very moment.

Where I am there are good clearance deals at the moment. Pick up an RX 590 while they're being cleared out to make way for these: better AIB models with 8GB of memory for under $170.
 
Seeing it that far below the 1660 is just unfortunate.

How far below?
2-3 fps on average for a reasonably better $ per frame ... that's not terrible (well, I mean, within what's available).

Unless you were referring to the 1660 Super ... But even there, the $ per frame is identical - so it's the exact same value.

Its problem is that it needs a better price. The 5600 XT will compete around the 1660 Super ... but again, AMD has to be more compelling (price/perf) than this ...
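
Quick napkin math on that "same $ per frame" point (the prices are the ones being thrown around in this thread; the average-fps figures below are just my own placeholders, not the review's data):

```python
# Napkin math on cost per frame. Prices are the US MSRPs mentioned in this
# thread; the average-fps figures are placeholder assumptions, not review data.
cards = {
    "RX 5500 XT 8GB": {"price_usd": 200, "avg_fps": 87},   # assumed fps
    "GTX 1660 Super": {"price_usd": 230, "avg_fps": 100},  # assumed fps
}

for name, c in cards.items():
    print(f"{name}: ${c['price_usd'] / c['avg_fps']:.2f} per average fps")

# With these assumed numbers both land at roughly $2.30 per frame, which is
# the point: identical value, but one card is simply slower.
```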
 
Unless you were referring to the 1660 Super ... But even there, the $ per frame is identical - so it's the exact same value.
Depends on where you live. In the UK, the "$230" GTX 1660 SUPER and the "$200" RX 5500 XT are both the same £199, whilst the "$160" GTX 1650 SUPER is down to £138. So the £199 RX 5500 XT ends up either 15% slower than the GTX 1660 SUPER at the same price, or carrying a +45% higher price tag vs the 1650 Super for just +8% more fps... Even if the 4GB version comes in at a predicted £170-£175, it's still a hard sell vs either of the 1650/1660 Supers over here.
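
To put rough numbers on that (the "+8% more fps" figure above is treated as an assumption here, not measured data):

```python
# UK street prices quoted above; the "+8% more fps" figure is taken from the
# post and treated as an assumption rather than measured data.
price_5500xt_8gb = 199.0   # GBP
price_1650_super = 138.0   # GBP
fps_gain = 0.08            # 5500 XT assumed ~8% faster than the 1650 SUPER

price_premium = price_5500xt_8gb / price_1650_super - 1
print(f"Price premium: {price_premium:.0%} for {fps_gain:.0%} more fps")

# Normalise to "pounds per 1650-SUPER's-worth of frames":
print(f"1650 SUPER: £{price_1650_super:.0f}, "
      f"5500 XT 8GB: £{price_5500xt_8gb / (1 + fps_gain):.0f}")
# -> roughly 44% more money for 8% more performance; per unit of performance
#    the 5500 XT works out to about £184 vs the 1650 SUPER's £138.
```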
 
Depends on where you live. In the UK, the "$230" GTX 1660 SUPER and the "$200" RX 5500 XT are both the same £199, whilst the "$160" GTX 1650 SUPER is down to £138. So the £199 RX 5500 XT ends up either 15% slower than the GTX 1660 SUPER at the same price, or carrying a +45% higher price tag vs the 1650 Super for just +8% more fps... Even if the 4GB version comes in at a predicted £170-£175, it's still a hard sell vs either of the 1650/1660 Supers over here.

Interesting ... I wonder what the reasons are for that disparity?
 
AMD is killing it on the CPU front, but I’ve never been a fan of their graphics cards. This is most underwhelming, and I hardly see the point of the release. I’m holding off for Ampere at this stage unless Navi+ is a big jump in performance per dollar.
 
Looks like they're trying the Nvidia strategy: overprice the slower card so that people buy it anyway despite it being inferior, like the RX 570 vs the 1050 Ti.
 
Looks like they're trying the Nvidia strategy: overprice the slower card so that people buy it anyway despite it being inferior, like the RX 570 vs the 1050 Ti.

Just a bad strategy; GDDR6 is expensive. Last I read, it cost $11.60 per 1GB chip for 14Gbps GDDR6:
https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5.html
The 1660 with GDDR5 still beats the 5500 XT, which means equipping these cards with GDDR6 is kind of pointless. Or maybe it's because Navi still lacks the compression techniques that Nvidia employs, thus requiring bigger and faster VRAM to compete.

GDDR6 might be cheaper now; AMD is charging $30 for 4 more GB. TechPowerUp inspected the PCB and mentioned that it is overengineered for this performance bracket. So overall, bad decisions were made over at AMD.
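
For what it's worth, here's what the memory bill alone would look like if that old $11.60/GB figure still applied (it almost certainly doesn't, so treat this as an upper-bound sketch):

```python
# Rough GDDR6 memory-cost sketch using the ~$11.60 per 1GB chip figure from
# the Guru3D report linked above. Current contract pricing is unknown, so
# these are illustrative upper bounds, not actual BOM numbers.
cost_per_gb_usd = 11.60

mem_4gb = 4 * cost_per_gb_usd          # memory cost of the 4GB card
mem_8gb = 8 * cost_per_gb_usd          # memory cost of the 8GB card
upsell = 30                            # what AMD charges for the extra 4GB

print(f"4GB config: ~${mem_4gb:.0f} of GDDR6")
print(f"8GB config: ~${mem_8gb:.0f} of GDDR6 (extra ~${mem_8gb - mem_4gb:.0f})")
print(f"AMD's upcharge for the extra 4GB: ${upsell}")
# At the old price the extra 4GB would cost more than the $30 upcharge, which
# only adds up if GDDR6 really has gotten a lot cheaper since that report.
```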
 
Just a bad strategy; GDDR6 is expensive. Last I read, it cost $11.60 per 1GB chip for 14Gbps GDDR6:
https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5.html
The 1660 with GDDR5 still beats the 5500 XT, which means equipping these cards with GDDR6 is kind of pointless. Or maybe it's because Navi still lacks the compression techniques that Nvidia employs, thus requiring bigger and faster VRAM to compete.

GDDR6 might be cheaper now; AMD is charging $30 for 4 more GB. TechPowerUp inspected the PCB and mentioned that it is overengineered for this performance bracket. So overall, bad decisions were made over at AMD.

I should probably point out that memory compression only reduces bandwidth requirements, not size requirements.

https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

"NVIDIA GPUs utilize several lossless memory compression techniques to reduce memory bandwidth demands as data is written out to frame buffer memory. The GPU’s compression engine has a variety of different algorithms which determine the most efficient way to compress the data based on its characteristics. This reduces the amount of data written out to memory and transferred from memory to the L2 cache and reduces the amount of data transferred between clients (such as the texture unit) and the frame buffer. Turing adds further improvements to Pascal’s state-of-the-art memory compression algorithms, offering a further boost in effective bandwidth beyond the raw data transfer rate increases of GDDR6. As shown in Figure 10, the combination of raw bandwidth increases, and traffic reduction translates to a 50% increase in effective bandwidth on Turing compared to Pascal, which is critical to keep the architecture balanced and support the performance offered by the new Turing SM architecture."

8GB on a card like the 5500 XT might be overkill.
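
A toy calculation to make the distinction concrete (the traffic-reduction ratio is a made-up assumption in the spirit of that "50% more effective bandwidth" claim, and the bus figures are just an example):

```python
# Bandwidth vs. capacity, illustrated with assumed numbers.
raw_bandwidth_gb_s = 224.0     # e.g. a 128-bit bus at 14Gbps GDDR6 -> 224 GB/s
traffic_reduction = 1.3        # assumed: on average 1.3x less data crosses the bus

effective_bandwidth = raw_bandwidth_gb_s * traffic_reduction
print(f"Effective bandwidth: ~{effective_bandwidth:.0f} GB/s")

# Capacity doesn't change: an 8GB card still exposes 8GB of VRAM, because the
# lossless compression only shrinks the traffic, not the allocations.
vram_gb = 8
print(f"Usable VRAM: {vram_gb} GB either way")
```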
 
Unless they lower prices or can increase performance via drivers, idk why you would buy either of these cards.
 
I have a strong feeling that the RX 5500 series is aimed more at OEMs, allowing them to offer all-AMD budget gaming PCs (Ryzen 3500/3600 + RX 5500 instead of a GTX 1650).

For OEMs, the extra watts used by Polaris-based cards matter, as they drive up the total system cost (case, GPU, CPU and PSU cooling, since the extra heat affects all components).

 
Unless they lower prices or can increase performance via drivers, idk why you would buy either of these cards.
It's not really a better or worse option than a 1650 (Super) overall. It will probably work better together with an AMD CPU than an Nvidia-based card would.

I'd really like to see a test with a Ryzen 3600 to confirm or disprove this.
 
"NVIDIA GPUs utilize several lossless memory compression techniques to reduce memory bandwidth demands as data is written out to frame buffer memory. The GPU’s compression engine has a variety of different algorithms which determine the most efficient way to compress the data based on its characteristics. This reduces the amount of data written out to memory and transferred from memory to the L2 cache and reduces the amount of data transferred between clients (such as the texture unit) and the frame buffer. Turing adds further improvements to Pascal’s state-of-the-art memory compression algorithms, offering a further boost in effective bandwidth beyond the raw data transfer rate increases of GDDR6. As shown in Figure 10, the combination of raw bandwidth increases, and traffic reduction translates to a 50% increase in effective bandwidth on Turing compared to Pascal, which is critical to keep the architecture balanced and support the performance offered by the new Turing SM architecture."
8GB on a card like the 5500 XT might be overkill.

"This reduces the amount of data written out to memory and transferred from memory to the L2 cache and reduces the amount of data transferred between clients (such as the texture unit) and the frame buffer".
That means that by reducing the amount of data transferred, the effective bandwidth is increased. Let's say you compress a 100GB file into a 90GB zip and transfer it over a gigabit network: the time to transfer is reduced because the physical size is reduced. But yeah, lossless compression doesn't reduce the physical size that much.
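
Putting that zip-over-gigabit analogy into numbers (sizes and ratio straight from the example above, link speed idealized):

```python
# The 100GB -> 90GB zip example from above, sent over an idealized 1 Gbit/s
# link (~125 MB/s, ignoring protocol overhead). Purely illustrative.
link_mb_s = 125.0
raw_gb, zipped_gb = 100.0, 90.0

t_raw = raw_gb * 1000 / link_mb_s        # seconds to send uncompressed
t_zip = zipped_gb * 1000 / link_mb_s     # seconds to send the zip

print(f"Uncompressed: {t_raw / 60:.1f} min, compressed: {t_zip / 60:.1f} min")
print(f"Effective throughput: {raw_gb * 1000 / t_zip:.0f} MB/s "
      f"({raw_gb / zipped_gb:.2f}x the wire speed)")
# -> ~13.3 min vs ~12.0 min: the wire never runs faster than 125 MB/s, but you
#    effectively moved 100GB at ~139 MB/s. Same idea as the GPU compression:
#    less traffic on the bus, no change in how much VRAM anything occupies.
```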

Also, there is a wide difference between the 4GB and 8GB models in Gears 5, even at 1080p:
[Gears 5 1080p benchmark chart: RX 5500 XT 4GB vs 8GB]


So yeah, the 5500 XT 8GB is the model to go for in the near future, yet it is too expensive for what it offers.
 