AMD: The new Radeon RX 5500 will boast 'up to' 37 percent faster performance than the competition

Polycount

Something to look forward to: If you're in the market for a decent graphics card but don't want to join the Green Team, AMD has you covered pretty well as of late. The 5700 and 5700 XT are both great cards for their price, and soon, the RX 5500 will be joining their ranks.

Whereas the 5700 and 5700 XT perform well in the 1440p arena, AMD has positioned the 7nm RX 5500 as the king of 1080p gaming at a (presumably) reasonable price point.

The first RX 5500s will be made available in pre-made desktops and notebooks, though the latter will ship with the tweaked RX 5500M. There will be a discrete version of the card at some point in the future, but we don't have a specific release window yet.

Preliminary information aside, what exactly does the 5500 bring to the table? According to AMD, it boasts "up to" 37 percent faster performance on average than its competition (Nvidia's GTX 1650).

In more concrete numbers, AMD says the RX 5500 can deliver up to 60 FPS in "select" AAA titles (such as Gears of War 5 and Ghost Recon Breakpoint), and up to 90 FPS in eSports games (Fortnite, Apex Legends, Overwatch). As usual, we recommend taking manufacturer claims with a grain of salt until third-party benchmarks (such as our own) hit the web.

The 5500 features standard clock speeds of up to 1717 MHz and boost clocks of up to 1845 MHz, PCIe 4.0 support, as well as a 128-bit memory interface and 8GB of GDDR6 memory. Further, AMD says the RX 5500 will have access to many of the same features the 5700 and 5700 XT do, including Radeon Image Sharpening and Radeon Anti-Lag.
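As a rough sanity check on those specs, the peak memory bandwidth implied by a 128-bit interface is easy to work out; here's a minimal sketch, assuming 14 Gbps GDDR6 (the memory speed isn't confirmed for the RX 5500):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate, divided by
# 8 to convert bits to bytes.
# Assumption: 14 Gbps GDDR6, typical for Navi cards; the RX 5500's actual
# memory speed has not been stated.
bus_width_bits = 128
data_rate_gbps = 14  # per pin; hypothetical for the RX 5500

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 224 GB/s at 14 Gbps
```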

We don't know how much the discrete version of AMD's RX 5500 will cost at launch, but ideally, it shouldn't be too far off from the GTX 1650's price tag. If it were much pricier, AMD's performance comparisons wouldn't make much sense. For reference, the 1650 will currently run you about $150-$170, depending on where you look.

Again, we recommend waiting for reviews before making a purchasing decision, but if you absolutely must snag a 5500 for yourself ASAP, the first machines with the card installed will come from HP and Lenovo this November, with Acer alternatives following in December. Notably, these rigs qualify for AMD's recently announced "Raise the Game" bundle deal.

Until then, you can look forward to the MSI Alpha 15's release later this month. The gaming notebook will include the RX 5500M, a stripped-down version of the standard RX 5500 with a base clock of up to 1448 MHz, a boost clock of up to 1645 MHz, and 4GB of GDDR6 VRAM.

 
AMD's website says that the maximum supported memory size is 4 GiB:


Edit: One thing that is puzzling is that the board image on the website has 8 GDDR memory modules. Micron and SK Hynix only produce 8 Gib modules; Samsung produces 8 and 16 Gib. Eight 8 Gib modules give a total storage of 8 GiB, not 4.

On top of this, all current GDDR6 memory modules are 32 bits wide; this is why the 5700 XT has a 256-bit bus and 8 GiB of memory - it has eight 32-bit, 8 Gib GDDR6 modules!

So with a 128-bit bus, the 5500 can only have 4 memory modules (there isn't any 16-bit-wide GDDR6), so the image of the board on the website is wrong (it's obviously just a 5700 shot). However, this does mean that the maximum memory of the 5500 is 4 GiB or 8 GiB, depending on what brand of GDDR6 gets used:

Micron / SK Hynix / Samsung 8 Gib = 4 GiB maximum
Samsung 16 Gib = 8 GiB maximum

But what concerns me far more about this product is that the TDP is 150W; the GeForce 1650 is half that!
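To make the arithmetic above explicit, here's a minimal sketch of the module-count reasoning (illustrative only):

```python
# Each current GDDR6 module is 32 bits wide, so the bus width fixes the
# module count, and the module density (in Gib) fixes the total capacity.
def max_memory_gib(bus_width_bits: int, module_density_gib: int) -> int:
    modules = bus_width_bits // 32            # 32-bit-wide GDDR6 modules
    return modules * module_density_gib // 8  # 8 Gib per GiB

print(max_memory_gib(128, 8))    # RX 5500: 4 modules x 8 Gib  = 4 GiB
print(max_memory_gib(128, 16))   # RX 5500: 4 modules x 16 Gib = 8 GiB
print(max_memory_gib(256, 8))    # 5700 XT: 8 modules x 8 Gib  = 8 GiB
```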
 
The way we compare video cards does a disservice to AMD cards. They are designed for smoothness. All the technology AMD has (Radeon Anti-Lag, AMD Chill) is designed to limit frame rates but maintain smooth gameplay and quick response. Simply running these cards full out and seeing what you get undervalues those technologies and is not indicative of what a player would experience should they use all the tech provided.

Change the way we look at cards: measure the relevant metrics (power, heat), but use both companies' tech to limit and throttle gameplay, and have a set gameplay scenario that can be reproduced. Pay attention to the 1% and 0.1% lows, as those indicate smoothness, which is way more important than pumping out more frames than your eyes can see, let alone what your monitor can show.
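For reference, here's a rough sketch of how those lows fall out of a frame-time capture, using hypothetical data:

```python
# One common definition of "1% low": the average FPS of the slowest 1% of
# frames in a capture. frame_times is hypothetical data; real captures come
# from tools such as OCAT or CapFrameX.
def low_fps(frame_times_ms, percent):
    worst = sorted(frame_times_ms, reverse=True)   # slowest frames first
    n = max(1, int(len(worst) * percent / 100))
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

frame_times = [16.7] * 980 + [33.3] * 20  # mostly 60 FPS, a few stutters
print(f"1%   low: {low_fps(frame_times, 1):.1f} FPS")    # 30.0 FPS
print(f"0.1% low: {low_fps(frame_times, 0.1):.1f} FPS")  # 30.0 FPS
```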

Look at console gaming, that is what is happening... time for reviewers to respond.
 
The way we compare video cards does a disservice to AMD cards. They are designed for smoothness. [...]
This is such a red team copout. "oh, we can't compete in frametimes, let's, uhhhh, manipulate images! THAT'LL work!"

No it won't. You know what produces universal smoothness? High frame rates and low frame times. You know what lowers 1% and 0.1% frame times, regardless of driver, special feature, or new arch? More powerful GPUs. Looking at console gaming... well, PC gaming has been exploding the last few years, and console gamers lap up games like Just Cause 3 that run at 15 FPS on Xbox, powered by AMD. Whoops.

If AMD can't compete, they will not sell. Image manipulation tools won't save AMD, just like Mantle, DX12, TrueAudio, etc. We've been down this road MANY times before. When a card comes out that performs well and is competitively priced (5700 XT), it sells well. When a card doesn't perform competitively and is not competitively priced (Vega 64), then it won't sell. Simple as that.
 
1650 will still sell better.......
Given the 1650 has been out for, what, over a year now? I'm guessing the market is somewhat saturated already.

AMD's website says that the maximum supported memory size is 4 GiB: [...]

But what concerns me far more about this product is that the TDP is 150W; the GeForce 1650 is half that!
Also note, the RX 5500 has an 8-pin connector. Why does a 110 watt or 150 watt card need an 8-pin connector on it? Smells fishy.
 
1650 will still sell better.......
Looking at the 1080p numbers for the 1650 over at TechPowerUp, a 37% increase over the stock 1650 puts this firmly between the RX 580 and RX 590. This new card would be nipping at the heels of the 1660 6GB at 1650 pricing. If AMD lives up to their figure and prices it correctly, it should sell well enough.
 
The way we compare video cards does a disservice to AMD cards. They are designed for smoothness. [...]
LMAO - the post
 
"AMD aims to dominate the 1080p gaming scene"
Nope, haven't heard this before.

"AMD aims to dominate the 1080p gaming scene"
RX 550 - 8-pin + 500W recommended PSU
GTX 1650 - no supp pwr + 300W recommended PSU

Um.... something wrong here.
 
Also note, the RX 5500 has an 8-pin connector. Why does a 110 watt or 150 watt card need an 8-pin connector on it? Smells fishy.
A standard PCIe 6-pin connector is rated for just over 3A per 12V pin, so it provides up to 75W:

[Image: PCIe connector pinout]


The 8-pin connector actually permits higher current draw on the 12V lines, a little over 4A, which is why its three 12V lines can provide up to 150W.

So the standard combination of slot (up to 75W itself) + 6-pin connector would be bang on the 150W limit for the 5500. Better to have scope to draw more power for OEM overclocked models. There is scope for a 24 CU version of the Navi 14 chip (2 CUs are disabled in the 5500), so an 8-pin connector by default easily permits OEM 5500 XT or 5600-esque versions (which will require more than 150W).
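As a quick sanity check on those budgets, using the standard PCIe figures:

```python
# PCIe power budgets per the spec: slot up to 75 W, 6-pin connector up to
# 75 W, 8-pin connector up to 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

print(SLOT_W + SIX_PIN_W)    # 150 W: exactly the 5500's rated TDP
print(SLOT_W + EIGHT_PIN_W)  # 225 W: headroom for OEM overclocked variants
```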
 
AMD's website says that the maximum supported memory size is 4 GiB: [...]
Interesting. This is not what their promotional materials say.


Here, it's clearly labeled as "up to" 8GB. But it references the RX 5500 "series," so perhaps more are on the way. I'll make some adjustments to the article for clarity.
 
I wonder if it's just in reference to the possibility of OEM versions using Samsung 16 Gib modules. It is odd that they don't say 8 on the website though; it suggests that OEM versions will only differ with regard to game frequency.

The fact that they have this image...

[Image: radeon-rx5700-memory]


...doesn't help. There are 8 memory chips there, which would mean a 256-bit memory bus! AMD could have clarified matters better, in my opinion.
 
I wonder if it's just in reference to the possibility of OEM versions using Samsung 16 Gib modules. [...]
I'm just going to email them and see what they say.
 
Uh oh... I think Jensen is going to get jebaited again.

He's going to announce the Super, and Dr. Su will announce 5500 series pricing.
 
"AMD aims to dominate the 1080p gaming scene"
Nope, haven't heard this before.

"AMD aims to dominate the 1080p gaming scene"
RX 550 - 8-pin + 500W recommended PSU
GTX 1650 - no supp pwr + 300W recommended PSU

Um.... something wrong here.

There sure is.

I will assume the RX 550 part of this post is a typo. I have built quite a few inexpensive eSports systems with the 550, and it has no 6 or 8-pin supplemental power requirements. It also calls for a 400 watt PSU.
 
This is such a red team copout. "oh, we can't compete in frametimes, let's, uhhhh, manipulate images! THAT'LL work!" [...]
Vega actually performs really well, and those of us who bought Vega when it came out are reaping the rewards now.
My card went from "a bit behind a 1070" to "comfortably ahead of a 1080" in most newer titles.
Additionally, with things like BIOS flashing, undervolting, OC, etc., my card really exceeds a 1080, and my framerates are sometimes even beating a 2070, and I only have a Vega 56.

So, like usual, the marketing will push people to buy Team Greed's cards, and then they will have to upgrade them in another year or so when they can't hang with the new game tech, like what always happens.
AMD cards are usually built with the future in mind, bringing new features to the table (like DX12, Mantle, TrueAudio, etc.) that later get used (in some form) in newer games, thus resulting in better performance over time. An Nvidia card, on the other hand, gets its best performance the day it comes out and declines basically continuously after launch until the new model comes out, and then the cycle starts again.
 
This card would be more appealing if it had a 256-bit bus and 6-8GB of GDDR6, but we'll see what real-world tests show later :).
 
Vega actually performs really well, and those of us who bought Vega when it came out are reaping the rewards now. [...]

Yeah, AMD has always had some wonder weapon that will work in the distant future, if ever, to make up for low performance during its time on the market. They really did surprise with Ryzen.
 