AMD Raven Ridge 8GB vs. 16GB Reserved Memory Benchmark & Explanation

Excellent write-up, Steve. This will be very handy for people with only 8 GB of system RAM.
When using an APU, you have as much VRAM as you do system RAM. They are the same speed, plain and simple!

It is important to note that BF1 used only 7 GB combined at 1080p. The only instance where 8 GB may suffer is with COD: WW2. For esports games, 8 GB will be plenty.

I concluded this using your other material, but posted on a different site:

https://hardforum.com/threads/reviews-for-amds-apu-ryzen-2400g-are-in.1954364/page-3#post-1043491136

Please let me know if everything was properly sourced / linked!
 
Excellent write-up, Steve. This will be very handy for people with only 8 GB of system RAM.
When using an APU, you have as much VRAM as you do system RAM. They are the same speed, plain and simple!

It is important to note that BF1 used only 7 GB combined at 1080p. The only instance where 8 GB may suffer is with COD: WW2. For esports games, 8 GB will be plenty.

I concluded this using your other material, but posted on a different site:

https://hardforum.com/threads/reviews-for-amds-apu-ryzen-2400g-are-in.1954364/page-3#post-1043491136

Please let me know if everything was properly sourced / linked!

Thanks mate. Note VRAM usage is higher in the other test because we aren't using low settings. The GTX 1060 is much, much more powerful than the Vega 8 GPU in the 2200G. That said, with ultra quality textures and everything else set to low, we still got the same results with the Ryzen 3 2200G; that is to say, allocation size made no difference.
 
Good review Steve. The biggest problem I can see is that tested titles like Battlefield are actually fairly well optimised, whilst other untested "heavier weight" indies using UE4, Unity, etc., can use more RAM than some AAA titles. E.g. Obduction, Everybody's Gone To The Rapture, Quern: Undying Thoughts, Maize, Subnautica, etc. Titles like that have peaked at 5-7GB process usage after multiple hours of playtime (not just a 10-minute benchmark), and on top of that you've got system RAM usage for "iGPU VRAM", the OS, background tasks, etc. Deus Ex: Mankind Divided crashes a lot more in Prague City on 8GB vs 16GB. Dishonored 2 hit 7GB RAM usage. Upgrading from 8GB to 16GB RAM cured some crashes for me even with a 4GB dGPU, so there are already untested titles that will struggle with only 8GB - 2GB = 6GB RAM, let alone future titles.

Current RAM pricing is unfortunate for those wanting a "budget" build with these chips, because if we returned to the sane RAM prices we had not that long ago, everyone would universally be recommending a 16GB minimum for these chips for future games.
 
What about multitasking? Surely there may be a latency issue.

For CPU work, I would think there would be fewer issues than with the 1300X/1500X CPUs, since this uses just one CCX. Some applications do take a small penalty because of the lower cache.

But yeah, I can't see anyone multitasking during gaming.
 
What about multitasking? Surely there may be a latency issue.

Not that I can see. Most gamers don't multi-task when gaming though.
I do alt-tab a lot to look at things in the browser (I may even have a stream/YouTube video open while gaming), but I doubt that will change things from what you've shown us in your tests.

On a more extreme side, I play EVE Online from time to time, and having more than one client open is not uncommon (playing on multiple accounts at the same time). These APUs seem perfect for such an MMO.
 
Good review Steve. The biggest problem I can see is that tested titles like Battlefield are actually fairly well optimised, whilst other untested "heavier weight" indies using UE4, Unity, etc., can use more RAM than some AAA titles. E.g. Obduction, Everybody's Gone To The Rapture, Quern: Undying Thoughts, Maize, Subnautica, etc. Titles like that have peaked at 5-7GB process usage after multiple hours of playtime (not just a 10-minute benchmark), and on top of that you've got system RAM usage for "iGPU VRAM", the OS, background tasks, etc. Deus Ex: Mankind Divided crashes a lot more in Prague City on 8GB vs 16GB. Dishonored 2 hit 7GB RAM usage. Upgrading from 8GB to 16GB RAM cured some crashes for me even with a 4GB dGPU, so there are already untested titles that will struggle with only 8GB - 2GB = 6GB RAM, let alone future titles.

Current RAM pricing is unfortunate for those wanting a "budget" build with these chips, because if we returned to the sane RAM prices we had not that long ago, everyone would universally be recommending a 16GB minimum for these chips for future games.

What GPU and graphics settings were these titles using when you saw this high RAM usage?
Let's face it, this APU is still only good for about 1080p low in most games. At that resolution/quality, 8 GB looks to be sufficient. But yeah, more is always better, to an extent. You just have to choose where to cut corners when making a budget build. Right now, it is on RAM and discrete graphics :p
 
What GPU and graphics settings were these titles using when you saw this high RAM usage?
IIRC, it was 1080p Medium (not that far off 1080p low). The card was a 1050 Ti, so it definitely wasn't 1080p Ultra. Obviously 720p/low may use less RAM, but personally I don't like dropping down that far due to the ugly scaling that comes with non-native res. As for "people don't multi-task when playing games", that depends on the app. Most sane people won't be trying to render video in the background or leave Photoshop open with 8GB RAM, but for many games it's quite common to have a web browser open in the background for a walkthrough/game wiki, etc.
 
I always found it weird that AMD integrated graphics let you pick the amount of VRAM from the BIOS. I haven't seen an Intel chip allow you to do this since, like, the Extreme Graphics days. You can do some stuff in Windows to set the amount of VRAM for an Intel GPU to the maximum dynamic amount, but it's not in an always-reserved state. Intel has a really weird max number as well: 1792MB.
 
Great testing, thanks! Extremely useful to Raven Ridgers such as myself. That said, you're a bit mistaken on one issue here:

...some integrated GPUs like the Vega M graphics in upcoming Intel Kaby Lake-G processors...

Vega M isn't an integrated GPU; it's very much a dGPU using PCIe lanes. Broadwell DT might have been a better example, but even that isn't dedicated VRAM for the iGPU (Intel stressed this); rather, it's a cache.

Interestingly, a user at r/realAMD has extensively tested Vega's HBCC and shown that even dedicated VRAM (on dGPUs) can end up stressing system memory buses.

https://www.reddit.com/r/realAMD/comments/7x5bao/exploring_vega_hbcc_and_its_effect_on_the_system/

Here's hoping we see AMD's Fenghuang APU with HBM and 3D XPoint DIMMs soon - sure would be nice to ditch DDR4 altogether!
 
I always found it weird that AMD integrated graphics let you pick the amount of VRAM from the BIOS. I haven't seen an Intel chip allow you to do this since, like, the Extreme Graphics days. You can do some stuff in Windows to set the amount of VRAM for an Intel GPU to the maximum dynamic amount, but it's not in an always-reserved state. Intel has a really weird max number as well: 1792MB.
It's still there for Intel: it starts at 32MB and goes all the way to 256MB on the latest Cannon Lake i7. We recently had an HP ProBook 470 G5 around the office.
 
It's still there for Intel: it starts at 32MB and goes all the way to 256MB on the latest Cannon Lake i7. We recently had an HP ProBook 470 G5 around the office.
Really? I haven't seen it in any of the consumer stuff I worked on; I checked probably 50 different models from all sorts of brands, starting back on Sandy Bridge when I worked retail. Seems weird, since Intel is so proud of their dynamic memory tech (well, they were when it launched with the X4500). On the Intel HD systems I have messed around with, I never felt the need to touch it though; the dynamic allocation seems to work pretty well.
 
Great testing, thanks! Extremely useful to Raven Ridgers such as myself. That said, you're a bit mistaken on one issue here:

...some integrated GPUs like the Vega M graphics in upcoming Intel Kaby Lake-G processors...

Vega M isn't an integrated GPU; it's very much a dGPU using PCIe lanes. Broadwell DT might have been a better example, but even that isn't dedicated VRAM for the iGPU (Intel stressed this); rather, it's a cache.

Interestingly, a user at r/realAMD has extensively tested Vega's HBCC and shown that even dedicated VRAM (on dGPUs) can end up stressing system memory buses.

https://www.reddit.com/r/realAMD/comments/7x5bao/exploring_vega_hbcc_and_its_effect_on_the_system/

Here's hoping we see AMD's Fenghuang APU with HBM and 3D XPoint DIMMs soon - sure would be nice to ditch DDR4 altogether!

It's integrated graphics, mate; integrated doesn't mean same die. It's integrated into the same package!

I do alt-tab a lot to look at things in the browser (I may even have a stream/YouTube video open while gaming), but I doubt that will change things from what you've shown us in your tests.

On a more extreme side, I play EVE Online from time to time, and having more than one client open is not uncommon (playing on multiple accounts at the same time). These APUs seem perfect for such an MMO.

Having more system memory available should actually help in that example.
 
So the bottom line is, Ryzen APUs will not be a big leap over previous generations when it comes to gaming. Only when fast graphics memory can somehow be integrated into the chip housing, or a dedicated graphics memory channel added to the motherboard, will we see any appreciable gaming benchmarks out of APUs.
 
I've been loving the elegance of UMA since AMD's first APUs - all data in one memory, without pushing it around through PCIe. If only system memory were faster...

I wonder if it will lead to some kind of Spectre-like vulnerabilities in the future..?
 
Do you remember that you had to have 4MB of VRAM to set the display to 1024x768 in 32-bit color in Windows 95? ;)

Well, if for some reason you want to display triple-buffered 4K 12bpc RGBA (48MB per frame), set the frame buffer to at least 256MB. If I'm getting it all right :p
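The arithmetic behind both examples checks out. A quick sketch (Python), assuming 4 channels per pixel (RGBA) and an uncompressed frame buffer:

```python
def frame_bytes(width, height, bits_per_channel, channels=4):
    """Size of one uncompressed frame buffer in bytes."""
    return width * height * channels * bits_per_channel // 8

# Windows 95 example: 1024x768 at 32-bit color (8 bits x 4 channels)
legacy = frame_bytes(1024, 768, 8)
print(legacy / 2**20)  # -> 3.0 MiB, hence needing a 4 MB card

# 4K, 12 bits per channel, RGBA, triple-buffered
frame = frame_bytes(3840, 2160, 12)   # 49,766,400 bytes, ~47.5 MiB
total = 3 * frame                     # ~142 MiB across three buffers
print(round(frame / 2**20, 1), round(total / 2**20, 1))  # -> 47.5 142.4
```

So three 4K 12bpc buffers need ~142MiB, and 256MB is the smallest reservation step above that.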
 
"So in the case of the RX 550, it has a bandwidth of 112GB/s when accessing data locally using the VRAM, but when accessing data from system memory it's limited to 16GB/s (PCIe 3.0 x16 limit), which is to say that it takes at least seven times longer to process the same data."

If the RX 550 can process data 7x faster than the integrated Vega graphics on the 2200G/2400G, why is there such a negligible gain in FPS when pairing it with these chips? That's according to your own review charts:

https://www.techspot.com/review/1574-amd-ryzen-5-2400g-and-ryzen-3-2200g/page5.html

I mean, I understand what you are saying, that the performance levels are about equal. But how?

The review estimates that the Vega 8 on the 2200G is "equivalent" to a discrete RX 550 graphics card. Okay, let's assume that. How does it make up for all that lost bandwidth? If the iGPU on the APU has to use system memory at a rate that is 7x slower than the memory on a dedicated graphics card, where is the magic happening that creates this (mostly) equal performance between the two different approaches in the end?

Is this all about the need to move textures in and out of memory?
 
"So in the case of the RX 550, it has a bandwidth of 112GB/s when accessing data locally using the VRAM, but when accessing data from system memory it's limited to 16GB/s (PCIe 3.0 x16 limit), which is to say that it takes at least seven times longer to process the same data."

If the RX 550 can process data 7x faster than the integrated Vega graphics on the 2200G/2400G, why is there such a negligible gain in FPS when pairing it with these chips? That's according to your own review charts:

https://www.techspot.com/review/1574-amd-ryzen-5-2400g-and-ryzen-3-2200g/page5.html

The review estimates that the Vega 8 on the 2200G is "equivalent" to a discrete RX 550 graphics card. Okay, let's assume that. How does it make up for all that lost bandwidth? If the iGPU on the APU has to use system memory at a rate that is 7x slower than the memory on a dedicated graphics card, where is the magic happening that creates this (mostly) equal performance between the two different approaches in the end?

Is this all about the need to move textures in and out of memory?

That's not what I'm saying here. When the RX 550 is forced to use system memory (RAM), it does so through the PCIe bus, which is the bottleneck, not the system memory. The 16GB/s limit doesn't impact the Vega 8 and 11 GPUs as they use the Infinity Fabric.
 
"So in the case of the RX 550, it has a bandwidth of 112GB/s when accessing data locally using the VRAM, but when accessing data from system memory it's limited to 16GB/s (PCIe 3.0 x16 limit), which is to say that it takes at least seven times longer to process the same data."

If the RX 550 can process data 7x faster than the integrated Vega graphics on the 2200G/2400G, why is there such a negligible gain in FPS when pairing it with these chips? That's according to your own review charts:

https://www.techspot.com/review/1574-amd-ryzen-5-2400g-and-ryzen-3-2200g/page5.html

The review estimates that the Vega 8 on the 2200G is "equivalent" to a discrete RX 550 graphics card. Okay, let's assume that. How does it make up for all that lost bandwidth? If the iGPU on the APU has to use system memory at a rate that is 7x slower than the memory on a dedicated graphics card, where is the magic happening that creates this (mostly) equal performance between the two different approaches in the end?

Is this all about the need to move textures in and out of memory?

That's not what I'm saying here. When the RX 550 is forced to use system memory (RAM), it does so through the PCIe bus, which is the bottleneck, not the system memory. The 16GB/s limit doesn't impact the Vega 8 and 11 GPUs as they use the Infinity Fabric.

"The Raven Ridge APUs for example are limited to a memory bandwidth of around 35GB/s for system memory when using DDR4-3200."

Okay, I went back and re-read the relevant part of your post. So, the APU has about 1/3 the memory bandwidth of a discrete RX 550 graphics card. So, I still find myself puzzled. How is the equalization between the two being achieved? The APU uses the Infinity Fabric as a faster alternative to system RAM? Maybe the system memory accesses required by the APU are much less frequent than I'm imagining. I was thinking it was "all the time", since an iGPU has no memory of its own. But that fact alone doesn't require that it be used 24/7, I guess.
 
"The Raven Ridge APUs for example are limited to a memory bandwidth of around 35GB/s for system memory when using DDR4-3200."

Okay, I went back and re-read the relevant part of your post. So, the APU has about 1/3 the memory bandwidth of a discrete RX 550 graphics card. So, I still find myself puzzled. How is the equalization between the two being achieved? The APU uses the Infinity Fabric as a faster alternative to system RAM? Maybe the system memory accesses required by the APU are much less frequent than I'm imagining. I was thinking it was "all the time", since an iGPU has no memory of its own. But that fact alone doesn't require that it be used 24/7, I guess.

Bandwidth is just part of the equation, and the RX 550 doesn't necessarily require the bandwidth it has. The read/write performance of the RX 550 in reality is more like 88 GB/s, but that's still about 2.5x greater. It's a bit like low-end graphics cards that have big frame buffers: that's nice and all, but for the most part they don't really need them as they can't take full advantage.

Keep in mind the GT 1030 often beats the RX 550 and it only has a theoretical peak of 48 GB/s.
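The bandwidth figures traded in this thread all follow from the usual transfer-rate times bus-width arithmetic. A rough sketch (Python), using the published specs; real-world throughput is lower, as Steve's 88 GB/s measurement for the RX 550 shows:

```python
def peak_bandwidth_gbs(transfer_rate_gtps, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers per second x bytes per transfer."""
    return transfer_rate_gtps * bus_width_bits / 8

print(peak_bandwidth_gbs(7.0, 128))     # RX 550: 7 Gbps GDDR5, 128-bit bus -> 112.0
print(peak_bandwidth_gbs(6.0, 64))      # GT 1030: 6 Gbps GDDR5, 64-bit bus -> 48.0
print(peak_bandwidth_gbs(3.2, 2 * 64))  # dual-channel DDR4-3200 -> 51.2 theoretical

# PCIe 3.0 carries ~0.985 GB/s per lane after 128b/130b encoding overhead
print(round(0.985 * 16, 1))             # x16 slot -> 15.8, the ~16 GB/s quoted above
```

Note the dual-channel DDR4-3200 theoretical peak of 51.2 GB/s lines up with the ~35 GB/s effective figure from the article once real-world efficiency is factored in.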
 
So the bottom line is, Ryzen APUs will not be a big leap over previous generations when it comes to gaming. Only when fast graphics memory can somehow be integrated into the chip housing, or a dedicated graphics memory channel added to the motherboard, will we see any appreciable gaming benchmarks out of APUs.

You would then force people to spend a lot of extra money on a motherboard for a feature that may not be used later. This defeats the purpose of an APU - an inexpensive package with decent graphics. This works for laptops, but not desktops.

It is rather remarkable that the Vega 11 was able to match the RX 550 despite the latter having 3x the theoretical bandwidth (closer to 2.5x in practice, as Steve showed in his testing). Perhaps the lower latency helps.

The next step would be a 'Vega 32' in a TR4 board with 70 GB/s quad-channel memory. That would be 1/3 the theoretical bandwidth of an RX 570/580, and again probably a better ratio than that in the real world.
 
Bandwidth is just part of the equation, and the RX 550 doesn't necessarily require the bandwidth it has. The read/write performance of the RX 550 in reality is more like 88 GB/s, but that's still about 2.5x greater. It's a bit like low-end graphics cards that have big frame buffers: that's nice and all, but for the most part they don't really need them as they can't take full advantage.

Keep in mind the GT 1030 often beats the RX 550 and it only has a theoretical peak of 48 GB/s.

That's some good information. Thank You!
 
So the bottom line is, Ryzen APUs will not be a big leap over previous generations when it comes to gaming. Only when fast graphics memory can somehow be integrated into the chip housing, or a dedicated graphics memory channel added to the motherboard, will we see any appreciable gaming benchmarks out of APUs.

Um, first this isn't the article that draws those conclusions. Second, Ryzen APUs are a big leap over previous APU generations and rival Intel chips.
 