AMD Ryzen Gaming Performance In-Depth: 16 Games Played at 1080p & 1440p

For 1: Quad-channel would require more pins for relatively little change in performance. Sure, there are a FEW cases where RAM is critical for performance, but if the gain isn't enough to justify raising the price by $30-$50, then sales are lost over price for almost no benefit. BIOS updates are really the key for RAM support; the CPU itself can handle even ECC, the BIOS on most boards just doesn't like it.

For 2: I wonder how much of this also goes back to the BIOS, the RAM being used, etc. There might be something on the CPU itself that causes problems with clocking beyond 4.1 GHz, but I suspect it is motherboard related. If it's the motherboard, BIOS updates may do the trick.

For 3: Unlike previous chips, Ryzen really is a system on a chip: most of the PCI Express lanes are handled by the CPU itself, and the chipset just handles the connections. An added PLX chip on motherboards would add lanes, and we may see that in future boards once this initial batch has been around for six months. I would like to see an option to disable some features and use the freed-up PCI Express lanes for other things: disable the M.2 slot and get some more lanes there, disable those x1 slots for more lanes, and so on.

My problem is not even the gaming performance, because that could be improved with patches.
I mainly have 3 problems with Ryzen (and I'm an AMD fan):

1. Why dual-channel RAM? From what I've seen, motherboards can partially overclock the RAM, even to 3600 MHz, but only with 2 slots populated; if I use 4 slots, the speed falls drastically. I hope that's possible to change with a BIOS update.

2. Overclocking and XFR. So much was written about XFR, and yet even if I throw a Predator 240 or Predator 360 at it I only get +100 MHz from XFR, which is very, very disappointing. The same goes for overclocking: the 6900K can overclock to 4.4 GHz, while the Ryzen 1800X is done at 4.1 GHz, right at the edge of the XFR range.

3. 20 PCIe lanes. Why build the X370 chipset for SLI setups with only 20 PCIe lanes? SLI setups are mostly built with high-end GPUs, which need x16 for the best performance. 20 lanes are definitely not enough for a good modern processor; there should be at least 40: 2x16 for SLI setups and 2x4 for two M.2 or U.2 slots.
 

The Ryzen CPU has 2*SATA Express. Each SATA Express can be either 2*SATA or 2*PCI Express 3.0. The X370 chipset also has 2*SATA Express. So basically it's 4*SATA, 2*SATA + 2*PCI Express 3.0, or 4*PCI Express 3.0, for both the CPU and the chipset.
 
What I don't understand is why the real-life gaming performance is so different from synthetic benchmarks like (let's say) 3DMark. In 3DMark the AMD parts perform quite competitively, but in actual gaming they fall short. Yes, I know there's the user input factor in actual gaming, but shouldn't that affect both AMD and Intel the same way?
The problem seems to come from how Windows assigns threads to the cores, and other related issues. This isn't a processor issue as much as needing Windows to understand how to divide up the threads, and THAT is something that can be fixed without needing new parts.
 

My theory:

In synthetic benchmarks, you have full CPU loading. As a result, you don't have to worry about scheduling effects; every core is going to 100% anyway, so it really doesn't matter how individual cores get loaded.

In games, you have two or three threads that do ~80% of the total work. You are relying on the scheduler putting those threads on CPU cores that have all their resources free for use. If SMT is not handled properly, these threads can be interrupted by an SMT sibling thread taking some of the HW resources that the main thread requires.
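To illustrate the kind of contention described above: if the heavy threads end up pinned to separate physical cores, an SMT sibling can't steal their execution resources. Here is a minimal sketch using the Windows affinity API; the assumption that logical processor 0 belongs to a core whose SMT sibling is left for lighter threads is mine, not something the game or OS guarantees, and real code would query the topology first.

```c
/* Minimal sketch: pin the current (heavy) thread to one logical processor.
 * Assumes logical processor 0 and its SMT sibling map to the same physical
 * core, which is NOT guaranteed; real code should query the topology with
 * GetLogicalProcessorInformationEx() before choosing a mask. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Allow this thread to run only on logical processor 0, so another
     * pinned heavy thread doesn't share this core's execution resources. */
    DWORD_PTR old = SetThreadAffinityMask(GetCurrentThread(), 0x1);
    if (old == 0) {
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... run the latency-sensitive game work here ... */
    printf("Thread pinned to logical processor 0\n");
    return 0;
}
```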

When Intel brought back HTT with Core, they added a CPUID bit to indicate which CPUs were HTT capable, so the OS and developers could specifically code against it. That's why no Intel CPU since Core has needed an OS scheduling patch: the OS is already aware of how to handle HTT.

AMD has no functional equivalent. As far as the OS is concerned, Ryzen is a native 16 core CPU with no SMT. As a result, you have suboptimal thread scheduling. This is the same EXACT issue Bulldozer had at launch, and will likely be "fixed" via a patch to the OS scheduler.

In my opinion, it's sub-optimal to force MSFT to patch the OS every time the CPU architecture changes. The simple solution would be for AMD to re-use the CPUID bit Intel uses to advertise which CPUs have HTT to advertise which of its own CPUs have SMT. That way MSFT only has to patch the OS once, and that patch should cover every AMD CPU with SMT from that point forward.
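For reference, the capability bit being discussed is the HTT flag in bit 28 of EDX from CPUID leaf 1, alongside the logical-processor count in EBX. A minimal sketch of reading it, assuming GCC/Clang on x86 (illustrative only, not how Windows actually builds its scheduling topology):

```c
/* Minimal sketch: read the CPUID HTT flag (leaf 1, EDX bit 28) and the
 * logical processor count. Assumes GCC/Clang on an x86 CPU; a real
 * scheduler walks the full topology leaves, not just this bit. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                       /* CPUID leaf 1 not supported */

    int htt     = (edx >> 28) & 1;      /* HTT/SMT capable flag       */
    int logical = (ebx >> 16) & 0xff;   /* logical CPUs per package   */

    printf("HTT/SMT flag: %d, logical processors per package: %d\n",
           htt, logical);
    return 0;
}
```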
 
Even though I always build a mid-range PC without bells and whistles (one mid-range GPU, no overclocking, 2 sticks of memory at standard speed, etc.), I'd still go for the i5 7600 if I were in the market today. Something sticks in my throat the few times I think about jumping to the AMD platform.

Same for the GPU. I only went with AMD once (a 7950) and for some reason didn't like it. Having said that, this time I'd like the 6-core CPU if the reviews told me it had better IPC with lower power requirements. In that case, I wouldn't say "no" to a system with a 1600/1700 CPU and an RX 480.

But the fact that the CPU and motherboards are so new doesn't make it attractive. Motherboards are going to need many BIOS updates with better microcode and fewer bugs. And when programs and games start utilizing the available cores more efficiently, that's another story.
 
Ok ok... I'm seeing a lot of arguments over whether or not Ryzen has been optimized for in games. Let's put some things to rest. I'll be using Computerbase.de's BF1 data:
https://www.computerbase.de/2017-03.../#diagramm-battlefield-1-dx11-multiplayer-fps

Let's analyze the data, shall we...

720p framerate BF1 DX11/DX12
7700k (4/8) 116.4/127.6; 11.2 more (+9.6%)
6850k (6/12) 120.9/122; 1.1 more (+0.9%)
6900k (8/16) 143.8/122.4; 21.4 less (-14.9%)
6950x (10/20) 129.6/120.9; 8.7 less (-6.7%)

1080p framerate BF1 DX11/DX12
7700k 116.2/120.4; 4.2 more (+3.6%)
6850k 120.5/121.5; 1 more (+0.8%)
6900k 136.5/117.3; 19.2 less (-14.1%)
6950x 127.1/109.7; 17.4 less (-13.7%)

I see nothing really inconsistent in the above... The 4C/8T CPU gains performance when jumping to DX12, indicating it is likely already bottlenecking in DX11. The 6C/12T CPU is practically the same between the APIs... For anything with more cores/threads, performance decreases... The exact reason for the decrease? Don't know. Likely too many threads are causing higher latency or some issues with fences, decreasing framerate... In any case, the loss is seen at both resolutions. Why it's bigger for the 6950x at 1080p compared to 720p, I don't know. Likely an optimization issue. Aside from this, at 720p the gain is greater for the 7700k than at 1080p, as expected... But then, look at the Ryzen benchmarks...

Ryzen (8/16):
720p framerate BF1 DX11/DX12
122.4/90.7 31.7 less (-25.9%)
1080p framerate BF1 DX11/DX12
121.8/89.2 32.6 less (-26.8%)

Ryzen loses at least 26%-ish simply by jumping from DX11 to DX12. You cannot honestly tell me that this does not indicate a lack of optimization, especially considering that at 1080p, where the impact should be smaller since we're less CPU bound, the loss is actually slightly greater than at 720p... None of the DX12 benchmarks we have seen are at all representative of CPU performance for Ryzen, because DX12 programming requires more handholding of the hardware than DX11. That is a fact. DX11 is likely a mixed bag depending on the game, but for BF1, DX11 works well (enough) for Ryzen.
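For clarity, the percentages quoted throughout this post are plain percent changes relative to the DX11 result; a quick sanity check of the Ryzen deltas above, using only the values already quoted from the Computerbase table:

```c
/* Quick check of the DX11 -> DX12 deltas quoted above (Ryzen, BF1). */
#include <stdio.h>

static double pct_change(double dx11, double dx12)
{
    return (dx12 - dx11) / dx11 * 100.0;   /* change relative to DX11 */
}

int main(void)
{
    printf("720p : %+.1f%%\n", pct_change(122.4, 90.7));  /* about -25.9 */
    printf("1080p: %+.1f%%\n", pct_change(121.8, 89.2));  /* about -26.8 */
    return 0;
}
```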

Now... What is the following indicating...?

From 720p to 1080p DX11
7700k 116.4/116.2; 0.2 less (-0.2%)
6850k 120.9/120.5; 0.4 less (-0.3%)
6900k 143.8/136.5; 7.3 less (-5.1%)
6950x 129.6/127.1; 2.5 less (-1.9%)
Ryzen 122.4/121.8; 0.6 less (-0.5%)

It indicates that under DX11, all except the 6900k and the 6950x are already CPU limited at 1080p, considering that a lower resolution does not increase the framerate significantly. Where is the limitation? It can't be the single-threaded performance of the CPUs, since the 7700k has the highest of them all, yet has the lowest framerate of them all. You tell me. What is causing it?

And here...;

From 720p to 1080p DX12
7700k 127.6/120.4; 7.2 less (-5.6%)
6850k 122/121.5; 0.5 less (-0.4%)
6900k 122.4/117.3; 5.1 less (-4.2%)
6950x 120.9/109.7; 11.2 less (-9.3%)
Ryzen 90.7/89.2; 1.5 less (-1.7%)

Under DX12, the 7700k is no longer the bottleneck at 1080p. Same CPU, same game, same OS, different API. And now the 7700k under DX12 is faster than Ryzen under DX11 at 720p, but at 1080p Ryzen under DX11 is suddenly faster than the 7700k under DX12. What gives? Care to explain?

Has it become clearer now why using low settings & low resolution is no longer a good indication of CPU performance? There are too many other variables to consider right now. It is no longer as simple as it was in the past with one or two cores.

And obviously the platform needs some maturing from all parties, like all completely new platforms that just hit the market. Stop judging prematurely.
 
Those who test CPUs and GPUs using video game framerates as a benchmark are the worst kind of morons. Testing very specific functionality requires the elimination of variables that can influence results. So, if you're using software to test hardware, you want the software to be as light as possible. A video game is the exact opposite: you don't know whether you're testing the interaction of the software and hardware rather than just the hardware. That's why we have synthetic benchmarks, so we can test hardware independently of other, non-related variables.
 
AMD didn't really overhype the performance of the chips prior to launch. They promised a 40% performance increase over Excavator, and managed to deliver at least that.

For me, I'll be upgrading from an FX-8350, so I can expect almost double the performance in gaming compared to what I have now, just from upgrading the CPU, mobo and RAM. Move to the heavily threaded work that I also do, like video encoding, and it's even more bang.

I only hope that some software patches come out for Windows and the games to fix the SMT issue, as well as to better optimize for the platform.
 
This is actually a poor comparison and poor analysis from the writer of this article. In years past, when AMD had only 20 percent of the CPU market, who do you think game developers were going to cater to when it came to CPU optimization? Why, Intel of course. That is one reason why 1080p shows Intel processors doing better. Game developers have had no reason to optimize games for AMD processors until Ryzen, so it will be months before games begin to be optimized for this new CPU. If you game at 2K and 4K (4K gaming is sadly lacking in this article) you'll see that AMD Ryzen does very well, especially the 1800X, for far less money.
 
This is actually a poor comparison and poor analysis from the writer of this article. In years past, when AMD had only 20 percent of the CPU market, who do you think game developers were going to cater to when it came to CPU optimization? Why, Intel of course. That is one reason why 1080p shows Intel processors doing better. Game developers have had no reason to optimize games for AMD processors until Ryzen, so it will be months before games begin to be optimized for this new CPU.
Months? How about years? Optimization can occur, but games designed with Ryzen in mind would be in the early stages of development. Furthermore, it's going to be a while before Ryzen makes appreciable market-share gains (which was your 20% assertion). That's going to be more than months to wait.
If you game at 2K and 4K (4K gaming is sadly lacking in this article) you'll see that AMD Ryzen does very well, especially the 1800X, for far less money.
I guess you didn't read the article as it addressed why beating the "4k" drum is tantamount to sticking your head in the sand.
 
I hardly understand why AMD didn't share their CPU specs with game/software devs before the release (or whether there's another reason). For god's sake, at least they could have cooperated with MS on an SMT patch beforehand. I can see that Ryzen was designed with server/workstation workloads in mind, but they could at least have prepared some software support beforehand to get somewhat better reviews on day 1 and better PR.
 

It's not just SMT that needs to be handled by the scheduler. They also need to set up NUMA groups for the first 4 and the last 4 cores; that way the scheduler will keep processes on their respective node based on the memory they use. This could possibly improve performance even in programs that are heavily threaded. So regardless of turning SMT on or off, performance is being held back more than one might think due to the large delay caused by moving data between the two L3 caches.
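As a rough illustration of what keeping a workload on one half of the chip looks like in the absence of scheduler support, here is a minimal sketch that confines a process to the first eight logical processors; the assumption that these correspond to the first four cores and their SMT siblings (and thus one L3 cache) is mine and must be verified on the actual system.

```c
/* Minimal sketch: confine the current process to logical processors 0-7,
 * assumed here to be the first 4 cores plus SMT siblings sharing one L3,
 * so the working set stays in a single cache. The core -> logical
 * processor mapping is an assumption, not something Windows guarantees. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 0xFF;  /* logical processors 0-7 (assumed first group) */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Process confined to logical processors 0-7\n");
    /* ... launch or run the threaded workload here ... */
    return 0;
}
```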

As for Intel scheduler patches, I doubt what you said is actually 100% true; the effect of the patches just might not have been as dramatic for Intel processors. Also, understand that Intel hasn't made any large changes to their chips for a very long time, with adding HT to the Core processors and moving the memory controller on-chip being the largest I can think of recently. Although technically the Core series is still what I would consider a descendant of the Pentium 3.
 
Some will look at the graphs and say they are disappointed, but one must consider the price. For 30-90% less money you get the same performance as Intel. People who want the very best in gaming will still spend $800-1000 on Intel, while people who want the best bang for the buck should take Ryzen, no questions asked.
Gaming bang for the buck is still the 7700K.
 
After some re-review of the Intel offerings, I agree. However, let's see what Ryzen 5 will offer for gaming.
I don't think it will outshine Ryzen 7. Ryzen has its high points, but games are not among them. Games use so little of a CPU. The single thread is king; the worker threads can't help much. The heavy lifting is on a single core, and that core is still the same core.
 
Nobody expects Ryzen 5 to magically outperform its Intel counterparts, but it seems they will definitely be price competitive. Firstly, current information suggests all Ryzen 5 CPUs will be unlocked, so one could get a below-$200 Ryzen chip (4 or 6 cores with SMT) and overclock it to get acceptable gaming performance while outshining the Intel counterparts in everything else. In my opinion the 6-core one (and probably the lesser ones too, when overclocked) could give similar gaming performance to its bigger brothers. My point is, Intel has the upper hand in gaming, BUT AMD will offer more variants which are ALL unlocked and at pretty competitive price points. In fact Intel has only 2 ideal flavors for gamers in their current line-up, the 7600K and 7700K. And budget/mainstream gamers may lean towards the cheaper AMD parts, stealing from Intel's market. Just thinking out loud though, perhaps wishful thinking :)
 

It's just that Ryzen 7's SMT is its strong suit, not its single-thread performance. Fewer cores will hit its overall performance a lot. In Cinebench the multithreaded-to-single-thread ratio is about 10x, which is in line with Skylake's HT; that is how it can match Broadwell-E. Ryzen 5 will be all about price, not performance. The [H] review shows Sandy Bridge getting the better of Ryzen 7 in games. If Skylake-X comes with lower prices than Broadwell-E, the price-to-performance could be nearly as good.

The G4560 at $65 is going to be hard for AMD to compete with in gaming. Between it and the 7600k is where they need to be.
 
That's exactly where they are going to be: below $250. And some Ryzen 5 models may be price/performance competitive with the 7600K too; we will wait and see. They should have enough cores to be competitive, I presume. Intel has no unlocked competitor to the Ryzen 5 offerings other than the 7600K; there's only the 7350K at the lower segment, but that one still requires a Z270 mobo to overclock and has too few cores, which shows its limits in some newer titles. And I'm sure Ryzen 3 will be competitive too, though the G4560 looks to be a nice one.
 