Testing Ryzen 9 with SMT On vs. SMT Off

Maxiking

Posts: 93   +126
This isn't what's being experienced out in the wild, despite BIOS updates to deal with some early quirks. These chips aren't hitting the advertised boost speeds for most users, and there is little overclocking headroom. BUT this isn't to say that they're bad chips: they are unsurpassed in productivity and still perfectly good gaming CPUs, but Intel does win in that department.

It won't stop me from buying the 3950X when it arrives in September - upgrading from an overclocked 2700 (non-X) at 4.25 GHz, 1.425 V.
By "out in the wild" I assume you are referring to forum posts / Reddit. It would make sense that a majority of the people posting there are looking at how to get max boost clocks; after all, they came there for help. On the flip side of the coin, people who are getting max boost clocks aren't making posts for help, because they don't need help in the first place. You cannot judge the current status of a product based on support threads alone. If you did, nearly every product ever launched would be a faulty piece of junk, when in fact the number of people posting help threads for most products is small. It should also be noted that as sales increase, the number of RMA cases will increase proportionately. A better-selling product is bound to have more people that need help.

I've seen a few posts about not reaching max boost clocks but nothing I would call widespread. Given the number of sales they are making, based off the few retailers that actually release their numbers, I'd say this is well within expectations.
Gamer Nexus
Techspot
Techpowerup
Toms HW
Der8auer, the most famous CPU overclocker right now, who always talks about issues like X299 VRM cooling and so on, made a video about it
AnandTech

etc etc etc

"Not a widespread issue"

The denial is strong with this one.

No, the latest AGESA updates haven't fixed a thing.

Not to mention that the boost claim is fraudulent, because with Ryzen 3000 and its poor 7 nm node, they bin chiplets so every CPU has only one "strong" core that is able to reach the boost clock, unlike with Intel, where any core can reach the boost.

But yeah, 4.6 and 4.7 GHz look better on paper than those pisspoor 4.1 - 4.2 GHz boosts. :D:D:D


Let's imagine the public outrage had Intel done this.

Video from Der8auer about the issue

 

m3tavision

Posts: 502   +296
The problem is that you end up putting your 9900K on an obsolete board, with a sub-system that is not competitive.

Intel CPUs don't matter because they have no future upgrade path.
 

m3tavision

Posts: 502   +296
I find this obsession funny. Deal with it: they are still slower in gaming, clock for clock, than Intel's architecture released in 2015 and crippled by the security patches.

No patch, windows scheduler update or HT off will change that.
Bro, it is a simple IQ test: If you are building a brand new $3K+ gaming rig, which CPU would you buy...?

Nobody cares that Intel is 4% faster, except those who promote Intel and are paid sponsors for pro players. 95% of the populace will choose AMD over Intel for PC gaming. You know this, yet you try hard to spread FUD, trying to convince someone that your fake arguments matter.

Intel is for those who have been.
 

Shadowboxer

Posts: 569   +333
Once again, it’s clear as day in the benchmarks that Intel are still ahead in gaming. Those who are buying high refresh monitors and need the best CPU for that should buy Intel. In fact those who just use their machine to game and aren’t hugely budget restricted should buy Intel. I’m pretty sure everyone else should buy AMD. But I haven’t spent too much time looking at those benchmarks because I’m only really enthusiastic about gaming performance.
 

Evernessince

Posts: 4,985   +5,105
Gamer Nexus
Techspot
Techpowerup
Toms HW
Der8auer, the most famous CPU overclocker right now, who always talks about issues like X299 VRM cooling and so on, made a video about it
AnandTech

etc etc etc

"Not a widespread issue"

The denial is strong with this one.

No, the latest AGESA updates haven't fixed a thing.

Not to mention that the boost claim is fraudulent, because with Ryzen 3000 and its poor 7 nm node, they bin chiplets so every CPU has only one "strong" core that is able to reach the boost clock, unlike with Intel, where any core can reach the boost.

But yeah, 4.6 and 4.7 GHz look better on paper than those pisspoor 4.1 - 4.2 GHz boosts. :D:D:D


Let's imagine the public outrage had Intel done this.

Video from Der8auer about the issue

TechSpot, Techpowerup, Toms hardware, and AnandTech have not had an article on not getting boost clocks. The reason you don't provide links is because you are fabricating BS.

FYI the video you linked at the end of your comment is not about AMD boost clocks, it's about fanboys. He says that in the first few seconds of the video. You are just hurling :poop: at the wall to see what sticks.


Once again, it’s clear as day in the benchmarks that Intel are still ahead in gaming. Those who are buying high refresh monitors and need the best CPU for that should buy Intel. In fact those who just use their machine to game and aren’t hugely budget restricted should buy Intel. I’m pretty sure everyone else should buy AMD. But I haven’t spent too much time looking at those benchmarks because I’m only really enthusiastic about gaming performance.
This was an HT/SMT impact test, not an AMD vs. Intel one. Its purpose was not to draw conclusions on which to buy, as you've done off topic here.
 

TechCat

Posts: 30   +19
First, no one buys a $500 CPU to game at 1080p, so the results are nice to know but meaningless. The difference at real-world 1440p+ would be nil.
Second, for such a tiny FPS difference, using percentages exaggerates the chart graphics, compounded by averaging the percentage differences: an aggregate of an aggregate is incorrect, while an average FPS difference is correct. There are way more advantages to getting the R9 3900X over the end-of-the-line i9-9900K.
 

greyz

Posts: 22   +2
Would you mind testing World of Warcraft performance? Blizzard recently released a huge performance boosting patch for Ryzen CPUs and I can't find benchmarks for Ryzen-WoW from any tech site!
 

PetrolHead

Posts: 66   +35
The thread that is put on the SMT-created "core" will play second fiddle to the main thread. It will use resources when available.
Any source you could link? I was under the impression that there is no real "main" thread (or logical core, to be more exact). Instead, the two logical cores are viewed as equal by the scheduler, and while tasks may be prioritized, this is done by the scheduler based on what the tasks at hand are, not based on any sort of hierarchy between the logical cores themselves; that is, apart from the fact that the scheduler should know which logical cores belong to which physical core. However, apparently with Intel's HT, systems that support it do recognize one of the cores as the actual physical core and the other as the "virtual" core, so now I'm having doubts about AMD's SMT as well.

And yes, some games do get better FPS after disabling HT on Intel, although it seems that it's more pronounced on AMD. I'm just assuming that the threads get better prioritization for Intel on windows and it seems Linux is handling AMD slightly better (prolly better scheduler?)
Well Windows' scheduler at least used to be pretty poor, as it did not properly take into account what sort of hardware it was running on, whereas the Linux scheduler had been good at this for a long time. However, the scheduler should do a better job nowadays due to the improvements in build 1903. Still, it would be nice to see some independent testing comparing 1903 with 1803 to see whether the 15% improvement advertised by AMD (in Rocket League, 1080p and low graphics settings) actually holds any water...

In any case there are a lot of variables in play in multi-threading, so I doubt we'll ever be in a situation where it would always be beneficial to have HT or SMT on.

P.S. AnandTech has some interesting stuff about 1st-gen Zen's SMT, schedulers, and core affinity:

https://www.anandtech.com/show/1117...review-a-deep-dive-on-1800x-1700x-and-1700/10

https://www.anandtech.com/show/1344...-scheduler-wars-with-amds-threadripper-2990wx
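On that note, anyone on Linux can check the sibling pairing themselves: each logical CPU exposes a thread_siblings_list file in sysfs listing which logical CPUs share its physical core. A minimal sketch of reading that format (the helper name and the example values are mine; the cpulist format itself is the kernel's):

```python
def parse_cpu_list(text):
    """Parse Linux's cpulist format (e.g. "0,6" or "0-1") into a sorted list.

    This is the format used by files like
    /sys/devices/system/cpu/cpu0/topology/thread_siblings_list,
    which lists the logical CPUs (SMT siblings) sharing one physical core.
    """
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)

# Depending on how the firmware enumerates threads, cpu0's siblings file on a
# 12-thread chip typically reads either "0,6" or "0-1":
print(parse_cpu_list("0,6"))   # [0, 6]
print(parse_cpu_list("0-1"))   # [0, 1]
```

Either way, the file only tells you which logical CPUs are siblings, not which one is "primary" - consistent with the idea that the scheduler treats them as peers.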
 

Lew Zealand

Posts: 1,292   +1,244
TechSpot Elite
I find this obsession funny. Deal with it: they are still slower in gaming, clock for clock, than Intel's architecture released in 2015 and crippled by the security patches.

No patch, windows scheduler update or HT off will change that.
Bro, it is a simple IQ test: If you are building a brand new $3K+ gaming rig, which CPU would you buy...?

Nobody cares that Intel is 4% faster, except those who promote Intel and are paid sponsors for pro players. 95% of the populace will choose AMD over Intel for PC gaming. You know this, yet you try hard to spread FUD, trying to convince someone that your fake arguments matter.

Intel is for those who have been.
I think anyone who is spending $3K on a gaming rig would consider the single best CPU for gaming, an i9-9900K. Yeah it's a one-trick pony but that trick is gaming.

All of 6% better.
With a 2080Ti.
At 1080p or lower rez & quality.

If that person was also looking to the future and an upgrade path, then they'd better buy a Ryzen 2 as you'll get nothing more from Intel. Also looking for better productivity? Yeah Ryzen. Loads of arguments in favor of Ryzen.

But I wonder about 2 things:
1) AMD's tarnished reputation from the Bulldozer days as many people are not good at changing their minds or opinions.
2) Will any reviewers move over to a Ryzen 9 39XX for their testing rigs, or stay with Intel?
 

NightAntilli

Posts: 350   +263
Manually setting core affinity should effectively do the same thing, right? It would be great if there was a tool that could save per application profiles.
You can set this up yourself, albeit with a little bit of work... You do it by setting some additional commands in a shortcut for the game. You do it once, and every time you start the game, it launches with your set affinity. The drawback is that it only works when you start it through that shortcut.
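For anyone who prefers scripting it: on Windows the shortcut trick boils down to the `start /affinity <hexmask>` switch, while on Linux the standard library can set affinity directly. A minimal sketch, assuming Linux (`os.sched_setaffinity` isn't available on Windows; the launcher idea and the choice of CPU here are just illustrative):

```python
import os

# The set of logical CPUs this process (pid 0 = self) may currently run on.
available = os.sched_getaffinity(0)

# Pin the process to a single logical CPU. A game launcher script could do
# this before spawning the game, since children inherit the affinity mask.
target = {min(available)}
os.sched_setaffinity(0, target)

print(os.sched_getaffinity(0))  # now just the one CPU we picked
```

The same inheritance property is what makes the shortcut approach work: set the mask once at launch and everything the game spawns stays inside it.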
 

Shadowboxer

Posts: 569   +333
I find this obsession funny. Deal with it: they are still slower in gaming, clock for clock, than Intel's architecture released in 2015 and crippled by the security patches.

No patch, windows scheduler update or HT off will change that.
Bro, it is a simple IQ test: If you are building a brand new $3K+ gaming rig, which CPU would you buy...?

Nobody cares that Intel is 4% faster, except those who promote Intel and are paid sponsors for pro players. 95% of the populace will choose AMD over Intel for PC gaming. You know this, yet you try hard to spread FUD, trying to convince someone that your fake arguments matter.

Intel is for those who have been.
I think anyone who is spending $3K on a gaming rig would consider the single best CPU for gaming, an i9-9900K. Yeah it's a one-trick pony but that trick is gaming.

All of 6% better.
With a 2080Ti.
At 1080p or lower rez & quality.

If that person was also looking to the future and an upgrade path, then they'd better buy a Ryzen 2 as you'll get nothing more from Intel. Also looking for better productivity? Yeah Ryzen. Loads of arguments in favor of Ryzen.

But I wonder about 2 things:
1) AMD's tarnished reputation from the Bulldozer days as many people are not good at changing their minds or opinions.
2) Will any reviewers move over to a Ryzen 9 39XX for their testing rigs, or stay with Intel?
I can’t see reviewers switching to a 3900X for a testing rig if they are testing graphics cards. It’s measurably slower than Intel’s chips at gaming and a reviewer would want the fastest CPU to prevent there from being any kind of CPU bottlenecks.

Besides, the 3900X is practically unobtainable. The only way of getting one at the moment is to pay nearly twice as much for one on eBay. AMD will be kicking themselves over the lost revenue they could have made from pricing Ryzen 2 a bit higher.
 

Strawman

Posts: 243   +170
First, no one buys a $500 CPU to game at 1080p, so the results are nice to know but meaningless. The difference at real-world 1440p+ would be nil.
Second, for such a tiny FPS difference, using percentages exaggerates the chart graphics, compounded by averaging the percentage differences: an aggregate of an aggregate is incorrect, while an average FPS difference is correct. There are way more advantages to getting the R9 3900X over the end-of-the-line i9-9900K.
Actually, that is factually incorrect. If you know what you are doing, then obviously you buy an expensive CPU for lower resolutions, since the GPU produces higher frame rates and the CPU needs to keep up with it. What doesn't make sense is buying an expensive CPU when you are playing at 4K.

1080p / ultrawide / high refresh rate setups require fast CPUs first and foremost
 

Puiu

Posts: 3,877   +2,389
Any source you could link? I was under the impression that there is no real "main" thread (or logical core, to be more exact). Instead, the two logical cores are viewed as equal by the scheduler, and while tasks may be prioritized, this is done by the scheduler based on what the tasks at hand are, not based on any sort of hierarchy between the logical cores themselves; that is, apart from the fact that the scheduler should know which logical cores belong to which physical core. However, apparently with Intel's HT, systems that support it do recognize one of the cores as the actual physical core and the other as the "virtual" core, so now I'm having doubts about AMD's SMT as well.



Well Windows' scheduler at least used to be pretty poor, as it did not properly take into account what sort of hardware it was running on, whereas the Linux scheduler had been good at this for a long time. However, the scheduler should do a better job nowadays due to the improvements in build 1903. Still, it would be nice to see some independent testing comparing 1903 with 1803 to see whether the 15% improvement advertised by AMD (in Rocket League, 1080p and low graphics settings) actually holds any water...

In any case there are a lot of variables in play in multi-threading, so I doubt we'll ever be in a situation where it would always be beneficial to have HT or SMT on.

P.S. AnandTech has some interesting stuff about 1st-gen Zen's SMT, schedulers, and core affinity:

https://www.anandtech.com/show/1117...review-a-deep-dive-on-1800x-1700x-and-1700/10

https://www.anandtech.com/show/1344...-scheduler-wars-with-amds-threadripper-2990wx
Yes, you are correct; I misunderstood where the prioritisation takes place. It's not on the core itself. Thanks for the links.
 

YSignal

Posts: 40   +30
I find this obsession funny. Deal with it: they are still slower in gaming, clock for clock, than Intel's architecture released in 2015 and crippled by the security patches.

No patch, windows scheduler update or HT off will change that.
Bro, it is a simple IQ test: If you are building a brand new $3K+ gaming rig, which CPU would you buy...?

Nobody cares that Intel is 4% faster, except those who promote Intel and are paid sponsors for pro players. 95% of the populace will choose AMD over Intel for PC gaming. You know this, yet you try hard to spread FUD, trying to convince someone that your fake arguments matter.

Intel is for those who have been.
I think anyone who is spending $3K on a gaming rig would consider the single best CPU for gaming, an i9-9900K. Yeah it's a one-trick pony but that trick is gaming.

All of 6% better.
With a 2080Ti.
At 1080p or lower rez & quality.

If that person was also looking to the future and an upgrade path, then they'd better buy a Ryzen 2 as you'll get nothing more from Intel. Also looking for better productivity? Yeah Ryzen. Loads of arguments in favor of Ryzen.

But I wonder about 2 things:
1) AMD's tarnished reputation from the Bulldozer days as many people are not good at changing their minds or opinions.
2) Will any reviewers move over to a Ryzen 9 39XX for their testing rigs, or stay with Intel?
I can’t see reviewers switching to a 3900X for a testing rig if they are testing graphics cards. It’s measurably slower than Intel’s chips at gaming and a reviewer would want the fastest CPU to prevent there from being any kind of CPU bottlenecks.

Besides, the 3900X is practically unobtainable. The only way of getting one at the moment is to pay nearly twice as much for one on eBay. AMD will be kicking themselves over the lost revenue they could have made from pricing Ryzen 2 a bit higher.
Yeah I'm sure they are kicking themselves over their products being popular and selling out. /s
 

Shadowboxer

Posts: 569   +333
Yeah I'm sure they are kicking themselves over their products being popular and selling out. /s
AMD's engineers may not be, but you can rest assured that the shareholders will be. AMD is a for-profit corporation, and if they're worth their salt they will raise their prices to match demand. If they see users willing to buy their products at higher prices, then the prices will increase. And this isn't exactly advanced business wisdom. There is a reason Intel raised prices for years: demand. And now we are seeing price cuts on Intel's latest products as demand slumps in the face of a better product from a competing manufacturer, whose prices will almost certainly rise.
 
Hi there. If you're looking for what's responsible for the SMT/HT performance loss: SMT/HT cannot be used in conjunction with some of the SSE and AVX instructions, because some CPU pipelines are not "wide enough" to handle those instructions twice per core. In that case, the HT/SMT thread can be stalled, losing most of its benefit but also messing with thread syncing at sync points, leading to a performance loss against pure non-HT/SMT cores.
 
Is 1080p important yet? Sure, most people still play at 1080p, but this kind of article is not meant for people considering a system change: who is going to buy a 9900K or a Ryzen 9 to play at 1080p?
IMHO this kind of stuff should be tested at least at 1440p; 120 Hz is only needed by the world's top-ten hardcore players of FPS and similar games
 

neeyik

Posts: 893   +826
Staff member
When using games to test CPU performance, we have to use the final frame rate to judge the devices. However, the process to make one frame is a long and complicated sequence, where the speed of it is determined by multiple stages along the way. That means for CPU testing, you need to ensure that the slowest part of the entire sequence is caused entirely by the CPU (or as entirely as you can make it).

This in turn requires making the stages that hit other hardware in the system (RAM, storage, graphics card) as simple as possible, whilst retaining the load on the CPU. The easiest way to achieve this is to maximise the graphics/detail settings, minimise the resolution, and use the best possible graphics card available. Hence why CPU tests are done at 1080p but on max/ultra settings.

For something like 1440p, 120 fps, the influence of the CPU, compared to the GPU, is greatly reduced. You can see this in the following article:

https://www.techspot.com/review/1897-ryzen-5-ryzen-9-core-i9-gaming-scaling/

Take the first test result shown:



Notice how there is no effective difference between the 3 CPUs when using an RX 580. It's a similar situation with an RX 5700 - a vastly more capable GPU than the 580, but the results suggest that there is no difference at all between the 3900X and 9900K CPUs. We only see a separation across all 3 central processors once a 2080 Ti is used.
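That reasoning can be reduced to a toy model (every number below is invented for illustration): per-frame time is roughly whichever of the CPU and GPU stages is slower, so shrinking the GPU's share of the work is the only way to expose a difference between CPUs:

```python
def fps(cpu_ms, gpu_ms):
    """Toy model: frame rate is limited by the slower of the two stages."""
    return 1000 / max(cpu_ms, gpu_ms)

cpu_a, cpu_b = 5.0, 6.0  # two hypothetical CPUs, CPU A ~20% faster per frame
for res, gpu_ms in [("4K", 16.0), ("1440p", 9.0), ("1080p", 4.0)]:
    print(res, round(fps(cpu_a, gpu_ms)), round(fps(cpu_b, gpu_ms)))
```

At the "4K" and "1440p" settings both CPUs land on identical numbers because the GPU stage dominates; only at "1080p" does the 20% CPU gap show up in the frame rate - which is exactly the RX 580 vs. 2080 Ti pattern in the scaling article.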
 

Shadowboxer

Posts: 569   +333
When using games to test CPU performance, we have to use the final frame rate to judge the devices. However, the process to make one frame is a long and complicated sequence, where the speed of it is determined by multiple stages along the way. That means for CPU testing, you need to ensure that the slowest part of the entire sequence is caused entirely by the CPU (or as entirely as you can make it).

This in turn requires making the stages that hit other hardware in the system (RAM, storage, graphics card) as simple as possible, whilst retaining the load on the CPU. The easiest way to achieve this is to maximise the graphics/detail settings, minimise the resolution, and use the best possible graphics card available. Hence why CPU tests are done at 1080p but on max/ultra settings.

For something like 1440p, 120 fps, the influence of the CPU, compared to the GPU, is greatly reduced. You can see this in the following article:

https://www.techspot.com/review/1897-ryzen-5-ryzen-9-core-i9-gaming-scaling/

Take the first test result shown:



Notice how there is no effective difference between the 3 CPUs when using an RX 580. It's a similar situation with an RX 5700 - a vastly more capable GPU than the 580, but the results suggest that there is no difference at all between the 3900X and 9900K CPUs. We only see a separation across all 3 central processors once a 2080 Ti is used.
I think it’s important testing. Say you have a 590 now, and in a few years you buy a new GPU for the same sort of money, or maybe more. That GPU could well be as powerful as a 2080 Ti, or more so. So just because the difference in performance today requires a beefy card doesn’t mean it will be the same in the future. I typically go through 1-3 cards per CPU, depending on the CPU, so I always look at the small differences at the top; it’s a good indication of which part would be preferable when it does come to upgrade time. It really sucks to buy a new graphics card and find you aren’t getting everything out of it because the CPU is holding you back.
 

neeyik

Posts: 893   +826
Staff member
It really sucks to buy a new graphics card and find you aren’t getting everything out of it because the CPU is holding you back.
But that's a different set of testing criteria from the ones targeted in this particular article; here it was about how fundamental CPU performance is affected by the use of simultaneous multithreading. In your example the criteria are different, and they get covered in the GPU scaling article I linked to, or in something like this:

https://www.techspot.com/article/1569-final-fantasy-15-cpu-benchmark/

The very first image in the benchmark results shows how the bottleneck shifts from the GPU to the CPU:

 
Someone else may have mentioned this already, but to me the drop in the 1% low frame rate is obvious. SMT allows two separate processes to run CPU instructions/computations (i.e., run code in general) on the same core during the same clock cycle.

SMT increases performance by allowing other processes to utilize "left over" execution slots on a core during each CPU clock cycle.

When it's disabled, the OS and the thousand other threads that typically run in the background don't have spare CPU time to run in. So occasionally when gaming, something in the background needs to do some work. Since it can no longer run in the "left over" slots, it ends up stealing entire clock cycles from the processes/threads that the game has running.

This causes the game threads to "wait in line" in the thread scheduler for some number of cycles while the background process does its thing.

Hope someone found this useful.
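That mechanism is easy to illustrate with a toy model (every number here is invented): a background task that occasionally steals the core barely moves the average frame time, but it craters the 1% low.

```python
# Toy model: 1000 frames that each need 10 ms of CPU time. Without SMT, assume
# a background task steals the core for 5 ms on every 100th frame, stretching
# that frame. With SMT, assume it runs in the spare execution slots instead.
frames_smt = [10.0] * 1000
frames_no_smt = [10.0 + (5.0 if i % 100 == 0 else 0.0) for i in range(1000)]

def avg_fps(frames):
    return 1000.0 / (sum(frames) / len(frames))

def one_percent_low(frames):
    # FPS computed from the slowest 1% of frame times.
    worst = sorted(frames, reverse=True)[: len(frames) // 100]
    return 1000.0 / (sum(worst) / len(worst))

print(avg_fps(frames_smt), one_percent_low(frames_smt))        # 100.0  100.0
print(avg_fps(frames_no_smt), one_percent_low(frames_no_smt))  # ~99.5  ~66.7
```

Only 1% of frames are affected, so the average is nearly unchanged while the 1% low drops by a third - the same shape as the SMT-off results in the article.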