AMD's Ryzen 5 can be stably overclocked to 4 GHz

TL;DR: those who do care about the CPU in their workstation will almost always go for more cores over higher clocks. There is a good reason why Intel sold 8-core CPUs for $1,000 even though they don't clock as high as the mainstream 4-core CPUs.
Only specific workloads require higher clocks; most scale better with cores (provided there isn't a huge difference in clocks).
"those who do care about the CPU in their workstation" are a tiny portion of the population. Higher frequency faster single-threaded Intel CPU's are preferable to most people. You're agreeing with me me yet telling me to "go out in the world" to confirm what we're both saying. I'm not sure what your point is anymore other than to keep reiterating my point - Ryzen is great for the very few people who can take advantage of it's strength.
 
"those who do care about the CPU in their workstation" are a tiny portion of the population. Higher frequency faster single-threaded Intel CPU's are preferable to most people. You're agreeing with me me yet telling me to "go out in the world" to confirm what we're both saying. I'm not sure what your point is anymore other than to keep reiterating my point - Ryzen is great for the very few people who can take advantage of it's strength.
I'm sorry, dude, but you need to start making sense in your comments. You seem to have a really weird view of the world, which stems from not actually working in IT or any big company.
And why did you suddenly move from people using PCs for work ("enterprise") to the entire population of the world?

From what I've seen, you seem to be under the impression that people at work only do one thing at a time and never open multiple applications simultaneously.
 
I love how they say it beats the i7 7700K stock and overclocked to 4.9 GHz.

Well, duh, the damn thing has 2 more cores and 4 more threads. Let's talk single-core performance and gaming.

Let's see the 4-core, 8-thread Ryzen 5 "compete".

Let's talk price, duh?
 
I'm an auditor whose clients include many enterprises with tens of thousands of PCs. Part of the audit is an IT controls analysis and an inventory of the company's assets, in which detailed listings of the company's machines are laid bare. I have clients in most fields: healthcare, manufacturing, CSD, banking (retail and investment), etc. Please tell me more about how my world view is weird...
 
I never knew being an auditor also meant being an expert in IT. I've bought desktop PCs/laptops for IT companies, schools, construction companies, engineers, architects, graphic designers, programmers, bankers, etc.

You will NEVER hear a company say that they need "high clocks" for their PCs. What they want is the best performance for the smallest price possible. Most of the companies you listed need two things: office PCs and servers (anything from a cheap Pentium up to an i5, plus Xeons).
Those who have the budget for bigger workstations (and need them) will always go for the 6/8-core Broadwell-E CPUs from Intel and don't even take the top-end mainstream CPUs (like the 7700K) into consideration.
Big IT companies generally offer their programmers laptops (Windows business laptops like ThinkPads, or Apple machines) plus the best PC they can buy for a certain budget (smaller companies usually buy just one of the two).

I think I know why you are confused: up until now, when somebody wanted better PCs, there was no option besides the next tier of Intel CPUs, which generally have higher clocks. Broadwell-E was just too expensive.
 
I never knew being an auditor also meant being an expert in IT.
LOL. Perhaps you should look into assurance services sometime; your CEO and board know them all too well.

In any case, you've completely missed the point and I am done with this thread. Thankfully we should see the full reviews of Ryzen 5 this week, and they will confirm the same story as Ryzen 7: behind Intel in everything but heavily threaded workloads, which a few people will greatly appreciate.
 
It won't take 5 years. Within 2 years at most you'll see big AAA games using more and more threads. You can blame DX11 for the very long period in which games didn't use more than 4 threads. The adoption of DX12 and Vulkan will take a bit more time, but eventually it will become the standard, just as DX11 replaced DX9/10.
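[Editor's aside: the reason the API matters is that DX11 funnels almost all draw submission through a single immediate context, while DX12 and Vulkan let every worker thread record its own command list. A conceptual sketch of that model, with plain std::thread and a hypothetical CommandList type standing in for the real API objects:]

```cpp
#include <functional>
#include <thread>
#include <vector>

// Conceptual sketch only: CommandList and recordDrawCalls are hypothetical
// stand-ins for a DX12 command list / Vulkan command buffer and its recording.
struct CommandList { std::vector<int> draws; };

void recordDrawCalls(CommandList& cl, int first, int count) {
    for (int i = first; i < first + count; ++i)
        cl.draws.push_back(i); // each thread fills its own private list: no shared lock
}

int main() {
    const int numThreads = 4, drawsPerThread = 1000;
    std::vector<CommandList> lists(numThreads);
    std::vector<std::thread> workers;

    // DX12/Vulkan model: every worker thread records into its own command list...
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back(recordDrawCalls, std::ref(lists[t]),
                             t * drawsPerThread, drawsPerThread);
    for (auto& w : workers) w.join();

    // ...and only the final submit of the recorded lists is serialized.
    // Under DX11, essentially all of the recording above sits on one thread.
    return 0;
}
```

Only the final queue submission is serialized, which is why extra cores that sat idle under DX11 can actually be fed work under the newer APIs.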


I was genuinely interested in the C2D vs C2Q gaming question because I owned both. I didn't cast my net too far beyond Techspot's own CPU benchmarks from way back, but I gathered a fair few.

Their earliest archived benches here showed a Q6600 vs an E8500. OK, it's a later C2D, but it fits the comparison in question: a faster, higher-clocked dual core.

It turned out that nearly 3+ years after the Q6600 launched in January 2007, most games still ran pretty decently on the C2D.

May 2010, 3+ years after Q6600 launch:
[benchmark chart: CPU_Performance.png]

The E8500 edges it.

[benchmark chart: CPU_Performance.png]

It's not massive again, but the E8500 is nudging 30 FPS when the Q6600 isn't.

June 2010:
[benchmark chart: CPU.png]

A rare result: Q6600 = E8500.

July 2010:
[benchmark chart: CPU.png]

No E8500 tested, but the article alludes to it likely being faster when it states the game only uses 2 cores and shows the Q6600's struggles against the next-gen i3 540 dual core. In short: higher-clocked C2D = faster than C2Q.

October 2010:
[benchmark chart: CPU_02.png]

The Q6600 loses, and it's not looking good in any case; even a budget Phenom is faster.

November 2010:
[benchmark chart: CPU.png]

We finally see a good win for the Q6600 against the E8500, approaching 4 years since the Q6600 arrived, but it's not that convincing. It's just hitting a 30 FPS minimum here, and it's losing to a budget X4 645 quad, which at this point in history costs only about $100. It's also not even half as fast as the i5 750, which shows its useful life as a gaming chip with a good GPU is probably nearing the end.

February 2011, the Q6600 is now over 4 years old:

[benchmark chart: CPU1.png]

Again the Q6600 is struggling here against the E8500 in an advanced (for the time) Unreal Engine game. This is important because, IIRC, this mirrored most UE games of this period for two-core usage. RE: Singularity above, but also stuff like Mass Effect 2, etc.

March 2011:
[benchmark chart: CPU_01.png]

A significant win for the Q6600 here, over 4 years since it arrived, though on one of the best technical game engines around. That said, it is clear again that there is one hell of a gap now between it and even a budget dual core like the i3 540, which I think is a little over $100 (it's a year old; it was like $140 new).

[benchmark chart: CPU1.png]

The Q6600 edges it; the E8500 is still viable in this one, though.

It seems Techspot stopped testing Core 2 models after about this point; they were getting too slow, I would imagine, is the reasoning. I may have missed the odd test they did, but I posted the ones I found.

I think the pattern is fairly clear: the competitiveness of the Q6600 definitely increased as the 4 years passed, no doubt.

But the problem is twofold, as I hinted at in my first post. It took all that time for it to really start showing its extra muscle in games against the C2D, and by the time it did, it was borderline too slow anyway, and/or budget chips at that time were starting to destroy it so much you may as well have just ditched the platform.

I hope you find it a little interesting.
 
These are indeed some really interesting results.
The base clocks of the Q6600 were really low (2.4 GHz with no boost), but the reason the Q6600 is so legendary is that it was cheap and it could OC really, really well (a good air cooler could get it to 3.4-3.6 GHz easily). At 3.2-3.4 GHz it would beat the E8500 in single-threaded benchmarks and destroy it in multithreading. Both of these CPUs also lack Hyper-Threading, so you get only 4 threads on the Q6600 and 2 on the E8500.
 

For reference, an E8500 could usually do 4-4.2 GHz. The Q6600 was indeed a good CPU, but I believe the point I made still stands. A quad core today that overclocks well beyond a 6- or 8-core is going to be faster in the vast majority of games now, and probably for several years to come yet.

By the time it isn't, it probably won't matter. My attention was drawn recently to how an $80 G4560 manages to keep up with a 2500K in modern games. Granted, the 2500K overclocks, but I find it pretty fantastic that such a cheap part today makes anything less than a 2500K from 5 years ago mostly pointless for modern games.

I think if you are a gamer, just stick with what is fastest now and for the near future. By the time it isn't, you'll be able to get a budget setup that trashes something old with more cores anyway.
 
For games, definitely. But as you've seen, the difference between the Q6600 and the E8500 grew smaller as time passed, and in some games it actually won (at base clocks). Let's not forget that in those days even the GPU drivers were mostly single-threaded. It was a transition period, and quad-core CPUs were fairly new (a novelty).

Since we're discussing futureproofing, I would add that by picking the AM4 socket you'll be able to upgrade your CPU 2-3 generations from now. PCIe 4.0, when it comes out next year, won't help single-GPU setups at all for another 3-4 years (the performance delta between 2.0 and 3.0 on a Titan X in games is under 5%), and DDR5 is expected to launch for consumers around 2020 (around the time AMD is planning to release a new, incompatible socket).

If you plan on doing anything else besides gaming, then going for more cores is never a bad decision (especially if the single-core performance is not that far behind).

As for the i5 2500K vs the G4560, it would be weird if 5 years didn't make a difference in terms of pricing :D Both CPUs have 4 threads and similar clocks, although the G4560 achieves this with HT. Let's not forget that the G4560 also puts Intel's i3 line to shame in terms of value.
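[Editor's aside: to put numbers on the PCIe point, x16 bandwidth roughly doubles each generation, and the logic is that if one doubling (2.0 to 3.0) buys under 5% in games, the next doubling buys even less. A back-of-the-envelope sketch using nominal link rates:]

```cpp
#include <cstdio>

// Nominal per-lane signaling rate (GT/s) and encoding efficiency by generation.
struct PcieGen { const char* name; double gtps; double efficiency; };

int main() {
    const PcieGen gens[] = {
        {"PCIe 2.0", 5.0,  8.0 / 10.0},    // 8b/10b encoding
        {"PCIe 3.0", 8.0,  128.0 / 130.0}, // 128b/130b encoding
        {"PCIe 4.0", 16.0, 128.0 / 130.0},
    };
    for (const PcieGen& g : gens) {
        // x16 link, one direction: lanes * GT/s * efficiency / 8 bits per byte
        double gb_per_s = 16 * g.gtps * g.efficiency / 8.0;
        std::printf("%s x16: ~%.1f GB/s\n", g.name, gb_per_s); // ~8 / ~15.8 / ~31.5
    }
    // If doubling from ~8 to ~16 GB/s (2.0 -> 3.0) gains under 5% in games,
    // games aren't bandwidth-bound, so ~31 GB/s (4.0) helps even less.
    return 0;
}
```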
 
My attention was drawn recently to how an $80 G4560 manages to keep up with a 2500K in modern games

Because of Hyper-Threading.
The G4560 and 2500K both have 4 logical cores and similar clock speeds.

As far as the dispute going on here, it's pretty simple.
Games for the longest time really only used about 4-6 threads; now they are using 4-8 threads and more cache.
Not cores, threads.

Will they be using 12-16 threads anytime soon? No.
Which is why the 7600K beats up on Ryzen's 1800X across 18 or so games, including newer ones.
You think that will magically change in just 24 months?
The same thing was said about Bulldozer, right?
IMO it will be at least 5 more years before games are utilizing 12-16 threads.
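[Editor's aside: a small illustration of the threads-vs-cores distinction the post is drawing; the thread counts here are illustrative, not taken from any real engine:]

```cpp
#include <iostream>
#include <thread>
#include <vector>

// A game's "8 threads" are software threads; the OS multiplexes them onto
// whatever logical cores (hardware threads) the CPU actually exposes.
int main() {
    // Logical cores, e.g. 4 on a 7600K, 16 on an 1800X (may report 0 if unknown).
    unsigned hw = std::thread::hardware_concurrency();
    std::cout << "logical cores: " << hw << "\n";

    const int gameThreads = 8; // e.g. render, audio, physics, streaming, workers
    std::vector<std::thread> pool;
    for (int i = 0; i < gameThreads; ++i)
        pool.emplace_back([] {
            // Each software thread runs fine even when gameThreads > hw; the
            // scheduler just time-slices them. Extra *cores* only pay off when
            // these threads carry enough parallel work to keep them all busy.
        });
    for (auto& t : pool) t.join();
    return 0;
}
```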
 
"Higher-frequency, faster single-threaded Intel CPUs are preferable for most people."

There's a big difference between what people prefer & what their company's IT department actually provides. The vast majority of office workers have zero control over the hardware... which is why OEMs like Dell & HP have stayed in business so long, selling companies cheap office PCs (like my current Dell Optiplex 9010 Small Form Factor, "rocking" an i5-3550 equipped with 32-bit Windows 7, 4GB of RAM, & no discrete GPU). But that's because a) we're not gaming, we're working, so we don't need a discrete GPU; b) a 4GB/32-bit machine is a lot cheaper than an 8GB/64-bit machine; & c) office work doesn't stress a machine like specialized applications (gaming, video, 3D CAD, etc.). And especially with everyone moving "to the cloud", your office's Internet backbone speed (or lack thereof) is going to have a much bigger effect on how fast your applications open files than extra clock cycles. But we do see bigger advantages to having more cores (physical or logical) in our systems, because we can run more things simultaneously. And even there, 2C/4T < 4C/4T < 4C/8T, & so on.
 
Let's also not forget how AMD was the one everybody put down because their CPUs were not good enough. Now that they are competing with Intel, let's bag them because games don't use all those cores, but let's also not forget that people do more than just play games. How about we give kudos to AMD for getting back on the horse and giving it to Intel, which has for years been shafting people with little increments and high prices? Now I just want to see Radeon GPUs get back in the game and give Nvidia a great kick to the gonads.
 

I think we're all forgetting that when the Q6600 was "fighting" the E8200 we had the Xbox 360 and PS3, which had 3-core and 1-core CPUs. Now we have consoles with 8 weak cores; game engines have to go multithreaded or they won't get more than 15 FPS on them :)
 

You mean the renowned Cell processor, which effectively had 8 threads when most desktop processors of the time had 2?

So, not entirely accurate. Maybe you don't remember the complaints from virtually every developer at the time about how much of a PITA it was to schedule tasks across so many threads on the PS3 and assign proper workloads.

P.S. The X360 was 3 cores and 6 threads too, btw, lol.

The X360's CPU was also slow even for its time and relied on good threading. We consider the 8-thread Jaguars in the Xbone slow for their time, and they are, but they are stupendously faster than the Xenon in the 360.

Look no further than the static translation of 360 games running on the emulator built for the Xbone!
 
The Cell CPU had only 1 general-purpose core (a 3.2 GHz RISC PowerPC PPE). The rest were SPEs (1 reserved for the OS and security, 1 disabled to increase yields, and 6 for the rest of the games/apps).
You can compare it to an APU that has one CPU core plus a weird compute-oriented GPU with relatively high floating-point performance but no branch prediction hardware.
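[Editor's aside: since the SPEs had no branch predictor, code written for them leaned on branch-free "select" idioms instead of if/else. A minimal scalar sketch of the pattern, with plain C++ standing in for the SPU's select intrinsic:]

```cpp
#include <cstdint>
#include <cstdio>

// On an in-order core with no branch predictor, a hard-to-predict 'if' stalls
// the pipeline, so SPE-style code computes both sides and selects with a mask.
int32_t branchless_max(int32_t a, int32_t b) {
    int32_t mask = -(int32_t)(a > b); // all-ones if a > b, else all-zeros
    return (a & mask) | (b & ~mask);  // pick a or b without branching
}

int main() {
    std::printf("%d\n", branchless_max(3, 7));  // 7
    std::printf("%d\n", branchless_max(9, -2)); // 9
    return 0;
}
```

Computing both sides and masking looks wasteful, but on a core with no prediction it beats eating a pipeline flush on every mispredicted branch.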
 

Yes, effectively 8 CPU threads. The 360 had 6.

It was suggested that console games were only programmed to run on 3 cores or 1 core, as an argument for better threading today. This is not really true.
 

I totally forgot about the "HT" in the 360, but my point still stands: those were strong cores compared to what we have in today's consoles. I believe that last gen lacked memory and graphics performance, not CPU performance, and the SPEs were mainly there to help render graphics on the PS3.
 

They weren't strong cores, though; I touched on that in my reply.

The 3.2 GHz 3C/6T Xenon in the 360 was a pile of junk.

So much so that Microsoft emulates it almost perfectly with static translation on the Xbone, which has a 1.75 GHz 8C/8T Jaguar: a design that is a couple of low-power mobile modules fused together, only acceptable for low-end notebooks or below.

The total performance of that console CPU is close to something like a lower-end dual-core Sandy Bridge i3, lol.

In short, both 360 and PS3 games had to be pretty well threaded to get the best out of their CPU performance. Weak CPUs that have to use many threads aren't suddenly a new thing with the PS4/Xbone.
 