A Puget Systems survey shows that 9 out of 10 workstation users opted for Threadripper CPUs.

Sounds like reality hurt.

You talk crap about Intel, then burst into tears when I point out AMD's issues.

I will never understand fanboys.

Instead of hoping Arrow Lake will be great, you outright deny it will happen, even though it has been officially confirmed on several occasions. Hahaha.
Reality hurts when AMD gets double the performance from AVX-512 and Alder Lake users are stuck with nothing, hahaha!
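In the best case that "double" is literal: one 512-bit instruction processes twice as many floats as one 256-bit AVX2 instruction, so the AVX-512 loop retires half the vector ops. A minimal C sketch of the same loop at both widths (illustration only; the function names are made up, n is assumed to be a multiple of 16, and the real-world speedup depends on whether the loop is compute-bound):

#include <immintrin.h>

/* Same element-wise add at two vector widths. The AVX-512 version
   touches 16 floats per instruction, the AVX2 version touches 8. */
void add_avx512(float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(a + i, _mm512_add_ps(va, vb));
    }
}

void add_avx2(float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(a + i, _mm256_add_ps(va, vb));
    }
}

The first function builds with -mavx512f and runs on Zen 4; on Alder Lake it would fault with an illegal instruction, which is the whole complaint.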
 
The change didn't show up immediately.

The problem with your logic is that it can explain only one CPU launch. How about Zen 3? Zen 4? Zen 3 3D? Zen 4 3D? Do you see any impact on the chart? There should be. But no.

You missed some stuff there. I'm trying to understand something effing weird in the chart. Zen 2 predates the beginning of the chart. Zen 3 3D and Zen 4 3D are not productivity CPUs, and this company is focused on productivity. Zen 4 was an improvement over Zen 3 in every way, and I agree it should have balanced the CPU sales more. My guess is that when Alder Lake came out, the company changed its default CPU suggestion to Alder Lake and never changed it again. Amazing that people can be so lazy on both the seller and buyer side.

So you are making assumptions based on a single CPU launch and a single data point on the graph. It just doesn't work that way.

It works exactly that way, because there is a change and all I have is assumptions. I'm trying to make reasonable assumptions, and Alder Lake's significant productivity gains match well with this company's focus and the timing.

Sure, it could be coincidental, but I have yet to see another explanation that fits that wild swing in the chart.

It may well be something like I already showed: by default, the custom-system configurator recommends Intel.

Agreed. And the assumption is that they had AMD as the default before Alder Lake came out, because pre-Alder, Intel's productivity performance was not competitive.

As for productivity "improvements", I saw how good Alder Lake really is in real life. Productivity work: something intensive running in a virtual machine. The worker drags the virtual machine window around and wiggles it. "It calculates faster this way." Why? Because then Thread Director keeps it on the P-cores instead of transferring it onto the crap cores.
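If you want the non-wiggle version of that trick, the blunt workaround is pinning the worker process to the P-cores so the scheduler can't migrate it. A minimal Linux sketch, assuming logical CPUs 0-15 are the P-core threads (true on an 8P+8E part like the 12900K, but verify with lscpu first; on Windows you would set the processor affinity mask instead, e.g. via start /affinity):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to logical CPUs 0-15, assumed here to be
   the P-core threads, so the OS scheduler cannot move it to E-cores. */
int pin_to_pcores(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 16; cpu++)
        CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

The shell one-liner is taskset -c 0-15 <command>. That any of this beats just running the workload says enough.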

So anyone who says Alder Lake is good for productivity is a ***** or has not actually tried it.

Agreed that reality and benchmarks frequently don't match, but when you're buying a system from a company like this, does anyone get the opportunity to sample an AMD and an Intel system to see if one is actually the better one for their workflow? Damn, that would be a nice option.
 
You missed some stuff there. I'm trying to understand something effing weird in the chart. Zen 2 predates the beginning of the chart. Zen 3 3D and Zen 4 3D are not productivity CPUs, and this company is focused on productivity. Zen 4 was an improvement over Zen 3 in every way, and I agree it should have balanced the CPU sales more. My guess is that when Alder Lake came out, the company changed its default CPU suggestion to Alder Lake and never changed it again. Amazing that people can be so lazy on both the seller and buyer side.
Yeah, the default is probably Alder Lake, and that explains everything.
It works exactly that way, because there is a change and all I have is assumptions. I'm trying to make reasonable assumptions, and Alder Lake's significant productivity gains match well with this company's focus and the timing.

Sure, it could be coincidental, but I have yet to see another explanation that fits that wild swing in the chart.
Too bad: Alder Lake is NOT a productivity CPU. Intel's own AVX-512 page lists the workloads it accelerates, all of which Alder Lake misses out on: scientific simulations, financial analytics, artificial intelligence (AI)/deep learning, 3D modeling and analysis, image and audio/video processing, cryptography, and data compression.
Source: Intel https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/what-is-intel-avx-512.html
Agreed. And the assumption is that they had AMD as the default before Alder Lake came out, because pre-Alder, Intel's productivity performance was not competitive.
And it still is not. Again, AVX-512 is a big thing for productivity, and Alder Lake lacks it.
Agreed that reality and benchmarks frequently don't match, but when you're buying a system from a company like this, does anyone get the opportunity to sample an AMD and an Intel system to see if one is actually the better one for their workflow? Damn, that would be a nice option.
That would be awesome, but pretty hard to work out in practice. Too bad.
 
Why are you even talking about Alder Lake in 2024? :joy:

For content creation, the 14900K beats the 7950X/7950X3D in most cases. That's just reality for you. You can talk AVX-512 all you want; real-world performance is what matters.

Even the 13900K and 13900KS beat them. And the i7-14700K can beat them too in tons of apps.

The 14900K, once again:
"Faster in productivity than any other AMD CPU" +
"Incredible gaming performance"


Meaning with Intel you get BOTH on the same chip, while with AMD you have to CHOOSE between 3D and non-3D; a single chip is not good for everything. THAT is AMD's big problem, plus the Infinity Fabric (IF) traffic on chips with more than 8 cores, meaning that 12-16 core chips perform WORSE than 8-core chips in gaming (and some apps).

Intel's problem is power draw, but that will be fixed with TSMC 3nm on Arrow Lake now. Just wait and see. Intel has ZERO problems matching AMD in terms of performance. AMD has had a NODE ADVANTAGE for several years and STILL doesn't beat Intel.

Now Intel gets the node advantage. It will be fun to see. AMD is forced to use TSMC 3nm as well, but rumours claim 4nm (5nm); let's hope not, or AMD will be back to price cuts all over.

AMD can use Intel 18A in 2025 :joy::joy: Intel will be open for business.

The only company that can afford to use TSMC's best node is Apple. Without Apple, TSMC would be years behind where they are now. Apple money made TSMC.

Then come Nvidia and Intel. Both have already reserved tons of 3nm capacity.

AMD can't afford to use the best nodes at TSMC.

And this is why AMD will never get ahead of the pack.
 
Why are you even talking about Alder Lake in 2024? :joy:

For content creation, the 14900K beats the 7950X/7950X3D in most cases. That's just reality for you. You can talk AVX-512 all you want; real-world performance is what matters.

Even the 13900K and 13900KS beat them. And the i7-14700K can beat them too in tons of apps.

The 14900K, once again:
"Faster in productivity than any other AMD CPU" +
"Incredible gaming performance"


Once again: Alder Lake / Raptor Lake, no matter which, lack AVX-512 support. Now, how much performance is lost? Guess what? They don't have a single AVX-512 test there. For some reason, SiSoft Sandra tests are pretty much absent from modern reviews. The reason? It supports AVX-512. Like I said, those amateurs are no match for me.

Meaning with Intel you get BOTH on the same chip, while with AMD you have to CHOOSE between 3D and non-3D; a single chip is not good for everything. THAT is AMD's big problem, plus the Infinity Fabric (IF) traffic on chips with more than 8 cores, meaning that 12-16 core chips perform WORSE than 8-core chips in gaming (and some apps).

Intel's problem is power draw, but that will be fixed with TSMC 3nm on Arrow Lake now. Just wait and see. Intel has ZERO problems matching AMD in terms of performance. AMD has had a NODE ADVANTAGE for several years and STILL doesn't beat Intel.

Now Intel gets the node advantage. It will be fun to see. AMD is forced to use TSMC 3nm as well, but rumours claim 4nm (5nm); let's hope not, or AMD will be back to price cuts all over.

AMD can use Intel 18A in 2025 :joy::joy: Intel will be open for business.

The only company that can afford to use TSMC's best node is Apple. Without Apple, TSMC would be years behind where they are now. Apple money made TSMC.

Then come Nvidia and Intel. Both have already reserved tons of 3nm capacity.

AMD can't afford to use the best nodes at TSMC.

And this is why AMD will never get ahead of the pack.

Blah blah. Let me tell you how this works. Alder Lake and the others on the same architecture do not support AVX-512, so it's not tested. When Intel launches its next CPU that supports it (Arrow Lake), Intel will promote this awesome feature, and review sites will promote how much performance Intel gained over Alder Lake. However, it also means CURRENT AMD CPUs leave Alder Lake far behind, but who cares about old stuff. That will happen.

As for your BS: if you really think AMD "only" has a node advantage, prepare for a rude awakening.
 
Once again: Alder Lake / Raptor Lake, no matter which, lack AVX-512 support. Now, how much performance is lost? Guess what? They don't have a single AVX-512 test there. For some reason, SiSoft Sandra tests are pretty much absent from modern reviews. The reason? It supports AVX-512. Like I said, those amateurs are no match for me.



Blah blah. Let me tell you how this works. Alder Lake and the others on the same architecture do not support AVX-512, so it's not tested. When Intel launches its next CPU that supports it (Arrow Lake), Intel will promote this awesome feature, and review sites will promote how much performance Intel gained over Alder Lake. However, it also means CURRENT AMD CPUs leave Alder Lake far behind, but who cares about old stuff. That will happen.

As for your BS: if you really think AMD "only" has a node advantage, prepare for a rude awakening.
SiSoft Sandra is for absolute beginners, and AVX-512 doesn't matter for consumer chips. That's why Intel easily wins in productivity.

"Faster in productivity than any other AMD CPU" +
"Incredible gaming performance"

Reality calls.

AMD has the node advantage and has had it for years. AMD relies 100% on TSMC. Sadly, AMD can't use the top-quality nodes like Apple, Nvidia, and now Intel.
 
Also, AVX-512 is hated by many people.


Intel is looking to replace AVX-512 with AVX10, which Arrow Lake will probably get.

AVX-512 is terrible for consumers because heat output skyrockets, on AMD chips too.

Most applications don't even use AVX-512, and it's mostly needed for server workloads.

If AVX-512 were that important, Intel would not have dropped it. However, they will improve it.
 
Also, AVX-512 is hated by many people.


Intel is looking to replace AVX-512 with AVX10, which Arrow Lake will probably get.

AVX-512 is terrible for consumers because heat output skyrockets, on AMD chips too.

Most applications don't even use AVX-512, and it's mostly needed for server workloads.

If AVX-512 were that important, Intel would not have dropped it. However, they will improve it.
That article is 4 years old.

You probably didn't read at all what AVX10 is. I'll tell you: it's basically AVX-512 with an optional 256-bit vector length. You already said AVX-512 sucks; well, too bad, then AVX10 sucks too 🤦‍♂️
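Anyone can check which camp their chip is in; a minimal GCC/Clang sketch reading the CPUID feature bits (leaf 7, subleaf 0: EBX bit 5 = AVX2, bit 16 = AVX-512F; a complete check would also confirm OS XSAVE support, omitted here for brevity):

#include <cpuid.h>
#include <stdio.h>

/* Leaf 7, subleaf 0: EBX bit 5 = AVX2, bit 16 = AVX-512F (foundation).
   Zen 4 reports both; Alder Lake / Raptor Lake report AVX2 only. */
int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("AVX2:     %s\n", (ebx & (1u << 5))  ? "yes" : "no");
    printf("AVX-512F: %s\n", (ebx & (1u << 16)) ? "yes" : "no");
    return 0;
}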
 
Why are you even talking about Alder Lake in 2024? :joy:

It's very simple:

Pay attention.
Read the article.
Actually read the posts you're responding to.

We were having a discussion about what happened in the CPU graph at the beginning of 2022 and coming up with explanations for why that happened, which I think we did.

Does that make sense?
 
That article is 4 years old.

You probably didn't read at all what AVX10 is. I'll tell you: it's basically AVX-512 with an optional 256-bit vector length. You already said AVX-512 sucks; well, too bad, then AVX10 sucks too 🤦‍♂️


I see :joy:

"Faster in productivity than any other AMD CPU" shows how little AVX512 matters for consumers, pointless to bring up really

Not the first time an AMD fanboy has grasped at straws, though.

:laughing::joy:

Let's see what AMD will do when they lose the node advantage later this year. I hope they did not skimp on the process node by using 4/5nm when Intel (and Nvidia) go directly to 3nm.
 
If AVX-512 is useless, then AVX10 is too. Simple.
:joy:

"Faster in productivity than any other AMD CPU" shows how little AVX512 matters for consumers, pointless to bring up really

Not the first time an AMD fanboy has grasped at straws, though.

:laughing::joy:

Let's see what AMD will do when they lose the node advantage later this year. I hope they did not skimp on the process node by using 4/5nm when Intel (and Nvidia) go directly to 3nm.
Guess what? They tested ZERO AVX-512-supporting software, because then AMD would have been miles faster. So yes, that article is total BS.
 
Not many. For example, Intel told Cinebench to drop support because Intel CPUs suck at it. Source:
But when Intel releases Arrow Lake, there will be AVX-512 tests everywhere. Mark my words.

Intel invented it, but does not support it on Alder Lake 🤦‍♀️
Intel dropped it because it was pointless and still is, especially for consumers.

AVX10.1 has replaced AVX-512 at this point anyway; this is 2024, not 2014.
 
Intel dropped it because it was pointless and still is, especially for consumers.

AVX10.1 has replaced AVX-512 at this point anyway; this is 2024, not 2014.
So Intel is ultimately stupid. First they promote it, then they drop support :D

I don't know if you are trolling or just plain stupid. Once again: AVX10.1 is NOTHING BUT AVX-512 WITH AN OPTIONAL 256-BIT VECTOR LENGTH! It's NOT a REPLACEMENT!
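Intel's published AVX10 enumeration says the same thing: one converged ISA version, with the supported vector lengths reported as separate capability bits. A sketch based on the AVX10.1 spec's bit layout (leaf 0x24; these positions should be double-checked against the current spec revision):

#include <cpuid.h>
#include <stdio.h>

/* Per the AVX10.1 spec: CPUID.(7,1):EDX bit 19 flags AVX10 support;
   leaf 0x24 then reports the version (EBX[7:0]) and the supported
   vector lengths (EBX bit 17 = 256-bit, bit 18 = 512-bit). */
int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx) ||
        !(edx & (1u << 19))) {
        puts("No AVX10");
        return 0;
    }
    __get_cpuid_count(0x24, 0, &eax, &ebx, &ecx, &edx);
    printf("AVX10 version %u, 256-bit: %s, 512-bit: %s\n",
           ebx & 0xffu,
           (ebx & (1u << 17)) ? "yes" : "no",
           (ebx & (1u << 18)) ? "yes" : "no");
    return 0;
}

Same instruction set either way; only the register width is negotiable, which is exactly why "replacement" is the wrong word.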
 
So Intel is ultimately stupid. First they promote it, then they drop support :D

I don't know if you are trolling or just plain stupid. Once again: AVX10.1 is NOTHING BUT AVX-512 WITH AN OPTIONAL 256-BIT VECTOR LENGTH! It's NOT a REPLACEMENT!
Yes it is, because AVX-512 has too many issues and AVX10+ aims to fix them.

AVX-512 = more heat, higher power + downclocking because TjMax is reached. It's not worth it for consumers at all.

Intel and AMD CPUs alike struggle with high temps; both are really hitting the ceiling. The only somewhat cool chips are AMD's 3D parts, but that's just because the stacked cache would roast if temps went too high, hence clock speed is gimped and OC is complicated / not really possible.

Again, which consumer workload uses AVX-512?

You are grasping at straws :joy:

Let's just hope AMD did not cheap out and go with TSMC 4/5nm for Zen 5, meaning Intel will have the node advantage again.
 