AMD claims Ryzen Threadripper 9000 is up to 145% faster than Intel Xeon

DragonSlayer101

The big picture: AMD announced its Ryzen Threadripper 9000-series "Shimada Peak" processors at Computex but didn't provide any benchmarks to compare them against Intel's latest Xeon CPUs. This week, the company finally released official benchmarks for the new chips, claiming they are up to 145 percent faster than their Intel counterparts.

According to AMD, the Threadripper 9980X HEDT processor is up to 108 percent faster than the Xeon W9-3595X in Corona Render, up to 41 percent faster in Autodesk Revit, and up to 68 percent faster in MATLAB. It also reportedly delivers up to a 65 percent performance gain in Unreal Engine compilation and up to a 22 percent uplift in Adobe Premiere Pro compared to the same Intel chip.

As for the Threadripper Pro 9995WX, AMD claims it is up to 26 percent faster in Adobe After Effects compared to its immediate predecessor, the Threadripper Pro 7995WX. It also reportedly delivers a 17 percent performance uplift in Autodesk Maya, 20 percent in V-Ray, and 19 percent in Cinebench (nT).

AMD also compared the AI performance of the 9995WX against that of the Xeon W9-3595X. According to the company, its new workstation chip delivers up to 49 percent faster LLM processing in DeepSeek R1 32B, up to 34 percent faster text-to-image generation in ComfyUI + Flux.1 Diffusion Model, and up to 28 percent faster AI-enhanced creation in DaVinci Resolve Studio.

The 9995WX also reportedly shows massive gains in other creative and professional applications, such as KeyShot and V-Ray. In the former, it delivers up to 119 percent faster rendering than the Xeon W9-3595X, while in the latter, it is up to 145 percent faster, according to AMD's data. Performance in other apps, such as After Effects and Autodesk Maya, also shows high double-digit gains.

The Threadripper Pro 9000 WX-series comprises seven SKUs, while the non-Pro HEDT lineup includes three chips. The flagship 9995WX features 96 cores, 192 threads, a boost clock of up to 5.45 GHz, a 350 W TDP, and 384 MB of L3 cache. It offers 128 PCIe 5.0 lanes and supports DDR5-6400 ECC memory. The new chips will launch in July, although AMD has yet to announce pricing.

 
Usually I take these claims with a grain of salt, but… the 7980x already destroys Intel's latest Xeons, so it's not far-fetched to think the 9980x and 9995WX will completely annihilate them…
We will all know when reviews are published.
 
"up to"
Marketing speak for "almost never".

They can't be sued for it though because they have data saying it happened once under ideal conditions.
 
I REALLY want to build a Threadripper system but can't justify it as long as 1) this is a hobby and I'm not making money, and 2) used server hardware is as cheap as it is, especially Intel hardware. I've always been an AMD guy, but my homelab is filled with mostly Xeons and old Ryzen CPUs.
 
I REALLY want to build a Threadripper system but can't justify it as long as 1) this is a hobby and I'm not making money, and 2) used server hardware is as cheap as it is, especially Intel hardware. I've always been an AMD guy, but my homelab is filled with mostly Xeons and old Ryzen CPUs.

I got on the Threadripper bandwagon about 5-6 years ago; the biggest difference versus server hardware is that most servers are conservatively clocked. The Threadripper does better than most on single-threaded work that needs clock speed.
 
I got on the Threadripper bandwagon about 5-6 years ago; the biggest difference versus server hardware is that most servers are conservatively clocked. The Threadripper does better than most on single-threaded work that needs clock speed.
Well, nothing I do is important enough to need the extra speed. I have a 7800x in my main rig and a 5950x in my server rack; aside from that, it's mostly 9th and 10th gen Xeons I got for a bargain. I use most of my machines for playing with VMs and hosting game servers from 10-15 years ago that I used to play. I have a NAS, a Plex server, and one is a dedicated firewall sitting between my modem and my router. I ran a local AI for a while, but it was slow running on the CPU alone. It would probably be fine if I stuck a workstation GPU in there, but that's out of the budget.

I mean, it really is just a hobby.
 
Well, nothing I do is important enough to need the extra speed. I have a 7800x in my main rig and a 5950x in my server rack; aside from that, it's mostly 9th and 10th gen Xeons I got for a bargain. I use most of my machines for playing with VMs and hosting game servers from 10-15 years ago that I used to play. I have a NAS, a Plex server, and one is a dedicated firewall sitting between my modem and my router. I ran a local AI for a while, but it was slow running on the CPU alone. It would probably be fine if I stuck a workstation GPU in there, but that's out of the budget.

I mean, it really is just a hobby.

Kind of the same, except I only really run one PC for everything (aside from a laptop, HTPC, etc., all pretty low-power stuff). I do all my gaming, work, video/photo editing, etc. on my 7960x.
 
Kind of the same, except I only really run one PC for everything (aside from a laptop, HTPC, etc., all pretty low-power stuff). I do all my gaming, work, video/photo editing, etc. on my 7960x.
I have 11 systems in my rack, plus my PC, laptop, and work laptop. So I have 14 computers.
 
I have 11 systems in my rack, plus my PC, laptop, and work laptop. So I have 14 computers.

Nice... My main system is a 7960x, 128 GB RAM, a 7900xtx, two custom cooling loops (one for CPU, one for GPU), and a boatload of storage... SSD & SCSI.
 
The Xeon W9-3595X is a 60-core chip. While I expect AMD has better single-core performance (in some scenarios) based on what they are sharing, this is an apples-to-oranges comparison since both AMD chips have more than 60 cores (64 and 96).

Or, in other news, buying 2 hard drives gives you 100% more storage than 1 hard drive! Buying 2 GPUs gives you (oh wait, that doesn't work in games... well, whatever) 100% more (compute) performance than 1!

To be fair though, Intel was fairly stingy and expensive per core until AMD came along with Threadripper and Epyc.
 
The Xeon W9-3595X is a 60-core chip. While I expect AMD has better single-core performance (in some scenarios) based on what they are sharing, this is an apples-to-oranges comparison since both AMD chips have more than 60 cores (64 and 96).

Or, in other news, buying 2 hard drives gives you 100% more storage than 1 hard drive! Buying 2 GPUs gives you (oh wait, that doesn't work in games... well, whatever) 100% more (compute) performance than 1!

To be fair though, Intel was fairly stingy and expensive per core until AMD came along with Threadripper and Epyc.
Except that the Xeon costs about 7k and the 7980x costs 5k… and the 7980x destroys it…

Cores, GHz, memory are all just stats… what matters is performance and cost - and Threadripper has Xeon licked on both.
 
This is an apples to oranges comparison...
They're making the comparison because Intel's chips are still a couple of thousand more expensive, whilst being substantially slower.

The comparison is actually fine and just makes Intel look really bad.
 
The Xeon W9-3595X is a 60-core chip. While I expect AMD has better single-core performance (in some scenarios) based on what they are sharing, this is an apples-to-oranges comparison since both AMD chips have more than 60 cores (64 and 96).

Or, in other news, buying 2 hard drives gives you 100% more storage than 1 hard drive! Buying 2 GPUs gives you (oh wait, that doesn't work in games... well, whatever) 100% more (compute) performance than 1!

To be fair though, Intel was fairly stingy and expensive per core until AMD came along with Threadripper and Epyc.

I have the W9 as my everyday driver. I too would have liked to see the 64-core comparison, which no doubt will be along shortly.
 
Man… that could easily be replaced with one 7980x :)
Well, not the laptops…
I have a 7980x and an Alienware 18…
I have almost 200 cores in my rack, but the real issue is that putting everything into a Threadripper system makes it a single point of failure. I'll put it this way: I like to "play" system admin in my homelab.
 
Except that the Xeon costs about 7k and the 7980x costs 5k… and the 7980x destroys it…

Cores, GHz, memory are all just stats… what matters is performance and cost - and Threadripper has Xeon licked on both.
They're making the comparison because Intel's chips are still a couple of thousand more expensive, whilst being substantially slower.

The comparison is actually fine and just makes Intel look really bad.
Sure, but if it were a 64-core Xeon part the percentages wouldn't be as high. It may not make a huge difference, but whenever performance figures come from the vendor I always ask whether techniques are being employed to improve the numbers, potentially artificially. With Nvidia it's frame gen; with CPUs it's usually core counts. I'm not saying that the Threadripper doesn't destroy Intel on price or performance here, just that there may have been better parts from Intel to compare to.

Edit: I would also push back on the idea that cores are "just stats". In the consumer/gamer space this argument works, but it doesn't in the workstation/enterprise space. The reasons are cost (per-core licenses), partitioning (allocation of cores to VMs or distributed/parallel processes), power/thermal density, and performance. Some of those factors favor more cores, some favor fewer.

On the performance angle, it's easy to take the argument one uses in the consumer space and apply it here: that a CPU's overall performance matters more than the thread count. That argument only really applies when assuming the other factors mentioned above don't matter, and then only really in two places: architecture (that is, performance of specific software on a chip) and when comparing older to newer generations of hardware. Within a generation on the same architecture, core count is the main differentiator for multi-threaded performance. Across architectures, single-threaded performance is often close enough that cores are still usually the differentiator, though not always. It's true that there's usually more to look at (specific AVX instruction support, off-chip specifications like networking performance, etc.), which supports the argument that cores are just stats, but it's very often the case that cores bring a capability of their own.
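A quick way to sanity-check a vendor's "up to X% faster" claim across mismatched core counts is to divide out the core-count ratio and see how much per-core gain is left over. This sketch is just illustrative arithmetic (it assumes perfectly linear scaling with cores, which real workloads never achieve); the 145% figure and the 96- vs 60-core counts are the ones quoted in the article above:

```python
# Back-of-envelope: how much of an "up to X% faster" claim is just core count?
def per_core_speedup(claimed_uplift_pct: float, cores_fast: int, cores_slow: int) -> float:
    """Normalize a vendor uplift claim by the core-count ratio.

    claimed_uplift_pct: vendor figure, e.g. 145 for "145% faster"
    cores_fast, cores_slow: core counts of the claimed-faster and slower chips
    """
    total_speedup = 1 + claimed_uplift_pct / 100   # "145% faster" means 2.45x overall
    core_ratio = cores_fast / cores_slow           # raw core advantage (96/60 = 1.6x)
    return total_speedup / core_ratio              # residual gain per core

# 9995WX (96 cores) vs Xeon W9-3595X (60 cores), AMD's 145% V-Ray claim:
print(round(per_core_speedup(145, 96, 60), 2))  # → 1.53
```

Even under that generous linear-scaling assumption, roughly a 1.53x per-core gain remains, so the headline number isn't only the core-count gap, though the gap does a lot of the lifting.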
 
I have almost 200 cores in my rack, but the real issue is that putting everything into a Threadripper system makes it a single point of failure. I'll put it this way: I like to "play" system admin in my homelab.
I remember reading about this problem several months ago over on The Register. With all the new chips coming out with high core counts, one has to balance density against the blast radius of a failure. My thought at the time was "you're probably buying extra cores for failover anyway, so high density doesn't really change that, or you augment your failover by using the cloud." For medium to large businesses those might be legit strategies, but I can see why for smaller operations that gets too expensive. Other than the price of buying several of these chips (and the rest of the rig that goes with them), is there anything I'm missing in terms of why one wouldn't go big (assuming they have the need for that many cores in the first place)?
 
Sure, but if it were a 64-core Xeon part the percentages wouldn't be as high. It may not make a huge difference, but whenever performance figures come from the vendor I always ask whether techniques are being employed to improve the numbers, potentially artificially. With Nvidia it's frame gen; with CPUs it's usually core counts. I'm not saying that the Threadripper doesn't destroy Intel on price or performance here, just that there may have been better parts from Intel to compare to.

Edit: I would also push back on the idea that cores are "just stats". In the consumer/gamer space this argument works, but it doesn't in the workstation/enterprise space. The reasons are cost (per-core licenses), partitioning (allocation of cores to VMs or distributed/parallel processes), power/thermal density, and performance. Some of those factors favor more cores, some favor fewer.

On the performance angle, it's easy to take the argument one uses in the consumer space and apply it here: that a CPU's overall performance matters more than the thread count. That argument only really applies when assuming the other factors mentioned above don't matter, and then only really in two places: architecture (that is, performance of specific software on a chip) and when comparing older to newer generations of hardware. Within a generation on the same architecture, core count is the main differentiator for multi-threaded performance. Across architectures, single-threaded performance is often close enough that cores are still usually the differentiator, though not always. It's true that there's usually more to look at (specific AVX instruction support, off-chip specifications like networking performance, etc.), which supports the argument that cores are just stats, but it's very often the case that cores bring a capability of their own.
But the Xeon ISN'T a 64 core part... that's my point... you can't compare something that doesn't exist... When you buy something for the enterprise world, you spec out what you need vs what you can spend... For 5k, you can get a Threadripper 7980x that gives xx performance... for MORE than that, you can buy a Xeon which gives far less performance - who on earth would sanction that purchase!?!??!

Now, if you wait a month or so, you can get a 9980x which will give even MORE performance. Dunno the cost but I'd HOPE that it wouldn't be far from the 7980x price of 5k (and maybe 7980x prices will come down?), but even if it's 8-10k, it's still a better deal than the Xeon.
Intel simply can't compete in the HEDT market - and it only leads in servers due to its former dominance as upgrading to a different platform in enterprise doesn't happen as often...
 
Unreal Engine compiling 65 percent faster means you can finally crash your game in half the time. Truly revolutionary.
 
So, with a bit of digging, found some info on the new Xeon 64 core CPU…


It’s still gonna cost about $7k… and while it might perform a bit better than the 60 core Xeon, it still won’t match the cheaper 7980x - let alone the 9980x…
 
So, with a bit of digging, found some info on the new Xeon 64 core CPU…


It’s still gonna cost about $7k… and while it might perform a bit better than the 60 core Xeon, it still won’t match the cheaper 7980x - let alone the 9980x…
It's going to have a list price of that, but they're going to sell them 10-20,000 at a time to hyperscalers for far less.
 
But the Xeon ISN'T a 64 core part... that's my point... you can't compare something that doesn't exist... When you buy something for the enterprise world, you spec out what you need vs what you can spend... For 5k, you can get a Threadripper 7980x that gives xx performance... for MORE than that, you can buy a Xeon which gives far less performance - who on earth would sanction that purchase!?!??!

Now, if you wait a month or so, you can get a 9980x which will give even MORE performance. Dunno the cost but I'd HOPE that it wouldn't be far from the 7980x price of 5k (and maybe 7980x prices will come down?), but even if it's 8-10k, it's still a better deal than the Xeon.
Intel simply can't compete in the HEDT market - and it only leads in servers due to its former dominance as upgrading to a different platform in enterprise doesn't happen as often...
So, with a bit of digging, found some info on the new Xeon 64 core CPU…


It’s still gonna cost about $7k… and while it might perform a bit better than the 60 core Xeon, it still won’t match the cheaper 7980x - let alone the 9980x…
If there isn't a 64-core Xeon, then yeah, fair point about what can be compared; I assumed Intel was making them. I vaguely remember them announcing such parts, but they have so many SKUs that it could have been, like the one you found, for the server market and not the workstation market, so it may not be a fair comparison anyway.
 