AMD Ryzen Threadripper 7980X and 7970X Review

As expected, incredibly impressive performance, but these are very niche products. You really need to know wtf you are doing if you plan on buying products this expensive. :)
 
This. Where are the render tests in things like Premiere, Resolve, AE, etc.?
Interesting article, but I also would have liked to see some code-compile benchmarks (Chromium or the Linux kernel) and benchmarks for something like Unreal Engine building (source, shader, or lighting builds). I am sure the game benchmarks were easy to do, but one would have been enough to show these CPUs are not meant for gaming.

I do like the hands-on experience notes. That was super useful.
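For what it's worth, a compile benchmark like the one asked for above is easy to script. This is a minimal sketch, not the reviewer's methodology; the build command and the presence of a configured source tree (e.g. a Linux kernel checkout) are assumptions:

```python
# Minimal sketch of timing a parallel code-compile benchmark.
# Assumes a configured source tree in the current directory; the
# actual "make" invocation below is a placeholder, not a tested setup.
import os
import subprocess
import time

def time_build(cmd: list[str]) -> float:
    """Run a build command and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises if the build fails
    return time.perf_counter() - start

# Example (hypothetical): saturate all cores/threads, as a
# Threadripper review would want to:
# elapsed = time_build(["make", f"-j{os.cpu_count()}"])
# print(f"build took {elapsed:.1f} s")
```

On a 64-core part the interesting comparison is how close the `-j` scaling gets to linear before memory bandwidth becomes the limit.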
 
I had two TR systems made for a friend who does 3D rendering. She wanted a third system, but I came to the conclusion that a 7900X with an RTX 4090 was the better and more cost-effective option, as most renderers (like Redshift) are GPU-focused and the CPU only acts as support for the GPU. The only advantage of CPU rendering is with REALLY big datasets, where you can use your 256 GB to 2 TB of RAM; on a GPU you're limited to 24 GB, or 48 to 80 GB on a Quadro-style card. So the niche HEDT is meant for is becoming very narrow, as servers and render farms will just go with EPYC CPUs.
 
Thank you for including the 3970X data! Mine is still chugging away happily on a Gigabyte TRX40 board, which seems to be the most mature of the TR platforms. Never upgraded to the 5000 series. It stung when we found out that TRX40 was a single-generation platform.
 
The question after all these years is: does Adobe get paid not to allow these processors to ramp up?

Or is this test not correct? Could you get more productivity by running two or more Adobe instances at once?

I.e., if you can run Adobe twice at the same time and each instance gets 90% speed, that's an 80% increase overall.
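The scaling guess above can be sketched out as arithmetic (the 90%-per-instance figure is the commenter's assumption, not a measured number):

```python
# Sketch of the scaling claim: two simultaneous instances, each running
# at 90% of single-instance speed, give 2 * 0.9 = 1.8x total throughput,
# i.e. an 80% gain over one instance.
def combined_speedup(instances: int, per_instance_efficiency: float) -> float:
    """Total throughput relative to one full-speed instance."""
    return instances * per_instance_efficiency

gain = combined_speedup(2, 0.90) - 1.0
print(f"overall gain: {gain:.0%}")  # prints "overall gain: 80%"
```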

With no M3 chip here, how can Apple fans claim it's the fastest productivity chip? Even if it's not the same test bed, the results have meaning.
 
"these are super powerful chips that are obviously not meant for the mainstream market."

So, what's the point of using them in gaming benchmarks then??
Well, at the very least it tells gamers you are not going to get anything for your money here; you are better off buying a 7800X3D, or even a 5800X3D for that matter. Hopefully they already knew that, but hey, some people have more money than sense.
 
Pancake-sized CPU!!!

If the HEDT market is in for 64c/128t, what is next for servers?
Servers already have 128c/256t, so that's your answer. Not that I understand HEDT as a market myself, particularly from the perspective of how they make money from it, considering the majority of people who actually need the performance would be better off with a server. I guess sales through OEMs for 3D-designer workstations etc. cover the cost of bringing it to market. Not sure why they still market the whole "gamer" spiel with it: chips like the 7950X mean you can get a lot of tasks done and game too in one box, whereas with these you are carrying a lot of luggage, and would be better off with two systems at that rate, unless you somehow have the budget for such a HEDT system but no space for another box.
 
Techspot is putting its credibility on the line by testing these CPUs in gaming.
Think of how many people would have said the same thing if they had not included game tests.
It's happened before.
"This is an enthusiast site, so why not test a few games" kind of bullshit.

By doing a few games, people will still criticize TS, but it will be for what they did do, instead of what they didn't do.
 
What! No benchmarks of scientific programs like Matlab, Mathematica, Comsol, no fluid simulations, compiling, cryptography, etc.? The only reason I would ask my work for one of these would be heavy-duty number crunching, especially fp64.
 
Servers already have 128c/256t, so that's your answer. Not that I understand HEDT as a market myself, particularly from the perspective of how they make money from it, considering the majority of people who actually need the performance would be better off with a server. I guess sales through OEMs for 3D-designer workstations etc. cover the cost of bringing it to market. Not sure why they still market the whole "gamer" spiel with it: chips like the 7950X mean you can get a lot of tasks done and game too in one box, whereas with these you are carrying a lot of luggage, and would be better off with two systems at that rate, unless you somehow have the budget for such a HEDT system but no space for another box.
That is coming from a two-socket board, not a single socket. AMD's page does not mention such a high core count on EPYC:
AMD EPYC gen 2
AMD latest gen 4

96 cores on a single socket is the max announced for now.
 
These are tanks, workhorses, monsters of CPUs. AMD is dominating the HEDT / server / enterprise market with these things.
 
What is a valid use case for one of these CPUs, especially one where you'd want to pay 10x the cost of lower-tier processors to get 2x the performance (sometimes)?

How would the graphics programs, like Cinebench and Blender, compare when using a "standard" high-end CPU with CUDA processing done on the GPU?

Also, what are registered DIMMs? Do they offer an advantage over the latest standard RAM modules? And do they justify their price?

Obviously I'm not in the market to buy such a system but I was curious.
 
That segment is no longer HEDT; it is workstation and prosumer grade. You buy these for specific software, for compute tasks or multithreaded rendering.
 
I had two TR systems made for a friend who does 3D rendering. She wanted a third system, but I came to the conclusion that a 7900X with an RTX 4090 was the better and more cost-effective option, as most renderers (like Redshift) are GPU-focused and the CPU only acts as support for the GPU. The only advantage of CPU rendering is with REALLY big datasets, where you can use your 256 GB to 2 TB of RAM; on a GPU you're limited to 24 GB, or 48 to 80 GB on a Quadro-style card. So the niche HEDT is meant for is becoming very narrow, as servers and render farms will just go with EPYC CPUs.
It really depends on what you are doing and what software you use. Any kind of data management is done at the CPU level. You are saying that because you are targeting graphics rendering in your use case; however, a CPU is required for many more use cases.
 
What! No benchmarks of scientific programs like Matlab, Mathematica, Comsol, no fluid simulations, compiling, cryptography, etc.? The only reason I would ask my work for one of these would be heavy-duty number crunching, especially fp64.
Same. Without any scientific/CAD results, this review doesn't say much besides that AMD's 96 cores consume the same as a 14900K in CB MT.
 
You would think that, for the price you're paying for these CPUs, mobos, and DDR5 RDIMMs, the bloody support would be **** hot, regardless of them not selling a huge number of units.
 
What! No benchmarks of scientific programs like Matlab, Mathematica, Comsol, no fluid simulations, compiling, cryptography, etc.? The only reason I would ask my work for one of these would be heavy-duty number crunching, especially fp64.
Serious question because I'm not actually sure: wouldn't a GPU be better suited to that task anyway?
 
I'm sorry, but I disagree: pro-grade hardware must come with pro-grade support from AMD, or they are just half-assing their effort. That being said, I will be getting the 7980X at the beginning of the year, but it seems like Gigabyte would be a better choice than MSI.
 
What's the point of a product that is 2.5 times faster but costs maybe 8 times more? Who will buy this?
 