Intel Cascade Lake-X HEDT vs. AMD Ryzen: Fight!

Just a couple of years ago, Intel was able to sell the 9980XE for $2,000 because there was no competition... Now AMD does the same with the 3970X... What will be great is when Ryzen 3 goes up against Intel's 7nm... We should get some competitive prices and outstanding performance.
 
For a second there you had me; I actually thought you might compare apples with apples, but as usual I was mistaken.
 
OK, I don't plan to defend Intel's pathetic product segmentation, but X299 is not a bad deal if you need lanes but not cores. Get the 10-core model plus a decent board and you can still plug in 3 GPUs for a full-bananas rendering machine.

Nobody in their right mind does rendering on CPU only, except for niche tools that don't support something like CUDA. Blender is a perfect example: especially with advanced lighting, the CPU is just a lame option for rendering anything. Pick the CUDA renderer and Blender flies! I'm certain the new 3xxx TR won't solve the problems of rendering machines. CPUs are inherently inferior to GPUs in rendering speed, by a laaarge factor.
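For what it's worth, flipping Cycles over to CUDA takes only a few lines in Blender's Python console. A rough sketch from memory, assuming a Blender 2.8x-era API (the preference path has moved around between versions):

```python
import bpy

# Point the Cycles add-on at CUDA (Blender 2.8x preference path;
# may differ slightly in other versions).
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # refresh the detected device list

# Enable every detected CUDA device (e.g. both 1080 Tis).
for device in prefs.devices:
    device.use = (device.type == 'CUDA')

# Tell the current scene's Cycles renderer to use the GPU.
bpy.context.scene.cycles.device = 'GPU'
```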

In DAZ I once ran an experiment with a scene that rendered for nearly 31 hours on two 1080 Tis alone. Then I re-rendered everything with an overclocked TR 1920X added to the mix. The render finished 58 (!) seconds sooner. That's the hopelessness of CPU rendering, ladies and gentlemen: you burn an extra ~300 W for a day and a half to shave off 58 seconds.
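The back-of-the-envelope math, taking the numbers above at face value (assuming the CPU really added ~300 W for the whole ~31-hour run):

```python
# Rough cost/benefit of adding the CPU to a GPU render
# (illustrative numbers from the anecdote above).
render_hours = 31        # approximate total render time
cpu_extra_watts = 300    # assumed extra draw from the overclocked 1920X
time_saved_s = 58        # how much sooner the render finished

extra_kwh = cpu_extra_watts * render_hours / 1000
print(f"Extra energy burned: {extra_kwh:.1f} kWh")                    # ~9.3 kWh
print(f"Time saved: {time_saved_s / 3600:.3f} h")                     # ~0.016 h
print(f"kWh per hour saved: {extra_kwh / (time_saved_s / 3600):.0f}") # ~577
```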

Recommending 2nd-gen TR over the 109xx is ludicrous. Only if you need every single lane of TR does it pull ahead of X299. Memory latency is horrendous on 1xxx and 2xxx TR: not good for gaming, and not good for rendering or visualization that depends on memory speed, e.g. particle physics.


And if the author(s) of this comparison think that AMD will discount the 3xxx-series TR any time soon, then good luck. AMD the corporation is now going full Intel: they'll milk and bloody squeeze everybody who wants one. I can see many retailers in Europe just going bananas with TR 3xxx pricing; even my most pessimistic estimates were smashed to bits. Couldn't care less... I'm getting an Acer ConceptD 7 for another RTX 2080 rendering machine.
 
The 3950X basically matches Intel's absolute best effort in HEDT for $200 less, pulling 70 fewer watts. Ouch.

Intel are going to have to come out fighting even harder on price, because if they don't, they are going to see their market share rapidly evaporate. This isn't just a loss; it's a heavy loss on price/performance.

AMD have transformed the HEDT landscape, more than halving the cost of performance at this level compared to previously available Intel generations.

Credit where credit is due.
 
OK, I don't plan to defend Intel's pathetic product segmentation, but X299 is not a bad deal if you need lanes but not cores. Get the 10-core model plus a decent board and you can still plug in 3 GPUs for a full-bananas rendering machine.

Wouldn't it be more sensible to get an eight-core TR1 1900X for $149 in that case?
You get a lot more PCIe lanes, so you can connect three GPUs at full speed (x16) and still have enough left over for drives.
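A rough lane budget for that build (illustrative; the 1900X exposes 64 lanes, of which roughly 60 are usable for slots and M.2 on most X399 boards):

```python
# Hypothetical PCIe lane budget for a 1900X rendering box.
total_lanes = 60  # usable lanes on most X399 boards (64 minus the chipset link)

devices = {
    "GPU 1 (x16)": 16,
    "GPU 2 (x16)": 16,
    "GPU 3 (x16)": 16,
    "NVMe SSD 1 (x4)": 4,
    "NVMe SSD 2 (x4)": 4,
}

used = sum(devices.values())
print(f"Used: {used} / {total_lanes} lanes, {total_lanes - used} spare")
# Used: 56 / 60 lanes, 4 spare
```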
 
OK, I don't plan to defend Intel's pathetic product segmentation, but X299 is not a bad deal if you need lanes but not cores. Get the 10-core model plus a decent board and you can still plug in 3 GPUs for a full-bananas rendering machine.

Nobody in their right mind does rendering on CPU only, except for niche tools that don't support something like CUDA. Blender is a perfect example: especially with advanced lighting, the CPU is just a lame option for rendering anything. Pick the CUDA renderer and Blender flies! I'm certain the new 3xxx TR won't solve the problems of rendering machines. CPUs are inherently inferior to GPUs in rendering speed, by a laaarge factor.

In DAZ I once ran an experiment with a scene that rendered for nearly 31 hours on two 1080 Tis alone. Then I re-rendered everything with an overclocked TR 1920X added to the mix. The render finished 58 (!) seconds sooner. That's the hopelessness of CPU rendering, ladies and gentlemen: you burn an extra ~300 W for a day and a half to shave off 58 seconds.

Recommending 2nd-gen TR over the 109xx is ludicrous. Only if you need every single lane of TR does it pull ahead of X299. Memory latency is horrendous on 1xxx and 2xxx TR: not good for gaming, and not good for rendering or visualization that depends on memory speed, e.g. particle physics.


And if the author(s) of this comparison think that AMD will discount the 3xxx-series TR any time soon, then good luck. AMD the corporation is now going full Intel: they'll milk and bloody squeeze everybody who wants one. I can see many retailers in Europe just going bananas with TR 3xxx pricing; even my most pessimistic estimates were smashed to bits. Couldn't care less... I'm getting an Acer ConceptD 7 for another RTX 2080 rendering machine.
These are benchmarks, not real-life how-tos; they render on CPUs to show how fast or slow they are, NOT to show how you should use them.

Plus, AMD's PCIe 4.0 lets you add WAY more GPUs and M.2 SSDs than Intel can dream of.

Good *****ing job, AMD!!! So good to see some real competition in every area now.
 
The 3950X basically matches Intel's absolute best effort in HEDT for $200 less, pulling 70 fewer watts. Ouch.
Credit where credit is due.
HEDT in Intel terms is Xeon, not Core i9. So you may be right about the cost (AMD massively undercuts Intel on price), but in terms of raw performance we'll never know, because sites like this refuse to compare like with like.
 
HEDT in Intel terms is Xeon, not Core i9. So you may be right about the cost (AMD massively undercuts Intel on price), but in terms of raw performance we'll never know, because sites like this refuse to compare like with like.
Other sites do (e.g. Anandtech), but it ain't pretty for the Xeon.
Also, the Xeon for Workstations line is usually sold under the server category, not desktop.
 
HEDT in Intel terms is Xeon, not Core i9. So you may be right about the cost (AMD massively undercuts Intel on price), but in terms of raw performance we'll never know, because sites like this refuse to compare like with like.
Other sites do (e.g. Anandtech), but it ain't pretty for the Xeon.
Also, the Xeon for Workstations line is usually sold under the server category, not desktop.
 
Any chance of a link? I haven't seen a single review that compares 32 cores/64 threads with 32 cores/64 threads.
Afaik, the max you can get is the 28-core Xeon W-3175X. Head over to Anandtech to see its results vs. Threadripper.

There are CPUs like the Xeon Platinum 9221 with 32 cores, but those are really two CPUs glued together and you cannot buy them from distributors. The price is unknown, but you can always ask Intel.

It simply comes down to only being able to compare what is available.
 
OK, I don't plan to defend Intel's pathetic product segmentation, but X299 is not a bad deal if you need lanes but not cores. Get the 10-core model plus a decent board and you can still plug in 3 GPUs for a full-bananas rendering machine.

Nobody in their right mind does rendering on CPU only, except for niche tools that don't support something like CUDA. Blender is a perfect example: especially with advanced lighting, the CPU is just a lame option for rendering anything. Pick the CUDA renderer and Blender flies! I'm certain the new 3xxx TR won't solve the problems of rendering machines. CPUs are inherently inferior to GPUs in rendering speed, by a laaarge factor.

CPU rendering is more accurate, so for those using CAD software, or where quality is a factor, it is a decent choice. If you are adding advanced filters, effects, etc., these may also favor the CPU. Here is a breakdown: https://td-u.com/cpu-vs-gpu-renderer-which-is-better/

In addition, if you are looking for the most PCIe lanes, 1st-gen Threadripper has more than these new Intel processors (64 vs. 48). And since the new Threadrippers are PCIe 4.0, they have double the bandwidth per lane; with 64 lanes vs. 48 on top of that, you are looking at more than double the bandwidth overall.
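Rounding per-lane throughput to ~1 GB/s for PCIe 3.0 and ~2 GB/s for PCIe 4.0, the aggregate numbers work out like this (a quick sketch, not exact spec figures):

```python
# Approximate aggregate PCIe bandwidth: 3rd-gen TR vs. Cascade Lake-X.
# Per-lane throughput is rounded (PCIe 3.0 ~ 0.985 GB/s, 4.0 ~ 1.969 GB/s).
GBPS_PER_LANE = {"3.0": 1.0, "4.0": 2.0}

intel_clx = 48 * GBPS_PER_LANE["3.0"]  # 48 lanes of PCIe 3.0
tr_3000   = 64 * GBPS_PER_LANE["4.0"]  # 64 lanes of PCIe 4.0

print(f"Cascade Lake-X: ~{intel_clx:.0f} GB/s")  # ~48 GB/s
print(f"TR 3000:        ~{tr_3000:.0f} GB/s")    # ~128 GB/s
print(f"Ratio: {tr_3000 / intel_clx:.2f}x")      # 2.67x, i.e. more than double
```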


Recommending 2nd-gen TR over the 109xx is ludicrous. Only if you need every single lane of TR does it pull ahead of X299. Memory latency is horrendous on 1xxx and 2xxx TR: not good for gaming, and not good for rendering or visualization that depends on memory speed, e.g. particle physics.

" not good for rendering or visualization "

Incorrect

For rendering

https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/8

or this https://www.guru3d.com/articles-pages/amd-ryzen-threadripper-2990wx-review,1.html

In fact, go to any review: 1st- and 2nd-gen Threadripper CPUs are clearly excellent at rendering.


For visualization (specifically particle physics)

https://www.anandtech.com/show/13124/the-amd-threadripper-2990wx-and-2950x-review/7

The Intel CPU gets about half the performance of Threadripper in particle physics. Clearly the 1st- and 2nd-gen Threadripper CPUs do very well there.

I don't know where you got the idea that particle physics and rendering are latency-sensitive, but it contradicts every benchmark on the internet. In fact, it's almost like you picked the benchmarks Intel loses the most in.

And if the author(s) of this comparison think that AMD will discount the 3xxx-series TR any time soon, then good luck. AMD the corporation is now going full Intel: they'll milk and bloody squeeze everybody who wants one. I can see many retailers in Europe just going bananas with TR 3xxx pricing; even my most pessimistic estimates were smashed to bits. Couldn't care less... I'm getting an Acer ConceptD 7 for another RTX 2080 rendering machine.

Given that AMD heavily discounted 1st- and 2nd-gen Threadripper shortly after each next-gen launch, I would say the author has a reasonable argument; they are basing it on something that has happened twice.
 
OK, I don't plan to defend Intel's pathetic product segmentation, but X299 is not a bad deal if you need lanes but not cores. Get the 10-core model plus a decent board and you can still plug in 3 GPUs for a full-bananas rendering machine.

Nobody in their right mind does rendering on CPU only, except for niche tools that don't support something like CUDA. Blender is a perfect example: especially with advanced lighting, the CPU is just a lame option for rendering anything. Pick the CUDA renderer and Blender flies! I'm certain the new 3xxx TR won't solve the problems of rendering machines. CPUs are inherently inferior to GPUs in rendering speed, by a laaarge factor.

In DAZ I once ran an experiment with a scene that rendered for nearly 31 hours on two 1080 Tis alone. Then I re-rendered everything with an overclocked TR 1920X added to the mix. The render finished 58 (!) seconds sooner. That's the hopelessness of CPU rendering, ladies and gentlemen: you burn an extra ~300 W for a day and a half to shave off 58 seconds.

Recommending 2nd-gen TR over the 109xx is ludicrous. Only if you need every single lane of TR does it pull ahead of X299. Memory latency is horrendous on 1xxx and 2xxx TR: not good for gaming, and not good for rendering or visualization that depends on memory speed, e.g. particle physics.


And if the author(s) of this comparison think that AMD will discount the 3xxx-series TR any time soon, then good luck. AMD the corporation is now going full Intel: they'll milk and bloody squeeze everybody who wants one. I can see many retailers in Europe just going bananas with TR 3xxx pricing; even my most pessimistic estimates were smashed to bits. Couldn't care less... I'm getting an Acer ConceptD 7 for another RTX 2080 rendering machine.

The 109XX seems reasonable if you need the PCIe lanes but feel 3rd-gen TR is too expensive. I disagree that 2nd-gen TR is a ludicrous recommendation, though; it's very cost-effective depending on the workload.
 