Here We Go Again: 40-Thread Xeon PC for less than a Broadwell-E Core i7

Steve

Staff member

Following up on our popular 32-thread Xeon PC feature, we've been on the hunt for affordable Xeon processors based on the Haswell-EP or perhaps even Broadwell-EP architectures -- it certainly seemed like wishful thinking that we would come across a relatively inexpensive Broadwell-EP Xeon.

Our search put us on the trail of Intel's Xeon E5-2630 v4, a 10-core Broadwell-EP part that runs at a base clock of 2.2GHz but can boost up to 3.1GHz depending on the workload.

Typically, you'd spend something like $700 for this processor -- substantially more than the $70 we paid for each of our E5-2670 v1 processors -- however, it's possible to purchase the E5-2630 v4 for as little as $200 on eBay. The only catch is that they are engineering samples (ES), not retail chips.

Read the complete feature.

 
Seriously! It has been only a month since I finished building the 32-core monster version :)

I built one too, 3 months back, but a single-socket version -- although the advantage in this case is that we have the retail/server release version of the processor for $70, which should be MUCH more reliable than the ES versions!

So I am happy with the build and in fact have purchased a second processor as a backup for the future.
 
I like it guys! Keep thinking outside the box. Curious to see how one of these CPUs would perform in newer DX12/Vulkan titles, which should be far more multi-core optimized. Digital Foundry's Titan X Pascal review clearly shows the 6700K is a bottleneck at 1080p for high-refresh gaming. Perhaps you can try some newer DX12 titles (including Deus Ex, ReCore, Gears of War 4, and Battlefield 1) with ASUS' new 1080p/180Hz monitor to see if an average 180fps is even possible.
 
Not sure that many games will utilize more than a few cores... CPU bottlenecks tend to be about clock speed, so this build might not fare so well...

But this type of machine would be pretty wasteful if you simply want a gaming machine...
 
Nice, but I am more looking forward to dual-CPU AMD Zen motherboard setups. I mean 64 cores/128 threads.

Crazy cool.

Good effort though! Nice set up.
 
lol... An AMD CPU can have 10 times the cores, but will deliver 1/10 the performance... not to mention insane power draw...
 
I like it guys! Keep thinking outside the box. Curious to see how one of these CPUs would perform in newer DX12/Vulkan titles, which should be far more multi-core optimized. Digital Foundry's Titan X Pascal review clearly shows the 6700K is a bottleneck at 1080p for high-refresh gaming. Perhaps you can try some newer DX12 titles (including Deus Ex, ReCore, Gears of War 4, and Battlefield 1) with ASUS' new 1080p/180Hz monitor to see if an average 180fps is even possible.
As great as a dual-CPU server-based machine performs in most benchmarks, they aren't the best at running games.
To save money on a gaming rig and revive an old server, though, is not a bad idea. People will take older mainboards that support dual CPUs and try to make bargain-built 4-16 core gaming machines.
 
On the AMD side, could you at least run a 9590 overclocked to 5GHz, please?
I think you can estimate where those values will be. You should be glad an AMD system was added to an Intel topic in the first place. This is not an Intel vs. AMD review. It is to give an indication as to where one might stand with the 40-thread beast.
 
Seriously! It has been only a month since I finished building the 32-core monster version :)

:) LOOOL! Same here! Built the dual 2670 rig in April...
But I'm quite happy with it, and I'd rather enjoy using it than spend more time finding parts, dismantling a running machine, and building/installing software again.
This tends to get too time-consuming, and money-consuming too... It's not *cheap*, though it looks like it when compared with retail CPUs.

Maybe in a few years, when the E5-2690 v3 can be found on eBay for 70 bucks ;-)
 
I need to build a computer like this just for my video encoding. It takes all night on my 5820K. I can only imagine how long it will take for HEVC!
 
How does everyone think this would fare as a streaming + gaming rig? The extra CPU power should more than handle the encoding while the graphics card (1080) and remaining CPU cores handle the gaming side. Thoughts?
 
A dual-CPU system is far superior to a single-CPU one for almost everything. When you need the processing power for serious computing, it is there. Everything will move to multicore CPUs, and as of four years ago most of the software and programs I use are geared toward taking advantage of more cores. Plus, a dual-CPU system is far more future-proof than a single-CPU setup.
 
Would an ASUS Z10PE-D8 WS motherboard work for this build?

Yes, it should work, and it would be a big jump from any normal single-CPU system. Imagine seeing 40 cores on the Windows CPU gadget. That is just spectacular. Imagine only using 2% on most things unless you load a CPU-intensive program that takes advantage of all or most of those cores. That would be awesome.
 
Thank you for a great article.

I just put together a brand-new, full-price single E5-2620 v4 on a just-released Supermicro micro-ATX board and it rocks!
Now I see that I can get TWO E5-2630 v4s for the same price! I am eager!
But tell me more:
Do I get the ES from Japan, China, or Hong Kong, or are they all equally good?
Why do I not get a pair of 2650s? They're only $240 or so?
And what about a pair of 2680s? They're only $340 or so?
And why are there thousands of these ES chips out there? How did this come about?
What is the catch? I recall years ago I could buy top-notch IBM SCSI hard drives cheap or expensive.
The catch was the cheap ones were identical but had no warranty, and SCSI back then often had problems.
I would also appreciate comments from anyone buying the faster ES CPUs.
Roe5685
 
The premise of this article seems to be 'building a latest-gen server results in greater performance and energy efficiency than four generations ago'. Well, no kidding...

So I have a few things I'd like to point out. Firstly, you're using standard memory in a server platform. That might make sense if all you're trying to do is run software that happens to love multithreaded operation but where the accuracy of output doesn't matter too much. Like in a game. But for a machine crunching numbers that have to be accurate, you need ECC memory. RDIMMs are also preferable, but ECC is the minimum standard here.

So you're being a bit disingenuous with your comparison between the v1 vs. v4 architecture costs, when DDR3 ECC RDIMMs for the v1 can be picked up for under $15 USD per 8GB, while DDR4 RDIMMs are what, five times that?

While I'm also glad your ES chips are doing well, this is nothing but a home-tinkering solution - you would never run any business processes on a machine built like that. You could, however, on the v1 architecture - it may be old, but it will still be rock-solid reliable, as well as costing a fraction of the dodgy v4 build.

So for those reading this who have built or are thinking of building a v1 system, what they've done here does not diminish such systems in the least - the v1 E5 Xeon system is still far and away better value for money in $/clock or $/work unit, with the added bonus that it will actually be reliable - I wouldn't run anything I gave a $#@& about on ES chips and non-ECC memory...
 
The premise actually was building a "monster" PC for NON-server users that would compare to the 5960X and the newer 6xxx Extreme processors using older server parts. Instead of paying $1,000 for the 5960X, you pay substantially less for the server parts.

This is why you don't see ECC RAM, as you wouldn't need it in a typical 5960X build...
 