Intel's second generation Xeon Phi co-processor slated for Q1 2016 launch

Shawn Knight


Intel revealed at the annual SC conference that its second generation Xeon Phi co-processor, codenamed Knights Landing, is nearly ready for general availability.

The chipmaker said it has already seeded several pre-production systems to clients through its early ship program. Cray, for example, has a system up and running to handle multiple customer applications in preparation for supercomputer deployments at Los Alamos and NERSC.

As Intel revealed last year, Knights Landing packs a number of innovative features that afford unmatched performance. Systems will be capable of delivering single-precision performance of more than eight teraflops, with double-precision performance rated at more than three teraflops. The latter is nearly three times as fast as its predecessor, Knights Corner. As PC World points out, double-precision performance is especially important for supercomputing because of the higher accuracy it provides in floating-point calculations.

The second generation Xeon Phi co-processor also packs 16GB of stacked memory that’s said to offer five times more bandwidth than DDR4. It’s also five times more power efficient and three times denser than the GDDR5 memory found on today’s graphics cards.

The platform will rely on a new interconnect called Omni-Path that improves scalability, reliability and performance. At this stage, however, Intel hasn’t released a ton of information regarding the proprietary interconnect.

More than 50 providers are expected to have Xeon Phi product family-based systems ready for the general availability launch, currently slated for Q1 2016.


 
Wow! I'm impressed, I'm dumbstruck, I'm in awe, I'm flabbergasted... Or at least I assume I would be if I knew what the heck they were talking about here.
 
If memory serves, I think Techspot reported on this a few years ago. Is this a chip for "super-servers"? ...and yes, definitely impressive!
 
So the article basically said DDR4 is better than GDDR5? Also, what makes this superior compared to, say, Nvidia's 256+ core competitors?
 
The top two commenters here are always quick to advertise their ignorance of any subject.

The product, if you bothered to read it at all, is mainly for supercomputers, and secondarily, for enterprise-class servers.
 
The product, if you bothered to read it at all, is mainly for supercomputers, and secondarily, for enterprise-class servers.
Thanks for the simplistic answer, which anyone might have assumed. And by the way, that does nothing to answer my unspoken query. If I actually expected an answer, I would have used a question mark and explained myself at least a little more.
 
I don't understand the purpose of the co-processor. Therefore, I don't understand where or when something like this would even be used.

So these Phi cards as well as NVidia K series (K20, K40, K80) are all designed for GPGPU uses.

A general-purpose GPU (GPGPU) is a graphics processing unit (GPU) that performs non-specialized calculations that would typically be conducted by the CPU (central processing unit). Ordinarily, the GPU is dedicated to graphics rendering.

GPGPUs are used for tasks that were formerly the domain of high-power CPUs, such as physics calculations, encryption/decryption, scientific computations and the mining of cryptocurrencies such as Bitcoin. Because graphics cards are constructed for massive parallelism, they can dwarf the calculation rate of even the most powerful CPUs for many parallel processing tasks. The same shader cores that allow multiple pixels to be rendered simultaneously can similarly process multiple streams of data at the same time. Although a shader core is not nearly as complex as a CPU core, a high-end GPU may have thousands of shader cores; in contrast, a multicore CPU might have eight or twelve cores.
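The data-parallel model described above can be sketched in plain Python (a hypothetical illustration; real GPGPU code would be written in CUDA or OpenCL): a small "kernel" function computes exactly one output element, and the "launch" applies it to every index. On a GPU, thousands of shader cores would run those per-element invocations simultaneously.

```python
# Minimal sketch of the GPGPU "kernel" model, assuming a SAXPY-style
# per-element operation (y[i] = a * x[i] + y[i]).

def saxpy_kernel(i, a, x, y):
    # On a real GPU, one hardware thread computes one output element.
    return a * x[i] + y[i]

def saxpy(a, x, y):
    # The "launch": invoke the kernel once per element. Here the loop
    # runs serially; a GPU would run thousands of these in parallel.
    return [saxpy_kernel(i, a, x, y) for i in range(len(x))]

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0]
    y = [10.0, 20.0, 30.0]
    print(saxpy(2.0, x, y))  # -> [12.0, 24.0, 36.0]
```

The key property is that no element's result depends on any other element's, which is exactly what lets the work spread across thousands of simple cores.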

There has been an increased focus on GPGPUs since DirectX 10 included unified shaders in its shader core specifications for Windows Vista. Higher-level languages are being developed all the time to ease programming for computations on the GPU. Both AMD/ATI and Nvidia have approaches to GPGPU with their own APIs (OpenCL and CUDA, respectively).

The Xeon Phi is Intel's attempt to compete in the GPGPU space. They know there are loads of $$$ left on the table that they aren't getting any of.
 
I don't understand the purpose of the co-processor. Therefore, I don't understand where or when something like this would even be used.
To (over) simplify axiomatic13's explanation:
CPU is more for relatively low number of complex serial workloads
GPU is for relatively high number of simple parallel workloads
WRT shader cores: while Nvidia's Tesla and AMD's FirePro leverage the traditional graphics pipeline (since those GPUs have to do double duty in consumer and workstation cards), Intel's Xeon Phi is basically a collection of simple (Atom-based) x86 cores.
There is a move away from co-processors using the graphics pipeline in general. Nvidia's GK210 (never used in a consumer gaming graphics card) with its enlarged cache and register file, and the PEZY-SC (which lacks the graphics pipeline entirely), probably point to a future where co-processors are completely divorced from consumer graphics - which makes sense, because math co-processing has no use for elements of the traditional graphics pipeline, notably rasterization.
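The serial-vs-parallel split in the simplification above can be made concrete with a hypothetical sketch (illustrative names, not from the article): a workload with a dependency chain, where each step needs the previous result, is CPU territory, while an independent per-element workload is GPU territory.

```python
# Serial workload (CPU territory): each iteration depends on the
# previous result, so it cannot be split across many simple cores.
def compound_interest(principal, rate, years):
    balance = principal
    for _ in range(years):          # step N needs the result of step N-1
        balance *= (1.0 + rate)
    return balance

# Parallel workload (GPU territory): every element is independent, so
# thousands of simple cores could each handle one element.
def scale_all(values, factor):
    return [v * factor for v in values]   # no cross-element dependency

if __name__ == "__main__":
    print(round(compound_interest(100.0, 0.05, 2), 2))  # -> 110.25
    print(scale_all([1, 2, 3], 10))                     # -> [10, 20, 30]
```

Co-processors like the Xeon Phi or Tesla are aimed squarely at the second kind of workload, leaving the first to the host CPU.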
 
Man I hope Intel someday makes a go at it in the workstation and gaming graphics card market. I feel like they could definitely put a dent in Nvidia's profits if they tried (other than their integrated graphics).
 