Intel's second generation Xeon Phi co-processor slated for Q1 2016 launch

By Shawn Knight
Nov 16, 2015
  1. Intel revealed at the annual SC conference that its second generation Xeon Phi co-processor, codenamed Knights Landing, is nearly ready for general availability.

    The chipmaker said it has already seeded several pre-production systems to clients through its early ship program. Cray, for example, has a system up and running to handle multiple customer applications in preparation for supercomputer deployments at Los Alamos and NERSC.

    As Intel revealed last year, Knights Landing packs a number of innovative features that afford unmatched performance. Systems will be capable of delivering single-precision performance of more than eight teraflops, with double-precision performance rated at more than three teraflops. The latter is nearly three times as fast as its predecessor, Knights Corner, and, as PC World points out, double-precision is especially important in supercomputing because of the greater accuracy of its floating-point calculations.

    The second generation Xeon Phi co-processor also packs 16GB of stacked memory that’s said to offer five times more bandwidth than DDR4. It’s also five times more power efficient and three times denser than the GDDR5 memory found on today’s graphics cards.

    The platform will rely on a new interconnect called Omni-Path that improves scalability, reliability and performance. At this stage, however, Intel hasn’t released a ton of information regarding the proprietary interconnect.

    More than 50 providers are expected to have Xeon Phi product family-based systems ready for the general availability launch, currently slated for Q1 2016.


  2. cliffordcooley

    cliffordcooley TS Guardian Fighter Posts: 8,430   +2,822

    I don't understand the purpose of the co-processor. Therefore I don't understand where or when something like this would even be used.
    axiomatic13 likes this.
  3. Skidmarksdeluxe

    Skidmarksdeluxe TS Evangelist Posts: 6,349   +1,945

    Wow! I'm impressed, I'm dumbstruck, I'm in awe, I'm flabbergasted... Or at least I assume I would be if I knew what the heck they were talking about here.
  4. RustyTech

    RustyTech TS Guru Posts: 820   +389

    If memory serves, I think Techspot reported on this a few years ago. Is this a chip for "super-servers"? ...and yes, definitely impressive!
  5. wastedkill

    wastedkill TS Evangelist Posts: 1,373   +310

    So the article basically said DDR4 is better than GDDR5? Also, what makes this superior compared to, say, Nvidia's 256-core+ competitors?
  6. VitalyT

    VitalyT Russ-Puss Posts: 3,114   +1,379

    The top 2 commentators are always fast here to advertise their ignorance regarding any subject.

    The product, if you bothered to read it at all, is mainly for supercomputers, and secondarily, for enterprise-class servers.
  7. cliffordcooley

    cliffordcooley TS Guardian Fighter Posts: 8,430   +2,822

    Thanks for the simplistic answer, which anyone might have assumed. And by the way, that does nothing to answer my unspoken query. If I actually expected an answer, I would have used a question mark and explained myself at least a little more.
  8. axiomatic13

    axiomatic13 TS Booster Posts: 88   +30

    So these Phi cards as well as NVidia K series (K20, K40, K80) are all designed for GPGPU uses.

    A general-purpose GPU (GPGPU) is a graphics processing unit (GPU) that performs non-specialized calculations that would typically be conducted by the CPU (central processing unit). Ordinarily, the GPU is dedicated to graphics rendering.

    GPGPUs are used for tasks that were formerly the domain of high-power CPUs, such as physics calculations, encryption/decryption, scientific computations and the generation of cryptocurrencies such as Bitcoin. Because graphics cards are constructed for massive parallelism, they can dwarf the calculation rate of even the most powerful CPUs for many parallel processing tasks. The same shader cores that allow multiple pixels to be rendered simultaneously can similarly process multiple streams of data at the same time. Although a shader core is not nearly as complex as a CPU core, a high-end GPU may have thousands of shader cores; in contrast, a multicore CPU might have eight or twelve cores.

    There has been an increased focus on GPGPUs since DirectX 10 included unified shaders in its shader core specifications for Windows Vista. Higher-level languages are being developed all the time to ease programming for computations on the GPU. AMD/ATI backs the open OpenCL standard, while Nvidia promotes its own proprietary API, CUDA.

    The Xeon Phi is Intel's attempt to compete in the GPGPU space. They know there are loads of $$$ left on the table that they aren't getting any of.
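    To make the pattern concrete, here's a minimal Python sketch (illustrative only, not actual Xeon Phi or CUDA code) of the data-parallel workload GPGPUs accelerate: one simple kernel applied independently to every element, with a process pool standing in for a GPU's thousands of shader cores.

```python
# Illustrative sketch only: the data-parallel pattern GPGPU hardware
# accelerates is one simple kernel applied independently to many elements.
# A process pool stands in here for a GPU's thousands of shader cores.
from concurrent.futures import ProcessPoolExecutor

def kernel(x):
    # The per-element "kernel": trivially simple, no cross-element dependency.
    return x * 2.0

def run_serial(data):
    # CPU-style: one element after another.
    return [kernel(x) for x in data]

def run_parallel(data):
    # GPU-style: many elements in flight at once.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(kernel, data))

if __name__ == "__main__":
    data = list(range(8))
    # Same answer either way; only the execution strategy differs.
    assert run_serial(data) == run_parallel(data)
```

    Because there is no dependency between elements, throughput scales with the number of execution units you can throw at it - which is exactly the bet these co-processors make.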
    Yynxs and cliffordcooley like this.
  9. dividebyzero

    dividebyzero trainee n00b Posts: 4,891   +1,258

    To (over)simplify axiomatic13's explanation:
    A CPU is for a relatively low number of complex, serial workloads.
    A GPU is for a relatively high number of simple, parallel workloads.
    With regard to shader cores: while Nvidia's Tesla and AMD's FirePro leverage the traditional graphics pipeline - since those GPUs have to pull double duty in consumer and workstation cards - Intel's Xeon Phi is basically a collection of simple (Atom-based) x86 cores.
    There is a general move away from co-processors built on the graphics pipeline. Nvidia's GK210 (never used in a consumer gaming graphics card), with its enlarged caches and register files, and the PEZY-SC (which lacks a graphics pipeline entirely) probably point to a future where co-processors are completely divorced from consumer graphics - which makes sense, because math co-processing has no use for elements of the traditional graphics pipeline, notably rasterization.
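    That serial-versus-parallel split can be sketched in a few lines of Python (a hypothetical illustration, not vendor code): a running sum carries a dependency from each step to the next, so it must execute serially, while an elementwise operation has no cross-element dependency and could run on as many cores as there are elements.

```python
# Hypothetical illustration of serial vs parallel workloads.

def running_sum(xs):
    # Each step needs the previous result: a dependency chain that a
    # CPU must execute one step at a time.
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def elementwise_square(xs):
    # No element depends on any other: a GPU could compute all of
    # these simultaneously.
    return [x * x for x in xs]

print(running_sum([1, 2, 3, 4]))         # -> [1, 3, 6, 10]
print(elementwise_square([1, 2, 3, 4]))  # -> [1, 4, 9, 16]
```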
    Yynxs and cliffordcooley like this.
  10. bmw95

    bmw95 TS Enthusiast Posts: 94   +47

    Man I hope Intel someday makes a go at it in the workstation and gaming graphics card market. I feel like they could definitely put a dent in Nvidia's profits if they tried (other than their integrated graphics).
  11. infiltrator

    infiltrator TS Booster Posts: 140   +21

    I wonder how many hashes per second this co-processor can compute?
