Cambridge University gets early access to Intel's 60-core Xeon Phi chip

Shawn Knight


The latest Intel Xeon Phi chip has arrived, packing a full 60 cores into a single package. It isn't expected to be publicly available until early next year, although a few have been granted early access to the technology, which is said to behave more like a GPU than a traditional CPU.

Intel announced Xeon Phi earlier this year as the new name for all of its future Many Integrated Core (MIC) architecture products. The coprocessor is built using Intel's 22nm 3D Tri-Gate transistor technology, much like the company's Ivy Bridge consumer products. Xeon Phi, however, certainly isn't a consumer-grade chip.

Codenamed Knights Corner, the chip works inside servers and supercomputers alongside the regular CPU(s) to accelerate calculations used in mathematics, graphics and scientific research. The high-end chip in the 3100 family is capable of churning out more than a teraflop of performance, that is, over a trillion floating-point operations per second. Traditional desktop processors don't even come close to that.
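For rough context, that figure lines up with the chip's vector math: assuming each Knights Corner core's 512-bit vector unit can complete a fused multiply-add every cycle (16 double-precision operations), 60 cores × 16 operations × roughly 1 GHz works out to about one trillion operations per second.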

Cambridge University and Stephen Hawking have been given early access to the chip, according to Intel; it is being used in their SGI UV 2000 supercomputer. Hawking said in a statement that the chip gives the team the ability to focus on discovery and continue to lead worldwide efforts to advance our understanding of the universe.

The general public will be able to get its hands on the coprocessor early next year, on January 28 to be exact. We are told that it will ship as a PCIe add-in card and carry an MSRP of $2,649.


 
Maybe it's a silly question, but I'm serious since I'm a noob here: if I plugged it into my gaming rig, would that (significantly) increase fps when gaming?
 
How about going in a new direction, especially when a $100 video card has 20+ times more juice than most high-end CPUs? It has been my wet dream to use my GPU for all purposes.
 
> How about going in a new direction, especially when a $100 video card has 20+ times more juice than most high-end CPUs? It has been my wet dream to use my GPU for all purposes.

Today's top-end GPU cards passed 4 TFLOPS some time ago when it comes to floating-point operations, and the CPU market is far behind that figure. But that's not the point: GPUs do a lot of graphics-specific work these days, and expecting a CPU to match that is simply unrealistic.
 
But can it run Crysis?
This joke is entirely too outdated. My tablet can run Crysis. Times have changed.


Crysis is still pretty demanding, even by today's standards.

And what is it giving back for its demands? :) Not that anyone cares, for that matter :)

I could write a 2+2 application and make it the most demanding ever, too :)

If Crytek optimized Crysis 1, they would ruin the jokes.
 
> Maybe it's a silly question, but I'm serious since I'm a noob here: if I plugged it into my gaming rig, would that (significantly) increase fps when gaming?

Intel's Xeon Phi has no texture units. No TMUs = no graphics output; it's for number crunching only.
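For what it's worth, "number crunching only" in practice means the host program hands loops off to the card. Below is a rough, illustrative sketch of how that looked with Intel's offload pragmas; the clause syntax is from memory and it requires Intel's compiler with MIC support, so treat it as a sketch rather than a recipe.

```cpp
// Illustrative sketch: offloading a dot product to a Xeon Phi coprocessor
// using Intel's offload pragmas (Intel compiler only; clause syntax quoted
// from memory, so check it against Intel's documentation before relying on it).
#include <cstdio>

const int N = 1 << 20;
float x[N], y[N];   // statically sized so the offload can copy them wholesale

int main() {
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    float sum = 0.0f;
    // This block runs on the coprocessor: x and y are copied in, sum comes back.
    #pragma offload target(mic) in(x, y) inout(sum)
    {
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += x[i] * y[i];
    }

    std::printf("dot product = %f\n", sum);
    return 0;
}
```

On a compiler without MIC support the pragma is simply ignored and the loop runs on the host, which was part of Intel's pitch that the same code could run on either.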
 
> Impressive, Intel... maybe by 2016 or 2017 we will have 64-core desktops too.

That's cool... the only thing left to figure out is what to do with a 64-core processor in a normal computer.

It's really a specialized data-crunching chip, similar in a way to the specialized GPU chips made for graphics processing; most current types of software just can't make use of such an architecture. And the multitasking benefits have limits as well, especially since the individual cores aren't that powerful.
There are some programs which can make use of dual/quad-core CPUs, but even those aren't particularly efficient beyond two cores, from what I hear... maybe if regular software used some adaptive algorithm and automatically divided itself into a number of sub-threads based on how many cores the CPU has and its current load... or was rewritten from the ground up to separate its operations into a bunch of small independent threads, like an operating system... though is that even necessary/beneficial?

Any of the techies: is it actually beneficial to break a program up into multiple threads/subprograms, or is it better to keep it all in one large process? Obviously it would depend on what kind of CPU is in your system, but I'm asking in general, not in the context of optimizing a program's resource usage for a specific CPU. It just seems like software is being forced to multi-thread because of the types of chips being released, and not the other way around, i.e. chips being made that way because it's more efficient to write software in many threads.

Come to think of it, I have Firefox running as one large 3 GB process that I assume shares CPU time across all the tabs in it (well, the plugin container is external), but Chrome would normally divide that single large process into a dozen or more smaller processes...
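To make that "divide itself based on the number of cores" idea concrete, here is a minimal sketch (purely illustrative, using standard C++11 threads; the names are made up):

```cpp
// Minimal sketch: split a big array sum across as many worker threads as the
// CPU reports, then combine the partial results.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(10000000, 1.0);

    unsigned n = std::thread::hardware_concurrency();  // logical cores reported by the OS
    if (n == 0) n = 2;                                  // the call may return 0; fall back

    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;

    std::size_t chunk = data.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            // Each thread sums its own slice into its own slot: no locking needed.
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("threads: %u, total: %f\n", n, total);
    return 0;
}
```

Whether this is actually faster than a single loop depends on whether the work per chunk outweighs the cost of starting and synchronizing the threads, which is basically the trade-off being asked about.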
 
> but Chrome would normally divide that single large process into a dozen or more smaller processes...
The reason for that is that Windows only gives a process so much memory accessible from its linear address space; even 64-bit applications only get so much. By starting multiple processes that offload work into separate address spaces, you get much more isolation and room for optimization, because the memory-allocation-intensive portions of the program's logic are broken apart. Chrome does this, and it also separates rendering, tab management and plugins into their own processes. Multi-process Chrome not only works around memory-addressing limitations, but isolating plugins also provides more security through process isolation. The same goes for page management: plugins or errors that occur in a tab don't crash the browser entirely, just the offending page.
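The crash-isolation part is easy to picture with a tiny sketch (POSIX-only, purely illustrative, nothing to do with Chrome's actual code): a crash in a child process gets reported to the parent instead of taking it down.

```cpp
// Illustrative POSIX sketch: the child crashes on purpose, the parent survives
// and just reports what happened -- the same idea behind putting each tab or
// plugin in its own process.
#include <csignal>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid < 0) return 1;          // fork failed
    if (pid == 0) {
        std::raise(SIGSEGV);        // child: simulate a misbehaving tab/plugin
    }

    int status = 0;
    waitpid(pid, &status, 0);       // parent: wait for the child to finish
    if (WIFSIGNALED(status))
        std::printf("child %d crashed with signal %d; parent keeps running\n",
                    (int)pid, WTERMSIG(status));
    return 0;
}
```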
 
> But can it run Crysis? [...]
A joke taken too far, LOL.
 
Yes it would, but Xeons aren't made for gaming. However, with 60 cores and hyper-threading it would help; it's just a waste of money, and you're better off getting a 4th-gen i7.
 