Sadly, it's chicken-and-egg to me: if it sells well, mobo and software will develop to efficiently suck every bit o' processing power from those many cores; if not, there won't be enough R&D interest to create core-monsters, and Intel will yawn and keep its stunning Moore's/4 rate of progress (CPU processing power doubles in our children's lifetime-ish).
Well, first and foremost, "Moore's Law" is anything but a "law". In mathematical terms it should be classed as, at most, a conjecture, and IMHO it's really nothing more than an "idea" or a dashed-out concept. Yet it always seems to make a great sound bite, which requires very little thought, rhyme, or reason to spit out. It's really just a rehash of "I'll work for a penny a day, doubled each day for a month". Do the numbers on that, and you'll find you hit a wall, where you simply can't afford to pay the man, at around the three-week point or so.
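Just to put numbers on that penny-a-day gag, here's a quick back-of-envelope sketch (plain Python, nothing fancy):

```python
# Penny-a-day, doubled daily: pay on day d is 0.01 * 2**(d - 1) dollars.
for day in (7, 14, 21, 30):
    pay = 0.01 * 2 ** (day - 1)
    print(f"day {day:2d}: ${pay:,.2f} for that day alone")
```

By the three-week mark you're already past ten grand a day, and day 30 alone runs north of $5 million. Exponentials are cheap talk early and ruinous late.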
So it is with the 10nm process node. I have a feeling that as node size gets smaller, the tooling costs rise exponentially. In short, I honestly don't believe Intel is sitting on its duff to the degree the "experts" here at Techspot have the impression that the real experts at Intel are. Remember, it's far easier for some know-nothing talking head to stand in front of a mic and spout a bunch of happy horse sh!t about "ticks and tocks" than it is to labor at a drawing board or in a lab, and actually DO the job.
Considering that Intel is in the business of making money from cooking CPUs for a broad spectrum of uses, not just catering to a bunch of whiny, needy gamers whimpering for 10nm processors, I'd say don't hold your breath waiting for Moore's "law" to kick in. At this point in micro-miniaturization, it's more likely to fail than it is to hold true.
We always seem to expect more of others than we are able to accomplish on our own. To which end I have a standing challenge: take any two dozen Techspot "experts", go to the house of the one with the best workshop, and see if you can cook up a lousy 140nm-process(*) Pentium 2, or quit your whining.
This is where Intel is with GPU computing:

After Larrabee was cancelled, Intel shifted its design goals for the underlying technology. While Larrabee could have been quite capable for gaming, the company saw a future for it in compute-heavy applications and created the Xeon Phi in 2012. One of the first models, the Xeon Phi 5110P, contained 60 x86 cores with large 512-bit vector units clocked at 1GHz. At that speed, they were capable of more than 1 TFLOPS of compute horsepower while consuming an average of 225W.

And this is where you'll find it:

As a result of its high compute performance relative to power consumption, the Xeon Phi 31S1P was used in the construction of the Tianhe-2 supercomputer in 2013, which remains the world's fastest supercomputer today.
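Those figures pass a quick sanity check, by the way. Assuming each 512-bit vector unit handles 8 double-precision lanes with a fused multiply-add every cycle (my assumption about the microarchitecture, not something stated in the excerpt), and using the 5110P's actual 1.053GHz clock (which the excerpt rounds to 1GHz), the peak works out to just over 1 TFLOPS:

```python
# Back-of-envelope peak double-precision throughput for the Xeon Phi 5110P.
cores = 60                   # x86 cores, per the excerpt above
lanes = 512 // 64            # 8 double-precision lanes per 512-bit vector unit
flops_per_cycle = lanes * 2  # x2 assuming fused multiply-add each cycle
clock_ghz = 1.053            # actual 5110P clock; the excerpt rounds to 1GHz

peak_tflops = cores * flops_per_cycle * clock_ghz / 1000
print(f"peak: {peak_tflops:.2f} TFLOPS")  # ~1.01 TFLOPS
```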
(*) In truth, "140nm" for the Pentium 2 was just a wild guess; it actually shipped on 350nm and later 250nm processes. The Pentium 4 started as high as 180nm, and the "Prescott" offerings were shrunk all the way down to 90nm.