Intel now shipping seventh-gen Kaby Lake processors to manufacturing partners

Shawn Knight


Intel said during its recent quarterly earnings call that it is now shipping its seventh generation Intel Core processors, codenamed Kaby Lake, to manufacturing partners. Systems powered by the new chips should begin to trickle into the market this fall.

Intel’s Kaby Lake processors are notable for offering native HDCP 2.2 support, native USB 3.1 support and native Thunderbolt 3 support – all at a maximum TDP of 95 watts.

The chipmaker originally planned to have just two chips built using its 14-nanometer manufacturing process – Broadwell and Skylake – but revised those plans a little over a year ago by inserting the 14-nanometer Kaby Lake into its roadmap.

The move broke Intel’s predictable “Tick-Tock” model in which a “Tick” represented a new fabrication process (die shrinkage) of the previous processor design while a “Tock” introduced a brand new microarchitecture. Kaby Lake is described as an “optimization” or refresh of Skylake.

Had Intel stuck with tradition, its next chip would have been the 10-nanometer Cannonlake, which now isn’t forecast to arrive until the middle of next year.

Although it sucks for hardware enthusiasts, you can’t really blame Intel for breaking the cycle. Continuing to shrink its manufacturing process is becoming more difficult with each generation. What’s more, Intel hasn’t faced any real competition in the high-end market since launching its Core series in 2006.

With new graphics cards from both AMD and Nvidia already on the market (coming soon to mobile) and Kaby Lake just around the bend, it may be advisable to hold off on buying a new computer for another few months to take advantage of Intel’s latest and greatest.

Lead image courtesy Dragon Images, Shutterstock


 
Yeah well, after a couple months why not wait a couple more and get the 10nm proc? Or why not wait a couple more months and then get the 8th gen procs... hell, wait for the i7-9xxx.

Now you made me wonder how they'll brand their 10th gen procs...
 
"you can’t really blame Intel for breaking the cycle. Continuing to shrink its manufacturing process is becoming more difficult with each generation."

As defined in a market with no competition. I don't know why you are apologizing for Intel here, it is exactly their fault that x86 CPUs haven't done anything in the last 10 years.
 
So when exactly did AMD fold and close shop on the CPU market?

When Rory Read said they weren't competing with Intel any more. Doesn't get any more definitive than that.

If you don't think Intel is a monopoly, then I guess we'd have to ignore the 1 billion Intel gave AMD for just that. Of course, 1 billion is nothing compared to what they lost.

You don't have to be the only one on the market to be a monopoly. Having complete or near-complete control over the market also constitutes a monopoly.
 
From my research back when I had a desktop computer, AMD always won the bang-for-the-buck reviews, especially the FX-6300 proc, which dropped low in price while landing almost on par with desktop i5 procs. They also make really efficient and cheap notebook procs.

AMD might not be contesting the high end parts -obviously- but they are still competing for the BFB and entry market.
 
...and Kaby Lake just around the bend, it may be advisable to hold off on buying a new computer for another few months to take advantage of Intel’s latest and greatest.

Meh, I bet it'll be 3% additional CPU performance for 15% additional price. If choosing between Haswell and Skylake I still recommend Haswell to save money here and there... why would anyone with some sense wait for "better integrated graphics"? The CPU side of PCs has been boring for years now; my attention and excitement go to GPUs instead.

I just built a system with Haswell-E right after I saw Broadwell-E price and performance, and only because of the change in use case scenarios at home. When I first built my PC with a Core i5 3 years ago it was intended to be used for gaming, then my brother decided to study audiovisuals and soon it became evident that we needed more CPU power to handle me gaming and him rendering videos at the same time -yes, with concurrent users using the PC. If it wasn't the case, I wouldn't bother changing that i5 for gaming in the incoming years until a true CPU leap was done.
 
As defined in a market with no competition. I don't know why you are apologizing for Intel here, it is exactly their fault that x86 CPUs haven't done anything in the last 10 years.
So when exactly did AMD fold and close shop on the CPU market?

Immediately after BD shipped... and they realized they were boned. Zen might help, but it is NOT going to put them back into "competition" outside of fanboys and third-rate shops building $300 computers for Craigslist and TigerDirect...
 
"you can’t really blame Intel for breaking the cycle. Continuing to shrink its manufacturing process is becoming more difficult with each generation."

As defined in a market with no competition. I don't know why you are apologizing for Intel here, it is exactly their fault that x86 CPUs haven't done anything in the last 10 years.
Why do you say they haven't done anything?
 
Their architecture hasn't made large strides since Netburst. What they are using now is essentially a refined version of that. It's pretty bad that GPUs are so badly outpacing CPUs.
A GPU can't do what a CPU can. And no other architecture can compete with Intel's x86 implementation at desktop form factor, despite many trying. A CPU can never scale up to a GPU in raw parallel number crunching. Not an apples-to-apples comparison.
 
A GPU can't do what a CPU can. And no other architecture can compete with Intel's x86 implementation at desktop form factor, despite many trying. A CPU can never scale up to a GPU in raw parallel number crunching. Not an apples-to-apples comparison.

We're not talking about parallel number crunching, we are talking about advancements in GPU architecture. I'm well aware that a CPU cannot scale like a GPU. If we followed your given logic there would never be any apples to apples comparison because Intel and AMD are the only two x86 processor manufacturers.

Despite many trying? Care to name a few? AMD is literally the only other x86 CPU manufacturer on the market and they gave up with competing years ago.
 
I'll wait the few months for the new stuff. I was gonna buy a 1070 for BF1, but my 970 played the Closed Alpha just as well as it does BF4.
I get what you are saying about the GPUs. I am going to wait and upgrade my 970 when the price gouging settles down. As for the CPUs, you will always be waiting and waiting and waiting, so you might as well go for it.
 
We're not talking about parallel number crunching, we are talking about advancements in GPU architecture. I'm well aware that a CPU cannot scale like a GPU. If we followed your given logic there would never be any apples to apples comparison because Intel and AMD are the only two x86 processor manufacturers.

Despite many trying? Care to name a few? AMD is literally the only other x86 CPU manufacturer on the market and they gave up with competing years ago.
Well first of all, x86's strength over its competitors in general was that it is CISC. So extensions like AVX will thrash RISC architectures. There are many other extension sets, like all the SSE extensions. Bear in mind, with so much legacy hardware out there, to make efficient gains to the architecture they need to maintain backwards compatibility in capabilities AND add the new extensions. This is very different to GPUs, which are operated via drivers and a HAL, so you can change things under the covers a LOT more significantly.

Also the main competitors I was referring to was not in the x86 space - PowerPC and ARM. Who could possibly come into the market to compete with Intel when they have had the best fabs for so long? Maybe we'll see some nowadays now that ARM has done so well on mobile devices and those fabs are looking very strong.
 
Well first of all, x86's strength over its competitors in general was that it is CISC. So extensions like AVX will thrash RISC architectures. There are many other extension sets, like all the SSE extensions. Bear in mind, with so much legacy hardware out there, to make efficient gains to the architecture they need to maintain backwards compatibility in capabilities AND add the new extensions. This is very different to GPUs, which are operated via drivers and a HAL, so you can change things under the covers a LOT more significantly.

Also the main competitors I was referring to was not in the x86 space - PowerPC and ARM. Who could possibly come into the market to compete with Intel when they have had the best fabs for so long? Maybe we'll see some nowadays now that ARM has done so well on mobile devices and those fabs are looking very strong.

You aren't telling me anything I don't know. I took basic computer architecture. Extensions are not a requirement to keep, and in fact there have been many periods where extensions were removed and more efficient ones added later that take up less space. As with any piece of computer hardware, circuits are just physical software. As computers become more and more advanced, the hardware is able to integrate more and more advanced functions previously relegated to software.

PowerPC and ARM don't compete with Intel, yet. They don't target the same market and they don't affect each other's margins. If ARM and PowerPC were competing with Intel, you would not be seeing a 0.1 GHz rise on Intel's enthusiast Broadwell-E launch, and you would not be seeing the price of the sweet-spot 3570k, 4670k, and 6600k rise with each generation. You could purchase the 3570k for around $200 at launch. The same tier CPU is now $250, and it's a joke that it doesn't have Hyper-Threading. You have to pay $100 more for the privilege.
 
You aren't telling me anything I don't know. I took basic computer architecture. Extensions are not a requirement to keep, and in fact there have been many periods where extensions were removed and more efficient ones added later that take up less space. As with any piece of computer hardware, circuits are just physical software. As computers become more and more advanced, the hardware is able to integrate more and more advanced functions previously relegated to software.

PowerPC and ARM don't compete with Intel, yet. They don't target the same market and they don't affect each other's margins. If ARM and PowerPC were competing with Intel, you would not be seeing a 0.1 GHz rise on Intel's enthusiast Broadwell-E launch, and you would not be seeing the price of the sweet-spot 3570k, 4670k, and 6600k rise with each generation. You could purchase the 3570k for around $200 at launch. The same tier CPU is now $250, and it's a joke that it doesn't have Hyper-Threading. You have to pay $100 more for the privilege.
I can't remember if Intel has removed extensions. And no, the reason ARM and PowerPC don't compete is because PowerPC tried and died (that's why Apple moved to x86), and ARM hasn't gotten a decent competitor out there yet, which is hardly Intel's fault.

Also, does Hyperthreading actually matter? I find in 90% of benchmarks it doesn't improve things, and the small niche where it does is offset by the cases where it hurts, but I haven't seen recent gens to see if there has been an improvement there.

Not extensive research but gaming perf looks pretty poor with HT: https://www.techpowerup.com/forums/...rks-core-i7-6700k-hyperthreading-test.219417/

I forgot to address the clock increase you cite. GHz does not (in isolation) matter. Average instructions per cycle combined with clock rate matter. If you are massively improving your IPC, then you are making a better chip. The P4 proved clock rate doesn't necessitate a good result. Intel has been improving IPC markedly for a long time now because it is easier than pushing clock rates up and dealing with the physics implications particularly when you are lowering the transistor interconnect distance constantly.
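The IPC-times-clock point can be sketched with a toy calculation. The numbers below are made up purely for illustration (they are not measured IPC figures for any real chip):

```python
def throughput_gips(ipc, clock_ghz):
    """Billions of instructions retired per second = average IPC * clock (GHz)."""
    return ipc * clock_ghz

# Hypothetical designs: a P4-style high-clock/low-IPC chip vs a
# modern-style lower-clock/high-IPC chip. Illustrative numbers only.
netburst_style = throughput_gips(ipc=1.0, clock_ghz=3.8)  # 3.8 GIPS
modern_style = throughput_gips(ipc=2.5, clock_ghz=3.0)    # 7.5 GIPS
```

Despite the lower clock, the higher-IPC design comes out well ahead, which is the whole argument against reading GHz in isolation.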
 
I can't remember if Intel has removed extensions. And no, the reason ARM and PowerPC don't compete is because PowerPC tried and died (that's why Apple moved to x86), and ARM hasn't gotten a decent competitor out there yet, which is hardly Intel's fault.

Also, does Hyperthreading actually matter? I find in 90% of benchmarks it doesn't improve things, and the small niche where it does is offset by the cases where it hurts, but I haven't seen recent gens to see if there has been an improvement there.

Not extensive research but gaming perf looks pretty poor with HT: https://www.techpowerup.com/forums/...rks-core-i7-6700k-hyperthreading-test.219417/

I forgot to address the clock increase you cite. GHz does not (in isolation) matter. Average instructions per cycle combined with clock rate matter. If you are massively improving your IPC, then you are making a better chip. The P4 proved clock rate doesn't necessitate a good result. Intel has been improving IPC markedly for a long time now because it is easier than pushing clock rates up and dealing with the physics implications particularly when you are lowering the transistor interconnect distance constantly.

It would make sense that 90% of applications don't need Hyper-Threading; a good chunk of Intel processors don't have it, thus the market won't follow. This goes double for games, where 99% of the time the game doesn't need the extra power, and most gamers have i5s anyway.

I don't remember referencing clock speeds, can you quote it please?

You stated ARM as a competitor to Intel and I countered that. Did you have another point in mind?
 
It would make sense that 90% of applications don't need Hyper-Threading; a good chunk of Intel processors don't have it, thus the market won't follow. This goes double for games, where 99% of the time the game doesn't need the extra power, and most gamers have i5s anyway.

I don't remember referencing clock speeds, can you quote it please?

You stated ARM as a competitor to Intel and I countered that. Did you have another point in mind?
Clock speed:
"If ARM and PowerPC were competing with Intel you would not be seeing a 0.1 GHz rise on Intel's enthusiast Broadwell-E launch"

Hyperthreading - it has been around since P4 yet still has basically no solid use. It's a continual failed attempt to make use of underutilised resources. I think the fact the market hasn't found a real benefit in this amount of time is evidence enough that it isn't worth the effort.

My point re: ARM is that there are desktop ARMs out there, and there were desktop PowerPCs, but they are not mainstream. There is no desktop competition to x86 because Intel has the best performing desktop CPU and will for a while yet.

To your original point, "It's pretty bad that GPUs are so badly outpacing CPUs", Intel has provided and continues to improve performance well ahead of anyone else in terms of CPU program execution efficiency which is all they can really do. They can't really "market lead" with instruction sets - for CPUs these are heavily driven by application usage and need - so you get a LOT more bang for buck being reactive.
 
With the slow rate of CPU development and performance improvements, I don't see why anyone would wait for Kaby Lake instead of buying Skylake today. This will provide, what, a 5% increase over Skylake? If that?
 
As defined in a market with no competition. I don't know why you are apologizing for Intel here, it is exactly their fault that x86 CPUs haven't done anything in the last 10 years.
So when exactly did AMD fold and close shop on the CPU market?

I guess you remember that Intel was frequently accused in the past of anti-competitive practices against AMD..
 
I guess you remember that Intel was frequently accused in the past of anti-competitive practices against AMD..
Exactly how far off the path of this conversation are you wanting to go? AMD did not go belly up, so Intel is not solely to blame for any (*and I say again* if any) lack of x86 advancements.

That's not to say the suggestion that an instruction set can be changed on a whim isn't laughable. It's not the instruction set that changes; it's extensions to the instruction set that are included or removed - extensions that can be used if available. Tampering with the base instruction set would break backward compatibility, as has already been mentioned. Although I can't say that would be such a bad thing: starting over fresh would likely open a door to new things, which is likely the reason they brought up the lack of x86 innovation. It doesn't have anything to do with being Intel's or AMD's fault. The market won't allow for it, not without some kind of grace period for changing over. And if you don't believe that, just look at the backlash Microsoft is facing over drastic changes.
 
Monopoly and collusion in the marketplace brings us to not even a small tick on the tick tock scale... what a waste!
 
Clock speed:
"If ARM and PowerPC were competing with Intel you would not be seeing a 0.1 GHz rise on Intel's enthusiast Broadwell-E launch"

Hyperthreading - it has been around since P4 yet still has basically no solid use. It's a continual failed attempt to make use of underutilised resources. I think the fact the market hasn't found a real benefit in this amount of time is evidence enough that it isn't worth the effort.

My point re: ARM is that there are desktop ARMs out there, and there were desktop PowerPCs, but they are not mainstream. There is no desktop competition to x86 because Intel has the best performing desktop CPU and will for a while yet.

To your original point, "It's pretty bad that GPUs are so badly outpacing CPUs", Intel has provided and continues to improve performance well ahead of anyone else in terms of CPU program execution efficiency which is all they can really do. They can't really "market lead" with instruction sets - for CPUs these are heavily driven by application usage and need - so you get a LOT more bang for buck being reactive.

I guess I did use clockspeed, lol. I meant that as more of a statement, you could replace that with the tiny IPC gains we've been seeing and it would be the same result.

Saying Hyper-Threading is a failure is like saying we will never overcome the inefficiencies of multiple CPU cores. If we can't find a way to utilize all the cores then we are going to have issues in the consumer market. If you can't get CPUs to divide the work among cores properly it's going to reflect in the software as well, and vice versa. If it can be done via software it can be done via hardware.
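The software side of that point - dividing one job into chunks that separate cores can chew on - can be sketched with the Python standard library. This is a minimal illustration, not a benchmark; threads are used here for portability, though for CPU-bound Python work you'd typically use processes because of the GIL:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    return sum(
        1 for n in range(lo, hi)
        if n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))
    )

def parallel_count(limit, workers=4):
    # Split [0, limit) into one contiguous chunk per worker,
    # hand each chunk to the pool, then sum the partial counts.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb any remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

The hard part the post alludes to is that real workloads rarely split into equal, independent chunks this cleanly - which is exactly why dividing work among cores is as much a software problem as a hardware one.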
 