Intel bets its future on software and manufacturing

Bob O'Donnell

Something to look forward to: Intel made a number of intriguing disclosures and bold proclamations during last week’s Intel Analyst Day, highlighting how it sees its future evolving. What made the information even more compelling is that they did so while also offering a refreshing level of honesty and transparency about the challenges they’ve faced.

While the information about manufacturing delays at both their 10nm and 7nm process nodes isn’t new, and the competitive challenges they’re facing from AMD, Arm architectures, and others are well known, their responses felt new.

From CEO Bob Swan on down, the manner in which Intel executives talked about these issues made it clear that they’ve not only accepted them but have also developed strategies to help overcome them.

On the manufacturing front, despite calls from some in the industry to get out of chip fabrication or, at the least, concerns about the reliability or stability of that portion of their business, company executives were extremely clear: they have absolutely no intention of moving away from being an IDM (Integrated Device Manufacturer) that both designs and builds its own chips. And yes, they know they have to regain some of the trust they lost after slipping from their long-held lead as manufacturing process champions.

Recent announcements on fundamental transistor improvements and innovative chip packaging technologies, however, coupled with their long history of effort and innovation in these areas, give them the confidence that they can compete and even win (see “Intel Chip Advancements Show They’re Up for a Competitive Challenge”).

"Intel made it clear that CPUs will continue to be their primary focus, but they are greatly stepping up their efforts on GPUs with their Xe line"

The basic chip hardware strategy focuses on two key elements. The first of these they’re calling disaggregation—that is, the breaking up of larger monolithic chip designs into a variety of smaller chiplets connected via high-speed links and packaged together using a range of different technologies.

The second is referred to generically as XPUs, but essentially means a diversification of core chip architectures, with much greater support for more specialized “accelerator” silicon designs. Again, Intel made it clear that CPUs will continue to be their primary focus, but they are greatly stepping up their efforts on GPUs with their Xe line, on various types of AI accelerators, particularly those from their acquisition of Habana, and on FPGAs, such as their Stratix line.

While they are all interesting and important on their own, it’s their ability to potentially work together where the real opportunity lies. In a presentation from SVP and Chief Architect Raja Koduri, the company demonstrated how the demands of diverse types of data analysis are rapidly outpacing the performance trajectory of existing CPU designs. That’s why different types of chip architectures with specialized capabilities that are better suited to certain aspects of these computing tasks are so critical—hence the need for more variety.

The real magic, however, can only happen with software that unites them all, and that’s where Intel’s oneAPI fits in. Originally announced two years ago, oneAPI is an open, standards-based, unified programming model designed to make it easier for developers to write software that can take advantage of the unique capabilities of these different chip architectures. Critically, it does so without requiring them to write code that’s customized for each one. This is absolutely essential because there is a very limited set of developers who can write software for any one of these accelerators, let alone all of them.
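The core language behind oneAPI is Data Parallel C++ (DPC++), which builds on the SYCL standard. As a rough illustration of what “write it once, let the runtime pick the device” can look like in practice (this is a hand-written sketch, not code taken from Intel’s toolkit), a trivial vector addition might be expressed like this:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The queue binds to whatever device the runtime selects:
    // a CPU, an Intel GPU, or another supported accelerator.
    sycl::queue q;
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // Buffers hand the host data over to the runtime.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The same kernel source is compiled for whichever device was picked.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }   // Buffer destruction copies the results back into the host vectors.

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
    return 0;
}
```

The point is that nothing in the kernel itself names a specific architecture; mapping it onto whatever hardware is present is the toolchain’s job, not the developer’s.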

The company achieves this important capability by providing a hardware abstraction layer and a set of software development tools that do the incredibly hard work of figuring out which bits of code can run most effectively on each of the different chips (or, at least, on each of the components available in a given system). It’s a challenging goal, so it isn’t surprising that it’s taken a while to come to fruition, but at last week’s Analyst Day, Intel announced that they had started shipping the base oneAPI toolkit along with several other options that are specialized for applications such as HPC (High Performance Computing), AI, IoT, and Rendering. This marks a step forward in the evolution of Intel’s software strategy that will take a while to fully unfold, but it shows that they’re bringing this audacious vision to life.

"It’s clear that Intel is moving forward with its strategy of unifying its increasingly diverse set of chip architectures through a single unified software platform"

Equally interesting was an explanation of how Intel is making this seemingly magical cross-architecture technology work. Basically, the company is applying some of the same principles and lessons learned from adding instruction set extensions to its CPUs (such as AVX and AVX-512) to these additional chip architectures.
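For a sense of why broad tool support matters so much, consider what using one of those CPU extensions directly looks like. The generic sketch below (not Intel code, and deliberately simplified) adds two float arrays once in portable scalar C++ and once with AVX intrinsics; the intrinsic version processes eight floats per iteration, but it only runs on AVX-capable CPUs and has to be written and maintained separately, which is exactly the burden that compilers, performance libraries, and now oneAPI aim to take off developers:

```cpp
#include <immintrin.h>   // AVX intrinsics
#include <cstddef>

// Portable scalar version: runs anywhere, but leaves the vector units idle
// unless the compiler happens to auto-vectorize it.
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// Hand-written AVX version: processes 8 floats per iteration, but requires
// an AVX-capable CPU and separate code to write, test, and maintain.
void add_avx(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)       // scalar tail for any leftover elements
        out[i] = a[i] + b[i];
}
```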

As it found with those efforts, adoption of new capabilities can take a while unless they are fully supported across a range of development tools, such as compilers and performance libraries. That’s what Intel is launching with its oneAPI toolkits in an effort to jumpstart the technology’s usage. In addition, the company has created a compatibility tool for porting code written in Nvidia’s popular CUDA language for GPUs. This provides a big head start for developers who have experience with CUDA or software already written in it.
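To give a sense of what that head start looks like, here is a hand-written before-and-after for a trivial kernel: the original CUDA version is shown in a comment, followed by a DPC++/SYCL equivalent written by hand for illustration (this is not output from Intel’s compatibility tool, and real migrations are considerably more involved):

```cpp
#include <sycl/sycl.hpp>

// CUDA original, shown for comparison only:
//
//   __global__ void scale(float* x, float a, int n) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) x[i] *= a;
//   }
//   // launched as: scale<<<blocks, 256>>>(d_x, 2.0f, n);

int main() {
    constexpr int n = 1 << 20;
    const float a = 2.0f;

    sycl::queue q;
    // Unified shared memory plays a role similar to cudaMallocManaged:
    // the same pointer is usable on both the host and the device.
    float* x = sycl::malloc_shared<float>(n, q);
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    // CUDA's global thread index becomes a SYCL work-item id, and the
    // explicit kernel launch syntax becomes a parallel_for on the queue.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        x[i] *= a;
    }).wait();

    sycl::free(x, q);
    return 0;
}
```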

Given that it was just released, the final performance and real-world effectiveness of the oneAPI toolkit remain to be seen. However, it’s clear that Intel is moving forward with its strategy of unifying its increasingly diverse set of chip architectures through a single software platform. Conceptually, it’s been a very appealing idea since the company first unveiled it—here’s hoping the reality proves to be equally appealing.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.


 
This just feels like AMD treading water with their mostly-useless HSA push.

You can't get people to transition the majority of software out there that already exists, so you're just wasting your time. The easy-to-translate compute-heavy stuff is probably already running on OpenCL/CUDA.

Also, can anyone tell me what benefit talking to an FPGA brings to an Intel processor? It's hard to imagine a problem too difficult to solve with a discrete GPU/CPU/vector extension.
 
They should steal isilicon before Apple. Lol. Does Apple have something similar to this in their hardware? Or am I just talking out my b---? I know the new PlayStation offloads certain workloads to hardware accelerators, but I don't know about the coding for such a thing and whether it is as simple as this API is theoretically supposed to be.
 
Mr. Sales Dude: Boss, we're losing mind share and market share to AMD. Apple's new CPU cost us as well.

Boss: Come up with new instructions that we can license to the competition to help salvage some of the mess we made.
 
If Intel is going to make this work for all architectures, then that'll be a huge contribution to the computing environment. Intel has always been a leader in innovation, and selfishness its Achilles heel.
Moving to a chiplet design is okay, but they should still try to shift to a smaller node on a monolithic design. Monolithic designs have their advantages, so we want both. I'm looking forward to a GPU from Intel as a third competitor in this space. And of course, how Intel is handling all of this, and their responses to it, is respectable. AMD and Intel seem to have a more mature corporate culture compared to Nvidia.
 
Is this a press release? Very much sounds like it.

Also, what are the manufacturing challenges they face (besides the obvious)?
 
This is either gonna gain no traction or all the traction, but I feel like Intel would do it by using financial incentives for developers.
 
This just feels like AMD treading water with their mostly-useless HSA push.

You can't get people to transition the majority of software out there that already exists, so you're just wasting your time.

If you're someone like Apple who controls the hardware and OS, you can. Isn't M1 HSA?
 
I won't be happy until they are so desperate that they are forced to sell their headquarters (like AMD was forced to, thanks to Intel's bribery with Dell).

I really don't want them to ever be number one, since the whole industry was held back in 4-core hell thanks to them.
 
Held the industry back? Intel had 30+ threads on server parts before AMD failed with Bulldozer. No one was held back except desktop users with $200 in their pockets.
 
Held the industry back? Intel had 30+ threads on server parts before AMD failed with Bulldozer. No one was held back except desktop users with $200 in their pockets.
Intel's first x86 CPUs with 15C/30T appeared around 15 months after AMD's 16C/16T Opterons, and Intel was charging nearly $6k for said processor. Sure, it was better performing than two Opterons, but together those cost 1/6th the price of Intel's chip. And AMD's first Zen-based 16C/32T EPYC models, arriving on the market 3 years later, were still half the price of the equivalent Intel models at that time.

Whether such excessive pricing methods are indicative of Intel holding back the server industry is open to debate, but given the monopoly they've had on the sector (and still do), they've certainly held the industry to ransom.

What kind of tasks/workloads will be accelerated by these XPUs and FPGAs? Can anyone give more examples?
Pick any sub-set of AI, such as inference, and you've got a perfect task for a specialised chip. They might be designed to do just one calculation, such as GEMM, but they'll do it far quicker than any equivalent CPU/GPU.
 
I wish them the best of luck, and hope they keep innovating like they have for decades. It's such a shame their previous CEO set them back for years while he was getting his noodle wet. We certainly do not want to lose competition. Go Intel!
 
Dropping the ball and watching AMD eat their lunch has consequences.

They see the writing on the wall and it's not pretty.
 
If you're someone like Apple who controls the hardware and OS, you can. Isn't M1 HSA?


Sure is, but selling that full stack:

1. requires customers not already married to a 40-year-old bride, and

2. requires some actual design domination in this locked memory architecture.

Apple's M1 brings you both of those.

The reason these x86 APUs don't dominate is that they have to target a $100 price point. Even Alder Lake will eventually be offered on desktops for that price.

If AMD didn't have such limitations, they would have matched the L3 cache on the Ryzen 3700X and doubled the Raven Ridge GPU performance.

There are all sorts of advantages Apple has in having a more efficient architecture, and they can also afford to pay a higher price per chip (because they make Mad Money on each iOS system sold).

Without the iOS money machine, there would not be enough money to design that dominant I/O architecture for Mac ARM systems... that is the only reason Apple has switched over from x86.

Higher margins are the difference between the M1 and Everything Else (TM). A pointless talk like this one is just Treading Water.
 
OneAPI to rule them all, OneAPI to find them,
OneAPI to bring them all, and in the sunshine bind them,
In the City of Santa Clara where rent is high.
 
Intel's first x86 CPUs with 15C/30T appeared around 15 months after AMD's 16C/16T Opterons, and Intel was charging nearly $6k for said processor. Sure, it was better performing than two Opterons, but together those cost 1/6th the price of Intel's chip. And AMD's first Zen-based 16C/32T EPYC models, arriving on the market 3 years later, were still half the price of the equivalent Intel models at that time.

Whether such excessive pricing methods are indicative of Intel holding back the server industry is open to debate, but given the monopoly they've had on the sector (and still do), they've certainly held the industry to ransom.


Pick any sub-set of AI, such as inference, and you've got a perfect task for a specialised chip. They might be designed to do just one calculation, such as GEMM, but they'll do it far quicker than any equivalent CPU/GPU.

well, https://ark.intel.com/content/www/u...rocessor-e7-4890-v2-37-5m-cache-2-80-ghz.html

still tops the charts in AMD's favorite Cinebench... 2 of those on a board with 60 threads, 6 years old... Sure, Ryzen brought pricing down, I never disputed that, I doubt anyone ever did, but Opteron made it possible for Intel to do nothing for 6 years. So thanks and "thanks", AMD.

edit: like I've said: the only thing that was standing between more threads and people that don't need those threads is the $200 max for a CPU in the pockets of PUBG players. Don't take that figuratively, please :)
 
well, https://ark.intel.com/content/www/u...rocessor-e7-4890-v2-37-5m-cache-2-80-ghz.html

still tops the charts in AMD's favorite Cinebench...2 of those on a board with 60 threads, 6 years old
Would be interesting to see the source for that chart. HWBot has one entry in Cinebench for a dual E7 4870 v2 system and it's 729th in the rankings (score of 3197cb). There's an entry for a quad system too (66th overall - 7032cb) - both are pretty respectable for something that came out in 2014, although neither fares well against a single Threadripper 3990X (5th - 14787cb). As you pointed out, though, there is a 6 year gap between the two processors.

Having said that, there is one entry for a single Opteron 6272, and it is ranked 225th, with a score of 1138cb. This may seem very lowly, but the score isn't too bad for a 16C/16T chip, given that the best single air-cooled E7 4870 v2 in the database is 1662cb.

Nobody could deny that the E7 4870 is the better processor, but so it should be at $4394 (versus the Opteron's $523) and with a 3 year design advantage. Not that Cinebench R15 should ever be used to judge the relative merits of a server processor, but I would argue that the E7's performance advantage doesn't warrant its price tag, and given that our discussion is about whether or not Intel has held the industry back over the years, I would also argue that this tips the balance of the debate against Intel.
 
Honestly, I have no idea. I saw it in a Cinebench R20 results table, and there is one newer Intel above it and one TR below it. That's all I know.

Price tag, pointless discussion. Back then, Intel had its "pricing" on Xeons as always - super high. Compared to today's pricing from AMD, even the Opteron was super overpriced back then.
 
Price tag, pointless discussion. Back then, Intel had its "pricing" on Xeons as always - super high. Compared to today's pricing from AMD, even the Opteron was super overpriced back then.
I would argue that price tags aren't pointless in a discussion as to whether or not a vendor that's had a virtual monopoly on an industry sector has in any way 'held it back.' When Zen first appeared in 2017, products based on the architecture weren't as capable as their equivalent models from Intel - however, they were significantly cheaper. The EPYC 7601 retailed at $4200, whereas the Xeon Platinum 8180 was $10000. If price didn't matter, the 8180 would have been snapped up everywhere, but it wasn't, and that's still the case now, 3 years on. There's not much difference between the products, in terms of cores, threads, clocks, etc: just platform costs.
 
That's why it's pointless. It took an AMD failure, and then an AMD success, to move prices. What more do you (not specifically you) really need to know?! Is it alright? Of course not, but it's in the past. Now the AMD X600 isn't <$200 any longer; it's at where the previous gen X700 was. See the trend?
 
AMD are always going to be caught between a rock and a hard place when it comes to progress and price. In the past, their products were always relatively cheap to manufacture (especially their CPUs) but didn't perform as well as Intel's, etc - hence the lower prices. Now, due to using TSMC for virtually everything, the costs aren't quite as low, but at least the products are competitive or better: so if they kept prices down, their operating margins would be smaller. Sustained, this would lead to a fall in share prices, loss of investment, and so on. AMD only made headway into their colossal debts just over a year ago; their net liabilities are now very healthy, due to a decent cash balance, although it pales in comparison to Intel's.

But during all that time of high debt and poor revenue, they at least continued to offer the market something new - be it custom SoCs for consoles, decent GPUs for compute, or wallet-friendly products for the budget gamer. None of those products, bar the console chips, would have been released to market without knowing exactly how they would fare against the competition.

AMD could have stuck with just being the 'budget choice' for the rest of their existence, but in 2015 they made it clear that this wasn't going to be the case. Now, while their statements primarily revolved around improving performance and the span of their portfolio, those statements could also be seen as a warning that their prices would increase too.

And we have Intel (and Nvidia) to thank for that. Their 2015 Skylake i7-6700K had an RCP of $339; within 3 years, they were asking $499 for the i9-9900K. AMD's A10-7870K, launched in the same year as Skylake, was $137; 3 years after that was the likes of the Ryzen 7 2700X, at $329. Had Intel kept their prices down (and let's face it, they could have easily afforded to do so), AMD would have almost certainly released Ryzen at a lower price.
 