As Alder Lake brings big.LITTLE to desktop PCs, Intel highlights benefits of software...

The big picture: While everyone who closely tracks the tech industry understands there’s an important link between today’s latest semiconductor chips and software application performance, there does seem to be a new, deeper level of connection being developed between the two. From cloud-based chip emulation kits to AI framework enhancements to new tools for creating applications that function across network connections, we appear to be embarking on a new era of silicon-focused software optimizations being created -- surprise, surprise -- by chip makers themselves.

The first Intel On event represents a rejuvenation of Intel’s former IDF, or Intel Developer Forum, and the company used it to highlight the importance of software optimization through a series of announcements. Collectively, these announcements -- including the launch of an updated Developer Zone website -- re-emphasized Intel’s desire to put developers of all types and skill levels at the forefront of its efforts.

Intel’s focus on its oneAPI platform highlights the increasingly essential role that software tools play in getting the most from today’s silicon advancements. First introduced last year, oneAPI is designed to dramatically ease the process of writing applications that can leverage x86 CPUs as well as GPUs and other types of accelerators via an open, unified programming environment. The forthcoming 2022 version is expected to be released next month with over 900 new features, including things like a unified C++/SYCL/Fortran compiler and support for Data Parallel Python.
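To make that model concrete, here’s a minimal sketch of the kind of code oneAPI is built around -- a generic SYCL 2020 vector add written for a DPC++-style compiler (e.g., icpx -fsycl). This is an illustration of the programming model, not Intel sample code; the point is that the same kernel source can be dispatched to a CPU, a GPU, or another supported accelerator:

```cpp
// Minimal SYCL 2020 sketch: one kernel source, any supported device.
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The queue binds to whatever device the runtime selects:
    // an x86 CPU, an Intel GPU, or another accelerator.
    sycl::queue q;

    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            // The same kernel body runs unchanged on any supported device.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffers go out of scope here, copying results back to the host

    std::cout << "Ran on: "
              << q.get_device().get_info<sycl::info::device::name>()
              << ", c[0] = " << c[0] << '\n'; // expect 3
}
```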

Given the increasing sophistication of CPUs and GPUs, it’s easy to see why new developer tools are necessary to not only more fully leverage these individual components but also to take advantage of the capabilities made possible by combinations of these chips. While Intel has always had a robust set of in-house software developers and has built advanced tools like compilers that play an important role in application and other software development, the company has been focused on a tiny, elite fraction of the total developer market. To make a bigger impact, it is important for the company to create software tools that can be used by a much broader set of the developer population. As Intel itself put it, the company wants to “meet developers wherever they are” on the skill and experience front.

As a result, Intel is doing things like increasing the number of oneAPI Centers of Excellence to provide more locations where people can learn the skills they need to best utilize oneAPI. In addition, the company announced that it was building new acceleration engines and new optimized versions of popular AI toolkits, all designed to make the best use of its forthcoming datacenter-focused Xeon Scalable processors.

The company also talked about cloud-based chip emulation environments that can enable a wider array of programmers to build applications for various types of accelerators without needing to have physical access to them.

Many of these developments are clearly driven by the increased complexity of various individual chip architectures. While architectural enhancements in these semiconductors have obviously improved performance overall, even the most experienced programmers are unlikely to be able to keep track of all the functionality that’s been enabled.

Toss in the possibility of creating applications that can leverage multiple chip types and, well, it’s easy to imagine how things could get overwhelming pretty quickly. It’s also easy to understand why few applications really take full advantage of today’s latest CPUs, GPUs, and other chips. Conversely, it’s not hard to understand why many applications (especially the ones that aren’t kept up-to-date on a regular basis) don’t perform as well as they could with some new chips.

Taking a step back, what’s interesting about these Intel efforts is that they seem to be a logical extension of similar moves recently announced by Arm and Apple, two other silicon-focused companies (each in its own unique way). At Arm’s Developer Summit last week, the company described its own efforts to create cloud-based versions of virtual chip hardware as part of its Total Solutions for IoT effort and its new Project Centauri. As with Intel, Arm’s goal is to increase the range of developers who can potentially write applications for its architecture.

In terms of chip-focused software optimizations, early benchmark data seems to suggest that Apple’s latest Arm-based M1 Pro and M1 Max chips offer extremely impressive results with Apple’s own applications -- which were undoubtedly optimized for these new chips. On other types of non-optimized applications and workloads, however, the results appear to be a bit more modest. This simply highlights how achieving the best possible performance for a given chip architecture requires an increased amount of software optimization.

Now, some might argue that the hardware-agnostic (or at least hardware-ambivalent) nature of Intel’s oneAPI is at direct odds with the highly tuned, chip-specific software optimizations that some of Intel’s other new efforts incorporate. At a foundational level, however, they’re all highlighting a great deal of low-level software enhancement that the increasingly sophisticated new chip architectures (such as Intel’s new 12th-gen Core CPUs) demand in order to get the best possible performance.

To put it simply, the pure, across-the-board performance improvements that we’ve enjoyed from hardware-based chip advances are getting harder to achieve. As a result, it’s going to take increasingly sophisticated and well-optimized software to keep computing performance advancing at the level to which we’ve become accustomed. In that light, Intel’s latest efforts will likely bring more attention to the platform in the short term but deliver even more meaningful impact over time.

Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter.

Would this be Intel innovating? I guess AMD innovated by using the whole chiplet thing and not having a monolithic die. Good to see both companies working hard and keeping each other in check.

This to me feels like a response to Apple's solutions. Maybe even part of Gelsinger's supposed strategy to win Apple back.
 
I don't think they can win Apple back, unless they offer a very high-end solution. I think it would have happened independent of Apple, though Apple has sped it up a huge amount. These companies must have a greater idea of the future than us.
I just think the need for custom, very efficient and/or high-powered chips/SoCs etc. is going to increase a thousandfold in the next ten years. No one wants the Roomba smearing droppings into the carpet, or grandma's arm broken by a robot assist.
Something big will happen in the toy market -- hobbyists are already building semi-autonomous modular parts that can be put together. That's a $20 billion industry right there. Buy a base set for $200, with lots and lots of upgrades and add-ons.
 
The only reason I could see Apple ditching their own stuff and going for anyone third party is if that third party can create something significantly better than what Apple can make themselves. With the current state of Intel and AMD chips, there would need to be a massive increase in efficiency to even match Apple right now, let alone provide something better. They are at a disadvantage with x86 as well, meaning Apple would have to change its software back to x86 to use Intel or AMD, which they won't do lightly. So unless Intel can magic up an ARM-based masterpiece, I really can't see Apple going back to Intel at all.
 
Windows 11 ships with AMD performance crippled, yet no doubt M$ is spending a million man-hours fine-tuning Intel performance.

It's fixed already, calm down. AMD will come out with a hybrid design themselves; it's just a matter of time (they've said so officially).

If they don't, AMD would lose the entire mobile market, because hybrid design is going to be huge here (though Intel already sits on most of this market as it is -- especially enterprise).

Even for desktops, big.LITTLE makes sense if software is tweaked for it. This means that hybrid CPUs are only going to get better going forward, as software is optimized for this approach.

The idea is that the OS and background tasks use only the efficiency cores, even while doing stuff like gaming -- meaning, for example, the game can utilize 100% of the performance cores, which is not the case today. That means higher minimum fps, less stutter, etc.
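To give a rough idea of what that cooperation looks like in code (my own sketch using the documented Win32 power-throttling API, not anything Intel or Microsoft shipped specifically for Alder Lake), an app can tag its background threads as low-priority "Eco" work, which the Windows 11 scheduler can treat as a hint to keep them on efficiency cores:

```cpp
// Sketch, assuming Windows 11 on a hybrid CPU: tag a thread with power
// throttling ("EcoQoS") so the scheduler prefers efficiency cores for it.
#include <windows.h>

void MarkThreadAsBackground(HANDLE thread) {
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED; // opt in
    // This is only a hint; the scheduler still decides where threads run.
    SetThreadInformation(thread, ThreadPowerThrottling, &state, sizeof(state));
}

// e.g., from an asset-streaming or telemetry thread:
//   MarkThreadAsBackground(GetCurrentThread());
```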

The i7-12700K for 400 dollars seems like a great gaming chip; we will see soon. Even the i5-12600K should be.

The i9-12900K seems like a bad buy for most, unless it's binned or something.
 
I wonder, though, when games at least will likely be pretty well optimized for the Zen architectures, since that's what the consoles use.
 
That only applies if you have an old quad-core chip. If you have a modern 6- or 8-core CPU, the OS already has exclusive high-performance cores for itself, and the games have exclusive cores for them as well. Most games only use at most 6 cores, usually closer to 4, with any significance.

The fact that Intel was only showing 8-13% higher performance than Zen 3, while using a Windows 11 build that was pre-Zen-cache-fix, speaks volumes.
 

Haha, no it does not. Everything that needs CPU cycles from the OS and background system will (obviously) be done on the CPU, and games don't have exclusive access to the CPU today; it's shared across all tasks. Tons of games use more than 6 cores, especially when paired with a higher-end GPU. That is why the 5800X performs better than the 5600X clock for clock in games, especially for minimum fps.

With hybrid design and proper coding, games will have 100% control over the performance cores while the efficiency cores handle all the other stuff running in the background: higher minimum fps and generally better performance.
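For what it's worth, software can already tell which kind of core it's on. Here's a rough sketch (my own illustration using Intel's documented CPUID hybrid leaf, not code from any actual game) of the check an engine could do before deciding where to put work:

```cpp
// Sketch: query Intel's CPUID hybrid leaf (0x1A) for the core type of the
// logical processor the current thread is running on. MSVC intrinsics assumed.
#include <intrin.h>
#include <cstdio>

int main() {
    int regs[4];
    __cpuid(regs, 0);                            // EAX = highest supported leaf
    if (regs[0] < 0x1A) {
        std::printf("CPU has no hybrid information leaf\n");
        return 0;
    }
    __cpuidex(regs, 0x1A, 0);
    const int coreType = (regs[0] >> 24) & 0xFF; // EAX[31:24] = core type
    if (coreType == 0x40)      std::printf("Performance (P) core\n");
    else if (coreType == 0x20) std::printf("Efficiency (E) core\n");
    else                       std::printf("Unknown / not a hybrid part\n");
}
```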

Hybrid design is the future and AMD will do their own CPUs using this approach too. Windows 11 is built for this.

It will take months before software is optimized properly for hybrid, but over time it will get better and better. I won't touch Alder Lake, but I might buy 13th gen if it handily beats Zen 4 in gaming and emulation, which is all I care about on my home PC.
 
"AMD and Microsoft jointly made this discovery, and listed out potential impact on application performance."
-TPU

This isn't AMD's first rodeo with the 5000 series. People have selective memories though. Good thing I'm here!
 