Lunar Lake-MX leak suggests Intel is serious about "outcompeting" Apple

nanoguy

Something to look forward to: The Pat Gelsinger-powered Intel may not be back to firing on all cylinders just yet, but it is cooking some rather interesting PC hardware in the labs. According to a new leak, Team Blue's Lunar Lake-MX CPUs could be just what the company needs to compete against the rising wave of Arm-based processors for laptops and ultraportable devices in the coming years.

In 2021, Intel CEO Pat Gelsinger proudly discussed the company's long-term plan to win Apple's business back by "outcompeting" it. Ever since he returned to Team Blue, Gelsinger has been on a quest to demonstrate that Arm-based processors aren't a big threat to x86 in the consumer space and that the older architecture can still incorporate plenty of innovations on top of its already solid foundations.

To that end, Intel has been working on refining several key technologies like PowerVia, Foveros, and RibbonFET, as well as adopting a hybrid approach to manufacturing. That means the company that once strove to be at the forefront of process technology is now willing to use competing manufacturing nodes at foundries like TSMC, as opposed to producing everything in-house.

A recent leak by YuuKi_Ans on X.com (formerly Twitter) sheds more light on Team Blue's goals when it comes to low-power architectures like Lunar Lake-MX. While architectures like Arrow Lake will mostly feature improvements meant to boost gaming performance and show off the benefits of Intel's 20A process node and gate-all-around transistor technology, Lunar Lake-MX is shaping up to be more of a direct competitor to Apple M-series chipsets, with a focus on maximizing performance-per-watt for thin and light laptops.

The leaked slides suggest that Lunar Lake-MX will continue the trend of multi-chip design that we've seen with Meteor Lake mobile CPUs. Notably, Intel is taking a page from Apple's silicon book and soldering memory right next to the CPU. It looks like the Lunar Lake-MX UP4 package will be larger than its Alder Lake UP4 and Meteor Lake UP4 counterparts to accommodate two LPDDR5X-8533 memory chips for up to 32 gigabytes of total capacity over a 160-bit, dual-channel interface.
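For context, here's what those leaked memory figures would mean for peak bandwidth – a rough back-of-envelope sketch in Python, assuming 'LPDDR5X-8533' translates to 8,533 MT/s and the quoted 160-bit width is fully usable:

```python
# Back-of-envelope peak bandwidth from the leaked memory figures.
transfers_per_sec = 8533e6   # LPDDR5X-8533 -> 8,533 MT/s (assumption)
bus_width_bits = 160         # per the leaked slides

bandwidth_gb_s = transfers_per_sec * (bus_width_bits / 8) / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")  # ~170.7 GB/s
```

By the same arithmetic, a conventional 128-bit LPDDR5X-8533 configuration would top out around 136.5 GB/s.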

Interestingly, the minimum memory capacity will be 16 gigabytes, which is a good amount considering you won't be able to add more after purchasing a device incorporating such a CPU. Meanwhile, Apple is still trying to justify including a more modest eight gigabytes of unified memory on base-level M3 chipsets, using questionable claims of higher "efficiency" when compared to standard PC hardware with double the memory.

The compute muscle of Lunar Lake will feature four performance cores (Lion Cove) and four efficiency cores (Skymont) along with eight megabytes of "side cache," a faster neural processing unit (NPU 4.0), and eight Xe2 Battlemage graphics cores (64 vector engines with up to 2.5 teraflops of raw performance) with DirectX 12 Ultimate support – all linked together via a "North Fabric." The SoC tile will essentially be the good ol' PCH, but will notably support PCIe 5.0, hardware-accelerated storage encryption, and up to three USB4/Thunderbolt 4 ports.
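As a sanity check on that 2.5-teraflop figure, the usual peak-FP32 arithmetic can tell us what GPU clock it implies – a rough sketch assuming 16-wide FP32 vector engines that each retire one fused multiply-add (two FLOPs) per lane per clock, neither of which is confirmed by the leak:

```python
# Solve for the GPU clock implied by the quoted peak-FP32 number.
vector_engines = 64   # per the leaked slides
simd_width = 16       # assumed lanes per vector engine
flops_per_lane = 2    # assumed FMA = multiply + add per clock

quoted_tflops = 2.5
implied_ghz = quoted_tflops * 1e12 / (vector_engines * simd_width * flops_per_lane) / 1e9
print(f"Implied GPU clock: {implied_ghz:.2f} GHz")  # ~1.22 GHz
```

A clock around 1.2 GHz is plausible for a low-power integrated GPU, so the leaked numbers are at least internally consistent.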

The slides suggest there will be four variants of Lunar Lake processors targeting base power envelopes between 8 W and 30 W – two models of Core 7 and two models of Core 5 with either 16 or 32 gigabytes of memory and slightly different iGPU and NPU configurations. The 8-watt versions of Lunar Lake will supposedly be able to operate in completely fanless devices.

If the leak is any indication, Intel will use TSMC's "N3B" node to make the CPU tile for Lunar Lake UP4 packages, while the upcoming Meteor Lake UP4 designs will be manufactured on the Intel 4 node.

Lunar Lake-MX processors are expected to debut sometime in 2024 or early 2025. By that time, it's possible we'll also see a new M-series chipset from Apple as well as the much-awaited Snapdragon X Elite from Qualcomm. It will be interesting to see how Intel's x86 designs stack up against those offerings. The compact design makes us think Lunar Lake-MX may also end up in some next-generation handheld consoles, which should lead to more competition in that space.


 
Memory is so cheap that going from 8GB to 16GB is a non-issue. If consumers want it, why not give it to them? Anyone who ACTUALLY needs lots of RAM is going to want 64GB+. That said, I really wish AMD would start adding support for ECC memory on Ryzen CPUs, because I've seen too many issues when running 64+ gigs of unbuffered memory. If we're going to be limited in speed ANYWAY when using all 4 slots in a Ryzen system, we should at least be able to use ECC on them.
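For what it's worth, you can at least verify whether ECC is actually doing anything on a Linux box through the kernel's EDAC sysfs interface – a minimal sketch, assuming an EDAC driver is loaded for your memory controller:

```python
# EDAC only registers mc0 when an ECC-capable memory controller is active.
from pathlib import Path

mc = Path("/sys/devices/system/edac/mc/mc0")
if mc.is_dir():
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"ECC active – corrected: {ce}, uncorrected: {ue}")
else:
    print("No EDAC memory controller found; ECC is likely not active.")
```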
 
Wanting to compete is one thing. Actually doing it is another. Given Intel's node restrictions, are they actually going to get an 8-core Xe2 GPU functioning in a 30-watt TDP? One of the things that makes the M series so interesting is its ability to run full throttle on battery, something Intel can't do.

If they can pull it off, it'll be great, but I won't hold my breath.
 
Wanting to compete is one thing. Actually doing it is another. Given Intel's node restrictions, are they actually going to get an 8-core Xe2 GPU functioning in a 30-watt TDP? One of the things that makes the M series so interesting is its ability to run full throttle on battery, something Intel can't do.

If they can pull it off, it'll be great, but I won't hold my breath.
Intel is working on some very interesting transistor tech right now. Even if they get stuck on an older node, the performance increase they get on any node they use could exceed anything produced on TSMC's latest node. Intel has been working on their CFET with GAA for almost 15 years now and claims to be about 5 years away. Meanwhile, TSMC is at least a decade from being able to produce anything using these techniques on their nodes. This is the main reason they were stuck on 14nm for a while.

We are going to see some very interesting things coming from Intel in the next 5 years. If they master CFET, then Nvidia and AMD are going to be behind in anything they produce well into the 2030s. It'll be sad to see Intel dethrone AMD, but I would really enjoy seeing them kick Nvidia in the chest using the dumpster fire that is Arc.

I do have a soft spot for Arc GPUs and I've been thinking about buying one just to play around with. We've had two GPU makers for over 20 years now. I do, however, have a hard time seeing such a purchase as anything other than a massive waste of money...
 
Intel is working on some very interesting transistor tech right now. Even if they get stuck on an older node, the performance increase they get on any node they use could exceed anything produced on TSMC's latest node. Intel has been working on their CFET with GAA for almost 15 years now and claims to be about 5 years away. Meanwhile, TSMC is at least a decade from being able to produce anything using these techniques on their nodes. This is the main reason they were stuck on 14nm for a while.

We are going to see some very interesting things coming from Intel in the next 5 years. If they master CFET, then Nvidia and AMD are going to be behind in anything they produce well into the 2030s. It'll be sad to see Intel dethrone AMD, but I would really enjoy seeing them kick Nvidia in the chest using the dumpster fire that is Arc.

I do have a soft spot for Arc GPUs and I've been thinking about buying one just to play around with. We've had two GPU makers for over 20 years now. I do, however, have a hard time seeing such a purchase as anything other than a massive waste of money...
Yeah, just like solid-state batteries, cold fusion, and flying cars are just "5 years" away. What we've actually SEEN from Intel was a grand total of one (1) new architecture, with a half-arsed launch with Rocket Lake, a decent show with Alder Lake, a lukewarm Raptor Lake, and now a refresh because they just can't get anything new out. Meteor Lake has been repeatedly delayed, and all their new tech comes with huge power draws. Their answer to everything has been to shove in more "e" cores and jack up the power use.

I'll believe Intel is making advancements when we actually start to see improvements. As it stands, Intel has been floundering since the Zen launch and stagnant since Skylake, with the only real improvement being Alder Lake.
 
Yeah, just like solid-state batteries, cold fusion, and flying cars are just "5 years" away. What we've actually SEEN from Intel was a grand total of one (1) new architecture, with a half-arsed launch with Rocket Lake, a decent show with Alder Lake, a lukewarm Raptor Lake, and now a refresh because they just can't get anything new out. Meteor Lake has been repeatedly delayed, and all their new tech comes with huge power draws. Their answer to everything has been to shove in more "e" cores and jack up the power use.

I'll believe Intel is making advancements when we actually start to see improvements. As it stands, Intel has been floundering since the Zen launch and stagnant since Skylake, with the only real improvement being Alder Lake.
In the same way FinFET technology is used in everything today, from Nvidia to Apple M chips, CFET will be the next step in chip design, and Intel is about 15 years ahead of everyone else.

And I can't believe you, of all people, would compare CFET to cold fusion or flying cars. CFET is the next step in transistor design and it's going to make AMD's 3D V-Cache look like an Arduino.

The reason Intel's chips are using so much power is that the part of the CFET design that allows faster switching in the transistors works; it's the GAA power delivery system they are trying to get working that is causing the massive power draws. A GAA design would cut the power requirements by a massive amount while also allowing ridiculous frequencies that only extreme overclockers can dream of. And the final part of the design would allow them to have a "3D V-Cache" without having to physically add a module like AMD does. In theory, Intel could put as many transistors as they wanted into a given space by scaling vertically.

I very much am an AMD guy, but we are seeing steady progress every generation as Intel inches closer to a CFET/GAA design. I would also like to add that TSMC, Samsung, and IBM are also hard at work on CFET/GAA chips, but Intel is the only one with chips on the market incorporating aspects of the design right now.

Not only could this put off Moore's law for several decades, we could have a 10-year performance jump in a single generation.

The only thing to beat it would be photonics, and that's a lot more like cold fusion. Keep in mind, we've been using FinFET chips since 2013 and everyone is using them today. There hasn't been an industry-wide change in transistor design for over a decade now. Everyone's working on it, but Intel is leading the pack. This is the reason for all the "+" marks at the end of all of Intel's 14nm nodes. It doesn't work yet, but they've found ways to incorporate new aspects of the design into every chip they've released since the 6700K.
 
Wanting to compete is one thing. Actually doing it is another. Given Intel's node restrictions, are they actually going to get an 8-core Xe2 GPU functioning in a 30-watt TDP? One of the things that makes the M series so interesting is its ability to run full throttle on battery, something Intel can't do.

If they can pull it off, it'll be great, but I won't hold my breath.
Apple most likely spent years making their M CPUs specifically the way they are now, both cool and powerful.
I think Intel can too, especially once they use TSMC's resources and the same stuff Apple gets from them.
But it won't come soon. I would be very surprised if it did.
 
Let's see what Meteor Lake delivers first. Leaks are already showing that Meteor Lake is roughly as fast as Raptor Lake equivalents while using far less power. IIRC, ML at 65-75W is as performant as Raptor Lake mobile in the 120-130W range. If this is actually true, that's a huge improvement. Lunar Lake is specifically a U-series evolution of ML, so for ultrathin devices and the like it should be a decent improvement over ML U in power and performance. I am worried, though, that Intel would consider using the widely panned N3B node, which has shown minimal gains over N4P in any metric, when most companies are at least waiting for the greatly improved N3E. N3B is also far costlier than N4P. Either these slides are old or Intel has money to burn (spoiler alert – they don't).
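Taking those rumored numbers at face value, the implied iso-performance efficiency gain is easy to bound – a rough sketch, nothing more:

```python
# Bound the efficiency gain implied by the rumored figures:
# Meteor Lake at 65-75 W matching Raptor Lake mobile at 120-130 W.
ml_watts = (65, 75)
rpl_watts = (120, 130)

worst = rpl_watts[0] / ml_watts[1]  # 120 W vs 75 W
best = rpl_watts[1] / ml_watts[0]   # 130 W vs 65 W
print(f"Same performance at {worst:.1f}x to {best:.1f}x lower power")  # ~1.6x to 2.0x
```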
 
Memory is so cheap that going from 8GB to 16GB is a non-issue. If consumers want it, why not give it to them? Anyone who ACTUALLY needs lots of RAM is going to want 64GB+. That said, I really wish AMD would start adding support for ECC memory on Ryzen CPUs, because I've seen too many issues when running 64+ gigs of unbuffered memory. If we're going to be limited in speed ANYWAY when using all 4 slots in a Ryzen system, we should at least be able to use ECC on them.

If it's ECC support you're looking for, that's Ryzen Pro or above, not consumer Ryzen CPUs.
 
Intel is never going to win back Apple as a customer, and the sooner they realize that, the better. Two things would have to happen for Apple to come back to Intel. First, Intel would have to produce considerably better/faster chips than what Apple can produce. Second, Intel would have to sell those chips cheaper than what Apple is currently paying to source their chips from TSMC.

The first part is not impossible, although highly unlikely to ever happen. Apple has poured billions into R&D and production of its latest chips and would need unbelievable incentives from Intel to simply throw all that away. Meanwhile, Intel is still playing catch-up. Their process nodes are laughably behind TSMC's, and they're only now starting to take the Arm architecture seriously. Their dorky CEO talks a big game, but the proof will be in the pudding.

The second part is pretty much impossible. This is the main reason why Apple didn't go with Intel chips way back in the first place. The story goes that Steve Jobs wanted to pay a certain price for Intel's chips and Intel was not willing to go that low. Not much has changed. If anything, Apple will want even more favorable pricing now. Intel wants healthy profit margins and Apple wants chips dirt cheap. What happens when an unstoppable force meets an immovable object?
 
I hope these will be good and not just marketing hype. Gaming laptops aside, battery life is the most important thing to me in a laptop these days, as long as performance is adequate. Even if they don't win back Apple (the company), they can give customers a reason not to switch to Apple when battery life is the most important feature to them.
 
In the same way FinFET technology is used in everything today, from Nvidia to Apple M chips, CFET will be the next step in chip design, and Intel is about 15 years ahead of everyone else.

And I can't believe you, of all people, would compare CFET to cold fusion or flying cars. CFET is the next step in transistor design and it's going to make AMD's 3D V-Cache look like an Arduino.

The reason Intel's chips are using so much power is that the part of the CFET design that allows faster switching in the transistors works; it's the GAA power delivery system they are trying to get working that is causing the massive power draws. A GAA design would cut the power requirements by a massive amount while also allowing ridiculous frequencies that only extreme overclockers can dream of. And the final part of the design would allow them to have a "3D V-Cache" without having to physically add a module like AMD does. In theory, Intel could put as many transistors as they wanted into a given space by scaling vertically.

I very much am an AMD guy, but we are seeing steady progress every generation as Intel inches closer to a CFET/GAA design. I would also like to add that TSMC, Samsung, and IBM are also hard at work on CFET/GAA chips, but Intel is the only one with chips on the market incorporating aspects of the design right now.

Not only could this put off Moore's law for several decades, we could have a 10-year performance jump in a single generation.

The only thing to beat it would be photonics, and that's a lot more like cold fusion. Keep in mind, we've been using FinFET chips since 2013 and everyone is using them today. There hasn't been an industry-wide change in transistor design for over a decade now. Everyone's working on it, but Intel is leading the pack. This is the reason for all the "+" marks at the end of all of Intel's 14nm nodes. It doesn't work yet, but they've found ways to incorporate new aspects of the design into every chip they've released since the 6700K.
It's unlikely that Intel is ahead of TSMC, as TSMC also already has CFET working in the lab but has stated it will only be implemented and used widely once we get to around 1nm. So we'll have several generations of CPUs until then.

Intel's CFET is also years away. We saw a slide from them this year with CFET's date listed as "FUTURE," after 2024's RibbonFET, which will be used for the 20A process node. Realistically, we'll see CFET in consumer CPUs maybe after 2030 (or 2029 if all the planets align and the stars shine bright).

So don't cry for AMD and everybody else, they have the time and money to stay ahead of Intel, especially with the wild rumours of Meteor Lake being a bust. :)
 