Not Intel, Not AMD: Alternative CPU Architectures from Around the World

"The M1 requires a Rosetta translation layer to convert x86 code into something the M1 can execute. Despite the overhead on legacy applications, the M1 can actually outperform Intel Comet Lake in some x86 workloads. Then in native apps it can demolish the competition."

Who cares about Comet Lake when we have Zen3 and Zen4 upcoming?

Also, how can the M1 outperform anything on native apps, since M1-native apps only run on the M1...
 
"The M1 requires a Rosetta translation layer to convert x86 code into something the M1 can execute. Despite the overhead on legacy applications, the M1 can actually outperform Intel Comet Lake in some x86 workloads. Then in native apps it can demolish the competition."

Who cares about Come Lake when we have Zen3 and Zen4 upcoming.

Also how can M1 outperform anything on Native apps, since M1 native apps only run on M1...
Rosetta 2 is not really a translation layer in the sense that implies JIT compilation: Rosetta 2 translation is done ahead of time, at installation, with a pass that replaces the x86 code with ARM instructions. You could call that a translation layer, but I think the terminology implies something else.
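To make the distinction concrete, here's a toy sketch in Python. Everything in it (the fake "x86" ops, the translation table, the toy stack machine) is made up purely for illustration; it only shows "translate the whole program once, up front, then run native" as opposed to translating while the program runs, and is in no way how Rosetta 2 is actually implemented.

```python
# Hypothetical toy, NOT real Rosetta internals: translate a "guest"
# program to "native" ops once, ahead of time, then execute only the
# translated result.

# A fake "x86" program as (opcode, operand) pairs: push 2, push 3, add.
GUEST_PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD_TOP2", None)]

# Made-up mapping from each guest op to a list of "native" ops.
TRANSLATION_TABLE = {
    "PUSH": lambda arg: [("MOV_IMM", arg), ("STORE", None)],
    "ADD_TOP2": lambda _: [("LOAD2", None), ("ADD", None), ("STORE", None)],
}

def translate_ahead_of_time(program):
    """Run once, up front: emit the whole native program before execution."""
    native = []
    for op, arg in program:
        native.extend(TRANSLATION_TABLE[op](arg))
    return native

def run_native(native):
    """Execute the already-translated program on a toy stack machine."""
    stack, reg = [], None
    for op, arg in native:
        if op == "MOV_IMM":
            reg = arg
        elif op == "STORE":
            stack.append(reg)
        elif op == "LOAD2":
            b, a = stack.pop(), stack.pop()
            reg = (a, b)
        elif op == "ADD":
            reg = reg[0] + reg[1]
    return stack

native = translate_ahead_of_time(GUEST_PROGRAM)  # translation cost paid once
print(run_native(native))  # [5]
```

A JIT-style layer would instead call the translator block by block during execution; Rosetta 2 reportedly keeps a JIT path around for code that is generated at runtime, which is part of why the terminology gets muddy.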

The M1 should be compared to processors of like vintage. If you say Zen3 and Zen4 are coming, you might as well say that the M2 and M3 are coming as well.

Native M1 software outperforms software running on x86 machines of like vintage and specification. There are many native M1 applications because implementation via Xcode is mostly a recompile (OK, not quite that easy, but close). If a vendor is not offering one, what else are they not keeping up with?
 
Also, how can the M1 outperform anything on native apps, since M1-native apps only run on the M1...
Apps do stuff. So an M1 running an app that does something can do it faster than an x86 computer running an app that does the same thing. For example, a benchmark that inverts a large matrix.
Performance can be compared across architectures.
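As a minimal sketch (my own, not from the article) of what such a cross-architecture benchmark could look like: run the same script on an M1 Mac and an x86 laptop and compare the wall-clock times. The matrix size and repeat count here are arbitrary choices.

```python
# Time the inversion of a large random matrix; the same script can be
# run unchanged on ARM and x86 machines and the timings compared.
import time
import numpy as np

def bench_matrix_inverse(n=2000, repeats=5):
    rng = np.random.default_rng(0)  # fixed seed: same matrix on every machine
    a = rng.standard_normal((n, n))
    np.linalg.inv(a)  # warm-up so one-time setup doesn't skew the timings
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.inv(a)
        times.append(time.perf_counter() - t0)
    return min(times)

if __name__ == "__main__":
    print(f"best of {5} runs: {bench_matrix_inverse():.3f} s")
```

One caveat: NumPy links against different BLAS backends on different platforms (OpenBLAS, MKL, or Apple's Accelerate, depending on the build), so a number like this measures the whole stack, not just the CPU core.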
 
M1 compares well with x86 chips in the same category. Let's not forget that Intel is falling behind even AMD here, especially in power usage. The Ryzen 4800U competes pretty well with the M1 on performance while not being on a cutting-edge node.

Let's not forget that the M1 is on TSMC's newest and greatest node; Apple has a large advantage because of this. The Ryzen 4800U is not only an older design but also on an older node. Apple's M1 is packed full of cache, on-package memory, high sustained core clocks, etc. The M1 is a totally different beast from something like the 4800U, which is more of an afterthought: it has old GPU cores, heavily cut-down cache levels, old accelerators, and IO that is a light evolution of what came before. It was a chip designed to be cheap and slot into existing designs.

The M1 is an expensive chip. The 4800U is still faster in multithreaded workloads. Yeah, the M1 can export a video fast, when acceleration is used; toss an Nvidia GPU onto an x86 chip and you can turn the tables really quick.

The new AMD 5800U is overall a faster chip than the M1, but it still falls behind in a few areas. One is single-thread performance: it just doesn't have the sustained clock speed that the M1 has. It will boost higher for short periods of time, and with Apple's M1 already having higher IPC, AMD needs the clock-speed advantage. Thank Apple's node advantage for that. Second is performance per watt: the 5800U really isn't any better than the 4800U. A move to 5nm would help a lot here.

Apple's M1 is not magic; it is just well designed. For someone like AMD, the design is expensive and doesn't really fit the market they were trying to break into. I see AMD and Intel moving in this direction in the future. Intel has been behind the ball, and this is the route Intel should have gone years ago.

What the M1 really shows is how far behind other ARM manufacturers are. Samsung is catching up quickly, but the current choice of ARM silicon for Windows is really a joke. Much-needed improved chips are coming, and they will most likely still be slower than the M1. The Samsung chip with AMD graphics will be one of the first pieces of ARM silicon to really give Apple a run for its money.
 
The M1 should be compared to processors of like vintage. If you say Zen3 and Zen4 are coming, you might as well say that the M2 and M3 are coming as well.

Native M1 software outperforms software running on x86 machines of like vintage and specification. There are many native M1 applications because implementation via Xcode is mostly a recompile (OK, not quite that easy, but close). If a vendor is not offering one, what else are they not keeping up with?
We don't know much about the M2, when it's coming, or even if it's coming, but Zen4 already exists and Zen3 is on sale too.

The point is that a "native" M1 app is not available for AMD or Intel CPUs, so a comparison is pretty much impossible.
Apps do stuff. So an M1 running an app that does something can do it faster than an x86 computer running an app that does the same thing. For example, a benchmark that inverts a large matrix.
Performance can be compared across architectures.
It's quite easy to cherry-pick a "same thing" that Apple just happens to do faster.

In reality, there are virtually no good benchmarks that compare x86 and M1 CPUs. Even the article doesn't mention a single one. If it's supposed to be "faster", it should be very easy to offer something that proves it.

M1 compares well with x86 chips in the same category. Let's not forget that Intel is falling behind even AMD here, especially in power usage. The Ryzen 4800U competes pretty well with the M1 on performance while not being on a cutting-edge node.

Let's not forget that the M1 is on TSMC's newest and greatest node; Apple has a large advantage because of this. The Ryzen 4800U is not only an older design but also on an older node. Apple's M1 is packed full of cache, on-package memory, high sustained core clocks, etc. The M1 is a totally different beast from something like the 4800U, which is more of an afterthought: it has old GPU cores, heavily cut-down cache levels, old accelerators, and IO that is a light evolution of what came before. It was a chip designed to be cheap and slot into existing designs.
Yeah, it's like: the M1 beats old Intel chips that AMD also beats, so the M1 is awesome. "Yeah."

Agreed. Even with those advantages, the M1 has real difficulty beating AMD's latest offerings. It gets even harder if AMD starts to integrate much larger caches and brings out an architecture better suited for mobile (Zen3 is mostly for servers).

The M1 is an expensive chip. The 4800U is still faster in multithreaded workloads. Yeah, the M1 can export a video fast, when acceleration is used; toss an Nvidia GPU onto an x86 chip and you can turn the tables really quick.

The new AMD 5800U is overall a faster chip than the M1, but it still falls behind in a few areas. One is single-thread performance: it just doesn't have the sustained clock speed that the M1 has. It will boost higher for short periods of time, and with Apple's M1 already having higher IPC, AMD needs the clock-speed advantage. Thank Apple's node advantage for that. Second is performance per watt: the 5800U really isn't any better than the 4800U. A move to 5nm would help a lot here.

Apple's M1 is not magic; it is just well designed. For someone like AMD, the design is expensive and doesn't really fit the market they were trying to break into. I see AMD and Intel moving in this direction in the future. Intel has been behind the ball, and this is the route Intel should have gone years ago.
The M1 can be good at video when the separate accelerator block on the M1 SoC is used, but that's not really CPU performance; that's more GPU performance.

The 5800U is designed mostly for multithreading, so it's no surprise it's not that good at single-threaded work. Perhaps a separation between high- and low-performance cores will help there. Still, considering the M1 is designed for only one category (mobile) and also has a manufacturing-process advantage, it's hard to understand where the "M1 will conquer desktops" hype really comes from.

It's much easier to design an architecture for one category than for everything. AMD tries to cover everything with Zen, and Intel with Core. That's about to change, and tbh the M1's market share is still pretty much nonexistent.
 
I remembered that Elbrus was at one time a commercial project, so I checked around to see if there was any information on its architecture. It turns out the chips sold under the Elbrus name have had different architectures: some of the later chips (the Elbrus-90micro line) are SPARC-based, while the modern Elbrus 2000 (e2k) line is a VLIW design.
 
The M1 is powering ALL Macs now, so I wouldn't say it's strictly a mobile chip. Of course, Apple's clear goal is to make desktop Macs just a different form factor for their laptop hardware, so I guess it works either way.
 
Exactly. An M1 "desktop" is essentially a mobile computer in a "desktop" form factor. Yeah, "desktop".
 
This isn't new; Apple has been doing this for 20 years now. The only version of the Mac Mini that has ever had non-laptop-class CPUs is the 2018 model. The Intel iMacs started with laptop-class CPUs and only slowly migrated to desktop-class ones, sometimes retaining a laptop-class one as the entry model, because a good laptop CPU works perfectly fine for the average user's needs.
 
Increased competition is very good news for x86 fans. Both Intel and AMD will have to up their game, and AMD will roll out Zen 4 and Zen 5 sooner.

I wouldn't be surprised if MS's new Windows is "inclusive" of the M-series processors.
 
Also, how can the M1 outperform anything on native apps, since M1-native apps only run on the M1...
My son has one of the new M1-powered Macs to replace his 4-year-old laptop (mid-range i5, 16GB RAM and a 256GB SSD). The performance increase is night and day, while the battery life has gone from 5 hours to around 20 hours. Sure, I don't have any benchmarks to show, but the user-perceived benefits are quite something. Perhaps a 5800U-powered laptop would provide the same jump, but I don't have one on hand.
 
Why is there such a gap between the fabrication processes the current manufacturers are capable of? Intel seems to be on 14nm, AMD is on 7nm, and the M1 is on 5nm. Why is Intel finding it so hard to keep up, and what has enabled Apple (or whoever makes their chips) to hit 5nm? Before anyone says I'm an Apple fanboy, I'll quickly say I've never owned a Mac, and my current Win 10 PC is Intel-based, though my next home build will probably be AMD.
 
My son has one of the new M1-powered Macs to replace his 4-year-old laptop (mid-range i5, 16GB RAM and a 256GB SSD). The performance increase is night and day, while the battery life has gone from 5 hours to around 20 hours. Sure, I don't have any benchmarks to show, but the user-perceived benefits are quite something. Perhaps a 5800U-powered laptop would provide the same jump, but I don't have one on hand.
Four years is a very long time in the mobile space. Even today's top-end Intel mobile CPU is much faster than a 4-year-old Intel mid-range one.
Why is there such a gap between the fabrication processes the current manufacturers are capable of? Intel seems to be on 14nm, AMD is on 7nm, and the M1 is on 5nm. Why is Intel finding it so hard to keep up, and what has enabled Apple (or whoever makes their chips) to hit 5nm? Before anyone says I'm an Apple fanboy, I'll quickly say I've never owned a Mac, and my current Win 10 PC is Intel-based, though my next home build will probably be AMD.
First: nm figures are not comparable between manufacturers.

AMD and Apple both use TSMC. Apple paid TSMC to get 5nm first. AMD will get it soon.

Intel, OTOH, decided to create an ultra-aggressive (high transistor density, etc.) 10nm process that arrived around 3.5 years late and didn't offer what was first promised. Also, Intel's current 14nm is (at least) the third version of its 14nm process.
 
I'm no expert, but nm should just show the size of individual transistors on the die and should be directly comparable between manufacturers. No sane manufacturer would willingly describe their process as worse than the opposition's (Intel 14nm vs AMD 7nm) if they had a choice.

The power consumption and heat dissipation of Intel's chips also point towards an older process that's being pushed to try to keep up. The efficiency of Apple's M1 chip is also apparent from the 20-hour run time on the laptop, and no, they don't use a large battery. I also can't see how you can describe Intel's 10nm process as high-density when it's obviously much lower than Apple's 5nm process or even AMD's 7nm process.

It seems obvious from all their attempts that Intel can't keep up, so my original question still stands: why are they still stuck at 14nm? I'll repeat that I'm not biased towards one manufacturer; I actually have a 10-year-old i5-3570K in my main PC and I'm very happy with it.
 
I'm no expert, but nm should just show the size of individual transistors on the die and should be directly comparable between manufacturers. No sane manufacturer would willingly describe their process as worse than the opposition's (Intel 14nm vs AMD 7nm) if they had a choice.
It should, yes, but it doesn't. Transistor size and the nm number have no correlation, and haven't for years.

Intel is not a sane manufacturer, then: https://en.wikichip.org/wiki/14_nm_lithography_process#Intel
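To put rough numbers on that, here are peak transistor densities as commonly cited from WikiChip; treat them as approximate published figures, not measurements.

```python
# Approximate peak transistor densities (millions of transistors per
# mm^2), figures as commonly cited from WikiChip; ballpark only.
DENSITY_MTR_PER_MM2 = {
    "Intel 14nm": 37.5,
    "Intel 10nm": 100.8,
    "TSMC 7nm (N7)": 91.2,
    "TSMC 5nm (N5)": 171.3,
}

# Print densest first: note that Intel "10nm" lands above TSMC "7nm".
for process, density in sorted(DENSITY_MTR_PER_MM2.items(),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{process:14} {density:6.1f} MTr/mm^2")
```

By the label, Intel sounds a node behind; by density, its "10nm" actually edges out TSMC's "7nm". That's exactly why the names can't be compared.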
The power consumption and heat dissipation of Intel's chips also point towards an older process that's being pushed to try to keep up. The efficiency of Apple's M1 chip is also apparent from the 20-hour run time on the laptop, and no, they don't use a large battery. I also can't see how you can describe Intel's 10nm process as high-density when it's obviously much lower than Apple's 5nm process or even AMD's 7nm process.

It seems obvious from all their attempts that Intel can't keep up, so my original question still stands: why are they still stuck at 14nm? I'll repeat that I'm not biased towards one manufacturer; I actually have a 10-year-old i5-3570K in my main PC and I'm very happy with it.

Of course an older process, especially one pushed to high frequencies, consumes more power.

Intel's 10nm offers roughly the same density as TSMC's 7nm. Sources:


Like I said before, Intel tried to make an ultra-dense 10nm process that suffered many delays. Also, Intel is no longer really stuck on 14nm, but 10nm products still have scaling issues (they don't reach high clock speeds) and problems with yields (not many manufactured parts actually work). More about 10nm: https://fuse.wikichip.org/news/525/...ntels-10nm-switching-to-cobalt-interconnects/

That didn't work, and Intel made at least one less dense 10nm version (possibly more).
 