Here are the first benchmarks of Apple's upcoming M2 chip

Tudor Cibean

Something to look forward to: Even if Apple's M2 series of SoCs doesn't deliver significantly improved performance compared to the M1, it will probably still be among the most efficient chipsets found in today's laptops, beating most if not all of its x86-based competitors in that department.

Last week, Apple unveiled its new M2 SoC, which will start shipping in the redesigned MacBook Air and 13-inch MacBook Pro next month. Fortunately, we don't have to wait that long to get a sneak peek at their performance, as someone with access to the laptops has run Geekbench 5 on them.

Starting with the CPU results, the M2 got a score of 1,919 in the single-threaded test and 8,929 in the multi-threaded assessment. That's an improvement of 11 percent and 18 percent, respectively, compared to the M1 in the 2020 13-inch MBP.

As expected, there isn't a massive leap in CPU performance. The M2 uses the same "Avalanche" and "Blizzard" microarchitectures as the A15 Bionic found in the iPhone 13 series. These don't feature significant IPC gains over previous-gen architectures, instead relying on larger caches, faster LPDDR5 memory, and higher clock rates thanks to TSMC's N5P process node.

However, the GPU results look more promising. In the Geekbench 5 Metal benchmark, the M2 scored 30,627, a whopping 43 percent more than the M1 equipped with eight GPU cores. It's unknown whether the M2 tested here is the eight-core GPU variant or the 10-core one, although my money is on the latter considering the massive performance difference.
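As a rough sanity check on those uplift figures, the M1 baselines they imply can be backed out from the M2 scores alone. Here is a minimal sketch using only the numbers quoted above; the derived M1 figures are approximations implied by the stated percentages, not measured results.

```python
# Implied M1 baselines from the reported M2 scores and percentage uplifts
m2_scores = {"single-core": 1919, "multi-core": 8929, "Metal": 30627}
uplifts = {"single-core": 0.11, "multi-core": 0.18, "Metal": 0.43}

for test, score in m2_scores.items():
    implied_m1 = score / (1 + uplifts[test])
    print(f"{test}: implied M1 score of roughly {implied_m1:.0f}")
# single-core ~1729, multi-core ~7567, Metal ~21418
```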

It's worth noting that Geekbench isn't the best benchmark for gauging real-world performance. You can check out our review of the M1 Pro to see how well it performs across a variety of applications and even games.


 
Nice performance jump considering how cool and power-efficient these chips are.

Nevertheless, I own an M1, and Apple's issues lie on the software side. With little to almost no AAA gaming or high-end 3D/CAD software supported (Apple Arcade is laughable), even if it had RTX 4090 speeds it would be of no use to anyone but the marketing team...
 
I continue to see these M-series chips as the potential future for the Windows side of things if some company finally gets its act together to design something maybe just a little different. And that company writes good DX11/DX12/Vulkan/OpenGL drivers. And MS releases full-featured ARM Windows and Office. And game makers release ARM-native versions. And someone releases an x64/x86 emulation layer as good as Rosetta. And...

Just wait longer!
 

Intel won't really have an answer until Arrow Lake, and AMD, while in a better position, will still need Zen 5. By then we will be seeing the M3, maybe the M4 by the time Arrow Lake actually ships.
 

Apple started sooner, so it will be ahead sooner. But Apple has no Windows, so it runs in parallel, not with or against "Windows" or server chip makers (Apple makes consumer products). It just shows how far behind most chip makers are: whether it's 7, 5, or 2 nm, the design still means a lot.

x86 is very powerful, but it should be discontinued; it is too complicated. Apple entered the desktop market two years ago and already has the best chip in town. The GPU is more powerful per watt than RDNA 2, and the CPU is more powerful per watt than any current Intel/AMD chip. By the time Xe II and RDNA 3 appear, Apple will be on the M3, and so on...

And Qualcomm is taking a hit: the best ARM chipmaker until Apple came along is getting a lesson on both mobile and desktop. Qualcomm is being beaten hard...
 


I can't agree with this sentiment because on desktop I don't really care about performance per watt, I care about performance (within reason, i.e. within the standard CPU and GPU desktop power envelopes, which granted may be a sliding scale at this point for GPUs).

The fact that Apple, with more resources than anybody, is beaten by the best Ryzen and Intel CPUs and Radeon and Nvidia GPUs is damning, IMO. Even though the M-series is great, no question, the thing everybody really cares about is who is fastest at the top end, and Apple is not. A good example is AMD vs. Nvidia: AMD usually had competitive or faster GPUs at every price point. The issue was/is that Nvidia typically had THE fastest GPU at the top of the stack, and there's no substitute for that.
 
Intel won't really have an answer until Arrow Lake, and AMD, while in a better position, will still need Zen 5. By then we will be seeing the M3, maybe the M4 by the time Arrow Lake actually ships.
AMD's answer will be next year with their Zen 4 mobile chips. The 6000 series is already very close to the M1 in terms of performance and efficiency, with the M1 being better in single-threaded efficiency but around the same in multi-threaded. The process node helps Apple a lot.

I don't really like using comparison websites, but the benchmark results for Cinebench and Geekbench here seem to be consistent with what I've seen in other places:
1. M1 vs 6800U

2. M1 Pro vs 6900HS

It seems that the M1 architecture really likes Geekbench. Here are some more 6900HS vs M1 Pro benches:

For light workloads the M1 is a beast in terms of battery life. For more intensive workloads, it's pretty much on par.
 
I can't agree with this sentiment because on desktop I don't really care about performance per watt, I care about performance (within reason, i.e. within the standard CPU and GPU desktop power envelopes, which granted may be a sliding scale at this point for GPUs).

The fact that Apple, with more resources than anybody, is beaten by the best Ryzen and Intel CPUs and Radeon and Nvidia GPUs is damning, IMO. Even though the M-series is great, no question, the thing everybody really cares about is who is fastest at the top end, and Apple is not. A good example is AMD vs. Nvidia: AMD usually had competitive or faster GPUs at every price point. The issue was/is that Nvidia typically had THE fastest GPU at the top of the stack, and there's no substitute for that.
You may personally not care about performance/watt, but it's an important metric.

Ryzen and GeForce might still take the raw performance crowns, but they do so at 2-3x the power usage and generate far more heat in the process. As someone whose office turns into a sauna after an hour of PC gaming, I'd love to see Nvidia and AMD pursue some efficiency instead of driving us into the 400-watt GPU era.
 
Solid but unimpressive, yes, even for the GPU (should we be calling this an APU? Because of the unified memory architecture it feels more like an APU than a GPU, but I digress). While 40 percent better is a significant generational update, we typically see 20 to 30 percent generational updates in the same price category from the dedicated GPU vendors (taking into account that Nvidia leads the pack and always bumps the price tiers up a little more each time). So while Apple might fare better on raw GPU performance this time around, it remains to be seen if the M2 is actually superior to dedicated Nvidia chips or the Zen 4/RDNA 3 line AMD is getting ready as well.

Again, the best part of the M2 and Apple silicon in general is turning out to be not really the hardware itself, which is rapidly becoming competitive with the top of the x86 line, don't get me wrong, but isn't inherently better. The better part is the optimization they're achieving, especially when they control the software side: Final Cut, for example, just flies far ahead of competing software on similar hardware unless you really crank up the hardware.

There is something to be said for the very specific customer Apple is going after, which is aspiring content creators and such, when they can easily say, "Yeah, you could build an expensive x86-64 video rendering workstation PC, or you could just buy this thin, fanless M2 laptop that competes well against those massive GPU-equipped rigs and huge, expensive laptops." That's very compelling.

But the thing is, the raw performance isn't there to make it extremely compelling for everyone else, and the software is limited, so it won't let them grow much beyond the 'creative professionals' niche. It seems like a waste that they refuse to open up their ecosystem even slightly to make it more appealing to people overall, but nope, they stick to stuff like: "These machines can game! Make sure you code your game in Metal. No, we will NOT support any other API, next question."
 
You may personally not care about performance/watt, but it's an important metric.

Ryzen and GeForce might still take the raw performance crowns, but they do so at 2-3x the power usage and generate far more heat in the process. As someone whose office turns into a sauna after an hour of PC gaming, I'd love to see Nvidia and AMD pursue some efficiency instead of driving us into the 400-watt GPU era.

Even the M1 Ultra has trouble competing with a 3060 Ti when it comes to GPU gaming workloads, despite the M1 Ultra GPU's massive transistor advantage.

The M1 chips are not small chips either. Apple makes large chips with high transistor counts, and clock speeds are kept low because of this.

AMD and Intel try to pack as much performance as they can into as few transistors as possible. Die area is a big deal, and the more space a chip uses, the fewer dies you get from a wafer.
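To put some rough numbers on that die-area point, here is a sketch using the classic dies-per-wafer approximation. The die sizes below are approximate figures from public die-size estimates, not anything stated in this thread, and defect yield is ignored entirely.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic approximation: gross candidate dies from a round wafer, minus edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Approximate die sizes (estimates, not official): M1 ~119 mm^2, M1 Max ~432 mm^2
print(dies_per_wafer(119))  # ~530 candidate dies per 300 mm wafer
print(dies_per_wafer(432))  # ~130 candidate dies per 300 mm wafer
```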

Apple's M1 chips are not cheap to make. They use a massively superior node compared to anything else in the industry, and they are made at low volume, especially the Pro and better models. They are the best SoCs in the ultraportable segment, hands down.

But even scaled up to the massive levels seen with the M1 Ultra, the chip is not impressive in every workload. Actually, the notable workload improvements come more from having more media engines than from having more CPU cores. Synthetic benchmarks favor some aspects of the chip, but those really are not real workloads.


x86 will continue to be the most powerful platform for software. AMD and Nvidia GPUs are still the performance kings.

The biggest issue with the M1 is not even the hardware, it is macOS. Sure, it works great for Mac workloads, but if the goal is to get Windows x86 apps running, good luck; it's very hit or miss. The problem is that macOS on x86 was already a crappy platform for trying to run Windows apps; Linux was by far a better platform for those workloads, and Linux on Arm has a lot of room to grow. Windows on Arm, on the other hand, is now much improved, with x64 apps supported, most Windows x86 apps working fine on Arm, and good performance. Gaming on Apple's Arm chips on the desktop is going to rely on getting Windows onto those devices.
 
If Apple cared, they could make one killer Apple TV game console.
This would require Apple to accept an entirely different business model, though. Microsoft and Sony are willing to accept zero profit margins on the consoles (or even losses), because of the substantial licence fee they pull in from each game sold on their platforms.

Apple, on the other hand, expects a large margin on all their hardware. Nintendo does too, of course, but they're not shipping hardware packed with the latest custom-designed chips - the Switch uses a seven-year-old SoC.

Even the M1 Ultra has trouble competing with a 3060 Ti when it comes to GPU gaming workloads, despite the M1 Ultra GPU's massive transistor advantage.
Has Apple said how many transistors the GPU is using? I know the difference between the M1 Pro and M1 Max is 2 CPU cores, 16 GPU cores, 2 memory controllers, and 24MB of system level cache (SLC) for 23.3b transistors. Nvidia's GA104 is around 17b but Apple's chip is a full SoC:

[Image: M1 Max die shot]

Note that those SLC blocks are huge: 48MB in total. The GA104 barely has a tenth of that for all its cache combined.

So I wouldn't say that the M1 Max et al. have a transistor advantage over the GA104 at all. The two GPUs are probably fairly equal in terms of transistor count, and the GA104 is probably slightly higher, given that Apple are claiming a peak FP32 throughput of 10.4 TFLOPS @ 1.296 GHz, compared to Nvidia's chip being 16.2 TFLOPS @ 1.665 GHz.
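For anyone wondering where those TFLOPS numbers come from, the usual peak-FP32 arithmetic is just ALU count x 2 FLOPs per clock x clock speed. A quick sketch; the ALU counts are my assumptions (4096 for a 32-core M1 Max GPU, 4864 for GA104 as configured in the RTX 3060 Ti), not figures from the post above.

```python
def peak_fp32_tflops(fp32_alus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput assuming one FMA (2 FLOPs) per ALU per clock."""
    return fp32_alus * 2 * clock_ghz / 1000

# Assumed ALU counts (not from the thread):
print(peak_fp32_tflops(4096, 1.296))  # ~10.6 TFLOPS, close to Apple's quoted 10.4
print(peak_fp32_tflops(4864, 1.665))  # ~16.2 TFLOPS, matching the GA104 figure above
```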
 
x86 is very powerful, but it should be discontinued; it is too complicated. Apple entered the desktop market two years ago and already has the best chip in town. The GPU is more powerful per watt than RDNA 2, and the CPU is more powerful per watt than any current Intel/AMD chip. By the time Xe II and RDNA 3 appear, Apple will be on the M3, and so on...
As a reminder, the M1's maximum memory support is 16GB. Pretty far from "best" and honestly not enough at all.
 
Only because of what LPDDR5 is available on the market - there's nothing larger than 128Gb from either Micron or Samsung, and no signs of anything larger appearing any time soon.
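For anyone mixing up the units: LPDDR densities are quoted in gigabits per package, capacities in gigabytes. A quick sketch of that arithmetic; the capacity targets below are just illustrative examples, not Apple configurations.

```python
def packages_needed(target_gb: int, package_density_gbit: int = 128) -> int:
    """LPDDR packages of a given density (in gigabits) needed to hit a capacity target."""
    package_gb = package_density_gbit // 8  # 128 Gb -> 16 GB per package
    return -(-target_gb // package_gb)      # ceiling division

print(packages_needed(16))  # 1 package of 128 Gb covers 16 GB
print(packages_needed(64))  # 4 such packages would be needed for 64 GB
```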
 
The point is that Ryzen 6900 laptops can handle 64GB of memory. That makes the M1 useless for anything other than casual use.
 
That first sentence is obviously a valid point, but I’d argue that saying 16GB is only useful for casual use is hyperbole. I have CAD and video editing PCs at work with that amount of RAM and they perform acceptably well - their capabilities are limited by the GPU, in those particular systems.

For me, the biggest issue is the price: £3000 for a 32GB RAM laptop, irrespective of how good the CPU, bandwidth, cache, etc. are, is just nuts. I'd accept the memory capacity limitation if it were sensibly priced.
 
That first sentence is obviously a valid point, but I’d argue that saying 16GB is only useful for casual use is hyperbole. I have CAD and video editing PCs at work with that amount of RAM and they perform acceptably well - their capabilities are limited by the GPU, in those particular systems.
No doubt they do, but only if there is no multitasking happening. Even 8GB is usually enough for a single heavy program, but even light multitasking is a pain. 16GB also is not enough for average multitasking, and not using heavy multitasking falls into the casual use category in my view. 16GB is barely enough even for secure web browsing.
For me, the biggest issue is the price: £3000 for a 32GB RAM laptop, irrespective of how good the CPU, bandwidth, cache, etc. are, is just nuts. I'd accept the memory capacity limitation if it were sensibly priced.
Agreed, but today manufacturers tend to put everything they can on top models, and the price reflects that. Perhaps they don't want (stupid) customers to think a product is the top model if there is only one top feature and everything else is mediocre, since then (stupid) customers might think the brand's top model is crap.

That is very evident when looking at Samsung's newest tablets. A 14.6-inch tablet is big enough for a tablet, but since Samsung prices it at over $1,000, no way. And Samsung probably won't release a "cheap" but big tablet, since (stupid) customers would consider Samsung's "top model" crap...
 
I own a Surface Pro 8 i5 with 8GB RAM and a Mac Mini M1 with 8GB RAM. With both I have absolutely no issues multitasking Office, a web browser with 15 tabs, YouTube playing music clips, email, and editing some photos/video. I'm not a pro, so everything is as a private user. No issues. And the M1 is attached to a 4K monitor...

I believe people who use CAD, professional 4K editing, lots of raw photos, etc. will need 16GB RAM as a base (the M1 architecture saves some RAM, so 8GB would be like 12GB); those with VMs will definitely need 32GB or more.

I think nowadays gaming needs more RAM than working...
 
Exactly. For safe and/or private browsing, using multiple VMs is basically the only way today. Cookie tracking, machine fingerprinting, etc. allow tracking, plus there are tons of viruses and trojans around. About the only way to keep privacy and security is to use multiple virtual machines.

Tbh, I see no reason to use the internet outside a virtual machine today. While a virtual machine does not make a computer immune to viruses, it keeps 99.9% of viruses/trojans away, and combined with a VPN (on selected virtual machines, of course), it makes web surfing much more private and safe.

Considering that, 16GB is way too low for multitasking. Needless to say, this post is written inside a virtual machine.
 
Apple started sooner, so it will be ahead sooner. But Apple has no Windows, so it runs in parallel, not with or against "Windows" or server chip makers (Apple makes consumer products). It just shows how far behind most chip makers are: whether it's 7, 5, or 2 nm, the design still means a lot.

x86 is very powerful, but it should be discontinued; it is too complicated. Apple entered the desktop market two years ago and already has the best chip in town. The GPU is more powerful per watt than RDNA 2, and the CPU is more powerful per watt than any current Intel/AMD chip. By the time Xe II and RDNA 3 appear, Apple will be on the M3, and so on...

And Qualcomm is taking a hit: the best ARM chipmaker until Apple came along is getting a lesson on both mobile and desktop. Qualcomm is being beaten hard...


You have no idea what you are talking about!!
 