Early benchmarks suggest Apple M2 Ultra could be slower than Core i9-13900KS, RTX 4080

DragonSlayer101

What just happened? Multiple leaked benchmark scores of the Apple M2 Ultra's GPU seem to suggest that it could be slower than some standalone graphics cards, like the Nvidia RTX 4080. The benchmark results also suggest that the CPU component of the SoC could be slower than current-generation top desktop processors from Intel and AMD.

Starting off with the CPU scores, the M2 Ultra notched up 2,809 points in the Geekbench 6 single-core benchmark and 21,531 points in the multi-core tests. This is lower than the 3,083 and 21,665 points racked up by the Intel Core i9-13900KS, as well as the 2,875 and 19,342 points scored by AMD's Ryzen 9 7950X. Do note that these synthetic benchmarks do not necessarily give us the full picture, but they do indicate that the M2 Ultra may not be the speed demon that Apple would have us believe.

On the GPU side, the M2 Ultra's 220,000+ score in the Geekbench 6 Compute (Metal API) benchmark is slightly higher than the 208,340 (OpenCL) points scored by the RTX 4070 Ti, but it's still lower than the RTX 4080's score of 245,706.

In comparison, the GPU in the M1 Ultra could only notch up about 155,000, meaning the new chip is a significant improvement over its predecessor in terms of graphics performance (h/t Tom's Hardware).

If you're looking for a direct Geekbench 6 OpenCL comparison, the M2 Ultra's OpenCL score of around 155,000 is significantly lower than those of the current-gen Nvidia cards and somewhat similar to that of the AMD Radeon RX 6800 XT. That said, it is once again worth mentioning that synthetic benchmarks do not always indicate real-world performance, so take them with a pinch of salt.

In GFXBench 5.0, the M2 Ultra managed to notch up 331.5 fps in the 4K Aztec Ruins high-tier offscreen test, which would suggest that the new chip's graphics are roughly 55 percent faster than its predecessor's. However, GFXBench is hardly the best tool for benchmarking high-end desktop GPUs, so the results do not necessarily mean that the M2 Ultra would be that much faster than the M1 in games and other graphics applications.
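For readers who want to sanity-check the relative GPU figures above, here is a minimal back-of-the-envelope sketch using only the scores quoted in this article; note that the comparisons against the Nvidia cards cross APIs (Metal vs OpenCL), and the M1 Ultra GFXBench baseline shown is simply what a 55 percent uplift would imply, not a separately reported result:

```python
# Back-of-the-envelope check of the leaked GPU figures quoted above.
# All inputs are the numbers cited in the article; the comparisons against
# the Nvidia cards mix the Metal and OpenCL APIs, as the article notes.

m2_ultra_metal    = 220_000   # Geekbench 6 Compute (Metal), approximate
m1_ultra_metal    = 155_000   # Geekbench 6 Compute (Metal), approximate
rtx_4080_opencl   = 245_706   # Geekbench 6 Compute (OpenCL)
rtx_4070ti_opencl = 208_340   # Geekbench 6 Compute (OpenCL)

def uplift(new, old):
    """Relative advantage of `new` over `old`, as a percentage."""
    return (new / old - 1) * 100

print(f"M2 Ultra vs M1 Ultra (Metal): +{uplift(m2_ultra_metal, m1_ultra_metal):.0f}%")    # ~+42%
print(f"RTX 4080 vs M2 Ultra:         +{uplift(rtx_4080_opencl, m2_ultra_metal):.0f}%")   # ~+12%
print(f"M2 Ultra vs RTX 4070 Ti:      +{uplift(m2_ultra_metal, rtx_4070ti_opencl):.0f}%") # ~+6%

# GFXBench: 331.5 fps is described as roughly 55 percent faster than the M1 Ultra,
# which implies a baseline of about 331.5 / 1.55, or ~214 fps, for the older chip.
print(f"Implied M1 Ultra Aztec Ruins result: ~{331.5 / 1.55:.0f} fps")
```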

The M2 Ultra is Apple's latest flagship desktop silicon that comes with 24 CPU cores (16 performance cores, 8 efficiency cores), up to 76 GPU cores, a new 32-core neural engine, and support for up to 192GB of unified memory at a bandwidth of 800 GB/s. The chip will power Apple's new Mac Studio and Mac Pro desktop computers that were announced at WWDC earlier this month.


 
I mean, that's still incredibly impressive for Apple, considering they don't have a long history in desktop silicon vs Intel and Nvidia.

+1000

Intel has been in this business for over 50 years; Apple made its first in-house chip 13 years ago.
With an ARM-compatible chip, Apple is achieving MUCH better performance per watt than anyone else, with overall performance as good as a mid to mid-high-end x86 chip. Not to mention that inside is a MUCH more powerful AI engine and GPU than the ones in Intel, or perhaps even AMD, counterparts.

All in all: Apple is putting everyone to shame (Qualcomm, Intel, hello? AMD is also doing well but relaxing a bit...). That's on the hardware side, but Apple's software is lacking a lot...
 
The missing info here is the M2 Ultra’s power consumption and comparative efficiency. I think that’s where the difference resides, if the other M2 chips are anything to go by. One site claims 60W TDP, whatever that actually means.
 
I don't know if Arm is the future or not, but if it is, it seems like Microsoft is going to drop the ball again, just like they did on mobile.
 
Well, in the CPU department they should also give some credit to Arm.
This performance also comes from TSMC's enhanced 5nm process. The Ultra also combines two M2 Max chips.

In regards to its transistor count across CPU and GPU, it could be compared to the MI300.
 
as well as the 2,875 and 19,342 points scored by AMD's Ryzen 9 7950X.

Uh, 2,809 is practically "margin of error" (3%) compared to 2,875 and unless there's something new in mathematics, 21,531 is about 11% higher than 19,342. So, maybe the M2 Ultra is the speed demon that Apple claims.

As for the GPU claims, well, this is an internal GPU versus a discrete GPU. I'd say that pound for pound this looks pretty darn good. The author of this piece is working overtime to bash what Apple has done.
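For anyone who wants to reproduce those percentages, here is a quick sketch using the Geekbench 6 figures quoted in the article:

```python
# Relative gaps between the Geekbench 6 scores quoted in the article.
scores = {
    "single-core": {"M2 Ultra": 2_809, "i9-13900KS": 3_083, "Ryzen 9 7950X": 2_875},
    "multi-core":  {"M2 Ultra": 21_531, "i9-13900KS": 21_665, "Ryzen 9 7950X": 19_342},
}

def gap(a, b):
    """How much higher `a` is than `b`, in percent (negative means lower)."""
    return (a / b - 1) * 100

for test, s in scores.items():
    for rival in ("i9-13900KS", "Ryzen 9 7950X"):
        print(f"{test:11}  M2 Ultra vs {rival}: {gap(s['M2 Ultra'], s[rival]):+.1f}%")
```

That works out to the M2 Ultra sitting roughly 2 percent behind the 7950X in single-core, about 11 percent ahead of it in multi-core, and within about 1 percent of the 13900KS multi-core score.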
 
...If you ignore that they bought the latest and fastest processes to leap there.

Unfortunately for them, once you're on the latest, it's hard to leap ahead more...
Well a chip's success depends on many factors and the process is just one of them. Intel and AMD are also free to negotiate with TSMC and take a piece of the 3nm cake. They don't want to spend that money? Not Apple's fault.

Even so, if you take a 5nm M1 Ultra vs the best 5nm Intel, both at 60W TDP PL2 (whatever the name, max 60), I bet the M1 Ultra will win hands down, let alone the M2 vs something from Intel. The architecture and software optimization are as important as the manufacturing process.

Uh, 2,809 is practically "margin of error" (3%) compared to 2,875 and unless there's something new in mathematics, 21,531 is about 11% higher than 19,342. So, maybe the M2 Ultra is the speed demon that Apple claims.

As for the GPU claims, well, this is an internal GPU versus a discrete GPU. I'd say that pound for pound this looks pretty darn good. The author of this piece is working overtime to bash what Apple has done.
I read it again and I don't think the author wants to bash Apple; it seems he just wants to temper Apple's exaggeration when (as with the M1) it talks about the speed and goodies. We all know that Apple's executives are masters at exaggerating capabilities.

That said, I'm happy that Qualcomm, Arm and Apple developed such optimized chips that they pressed Intel and AMD into adding hardware acceleration for many things that were traditionally done in software on x86, at the cost of a lot of energy consumption.
 
SNIP

I read it again and I don't think the author wants to bash Apple; it seems he just wants to temper Apple's exaggeration when (as with the M1) it talks about the speed and goodies. We all know that Apple's executives are masters at exaggerating capabilities.

That said, I'm happy that Qualcomm, Arm and Apple developed such optimized chips that they pressed Intel and AMD into adding hardware acceleration for many things that were traditionally done in software on x86, at the cost of a lot of energy consumption.
The author constantly makes comments like "Do note that these synthetic benchmarks do not necessarily give us the full picture" or "However, GFXBench is hardly the best tool to benchmark high-end desktop GPUs" when the M2 shows promising results. He's looking for reasons to dismiss the results at every level. He compared it to top-of-the-line CPUs/GPUs but doesn't bother to consider that, OK, maybe it's not a 13900K killer, but it sure looks like an i7-13700K killer or a 7900 XT killer. And all this on an SoC, not on a separate CPU paired with a discrete GPU.

Personally, I see this as an impressive achievement, especially given the power consumption differences. I have to wonder how this would fare against the AMD Z1 Extreme used in the ROG Ally? Seems like it would give it a run for its money.
 
Even so, if you take a 5nm M1 Ultra vs the best 5nm Intel, both at 60W TDP PL2 (whatever the name, max 60), I bet the M1 Ultra will win hands down, let alone the M2 vs something from Intel. The architecture and software optimization are as important as the manufacturing process.
And, how comparable are those 2 process nodes? I know they sound like they're the same size, but, do tell...
 
Well, I mean, it shares a pool of RAM, so yeah, it is always going to be a challenge to exceed by any meaningful margin a dedicated CPU with its own private cache, ring bus/Infinity Fabric, and now 3D-stacked high-speed cache, plus a GPU all by itself with its own pile of private RAM and Gen 5 PCIe.

The fact that it is in the ballpark is really good. Too bad you cannot install native open-source OSes that don't have to rely on reverse-engineered drivers and hardware, because I will never use Apple's OS.
 
In certain selected benchmarks it is kinda similar; in many more it is far behind. What's more, it even falls behind a Hackintosh on a much cheaper configuration (Intel CPU + AMD GPU).
While the M2 is a solid chip, it is nothing too crazy; what is excellent, though, is the software optimization. Apple, as a closed ecosystem and the most wealthy company in the world, can throw resources at making it work, and it would probably work even better on an overall stronger APU. But there is certain stuff they can't get around; not sure how RT works there ;)
 
I think the real world is where the tyres meet the road.

IF you live in the USA or a number of other countries that have Apple premium support (many Americans rave about AppleCare blah blah, Apple support stores; just realise it does not fall too far from the Apple tree),
if you have software that needs it, e.g. video production (forget about the latest Cyberpunk exp unless you jump through hoops),
and if you don't need a rugged device to hook up to whatever you desire in the swamps,

then it's great

Apple still isn't an AAA gaming machine - no matter the scores

As a standalone production device - probably very good

Need serious production/Machine learning/mathematics/modelling - much better dedicated monsters out there

At the end of the day, 5 years from now you will buy a $500 laptop with a 1440p or maybe 4K HDR printed OLED screen that is speedy and efficient for everyday use (browsing, video calls, light gaming); for $800 you will have a meaty little gaming laptop.

For existing Apple people, great; for everyone else, there are better, more versatile packages out there.

Apple will probably make their new AR/VR thingy work well with these - for extra bells and whistles.
 
Benchmarks are one thing, real-world performance another. I had an M1 Ultra and the GPU couldn't keep up with anything being rendered past 2K: no consistent framerate and a lot of random hesitation.
 
+1000

Intel has been in this business for over 50 years; Apple made its first in-house chip 13 years ago.
With an ARM-compatible chip, Apple is achieving MUCH better performance per watt than anyone else, with overall performance as good as a mid to mid-high-end x86 chip. Not to mention that inside is a MUCH more powerful AI engine and GPU than the ones in Intel, or perhaps even AMD, counterparts.

All in all: Apple is putting everyone to shame (Qualcomm, Intel, hello? AMD is also doing well but relaxing a bit...). That's on the hardware side, but Apple's software is lacking a lot...

Due to the vast amounts of money Apple has, it books out all of TSMC's latest and best manufacturing node for a couple of years in front of everyone else... It's up to you to decide if that's fair or not, but it's not Apple doing magic or something like that xD

Also, their approach is not modular like Intel's, AMD's, etc., who design their chips to work with multiple devices instead of only going into three laptops or so per year.

Look beyond, my brother; it's true the performance of these chips is amazing, but not for the reasons you think.
 
The M2 Ultra is a huge chip, larger than two 13900Ks and a 4090 put together and vastly more costly to make than those chips, and even so it can't compete with a Threadripper or an Intel Xeon W paired with a 4090 in the pro niche where the Mac Pro is supposed to compete directly.

The Mac Studio is the only one that makes some sense, but it is certainly not a pro computer.
 
The M2 Ultra is a huge chip, larger than two 13900Ks and a 4090 put together and vastly more costly to make than those chips
The 13900K die size is 257 mm² and the AD102 is 608 mm², so two of the former coupled with one of the latter gives a combined die area of 1,122 mm². While no official figures have been issued for the M2 Max/Ultra's die size, Apple's chip isn't going to be anywhere near that big.

However, in terms of transistor count, the M2 Ultra has a total of 134 billion. The AD102 is 76.3 billion, which would leave 57.7 billion for two 13900Ks, or 28.9 billion each. There are no figures for Intel's chips, but given that a Ryzen 9 7950X has 6.5 billion in each CCD and 3.4 billion in the IOD (a total of around 16.5), it's unlikely that a 13900K has that many (i.e. 12 billion more transistors, though one never knows with Intel).

So yes, Apple's M2 Ultra is a large chip in terms of transistor count, but each Max die is no bigger than a top-end GPU, and the addition of the interposer and interconnect system isn't going to raise manufacturing costs beyond the combined cost of fabricating two 13900K dies and one AD102 die.
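Laid out explicitly, that arithmetic looks like the sketch below (all inputs are the figures cited above; the per-die 13900K transistor count is only an inferred budget, since Intel has not published one):

```python
# Die-area and transistor-budget arithmetic from the comparison above.
# All inputs are the figures cited in the post; Intel has published no
# transistor count for the 13900K, so the per-die value is only an
# inferred upper bound, not a real specification.

die_area_13900k = 257   # mm^2
die_area_ad102  = 608   # mm^2 (the RTX 4090's GPU die)
print(f"2x 13900K + AD102 die area: {2 * die_area_13900k + die_area_ad102} mm^2")  # 1122 mm^2

m2_ultra_transistors = 134e9
ad102_transistors    = 76.3e9
budget = m2_ultra_transistors - ad102_transistors
print(f"Transistor budget left for two 13900Ks: {budget / 1e9:.1f}B "
      f"({budget / 2e9:.1f}B each)")                                               # 57.7B, 28.9B each

# For scale, the Ryzen 9 7950X (2 CCDs + IOD) totals roughly 16.5B transistors,
# so a 28.9B 13900K would need about 12B more than that, which seems unlikely.
ryzen_7950x_total = 16.5e9
print(f"Implied 13900K surplus over a 7950X: {(budget / 2 - ryzen_7950x_total) / 1e9:.1f}B")
```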
 
Makes sense. My M2 Pro MacBook runs very, *very* slightly behind my 12th-gen i7 / 3060 Ti desktop (video editing and motion graphics work is my primary use).
Power consumption and all that makes sense when you're on the go, but the moment we're talking about studio workstations, all that matters is the money-to-performance ratio, and Apple hasn't done enough to justify that outside of the endless noise of hype the internet generates for them for free.
 
The 13900K die size is 257 mm² and the AD102 is 608 mm², so two of the former coupled with one of the latter gives a combined die area of 1,122 mm². While no official figures have been issued for the M2 Max/Ultra's die size, Apple's chip isn't going to be anywhere near that big.

However, in terms of transistor count, the M2 Ultra has a total of 134 billion. The AD102 is 76.3 billion, which would leave 57.7 billion for two 13900Ks, or 28.9 billion each. There are no figures for Intel's chips, but given that a Ryzen 9 7950X has 6.5 billion in each CCD and 3.4 billion in the IOD (a total of around 16.5), it's unlikely that a 13900K has that many (i.e. 12 billion more transistors, though one never knows with Intel).

So yes, Apple's M2 Ultra is a large chip in terms of transistor count, but each Max die is no bigger than a top-end GPU, and the addition of the interposer and interconnect system isn't going to raise manufacturing costs beyond the combined cost of fabricating two 13900K dies and one AD102 die.

The M2 is about 155 mm²; the Ultra would be close to 8x that: 1,240 mm².

But even if it were much smaller than that, you can't compare the cost of a huge monolithic chip with the complexity of a combined CPU+GPU to a GPU chip alone. A GPU is much cheaper to produce because its basic architecture has significantly better yields (GPUs run at much lower clock speeds, have much less, and less complex, cache memory, their ALUs are much slower, etc., so they cost less to manufacture as a result).

Having a CPU with much higher manufacturing requirements on a very early node (5nm/3nm) implies lower yields: when a CPU can't reach its peak frequency or there's a defect in the cache memory, you need to ditch the entire chip. All the fuss about chiplets over the last few years is because putting out a single large chip has become very expensive as architectures grow in transistor count.

So even if we imagine they had the same transistor count, the same process node (never mind that the interposer in AMD chips is made on a very cheap node), and the same die size, the Apple M would still be more costly to make because of its monolithic form.
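To put a rough number on that yield argument, here is a minimal sketch using the classic Poisson defect-density yield model; the defect density and die sizes are illustrative assumptions, not published figures for any of these chips:

```python
import math

# Rough sketch of the yield argument above, using the simple Poisson model
# Y = exp(-D0 * A). The defect density and die areas are illustrative
# assumptions, not published figures for any of the chips being discussed.

D0 = 0.1  # assumed defect density, in defects per cm^2

def die_yield(area_mm2):
    """Expected fraction of defect-free dies under the Poisson model."""
    return math.exp(-D0 * area_mm2 / 100.0)   # convert mm^2 to cm^2

def wafer_area_per_good_unit(areas_mm2):
    """Average wafer area consumed per good product, assuming a defective
    small die can simply be discarded and replaced (mix and match)."""
    return sum(a / die_yield(a) for a in areas_mm2)

monolithic = [1000]           # one hypothetical ~1000 mm^2 die
chiplets   = [250, 250, 500]  # the same silicon split into smaller dies

print(f"Monolithic: {wafer_area_per_good_unit(monolithic):.0f} mm^2 of wafer per good chip")
print(f"Chiplets:   {wafer_area_per_good_unit(chiplets):.0f} mm^2 of wafer per good set")
```

Under those assumed numbers, the single big die consumes close to twice as much wafer area per good product, which is the cost penalty being described.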
 
No Apple fan but...
A) Those ARM CPUs are really nice, and the performance per watt on them is particularly nice. As a long-time Linux user, I used an ARM Chromebook a few years back with Ubuntu for ARM on it and it was incredible -- 22 hour battery life (and 12 hours under FULL load, like video encoding and such) and good performance. The one I had had a Nvidia Tegra K1 so it had a roughly GTX650-performance GPU built-in but cut down from the ~80-90 watts of the GTX650 to about 5 watts. And (just like on Mac), Linux now has a solution for seamlessly running x86/x86-64 binaries on ARM (even Steam and wine for games). I'm looking forward to picking up an ARM notebook or possibly desktop sometime in the next few years.

B) It's true, comparing an embedded GPU to a discrete one may not be a fair comparison. BUT, given that Apple is selling these very-high-dollar studio systems with the embedded GPU and no option for discrete (other than buying a Thunderbolt external PCIe box and a compatible GPU -- are there any for the ARM Macs?), and given people will be buying these expecting both CPU- and GPU-intensive workloads, I do think it's a fair comparison. That said, an RTX 4080 is no slouch.
 
No Apple fan but...
A) Those ARM CPUs are really nice, and the performance per watt on them is particularly nice. As a long-time Linux user, I used an ARM Chromebook a few years back with Ubuntu for ARM on it and it was incredible -- 22 hour battery life (and 12 hours under FULL load, like video encoding and such) and good performance. The one I had had a Nvidia Tegra K1 so it had a roughly GTX650-performance GPU built-in but cut down from the ~80-90 watts of the GTX650 to about 5 watts. And (just like on Mac), Linux now has a solution for seamlessly running x86/x86-64 binaries on ARM (even Steam and wine for games). I'm looking forward to picking up an ARM notebook or possibly desktop sometime in the next few years.

B) It's true, comparing an embedded GPU to a discrete one may not be a fair comparison. BUT, given that Apple is selling these very-high-dollar studio systems with the embedded GPU and no option for discrete (other than buying a Thunderbolt external PCIe box and a compatible GPU -- are there any for the ARM Macs?), and given people will be buying these expecting both CPU- and GPU-intensive workloads, I do think it's a fair comparison. That said, an RTX 4080 is no slouch.
Let's be honest:

A) Office and multimedia editing: private users or small companies have enough power in M-powered laptops (base or Pro versions of the chip).

B) Games, 3D, and demanding workloads: people (at least outside the US and UK) won't care much about power consumption; it's all about how much work gets done for the price. On this front Apple is very unlikely to win customers:

- the hardware is great BUT Apple is very expensive and offers zero upgradability.

- macOS is pretty cute but very incompatible; most AA/AAA games and professional 3D/CAD/physics or engineering apps don't work.

"Oh you can virtualize Windows 11 ARM or use Crossover or ..." .... why bother?! I can directly buy an upgradable PC with Linux or windows or both, with a good CPU and gpu and even change as I go and DIRECTLY run all apps and games. Why spend a ton of money on a Mac and win a non upgradable system with a "I don't care about your needs just your money" philosophy?
 