The Apple M1 Max is faster than a $6,000 AMD Radeon Pro W6900X in Affinity GPU benchmark

nanoguy

Why it matters: When Apple announced the M1 Pro and M1 Max-based MacBook Pros, it made a number of claims about the performance of the new chipsets when compared to existing solutions from the PC world. As more people get their hands on the new devices, those claims are being validated one by one in various tests. The most impressive finding by far is that Apple has managed to create power-efficient mobile chips that can compete in certain productivity tasks with workstation-grade hardware.

Apple’s M1 Max chipset has already shown its teeth in Adobe Premiere Pro, where it scored higher than 11th generation Intel CPUs paired with Nvidia RTX 3000 series laptop GPUs. This is no small feat, especially since it does so with much lower power consumption than those setups.

Lead Affinity Photo developer Andy Somerfield was curious to see what the new SoC is capable of, so he created a battery of tests to stress its GPU. Somerfield also wrote a Twitter thread where he details how the Affinity team has been gradually building GPU support into the architecture of the app over the past 12 years.

Somerfield was careful to note there’s no single measure of GPU performance, which is why the benchmark results he observed can only be taken as an indication of how well Affinity Photo will run on Apple’s latest silicon. The best GPU for software like Affinity Photo and Affinity Designer would be one that has high compute performance, fast on-chip bandwidth, and fast transfer on and off the chip.
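
Curious readers can get a rough feel for that last criterion on their own hardware. The Swift/Metal sketch below is our own illustration, not part of Somerfield's benchmark; the buffer size and iteration count are arbitrary choices. It times blit copies between a CPU-visible buffer and a GPU-private one to estimate transfer bandwidth on and off the chip:

```swift
import Foundation
import Metal

// Rough transfer-bandwidth probe: time blit copies between a CPU-visible
// (shared) buffer and a GPU-private buffer. Size and iteration count are
// arbitrary illustration values, not anything from the Affinity benchmark.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

let size = 256 * 1024 * 1024 // 256 MiB per copy
guard let hostSide = device.makeBuffer(length: size, options: .storageModeShared),
      let gpuSide = device.makeBuffer(length: size, options: .storageModePrivate)
else { fatalError("Buffer allocation failed") }

let iterations = 10
let start = Date()
for _ in 0..<iterations {
    guard let cmd = queue.makeCommandBuffer(),
          let blit = cmd.makeBlitCommandEncoder() else { continue }
    blit.copy(from: hostSide, sourceOffset: 0,
              to: gpuSide, destinationOffset: 0, size: size)
    blit.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted() // serialize so the timing is honest
}
let seconds = Date().timeIntervalSince(start)
let gigabytes = Double(size) * Double(iterations) / 1e9
print(String(format: "~%.1f GB/s shared-to-private copy", gigabytes / seconds))
```

On a discrete card such a copy crosses PCIe; on the M1 family it stays within the SoC fabric, which is part of why transfer-heavy apps like Affinity Photo stand to benefit.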

The fastest GPU the Affinity team had previously tested was the AMD Radeon Pro W6900X, which is sold by Apple as an MPX module for the Mac Pro and coincidentally costs as much as a fully-specced 14-inch MacBook Pro.

It turns out the M1 Max outperforms it in every one of the Affinity tests, despite having 400 GB/s of memory bandwidth that is shared with the CPU, the Neural Engine, and the Media Engine. By comparison, the Radeon Pro W6900X has 32 gigabytes of GDDR6 all to itself, delivering up to 512 GB/s of memory bandwidth.

The only test where the Radeon Pro W6900X comes close to the M1 Max is the Raster (Single GPU) test, where it manages a score of 32,580 points while the Apple chipset edges ahead with 32,891. It’s also worth noting the AMD card can draw up to 300 watts of power, while the M1 Max draws considerably less in all scenarios.

Overall, Apple’s latest silicon seems to be built with power users in mind, and builds on the strong foundation of the M1 chipset. The only workload where the M1 Pro and M1 Max don’t seem to excel is gaming, with the former being slower than an Nvidia RTX 3060 laptop GPU and the latter being easily surpassed by an RTX 3080 laptop GPU or an AMD Radeon RX 6800M.

Of course, this isn’t much of a surprise, as the Mac isn’t the go-to platform for gamers. If you’re looking for an in-depth analysis of the new Apple chips, AnandTech’s Andrei Frumusanu has a great write-up.


 
I've read the AnandTech article and it looks good. I just want to see more workloads tested going forward. Apple is doing great things on TSMC's 5 nm node; can't wait until we get Zen 4 on it.
 
This bodes well for the chips coming from all the big players going forward.

Not sure how much to read into this - horses for courses.
I.e., chips specifically made for crypto mining will do well at crypto mining.
Phones that can't decode H.265 or AV1 natively will suck big balls.
Intel has that thing that runs certain stuff that AMD doesn't have (can't remember its name).

As stated, not great for gaming - a 3060 isn't bad though - but who knows with Nvidia's wonky naming of laptop GPUs.

I mean, would they sell these to the special effects studios? How would it compare? Does it scale with multiple units? Or to scientists?

At the end of the day, it's real-world specific and general use that matters.

Like I said, this is just the beginning - in 5 or 6 years we are going to see some amazing silicon.
I'm sure even companies like Facebook will be pushing new silicon as AI design improves and modularity increases.
 
What gets me is that if Apple wanted to, it could probably take a significant chunk of the gaming market away too. If the M-series is everything a lot of tech sites are claiming, then it wouldn't be much of a stretch to build a Mac that could equal a mid-range 11th-gen desktop i5 and a 3060 Ti. Sure, battery life would be shortened with AAA games, but a "gaming laptop" is really just a very portable desktop - you're expected to use outlet power. Heck, they could even make a true gaming Mac by adding a DC bypass option like we had way back in the day, so the battery doesn't need to be removed to run solely off AC power. Believe it or not, gamers aren't married to Windows. I think a good number of them would jump to Linux or macOS if the GPU driver and Steam game support were on par with Windows.
 
In the right screenshot the numbers for the CPU itself are also lower - actually only about half of the other - which suggests the Radeon was paired with a much slower CPU to begin with, which would obviously also affect the GPU's performance. So this is at best comparing apples to oranges, and definitely not something that allows ANY conclusion about how the Radeon Pro's and Apple's graphics chipsets compare to each other.
 
Apple's M1 is an SoC that combines the CPU, RAM, GPU, and some other dedicated processing units all in one package. There's much less latency between the components than there would be across a typical PCIe interconnect.

For a more apples-to-apples comparison, you would have to at least try to equalize CPU performance, as well as memory bandwidth and throughput, to isolate just the GPU performance.
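
To illustrate, here's a minimal Swift/Metal sketch (assuming a Metal-capable Mac; the buffer fill is arbitrary) of the zero-copy path unified memory enables - the CPU writes straight into memory the GPU can read, with no PCIe transfer in between:

```swift
import Metal

// Sketch of the zero-copy path on unified-memory Macs: a .storageModeShared
// buffer is one physical allocation visible to both CPU and GPU, so filling
// it from the CPU involves no PCIe transfer. Values are illustrative.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
print("Unified memory:", device.hasUnifiedMemory) // true on M1-family SoCs

let count = 1_000_000
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes straight into GPU-visible memory; a compute kernel could
// read `buffer` immediately afterwards, with no staging copy in between.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }
```

On a discrete GPU, the same data would typically be copied into a .storageModePrivate buffer across the bus before a kernel could touch it.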
 
If Macs could run games and run them well, I’d dump Windows in a heartbeat. Same with Linux. I’d prefer a Mac though - I’d want silicon designed by the same team that made this!

 
Neither chip runs CUDA, and can the M1 Max/Pro even run OpenCL? Still a great job from Apple, but until they ship an option with a Quadro card I would never buy one as a workstation-class laptop. Same reason I wouldn't buy an AMD Pro GPU: no CUDA. OpenCL is usually much slower.
 
In a benchmark....
I'll only be interested when I see real-world M1 results alongside Intel, AMD, and Nvidia in more than a handful of cherry-picked tests.
 
I mean, would they sell these to the special effects studios? How would it compare? Does it scale with multiple units? Or to scientists?
If a VFX studio isn't relying on CUDA, then yes, Apple will sell thousands/millions of units. Cinema 4D is the main flagship software Apple always pushes, so you don't need CUDA if you're in C4D. Still, CUDA is vastly superior in many ways. What would really make Apple a good platform is if at least Blender were fully supported without Rosetta 2. Sadly, Apple's shortsighted lunacy in ditching Nvidia support (with Mojave) pushed 90% of 3D users to Windows.

Science - most certainly yes. There is a lot of specialist software that runs on Macs.

All in all, mightily impressive. Graphics power is now on par with at least mid-to-high-range workstations, in a laptop that will draw less than 100 W, which is simply astonishing.

Frankly, I don't understand Apple in one segment at all. They seem completely oblivious to the fact that the AAA/PC gaming market (which keeps expanding and growing) earns more than all of their services combined, yet they make no effort to convince people that a Mac can be a PC replacement. They have trillions of dollars. They could invest and reap the rewards very quickly (like in 5 years), but they're just letting M$ have it all.
 
Science - most certainly yes. There is a lot of specialist software that runs on Macs.

This is the thing I'm watching closely, and so far it looks bleak, to say the least. Most data science software doesn't run on M1 at all, even via Rosetta. This includes must-have packages. Platform adoption is very slow overall; there are still major omissions even though M1 MacBooks have been on the market for a while.

And of course, ML doesn't exist outside CUDA. If TensorFlow Metal is any indication of M1 Max performance, then it delivers 5-10% of what a modern single GPU can achieve.

M1 is mightily impressive but it needs native software.
 
It's possibly the best investment they've ever made. They're now in the lead, as long as you don't take Intel's ~300-watt 12th-gen Core i9 into consideration, at least until we see real-world side-by-side comparisons.

It's still an ARM chip. Without perfect optimization, performance is not good, but what Apple did is impressive. It will do most tasks you do on a laptop to perfection while using far fewer watts than Intel and AMD chips; however, Alder Lake and hybrid designs in general might change this.
 
There are many types of workstation workloads; don't go buying the M1 Max without doing proper research on the performance of your particular workload. There are many reasons to buy a big workstation-class GPU (like the ECC VRAM it uses - memory errors during very large renders are not uncommon).

Of course, budget also plays a big role in what you buy.
 
Now who was the guy saying the M1 chip wasn't impressive?
57 billion transistors and 5 nm tech to match the performance of a 5700 XT with 11 billion. Wow, so impressive.

Yes, the M1 Max is neat, but impressive isn't really the right word. It pulls a LOT of power and needs a LOT of transistors to match the competition, even in its second generation, and that performance is not consistent at all.

And of course it's limited to overpriced, unrepairable MacBooks, so it's hard to tell whether these benchmarks are entirely down to the silicon or due to Apple's control over the entire ecosystem.
 
The M1 Max's gaming performance is just what you'd expect for the amount of power it uses. Clearly, even with the node advantage, it's not able to match Nvidia or AMD in terms of performance per watt in gaming.

Apple's focus was clearly synthetic benchmarks, as there isn't much in terms of real-world workloads that can be tested here. AMD and Nvidia have both pushed away from compute performance in consumer-level products; it's just not as important there. The M1 Max has a huge chunk of silicon dedicated to the GPU - we're talking flagship-level silicon area by AMD or Nvidia standards. It's damn impressive they are even building a monolithic design on this scale; it is HUGE. But clearly the low clock speeds hurt the GPU's ability to perform.
 
In the right screenshot the numbers for the CPU itself are also lower - actually only about half of the other - which suggests the Radeon was paired with a much slower CPU to begin with, which would obviously also affect the GPU's performance. So this is at best comparing apples to oranges, and definitely not something that allows ANY conclusion about how the Radeon Pro's and Apple's graphics chipsets compare to each other.
Yep, them creepy crawly Intel server level CPUs.
 