The iPhone 16 could feature major Neural Engine upgrade for on-device generative AI tasks


Rumor mill: Apple might have been slow to jump onto the generative AI bandwagon, but the company is starting to go all-in on artificial intelligence. According to a new rumor, nowhere will that be more apparent than in the iPhone 16, which is said to come with a massively upgraded Neural Engine for on-device AI tasks.

Apple's next generation of iPhone, iPad, and MacBook chips, the A18 and M4, will feature an increased number of cores in their improved Neural Engines, according to Taiwanese publication Economic Daily News.

Apple first introduced its dual-core Neural Engine in the A11 Bionic SoC found in the iPhone 8/8 Plus and iPhone X, which were released in 2017. The company increased the Neural Engine's core count to eight in the A13 that launched with the iPhone 11 series in 2019, then doubled it to 16 in the A14 that debuted in the iPhone 12 series and fourth-gen iPad Air a year later.

Apple has stuck with 16 cores in its iPhones' Neural Engines since 2020, though the component's performance has still improved with each generation – Cupertino says the A17 Pro chip's Neural Engine in the iPhone 15 Pro is twice as fast as the one in the iPhone 14 Pro. It sounds as if the A18 could double the core count to 32, which would match Mac Studio and Mac Pro models configured with an M1 Ultra or M2 Ultra SoC.

The latest rumor follows reports that Apple's future products will likely run generative AI models using built-in hardware instead of cloud services.

There are plenty of advantages to using on-device silicon for generative AI tasks rather than relying on remote servers and cloud platforms. Google made a big deal about the AI processing abilities of its Tensor G3 chip, which it claims pushes the boundaries of on-device machine learning, bringing the latest in Google AI research directly to the phone. That statement was put under scrutiny when YouTube channel Mrwhosetheboss found that most of the Pixel 8 Pro's new generative AI features need to be processed in the cloud, meaning a constant internet connection is required.

Earlier this month, Apple CEO Tim Cook confirmed that the company would announce new generative AI features for its products later this year.


I value the content on TechSpot but have noticed a trend towards brief, surface-level articles. Could more detailed technical background be included to deepen reader understanding and enrich the discussions?

I had no idea what a Neural Engine was and was hoping to at least get some basic info on what it is and what it is used for. Instead, I had to google search and ask ChatGPT for the background info, sigh.

To be clear, you are not going to be running a "generative AI" model (ChatGPT level) on your phone with a Neural Engine. Generative AI requires massive compute power and in my experience any of the simpler models (LLaMA, which is still way too large for a phone) are practically useless for real tasks.
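To put some rough numbers on that point (my own back-of-the-envelope math, not from the article), here is how the memory footprint of a model's weights scales with parameter count and precision:

```python
# Rough estimate of the RAM needed just to hold a model's weights
# (ignores activations, KV cache, and runtime overhead entirely).

def weight_memory_gb(num_params_billions: float, bytes_per_param: float) -> float:
    """Memory in GB for the weights alone."""
    return num_params_billions * 1e9 * bytes_per_param / 1e9

# LLaMA-7B at 16-bit precision vs. aggressive 4-bit quantization
fp16 = weight_memory_gb(7, 2)    # 14.0 GB
int4 = weight_memory_gb(7, 0.5)  # 3.5 GB

print(f"7B params @ fp16: {fp16:.1f} GB, @ int4: {int4:.1f} GB")
```

Even heavily quantized, a 7B-parameter model eats a large chunk of a phone's RAM before it has processed a single token, which is why on-device models tend to be much smaller and more specialized.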

For all the readers actually looking for some useful tech info about this topic, here you go:

The Apple iPhone's Neural Engine is a specialized component designed to handle machine learning (ML) and artificial intelligence (AI) tasks efficiently, letting the iPhone perform the complex computations behind advanced features without significantly impacting battery life or overall performance. It also lets developers build custom ML and AI applications, enabling a broad range of innovative apps and features that leverage this hardware to perform tasks that were previously difficult or impossible on mobile devices.

Face ID: The Neural Engine quickly and accurately scans your face to unlock your iPhone. It's capable of recognizing your face even if you change your hairstyle, wear glasses, or are in different lighting conditions.

Animoji and Memoji: When you make facial expressions, the Neural Engine maps these expressions onto animated characters (Animoji) or a customized avatar that looks like you (Memoji) in real-time, making them mimic your movements and expressions.

Live Text: This feature allows you to interact with text found in your photos and videos. For example, you can take a picture of a sign with a phone number on it and then tap the number to call it, thanks to the Neural Engine recognizing and understanding the text.

Smart HDR: When taking photos, the Neural Engine analyzes multiple images taken at different exposures and blends them together to create a single photo with the best exposure and details in both the shadows and highlights.
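To give a flavor of the blending idea (this is an illustrative sketch, not Apple's actual Smart HDR pipeline): weight each exposure's pixel by how well-exposed it is, then take the weighted average.

```python
# Toy exposure fusion: pixels near mid-gray (well exposed) get high
# weight; blown-out highlights and crushed shadows get low weight.

def blend_exposures(exposures):
    """exposures: list of images, each a list of pixel values in 0..255."""
    blended = []
    for pixels in zip(*exposures):
        weights = [1.0 - abs(p - 128) / 128 for p in pixels]
        total = sum(weights) or 1e-9  # guard against all-zero weights
        blended.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return blended

# one pixel row each from an underexposed, mid, and overexposed frame
under, mid, over = [20, 10], [120, 90], [250, 200]
print(blend_exposures([under, mid, over]))
```

Real pipelines also align the frames, handle motion, and do tone mapping, which is exactly the kind of per-pixel number crunching an NPU accelerates.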

Cinematic mode: In videos, the Neural Engine can differentiate the subject from the background, applying a blur effect to the background while keeping the subject in sharp focus, similar to the shallow depth of field effect you see in professional films.

Real-time language translation: It can translate spoken or written languages in real-time, making it easier to communicate in foreign countries or with people who speak different languages.

Object recognition in photos: It can identify and categorize objects within your photos, such as recognizing a dog, a car, or a flower, which helps in organizing and searching your photo library.

Spam detection in emails: By analyzing patterns and content, it helps filter out unwanted spam emails, making your inbox cleaner and more secure.
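A toy example of what pattern-based scoring can look like (far cruder than any real on-device model; the tokens and weights here are made up):

```python
# Minimal keyword-weighted spam scorer. Real filters use trained
# models over many more signals; this just shows the scoring idea.

SPAM_TOKENS = {"winner": 2.0, "free": 1.5, "urgent": 1.0, "click": 1.0}

def spam_score(message: str) -> float:
    return sum(SPAM_TOKENS.get(w, 0.0) for w in message.lower().split())

def is_spam(message: str, threshold: float = 2.0) -> bool:
    return spam_score(message) >= threshold

print(is_spam("You are a winner click now"))  # scores 3.0 -> True
print(is_spam("Meeting moved to 3pm"))        # scores 0.0 -> False
```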

Battery life optimization: It learns your usage patterns and optimizes battery consumption accordingly, so your device lasts longer between charges.

Gaming performance: It enhances gaming experiences by enabling more realistic graphics, better AI behavior for non-player characters, and smoother gameplay.
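For the curious, the workhorse operation behind nearly all of the features above is the multiply-accumulate: neural-network inference is dominated by matrix-vector products, often at low integer precision, and an NPU like the Neural Engine runs enormous numbers of them in parallel. A minimal sketch (my own illustration, not Apple's implementation):

```python
# Quantized matrix-vector product, the basic building block of
# neural-network inference on NPUs: integer multiply-accumulates
# followed by a dequantization scale back to floats.

def matvec_int8(weights, inputs, scale):
    """weights: rows of small integers; inputs: integer vector;
    scale: factor converting the integer result back to real units."""
    out = []
    for row in weights:
        acc = sum(w * x for w, x in zip(row, inputs))  # integer MACs
        out.append(acc * scale)                        # dequantize
    return out

w = [[1, -2, 3], [0, 4, -1]]
x = [10, 20, 30]
print(matvec_int8(w, x, 0.01))  # -> approximately [0.6, 0.5]
```

Doing this at 8-bit (or lower) precision instead of 32-bit floats is a big part of how NPUs deliver high throughput per watt.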
But can it run neural Crysis?
Looking forward to the time when all these edge-AI-enabled devices get no better use case than apps that make photos sing and faster Instagram filters.