A neural network in Apple’s iOS keyboard, face scanning and hand tracking in its new Apple Vision Pro headset. Apple is putting AI, sorry, machine learning, into its software and hardware.
The most striking thing about Apple’s keynote was the deliberate avoidance of the term “artificial intelligence”, a term loaded with associations and expectations. Instead, Apple prefers to talk about “machine learning”, a method from the field of AI, mostly in connection with “on-device” processing to emphasize privacy.
This can be read as a meta-commentary on the current AI hype, which Apple has so far largely stayed out of. The wording is also consistent with earlier announcements: even the latest market movements have not changed Apple’s stance of avoiding the term AI.
Anyway, such a “machine learning language model” now drives the autocorrection and word prediction of Apple’s iOS 17 keyboard. The company emphasizes privacy here: the model runs on Apple’s “neural engine” on the mobile device and learns from the user’s writing style, so suggestions should become more personal over time. AI text suggestions are also used in Apple’s Journal app, and the dictation function is more accurate thanks to a language model.
During the presentation of its AI-driven keyboard, Apple got carried away for a moment and mentioned a “Transformer” model, the neural network architecture behind recent breakthroughs in large language models. For a keynote watched by millions, this was surprisingly specific and probably more of a friendly salute to the AI community.
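Apple has published no details of its keyboard model, but the transformer mechanics behind next-word prediction can be sketched in a few lines. The following toy example implements causal scaled dot-product self-attention with NumPy over a made-up five-word vocabulary; the weights are random and untrained, so the predicted word is arbitrary — it only shows the machinery.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention, the core of a transformer block.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Causal mask: a language model may only attend to earlier tokens.
    mask = np.triu(np.ones_like(scores), k=1)
    scores = np.where(mask == 1, -1e9, scores)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d_model = 8
vocab = ["the", "cat", "sat", "on", "mat"]
E = rng.normal(size=(len(vocab), d_model))   # token embeddings (random)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

X = E[[0, 1, 2]]                  # context: "the cat sat"
H = self_attention(X, Wq, Wk, Wv)
logits = H[-1] @ E.T              # score every vocabulary word
probs = softmax(logits)
next_word = vocab[int(np.argmax(probs))]
```

A real keyboard model stacks many such attention layers with feed-forward blocks and trained weights, and personalization would mean adapting those weights to the user's text on-device.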
AI is in the details
Apple uses machine learning in other areas, such as animating static photos on the lock screen or auto-filling PDFs. Apps on the Apple Watch and AirPods use machine learning to learn user habits and deliver information accordingly, or to adjust volume automatically. Voicemails are automatically converted to text using AI.
These numerous quality-of-life improvements to Apple’s software through machine learning are likely part of the vision that Apple’s AI chief, John Giannandrea (formerly of Google), laid out in 2020: AI should transform every aspect of Apple’s operating systems. Even then, Giannandrea emphasized that Apple would focus on local computing.
Interestingly, AI shows up in Giannandrea’s job description: He is responsible for Apple’s “AI strategy.”
AI at the heart of Apple’s latest hardware
AI and machine learning are fundamental to Apple’s Vision Pro headset, which uses computer vision for room and hand tracking, as well as environment and object recognition via twelve cameras, five sensors, and six microphones.
Perhaps the biggest innovation is Apple’s “persona,” which is synthesized from a 3D facial scan of the user and then used, for example, in Facetime chats with the headset. This “digital representation” is generated by an “advanced neural encoder-decoder network,” according to Apple.
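Apple has not disclosed the architecture of this encoder-decoder network. The general idea, though, is that an encoder compresses a face scan into a compact latent code and a decoder reconstructs the appearance from it. A minimal sketch, using a linear encoder-decoder (PCA via SVD) on synthetic stand-in data rather than anything resembling Apple's actual model:

```python
import numpy as np

# 100 fake "scans" of 30 features each; real input would be 3D face data.
rng = np.random.default_rng(1)
scans = rng.normal(size=(100, 30))
mean = scans.mean(axis=0)

# The optimal *linear* encoder/decoder pair comes from the SVD (i.e. PCA).
U, S, Vt = np.linalg.svd(scans - mean, full_matrices=False)
k = 5  # latent code size: each scan is compressed to 5 numbers

def encode(x):
    # Project centered scans onto the top-k principal directions.
    return (x - mean) @ Vt[:k].T

def decode(z):
    # Map latent codes back to the original feature space.
    return z @ Vt[:k] + mean

codes = encode(scans)     # compact "persona" codes, shape (100, 5)
recon = decode(codes)     # reconstructed scans, shape (100, 30)
err = np.mean((scans - recon) ** 2)
```

A neural encoder-decoder replaces these linear maps with deep networks, which lets the latent code capture nonlinear facial structure and be animated in real time.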
Meta has presented such high-end avatars in the past as a research project called “Codec Avatars”. But Apple is now the first company to actually use them in a product.
According to Apple, the new high-end “M2 Ultra” chip is also built with an AI future in mind: alongside 24 CPU and 76 GPU cores, it has a 32-core neural engine that can perform up to 31.6 trillion operations per second, 40 percent more than the M1 Ultra.
The extra power makes it suitable for training large transformers at scales that even “the most powerful discrete GPUs” cannot handle due to memory constraints, according to Apple. The M2 Ultra offers 192GB of unified memory, which could make high-end Macs serious computers for AI development and research in the future.