In the search for the next big hardware platform, Meta and soon Apple are experimenting with VR and AR headsets. Artificial intelligence forms the basis of this new class of devices.
VR and AR headsets are benefiting from massive advances in computer vision, the way a computer perceives its environment. Meta’s Quest headset, for example, can scan the environment in real time and point out obstacles so you don’t trip over them in VR.
You can draw virtual boundaries in space with centimeter precision. And your hand movements are captured in real time with minimal computational overhead – a task that once required expensive, power-hungry specialized hardware now runs on a mobile device.
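At its core, a boundary system like this reduces to a simple geometric test: is the tracked headset or hand position still inside the polygon the user drew on the floor? A minimal 2D sketch of that test (the function name and sample data are illustrative, not Meta's actual implementation):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a horizontal
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # X-coordinate where the edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A hypothetical user-drawn play area, coordinates in meters
boundary = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(point_in_polygon(1.0, 1.0, boundary))  # True: safely inside
print(point_in_polygon(2.5, 1.0, boundary))  # False: time to warn the user
```

In a real headset this check runs every frame against sensor-fused tracking data; the point is that the geometry itself is cheap, and the expensive part, knowing where the headset is at all, is what computer vision provides.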
“AI accelerates everything it touches”
AI is the driving force behind this growing blend of analog and digital reality, also known as mixed reality, which Apple and Meta are targeting with their latest hardware.
“Machine learning has been a foundational technology in VR products for many years, and it was breakthroughs in computer vision that made devices like Meta Quest possible in the first place,” Meta’s CTO Andrew Bosworth writes in Meta Quest 3’s announcement.
For its latest VR headset, Meta has optimized the visual tracking of the controllers to eliminate the need for large, conspicuous tracking rings. Instead, the compact controllers are reliably tracked even when they largely disappear into the user’s hand.
“These are the kind of places where AI allows for computational heavy lifting that would otherwise be impossible on a mobile device without advanced sensors or additional hardware affordances,” Bosworth writes.
But the biggest impact, he says, will come from generative AI in the metaverse, which will let many more people create content for the digital world. Small teams would suddenly have the power of big studios, massively accelerating innovation.
Meta’s AI chief Yann LeCun also describes an AI assistant as a daily companion: With access to the world’s knowledge, it will be “your best rampart against disinformation.”
Meta is known to be working on video datasets for AI training that show people’s everyday lives from a first-person perspective. This data could be used to train AI assistants that participate in people’s daily lives via AR headsets, as opposed to ChatGPT, which is locked inside a computer.
Apple’s big AI moment?
In all likelihood, Apple will unveil its next big piece of hardware this coming Monday: a device that looks like a VR headset but uses video cameras to stream the outside world onto its built-in displays, where it is blended with virtual content into mixed reality.
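Video passthrough of this kind is, at bottom, a per-pixel compositing problem: each displayed pixel mixes the camera feed with rendered virtual content according to an alpha value. A toy sketch of that blend (the function and values are illustrative, not Apple's pipeline):

```python
def blend_pixel(camera, virtual, alpha):
    """Alpha-composite one RGB pixel: alpha=0 shows only the camera
    image (pure passthrough), alpha=1 shows only the virtual content."""
    return tuple(round((1 - alpha) * c + alpha * v)
                 for c, v in zip(camera, virtual))

# Camera pixel (outside world) and a rendered virtual overlay pixel
outside = (200, 180, 160)
overlay = (0, 0, 255)

print(blend_pixel(outside, overlay, 0.0))  # (200, 180, 160): pure passthrough
print(blend_pixel(outside, overlay, 0.5))  # (100, 90, 208): mixed reality
```

The hard engineering is everything around this trivial formula: doing it for millions of pixels per eye with imperceptible latency, and computing a sensible alpha mask, which is exactly where computer vision and machine learning come in.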
The VR and AR industry is hoping for a breakthrough, but the announcement is also interesting for the AI scene, as Apple has been rather reserved when it comes to AI, despite massive investments in recent years.
For its mixed reality headset, however, Apple will have to rely on AI to compete, e.g. for computer vision, spatial scanning, and hand tracking. Advanced multimodal models could be used for object recognition. Apple has already shown that it has a handle on the technology with AR services for the iPhone and iPad.
It would not be a big surprise if Apple built better hardware than Meta, thanks to decades of experience. However, the bigger lever may ultimately be the quality of the user experience and the functionality that a high-quality AI implementation brings. This is where the real competition could take place. Starting Monday, a direct comparison with a competitor will show how far Apple has come with AI.