Meta Smart Glasses will get multimodal AI input in April



Image: Mixed


An update coming to Ray-Ban Meta Smart Glasses in April will let Meta AI accept photos as input and respond to questions about what you see.


Multimodal Meta AI

Ray-Ban Meta smart glasses will get multimodal AI next month. This added capability lets you ask Meta AI about what you’re looking at.

Appending “look and …” after the usual “Hey Meta” wake word signals the AI assistant to use the built-in camera to “see” what you’re talking about and answer questions about it.

The New York Times broke the news of the April release date and shared examples of the assistant’s successes and failures. For instance, the beta of Meta AI could translate signs written in English, Spanish, French, and German, but could not identify a cherimoya fruit.


Out of Beta

This isn’t the first time we’ve heard about the feature: Meta began beta testing multimodal AI for the Ray-Ban Meta Smart Glasses in December 2023.

Several YouTube influencers have posted about their experience with the beta version of Meta AI with visual processing capabilities. Here’s one example from XR enthusiast Jasmine Uniza.


Sources: New York Times, YouTube


