
AR headset reads emotions: Sci-fi short film paints bleak future

Image: Privacy Lost


Privacy Lost sketches a future in which human emotions become public through AR and AI. It comes with a serious warning.

In a new short film production titled “Privacy Lost,” Dr. Louis Rosenberg, CEO and Chief Scientist of Unanimous AI, addresses the potential risks of AI technologies, augmented reality, and data glasses.

Augmented Reality reads human emotions

A family is sitting at a table in a restaurant. While the son is busy with a tablet, the parents argue over a scheduling conflict: the father forgot to tell the mother about a golf weekend with his best buddy. It would be an everyday scene, one that has played out countless times before, were it not for the data glasses on the noses of everyone involved.

The futuristic wearables let the wearers read each other's emotions. The mother's feigned anger is projected into the father's field of view as he confesses the golf weekend to her. The mother, in turn, sees that the father is bluffing when he claims the trip is not important to him and that he could cancel at any time.

It is also notable that the family is served by an artificial intelligence that appears different to each guest: the father sees a gorgeous blonde, the mother a fit surfer, and the child a cuddly teddy bear. The AI likewise reads the family members' emotions and uses them to upsell overpriced extras.

AI risks no longer theoretical

Privacy Lost depicts a scene that could be typical of the near future, according to Rosenberg. The film aims to make the complexity of AI manipulation easily understandable. Rosenberg has been developing VR, AR and AI technologies for more than 30 years and warns of the potential societal impact.

Now, as AI and the metaverse become more prominent in the media, his concerns are gaining currency. “ChatGPT happened and suddenly these risks no longer seemed theoretical,” Rosenberg said in an interview with ARPost. Policymakers and regulators who want to better understand the potential for AI-driven manipulation in the metaverse have since flooded him with questions, he said.

Regulation and the middle ground

For Rosenberg, regulation of emerging technologies is essential to ensure public trust. While he supports the advancement of these technologies, he also advocates safeguards against misuse and manipulation.

“We need to allow for real-time emotional tracking, to make the metaverse more human, but ban the storage and profiling of emotional data, to protect against powerful forms of manipulation,” Rosenberg explains. “It’s about finding a smart middle ground, and it’s totally doable.”

