Soon, lifelike VR avatars could be generated from text descriptions alone. A video shows how this works, and we explore the broader implications.
A new method makes it possible to automatically create avatars in VR environments simply by describing them. A video from the Text-to-VR project by SpecialguestX, Meta Cs Labs, Brian Fox, and 1stAveMachine shows how an avatar is brought to life by describing movements and actions in a text-based interface.
The process uses a combination of artificial intelligence and 3D modeling to generate avatars in real time. A simple text description such as “A man walks toward the camera and waves” produces a lifelike avatar that moves according to the described actions.
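To make the idea concrete, here is a minimal sketch of what text-driven avatar control could look like at its simplest: a prompt is scanned for recognizable action words, and each one is mapped to a pre-made animation clip. Everything here (the `Action` type, `CLIP_LIBRARY`, `plan_animation`, and the clip file names) is hypothetical and for illustration only; the actual project uses machine-learning models rather than keyword lookup.

```python
from dataclasses import dataclass

# Hypothetical library of pre-made animation clips, keyed by verb.
CLIP_LIBRARY = {
    "walk": "clips/walk_forward.fbx",
    "wave": "clips/wave_right_hand.fbx",
    "jump": "clips/jump_in_place.fbx",
}

@dataclass
class Action:
    verb: str  # recognized verb from the prompt
    clip: str  # animation clip to play for it

def plan_animation(prompt: str) -> list[Action]:
    """Turn a free-text prompt into an ordered list of animation actions."""
    actions = []
    for word in prompt.lower().replace(",", " ").split():
        verb = word.rstrip("s")  # crude stemming: "waves" -> "wave"
        if verb in CLIP_LIBRARY:
            actions.append(Action(verb, CLIP_LIBRARY[verb]))
    return actions

if __name__ == "__main__":
    for action in plan_animation("A man walks toward the camera and waves"):
        print(action.verb, "->", action.clip)
```

Running this on the article’s example prompt yields two actions, “walk” then “wave”, in the order they appear in the sentence; a real system would instead generate the motion itself rather than select from a fixed clip library.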
Designing virtual reality with text input
The developers recognized the potential of text-to-VR and are working to combine it with other AI models to create complete scenes in VR environments. The idea is not entirely new, however: one developer has already created a VR world using the AI image generator Stable Diffusion.
The goal of this project is to improve social science research through immersive experiences. Text-to-VR could also be used in other fields, such as the video game industry, to automate and speed up the creation of characters and environments.
Implications for AI-assisted VR development
The ability to create complex movements and actions from text descriptions alone would allow non-professionals with no prior knowledge of 3D modeling to bring their ideas to life in VR. However, the technology also has ethical and social implications. For example, it could make it easy to generate avatars resembling real people for political or commercial purposes, potentially violating their privacy rights.
Still, text-to-VR is an exciting vision in VR development. Perhaps it won’t be long before we make our own imaginations a reality in virtual reality.