Users of Apple’s Vision Pro headset will be able to build a digital avatar in order to conduct more realistic video chats while their face is partially hidden.
The company says it uses “advanced machine learning” to accurately replicate a user’s face and hand movements during FaceTime calls. To build the hyper-realistic 3D digital avatars, users scan their faces with the headset’s front-facing cameras.
In an introductory video, Apple demonstrated the function, claiming that it would allow people to see a user’s “eyes, hands, and true expressions” during video chats.
On Monday, Apple announced the headset, the company’s first major new product category in eight years, at its Worldwide Developers Conference. Despite its hefty price tag, the headset was highly anticipated and drew generally positive reviews from critics.
Mike Rockwell, the lead of Apple’s AR/VR project team, said during the conference that videoconferencing was one of the “most difficult challenges” the team faced while developing Vision Pro, because users are always wearing something over their eyes.
Apple representatives did not immediately answer Insider’s request for more information on the feature.
Based on the examples shown in the video, the avatars for Apple’s augmented-reality headset appear hyper-realistic, in contrast to some of Meta’s early attempts at virtual-reality avatars. Meta CEO Mark Zuckerberg was widely mocked after a selfie of his early metaverse avatar spawned a wave of critical memes.
Zuckerberg posted a photo of his avatar in front of the Eiffel Tower on Facebook last year, and social media users instantly ridiculed it for its simplistic graphics. In response to the criticism, the CEO later shared a more advanced version of the avatar on Instagram.