This week, Apple updated its Vision Pro spatial computing headset to visionOS 2. The update delivers optimised MR features, including improved spatial capture, hand tracking, key productivity enhancements, virtual environments, avatars, and more.
Much as it brought iPhones and iPads into the business world, Apple appears to be looking to repeat that success with an MR device. The latest update continues that trend across a broad set of features.
Productivity Enhancements for Apple Vision Pro
The productivity updates appear to improve general usability and accessibility for professional workflows.
By introducing new customer-facing features, Apple aims to help professionals leverage device-specific workflows. Rather than sweeping changes, the firm appears to be refining the Vision Pro through incremental improvements that streamline the user experience.
Firstly, Apple has added physical mouse support to improve application navigation and precision on the Vision Pro.
The mouse integration arrives alongside a Magic Keyboard optimisation that reveals the physical keyboard's location even when the user is fully immersed in a virtual environment.
Notably, the visionOS 2 update includes new accessibility features that could enhance productivity ambitions, including a Live Captions mode that provides users with real‑time transcriptions of speech, audio, and video content.
Additionally, the device’s Look to Dictate feature now works in the Messages application, which Apple believes will streamline response times.
Apple notes that the Vision Pro will receive a further update before the end of the year, which will introduce Mac Virtual Display improvements.
Improved Spatial Capture
visionOS 2 also enhances the device’s spatial recording feature, which allows wearers to capture 3D recordings – such as a photo or video – and view the capture on a Vision Pro or iOS device.
The updated feature now applies machine learning algorithms to improve capture results. Moreover, visionOS 2 lets users convert 2D images into 3D spatial photos.
Recently, Apple debuted the iPhone 16 Pro and 16 Pro Max, two new smartphones powered by the A18 Pro chip and ready to optimise AI, camera, and battery life features.
The new iPhone models support Apple's spatial recording feature, introduced alongside the Vision Pro earlier this year, which lets users capture 3D recordings of a moment, similar to a photo or video. Afterwards, users can replay the spatial recording on a Vision Pro headset as an immersive 3D visualisation.
The visionOS 2 update builds on these iPhone developments by supporting spatial captures taken on iPhone 16 and 16 Pro models.
Additionally, Apple Vision Pro’s SharePlay improves the peer-to-peer distribution of spatial content, including 16/16 Pro model captures.
Apple is also updating the Vision Pro Photos application, redesigning its layout and implementing capture-editing workflows.
It appears that Apple is working to champion the development of its spatial recording features, which are becoming a mainstay of recent XR devices. As part of the update, the firm announced that Canon will soon debut a spatial capture lens for its EOS R7 digital camera range. This should lead to higher-quality spatial recordings, perhaps opening new avenues for XR content creation.
Apple is showing an interest in appealing to media professionals, most likely to scale MR content production. Notably, the firm highlights that Final Cut Pro for Mac will include spatial video editing capabilities.
Updated Avatars
The Vision Pro update also improves the Persona avatar feature, which workers can use to communicate with other headset users or with colleagues on Persona-compatible communication services such as Microsoft Teams.
The updated Persona system adds more accurate skin tones, new clothing colours, enhanced hand movements, and more virtual backgrounds for FaceTime or third-party service video calls.
The aforementioned SharePlay also includes new Persona integrations, allowing users’ avatars to watch spatial recordings together.
visionOS Optimises Hand and Eye Tracking
The Vision Pro turned heads for various reasons, most notably its controller-less design, which leverages eye and hand tracking for input instead. While smart glasses rarely use controllers, an advanced MR headset without them brought a new perspective to the industry.
The Vision Pro lets users combine voice, hand, and eye-based inputs. Using hand tracking, users can drag, pinch, and pull to interact with spatial applications, similar to touchscreen interactions on a smartphone.
Now, with visionOS 2, Apple is optimising input by introducing new hand gestures that allow users to quickly access the home screen, check battery levels, adjust device volume, and access the Control Center panel.
Moreover, the visionOS 2 update includes a feature that saves recent guests' eye and hand data, such as that of coworkers or friends, for 30 days.
The Future of the Vision Pro
Apple’s visionOS 2 press release notes that a further update will come before the end of 2024. The device is still under a year old, so the future could bring unforeseen moves from Apple.
However, some rumours are circulating. In a recent newsletter, Mark Gurman mentioned that Apple is planning to release a lower-priced MR headset. That said, this headset would still be more expensive than competitors' offerings, starting at a minimum of $1,499.
In addition, Gurman claims that Apple’s XR development teams are working on a more affordable spatial computing device, a second-generation Apple Vision Pro model, and a pair of more straightforward smart glasses. Gurman also noted that these experimental devices could be launched early next year.
This news comes after Apple expanded the availability of the Vision Pro to new international markets, including the United Kingdom, China, Japan, Singapore, Australia, Canada, France, and Germany.