Google recently announced that its new Depth API is available in ARCore 1.18 for Unity and Android. This includes support for AR Foundation, which makes the feature open and accessible across (almost) all ARCore-capable Android devices. There's no dedicated depth sensor necessary, either.
Responding to the growing demand for mobile AR experiences and 3D scanning technology, Google decided to roll more depth technology out to its ARCore developers. ARCore uses simultaneous localization and mapping (SLAM) to understand where the phone is relative to whatever it's looking at.
With this technology, ARCore detects visually distinct feature points in the captured camera image and tracks how those points shift between frames to work out how the phone itself is moving. That visual tracking is combined with inertial measurements from the device's IMU to estimate the camera's position and orientation over time.
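As a rough illustration of what this tracking exposes to developers, here is a minimal Kotlin sketch that reads the fused camera pose from an ARCore frame. It assumes an already-configured com.google.ar.core.Session driven by a per-frame update loop; the surrounding app scaffolding (camera permissions, lifecycle, rendering) is omitted, and the function name is illustrative.

```kotlin
import android.util.Log
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Call once per rendered frame with an already-running ARCore Session.
fun logCameraPose(session: Session) {
    val frame = session.update()  // advances ARCore's SLAM-based tracking
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        // Pose estimated by fusing visual feature tracking with IMU data.
        val t = camera.pose.translation  // x, y, z in meters, world space
        Log.d("ARCorePose", "x=${t[0]} y=${t[1]} z=${t[2]}")
    }
}
```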
Creating More Realistic Experiences
Last year, Google gave a handful of collaborators an early preview of the ARCore Depth API, and that testing phase is now complete. According to Google, the new Depth experience will be an important leap forward in creating realistic and responsive AR experiences. The technology aims to overcome common problems with the placement of AR objects in real-world spaces.
The headline feature of the Depth API is seamless occlusion: virtual objects can be hidden behind real-world surfaces, which makes them feel anchored in the space you're in. The original collaborators on this technology included Snapchat, along with various game developers.
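On Android, enabling the feature is a small configuration change. The Kotlin sketch below uses the session-configuration API that shipped with ARCore 1.18 to turn on automatic depth where the device supports it; the helper name is my own, and the occlusion rendering itself (in Unity, via AR Foundation's occlusion support) is handled separately.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// Turn on the Depth API for an existing session, if the device supports it.
// No dedicated depth sensor is required; ARCore estimates depth from motion.
fun enableDepthIfSupported(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}
```

Checking isDepthModeSupported first matters because depth-from-motion is available on most, but not all, ARCore devices.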
One app that shows off the new technology particularly well is Lines of Play, an experiment from Google's Creative Lab that lets you set up virtual dominoes and watch them react in different ways depending on their arrangement.
Transforming the AR and 3D Industry
There's more to the potential benefits of depth technology in ARCore than better gaming experiences. Many experts agree that it will be a powerful step forward in the enterprise world too. In TeamViewer Pilot, for instance, AR users will be able to get more accurate annotations on complex scenes, allowing for stronger remote-collaboration experiences.
The ability to rapidly estimate the depth of a scene without dedicated sensors, and then place objects convincingly within the AR environment, opens up a variety of possibilities. You could overlay navigation signs on a jobsite in AR, or annotate the specific bolt that someone needs to remove in a repair situation. Now that there's an open SDK for this technology, there's no limit to the potential for new applications that go beyond gaming and entertainment.
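To make the annotation scenario concrete, here is a hedged Kotlin sketch of how an app might read the estimated distance to a tapped point from the depth image the API exposes. It assumes depth has already been enabled as shown earlier, that (x, y) is already in the depth image's coordinate space, and that the function name is illustrative; the acquireDepthImage() call is the one that shipped with ARCore 1.18, and the pixel-reading pattern follows Google's ARCore documentation for DEPTH16 images.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Estimated distance in millimeters to the real-world surface behind
// pixel (x, y) of the depth image -- e.g. where a remote expert tapped
// to drop an annotation. Returns null if depth isn't available yet.
fun distanceAtPixelMm(frame: Frame, x: Int, y: Int): Int? =
    try {
        frame.acquireDepthImage().use { depthImage ->
            val plane = depthImage.planes[0]
            val byteIndex = x * plane.pixelStride + y * plane.rowStride
            val buffer = plane.buffer.order(ByteOrder.nativeOrder())
            // Each 16-bit DEPTH16 pixel encodes distance to the camera plane.
            buffer.getShort(byteIndex).toInt() and 0xFFFF
        }
    } catch (e: NotYetAvailableException) {
        null  // Depth takes a few frames of device motion to become available.
    }
```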