Comparison of point clouds in Azure Spatial Anchors

Kastler 36 Reputation points
2021-12-23T06:01:41.637+00:00

Hello,
I am currently doing a study on the recent Azure Spatial Anchors technology. Unfortunately, I am having difficulty finding information about some technical aspects, in particular about point cloud comparison. If I have understood correctly, during a first session the AR system detects a cloud of feature points in its environment (Cloud A) so that a spatial anchor can be positioned relative to it. Now imagine that I close this session and start a second one. To recover the position of the previously placed spatial anchor, the system has to detect a new cloud of feature points in its environment (Cloud B). So, somehow, Azure Spatial Anchors has to compare Cloud A with Cloud B to determine whether or not they come from the same real environment.

My question is: what technique is used here? To be clear, I am not looking for implementation details; I only want to know what type of algorithm is used (Iterative Closest Point, Robust Point Matching, ...).
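
To give a concrete example of the class of algorithm I mean (and not as a claim about what Azure actually uses), here is a minimal ICP-style sketch: nearest-neighbour correspondences alternated with a closed-form Kabsch/SVD rigid fit.

```python
# Minimal ICP sketch (NumPy only): aligns Cloud B to Cloud A by alternating
# nearest-neighbour correspondence search with a closed-form rigid fit.
# Purely illustrative of the algorithm class; not Azure Spatial Anchors code.
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD: rotation R and translation t minimising ||R @ src + t - dst||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(cloud_b, cloud_a, iterations=30):
    """Estimate the rigid transform taking cloud_b (Nx3) into cloud_a (Mx3)."""
    current = cloud_b.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # 1. correspondences: nearest point of A for every point of B
        d2 = ((current[:, None, :] - cloud_a[None, :, :]) ** 2).sum(axis=-1)
        matches = cloud_a[d2.argmin(axis=1)]
        # 2. closed-form rigid fit for those correspondences, then apply it
        R, t = best_rigid_transform(current, matches)
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total            # 6DoF relation between the two sessions
```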
I can't find any information on this and would be very grateful if you could enlighten me.
Thank you in advance.

Azure Spatial Anchors
An Azure service that is used to build immersive three-dimensional applications and experiences that map, persist, and restore content or points of interest at real-world scale.

Accepted answer
  1. António Sérgio Azevedo 7,661 Reputation points Microsoft Employee
    2021-12-27T19:41:30.113+00:00

    @Kastler double checking that you have come across our FAQ article? I am not sure whether it answers all of your questions, but I believe it addresses some of them: https://learn.microsoft.com/en-us/azure/spatial-anchors/spatial-anchor-faq

    Azure Spatial Anchors depends on mixed reality / augmented reality trackers. These trackers perceive the environment with cameras and track the device in 6-degrees-of-freedom (6DoF) as it moves through the space.

    Given a 6DoF tracker as a building block, Azure Spatial Anchors allows you to designate certain points of interest in your real environment as "anchor" points. You might, for example, use an anchor to render content at a specific place in the real-world.

    When you create an anchor, the client SDK captures environment information around that point and transmits it to the service. If another device looks for the anchor in that same space, similar data transmits to the service. That data is matched against the environment data previously stored. The position of the anchor relative to the device is then sent back for use in the application.
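
    In very rough pseudocode, that round trip looks something like the sketch below. Every name in it is hypothetical, invented purely for illustration; it is not the actual Azure Spatial Anchors SDK or REST API.

    ```python
    # Toy, in-memory stand-in for the create/query round trip described above.
    # All names are hypothetical; this is NOT the real Azure Spatial Anchors API.

    class ToyAnchorService:
        """Stores environment data per anchor id and matches new data against it."""

        def __init__(self):
            self._store = {}

        def create_anchor(self, snapshot, anchor_pose):
            """Session 1: keep the environment snapshot and the anchor's pose
            (a 4x4 homogeneous matrix, e.g. a NumPy array, in that session's frame)."""
            anchor_id = f"anchor-{len(self._store)}"
            self._store[anchor_id] = (snapshot, anchor_pose)
            return anchor_id

        def locate_anchor(self, anchor_id, new_snapshot, register):
            """Session 2: match the fresh snapshot against the stored one and
            return the anchor's pose expressed in the new session's frame."""
            stored_snapshot, stored_pose = self._store[anchor_id]
            # `register` compares the two snapshots (e.g. descriptor matching
            # plus a rigid fit) and returns the old-frame -> new-frame transform.
            old_to_new = register(stored_snapshot, new_snapshot)
            return old_to_new @ stored_pose

    # Usage:
    #   service = ToyAnchorService()
    #   anchor_id = service.create_anchor(snapshot_a, pose_in_session_a)      # session 1
    #   pose_now = service.locate_anchor(anchor_id, snapshot_b, my_register)  # session 2
    ```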

    ...

    For each point in the sparse point cloud, we transmit and store a hash of the visual characteristics of that point. The hash is derived from, but does not contain, any pixel data.
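
    As a rough illustration of what such per-point hashes make possible (a generic technique sketch, not a description of the service's internal algorithm), points from two sessions can be paired by comparing their binary hashes rather than any image data:

    ```python
    # Generic sketch: pair points from two sessions by their binary descriptor
    # hashes (no pixel data involved). Illustrative only; not the service's
    # documented matching algorithm.
    import numpy as np

    def hamming(a, b):
        """Hamming distance between two binary hashes stored as uint8 arrays."""
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

    def match_by_hash(hashes_a, hashes_b, max_bits=16):
        """Return (i, j) pairs whose hashes are mutually nearest and differ by
        at most max_bits bits (a simple cross-check to reject weak matches)."""
        pairs = []
        for i, ha in enumerate(hashes_a):
            dists = [hamming(ha, hb) for hb in hashes_b]
            j = int(np.argmin(dists))
            if dists[j] <= max_bits:
                back = [hamming(hashes_b[j], h) for h in hashes_a]
                if int(np.argmin(back)) == i:      # mutual nearest neighbour
                    pairs.append((i, j))
        return pairs

    # The matched 3-D points (points_a[i], points_b[j]) can then be fed to a
    # robust rigid-transform estimator (e.g. RANSAC around a Kabsch/SVD fit,
    # as in the ICP sketch in the question) to recover how the sessions relate.
    ```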


0 additional answers
