Hi @IEABJKL ,
Thank you for the post. Azure Object Anchors performs the model conversion in the cloud to produce the data used for detection/mapping. We do not publish the details of the extraction algorithm, but the FAQ does state that the object's geometry is used in it.
We do note that any model being uploaded should stay within the recommended parameters for the conversion process. From these parameters you can infer some of the properties that matter to the algorithm, such as size, geometry, and surface color:
https://learn.microsoft.com/en-us/azure/object-anchors/faq#product-faq
We recommend the following properties for objects:
- 1-10 meters for each dimension (a quick pre-upload check is sketched after this list)
- Non-symmetric, with sufficient variations in geometry
- Low reflectivity (matte surfaces) with bright color
- Stationary objects
- No or small amounts of articulation
- Clear backgrounds with no or minimal clutter
- Scanned object should be a 1:1 match with the model you trained with
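If it helps, here is a minimal sketch of a local pre-upload check against the size recommendation above. It is not part of the Object Anchors SDK; it assumes the open-source trimesh Python library, that your model is in a format trimesh can read (OBJ, PLY, GLB, etc.), and that the model's units are meters. The function name and the `chair.obj` file name are illustrative only.

```python
# Sanity-check a model's bounding box against the 1-10 m recommendation
# before submitting it for conversion. Assumes units are meters; rescale
# first if your CAD export uses millimeters or centimeters.
import trimesh

def check_dimensions(path, min_m=1.0, max_m=10.0):
    """Warn about any axis-aligned bounding-box extent outside [min_m, max_m]."""
    mesh = trimesh.load(path, force="mesh")
    extents = mesh.extents  # (x, y, z) lengths of the axis-aligned bounding box
    for axis, length in zip("xyz", extents):
        if not (min_m <= length <= max_m):
            print(f"Warning: {axis} extent {length:.2f} m is outside the "
                  f"recommended {min_m}-{max_m} m range.")
    return extents

# Example (hypothetical file name):
# check_dimensions("chair.obj")
```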
Thanks,
Nathan Manis