Azure Remote Rendering: Is there any possible way to retrieve a remote model's mesh (UnityEngine.Mesh) data and set it on Unity's Mesh Collider component?

KanadeSato-5351 21 Reputation points
2022-11-14T08:15:54.767+00:00

Hello.
We are using Azure Remote Rendering in our HoloLens 2 application and trying to add collision detection between ARR's remote model and local models in the Unity scene.
Is there any possible way to retrieve a remote model's mesh (UnityEngine.Mesh) data in the Unity scene and set it on Unity's Mesh Collider component?

I have already added a Rigidbody component and a Mesh Collider component to the ARR model and successfully retrieved the model's `MeshComponent.Mesh` by using `FindComponentOfType`.


Accepted answer
  1. Jan Krassnigg 91 Reputation points
    2022-11-14T11:16:17.83+00:00

    Getting the mesh data is not possible, but ARR has other features that might be of help.

    As a bit of background: As you are certainly aware, ARR is used to render models that are too detailed for the HoloLens to render itself. Usually these meshes are even too large to load into the device's memory in the first place. Consequently, it would be impossible not only to render them, but also to do any kind of physics (even just raycasts) with those meshes on the device.

    Therefore, the HoloLens never gets any of the mesh data. The model is only loaded on the server, the image is rendered there, and the device only gets a video stream plus a scene graph with some high-level data so it can position the model, etc. No triangle data is ever sent to the device, which means there is no way to retrieve the mesh and hand it to Unity.

    However, it is obviously a very common use case that an app needs to detect which model a user is pointing at or which part of a model a user's hand is touching. Therefore, rather than doing such operations on the device, you can simply ask ARR to do them for you.

    See this documentation: https://learn.microsoft.com/en-us/azure/remote-rendering/overview/features/spatial-queries

    There are currently two types of queries:
    * Raycasts
    * Spatial Queries

    With the first query you shoot a ray into the scene and get back what mesh was hit and where. This is typically used to determine what a user is pointing at.
    With the second query you define a volume and ask what meshes overlap with that volume. This is often used to figure out what mesh to move when a user tries to grab something.
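
    For example, a ray cast query in the Unity C# API looks roughly like the sketch below. This is only a sketch based on the linked documentation; the exact type and property names (RayCast, RayCastQueryAsync, HitEntity, and so on) should be verified against the ARR SDK version you are using.

    ```csharp
    using Microsoft.Azure.RemoteRendering;

    // Rough sketch: ask the ARR server to trace a ray through the remote scene.
    async void CastRayAtRemoteScene(RenderingSession session)
    {
        // Shoot a ray from the origin along +Z, at most 10 units, keep only the closest hit.
        var rayCast = new RayCast(new Double3(0, 0, 0), new Double3(0, 0, 1), 10.0,
                                  HitCollectionPolicy.ClosestHit);

        RayCastQueryResult result = await session.Connection.RayCastQueryAsync(rayCast);

        if (result.Hits.Length > 0)
        {
            RayCastHit hit = result.Hits[0];
            // hit.HitEntity is the remote object that was hit; hit.HitPosition and
            // hit.HitNormal describe the contact point, e.g. to place a local marker.
        }
    }
    ```

    The volume-overlap (spatial) query follows the same asynchronous pattern; see the documentation page above for the exact call.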

    There are of course many other things that can be achieved with these queries.

    Now of course you can't do a physics simulation with this, so you couldn't have an object rendered with ARR literally collide with other things and fall to the floor. That would generally not work anyway, because of the complexity of these objects. If you really want something like that, you need a very low-resolution (and convex) representation of your mesh at hand to use as a collision mesh instead. Then you could run the physics simulation on the device and synchronize the resulting position to ARR after every update.
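
    As a minimal sketch of that last idea, assuming the remote model's root entity has been bound to a Unity GameObject that carries ARR's RemoteEntitySyncObject component (the class and field names below are otherwise hypothetical): a low-poly convex proxy object carries the Rigidbody and Mesh Collider, is simulated locally, and its pose is pushed back to the remote model every frame.

    ```csharp
    using Microsoft.Azure.RemoteRendering.Unity;
    using UnityEngine;

    // Hypothetical helper: a local, low-poly convex proxy does the actual physics,
    // and the remotely rendered model is moved to follow it.
    [RequireComponent(typeof(Rigidbody))]
    public class RemoteModelPhysicsProxy : MonoBehaviour
    {
        // GameObject bound to the ARR entity (e.g. created via Entity.GetOrCreateGameObject).
        public GameObject remoteModelRoot;

        void Start()
        {
            // The proxy needs a convex collider that roughly matches the remote model.
            var proxyCollider = GetComponent<MeshCollider>();
            if (proxyCollider != null)
            {
                proxyCollider.convex = true; // Rigidbody physics requires convex mesh colliders
            }

            // Ask ARR to push the bound GameObject's Unity transform to the server every frame.
            var sync = remoteModelRoot.GetComponent<RemoteEntitySyncObject>();
            if (sync != null)
            {
                sync.SyncEveryFrame = true;
            }
        }

        void LateUpdate()
        {
            // Copy the simulated proxy pose onto the remote model's root.
            remoteModelRoot.transform.SetPositionAndRotation(transform.position, transform.rotation);
        }
    }
    ```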

