Azure Percept DK: How to collect network inference performance on a pre-trained model?

jspdev 1 Reputation point
2021-09-09T21:37:30.427+00:00

Looking for some metric, for example latency in milliseconds, that can represent the inference time of a pre-trained model using the Azure Percept DK.

For example, I have been training (or building) a network in TensorFlow and would like to test it on the Azure Percept DK. I would like latency metrics that reflect the changes I make to my network so I can track inference performance over time.

Is there any way to get this information from the Percept devkit? I see telemetry information, but it doesn't appear to include the performance numbers I am seeking.

If it is easier to demonstrate with an example, is this available for any of the pre-trained models supplied with the devkit, so I can test one and see its inference performance?

Are there any guides or precedents for collecting network performance on this device? I am not looking for precision/recall/mAP percentages here, just network latency times.

Thanks.


1 answer

  1. António Sérgio Azevedo 7,671 Reputation points Microsoft Employee
    2021-09-10T16:36:07.007+00:00

    Hello @jspdev ,
    My suggestion is to leverage the IoT Edge Azure Monitor integration: Collect and transport metrics (Preview).

    With it, you will be able to Add custom metrics (Preview):

    Gather custom metrics from your IoT Edge modules in addition to the built-in metrics that the system modules provide. The built-in metrics provide great baseline visibility into your deployment health. However, you may require additional information from custom modules to complete the picture. Custom modules can be integrated into your monitoring solution by using the appropriate Prometheus client library to emit metrics. This additional information can enable new views or alerts specialized to your requirements.
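    To make this concrete, here is a minimal sketch of timing inference in an IoT Edge module. The `run_inference` function is a hypothetical stand-in for your model's forward pass (a short sleep simulates it); in a real module you would emit these latency values with a Prometheus client library (for example, a `Histogram` from the `prometheus_client` package) so the metrics collector can scrape them, as described above.

    ```python
    # Sketch: measure per-inference latency in milliseconds.
    # `run_inference` is a placeholder for your model's forward pass; in a
    # real IoT Edge module you would feed these values into a Prometheus
    # client metric (e.g. Histogram.observe) instead of printing them.
    import statistics
    import time

    def run_inference(frame):
        # Stand-in for the actual model invocation.
        time.sleep(0.005)
        return "label"

    def measure_latency_ms(n_runs=50):
        """Time n_runs inferences and return per-run latency in ms."""
        latencies = []
        for _ in range(n_runs):
            start = time.perf_counter()
            run_inference(frame=None)
            latencies.append((time.perf_counter() - start) * 1000.0)
        return latencies

    if __name__ == "__main__":
        samples = measure_latency_ms()
        print(f"mean={statistics.mean(samples):.2f} ms "
              f"max={max(samples):.2f} ms over {len(samples)} runs")
    ```

    Tracking a histogram rather than a single average is useful here, because edge devices often show occasional latency spikes that a mean would hide.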

    Let me know if you have further questions or concerns about implementing this solution.

    Thanks!

