Is ONNX the best way to deal with Azure IoT Edge machine learning models?

Anurag Shelar 181 Reputation points
2020-12-29T15:30:35.06+00:00

I am working on Azure IoT Edge. I have three models:
a) YOLOv3 object detection model in .weights format
b) ResNet classification model
c) VGG16 classification model in .h5 format

I converted them to ONNX, ran inference on them with ONNX Runtime, and wrote the necessary scoring scripts.
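For context, the conversion and scoring flow for the Keras model looked roughly like this (a minimal sketch; the file paths are illustrative and tf2onnx is one converter choice among several):

```python
# Sketch: convert a Keras .h5 model to ONNX, then score with ONNX Runtime.
# File paths are illustrative; tf2onnx is an assumed converter choice.
import numpy as np
import onnxruntime as ort
import tensorflow as tf
import tf2onnx

# Convert the Keras VGG16 model to ONNX.
model = tf.keras.models.load_model("vgg16.h5")
spec = (tf.TensorSpec((None, 224, 224, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="vgg16.onnx")

# Score the converted model with ONNX Runtime.
sess = ort.InferenceSession("vgg16.onnx")
input_name = sess.get_inputs()[0].name
batch = np.random.rand(1, 224, 224, 3).astype(np.float32)
preds = sess.run(None, {input_name: batch})[0]
print(preds.shape)
```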

I wanted to know: how do I use these models in their original formats without converting them to ONNX, or is ONNX the best way for IoT Edge modules?

Tags: Azure IoT Edge · Azure Machine Learning · Azure IoT SDK

2 answers

  1. António Sérgio Azevedo 7,671 Reputation points Microsoft Employee
    2020-12-29T16:07:59.92+00:00

    Hi @Anurag Shelar, it is not mandatory to convert your models to ONNX. Check this doc:

    "You can train, deploy, and manage the end-to-end machine learning process in Azure Machine Learning by using open-source Python machine learning libraries and platforms. Use development tools, like Jupyter Notebooks and Visual Studio Code, to leverage your existing models and scripts in Azure Machine Learning."

    Thanks.

    1 person found this answer helpful.

  2. Manash Goswami 11 Reputation points Microsoft Employee
    2021-01-04T18:41:02.81+00:00

    If you don't convert the models to ONNX, then the respective inference engines will need to be included in your IoT Edge module to run inference.
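    To illustrate, serving the three models in their original formats would pull in three separate inference stacks (a sketch; the ResNet loading assumes a torchvision checkpoint, which the question does not specify, and paths are illustrative):

    ```python
    # Sketch: each original format needs its own inference stack in the image.
    import cv2                # OpenCV DNN can read Darknet .weights files
    import tensorflow as tf   # Keras for the .h5 VGG16
    import torchvision        # assumed source of the ResNet (not stated above)

    yolo = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    vgg16 = tf.keras.models.load_model("vgg16.h5")
    resnet = torchvision.models.resnet50(pretrained=True)  # deprecated arg in newer torchvision
    ```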

    ONNX provides a common format to represent neural network models from different training frameworks and execute them using the same runtime, i.e. ONNX Runtime, on various edge device platforms. The same base Docker image can be used to run ONNX models across x86 and arm64 devices with different accelerators like GPUs and VPUs.
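    In practice, this means the same scoring code can pick up whatever execution providers the device's onnxruntime build offers (a short sketch; the model path is illustrative):

    ```python
    # Sketch: identical scoring code across device targets; only the
    # onnxruntime build (and thus the available providers) differs per image.
    import onnxruntime as ort

    providers = ort.get_available_providers()  # e.g. CUDA or OpenVINO on accelerated builds
    sess = ort.InferenceSession("model.onnx", providers=providers)
    print("Running with:", sess.get_providers())
    ```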

    1 person found this answer helpful.
