How to deploy a Keras or TensorFlow model on IoT Edge and run inference using ONNX Runtime?

Anurag Shelar

I have developed a classification model in Keras and wish to deploy it onto an IoT Edge device. How can I run inference on this model using ONNX Runtime, and how do I write the scoring script? Is there a good source to refer to for this?

Azure IoT Edge
An Azure service that is used to deploy cloud workloads to run on internet of things (IoT) edge devices via standard containers.
Azure Machine Learning
An Azure machine learning service for building and deploying models.

2 answers

  1. Manash Goswami, Microsoft Employee

    Check this reference sample for deploying ONNX models to IoT Edge devices. The reference implementation uses a DevOps pipeline to automate the retraining and deployment steps for CI/CD, and performs a Keras-to-ONNX conversion in the Jupyter notebook to generate the ONNX graph.
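    The Keras-to-ONNX conversion mentioned above can be done with the `tf2onnx` package. A minimal sketch, assuming TensorFlow 2.x with `tf2onnx` installed (the tiny model, the input shape, and the file name `classifier.onnx` are placeholders; in practice you would load your trained classifier instead):

    ```python
    import tensorflow as tf
    import tf2onnx

    # Placeholder classifier; in practice, load your trained model instead,
    # e.g. model = tf.keras.models.load_model("classifier.h5").
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # Describe the model input so tf2onnx can build the ONNX graph,
    # then write the converted model to disk.
    spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)
    onnx_model, _ = tf2onnx.convert.from_keras(
        model, input_signature=spec, output_path="classifier.onnx"
    )
    ```

    The resulting `classifier.onnx` file is what you package into the IoT Edge module alongside ONNX Runtime.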

    1 person found this answer helpful.
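    For the scoring script itself, Azure ML scoring scripts conventionally expose `init()` (called once at container start) and `run()` (called per request). A sketch of a `score.py` that serves an ONNX model with ONNX Runtime, assuming the `onnxruntime` and `numpy` packages are available and `classifier.onnx` is a placeholder for your converted model:

    ```python
    import json
    import numpy as np
    import onnxruntime as ort

    session = None
    input_name = None

    def init():
        # Called once when the service starts: load the ONNX model.
        global session, input_name
        session = ort.InferenceSession(
            "classifier.onnx", providers=["CPUExecutionProvider"]
        )
        input_name = session.get_inputs()[0].name

    def run(raw_data):
        # Called per request: JSON payload in, predicted class indices out.
        batch = np.array(json.loads(raw_data)["data"], dtype=np.float32)
        probs = session.run(None, {input_name: batch})[0]
        return json.dumps({"predictions": np.argmax(probs, axis=1).tolist()})
    ```

    The JSON `{"data": [...]}` request shape shown here is an assumption for illustration; adapt it to whatever payload your edge module or endpoint actually receives.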

  2. Sander van de Velde, MVP

    Hello @Anurag Shelar,

    This is not a simple question to answer, given the many different ways of handling a model.

    I recommend first checking out the MS Learn modules for the AI Edge Engineer role.

    After that, check out this hands-on lab based on a YOLO model running on a Jetson Nano.

    Last but not least, look at this page and follow workflow WF1.
