How to generate an ONNX model from an Azure AI Machine Learning Studio Instance Segmentation model

Richard Smith 20 Reputation points
2025-03-13T03:00:07.0766667+00:00

I have created a Polygon (Instance Segmentation) Data Set in Machine Learning Studio.

I have labelled and approved numerous objects in numerous images.

I have generated a Model from the Data Set - which may be downloaded as a PyTorch file (plus numerous other files).

I need to generate an ONNX file but cannot find any way of doing that in Machine Learning Studio.

I have attempted to convert the .pt file downloaded from Azure using VS Code but I cannot find a way to get PyTorch to accept the .pt file.

All of the explanations I have found in the Azure "help" documentation load a pre-trained model from torch.models. All of the "help" documentation related to conversion to ONNX within Machine Learning Studio either references components that do not exist or the links themselves are broken.

I require the ONNX file for use in a C# Windows Desktop application.


1 answer

  1. SriLakshmi C 6,010 Reputation points Microsoft External Staff Moderator
    2025-03-13T06:26:53.3+00:00

    Hello Richard Smith,

    To generate an ONNX model from your Azure AI Machine Learning Studio instance segmentation model, you will first need to convert your PyTorch model file (.pt) to the ONNX format. Here’s a general approach you can follow:

    Ensure you have the necessary libraries installed by running:

    pip install torch onnx
    

    Retrieve your .pt model file and any additional definition files from Azure ML Studio.

    Next, load your PyTorch model in your Python environment with the torch.load() function, pointing it at the downloaded .pt file.
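
    A minimal loading sketch (assuming the downloaded .pt file contains a full serialized model; if it only holds a state_dict, you must first instantiate the model class from the definition files that came with the download):

    import torch

    # Load the model downloaded from Azure ML Studio.
    # On recent PyTorch versions you may need weights_only=False to unpickle a full nn.Module.
    checkpoint = torch.load("model.pt", map_location="cpu")

    if isinstance(checkpoint, torch.nn.Module):
        model = checkpoint
    else:
        # The file holds only weights; build the architecture first, e.g. (hypothetical):
        # from model_def import InstanceSegmentationModel
        # model = InstanceSegmentationModel()
        # model.load_state_dict(checkpoint)
        raise RuntimeError("Checkpoint is a state_dict; instantiate the model class first.")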

    If your model fails to load directly, try an alternative approach by exporting it within ML Studio using the "Convert to ONNX" step in your pipeline before downloading.

    Before exporting, ensure that your model is set to evaluation mode by calling model.eval() or model.train(False). This step is crucial as layers like dropout and batch normalization function differently during training and inference.

    Use the torch.onnx.export() function to convert your model to the ONNX format. Specify the model, a dummy input tensor matching the model's expected input shape, and the desired filename for the output ONNX file.
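
    For example, here is a minimal export sketch, assuming the model accepts a single 3-channel 800x800 image tensor (adjust the dummy input shape, opset version, and input/output names to match your model):

    import torch

    model.eval()  # inference mode: dropout off, batch-norm uses running stats

    # Dummy input with the shape the model expects at inference time.
    dummy_input = torch.randn(1, 3, 800, 800)

    torch.onnx.export(
        model,                     # the loaded PyTorch model
        dummy_input,               # example input used to trace the graph
        "model.onnx",              # output file name
        opset_version=12,          # choose an opset your target runtime supports
        input_names=["input"],
        output_names=["output"],   # instance-segmentation models often have several outputs (boxes, labels, scores, masks)
        dynamic_axes={"input": {0: "batch"}},  # optional: allow a variable batch size
    )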

    Once the export function is executed, the model.onnx file should be available in your working directory.
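
    To confirm the exported file is well-formed before moving to C#, a quick check with the onnx and onnxruntime packages (pip install onnx onnxruntime) might look like this:

    import numpy as np
    import onnx
    import onnxruntime as ort

    # Structural validation of the exported graph.
    onnx.checker.check_model(onnx.load("model.onnx"))

    # Run one inference pass with the same dummy shape used during export.
    session = ort.InferenceSession("model.onnx")
    dummy = np.random.randn(1, 3, 800, 800).astype(np.float32)
    outputs = session.run(None, {"input": dummy})
    print([o.shape for o in outputs])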

    If you experience issues loading the .pt file, verify that it is compatible with your PyTorch version and that the correct model architecture is defined in your code.

    After successfully generating the ONNX file, you can integrate it into your C# Windows Desktop application, for example with the Microsoft.ML.OnnxRuntime NuGet package.

    Please refer to Convert your PyTorch model to ONNX format and Train a model with PyTorch and export to ONNX.

    I hope this helps. If you have any further queries, do let us know.

    Thank you!

