Export a model programmatically

All of the export options available on the Custom Vision website are also available programmatically through the client libraries. You may want to use client libraries so you can fully automate the process of retraining and updating the model iteration you use on a local device.

This guide shows you how to export your model to an ONNX file with the Python SDK.

Create a training client

You need a CustomVisionTrainingClient object to export a model iteration. Create variables for your Custom Vision training resource's Azure endpoint and keys, and use them to create the client object.


from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)


Remember to remove the keys from your code when you're done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Azure AI services security article.
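One common approach is to read the key and endpoint from environment variables rather than embedding them in source. A minimal sketch, assuming the variable names VISION_TRAINING_ENDPOINT and VISION_TRAINING_KEY (these names are illustrative, not required by the SDK):

```python
import os

def load_training_config():
    """Read the Custom Vision endpoint and training key from the environment.

    The variable names below are assumptions for this example; use whatever
    names your deployment pipeline defines.
    """
    endpoint = os.environ["VISION_TRAINING_ENDPOINT"]
    training_key = os.environ["VISION_TRAINING_KEY"]
    return endpoint, training_key
```

You can then pass the returned values to ApiKeyCredentials and CustomVisionTrainingClient as shown above.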

Call the export method

Call the export_iteration method.

  • Provide the project ID and the iteration ID of the model you want to export.
  • The platform parameter specifies the platform to export to: allowed values are CoreML, TensorFlow, DockerFile, ONNX, VAIDK, and OpenVino.
  • The flavor parameter specifies the format of the exported model: allowed values are Linux, Windows, ONNX10, ONNX12, ARM, TensorFlowNormal, and TensorFlowLite.
  • The raw parameter gives you the option to retrieve the raw JSON response along with the object model response.

project_id = "PASTE_YOUR_PROJECT_ID"
iteration_id = "PASTE_YOUR_ITERATION_ID"
platform = "ONNX"
flavor = "ONNX10"
export = trainer.export_iteration(project_id, iteration_id, platform, flavor, raw=False)

For more information, see the export_iteration method.


If you've already exported a particular iteration, you cannot call the export_iteration method again. Instead, skip ahead to the get_exports method call to get a link to your existing exported model.
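If you don't know in advance whether the iteration was already exported, you can wrap the call and fall back to get_exports. This is a sketch, not SDK-provided behavior: export_or_get_existing is a hypothetical helper name, and the broad except clause stands in for the SDK's specific error type.

```python
def export_or_get_existing(trainer, project_id, iteration_id, platform, flavor):
    """Start an export, or return the existing one if it was already created."""
    try:
        return trainer.export_iteration(project_id, iteration_id, platform, flavor)
    except Exception:
        # The service rejects a second export of the same iteration, platform,
        # and flavor, so look up the existing export instead.
        for e in trainer.get_exports(project_id, iteration_id):
            if e.platform == platform and e.flavor == flavor:
                return e
        raise
```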

Download the exported model

Next, you'll call the get_exports method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.

import time

while export.status == "Exporting":
    print("Waiting 10 seconds...")
    time.sleep(10)
    exports = trainer.get_exports(project_id, iteration_id)
    # Locate the export for this iteration and check its status
    for e in exports:
        if e.platform == export.platform and e.flavor == export.flavor:
            export = e
    print("Export status is: ", export.status)
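The lookup inside the loop can be factored into a small helper, which also makes the matching rule (same platform and flavor) explicit. find_export is a hypothetical name, not part of the SDK:

```python
def find_export(exports, platform, flavor):
    """Return the export whose platform and flavor both match, or None."""
    for e in exports:
        if e.platform == platform and e.flavor == flavor:
            return e
    return None
```

Inside the loop, `export = find_export(exports, export.platform, export.flavor) or export` keeps the previous value when no match is found yet.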

For more information, see the get_exports method.

Then, you can programmatically download the exported model to a location on your device.

import requests

if export.status == "Done":
    # Success, now we can download it
    export_file = requests.get(export.download_uri)
    with open("export.zip", "wb") as file:
        file.write(export_file.content)
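The downloaded file is a ZIP archive, so once it's on disk you can unpack it with the standard library. A minimal sketch; extract_model is a hypothetical helper name:

```python
import zipfile
from pathlib import Path

def extract_model(zip_path, dest_dir):
    """Unpack the exported model archive and return the extracted file paths."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(dest)
    return sorted(p for p in dest.rglob("*") if p.is_file())
```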

Next steps

Integrate your exported model into an application by exploring one of the following articles or samples: