Export a model programmatically
All of the export options available on the Custom Vision website are also available programmatically through the client libraries. You may want to use client libraries so you can fully automate the process of retraining and updating the model iteration you use on a local device.
This guide shows you how to export your model to an ONNX file with the Python SDK.
Create a training client
You need a CustomVisionTrainingClient object to export a model iteration. Create variables for your Custom Vision training resource's Azure endpoint and key, and use them to create the client object.
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

ENDPOINT = "PASTE_YOUR_CUSTOM_VISION_TRAINING_ENDPOINT_HERE"
training_key = "PASTE_YOUR_CUSTOM_VISION_TRAINING_KEY_HERE"

credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
Important
Remember to remove the keys from your code when you're done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Azure AI services security article.
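For example, one common approach is to read the endpoint and key from environment variables instead of hard-coding them. This is a minimal sketch; the variable names below are assumptions, not names the service requires:

```python
import os

# Hypothetical environment variable names -- set these in your shell or CI
# configuration instead of pasting secrets into source code.
ENDPOINT = os.environ.get("VISION_TRAINING_ENDPOINT", "")
training_key = os.environ.get("VISION_TRAINING_KEY", "")

if not ENDPOINT or not training_key:
    print("Warning: Custom Vision credentials are not configured in the environment.")
```

For production workloads, a secrets store such as Azure Key Vault is a stronger option than environment variables alone.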
Call the export method
Call the export_iteration method.
- Provide the project ID and iteration ID of the model you want to export.
- The platform parameter specifies the platform to export to: allowed values are CoreML, TensorFlow, DockerFile, ONNX, VAIDK, and OpenVino.
- The flavor parameter specifies the format of the exported model: allowed values are Linux, Windows, ONNX10, ONNX12, ARM, TensorFlowNormal, and TensorFlowLite.
- The raw parameter gives you the option to retrieve the raw JSON response along with the object model response.
project_id = "PASTE_YOUR_PROJECT_ID"
iteration_id = "PASTE_YOUR_ITERATION_ID"
platform = "ONNX"
flavor = "ONNX10"
export = trainer.export_iteration(project_id, iteration_id, platform, flavor, raw=False)
For more information, see the export_iteration method.
Important
If you've already exported a particular iteration, you cannot call the export_iteration method again. Instead, skip ahead to the get_exports method call to get a link to your existing exported model.
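In that case, you can locate the existing export by matching its platform and flavor in the list returned by get_exports. The sketch below models the returned export objects with a hypothetical stub class purely for illustration; in real code, the list would come from trainer.get_exports(project_id, iteration_id):

```python
from dataclasses import dataclass

# Hypothetical stand-in for the SDK's export object, holding only the
# attributes this sketch reads. Real code gets these from get_exports.
@dataclass
class ExportStub:
    platform: str
    flavor: str
    status: str
    download_uri: str

def find_export(exports, platform, flavor):
    """Return the export matching the given platform and flavor, or None."""
    for e in exports:
        if e.platform == platform and e.flavor == flavor:
            return e
    return None

# Example data shaped like a get_exports result.
exports = [
    ExportStub("TensorFlow", "TensorFlowLite", "Done", "https://example.invalid/tf.zip"),
    ExportStub("ONNX", "ONNX10", "Done", "https://example.invalid/onnx.zip"),
]
existing = find_export(exports, "ONNX", "ONNX10")
if existing is not None:
    print(existing.download_uri)
```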
Download the exported model
Next, you'll call the get_exports method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.
import time

while export.status == "Exporting":
    print("Waiting 10 seconds...")
    time.sleep(10)
    exports = trainer.get_exports(project_id, iteration_id)
    # Locate the export for this iteration and check its status
    for e in exports:
        if e.platform == export.platform and e.flavor == export.flavor:
            export = e
            break
    print("Export status is: ", export.status)
For more information, see the get_exports method.
Then, you can programmatically download the exported model to a location on your device.
import requests

if export.status == "Done":
    # Success, now we can download it
    export_file = requests.get(export.download_uri)
    with open("export.zip", "wb") as file:
        file.write(export_file.content)
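The downloaded file is a zip archive. A minimal sketch of unpacking it, assuming the export.zip filename from the snippet above and an arbitrary model output folder:

```python
import os
import zipfile

ZIP_PATH = "export.zip"  # filename used in the download step above
MODEL_DIR = "model"      # assumed output folder; choose any location you like

# Only attempt extraction once the archive has actually been downloaded.
if os.path.exists(ZIP_PATH):
    os.makedirs(MODEL_DIR, exist_ok=True)
    with zipfile.ZipFile(ZIP_PATH) as archive:
        archive.extractall(MODEL_DIR)
        print("Extracted:", sorted(archive.namelist()))
```

The archive's contents vary by platform and flavor; for an ONNX export, it typically contains the model file plus label metadata.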
Next steps
Integrate your exported model into an application by exploring one of the following articles or samples:
- Use your TensorFlow model with Python
- Use your ONNX model with Windows Machine Learning
- See the sample for using a CoreML model in an iOS application for real-time image classification with Swift.
- See the sample for using a TensorFlow model in an Android application for real-time image classification on Android.
- See the sample for using a CoreML model with Xamarin for real-time image classification in a Xamarin iOS app.
- See the sample for how to use the exported model (VAIDK/OpenVino).