Load a model

Important

Windows Machine Learning requires ONNX models, version 1.2 or higher.

Once you have a trained ONNX model, you'll distribute the .onnx model file(s) with your app. You can include the .onnx file(s) in your APPX package, or, for desktop apps, they can be anywhere on the hard drive that your app can access.

There are several ways to load a model using static methods on the LearningModel class:

  • LearningModel.LoadFromStreamAsync
  • LearningModel.LoadFromStream
  • LearningModel.LoadFromStorageFileAsync
  • LearningModel.LoadFromFilePath

The LoadFromStream* methods allow applications to have more control over where the model comes from. For example, an app could choose to have the model encrypted on disk and decrypt it only in memory prior to calling one of the LoadFromStream* methods. Other options include loading the model stream from a network share or other media.
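As a rough sketch of the decrypt-in-memory scenario described above, the following copies already-decrypted model bytes into an in-memory stream and hands it to LoadFromStreamAsync. The method name and the idea that the app supplies the decrypted bytes are assumptions for illustration; only the LearningModel, InMemoryRandomAccessStream, and RandomAccessStreamReference APIs come from the platform:

private async Task<LearningModel> LoadModelFromBytesAsync(byte[] modelBytes)
{
    // modelBytes is assumed to hold the decrypted .onnx contents;
    // how the app decrypts them is up to the app.
    var stream = new InMemoryRandomAccessStream();
    await stream.WriteAsync(modelBytes.AsBuffer());
    stream.Seek(0);

    // LoadFromStreamAsync takes an IRandomAccessStreamReference.
    var reference = RandomAccessStreamReference.CreateFromStream(stream);
    return await LearningModel.LoadFromStreamAsync(reference);
}

This keeps the decrypted model only in memory; nothing unencrypted is written back to disk.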

Tip

Loading a model can take some time, so take care not to call a Load* method from your UI thread.

The following example shows how you can load a model into your application:

private async Task<LearningModel> LoadModelAsync(string modelPath)
{
    // Load and create the model
    var modelFile = await StorageFile.GetFileFromApplicationUriAsync(
        new Uri(modelPath));

    LearningModel model =
        await LearningModel.LoadFromStorageFileAsync(modelFile);

    return model;
}
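Per the Tip above, call the loader with await rather than blocking the UI thread. A minimal caller sketch (the handler name and the ms-appx URI are illustrative assumptions, not part of the API):

private async void OnLoadModelClicked(object sender, RoutedEventArgs e)
{
    // "ms-appx:///Assets/model.onnx" is an assumed path to a model
    // file packaged in the app's Assets folder.
    LearningModel model = await LoadModelAsync("ms-appx:///Assets/model.onnx");
}

Awaiting the Load* call keeps the UI responsive while the model file is read and parsed.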

See also

Note

Use the following resources for help with Windows ML:

  • To ask or answer technical questions about Windows ML, please use the windows-machine-learning tag on Stack Overflow.
  • To report a bug, please file an issue on our GitHub.