Get started with Windows ML

This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, allowing you to download the latest execution providers compatible with your users' hardware.

If you're not already familiar with the ONNX Runtime, we suggest reading the ONNX Runtime docs. In short, Windows ML provides a copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs).

Prerequisites

  • .NET 8 or greater to use all Windows ML APIs
    • With .NET 6, you can install execution providers using the Microsoft.Windows.AI.MachineLearning APIs, but you cannot use the Microsoft.ML.OnnxRuntime APIs.
  • Targeting a Windows 10-specific TFM like net8.0-windows10.0.19041.0 or greater
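The prerequisites above translate into project settings like the following (a minimal sketch; the exact Windows SDK version in the TFM, and properties such as the output type, may differ for your project):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Windows 10-specific TFM: net8.0-windows10.0.19041.0 or greater -->
    <TargetFramework>net8.0-windows10.0.19041.0</TargetFramework>
    <OutputType>WinExe</OutputType>
  </PropertyGroup>
</Project>
```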

Step 1: Install or update Windows ML

Windows ML is included in Windows App SDK 1.8.1 or greater.

The easiest way to use Windows ML is to install the Microsoft.WindowsAppSDK.ML NuGet package, which uses self-contained deployment by default. See Deploy your app to learn more about deployment options.

dotnet add package Microsoft.WindowsAppSDK.ML

Step 2: Download and register EPs

The simplest way to get started is to let Windows ML automatically discover, download, and register the latest version of all compatible execution providers. Execution providers must be registered with the ONNX Runtime inside Windows ML before you can use them, and any that aren't already on the device must be downloaded first. Calling EnsureAndRegisterCertifiedAsync() does both in one step.

using Microsoft.ML.OnnxRuntime;
using Microsoft.Windows.AI.MachineLearning;

// First we create a new instance of EnvironmentCreationOptions
EnvironmentCreationOptions envOptions = new()
{
    logId = "WinMLDemo", // Use an ID of your own choice
    logLevel = OrtLoggingLevel.ORT_LOGGING_LEVEL_ERROR
};

// And then use that to create the ORT environment
using var ortEnv = OrtEnv.CreateInstanceWithOptions(ref envOptions);

// Get the default ExecutionProviderCatalog
var catalog = ExecutionProviderCatalog.GetDefault();

// Ensure and register all compatible execution providers with ONNX Runtime
// This downloads any necessary components and registers them
await catalog.EnsureAndRegisterCertifiedAsync();

Tip

You can sometimes get better performance in ONNX Runtime by enabling thread spinning. See Thread spinning behavior in Windows ML for more info.

Tip

In production applications, wrap the EnsureAndRegisterCertifiedAsync() call in a try-catch block to handle potential network or download failures gracefully.
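Building on the tip above, error handling might look like the following sketch. It assumes the same APIs as the sample in Step 2; catching the base Exception type and the fallback behavior are illustrative assumptions, not a prescribed pattern:

```csharp
using System;
using Microsoft.Windows.AI.MachineLearning;

// Sketch only: wrap EP download/registration so a network or
// download failure doesn't crash the app.
try
{
    var catalog = ExecutionProviderCatalog.GetDefault();
    await catalog.EnsureAndRegisterCertifiedAsync();
}
catch (Exception ex)
{
    // Registration failed; the app can still proceed with the
    // CPU execution provider, which ships with the ONNX Runtime.
    Console.Error.WriteLine($"EP registration failed: {ex.Message}");
}
```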

Next steps

After registering execution providers, you're ready to use the ONNX Runtime APIs within Windows ML! You will want to...

  1. Select execution providers - Tell the runtime which execution providers you want to use
  2. Get your models - Use Model Catalog to dynamically download models, or include them locally
  3. Run model inference - Compile, load, and run inference on your model

See also