Get started with Windows ML

This topic walks you through the minimal path to running an ONNX model with Windows ML on CPU, then points you to hardware acceleration when you're ready.

To learn more about Windows ML, see What is Windows ML.

Prerequisites

  • A version of Windows supported by the Windows App SDK
  • Architecture: x64 or ARM64
  • The language-specific prerequisites listed below
  • .NET 8 or greater, to use all Windows ML APIs
    • With .NET 6, you can install execution providers using the Microsoft.Windows.AI.MachineLearning APIs, but you can't use the Microsoft.ML.OnnxRuntime APIs.
  • A Windows 10-specific target framework moniker (TFM) such as net8.0-windows10.0.19041.0 or greater

Step 1: Find a model

Before writing any code, you need an ONNX model. See Find or train models for guidance on obtaining ONNX models.

Step 2: Install Windows ML

See Install and deploy Windows ML for full instructions across all supported languages and deployment modes (framework-dependent and self-contained).
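For a C# project using framework-dependent deployment, installation typically comes down to adding a single NuGet package. The package name below is an assumption based on current Windows App SDK conventions; confirm the exact name and version on the install page:

```shell
# Add the Windows ML package to your .NET project
# (package name assumed; verify against Install and deploy Windows ML)
dotnet add package Microsoft.WindowsAppSDK.ML
```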

Step 3: Add namespaces / headers

After you've installed Windows ML in your project, see Use ONNX APIs for guidance about which namespaces / headers to use.
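As a sketch, a C# project that uses both the execution provider catalog and the ONNX Runtime inference APIs generally needs directives along these lines (assuming .NET 8, where all Windows ML APIs are available):

```csharp
// Execution provider management (Windows ML)
using Microsoft.Windows.AI.MachineLearning;

// Inference sessions, tensors, and related ONNX Runtime types
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
```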

Step 4: Run an ONNX model

With Windows ML installed, you can run ONNX models on the CPU with no additional setup. See Run ONNX models for guidance.

At this point your app has a working inference path on CPU.
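To make the CPU path concrete, here's a minimal C# sketch using the Microsoft.ML.OnnxRuntime APIs. The model path and the 1×3×224×224 input shape are placeholder assumptions for an image model; substitute your own model's actual inputs:

```csharp
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load the ONNX model (path is a placeholder assumption)
using var session = new InferenceSession(@"C:\models\model.onnx");

// Build an input tensor matching the model's first declared input
// (1 x 3 x 224 x 224 is an assumed image-model shape)
string inputName = session.InputMetadata.Keys.First();
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

// Run inference; with no execution provider registered, this runs on CPU
using var results = session.Run(
    new[] { NamedOnnxValue.CreateFromTensor(inputName, input) });
float[] output = results.First().AsEnumerable<float>().ToArray();
```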

Step 5: Optionally accelerate on NPU or GPU

Want faster inference on NPU, GPU, or even CPU? See Accelerate AI models to add hardware-tuned execution providers for your target hardware.
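As a sketch of what that looks like in code: before creating your inference session, you can ask Windows ML to download and register execution providers suited to the local hardware. The type and method names below follow the Microsoft.Windows.AI.MachineLearning catalog API; treat them as assumptions and confirm against the Accelerate AI models page:

```csharp
using Microsoft.Windows.AI.MachineLearning;

// Get the catalog of execution providers Windows ML can manage
// (API names assumed; verify on the Accelerate AI models page)
var catalog = ExecutionProviderCatalog.GetDefault();

// Download (if needed) and register certified providers for this
// device's NPU/GPU/CPU; call before creating an InferenceSession
await catalog.EnsureAndRegisterCertifiedAsync();
```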

See also