This topic walks you through the minimal path to running an ONNX model with Windows ML on CPU, then points you to hardware acceleration when you're ready.
To learn more about Windows ML, see What is Windows ML.
Prerequisites
- A version of Windows supported by the Windows App SDK
- Architecture: x64 or ARM64
- Language-specific prerequisites:
- .NET 8 or greater to use all Windows ML APIs
- With .NET 6, you can install execution providers using the Microsoft.Windows.AI.MachineLearning APIs, but you can't use the Microsoft.ML.OnnxRuntime APIs.
- Targeting a Windows 10-specific TFM, such as net8.0-windows10.0.19041.0 or greater
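As a sketch, the TFM requirement above might look like this in a C# project file (the SDK version and platform list here are examples; use the values appropriate for your app):

```xml
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <!-- Windows 10-specific TFM, as required above -->
  <TargetFramework>net8.0-windows10.0.19041.0</TargetFramework>
  <!-- Windows ML supports x64 and ARM64 -->
  <Platforms>x64;ARM64</Platforms>
</PropertyGroup>
```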
Step 1: Find a model
Before writing any code, you need an ONNX model. See Find or train models for guidance on obtaining ONNX models.
Step 2: Install Windows ML
See Install and deploy Windows ML for full instructions across all supported languages and deployment modes (framework-dependent and self-contained).
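For example, for a C# project, installation typically amounts to adding the NuGet package named in the prerequisites (confirm the current package name and version in the install instructions):

```
dotnet add package Microsoft.Windows.AI.MachineLearning
```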
Step 3: Add namespaces / headers
After you've installed Windows ML in your project, see Use ONNX APIs for guidance about which namespaces / headers to use.
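As an illustration, a C# file that uses both sets of APIs mentioned in the prerequisites would typically bring in namespaces like these (names assumed from the package names above; see Use ONNX APIs for the authoritative list):

```csharp
// Windows ML APIs (execution provider catalog and related types)
using Microsoft.Windows.AI.MachineLearning;

// ONNX Runtime APIs (InferenceSession, tensors, session options)
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
```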
Step 4: Run an ONNX model
With Windows ML installed, you can run ONNX models on the CPU with no additional setup. See Run ONNX models for guidance.
At this point your app has a working inference path on CPU.
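A minimal CPU-only sketch in C# might look like the following. The model path, the input name "input", and the 1x3x224x224 shape are placeholders; check your own model's input metadata (for example, via session.InputMetadata):

```csharp
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load the ONNX model. With no execution providers registered,
// ONNX Runtime runs the model on the CPU provider.
using var session = new InferenceSession(@"C:\models\model.onnx");

// Build an input tensor. The name and shape here are hypothetical;
// inspect session.InputMetadata for your model's actual inputs.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", input)
};

// Run inference and read the first output as a float tensor.
using var results = session.Run(inputs);
var output = results[0].AsTensor<float>();
```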
Step 5: Optionally accelerate on NPU or GPU
Want faster inference on NPU, GPU, or even CPU? See Accelerate AI models to add hardware-tuned execution providers for your target hardware.
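To sketch what acceleration can look like in C#: the catalog and policy calls below are assumptions based on the Microsoft.Windows.AI.MachineLearning and ONNX Runtime APIs, so treat Accelerate AI models as the authoritative reference:

```csharp
using Microsoft.ML.OnnxRuntime;
using Microsoft.Windows.AI.MachineLearning;

// Ensure the execution providers available for this device are
// downloaded and registered with ONNX Runtime.
// (API names assumed; confirm in the Windows ML reference.)
var catalog = ExecutionProviderCatalog.GetDefault();
await catalog.EnsureAndRegisterAllAsync();

// Let ONNX Runtime choose among the registered providers
// (NPU, GPU, or CPU) according to a selection policy, rather
// than hard-coding a specific provider.
var options = new SessionOptions();
options.SetEpSelectionPolicy(ExecutionProviderDevicePolicy.MAX_PERFORMANCE);
using var session = new InferenceSession(@"C:\models\model.onnx", options);
```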
See also
- Accelerate AI models - Add NPU, GPU, or CPU execution providers
- Run ONNX models - Info about inferencing ONNX models
- Install and deploy Windows ML - Options for deploying an app using Windows ML
- Tutorial - Full end-to-end tutorial using Windows ML with the ResNet-50 model
- Code samples - Our code samples using Windows ML