This guide shows you how to create a C# native addon that uses Windows Machine Learning (WinML) in your Electron app. WinML allows you to run machine learning models (ONNX format) locally on Windows devices for tasks like image classification, object detection, and more.
Prerequisites
- Completed the development environment setup
- Windows 11 or Windows 10 (version 1809 or later)
Note
WinML runs on any Windows 10 (1809+) or Windows 11 device. For best performance, devices with GPUs or NPUs are recommended, but the API works on CPU as well.
Step 1: Create a C# native addon
npx winapp node create-addon --template cs --name winMlAddon
This creates a winMlAddon/ folder with a C# project configured with Windows SDK and Windows App SDK references.
Build the addon:
npm run build-winMlAddon
Step 2: Download the SqueezeNet model
- Install the AI Dev Gallery
- Navigate to the Classify Image sample
- Download the SqueezeNet 1.1 model
- Copy the .onnx file to a models/ folder in your project root
Note
The model can also be downloaded from the ONNX Model Zoo GitHub repo.
Step 3: Add required NuGet packages
Update Directory.Packages.props in your project root:
<PackageVersion Include="Microsoft.ML.OnnxRuntime.Extensions" Version="0.14.0" />
<PackageVersion Include="System.Drawing.Common" Version="9.0.9" />
Update winMlAddon/winMlAddon.csproj to add the package references:
<PackageReference Include="Microsoft.ML.OnnxRuntime.Extensions" />
<PackageReference Include="System.Drawing.Common" />
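For orientation, these references belong in an ItemGroup alongside any the template already generated (the exact placement shown here is an assumption about the generated project file):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.ML.OnnxRuntime.Extensions" />
  <PackageReference Include="System.Drawing.Common" />
</ItemGroup>
```

No Version attribute appears here because versions resolve centrally from Directory.Packages.props.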
Step 4: Add the sample code
The AI Dev Gallery provides the complete implementation for image classification with SqueezeNet. You can find the adapted code in the electron-winml sample.
Copy the winMlAddon/ folder from the sample, or manually update winMlAddon/addon.cs with the sample code.
Key implementation details
Project root path: The addon requires the JavaScript code to pass the project root path so it can locate the ONNX model and native dependencies.
Preloading native dependencies: The addon includes a method to load required DLLs that works for both development and production scenarios.
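The asar path fix-up this requires can be expressed as a small pure helper (a sketch of the same substitution used in the Step 6 test code; nothing here is addon-specific):

```javascript
// Map a path inside the packaged archive (app.asar) to its unpacked
// counterpart (app.asar.unpacked), where native .node/.dll/.onnx files
// live at runtime. In development the path contains no 'app.asar'
// segment and is returned unchanged.
function toUnpackedPath(p) {
  return p.includes('app.asar') ? p.replace('app.asar', 'app.asar.unpacked') : p;
}
```

Keeping this in one place makes it easy to apply the same fix-up to the project root, the model path, and any other native file the addon needs.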
Electron Forge configuration: Configure your packager to unpack native files:
module.exports = {
  packagerConfig: {
    asar: {
      unpack: "**/*.{dll,exe,node,onnx}"
    },
    ignore: [
      /^\/\.winapp\//,
      /\.msix$/,
      /^\/winMlAddon\/(?!dist).+/
    ]
  },
};
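To sanity-check what those ignore rules exclude, they can be modeled as plain predicates over app-relative paths (a standalone sketch that mirrors the intent; electron-packager applies the same regular expressions internally):

```javascript
// App-relative paths start with '/'. A path is excluded if any rule matches.
const ignoreRules = [
  /^\/\.winapp\//,            // build tooling metadata
  /\.msix$/,                  // packaged installers
  /^\/winMlAddon\/(?!dist).+/ // addon sources, but keep the built dist/ output
];

const isIgnored = (relPath) => ignoreRules.some((re) => re.test(relPath));
```

For example, /winMlAddon/addon.cs is excluded from the package while /winMlAddon/dist/winMlAddon.node is kept.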
Step 5: Build the addon
npm run build-winMlAddon
Step 6: Test the addon
Open src/index.js and load the addon:
const winMlAddon = require('../winMlAddon/dist/winMlAddon.node');
Add a test function:
const testWinML = async () => {
  try {
    // Resolve the project root; in a packaged app, native files live
    // under app.asar.unpacked rather than inside the archive.
    let projectRoot = path.join(__dirname, '..');
    if (projectRoot.includes('app.asar')) {
      projectRoot = projectRoot.replace('app.asar', 'app.asar.unpacked');
    }

    const addon = await winMlAddon.Addon.createAsync(projectRoot);
    console.log('Model loaded successfully!');

    const imagePath = path.join(projectRoot, 'test-images', 'sample.jpg');
    const predictions = await addon.classifyImage(imagePath);

    console.log('Top predictions:');
    predictions.slice(0, 5).forEach((pred, i) => {
      console.log(`${i + 1}. ${pred.label}: ${(pred.confidence * 100).toFixed(2)}%`);
    });
  } catch (error) {
    console.error('Error testing WinML:', error.message);
  }
};
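If the addon returns predictions unsorted (the { label, confidence } shape used above is what the sample is assumed to return), the top-N display logic can be pulled into a reusable, testable helper:

```javascript
// Sort a copy of the predictions by descending confidence and render
// the top n entries as '1. label: 93.21%' strings.
function formatTopPredictions(predictions, n = 5) {
  return predictions
    .slice()
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, n)
    .map((p, i) => `${i + 1}. ${p.label}: ${(p.confidence * 100).toFixed(2)}%`);
}
```

Working on a copy (via slice()) keeps the addon's original result array untouched for any later use.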
Prepare test images by creating a test-images/ folder with sample images, then run:
npm start
Step 7: Update debug identity
npx winapp node add-electron-debug-identity
Note
There is a known Windows bug with sparse packaging Electron applications that can cause crashes or blank windows. See the setup guide for the workaround.
Next steps
- Creating a Phi Silica addon - Use on-device AI APIs
- Packaging for distribution - Create a signed MSIX package