OnnxScoringEstimator Class

Definition

IEstimator<TTransformer> for scoring ONNX models in the ML.NET framework.

C#
public sealed class OnnxScoringEstimator : Microsoft.ML.Data.TrivialEstimator<Microsoft.ML.Transforms.Onnx.OnnxTransformer>

F#
type OnnxScoringEstimator = class
    inherit TrivialEstimator<OnnxTransformer>

VB
Public NotInheritable Class OnnxScoringEstimator
Inherits TrivialEstimator(Of OnnxTransformer)
Inheritance
Object → TrivialEstimator<OnnxTransformer> → OnnxScoringEstimator

Remarks

Estimator Characteristics

Does this estimator need to look at the data to train its parameters? No
Input column data type: Known-sized vector of Single or Double types
Output column data type: As specified by the ONNX model
Required NuGet in addition to Microsoft.ML: Microsoft.ML.OnnxTransformer (always), plus either Microsoft.ML.OnnxRuntime 1.6.0 (for CPU processing) or Microsoft.ML.OnnxRuntime.Gpu 1.6.0 (for GPU processing, if a GPU is available)
Exportable to ONNX: No

To create this estimator, use the following API: ApplyOnnxModel.
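
For example, a minimal creation sketch. The model path, tensor names, and vector size below are placeholders, not values from this API; substitute the file path and input/output names declared by your ONNX model.

using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext();

// "model.onnx", "input", and "output" are placeholders; they must match the
// tensor names declared inside your ONNX model.
var pipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnName: "output",
    inputColumnName: "input",
    modelFile: "model.onnx");

// Known-sized vector input matching the model's input tensor; the size of 3
// is illustrative only.
class ModelInput
{
    [ColumnName("input"), VectorType(3)]
    public float[] Features { get; set; }
}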

Supports inferencing of models in ONNX 1.6 format (opset 11), using the Microsoft.ML.OnnxRuntime library. Models are scored on the CPU if the project references Microsoft.ML.OnnxRuntime, and on the GPU if the project references Microsoft.ML.OnnxRuntime.Gpu. Every project that uses the OnnxScoringEstimator must reference one of these two packages.

To run on a GPU, reference the NuGet package Microsoft.ML.OnnxRuntime.Gpu instead of Microsoft.ML.OnnxRuntime (which is for CPU processing). Microsoft.ML.OnnxRuntime.Gpu requires a CUDA-supported GPU, the CUDA 10.2 Toolkit, and cuDNN 8.0.3 (as indicated in the OnnxRuntime documentation). When creating the estimator through ApplyOnnxModel, set the parameter gpuDeviceId to a valid non-negative integer; typical device IDs are 0 or 1. If the GPU device isn't found and fallbackToCpu = true, the estimator runs on the CPU instead; if fallbackToCpu = false, it throws an exception.
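
A sketch of requesting GPU scoring, continuing the placeholder pipeline above; this assumes the project references Microsoft.ML.OnnxRuntime.Gpu rather than Microsoft.ML.OnnxRuntime.

// Placeholder names as in the creation sketch above.
var gpuPipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnName: "output",
    inputColumnName: "input",
    modelFile: "model.onnx",
    gpuDeviceId: 0,        // typical device IDs are 0 or 1
    fallbackToCpu: true);  // score on the CPU if no GPU device is found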

The inputs and outputs of the ONNX model must be of Tensor type. Sequence and Map types are not yet supported.

Internally, OnnxTransformer (the return value of OnnxScoringEstimator.Fit()) holds a reference to an inference session which points to unmanaged memory owned by OnnxRuntime.dll. Whenever there is a call to ApplyOnnxModel in a pipeline, it is advised to cast the return value of the Fit() call to IDisposable and call Dispose() to ensure that there are no memory leaks.
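
A sketch of that disposal pattern, reusing the placeholder pipeline and ModelInput type from the creation sketch above:

// Load a single illustrative row; real data would come from your source.
var dataView = mlContext.Data.LoadFromEnumerable(
    new[] { new ModelInput { Features = new float[] { 1f, 2f, 3f } } });

// Fit returns an OnnxTransformer holding an unmanaged inference session.
var transformer = pipeline.Fit(dataView);
var scored = transformer.Transform(dataView);

// ... consume 'scored' ...

// Release the unmanaged OnnxRuntime memory held by the inference session.
(transformer as IDisposable)?.Dispose();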

OnnxRuntime works on Windows, macOS, and Ubuntu 16.04 (Linux 64-bit) platforms. Visit ONNX Models to see a list of readily available models to get started with. Refer to ONNX for more information.

Methods

Fit(IDataView) (Inherited from TrivialEstimator<TTransformer>)
GetOutputSchema(SchemaShape)

Returns the SchemaShape of the schema which will be produced by the transformer. Used for schema propagation and verification in a pipeline.
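
For example, a sketch of checking the propagated schema before any Fit call, reusing the placeholder pipeline and data above:

// Validate ahead of time what schema the fitted transformer would produce.
var inputShape = SchemaShape.Create(dataView.Schema);
var outputShape = pipeline.GetOutputSchema(inputShape);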

Extension Methods

AppendCacheCheckpoint<TTrans>(IEstimator<TTrans>, IHostEnvironment)

Append a 'caching checkpoint' to the estimator chain. This will ensure that the downstream estimators will be trained against cached data. It is helpful to have a caching checkpoint before trainers that take multiple data passes.
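
A sketch of where such a checkpoint typically goes; the trainer and the "Label"/"output" column names here are illustrative assumptions, not part of this API.

// Cache the ONNX-scored rows so a downstream multi-pass trainer does not
// re-run the ONNX model on every pass over the data.
var trainingPipeline = pipeline
    .AppendCacheCheckpoint(mlContext)
    .Append(mlContext.Regression.Trainers.Sdca(
        labelColumnName: "Label",
        featureColumnName: "output"));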

WithOnFitDelegate<TTransformer>(IEstimator<TTransformer>, Action<TTransformer>)

Given an estimator, returns a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object rather than just a general ITransformer. At the same time, IEstimator<TTransformer> instances are often formed into pipelines with many objects, so the estimator whose transformer we want may be buried somewhere inside an EstimatorChain<TLastTransformer>. For that scenario, this method attaches a delegate that is called once Fit is called, as sketched below.
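
A sketch of capturing the fitted OnnxTransformer out of a chain, reusing the placeholder pipeline and data above (OnnxTransformer lives in Microsoft.ML.Transforms.Onnx):

OnnxTransformer fitted = null;

// The delegate fires when Fit is called, handing back the typed transformer
// even if this estimator is later buried inside an EstimatorChain.
var withDelegate = pipeline.WithOnFitDelegate(
    (OnnxTransformer t) => fitted = t);

withDelegate.Fit(dataView);
// 'fitted' now references the OnnxTransformer produced by the Fit call.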

Applies to