OnlineGradientDescentTrainer Class
Definition
The IEstimator<TTransformer> for training a linear regression model using Online Gradient Descent (OGD) to estimate the model parameters.
```csharp
public sealed class OnlineGradientDescentTrainer : Microsoft.ML.Trainers.AveragedLinearTrainer<Microsoft.ML.Data.RegressionPredictionTransformer<Microsoft.ML.Trainers.LinearRegressionModelParameters>,Microsoft.ML.Trainers.LinearRegressionModelParameters>
```

```fsharp
type OnlineGradientDescentTrainer = class
    inherit AveragedLinearTrainer<RegressionPredictionTransformer<LinearRegressionModelParameters>, LinearRegressionModelParameters>
```

```vb
Public NotInheritable Class OnlineGradientDescentTrainer
Inherits AveragedLinearTrainer(Of RegressionPredictionTransformer(Of LinearRegressionModelParameters), LinearRegressionModelParameters)
```
- Inheritance: Object → TrainerEstimatorBase<RegressionPredictionTransformer<LinearRegressionModelParameters>,LinearRegressionModelParameters> → OnlineLinearTrainer<RegressionPredictionTransformer<LinearRegressionModelParameters>,LinearRegressionModelParameters> → AveragedLinearTrainer<RegressionPredictionTransformer<LinearRegressionModelParameters>,LinearRegressionModelParameters> → OnlineGradientDescentTrainer
Remarks
To create this trainer, use OnlineGradientDescent or OnlineGradientDescent(Options).
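For instance, a minimal sketch of creating the trainer through both entry points might look like the following; the "Label" and "Features" column names are the conventional defaults, and the option values are illustrative rather than recommendations:

```csharp
using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext(seed: 0);

// Simple overload: relies on the default "Label" and "Features" column names.
var trainer = mlContext.Regression.Trainers.OnlineGradientDescent(
    labelColumnName: "Label",
    featureColumnName: "Features");

// Advanced overload: configure the trainer through OnlineGradientDescentTrainer.Options.
var trainerWithOptions = mlContext.Regression.Trainers.OnlineGradientDescent(
    new OnlineGradientDescentTrainer.Options
    {
        LabelColumnName = "Label",
        FeatureColumnName = "Features",
        LearningRate = 0.1f,
        NumberOfIterations = 10
    });
```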
Input and Output Columns
The input label column data must be Single. The input features column data must be a known-sized vector of Single.
This trainer outputs the following columns:
Output Column Name | Column Type | Description |
---|---|---|
Score | Single | The unbounded score that was predicted by the model. |
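As an illustration, input and output rows satisfying these column requirements can be modeled with plain classes; the type and property names below are hypothetical:

```csharp
using Microsoft.ML.Data;

public class InputRow
{
    // The label column must be of type Single (float).
    public float Label { get; set; }

    // The features column must be a known-sized vector of Single.
    [VectorType(3)]
    public float[] Features { get; set; }
}

public class OutputRow
{
    // The trainer writes the unbounded prediction to the "Score" column.
    public float Score { get; set; }
}
```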
Trainer Characteristics
Characteristic | Value |
---|---|
Machine learning task | Regression |
Is normalization required? | Yes |
Is caching required? | No |
Required NuGet in addition to Microsoft.ML | None |
Exportable to ONNX | Yes |
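Because normalization is required, a typical pipeline normalizes the feature column before this trainer. The sketch below assumes "Label" and "Features" columns and uses min-max normalization as one possible choice:

```csharp
using Microsoft.ML;

var mlContext = new MLContext();

// Normalize the features, then append the OGD regression trainer.
var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
    .Append(mlContext.Regression.Trainers.OnlineGradientDescent(
        labelColumnName: "Label",
        featureColumnName: "Features"));
```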
Training Algorithm Details
Stochastic gradient descent uses a simple yet efficient iterative technique to fit model coefficients using error gradients for convex loss functions. Online Gradient Descent (OGD) implements the standard (non-batch) stochastic gradient descent, with a choice of loss functions, and an option to update the weight vector using the average of the vectors seen over time (averaged argument is set to True by default).
Check the See Also section for links to usage examples.
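To make the update concrete, the sketch below illustrates averaged online gradient descent for squared loss on in-memory arrays. It is a simplified illustration of the idea, not the ML.NET implementation, which also supports other loss functions, learning-rate decay, and regularization:

```csharp
// Illustrative averaged OGD for squared loss: after each example, take a
// gradient step and accumulate the weight vector; the final model is the
// average of all weight vectors seen during training.
static float[] AveragedOgd(float[][] x, float[] y, float learningRate, int iterations)
{
    int d = x[0].Length;
    var w = new float[d];       // current weight vector
    var wSum = new float[d];    // running sum of weight vectors
    long updates = 0;

    for (int iter = 0; iter < iterations; iter++)
    {
        for (int i = 0; i < x.Length; i++)
        {
            // Prediction and error under squared loss.
            float prediction = 0f;
            for (int j = 0; j < d; j++) prediction += w[j] * x[i][j];
            float error = prediction - y[i];

            // Gradient step: w <- w - learningRate * error * x.
            for (int j = 0; j < d; j++) w[j] -= learningRate * error * x[i][j];

            // Accumulate for averaging.
            for (int j = 0; j < d; j++) wSum[j] += w[j];
            updates++;
        }
    }

    // Averaged weights.
    var wAvg = new float[d];
    for (int j = 0; j < d; j++) wAvg[j] = wSum[j] / updates;
    return wAvg;
}
```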
Fields
Field | Description |
---|---|
FeatureColumn | The feature column that the trainer expects. (Inherited from TrainerEstimatorBase<TTransformer,TModel>) |
LabelColumn | The label column that the trainer expects. Can be null, which indicates that label is not used for training. (Inherited from TrainerEstimatorBase<TTransformer,TModel>) |
WeightColumn | The weight column that the trainer expects. Can be null, which indicates that weight is not used for training. (Inherited from TrainerEstimatorBase<TTransformer,TModel>) |
Properties
Property | Description |
---|---|
Info | (Inherited from OnlineLinearTrainer<TTransformer,TModel>) |
Methods
Method | Description |
---|---|
Fit(IDataView) | Trains and returns an ITransformer. (Inherited from TrainerEstimatorBase<TTransformer,TModel>) |
Fit(IDataView, LinearModelParameters) | Continues the training of an OnlineLinearTrainer<TTransformer,TModel> using an already trained model. |
GetOutputSchema(SchemaShape) | (Inherited from TrainerEstimatorBase<TTransformer,TModel>) |
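For example, a minimal end-to-end sketch of Fit(IDataView) followed by scoring, with hypothetical in-memory data and the default column names, could look like this:

```csharp
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

var mlContext = new MLContext(seed: 0);

// Hypothetical training data: Label is a noiseless linear function of the features.
var samples = Enumerable.Range(0, 100)
    .Select(i => new DataPoint { Label = 3f * i + 2f, Features = new float[] { i, 1f } })
    .ToList();
var data = mlContext.Data.LoadFromEnumerable(samples);

var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
    .Append(mlContext.Regression.Trainers.OnlineGradientDescent());

// Fit(IDataView) trains the pipeline and returns a transformer.
var model = pipeline.Fit(data);

// Transform produces the "Score" column described above.
var scored = model.Transform(data);
var scores = scored.GetColumn<float>("Score").ToArray();

public class DataPoint
{
    public float Label { get; set; }

    [VectorType(2)]
    public float[] Features { get; set; }
}
```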
Extension Methods
Extension Method | Description |
---|---|
AppendCacheCheckpoint<TTrans>(IEstimator<TTrans>, IHostEnvironment) | Append a 'caching checkpoint' to the estimator chain. This will ensure that the downstream estimators will be trained against cached data. It is helpful to have a caching checkpoint before trainers that take multiple data passes. |
WithOnFitDelegate<TTransformer>(IEstimator<TTransformer>, Action<TTransformer>) | Given an estimator, returns a wrapping object that will call a delegate once Fit(IDataView) is called. It is often important for an estimator to return information about what was fit, which is why the Fit(IDataView) method returns a specifically typed object rather than just a general ITransformer. At the same time, IEstimator<TTransformer> objects are often formed into pipelines via EstimatorChain<TLastTransformer>, where the estimator whose transformer we want may be buried somewhere in the chain. For that scenario, this method lets us attach a delegate that will be called once Fit is called. |
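The sketch below shows how both extension methods might be combined: a caching checkpoint is placed before the multi-pass trainer, and WithOnFitDelegate captures the fitted prediction transformer so its LinearRegressionModelParameters can be inspected after training. Column names are the defaults and the variable names are hypothetical.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

RegressionPredictionTransformer<LinearRegressionModelParameters> fittedTrainer = null;

var pipeline = mlContext.Transforms.NormalizeMinMax("Features")
    // Cache the data so the multi-pass trainer downstream reads from memory.
    .AppendCacheCheckpoint(mlContext)
    .Append(mlContext.Regression.Trainers.OnlineGradientDescent()
        // Capture the trained transformer when Fit is called on the pipeline.
        .WithOnFitDelegate(transformer => fittedTrainer = transformer));

// After calling pipeline.Fit(trainingData), fittedTrainer.Model exposes the
// learned LinearRegressionModelParameters (weights and bias).
```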