ImageModelDistributionSettings interface

Distribution expressions to sweep over values of model settings. Some examples are:

```
ModelName = "choice('seresnext', 'resnest50')";
LearningRate = "uniform(0.001, 0.01)";
LayersToFreeze = "choice(0, 2)";
```
All distributions can be specified as distribution_name(min, max) or choice(val1, val2, ..., valn), where the distribution name can be uniform, quniform, loguniform, and so on.
For more details on how to compose distribution expressions, please check the documentation:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters
For more information on the available settings, please visit the official documentation:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models.
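
As a concrete illustration, a search space can be built by assigning distribution expression strings to the interface's properties. The following is a minimal sketch, assuming the interface is imported from the @azure/arm-machinelearning package (verify the package name and import path against your SDK version); the property values are example expressions, not recommendations.

```typescript
import type { ImageModelDistributionSettings } from "@azure/arm-machinelearning";

// Hypothetical search space: every property holds a distribution expression
// string that the sweep samples from.
const searchSpace: ImageModelDistributionSettings = {
  modelName: "choice('seresnext', 'resnest50')",
  learningRate: "uniform(0.001, 0.01)",
  layersToFreeze: "choice(0, 2)",
  optimizer: "choice('sgd', 'adam', 'adamw')",
};
```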

Properties

amsGradient

Enable AMSGrad when optimizer is 'adam' or 'adamw'.

augmentations

Settings for using Augmentations.

beta1

Value of 'beta1' when optimizer is 'adam' or 'adamw'. Must be a float in the range [0, 1].

beta2

Value of 'beta2' when optimizer is 'adam' or 'adamw'. Must be a float in the range [0, 1].

distributed

Whether to use distributed training.

earlyStopping

Enable early stopping logic during training.

earlyStoppingDelay

Minimum number of epochs or validation evaluations to wait before primary metric improvement is tracked for early stopping. Must be a positive integer.

earlyStoppingPatience

Minimum number of epochs or validation evaluations with no primary metric improvement before the run is stopped. Must be a positive integer.

enableOnnxNormalization

Enable normalization when exporting the ONNX model.

evaluationFrequency

Frequency to evaluate validation dataset to get metric scores. Must be a positive integer.

gradientAccumulationStep

Gradient accumulation means running a configured number of "GradAccumulationStep" steps without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. Must be a positive integer.

layersToFreeze

Number of layers to freeze for the model. Must be a positive integer. For instance, passing 2 as value for 'seresnext' means freezing layer0 and layer1. For a full list of models supported and details on layer freeze, please see: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models.

learningRate

Initial learning rate. Must be a float in the range [0, 1].

learningRateScheduler

Type of learning rate scheduler. Must be 'warmup_cosine' or 'step'.

modelName

Name of the model to use for training. For more information on the available models please visit the official documentation: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models.

momentum

Value of momentum when optimizer is 'sgd'. Must be a float in the range [0, 1].

nesterov

Enable Nesterov momentum when optimizer is 'sgd'.

numberOfEpochs

Number of training epochs. Must be a positive integer.

numberOfWorkers

Number of data loader workers. Must be a non-negative integer.

optimizer

Type of optimizer. Must be 'sgd', 'adam', or 'adamw'.

randomSeed

Random seed to use when deterministic training is enabled.

stepLRGamma

Value of gamma when learning rate scheduler is 'step'. Must be a float in the range [0, 1].

stepLRStepSize

Value of step size when learning rate scheduler is 'step'. Must be a positive integer.

trainingBatchSize

Training batch size. Must be a positive integer.

validationBatchSize

Validation batch size. Must be a positive integer.

warmupCosineLRCycles

Value of cosine cycle when learning rate scheduler is 'warmup_cosine'. Must be a float in the range [0, 1].

warmupCosineLRWarmupEpochs

Value of warmup epochs when learning rate scheduler is 'warmup_cosine'. Must be a positive integer.

weightDecay

Value of weight decay when optimizer is 'sgd', 'adam', or 'adamw'. Must be a float in the range [0, 1].

Property Details

amsGradient

Enable AMSGrad when optimizer is 'adam' or 'adamw'.

amsGradient?: string

Property Value

string

augmentations

Settings for using Augmentations.

augmentations?: string

Property Value

string

beta1

Value of 'beta1' when optimizer is 'adam' or 'adamw'. Must be a float in the range [0, 1].

beta1?: string

Property Value

string

beta2

Value of 'beta2' when optimizer is 'adam' or 'adamw'. Must be a float in the range [0, 1].

beta2?: string

Property Value

string

distributed

Whether to use distributed training.

distributed?: string

Property Value

string

earlyStopping

Enable early stopping logic during training.

earlyStopping?: string

Property Value

string

earlyStoppingDelay

Minimum number of epochs or validation evaluations to wait before primary metric improvement is tracked for early stopping. Must be a positive integer.

earlyStoppingDelay?: string

Property Value

string

earlyStoppingPatience

Minimum number of epochs or validation evaluations with no primary metric improvement before the run is stopped. Must be a positive integer.

earlyStoppingPatience?: string

Property Value

string
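
Taken together, earlyStopping, earlyStoppingDelay, and earlyStoppingPatience control the stopping logic: evaluations with no primary-metric improvement only start counting after earlyStoppingDelay, and the run stops once earlyStoppingPatience such evaluations accumulate. A minimal sketch of sweeping the delay and patience values (hypothetical values, same assumed import as the example above):

```typescript
// Hypothetical early-stopping sweep; values are examples, not recommendations.
const earlyStoppingSweep: ImageModelDistributionSettings = {
  earlyStoppingDelay: "choice(2, 5)",    // evaluations to wait before improvement is tracked
  earlyStoppingPatience: "choice(3, 5)", // evaluations with no improvement before stopping
};
```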

enableOnnxNormalization

Enable normalization when exporting the ONNX model.

enableOnnxNormalization?: string

Property Value

string

evaluationFrequency

Frequency to evaluate validation dataset to get metric scores. Must be a positive integer.

evaluationFrequency?: string

Property Value

string

gradientAccumulationStep

Gradient accumulation means running a configured number of "GradAccumulationStep" steps without updating the model weights while accumulating the gradients of those steps, and then using the accumulated gradients to compute the weight updates. Must be a positive integer.

gradientAccumulationStep?: string

Property Value

string
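
In practice, the effective batch size per weight update is trainingBatchSize multiplied by gradientAccumulationStep, which is a common way to emulate larger batches within GPU memory limits. A minimal sketch of sweeping it alongside the batch size (hypothetical values, same assumed import as above):

```typescript
// Hypothetical sweep: effective batch size = trainingBatchSize * gradientAccumulationStep.
const accumulationSweep: ImageModelDistributionSettings = {
  trainingBatchSize: "choice(8, 16)",
  gradientAccumulationStep: "choice(1, 2, 4)",
};
```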

layersToFreeze

Number of layers to freeze for the model. Must be a positive integer. For instance, passing 2 as value for 'seresnext' means freezing layer0 and layer1. For a full list of models supported and details on layer freeze, please see: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models.

layersToFreeze?: string

Property Value

string

learningRate

Initial learning rate. Must be a float in the range [0, 1].

learningRate?: string

Property Value

string

learningRateScheduler

Type of learning rate scheduler. Must be 'warmup_cosine' or 'step'.

learningRateScheduler?: string

Property Value

string
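
The scheduler choice determines which of the related properties take effect: per the property descriptions in this reference, stepLRGamma and stepLRStepSize apply when 'step' is sampled, while warmupCosineLRCycles and warmupCosineLRWarmupEpochs apply to 'warmup_cosine'. A minimal sketch of a combined sweep (hypothetical values, same assumed import as above):

```typescript
// Hypothetical scheduler sweep; settings tied to the scheduler that is not
// sampled have no effect, per the per-property descriptions above.
const schedulerSweep: ImageModelDistributionSettings = {
  learningRateScheduler: "choice('warmup_cosine', 'step')",
  stepLRGamma: "uniform(0.1, 0.5)",
  stepLRStepSize: "choice(5, 10)",
  warmupCosineLRWarmupEpochs: "choice(0, 2)",
};
```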

modelName

Name of the model to use for training. For more information on the available models please visit the official documentation: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models.

modelName?: string

Property Value

string

momentum

Value of momentum when optimizer is 'sgd'. Must be a float in the range [0, 1].

momentum?: string

Property Value

string

nesterov

Enable Nesterov momentum when optimizer is 'sgd'.

nesterov?: string

Property Value

string

numberOfEpochs

Number of training epochs. Must be a positive integer.

numberOfEpochs?: string

Property Value

string

numberOfWorkers

Number of data loader workers. Must be a non-negative integer.

numberOfWorkers?: string

Property Value

string

optimizer

Type of optimizer. Must be 'sgd', 'adam', or 'adamw'.

optimizer?: string

Property Value

string

randomSeed

Random seed to use when deterministic training is enabled.

randomSeed?: string

Property Value

string

stepLRGamma

Value of gamma when learning rate scheduler is 'step'. Must be a float in the range [0, 1].

stepLRGamma?: string

Property Value

string

stepLRStepSize

Value of step size when learning rate scheduler is 'step'. Must be a positive integer.

stepLRStepSize?: string

Property Value

string

trainingBatchSize

Training batch size. Must be a positive integer.

trainingBatchSize?: string

Property Value

string

validationBatchSize

Validation batch size. Must be a positive integer.

validationBatchSize?: string

Property Value

string

warmupCosineLRCycles

Value of cosine cycle when learning rate scheduler is 'warmup_cosine'. Must be a float in the range [0, 1].

warmupCosineLRCycles?: string

Property Value

string

warmupCosineLRWarmupEpochs

Value of warmup epochs when learning rate scheduler is 'warmup_cosine'. Must be a positive integer.

warmupCosineLRWarmupEpochs?: string

Property Value

string

weightDecay

Value of weight decay when optimizer is 'sgd', 'adam', or 'adamw'. Must be a float in the range [0, 1].

weightDecay?: string

Property Value

string