DML_BATCH_NORMALIZATION_TRAINING_GRAD_OPERATOR_DESC structure (directml.h)
Computes backpropagation gradients for batch normalization training.
This operator performs multiple computations, which are detailed in the descriptions of the individual output tensors.
Any dimension in MeanTensor, VarianceTensor, and ScaleTensor can be set to 1, in which case it is automatically broadcast to match InputTensor; otherwise, it must equal the size of the corresponding dimension of InputTensor. For example, given an InputTensor with Sizes { N, C, H, W }, the per-channel statistics tensors typically have Sizes { 1, C, 1, 1 }.
OutputScaleGradientTensor and OutputBiasGradientTensor are computed by summing across the set of dimensions for which the MeanTensor, VarianceTensor, and ScaleTensor sizes equal 1.
```cpp
struct DML_BATCH_NORMALIZATION_TRAINING_GRAD_OPERATOR_DESC {
    const DML_TENSOR_DESC *InputTensor;
    const DML_TENSOR_DESC *InputGradientTensor;
    const DML_TENSOR_DESC *MeanTensor;
    const DML_TENSOR_DESC *VarianceTensor;
    const DML_TENSOR_DESC *ScaleTensor;
    const DML_TENSOR_DESC *OutputGradientTensor;
    const DML_TENSOR_DESC *OutputScaleGradientTensor;
    const DML_TENSOR_DESC *OutputBiasGradientTensor;
    FLOAT Epsilon;
};
```
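As a minimal sketch (not official sample code) of how this structure might be populated, the following assumes an NCHW input with per-channel statistics, matching the broadcasting rules above. CreateTensorDesc is a hypothetical helper, not part of DirectML, that builds a DML_TENSOR_DESC for FLOAT32 data with the given sizes.

```cpp
// Hypothetical helper (not part of DirectML): builds a DML_TENSOR_DESC
// backed by a DML_BUFFER_TENSOR_DESC for FLOAT32 data with the given sizes.
DML_TENSOR_DESC inputDesc      = CreateTensorDesc({ 2, 32, 64, 64 }); // { N, C, H, W }
DML_TENSOR_DESC perChannelDesc = CreateTensorDesc({ 1, 32, 1, 1 });   // broadcast over N, H, W

DML_BATCH_NORMALIZATION_TRAINING_GRAD_OPERATOR_DESC desc = {};
desc.InputTensor               = &inputDesc;
desc.InputGradientTensor       = &inputDesc;      // same Sizes as InputTensor
desc.MeanTensor                = &perChannelDesc;
desc.VarianceTensor            = &perChannelDesc;
desc.ScaleTensor               = &perChannelDesc;
desc.OutputGradientTensor      = &inputDesc;      // same Sizes as InputTensor
desc.OutputScaleGradientTensor = &perChannelDesc;
desc.OutputBiasGradientTensor  = &perChannelDesc;
desc.Epsilon                   = 1e-5f;           // a commonly used value

DML_OPERATOR_DESC opDesc = { DML_OPERATOR_BATCH_NORMALIZATION_TRAINING_GRAD, &desc };
// opDesc can then be passed to IDMLDevice::CreateOperator.
```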
InputTensor
Type: const DML_TENSOR_DESC*
A tensor containing the input data. This is typically the same tensor that was provided as the InputTensor to DML_BATCH_NORMALIZATION_TRAINING_OPERATOR_DESC in the forward pass.
InputGradientTensor
Type: const DML_TENSOR_DESC*
The incoming gradient tensor. This is typically obtained from the output of backpropagation of a preceding layer.
MeanTensor
Type: const DML_TENSOR_DESC*
A tensor containing the mean data. This is typically the same tensor that was returned as the OutputMeanTensor from DML_BATCH_NORMALIZATION_TRAINING_OPERATOR_DESC in the forward pass.
VarianceTensor
Type: const DML_TENSOR_DESC*
A tensor containing the variance data. This is typically the same tensor that was returned as the OutputVarianceTensor from DML_BATCH_NORMALIZATION_TRAINING_OPERATOR_DESC in the forward pass.
ScaleTensor
Type: const DML_TENSOR_DESC*
A tensor containing the scale data. This is typically the same tensor that was provided as the ScaleTensor to DML_BATCH_NORMALIZATION_TRAINING_OPERATOR_DESC in the forward pass.
OutputGradientTensor
Type: const DML_TENSOR_DESC*
For every corresponding value in the inputs:

```
Coef0 = 1.0f / sqrt(Variance + Epsilon)
Coef1 = InputGradient * (Input - mean(Input))
InputGradientCentered = InputGradient - mean(InputGradient)
InputCentered = Input - mean(Input)
OutputGradient = Scale * Coef0 * (InputGradientCentered - InputCentered * mean(Coef1) / (Variance + Epsilon))
```
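To make these formulas concrete, here is a scalar reference sketch, an illustration rather than the DirectML implementation. It computes the input gradient for a single channel, assuming a per-channel layout where mean() is taken over all elements that share that channel's Mean, Variance, and Scale values.

```cpp
#include <cmath>
#include <vector>

// Reference sketch (not the DirectML implementation). `input` and
// `inputGradient` hold the elements that share one channel's statistics.
std::vector<float> BatchNormInputGradient(
    const std::vector<float>& input,
    const std::vector<float>& inputGradient,
    float variance, float scale, float epsilon)
{
    const float n = static_cast<float>(input.size());

    // mean(Input) and mean(InputGradient)
    float meanInput = 0.0f, meanInputGradient = 0.0f;
    for (size_t i = 0; i < input.size(); ++i)
    {
        meanInput += input[i];
        meanInputGradient += inputGradient[i];
    }
    meanInput /= n;
    meanInputGradient /= n;

    // mean(Coef1), where Coef1 = InputGradient * (Input - mean(Input))
    float meanCoef1 = 0.0f;
    for (size_t i = 0; i < input.size(); ++i)
    {
        meanCoef1 += inputGradient[i] * (input[i] - meanInput);
    }
    meanCoef1 /= n;

    const float coef0 = 1.0f / std::sqrt(variance + epsilon);

    std::vector<float> outputGradient(input.size());
    for (size_t i = 0; i < input.size(); ++i)
    {
        const float inputGradientCentered = inputGradient[i] - meanInputGradient;
        const float inputCentered = input[i] - meanInput;
        outputGradient[i] = scale * coef0 *
            (inputGradientCentered - inputCentered * meanCoef1 / (variance + epsilon));
    }
    return outputGradient;
}
```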
OutputScaleGradientTensor
Type: const DML_TENSOR_DESC*
The following computation is done for every corresponding value in the inputs:

```
OutputScaleGradient = sum(InputGradient * (Input - Mean) / sqrt(Variance + Epsilon))
```
OutputBiasGradientTensor
Type: const DML_TENSOR_DESC*
The following computation is done for every corresponding value in the inputs:

```
OutputBiasGradient = sum(InputGradient)
```
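A matching reference sketch for the preceding two reductions, again computing one channel's values under the same per-channel assumptions as the sketch above:

```cpp
#include <cmath>
#include <vector>

// Reference sketch (not the DirectML implementation): computes the scale and
// bias gradients for one channel by summing over the elements that share
// that channel's mean and variance.
void BatchNormScaleBiasGradients(
    const std::vector<float>& input,
    const std::vector<float>& inputGradient,
    float mean, float variance, float epsilon,
    float* outScaleGradient, float* outBiasGradient)
{
    float scaleGradient = 0.0f, biasGradient = 0.0f;
    const float invStdDev = 1.0f / std::sqrt(variance + epsilon);
    for (size_t i = 0; i < input.size(); ++i)
    {
        // OutputScaleGradient = sum(InputGradient * (Input - Mean) / sqrt(Variance + Epsilon))
        scaleGradient += inputGradient[i] * (input[i] - mean) * invStdDev;
        // OutputBiasGradient = sum(InputGradient)
        biasGradient += inputGradient[i];
    }
    *outScaleGradient = scaleGradient;
    *outBiasGradient = biasGradient;
}
```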
Epsilon
Type: FLOAT
A small float value added to the variance to avoid division by zero.
This operator was introduced in DML_FEATURE_LEVEL_4_1.
- InputGradientTensor, InputTensor, MeanTensor, OutputBiasGradientTensor, OutputGradientTensor, OutputScaleGradientTensor, ScaleTensor, and VarianceTensor must have the same DataType and DimensionCount.
- MeanTensor, OutputBiasGradientTensor, OutputScaleGradientTensor, ScaleTensor, and VarianceTensor must have the same Sizes.
- InputGradientTensor, InputTensor, and OutputGradientTensor must have the same Sizes.
Tensor | Kind | Dimensions | Supported dimension counts | Supported data types |
---|---|---|---|---|
InputTensor | Input | { InputDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
InputGradientTensor | Input | { InputDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
MeanTensor | Input | { MeanDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
VarianceTensor | Input | { MeanDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
ScaleTensor | Input | { MeanDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
OutputGradientTensor | Output | { InputDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
OutputScaleGradientTensor | Output | { MeanDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
OutputBiasGradientTensor | Output | { MeanDimensions[] } | 1 to 8 | FLOAT32, FLOAT16 |
Requirement | Value |
---|---|
Header | directml.h |