DML_MEAN_VARIANCE_NORMALIZATION_OPERATOR_DESC structure (directml.h)
Performs a mean variance normalization function on the input tensor. This operator calculates the mean and variance of the input tensor, and uses them to perform the following computation.
Output = FusedActivation(Scale * ((Input - Mean) / sqrt(Variance + Epsilon)) + Bias).
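To make the math concrete, here's a minimal reference sketch (not DirectML API code) that normalizes one channel's Height x Width elements in place. The names `values`, `scale`, `bias`, and `epsilon` are illustrative stand-ins for the Input, Scale, Bias, and Epsilon inputs described below.

```cpp
#include <cmath>
#include <vector>

// Reference sketch of the MVN math for a single channel, flattened into `values`.
void MvnReference(std::vector<float>& values, float scale, float bias, float epsilon)
{
    // Mean over the channel's elements.
    float mean = 0.0f;
    for (float v : values) mean += v;
    mean /= static_cast<float>(values.size());

    // Variance over the channel's elements.
    float variance = 0.0f;
    for (float v : values) variance += (v - mean) * (v - mean);
    variance /= static_cast<float>(values.size());

    // Output = scale * ((input - mean) / sqrt(variance + epsilon)) + bias.
    for (float& v : values)
        v = scale * ((v - mean) / std::sqrt(variance + epsilon)) + bias;
}
```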
Syntax
```cpp
struct DML_MEAN_VARIANCE_NORMALIZATION_OPERATOR_DESC {
  const DML_TENSOR_DESC *InputTensor;
  const DML_TENSOR_DESC *ScaleTensor;
  const DML_TENSOR_DESC *BiasTensor;
  const DML_TENSOR_DESC *OutputTensor;
  BOOL CrossChannel;
  BOOL NormalizeVariance;
  FLOAT Epsilon;
  const DML_OPERATOR_DESC *FusedActivation;
};
```
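As an illustrative sketch (under assumptions, not from the original article), the following shows one way to populate this struct and create the operator with IDMLDevice::CreateOperator. The field values chosen here (CrossChannel, NormalizeVariance, Epsilon) are example settings, and the helper function name is hypothetical.

```cpp
#include <d3d12.h>
#include <directml.h>

// Hypothetical helper: wraps the tensor descs in an MVN operator desc and
// creates the operator. The caller owns the tensor descs and the device.
HRESULT CreateMvnOperator(
    IDMLDevice* dmlDevice,
    const DML_TENSOR_DESC* inputDesc,
    const DML_TENSOR_DESC* scaleDesc,   // Optional; may be nullptr.
    const DML_TENSOR_DESC* biasDesc,    // Optional; may be nullptr.
    const DML_TENSOR_DESC* outputDesc,
    IDMLOperator** result)
{
    DML_MEAN_VARIANCE_NORMALIZATION_OPERATOR_DESC mvnDesc = {};
    mvnDesc.InputTensor = inputDesc;
    mvnDesc.ScaleTensor = scaleDesc;
    mvnDesc.BiasTensor = biasDesc;
    mvnDesc.OutputTensor = outputDesc;
    mvnDesc.CrossChannel = FALSE;       // Normalize each channel independently.
    mvnDesc.NormalizeVariance = TRUE;   // Divide by sqrt(Variance + Epsilon).
    mvnDesc.Epsilon = 0.00001f;
    mvnDesc.FusedActivation = nullptr;  // No fused activation.

    DML_OPERATOR_DESC opDesc = { DML_OPERATOR_MEAN_VARIANCE_NORMALIZATION, &mvnDesc };
    return dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(result));
}
```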
Members
InputTensor
Type: const DML_TENSOR_DESC*
A tensor containing the Input data. This tensor's dimensions should be { BatchCount, ChannelCount, Height, Width }.
ScaleTensor
Type: _Maybenull_ const DML_TENSOR_DESC*
An optional tensor containing the Scale data. This tensor's dimensions should be { BatchCount, ChannelCount, Height, Width }. Any dimension can be replaced with 1 to broadcast in that dimension. If DML_FEATURE_LEVEL is less than DML_FEATURE_LEVEL_5_2, then this tensor is required if BiasTensor is present. If DML_FEATURE_LEVEL is greater than or equal to DML_FEATURE_LEVEL_5_2, then this tensor can be null regardless of the value of BiasTensor.
BiasTensor
Type: _Maybenull_ const DML_TENSOR_DESC*
An optional tensor containing the Bias data. This tensor's dimensions should be { BatchCount, ChannelCount, Height, Width }. Any dimension can be replaced with 1 to broadcast in that dimension. If DML_FEATURE_LEVEL is less than DML_FEATURE_LEVEL_5_2, then this tensor is required if ScaleTensor is present. If DML_FEATURE_LEVEL is greater than or equal to DML_FEATURE_LEVEL_5_2, then this tensor can be null regardless of the value of ScaleTensor.
OutputTensor
Type: const DML_TENSOR_DESC*
A tensor to write the results to. This tensor's dimensions are { BatchCount, ChannelCount, Height, Width }.
CrossChannel
Type: BOOL
When TRUE, the Mean and Variance calculations include the channel dimension, normalizing across axes { ChannelCount, Height, Width }. When FALSE, each channel is treated independently, and Mean and Variance are normalized across axes { Height, Width }. A sketch of the difference follows below.
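As an illustrative sketch of what CrossChannel changes (DirectML performs this reduction internally; this helper is hypothetical), here is the number of input elements that each mean/variance statistic reduces over, for an input of sizes { BatchCount, ChannelCount, Height, Width }:

```cpp
#include <cstdint>

// Illustrative only, not part of the DirectML API.
uint32_t ElementsPerStatistic(bool crossChannel, uint32_t channelCount, uint32_t height, uint32_t width)
{
    // true:  one statistic per batch, reduced across { ChannelCount, Height, Width }.
    // false: one statistic per (batch, channel), reduced across { Height, Width }.
    return crossChannel ? (channelCount * height * width) : (height * width);
}
```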
NormalizeVariance
Type: BOOL
TRUE if the normalization calculation includes Variance; otherwise, FALSE. If FALSE, then the normalization equation is Output = FusedActivation(Scale * (Input - Mean) + Bias).
Epsilon
Type: FLOAT
The epsilon value to use to avoid division by zero. The recommended default value is 0.00001.
FusedActivation
Type: _Maybenull_ const DML_OPERATOR_DESC*
An optional fused activation layer to apply after the normalization. For more info, see Using fused operators for improved performance.
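For example, here's a minimal sketch of fusing a ReLU activation, assuming `mvnDesc` is the DML_MEAN_VARIANCE_NORMALIZATION_OPERATOR_DESC being populated (as in the sketch after the Syntax section). The tensor descriptions of a fused activation operator are null, and both descs must remain alive until the operator is created.

```cpp
// Describe the activation to fuse; its tensor descs are null when fused.
DML_ACTIVATION_RELU_OPERATOR_DESC reluDesc = {};
reluDesc.InputTensor = nullptr;
reluDesc.OutputTensor = nullptr;

// Wrap it in a generic operator desc and attach it to the MVN desc.
DML_OPERATOR_DESC fusedActivation = { DML_OPERATOR_ACTIVATION_RELU, &reluDesc };
mvnDesc.FusedActivation = &fusedActivation;
```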
Remarks
A newer version of this operator, DML_MEAN_VARIANCE_NORMALIZATION1_OPERATOR_DESC, was introduced in DML_FEATURE_LEVEL_2_1.
Availability
This operator was introduced in DML_FEATURE_LEVEL_1_0.
Tensor constraints
- InputTensor and OutputTensor must have the same Sizes.
- BiasTensor, InputTensor, OutputTensor, and ScaleTensor must have the same DataType.
Tensor support
| Tensor | Kind | Dimensions | Supported dimension counts | Supported data types |
|---|---|---|---|---|
| InputTensor | Input | { BatchCount, ChannelCount, Height, Width } | 4 | FLOAT32, FLOAT16 |
| ScaleTensor | Optional input | { ScaleBatchCount, ScaleChannelCount, ScaleHeight, ScaleWidth } | 4 | FLOAT32, FLOAT16 |
| BiasTensor | Optional input | { BiasBatchCount, BiasChannelCount, BiasHeight, BiasWidth } | 4 | FLOAT32, FLOAT16 |
| OutputTensor | Output | { BatchCount, ChannelCount, Height, Width } | 4 | FLOAT32, FLOAT16 |
Requirements
| Requirement | Value |
|---|---|
| Header | directml.h |