DML_ELEMENT_WISE_ADD1_OPERATOR_DESC structure (directml.h)
Adds every element in ATensor to its corresponding element in BTensor and places the result into the corresponding element of OutputTensor, with the option for fused activation.
f(a, b) = FusedActivation(a + b)
If a fused activation operator description is provided, that activation is applied to the result of the addition.
This operator supports in-place execution, meaning that OutputTensor is permitted to alias one or more of the input tensors during binding.
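The computation above can be illustrated with a minimal CPU reference in plain C++ (this is only a sketch of the semantics, not the DirectML API; ReLU stands in for an arbitrary fused activation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative reference for f(a, b) = FusedActivation(a + b):
// out[i] = activation(a[i] + b[i]), applied element-wise.
std::vector<float> ElementWiseAdd1(const std::vector<float>& a,
                                   const std::vector<float>& b,
                                   bool fuseRelu)
{
    assert(a.size() == b.size()); // ATensor and BTensor must have identical sizes
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        float sum = a[i] + b[i];
        // With no fused activation the result is the plain sum.
        out[i] = fuseRelu ? (sum > 0.0f ? sum : 0.0f) : sum;
    }
    return out;
}
```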
```cpp
struct DML_ELEMENT_WISE_ADD1_OPERATOR_DESC {
  const DML_TENSOR_DESC *ATensor;
  const DML_TENSOR_DESC *BTensor;
  const DML_TENSOR_DESC *OutputTensor;
  const DML_OPERATOR_DESC *FusedActivation;
};
```
ATensor
Type: const DML_TENSOR_DESC*
A tensor containing the left-hand side inputs.
BTensor
Type: const DML_TENSOR_DESC*
A tensor containing the right-hand side inputs.
OutputTensor
Type: const DML_TENSOR_DESC*
The output tensor to write the results to.
FusedActivation
Type: _Maybenull_ const DML_OPERATOR_DESC*
An optional fused activation layer to apply after the addition. For more info, see Using fused operators for improved performance.
Fused activation may be used only when the output data type is FLOAT16 or FLOAT32.
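A minimal sketch of filling out this structure with a fused ReLU activation follows. It assumes `aDesc`, `bDesc`, and `outDesc` are `DML_TENSOR_DESC`s initialized elsewhere with matching data type, dimension count, and sizes; note that the inner activation's own tensor descriptions must be left null when it is used as a fused activation:

```cpp
#include <DirectML.h>

// Sketch: describe an element-wise add with a fused ReLU.
// The caller owns addDesc, reluDesc, and reluOpDesc so they outlive
// the returned DML_OPERATOR_DESC, which points into them.
DML_OPERATOR_DESC MakeFusedAddDesc(
    const DML_TENSOR_DESC& aDesc,
    const DML_TENSOR_DESC& bDesc,
    const DML_TENSOR_DESC& outDesc,
    DML_ELEMENT_WISE_ADD1_OPERATOR_DESC& addDesc,
    DML_ACTIVATION_RELU_OPERATOR_DESC& reluDesc,
    DML_OPERATOR_DESC& reluOpDesc)
{
    reluDesc = {}; // InputTensor/OutputTensor stay null for a fused activation
    reluOpDesc = { DML_OPERATOR_ACTIVATION_RELU, &reluDesc };

    addDesc = {};
    addDesc.ATensor = &aDesc;
    addDesc.BTensor = &bDesc;
    addDesc.OutputTensor = &outDesc;
    addDesc.FusedActivation = &reluOpDesc; // or nullptr for no activation

    return { DML_OPERATOR_ELEMENT_WISE_ADD1, &addDesc };
}
```

The resulting `DML_OPERATOR_DESC` would then be passed to `IDMLDevice::CreateOperator` as usual.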
This operator was introduced in DML_FEATURE_LEVEL_2_0.
ATensor, BTensor, and OutputTensor must have the same DataType, DimensionCount, and Sizes.
In DML_FEATURE_LEVEL_3_0 and above

Tensor | Kind | Supported dimension counts | Supported data types |
---|---|---|---|
ATensor | Input | 1 to 8 | FLOAT32, FLOAT16 |
BTensor | Input | 1 to 8 | FLOAT32, FLOAT16 |
OutputTensor | Output | 1 to 8 | FLOAT32, FLOAT16 |

In DML_FEATURE_LEVEL_2_0 and DML_FEATURE_LEVEL_2_1

Tensor | Kind | Supported dimension counts | Supported data types |
---|---|---|---|
ATensor | Input | 4 to 5 | FLOAT32, FLOAT16 |
BTensor | Input | 4 to 5 | FLOAT32, FLOAT16 |
OutputTensor | Output | 4 to 5 | FLOAT32, FLOAT16 |
Requirement | Value |
---|---|
Minimum supported client | Windows 10, version 2004 (10.0; Build 19041) |
Minimum supported server | Windows Server, version 2004 (10.0; Build 19041) |
Header | directml.h |