DML_ACTIVATION_HARD_SWISH_OPERATOR_DESC structure (directml.h)

Performs a hard swish activation function on every element in InputTensor, placing the result into the corresponding element of OutputTensor.

f(x) = x * HardSigmoid(x, Alpha, Beta)

where HardSigmoid(x, Alpha, Beta) = max(0, min(1, Alpha * x + Beta)).

This operator supports in-place execution, meaning that the output tensor is permitted to alias InputTensor during binding.

Important

This API is available as part of the DirectML standalone redistributable package (see Microsoft.AI.DirectML version 1.13 and later). Also see DirectML version history.

Syntax

struct DML_ACTIVATION_HARD_SWISH_OPERATOR_DESC
{
    const DML_TENSOR_DESC* InputTensor;
    const DML_TENSOR_DESC* OutputTensor;
    FLOAT Alpha;
    FLOAT Beta;
};

Members

InputTensor

Type: const DML_TENSOR_DESC*

The input tensor to read from.

OutputTensor

Type: const DML_TENSOR_DESC*

The output tensor to write the results to.

Alpha

Type: FLOAT

The scale coefficient. A typical default for this value is 0.166667 (1/6).

Beta

Type: FLOAT

The bias coefficient. A typical default for this value is 0.5.

Availability

This operator was introduced in DML_FEATURE_LEVEL_6_2.

Tensor constraints

InputTensor and OutputTensor must have the same DataType, DimensionCount, and Sizes.

Tensor support

Tensor        Kind    Supported dimension counts  Supported data types
InputTensor   Input   1 to 8                      FLOAT32, FLOAT16
OutputTensor  Output  1 to 8                      FLOAT32, FLOAT16

Requirements

Header    directml.h