FaceDetectorPreset Class

Describes all the settings to be used when analyzing a video in order to detect (and optionally redact) all the faces present.

All required parameters must be populated in order to send to Azure.

Inheritance
azure.mgmt.media.models._models_py3.Preset
FaceDetectorPreset

Constructor

FaceDetectorPreset(*, resolution: str | _models.AnalysisResolution | None = None, mode: str | _models.FaceRedactorMode | None = None, blur_type: str | _models.BlurType | None = None, experimental_options: Dict[str, str] | None = None, **kwargs)
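
As a minimal sketch of typical usage, a preset that analyzes at standard definition and blurs every detected face could be constructed as follows. The string values shown can also be replaced with the AnalysisResolution, FaceRedactorMode, and BlurType enums referenced in the constructor signature.

from azure.mgmt.media.models import FaceDetectorPreset

# Analyze at standard definition (cheaper/faster) and blur all detected faces.
preset = FaceDetectorPreset(
    resolution="StandardDefinition",
    mode="Combined",
    blur_type="Med",
)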

Keyword-Only Parameters

Name Description
resolution

Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected. Known values are: "SourceResolution" and "StandardDefinition".

mode

This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction. Known values are: "Analyze", "Redact", and "Combined".
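
As a sketch of the 2-pass flow described above, the analyze and redact passes use two separate presets. Only the preset construction is shown; supplying the metadata file and the selected face IDs to the redact job is outside the scope of this sketch.

from azure.mgmt.media.models import FaceDetectorPreset

# Pass 1: detection only; produces the metadata JSON with face IDs.
analyze_preset = FaceDetectorPreset(mode="Analyze")

# Pass 2: redact only a user-selected subset of the detected face IDs,
# using the metadata produced by the analyze pass (job wiring not shown).
redact_preset = FaceDetectorPreset(mode="Redact", blur_type="Black")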

blur_type

Blur type. Known values are: "Box", "Low", "Med", "High", and "Black".

experimental_options

Dictionary containing key-value pairs for parameters not exposed in the preset itself.

Variables

Name Description
odata_type
str

The discriminator for derived types. Required.

resolution

Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected. Known values are: "SourceResolution" and "StandardDefinition".

mode

This mode provides the ability to choose between the following settings: 1) Analyze - For detection only. This mode generates a metadata JSON file marking appearances of faces throughout the video. Where possible, appearances of the same person are assigned the same ID. 2) Combined - Additionally redacts (blurs) detected faces. 3) Redact - This enables a 2-pass process, allowing for selective redaction of a subset of detected faces. It takes in the metadata file from a prior analyze pass, along with the source video, and a user-selected subset of IDs that require redaction. Known values are: "Analyze", "Redact", and "Combined".

blur_type

Blur type. Known values are: "Box", "Low", "Med", "High", and "Black".

experimental_options

Dictionary containing key-value pairs for parameters not exposed in the preset itself.