

FaceClient Class

Client for the Azure AI Face service.

Inheritance
azure.ai.vision.face._client.FaceClient
FaceClient

Constructor

FaceClient(endpoint: str, credential: AzureKeyCredential | TokenCredential, **kwargs: Any)

Parameters

Name Description
endpoint
Required
str

Supported Cognitive Services endpoints (protocol and hostname, for example: https://{resource-name}.cognitiveservices.azure.com). Required.

credential
Required

Credential used to authenticate requests to the service. Is either an AzureKeyCredential type or a TokenCredential type. Required.

Keyword-Only Parameters

Name Description
api_version

API Version. Default value is "v1.1-preview.1". Note that overriding this default value may result in unsupported behavior.
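The following is a minimal construction sketch, assuming an existing Azure AI Face (Cognitive Services) resource; the endpoint and key values are placeholders, and a TokenCredential (for example, DefaultAzureCredential from azure-identity) can be used in place of the key credential.


   from azure.core.credentials import AzureKeyCredential
   from azure.ai.vision.face import FaceClient

   # Placeholder endpoint and key; substitute your resource's values.
   client = FaceClient(
       endpoint="https://<resource-name>.cognitiveservices.azure.com",
       credential=AzureKeyCredential("<api-key>"),
   )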

Methods

close

detect

Detect human faces in an image, returning face rectangles and, optionally, faceIds, landmarks, and attributes.

detect_from_url

Detect human faces in an image from a URL, returning face rectangles and, optionally, faceIds, landmarks, and attributes.

find_similar

Given a query face's faceId, search for similar-looking faces in a faceId array created by Detect.

group

Divide candidate faces into groups based on face similarity.

send_request

Runs the network request through the client's chained policies.

verify_face_to_face

Verify whether two faces belong to the same person.

close

close() -> None

detect

Detect human faces in an image, returning face rectangles and, optionally, faceIds, landmarks, and attributes.

[!IMPORTANT] To mitigate potential misuse that can subject people to stereotyping, discrimination, or unfair denial of services, we are retiring the Face API attributes that predict emotion, gender, age, smile, facial hair, hair, and makeup. Read more about this decision at https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/.

  • No image will be stored. Only the extracted face feature(s) will be stored on the server. The faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, and Face - Find Similar. The stored face features will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.

  • Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some of the results returned for specific attributes may not be highly accurate.

  • JPEG, PNG, GIF (the first frame), and BMP formats are supported. The allowed image file size is from 1 KB to 6 MB.

  • The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions larger than 1920x1080 pixels need a proportionally larger minimum face size.

  • Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.

  • For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar ('returnFaceId' is true), please use faces that are frontal, clear, and at least 200x200 pixels (100 pixels between eyes).

  • Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model.

    • 'detection_02': Face attributes and landmarks are disabled if you choose this detection model.

    • 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you choose this detection model.

  • Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, or Find Similar are needed, please specify the recognition model with the 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01'; if the latest model is needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. For more details, please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model.

detect(image_content: bytes, *, detection_model: str | FaceDetectionModel, recognition_model: str | FaceRecognitionModel, return_face_id: bool, return_face_attributes: List[str | FaceAttributeType] | None = None, return_face_landmarks: bool | None = None, return_recognition_model: bool | None = None, face_id_time_to_live: int | None = None, **kwargs: Any) -> List[FaceDetectionResult]

Parameters

Name Description
image_content
Required

The input image binary. Required.

Keyword-Only Parameters

Name Description
detection_model

The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". Required.

recognition_model

The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Required.

return_face_id

Return faceIds of the detected faces or not. Required.

return_face_attributes

Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute analysis has additional computational and time cost. Default value is None.

return_face_landmarks

Return face landmarks of the detected faces or not. The default value is false. Default value is None.

return_recognition_model

Return 'recognitionModel' or not. The default value is false. Default value is None.

face_id_time_to_live
int

The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value is None.

Returns

Type Description

list of FaceDetectionResult

Exceptions

Type Description

azure.core.exceptions.HttpResponseError

Examples


   # response body for status code(s): 200
   response == [
       {
           "faceRectangle": {
               "height": 0,  # The height of the rectangle, in pixels.
                 Required.
               "left": 0,  # The distance from the left edge if the image to
                 the left edge of the rectangle, in pixels. Required.
               "top": 0,  # The distance from the top edge if the image to
                 the top edge of the rectangle, in pixels. Required.
               "width": 0  # The width of the rectangle, in pixels.
                 Required.
           },
           "faceAttributes": {
               "accessories": [
                   {
                       "confidence": 0.0,  # Confidence level of the
                         accessory type. Range between [0,1]. Required.
                       "type": "str"  # Type of the accessory.
                         Required. Known values are: "headwear", "glasses", and "mask".
                   }
               ],
               "age": 0.0,  # Optional. Age in years.
               "blur": {
                   "blurLevel": "str",  # An enum value indicating level
                     of blurriness. Required. Known values are: "low", "medium", and
                     "high".
                   "value": 0.0  # A number indicating level of
                     blurriness ranging from 0 to 1. Required.
               },
               "exposure": {
                   "exposureLevel": "str",  # An enum value indicating
                     level of exposure. Required. Known values are: "underExposure",
                     "goodExposure", and "overExposure".
                   "value": 0.0  # A number indicating level of exposure
                     level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75)
                     is good exposure. [0.75, 1] is over exposure. Required.
               },
               "facialHair": {
                   "beard": 0.0,  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
                   "moustache": 0.0,  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
                   "sideburns": 0.0  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
               },
               "glasses": "str",  # Optional. Glasses type if any of the
                 face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and
                 "swimmingGoggles".
               "hair": {
                   "bald": 0.0,  # A number describing confidence level
                     of whether the person is bald. Required.
                   "hairColor": [
                       {
                           "color": "str",  # Name of the hair
                             color. Required. Known values are: "unknown", "white",
                             "gray", "blond", "brown", "red", "black", and "other".
                           "confidence": 0.0  # Confidence level
                             of the color. Range between [0,1]. Required.
                       }
                   ],
                   "invisible": bool  # A boolean value describing
                     whether the hair is visible in the image. Required.
               },
               "headPose": {
                   "pitch": 0.0,  # Value of angles. Required.
                   "roll": 0.0,  # Value of angles. Required.
                   "yaw": 0.0  # Value of angles. Required.
               },
               "mask": {
                   "noseAndMouthCovered": bool,  # A boolean value
                     indicating whether nose and mouth are covered. Required.
                   "type": "str"  # Type of the mask. Required. Known
                     values are: "faceMask", "noMask", "otherMaskOrOcclusion", and
                     "uncertain".
               },
               "noise": {
                   "noiseLevel": "str",  # An enum value indicating
                     level of noise. Required. Known values are: "low", "medium", and
                     "high".
                   "value": 0.0  # A number indicating level of noise
                     level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75)
                     is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise
                     level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise
                     level. Required.
               },
               "occlusion": {
                   "eyeOccluded": bool,  # A boolean value indicating
                     whether eyes are occluded. Required.
                   "foreheadOccluded": bool,  # A boolean value
                     indicating whether forehead is occluded. Required.
                   "mouthOccluded": bool  # A boolean value indicating
                     whether the mouth is occluded. Required.
               },
               "qualityForRecognition": "str",  # Optional. Properties
                 describing the overall image quality regarding whether the image being
                 used in the detection is of sufficient quality to attempt face
                 recognition on. Known values are: "low", "medium", and "high".
               "smile": 0.0  # Optional. Smile intensity, a number between
                 [0,1].
           },
           "faceId": "str",  # Optional. Unique faceId of the detected face,
             created by detection API and it will expire 24 hours after the detection
             call. To return this, it requires 'returnFaceId' parameter to be true.
           "faceLandmarks": {
               "eyeLeftBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowLeftInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowLeftOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowRightInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowRightOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "mouthLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "mouthRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseLeftAlarOutTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseLeftAlarTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRightAlarOutTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRightAlarTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRootLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRootRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "pupilLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "pupilRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "underLipBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "underLipTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "upperLipBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "upperLipTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               }
           },
           "recognitionModel": "str"  # Optional. The 'recognitionModel'
             associated with this faceId. This is only returned when
             'returnRecognitionModel' is explicitly set as true. Known values are:
             "recognition_01", "recognition_02", "recognition_03", and "recognition_04".
       }
   ]
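The following is a hedged usage sketch for detect; the image path, model choices, and attribute names are illustrative assumptions rather than fixed requirements.


   # Illustrative sketch: detect faces in a local image (path is a placeholder).
   with open("photo.jpg", "rb") as f:
       faces = client.detect(
           f.read(),
           detection_model="detection_03",
           recognition_model="recognition_04",
           return_face_id=True,
           return_face_attributes=["headPose", "mask"],  # attributes supported by detection_03
       )
   for face in faces:
       print(face.face_id, face.face_rectangle)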

detect_from_url

Detect human faces in an image, returning face rectangles and, optionally, faceIds, landmarks, and attributes.

[!IMPORTANT] To mitigate potential misuse that can subject people to stereotyping, discrimination, or unfair denial of services, we are retiring the Face API attributes that predict emotion, gender, age, smile, facial hair, hair, and makeup. Read more about this decision at https://azure.microsoft.com/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/.

  • No image will be stored. Only the extracted face feature(s) will be stored on the server. The faceId is an identifier of the face feature and will be used in Face - Identify, Face - Verify, and Face - Find Similar. The stored face features will expire and be deleted at the time specified by faceIdTimeToLive after the original detection call.

  • Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some of the results returned for specific attributes may not be highly accurate.

  • JPEG, PNG, GIF (the first frame), and BMP formats are supported. The allowed image file size is from 1 KB to 6 MB.

  • The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions larger than 1920x1080 pixels need a proportionally larger minimum face size.

  • Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.

  • For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar ('returnFaceId' is true), please use faces that are frontal, clear, and at least 200x200 pixels (100 pixels between eyes).

  • Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-detection-model.

    • 'detection_02': Face attributes and landmarks are disabled if you choose this detection model.

    • 'detection_03': Face attributes (mask and headPose only) and landmarks are supported if you choose this detection model.

  • Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, or Find Similar are needed, please specify the recognition model with the 'recognitionModel' parameter. The default value for 'recognitionModel' is 'recognition_01'; if the latest model is needed, please explicitly specify the model you need in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. For more details, please refer to https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/specify-recognition-model.

detect_from_url(*, url: str, content_type: str = 'application/json', detection_model: str | _models.FaceDetectionModel, recognition_model: str | _models.FaceRecognitionModel, return_face_id: bool, return_face_attributes: List[str | _models.FaceAttributeType] | None = None, return_face_landmarks: bool | None = None, return_recognition_model: bool | None = None, face_id_time_to_live: int | None = None, **kwargs: Any) -> List[_models.FaceDetectionResult]

Parameters

Name Description
body
Required
<xref:JSON> or IO[bytes]

Is either a JSON type or a IO[bytes] type. Required when url is not set.

Keyword-Only Parameters

Name Description
url
str

URL of input image. Required when body is not set.

detection_model

The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values include 'detection_01', 'detection_02' and 'detection_03'. The default value is 'detection_01'. Known values are: "detection_01", "detection_02", and "detection_03". Required.

recognition_model

The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values include 'recognition_01', 'recognition_02', 'recognition_03' or 'recognition_04'. The default value is 'recognition_01'. 'recognition_04' is recommended since its accuracy is improved on faces wearing masks compared with 'recognition_03', and its overall accuracy is improved compared with 'recognition_01' and 'recognition_02'. Known values are: "recognition_01", "recognition_02", "recognition_03", and "recognition_04". Required.

return_face_id

Return faceIds of the detected faces or not. Required.

return_face_attributes

Analyze and return the one or more specified face attributes in the comma-separated string like 'returnFaceAttributes=headPose,glasses'. Face attribute analysis has additional computational and time cost. Default value is None.

return_face_landmarks

Return face landmarks of the detected faces or not. The default value is false. Default value is None.

return_recognition_model

Return 'recognitionModel' or not. The default value is false. Default value is None.

face_id_time_to_live
int

The number of seconds for the face ID being cached. Supported range from 60 seconds up to 86400 seconds. The default value is 86400 (24 hours). Default value is None.

Returns

Type Description

list of FaceDetectionResult

Exceptions

Type Description

azure.core.exceptions.HttpResponseError

Examples


   # JSON input template you can fill out and use as your body input.
   body = {
       "url": "str"  # URL of input image. Required.
   }

   # response body for status code(s): 200
   response == [
       {
           "faceRectangle": {
               "height": 0,  # The height of the rectangle, in pixels.
                 Required.
               "left": 0,  # The distance from the left edge if the image to
                 the left edge of the rectangle, in pixels. Required.
               "top": 0,  # The distance from the top edge if the image to
                 the top edge of the rectangle, in pixels. Required.
               "width": 0  # The width of the rectangle, in pixels.
                 Required.
           },
           "faceAttributes": {
               "accessories": [
                   {
                       "confidence": 0.0,  # Confidence level of the
                         accessory type. Range between [0,1]. Required.
                       "type": "str"  # Type of the accessory.
                         Required. Known values are: "headwear", "glasses", and "mask".
                   }
               ],
               "age": 0.0,  # Optional. Age in years.
               "blur": {
                   "blurLevel": "str",  # An enum value indicating level
                     of blurriness. Required. Known values are: "low", "medium", and
                     "high".
                   "value": 0.0  # A number indicating level of
                     blurriness ranging from 0 to 1. Required.
               },
               "exposure": {
                   "exposureLevel": "str",  # An enum value indicating
                     level of exposure. Required. Known values are: "underExposure",
                     "goodExposure", and "overExposure".
                   "value": 0.0  # A number indicating level of exposure
                     level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75)
                     is good exposure. [0.75, 1] is over exposure. Required.
               },
               "facialHair": {
                   "beard": 0.0,  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
                   "moustache": 0.0,  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
                   "sideburns": 0.0  # A number ranging from 0 to 1
                     indicating a level of confidence associated with a property.
                     Required.
               },
               "glasses": "str",  # Optional. Glasses type if any of the
                 face. Known values are: "noGlasses", "readingGlasses", "sunglasses", and
                 "swimmingGoggles".
               "hair": {
                   "bald": 0.0,  # A number describing confidence level
                     of whether the person is bald. Required.
                   "hairColor": [
                       {
                           "color": "str",  # Name of the hair
                             color. Required. Known values are: "unknown", "white",
                             "gray", "blond", "brown", "red", "black", and "other".
                           "confidence": 0.0  # Confidence level
                             of the color. Range between [0,1]. Required.
                       }
                   ],
                   "invisible": bool  # A boolean value describing
                     whether the hair is visible in the image. Required.
               },
               "headPose": {
                   "pitch": 0.0,  # Value of angles. Required.
                   "roll": 0.0,  # Value of angles. Required.
                   "yaw": 0.0  # Value of angles. Required.
               },
               "mask": {
                   "noseAndMouthCovered": bool,  # A boolean value
                     indicating whether nose and mouth are covered. Required.
                   "type": "str"  # Type of the mask. Required. Known
                     values are: "faceMask", "noMask", "otherMaskOrOcclusion", and
                     "uncertain".
               },
               "noise": {
                   "noiseLevel": "str",  # An enum value indicating
                     level of noise. Required. Known values are: "low", "medium", and
                     "high".
                   "value": 0.0  # A number indicating level of noise
                     level ranging from 0 to 1. [0, 0.25) is under exposure. [0.25, 0.75)
                     is good exposure. [0.75, 1] is over exposure. [0, 0.3) is low noise
                     level. [0.3, 0.7) is medium noise level. [0.7, 1] is high noise
                     level. Required.
               },
               "occlusion": {
                   "eyeOccluded": bool,  # A boolean value indicating
                     whether eyes are occluded. Required.
                   "foreheadOccluded": bool,  # A boolean value
                     indicating whether forehead is occluded. Required.
                   "mouthOccluded": bool  # A boolean value indicating
                     whether the mouth is occluded. Required.
               },
               "qualityForRecognition": "str",  # Optional. Properties
                 describing the overall image quality regarding whether the image being
                 used in the detection is of sufficient quality to attempt face
                 recognition on. Known values are: "low", "medium", and "high".
               "smile": 0.0  # Optional. Smile intensity, a number between
                 [0,1].
           },
           "faceId": "str",  # Optional. Unique faceId of the detected face,
             created by detection API and it will expire 24 hours after the detection
             call. To return this, it requires 'returnFaceId' parameter to be true.
           "faceLandmarks": {
               "eyeLeftBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeLeftTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyeRightTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowLeftInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowLeftOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowRightInner": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "eyebrowRightOuter": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "mouthLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "mouthRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseLeftAlarOutTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseLeftAlarTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRightAlarOutTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRightAlarTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRootLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseRootRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "noseTip": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "pupilLeft": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "pupilRight": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "underLipBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "underLipTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "upperLipBottom": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               },
               "upperLipTop": {
                   "x": 0.0,  # The horizontal component, in pixels.
                     Required.
                   "y": 0.0  # The vertical component, in pixels.
                     Required.
               }
           },
           "recognitionModel": "str"  # Optional. The 'recognitionModel'
             associated with this faceId. This is only returned when
             'returnRecognitionModel' is explicitly set as true. Known values are:
             "recognition_01", "recognition_02", "recognition_03", and "recognition_04".
       }
   ]
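The following is a hedged usage sketch for detect_from_url; the image URL is a placeholder and the model values are illustrative.


   # Illustrative sketch: detect faces in an image referenced by URL.
   faces = client.detect_from_url(
       url="https://example.com/photo.jpg",  # placeholder image URL
       detection_model="detection_03",
       recognition_model="recognition_04",
       return_face_id=True,
   )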

find_similar

Given a query face's faceId, search for similar-looking faces in a faceId array. The faceId array contains faces created by Detect.

Depending on the input, the returned list of similar faces contains faceIds or persistedFaceIds ranked by similarity.

Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person by applying internal same-person thresholds. It is useful for finding a known person's other photos. Note that an empty list will be returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases such as searching for celebrity-lookalike faces.

The 'recognitionModel' associated with the query faceId should be the same as the 'recognitionModel' used by the target faceId array.

find_similar(body: ~collections.abc.MutableMapping[str, ~typing.Any] | ~typing.IO[bytes] = <object object>, *, face_id: str = <object object>, face_ids: ~typing.List[str] = <object object>, max_num_of_candidates_returned: int | None = None, mode: str | ~azure.ai.vision.face.models._enums.FindSimilarMatchMode | None = None, **kwargs: ~typing.Any) -> List[FaceFindSimilarResult]

Parameters

Name Description
body
Required
<xref:JSON> or IO[bytes]

Is either a JSON type or a IO[bytes] type. Required.

Keyword-Only Parameters

Name Description
face_id
str

faceId of the query face. User needs to call "Detect" first to get a valid faceId. Note that this faceId is not persisted and will expire 24 hours after the detection call. Required.

face_ids

An array of candidate faceIds. All of them are created by "Detect" and the faceIds will expire 24 hours after the detection call. The number of faceIds is limited to 1000. Required.

max_num_of_candidates_returned
int

The number of top similar faces returned. The valid range is [1, 1000]. Default value is 20. Default value is None.

mode

Similar face searching mode. It can be 'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are: "matchPerson" and "matchFace". Default value is None.

Returns

Type Description

list of FaceFindSimilarResult

Exceptions

Type Description

azure.core.exceptions.HttpResponseError

Examples


   # JSON input template you can fill out and use as your body input.
   body = {
       "faceId": "str",  # faceId of the query face. User needs to call "Detect"
         first to get a valid faceId. Note that this faceId is not persisted and will
         expire 24 hours after the detection call. Required.
       "faceIds": [
           "str"  # An array of candidate faceIds. All of them are created by
             "Detect" and the faceIds will expire 24 hours after the detection call. The
             number of faceIds is limited to 1000. Required.
       ],
       "maxNumOfCandidatesReturned": 0,  # Optional. The number of top similar faces
         returned. The valid range is [1, 1000]. Default value is 20.
       "mode": "str"  # Optional. Similar face searching mode. It can be
         'matchPerson' or 'matchFace'. Default value is 'matchPerson'. Known values are:
         "matchPerson" and "matchFace".
   }

   # response body for status code(s): 200
   response == [
       {
           "confidence": 0.0,  # Confidence value of the candidate. The higher
             confidence, the more similar. Range between [0,1]. Required.
           "faceId": "str",  # Optional. faceId of candidate face when find by
             faceIds. faceId is created by "Detect" and will expire 24 hours after the
             detection call.
           "persistedFaceId": "str"  # Optional. persistedFaceId of candidate
             face when find by faceListId or largeFaceListId. persistedFaceId in face
             list/large face list is persisted and will not expire.
       }
   ]
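The following is a hedged usage sketch for find_similar, assuming query_face and candidates are FaceDetectionResult objects returned by earlier detect calls made with the same 'recognitionModel'.


   # Illustrative sketch: rank candidate faces by similarity to a query face.
   similar = client.find_similar(
       face_id=query_face.face_id,
       face_ids=[c.face_id for c in candidates],
       max_num_of_candidates_returned=5,
       mode="matchPerson",
   )
   for match in similar:
       print(match.face_id, match.confidence)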

group

Divide candidate faces into groups based on face similarity.


  • The output is one or more disjoint face groups and a messyGroup. A face group contains faces that look similar, often belonging to the same person. Face groups are ranked by group size, i.e. number of faces. Note that faces belonging to the same person might be split into several groups in the result.

  • MessyGroup is a special face group containing faces for which no similar counterpart was found among the original faces. The messyGroup will not appear in the result if all faces found their counterparts.

  • The Group API needs at least 2 candidate faces and at most 1000. We suggest trying "Verify Face To Face" when you have only 2 candidate faces.

  • The 'recognitionModel' associated with the query faces' faceIds should be the same.

group(body: ~collections.abc.MutableMapping[str, ~typing.Any] | ~typing.IO[bytes] = <object object>, *, face_ids: ~typing.List[str] = <object object>, **kwargs: ~typing.Any) -> FaceGroupingResult

Parameters

Name Description
body
Required
<xref:JSON> or IO[bytes]

Is either a JSON type or a IO[bytes] type. Required.

Keyword-Only Parameters

Name Description
face_ids

Array of candidate faceIds created by "Detect". The maximum is 1000 faces. Required.

Returns

Type Description

FaceGroupingResult. The FaceGroupingResult is compatible with MutableMapping

Exceptions

Type Description

azure.core.exceptions.HttpResponseError

Examples


   # JSON input template you can fill out and use as your body input.
   body = {
       "faceIds": [
           "str"  # Array of candidate faceIds created by "Detect". The maximum
             is 1000 faces. Required.
       ]
   }

   # response body for status code(s): 200
   response == {
       "groups": [
           [
               "str"  # A partition of the original faces based on face
                 similarity. Groups are ranked by number of faces. Required.
           ]
       ],
       "messyGroup": [
           "str"  # Face ids array of faces that cannot find any similar faces
             from original faces. Required.
       ]
   }
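The following is a hedged usage sketch for group, assuming candidates is a list of FaceDetectionResult objects returned by earlier detect calls made with the same 'recognitionModel'.


   # Illustrative sketch: group 2 to 1000 candidate faceIds by similarity.
   result = client.group(face_ids=[c.face_id for c in candidates])
   print(result.groups)       # lists of faceIds, ranked by group size
   print(result.messy_group)  # faceIds with no similar counterpart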

send_request

Runs the network request through the client's chained policies.


>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
>>> request
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = client.send_request(request)
>>> response
<HttpResponse: 200 OK>

For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request

send_request(request: HttpRequest, *, stream: bool = False, **kwargs: Any) -> HttpResponse

Parameters

Name Description
request
Required

The network request you want to make. Required.

Keyword-Only Parameters

Name Description
stream

Whether the response payload will be streamed. Defaults to False.

Returns

Type Description

The response of your network call. Does not do error handling on your response.
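The following is a hedged sketch of a streamed call; the request URL is a placeholder.


   # Illustrative sketch: stream the response payload instead of buffering it.
   from azure.core.rest import HttpRequest

   request = HttpRequest("GET", "https://www.example.org/")
   response = client.send_request(request, stream=True)
   for chunk in response.iter_bytes():  # iterate the body incrementally
       ...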

verify_face_to_face

Verify whether two faces belong to the same person.

[!NOTE]

  • Higher face image quality means better identification precision. Please use high-quality faces: frontal, clear, and with a face size of 200x200 pixels (100 pixels between eyes) or larger.

  • For scenarios that are sensitive to accuracy, please make your own judgment.

  • The 'recognitionModel' associated with both faces should be the same.

verify_face_to_face(body: ~collections.abc.MutableMapping[str, ~typing.Any] | ~typing.IO[bytes] = <object object>, *, face_id1: str = <object object>, face_id2: str = <object object>, **kwargs: ~typing.Any) -> FaceVerificationResult

Parameters

Name Description
body
Required
<xref:JSON> or IO[bytes]

Is either a JSON type or a IO[bytes] type. Required.

Keyword-Only Parameters

Name Description
face_id1
str

The faceId of one face, as returned by "Detect". Required.

face_id2
str

The faceId of another face, as returned by "Detect". Required.

Returns

Type Description

FaceVerificationResult. The FaceVerificationResult is compatible with MutableMapping

Exceptions

Type Description

azure.core.exceptions.HttpResponseError

Examples


   # JSON input template you can fill out and use as your body input.
   body = {
       "faceId1": "str",  # The faceId of one face, come from "Detect". Required.
       "faceId2": "str"  # The faceId of another face, come from "Detect". Required.
   }

   # response body for status code(s): 200
   response == {
       "confidence": 0.0,  # A number indicates the similarity confidence of whether
         two faces belong to the same person, or whether the face belongs to the person.
         By default, isIdentical is set to True if similarity confidence is greater than
         or equal to 0.5. This is useful for advanced users to override 'isIdentical' and
         fine-tune the result on their own data. Required.
       "isIdentical": bool  # True if the two faces belong to the same person or the
         face belongs to the person, otherwise false. Required.
   }
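The following is a hedged usage sketch for verify_face_to_face, assuming face_a and face_b are FaceDetectionResult objects returned by earlier detect calls made with the same 'recognitionModel'.


   # Illustrative sketch: check whether two detected faces are the same person.
   result = client.verify_face_to_face(
       face_id1=face_a.face_id,
       face_id2=face_b.face_id,
   )
   print(result.is_identical, result.confidence)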