Call the Detect API


Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.


Microsoft has retired facial recognition capabilities that can be used to try to infer emotional states and identity attributes which, if misused, can subject people to stereotyping, discrimination or unfair denial of services. These include capabilities that predict emotion, gender, age, smile, facial hair, hair and makeup. Read more about this decision here.

This guide demonstrates how to use the face detection API to extract attributes such as head pose, glasses, or quality for recognition from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.

The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the REST API.


This guide assumes that you already constructed a FaceClient object, named faceClient, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.

Submit data to the service

To find faces and get their locations in an image, call the DetectWithUrlAsync or DetectWithStreamAsync method. DetectWithUrlAsync takes a URL string as input, and DetectWithStreamAsync takes the raw byte stream of an image as input.

IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
    url: imageUrl,
    returnFaceId: false,
    detectionModel: DetectionModel.Detection03);

The service returns a DetectedFace object, which you can query for different kinds of information, described in the following sections.

For information on how to parse the location and dimensions of the face, see FaceRectangle. Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of the head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
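As a minimal sketch of that expansion, you might widen the rectangle by a margin on each side and clamp the result to the image bounds. The 0.5 margin factor and the coordinate values here are arbitrary example choices, not service recommendations:

```csharp
using System;

// Hypothetical helper: expands a face rectangle by a margin factor on each
// side, clamping to the image bounds, to crop a fuller head shot.
(int Left, int Top, int Width, int Height) Expand(
    int left, int top, int width, int height,
    int imageWidth, int imageHeight, double factor)
{
    int marginX = (int)(width * factor);
    int marginY = (int)(height * factor);
    int newLeft = Math.Max(0, left - marginX);
    int newTop = Math.Max(0, top - marginY);
    int right = Math.Min(imageWidth, left + width + marginX);
    int bottom = Math.Min(imageHeight, top + height + marginY);
    return (newLeft, newTop, right - newLeft, bottom - newTop);
}

// Example: a 100x120 face rectangle at (200, 150) in a 640x480 image.
var crop = Expand(200, 150, 100, 120, 640, 480, 0.5);
Console.WriteLine($"{crop.Left},{crop.Top},{crop.Width},{crop.Height}");
// prints "150,90,200,240"
```

In a real application you would pass `face.FaceRectangle.Left`, `.Top`, `.Width`, and `.Height` from a DetectedFace result instead of the literal values.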

Determine how to process the data

This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. Each additional feature you request takes more time to process, so we recommend that you query only for the features you need.

Get face ID

If you set the parameter returnFaceId to true (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.

foreach (var face in faces)
{
    string id = face.FaceId.ToString();
    FaceRectangle rect = face.FaceRectangle;
}

The optional faceIdTimeToLive parameter specifies how long (in seconds) the face ID should be stored on the server. After this time expires, the face ID is removed. The default value is 86400 (24 hours).

Get face landmarks

Face landmarks are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the detectionModel parameter to DetectionModel.Detection01 and the returnFaceLandmarks parameter to true.

// Note DetectionModel.Detection02 cannot be used with returnFaceLandmarks.
IList<DetectedFace> faces2 = await faceClient.Face.DetectWithUrlAsync(
    url: imageUrl,
    returnFaceId: false,
    returnFaceLandmarks: true,
    detectionModel: DetectionModel.Detection01);

Get face attributes

Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the Face attributes conceptual section.

To analyze face attributes, set the detectionModel parameter to DetectionModel.Detection01 and the returnFaceAttributes parameter to a list of FaceAttributeType Enum values.

var requiredFaceAttributes = new FaceAttributeType[]
{
    FaceAttributeType.HeadPose,
    FaceAttributeType.Glasses,
    FaceAttributeType.QualityForRecognition
};

// Note DetectionModel.Detection02 cannot be used with returnFaceAttributes.
var faces3 = await faceClient.Face.DetectWithUrlAsync(
    url: imageUrl,
    returnFaceId: false,
    returnFaceAttributes: requiredFaceAttributes,
    detectionModel: DetectionModel.Detection01,
    recognitionModel: RecognitionModel.Recognition04);

Get results from the service

Face landmark results

The following code demonstrates how you might retrieve the locations of the nose and pupils:

foreach (var face in faces2)
{
    var landmarks = face.FaceLandmarks;

    double noseX = landmarks.NoseTip.X;
    double noseY = landmarks.NoseTip.Y;

    double leftPupilX = landmarks.PupilLeft.X;
    double leftPupilY = landmarks.PupilLeft.Y;

    double rightPupilX = landmarks.PupilRight.X;
    double rightPupilY = landmarks.PupilRight.Y;
}
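One simple use of these coordinates is to measure the distance between the pupils, which follows from the Pythagorean theorem. This standalone sketch uses stand-in values in place of the landmark coordinates retrieved above:

```csharp
using System;

// Stand-in coordinates for landmarks.PupilLeft and landmarks.PupilRight.
double leftPupilX = 100, leftPupilY = 200;
double rightPupilX = 160, rightPupilY = 120;

// Euclidean distance between the two pupils, in pixels.
double dx = rightPupilX - leftPupilX;
double dy = rightPupilY - leftPupilY;
double interpupillaryDistance = Math.Sqrt(dx * dx + dy * dy);
Console.WriteLine($"{interpupillaryDistance:F1}");
// prints "100.0"
```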

You also can use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:

    var upperLipBottom = landmarks.UpperLipBottom;
    var underLipTop = landmarks.UnderLipTop;

    var centerOfMouth = new Point(
        (upperLipBottom.X + underLipTop.X) / 2,
        (upperLipBottom.Y + underLipTop.Y) / 2);

    var eyeLeftInner = landmarks.EyeLeftInner;
    var eyeRightInner = landmarks.EyeRightInner;

    var centerOfTwoEyes = new Point(
        (eyeLeftInner.X + eyeRightInner.X) / 2,
        (eyeLeftInner.Y + eyeRightInner.Y) / 2);

    Vector faceDirection = new Vector(
        centerOfTwoEyes.X - centerOfMouth.X,
        centerOfTwoEyes.Y - centerOfMouth.Y);

When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.
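To turn that direction vector into a rotation angle for the image, you can use Math.Atan2. This is a minimal sketch; the vector components here are example values standing in for the faceDirection vector computed above, and the sign convention assumes image coordinates where Y grows downward:

```csharp
using System;

// Stand-in components for the faceDirection vector (mouth center to eye
// center). An upright face points toward negative Y in image coordinates.
double directionX = 10.0;
double directionY = -50.0;

// Tilt away from vertical, in degrees. Rotating the image by this angle
// (in the opposite direction) makes the face appear upright.
double rollDegrees = Math.Atan2(directionX, -directionY) * 180.0 / Math.PI;
Console.WriteLine($"{rollDegrees:F1}");
// prints "11.3"
```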

Face attribute results

The following code shows how you might retrieve the face attribute data that you requested in the original call.

foreach (var face in faces3)
{
    var attributes = face.FaceAttributes;
    var headPose = attributes.HeadPose;
    var glasses = attributes.Glasses;
    var qualityForRecognition = attributes.QualityForRecognition;
}

To learn more about each of the attributes, see the Face detection and attributes conceptual guide.

Next steps

In this guide, you learned how to use the various functionalities of face detection and analysis. Next, integrate these features into an app to add face data from users.