Quickstart: Use the Face service

Important

If you are using Microsoft products or services to process Biometric Data, you are responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the Biometric Data, all as appropriate and required under applicable Data Protection Requirements. "Biometric Data" will have the meaning set forth in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see Data and Privacy for Face.

Caution

Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Get started with facial recognition using the Face client library for .NET. The Azure AI Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or the current version of .NET Core.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>
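
The setx commands above persist the variables on Windows. On macOS or Linux, the shell equivalent is export; add the lines to your shell profile (such as ~/.bashrc or ~/.zshrc) to persist them:

export FACE_APIKEY=<your_key>
export FACE_ENDPOINT=<your_endpoint>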

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new C# application

    Using Visual Studio, create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Azure.AI.Vision.Face. Select the latest version, and then select Install.
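
    If you prefer the command line, a sketch of the equivalent steps with the .NET CLI (the project name FaceQuickstart is just an example) is:

    dotnet new console -n FaceQuickstart
    cd FaceQuickstart
    dotnet add package Azure.AI.Vision.Face --prerelease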

  2. Add the following code into the Program.cs file.

    Note

    If you haven't received access to the Face service using the intake form, some of these functions won't work.

    using Azure;
    using Azure.AI.Vision.Face;
    
    namespace FaceQuickstart
    {
        class Program
        {
            static readonly string LargePersonGroupId = Guid.NewGuid().ToString();
    
            // URL path for the images.
            const string ImageBaseUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            static readonly string SubscriptionKey = Environment.GetEnvironmentVariable("FACE_APIKEY") ?? "<apikey>";
            static readonly string Endpoint = Environment.GetEnvironmentVariable("FACE_ENDPOINT") ?? "<endpoint>";
    
            static void Main(string[] args)
            {
                // Recognition model 4 was released in February 2021.
                // It's recommended because it improves accuracy on faces
                // wearing masks compared with model 3, and improves overall
                // accuracy compared with models 1 and 2.
                FaceRecognitionModel RecognitionModel4 = FaceRecognitionModel.Recognition04;
    
                // Authenticate.
                FaceClient client = Authenticate(Endpoint, SubscriptionKey);
    
                // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
                IdentifyInLargePersonGroup(client, ImageBaseUrl, RecognitionModel4).Wait();
    
                Console.WriteLine("End of quickstart.");
            }
    
            /*
             *	AUTHENTICATE
             *	Uses subscription key and endpoint to create a client.
             */
            public static FaceClient Authenticate(string endpoint, string key)
            {
                return new FaceClient(new Uri(endpoint), new AzureKeyCredential(key));
            }
    
            // Detect faces from an image URL for recognition purposes. This is a helper method for other functions in this quickstart.
            // Parameter `returnFaceId` of `DetectAsync` must be `true` (its default) for recognition purposes.
            // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute,
            // which requires the recognition model to be recognition_03 or recognition_04.
            // Faces with insufficient quality for recognition are filtered out.
            // The `faceId` field in each returned `DetectedFace` is used in Verify and Identify.
            // It expires 24 hours after the detection call.
            private static async Task<List<FaceDetectionResult>> DetectFaceRecognize(FaceClient faceClient, string url, FaceRecognitionModel recognitionModel)
            {
                // Detect faces from image URL.
                var response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, recognitionModel, true, [FaceAttributeType.QualityForRecognition]);
                IReadOnlyList<FaceDetectionResult> detectedFaces = response.Value;
                List<FaceDetectionResult> sufficientQualityFaces = new List<FaceDetectionResult>();
                foreach (FaceDetectionResult detectedFace in detectedFaces)
                {
                    QualityForRecognition? faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
                    if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.Low))
                    {
                        sufficientQualityFaces.Add(detectedFace);
                    }
                }
                Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
    
                return sufficientQualityFaces;
            }
    
            /*
             * IDENTIFY FACES
             * To identify faces, you need to create and define a large person group.
             * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns 
             * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, 
             * which have a prediction confidence value.
             */
            public static async Task IdentifyInLargePersonGroup(FaceClient client, string url, FaceRecognitionModel recognitionModel)
            {
                Console.WriteLine("========IDENTIFY FACES========");
                Console.WriteLine();
    
                // Create a dictionary for all your images, grouping similar ones under the same key.
                Dictionary<string, string[]> personDictionary =
                    new Dictionary<string, string[]>
                        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
                          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
                          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } }
                        };
                // A group photo that includes some of the persons you seek to identify from your dictionary.
                string sourceImageFileName = "identification1.jpg";
    
                // Create a large person group.
                Console.WriteLine($"Create a person group ({LargePersonGroupId}).");
                LargePersonGroupClient largePersonGroupClient = new FaceAdministrationClient(new Uri(Endpoint), new AzureKeyCredential(SubscriptionKey)).GetLargePersonGroupClient(LargePersonGroupId);
                await largePersonGroupClient.CreateAsync(LargePersonGroupId, recognitionModel: recognitionModel);
                // The similar faces will be grouped into a single large person group person.
                foreach (string groupedFace in personDictionary.Keys)
                {
                    // Limit TPS
                    await Task.Delay(250);
                    var createPersonResponse = await largePersonGroupClient.CreatePersonAsync(groupedFace);
                    Guid personId = createPersonResponse.Value.PersonId;
                    Console.WriteLine($"Create a person group person '{groupedFace}'.");
    
                    // Add face to the large person group person.
                    foreach (string similarImage in personDictionary[groupedFace])
                    {
                        Console.WriteLine($"Check whether image is of sufficient quality for recognition");
                        var detectResponse = await client.DetectAsync(new Uri($"{url}{similarImage}"), FaceDetectionModel.Detection03, recognitionModel, false, [FaceAttributeType.QualityForRecognition]);
                        IReadOnlyList<FaceDetectionResult> facesInImage = detectResponse.Value;
                        bool sufficientQuality = true;
                        foreach (FaceDetectionResult face in facesInImage)
                        {
                            QualityForRecognition? faceQualityForRecognition = face.FaceAttributes.QualityForRecognition;
                            //  Only "high" quality images are recommended for person enrollment
                            if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High))
                            {
                                sufficientQuality = false;
                                break;
                            }
                        }
    
                        if (!sufficientQuality)
                        {
                            continue;
                        }
    
                        if (facesInImage.Count != 1)
                        {
                            continue;
                        }
    
                        // add face to the large person group
                        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                        await largePersonGroupClient.AddFaceAsync(personId, new Uri($"{url}{similarImage}"), detectionModel: FaceDetectionModel.Detection03);
                    }
                }
    
                // Start to train the large person group.
                Console.WriteLine();
                Console.WriteLine($"Train person group {LargePersonGroupId}.");
                Operation operation = await largePersonGroupClient.TrainAsync(WaitUntil.Completed);
    
                // Wait until the training is completed.
                await operation.WaitForCompletionResponseAsync();
                Console.WriteLine("Training status: succeeded.");
                Console.WriteLine();
    
                Console.WriteLine("Pausing for 60 seconds to avoid triggering rate limit on free account...");
                await Task.Delay(60000);
    
                List<Guid> sourceFaceIds = new List<Guid>();
                // Detect faces from source image url.
                List<FaceDetectionResult> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
    
                // Add detected faceId to sourceFaceIds.
                foreach (FaceDetectionResult detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
    
                // Identify the faces in a large person group.
                var identifyResponse = await client.IdentifyFromLargePersonGroupAsync(sourceFaceIds, LargePersonGroupId);
                IReadOnlyList<FaceIdentificationResult> identifyResults = identifyResponse.Value;
                foreach (FaceIdentificationResult identifyResult in identifyResults)
                {
                    if (identifyResult.Candidates.Count == 0)
                    {
                        Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId},");
                        continue;
                    }
    
                    FaceIdentificationCandidate candidate = identifyResult.Candidates.First();
                    var getPersonResponse = await largePersonGroupClient.GetPersonAsync(candidate.PersonId);
                    string personName = getPersonResponse.Value.Name;
                    Console.WriteLine($"Person '{personName}' is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId}," + $" confidence: {candidate.Confidence}.");
    
                    var verifyResponse = await client.VerifyFromLargePersonGroupAsync(identifyResult.FaceId, LargePersonGroupId, candidate.PersonId);
                    FaceVerificationResult verifyResult = verifyResponse.Value;
                    Console.WriteLine($"Verification result: is a match? {verifyResult.IsIdentical}. confidence: {verifyResult.Confidence}");
                }
                Console.WriteLine();
    
                // Delete large person group.
                Console.WriteLine("========DELETE PERSON GROUP========");
                Console.WriteLine();
                await largePersonGroupClient.DeleteAsync();
                Console.WriteLine($"Deleted the person group {LargePersonGroupId}.");
                Console.WriteLine();
            }
        }
    }
    
  3. Run the application

    Run the application by clicking the Debug button at the top of the IDE window.
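
    Alternatively, if you work from the command line, run the app from the project directory:

    dotnet run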

Output

========IDENTIFY FACES========

Create a person group (18d1c443-a01b-46a4-9191-121f74a831cd).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 18d1c443-a01b-46a4-9191-121f74a831cd.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for the face in: identification1.jpg - ad813534-9141-47b4-bfba-24919223966f, confidence: 0.96807.
Verification result: is a match? True. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 1a39420e-f517-4cee-a898-5d968dac1a7e, confidence: 0.96902.
Verification result: is a match? True. confidence: 0.96902
No person is identified for the face in: identification1.jpg - 889394b1-e30f-4147-9be1-302beb5573f3,
Person 'Family1-Son' is identified for the face in: identification1.jpg - 0557d87b-356c-48a8-988f-ce0ad2239aa5, confidence: 0.9281.
Verification result: is a match? True. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 18d1c443-a01b-46a4-9191-121f74a831cd.

End of quickstart.

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
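
For example, with the Azure CLI, deleting the resource group removes the Face resource along with everything else in it (<resource-group-name> is a placeholder for your group's name):

az group delete --name <resource-group-name>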

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Python. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check if you have pip installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Install the client library

    After installing Python, you can install the client library with:

    pip install --upgrade azure-ai-vision-face
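
    Optionally, install into a virtual environment to keep the quickstart's dependencies isolated; for example, on macOS or Linux:

    python -m venv .venv
    source .venv/bin/activate
    pip install --upgrade azure-ai-vision-face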
    
  2. Create a new Python application

    Create a new Python script, such as quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.

    Note

    If you haven't received access to the Face service using the intake form, some of these functions won't work.

    import os
    import time
    import uuid
    
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.vision.face import FaceAdministrationClient, FaceClient
    from azure.ai.vision.face.models import FaceAttributeTypeRecognition04, FaceDetectionModel, FaceRecognitionModel, QualityForRecognition
    
    
    # This key will serve all examples in this document.
    KEY = os.environ["FACE_APIKEY"]
    
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = os.environ["FACE_ENDPOINT"]
    
    # Used in the Large Person Group Operations and Delete Large Person Group examples.
    # LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
    LARGE_PERSON_GROUP_ID = str(uuid.uuid4())  # assign a random ID (or name it anything)
    
    # Create an authenticated FaceClient.
    with FaceAdministrationClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY)) as face_admin_client, \
         FaceClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY)) as face_client:
        '''
        Create the LargePersonGroup
        '''
        # Create empty Large Person Group. Large Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
        print("Person group:", LARGE_PERSON_GROUP_ID)
        face_admin_client.large_person_group.create(
            large_person_group_id=LARGE_PERSON_GROUP_ID,
            name=LARGE_PERSON_GROUP_ID,
            recognition_model=FaceRecognitionModel.RECOGNITION04,
        )
    
        # Define woman friend
        woman = face_admin_client.large_person_group.create_person(
            large_person_group_id=LARGE_PERSON_GROUP_ID,
            name="Woman",
        )
        # Define man friend
        man = face_admin_client.large_person_group.create_person(
            large_person_group_id=LARGE_PERSON_GROUP_ID,
            name="Man",
        )
        # Define child friend
        child = face_admin_client.large_person_group.create_person(
            large_person_group_id=LARGE_PERSON_GROUP_ID,
            name="Child",
        )
    
        '''
        Detect faces and register them to each person
        '''
        # Image URLs to register for each person, from the sample data repository.
        woman_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom1.jpg",
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom2.jpg",
        ]
        man_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg",
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad2.jpg",
        ]
        child_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son1.jpg",
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son2.jpg",
        ]
    
        # Add to woman person
        for image in woman_images:
            # Check if the image is of sufficient quality for recognition.
            sufficient_quality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
                recognition_model=FaceRecognitionModel.RECOGNITION04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION],
            )
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficient_quality = False
                    break
    
            if not sufficient_quality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            face_admin_client.large_person_group.add_face_from_url(
                large_person_group_id=LARGE_PERSON_GROUP_ID,
                person_id=woman.person_id,
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
            )
            print(f"face {face.face_id} added to person {woman.person_id}")
    
    
        # Add to man person
        for image in man_images:
            # Check if the image is of sufficient quality for recognition.
            sufficient_quality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
                recognition_model=FaceRecognitionModel.RECOGNITION04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION],
            )
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficient_quality = False
                    break
    
            if not sufficient_quality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            face_admin_client.large_person_group.add_face_from_url(
                large_person_group_id=LARGE_PERSON_GROUP_ID,
                person_id=man.person_id,
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
            )
            print(f"face {face.face_id} added to person {man.person_id}")
    
        # Add to child person
        for image in child_images:
            # Check if the image is of sufficient quality for recognition.
            sufficient_quality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
                recognition_model=FaceRecognitionModel.RECOGNITION04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION],
            )
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficient_quality = False
                    break
            if not sufficient_quality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            face_admin_client.large_person_group.add_face_from_url(
                large_person_group_id=LARGE_PERSON_GROUP_ID,
                person_id=child.person_id,
                url=image,
                detection_model=FaceDetectionModel.DETECTION03,
            )
            print(f"face {face.face_id} added to person {child.person_id}")
    
        '''
        Train LargePersonGroup
        '''
        # Train the large person group and set the polling interval to 5s
        print(f"Train the person group {LARGE_PERSON_GROUP_ID}")
        poller = face_admin_client.large_person_group.begin_train(
            large_person_group_id=LARGE_PERSON_GROUP_ID,
            polling_interval=5,
        )
    
        poller.wait()
        print(f"The person group {LARGE_PERSON_GROUP_ID} is trained successfully.")
    
        '''
        Identify a face against a defined LargePersonGroup
        '''
        # Group image for testing against
        test_image = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg"
    
        print("Pausing for 60 seconds to avoid triggering rate limit on free account...")
        time.sleep(60)
    
        # Detect faces
        face_ids = []
        # We use detection model 03 to get better performance, recognition model 04 to support quality for
        # recognition attribute.
        faces = face_client.detect_from_url(
            url=test_image,
            detection_model=FaceDetectionModel.DETECTION03,
            recognition_model=FaceRecognitionModel.RECOGNITION04,
            return_face_id=True,
            return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION],
        )
        for face in faces:
            # Only take the face if it is of sufficient quality.
            if face.face_attributes.quality_for_recognition != QualityForRecognition.LOW:
                face_ids.append(face.face_id)
    
        # Identify faces
        identify_results = face_client.identify_from_large_person_group(
            face_ids=face_ids,
            large_person_group_id=LARGE_PERSON_GROUP_ID,
        )
        print("Identifying faces in image")
        for identify_result in identify_results:
            if identify_result.candidates:
                print(f"Person is identified for face ID {identify_result.face_id} in image, with a confidence of "
                      f"{identify_result.candidates[0].confidence}.")  # Get topmost confidence score
    
                # Verify faces
                verify_result = face_client.verify_from_large_person_group(
                    face_id=identify_result.face_id,
                    large_person_group_id=LARGE_PERSON_GROUP_ID,
                    person_id=identify_result.candidates[0].person_id,
                )
                print(f"verification result: {verify_result.is_identical}. confidence: {verify_result.confidence}")
            else:
                print(f"No person identified for face ID {identify_result.face_id} in image.")
    
        print()
    
        # Delete the large person group
        face_admin_client.large_person_group.delete(LARGE_PERSON_GROUP_ID)
        print(f"The person group {LARGE_PERSON_GROUP_ID} is deleted.")
    
        print()
        print("End of quickstart.")
    
    
  3. Run your face recognition app from the application directory with the python command.

    python quickstart-file.py
    

    Tip

    The Face API runs on a set of pre-built models that are static by nature (the model's performance will not regress or improve as the service is run). The results that the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
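
    For example, here's a minimal sketch of re-enrolling on a newer model, reusing face_admin_client, the model enums, and the enrollment image URLs from the quickstart above. NEWER_MODEL is a placeholder: recognition_04 is the newest model shown in this quickstart, so substitute a newer FaceRecognitionModel value when one is available to you.

    NEW_GROUP_ID = str(uuid.uuid4())
    # Placeholder: substitute whichever newer recognition model version is available.
    NEWER_MODEL = FaceRecognitionModel.RECOGNITION04

    # Create a fresh group pinned to the newer recognition model.
    face_admin_client.large_person_group.create(
        large_person_group_id=NEW_GROUP_ID,
        name=NEW_GROUP_ID,
        recognition_model=NEWER_MODEL,
    )

    # Re-enroll each person with the same images (shown here for one person), then retrain.
    woman = face_admin_client.large_person_group.create_person(
        large_person_group_id=NEW_GROUP_ID,
        name="Woman",
    )
    for image in woman_images:
        face_admin_client.large_person_group.add_face_from_url(
            large_person_group_id=NEW_GROUP_ID,
            person_id=woman.person_id,
            url=image,
            detection_model=FaceDetectionModel.DETECTION03,
        )

    face_admin_client.large_person_group.begin_train(
        large_person_group_id=NEW_GROUP_ID,
        polling_interval=5,
    ).wait()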

Output

Person group: ad12b2db-d892-48ec-837a-0e7168c18224
face 335a2cb1-5211-4c29-9c45-776dd014b2af added to person 9ee65510-81a5-47e5-9e50-66727f719465
face df57eb50-4a13-4f93-b804-cd108327ad5a added to person 9ee65510-81a5-47e5-9e50-66727f719465
face d8b7b8b8-3ca6-4309-b76e-eeed84f7738a added to person 00651036-4236-4004-88b9-11466c251548
face dffbb141-f40b-4392-8785-b6c434fa534e added to person 00651036-4236-4004-88b9-11466c251548
face 9cdac36e-5455-447b-a68d-eb1f5e2ec27d added to person 23614724-b132-407a-aaa0-67003987ce93
face d8208412-92b7-4b8d-a2f8-3926c839c87e added to person 23614724-b132-407a-aaa0-67003987ce93
Train the person group ad12b2db-d892-48ec-837a-0e7168c18224
The person group ad12b2db-d892-48ec-837a-0e7168c18224 is trained successfully.
Pausing for 60 seconds to avoid triggering rate limit on free account...
Identifying faces in image
Person is identified for face ID bc52405a-5d83-4500-9218-557468ccdf99 in image, with a confidence of 0.96726.
verification result: True. confidence: 0.96726
Person is identified for face ID dfcc3fc8-6252-4f3a-8205-71466f39d1a7 in image, with a confidence of 0.96925.
verification result: True. confidence: 0.96925
No person identified for face ID 401c581b-a178-45ed-8205-7692f6eede88 in image.
Person is identified for face ID 8809d9c7-e362-4727-8c95-e1e44f5c2e8a in image, with a confidence of 0.92898.
verification result: True. confidence: 0.92898

The person group ad12b2db-d892-48ec-837a-0e7168c18224 is deleted.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Java. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (Maven) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The current version of the Java Development Kit (JDK)
  • Apache Maven installed. On Linux, install from the distribution repositories if available.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Install the client library

    Open a console window and create a new folder for your quickstart application. Copy the following content to a new file. Save the file as pom.xml in your project directory:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>my-application-name</artifactId>
      <version>1.0.0</version>
      <dependencies>
        <!-- https://mvnrepository.com/artifact/com.azure/azure-ai-vision-face -->
        <dependency>
          <groupId>com.azure</groupId>
          <artifactId>azure-ai-vision-face</artifactId>
          <version>1.0.0-beta.2</version>
        </dependency>
      </dependencies>
    </project>
    

    Install the SDK and dependencies by running the following in the project directory:

    mvn clean dependency:copy-dependencies
    
  2. Create a new Java application

    Create a file named Quickstart.java, open it in a text editor, and paste in the following code:

    Note

    If you haven't received access to the Face service using the intake form, some of these functions won't work.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.UUID;
    
    import com.azure.ai.vision.face.FaceClient;
    import com.azure.ai.vision.face.FaceClientBuilder;
    import com.azure.ai.vision.face.administration.FaceAdministrationClient;
    import com.azure.ai.vision.face.administration.FaceAdministrationClientBuilder;
    import com.azure.ai.vision.face.administration.LargePersonGroupClient;
    import com.azure.ai.vision.face.models.DetectOptions;
    import com.azure.ai.vision.face.models.FaceAttributeType;
    import com.azure.ai.vision.face.models.FaceDetectionModel;
    import com.azure.ai.vision.face.models.FaceDetectionResult;
    import com.azure.ai.vision.face.models.FaceIdentificationCandidate;
    import com.azure.ai.vision.face.models.FaceIdentificationResult;
    import com.azure.ai.vision.face.models.FaceRecognitionModel;
    import com.azure.ai.vision.face.models.FaceTrainingResult;
    import com.azure.ai.vision.face.models.FaceVerificationResult;
    import com.azure.ai.vision.face.models.QualityForRecognition;
    import com.azure.core.credential.KeyCredential;
    import com.azure.core.util.polling.SyncPoller;
    
    public class Quickstart {
        // LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
        private static final String LARGE_PERSON_GROUP_ID = UUID.randomUUID().toString();
    
        // URL path for the images.
        private static final String IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
        // From your Face subscription in the Azure portal, get your subscription key and endpoint.
        private static final String SUBSCRIPTION_KEY = System.getenv("FACE_APIKEY");
        private static final String ENDPOINT = System.getenv("FACE_ENDPOINT");
    
        public static void main(String[] args) throws Exception {
        // Recognition model 4 was released in February 2021.
        // It's recommended because it improves accuracy on faces
        // wearing masks compared with model 3, and improves overall
        // accuracy compared with models 1 and 2.
            FaceRecognitionModel RECOGNITION_MODEL4 = FaceRecognitionModel.RECOGNITION_04;
    
            // Authenticate.
            FaceClient client = authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
            // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
            identifyInLargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4);
    
            System.out.println("End of quickstart.");
        }
    
        /*
         *	AUTHENTICATE
     *	Uses subscription key and endpoint to create a client.
         */
        public static FaceClient authenticate(String endpoint, String key) {
            return new FaceClientBuilder().endpoint(endpoint).credential(new KeyCredential(key)).buildClient();
        }
    
    
        // Detect faces from an image URL for recognition purposes. This is a helper method for other functions in this quickstart.
        // Parameter `returnFaceId` of `DetectOptions` must be `true` (its default) for recognition purposes.
        // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute,
        // which requires the recognition model to be recognition_03 or recognition_04.
        // Faces with insufficient quality for recognition are filtered out.
        // The `faceId` field in each returned `DetectedFace` is used in Verify and Identify.
        // It expires 24 hours after the detection call.
        private static List<FaceDetectionResult> detectFaceRecognize(FaceClient faceClient, String url, FaceRecognitionModel recognitionModel) {
            // Detect faces from image URL.
            DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, true).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
            List<FaceDetectionResult> detectedFaces = faceClient.detect(url, options);
            List<FaceDetectionResult> sufficientQualityFaces = detectedFaces.stream().filter(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.LOW).collect(Collectors.toList());
            System.out.println(detectedFaces.size() + " face(s) with " + sufficientQualityFaces.size() + " having sufficient quality for recognition.");
    
            return sufficientQualityFaces;
        }
    
        /*
         * IDENTIFY FACES
         * To identify faces, you need to create and define a large person group.
         * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns
         * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects,
         * which have a prediction confidence value.
         */
        public static void identifyInLargePersonGroup(FaceClient client, String url, FaceRecognitionModel recognitionModel) throws Exception {
            System.out.println("========IDENTIFY FACES========");
            System.out.println();
    
            // Create a dictionary for all your images, grouping similar ones under the same key.
            Map<String, String[]> personDictionary = new LinkedHashMap<String, String[]>();
            personDictionary.put("Family1-Dad", new String[]{"Family1-Dad1.jpg", "Family1-Dad2.jpg"});
            personDictionary.put("Family1-Mom", new String[]{"Family1-Mom1.jpg", "Family1-Mom2.jpg"});
            personDictionary.put("Family1-Son", new String[]{"Family1-Son1.jpg", "Family1-Son2.jpg"});
            // A group photo that includes some of the persons you seek to identify from your dictionary.
            String sourceImageFileName = "identification1.jpg";
    
            // Create a large person group.
            System.out.println("Create a person group (" + LARGE_PERSON_GROUP_ID + ").");
            FaceAdministrationClient faceAdministrationClient = new FaceAdministrationClientBuilder().endpoint(ENDPOINT).credential(new KeyCredential(SUBSCRIPTION_KEY)).buildClient();
            LargePersonGroupClient largePersonGroupClient = faceAdministrationClient.getLargePersonGroupClient(LARGE_PERSON_GROUP_ID);
            largePersonGroupClient.create(LARGE_PERSON_GROUP_ID, null, recognitionModel);
            // The similar faces will be grouped into a single large person group person.
            for (String groupedFace : personDictionary.keySet()) {
                // Limit TPS
                Thread.sleep(250);
                String personId = largePersonGroupClient.createPerson(groupedFace).getPersonId();
                System.out.println("Create a person group person '" + groupedFace + "'.");
    
                // Add face to the large person group person.
                for (String similarImage : personDictionary.get(groupedFace)) {
                    System.out.println("Check whether image is of sufficient quality for recognition");
                    DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, false).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
                    List<FaceDetectionResult> facesInImage = client.detect(url + similarImage, options);
                    if (facesInImage.stream().anyMatch(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.HIGH)) {
                        continue;
                    }
    
                    if (facesInImage.size() != 1) {
                        continue;
                    }
    
                    // add face to the large person group
                    System.out.println("Add face to the person group person(" + groupedFace + ") from image `" + similarImage + "`");
                    largePersonGroupClient.addFace(personId, url + similarImage, null, FaceDetectionModel.DETECTION_03, null);
                }
            }
    
            // Start to train the large person group.
            System.out.println();
            System.out.println("Train person group " + LARGE_PERSON_GROUP_ID + ".");
            SyncPoller<FaceTrainingResult, Void> poller = largePersonGroupClient.beginTrain();
    
            // Wait until the training is completed.
            poller.waitForCompletion();
            System.out.println("Training status: succeeded.");
            System.out.println();
    
            System.out.println("Pausing for 60 seconds to avoid triggering rate limit on free account...");
            Thread.sleep(60000);
    
            // Detect faces from source image url.
            List<FaceDetectionResult> detectedFaces = detectFaceRecognize(client, url + sourceImageFileName, recognitionModel);
            // Add detected faceId to sourceFaceIds.
            List<String> sourceFaceIds = detectedFaces.stream().map(FaceDetectionResult::getFaceId).collect(Collectors.toList());
    
            // Identify the faces in a large person group.
            List<FaceIdentificationResult> identifyResults = client.identifyFromLargePersonGroup(sourceFaceIds, LARGE_PERSON_GROUP_ID);
    
            for (FaceIdentificationResult identifyResult : identifyResults) {
                if (identifyResult.getCandidates().isEmpty()) {
                    System.out.println("No person is identified for the face in: " + sourceImageFileName + " - " + identifyResult.getFaceId() + ".");
                    continue;
                }
    
                FaceIdentificationCandidate candidate = identifyResult.getCandidates().stream().findFirst().orElseThrow();
                String personName = largePersonGroupClient.getPerson(candidate.getPersonId()).getName();
                System.out.println("Person '" + personName + "' is identified for the face in: " + sourceImageFileName + " - " + identifyResult.getFaceId() + ", confidence: " + candidate.getConfidence() + ".");
    
                FaceVerificationResult verifyResult = client.verifyFromLargePersonGroup(identifyResult.getFaceId(), LARGE_PERSON_GROUP_ID, candidate.getPersonId());
                System.out.println("Verification result: is a match? " + verifyResult.isIdentical() + ". confidence: " + verifyResult.getConfidence());
            }
            System.out.println();
    
            // Delete large person group.
            System.out.println("========DELETE PERSON GROUP========");
            System.out.println();
            largePersonGroupClient.delete();
            System.out.println("Deleted the person group " + LARGE_PERSON_GROUP_ID + ".");
            System.out.println();
        }
    }
    
  3. Run your face recognition app from the application directory with the javac and java commands.

    javac -cp target\dependency\* Quickstart.java
    java -cp .;target\dependency\* Quickstart
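
    The commands above use Windows path and classpath conventions. On Linux or macOS, use forward slashes and : as the classpath separator, quoting the wildcard so the shell doesn't expand it:

    javac -cp 'target/dependency/*' Quickstart.java
    java -cp '.:target/dependency/*' Quickstart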
    

Output

========IDENTIFY FACES========

Create a person group (3761e61a-16b2-4503-ad29-ed34c58ba676).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 3761e61a-16b2-4503-ad29-ed34c58ba676.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition.
Person 'Family1-Dad' is identified for the face in: identification1.jpg - d7995b34-1b72-47fe-82b6-e9877ed2578d, confidence: 0.96807.
Verification result: is a match? true. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 844da0ed-4890-4bbf-a531-e638797f96fc, confidence: 0.96902.
Verification result: is a match? true. confidence: 0.96902
No person is identified for the face in: identification1.jpg - c543159a-57f3-4872-83ce-2d4a733d71c9.
Person 'Family1-Son' is identified for the face in: identification1.jpg - 414fac6c-7381-4dba-9c8b-fd26d52e879b, confidence: 0.9281.
Verification result: is a match? true. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 3761e61a-16b2-4503-ad29-ed34c58ba676.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Java to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for JavaScript. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The latest version of Node.js
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a Node.js application with a package.json file.

    npm init
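
    To accept the default answers to every prompt, run npm init -y instead.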
    
  2. Install the @azure-rest/ai-vision-face npm package:

    npm install @azure-rest/ai-vision-face
    

    Your app's package.json file is updated with the dependencies.

  3. Create a file named index.js, open it in a text editor, and paste in the following code:

    Note

    If you haven't received access to the Face service using the intake form, some of these functions won't work.

    const { randomUUID } = require("crypto");
    
    const { AzureKeyCredential } = require("@azure/core-auth");
    
    const createFaceClient = require("@azure-rest/ai-vision-face").default,
      { getLongRunningPoller } = require("@azure-rest/ai-vision-face");
    
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
    
    const main = async () => {
      const endpoint = process.env["FACE_ENDPOINT"] ?? "<endpoint>";
      const apikey = process.env["FACE_APIKEY"] ?? "<apikey>";
      const credential = new AzureKeyCredential(apikey);
      const client = createFaceClient(endpoint, credential);
    
      const imageBaseUrl =
        "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
      const largePersonGroupId = randomUUID();
    
      console.log("========IDENTIFY FACES========");
      console.log();
    
      // Create a dictionary for all your images, grouping similar ones under the same key.
      const personDictionary = {
        "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
        "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
        "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
      };
    
      // A group photo that includes some of the persons you seek to identify from your dictionary.
      const sourceImageFileName = "identification1.jpg";
    
      // Create a large person group.
      console.log(`Creating a person group with ID: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).put({
        body: {
          name: largePersonGroupId,
          recognitionModel: "recognition_04",
        },
      });
    
      // The similar faces will be grouped into a single large person group person.
      console.log("Adding faces to person group...");
      await Promise.all(
        Object.keys(personDictionary).map(async (name) => {
          console.log(`Create a persongroup person: ${name}`);
          const createLargePersonGroupPersonResponse = await client
            .path("/largepersongroups/{largePersonGroupId}/persons", largePersonGroupId)
            .post({
              body: { name },
            });
    
          const { personId } = createLargePersonGroupPersonResponse.body;
    
          await Promise.all(
            personDictionary[name].map(async (similarImage) => {
              // Check if the image is of sufficient quality for recognition.
              const detectResponse = await client.path("/detect").post({
                contentType: "application/json",
                queryParameters: {
                  detectionModel: "detection_03",
                  recognitionModel: "recognition_04",
                  returnFaceId: false,
                  returnFaceAttributes: ["qualityForRecognition"],
                },
                body: { url: `${imageBaseUrl}${similarImage}` },
              });
    
              const sufficientQuality = detectResponse.body.every(
                (face) => face.faceAttributes?.qualityForRecognition === "high",
              );
              if (!sufficientQuality) {
                return;
              }
    
              // Only enroll images that contain exactly one detected face.
              if (detectResponse.body.length !== 1) {
                return;
              }
    
              // Quality is sufficient; add the face to the group.
              console.log(
                `Add face to the person group person: (${name}) from image: (${similarImage})`,
              );
              await client
                .path(
                  "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces",
                  largePersonGroupId,
                  personId,
                )
                .post({
                  queryParameters: { detectionModel: "detection_03" },
                  body: { url: `${imageBaseUrl}${similarImage}` },
                });
            }),
          );
        }),
      );
      console.log("Done adding faces to person group.");
    
      // Start to train the large person group.
      console.log();
      console.log(`Training person group: ${largePersonGroupId}`);
      const trainResponse = await client
        .path("/largepersongroups/{largePersonGroupId}/train", largePersonGroupId)
        .post();
      const poller = await getLongRunningPoller(client, trainResponse);
      await poller.pollUntilDone();
      console.log(`Training status: ${poller.getOperationState().status}`);
      if (poller.getOperationState().status !== "succeeded") {
        return;
      }
    
      console.log("Pausing for 60 seconds to avoid triggering rate limit on free account...");
      await sleep(60000);
    
      // Detect faces from source image url and only take those with sufficient quality for recognition.
      const detectResponse = await client.path("/detect").post({
        contentType: "application/json",
        queryParameters: {
          detectionModel: "detection_03",
          recognitionModel: "recognition_04",
          returnFaceId: true,
          returnFaceAttributes: ["qualityForRecognition"],
        },
        body: { url: `${imageBaseUrl}${sourceImageFileName}` },
      });
      // Keep only faces whose quality is sufficient (medium or high) for identification.
      const faceIds = detectResponse.body
        .filter((face) => face.faceAttributes?.qualityForRecognition !== "low")
        .map((face) => face.faceId);
    
      // Identify the faces in a large person group.
      const identifyResponse = await client.path("/identify").post({
        body: { faceIds, largePersonGroupId },
      });
      await Promise.all(
        identifyResponse.body.map(async (result) => {
          // result.candidates is empty when no enrolled person matches; reading
          // candidates[0] then throws, and the catch below logs that the face
          // wasn't identified.
          try {
            const getLargePersonGroupPersonResponse = await client
              .path(
                "/largepersongroups/{largePersonGroupId}/persons/{personId}",
                largePersonGroupId,
                result.candidates[0].personId,
              )
              .get();
            const person = getLargePersonGroupPersonResponse.body;
            console.log(
              `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${result.candidates[0].confidence}`,
            );
    
            // Verification:
            const verifyResponse = await client.path("/verify").post({
              body: {
                faceId: result.faceId,
                largePersonGroupId,
                personId: person.personId,
              },
            });
            console.log(
              `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`,
            );
          } catch (error) {
            console.log(`No persons identified for face with ID ${result.faceId}`);
          }
        }),
      );
      console.log();
    
      // Delete large person group.
      console.log(`Deleting person group: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).delete();
      console.log();
    
      console.log("Done.");
    };
    
    main().catch(console.error);
    
  4. Run the application with the node command on your quickstart file.

    node index.js
    

Output

========IDENTIFY FACES========

Creating a person group with ID: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Adding faces to person group...
Create a persongroup person: Family1-Dad
Create a persongroup person: Family1-Mom
Create a persongroup person: Family1-Son
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad1.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom1.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son1.jpg)
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad2.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom2.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son2.jpg)
Done adding faces to person group.

Training person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Training status: succeeded
Pausing for 60 seconds to avoid triggering rate limit on free account...
No persons identified for face with ID 56380623-8bf0-414a-b9d9-c2373386b7be
Person: Family1-Dad is identified for face in: identification1.jpg with ID: c45052eb-a910-4fd3-b1c3-f91ccccc316a. Confidence: 0.96807
Person: Family1-Son is identified for face in: identification1.jpg with ID: 8dce9b50-513f-4fe2-9e19-352acfd622b3. Confidence: 0.9281
Person: Family1-Mom is identified for face in: identification1.jpg with ID: 75868da3-66f6-4b5f-a172-0b619f4d74c1. Confidence: 0.96902
Verification result between face c45052eb-a910-4fd3-b1c3-f91ccccc316a and person 35a58d14-fd58-4146-9669-82ed664da357: true with confidence: 0.96807
Verification result between face 8dce9b50-513f-4fe2-9e19-352acfd622b3 and person 2d4d196c-5349-431c-bf0c-f1d7aaa180ba: true with confidence: 0.9281
Verification result between face 75868da3-66f6-4b5f-a172-0b619f4d74c1 and person 35d5de9e-5f92-4552-8907-0d0aac889c3e: true with confidence: 0.96902

Deleting person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f

Done.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for JavaScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0+, or a similar command-line application.
  • cURL installed.

Identify and verify faces

Note

If you haven't received access to the Face service using the intake form, some of these functions won't work.

  1. First, call the Detect API on the source face. This is the face that we'll try to identify from the larger group. Copy the following command to a text editor, insert your own key and endpoint, and then copy it into a shell window and run it.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg""}"
    

    Save the returned face ID string to a temporary location. You'll use it again at the end.
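
    The response is a JSON array with one object for each detected face. A trimmed, illustrative example (your faceId and rectangle values will differ):

    [
        {
            "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
            "faceRectangle": {
                "top": 131,
                "left": 177,
                "width": 162,
                "height": 162
            }
        }
    ]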

  2. Next you'll need to create a LargePersonGroup and give it an arbitrary ID that matches the regex pattern ^[a-z0-9-_]+$ (for example, my-family-group). This object will store the aggregated face data of several persons. Run the following command, inserting your own key and endpoint and replacing {largePersonGroupId} with your chosen ID. Optionally, change the group's name and metadata in the request body.

    curl.exe -v -X PUT "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""large-person-group-name"",
        ""userData"": ""User-provided data attached to the large person group."",
        ""recognitionModel"": ""recognition_04""
    }"
    

    Save the specified ID of the created group to a temporary location.

  3. Next, you'll create Person objects that belong to the group. Run the following command, inserting your own key and the ID of the LargePersonGroup from the previous step. This command creates a Person named "Family1-Dad".

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""Family1-Dad"",
        ""userData"": ""User-provided data attached to the person.""
    }"
    

    After you run this command, run it again with different input data to create more Person objects: "Family1-Mom", "Family1-Son", "Family1-Daughter", "Family2-Lady", and "Family2-Man".

    Save the IDs of each Person created; it's important to keep track of which person name has which ID.
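
    Each successful call returns the new person's ID in a small JSON body; an illustrative example:

    {
        "personId": "25985303-c537-4467-b41d-bdb45cd95ca1"
    }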

  4. Next you'll need to detect new faces and associate them with the Person objects that exist. The following command detects a face from the image Family1-Dad1.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the "Family1-Dad" Person object. The image name corresponds to the name of the created Person. Also enter the LargePersonGroup ID and your key in the appropriate fields.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg""}"
    

    Then, run the above command again with a different source image and target Person. The images available are: Family1-Dad1.jpg, Family1-Dad2.jpg, Family1-Mom1.jpg, Family1-Mom2.jpg, Family1-Son1.jpg, Family1-Son2.jpg, Family1-Daughter1.jpg, Family1-Daughter2.jpg, Family2-Lady1.jpg, Family2-Lady2.jpg, Family2-Man1.jpg, and Family2-Man2.jpg. Be sure that the Person whose ID you specify in the API call matches the name of the image file in the request body.

    At the end of this step, you should have multiple Person objects that each have one or more corresponding faces, detected directly from the provided images.

  5. Next, train the LargePersonGroup with the current face data. The training operation teaches the model to associate facial features, sometimes aggregated from multiple source images, with each individual person. Insert the LargePersonGroup ID and your key before running the command.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data ""
    
  6. Check whether the training status is succeeded. If it isn't, wait a while and query the status again.

    curl.exe -v "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/training" -H "Ocp-Apim-Subscription-Key: {subscription key}"
    
  7. Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID"",
        ""faceIds"": [
            ""INSERT_SOURCE_FACE_ID""
        ],
        ""maxNumOfCandidatesReturned"": 1,
        ""confidenceThreshold"": 0.5
    }"
    

    The response should give you a Person ID indicating the person identified with the source face. It should be the ID that corresponds to the "Family1-Dad" person, because the source face is of that person.
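
    The shape of that response, trimmed and with illustrative IDs, looks like the following:

    [
        {
            "faceId": "INSERT_SOURCE_FACE_ID",
            "candidates": [
                {
                    "personId": "25985303-c537-4467-b41d-bdb45cd95ca1",
                    "confidence": 0.92
                }
            ]
        }
    ]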

  8. To do face verification, you'll use the Person ID returned in the previous step, the LargePersonGroup ID, and also the source face ID. Insert these values into the fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/verify" `
    -H "Content-Type: application/json" `
    -H "Ocp-Apim-Subscription-Key: {subscription key}" `
    --data-ascii "{
        ""faceId"": ""INSERT_SOURCE_FACE_ID"",
        ""personId"": ""INSERT_PERSON_ID"",
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID""
    }"
    

    The response should give you a boolean verification result along with a confidence value.
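
    An illustrative response body:

    {
        "isIdentical": true,
        "confidence": 0.92
    }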

Clean up resources

To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.

curl.exe -v -X DELETE "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.