Specify a face recognition model

Caution

Face service access is limited based on eligibility and usage criteria to support our Responsible AI principles. Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

This guide shows you how to specify a face recognition model for face detection, identification, and similarity search with the Azure AI Face service.

The Face service uses machine learning models to perform operations on visible human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face recognition model they'd like to use, choosing the model that best fits their use case.

Model compatibility

The Azure AI Face service has four recognition models available. The recognition_01 (published 2017), recognition_02 (published 2019), and recognition_03 (published 2020) models continue to be supported to ensure backward compatibility for customers using FaceLists or PersonGroups created with them. A FaceList or PersonGroup always uses the recognition model it was created with, and new faces are associated with that model when they're added. This can't be changed after creation, so you must use the matching recognition model with each FaceList or PersonGroup.

You can move to later recognition models at your own convenience; however, you'll need to create new FaceLists and PersonGroups with the recognition model of your choice.

The recognition_04 model (published 2021) is the most accurate model currently available. If you're a new customer, we recommend using this model. Recognition_04 provides improved accuracy for both similarity comparisons and person-matching comparisons, and it improves recognition for enrolled users wearing face covers (surgical masks, N95 masks, cloth masks). You can build safe and seamless user experiences that use the latest detection_03 model to detect whether an enrolled user is wearing a face cover, and then use the latest recognition_04 model to recognize their identity. Each model operates independently of the others, and a confidence threshold set for one model isn't comparable across the other recognition models.

Read on to learn how to specify a selected model in different Face operations while avoiding model conflicts. If you're an advanced user and would like to determine whether you should switch to the latest model, skip to the Evaluate different models section. You can evaluate the new model and compare results using your current data set.

Prerequisites

You should be familiar with the concepts of AI face detection and identification. If you aren't, see these guides first:

Detect faces with specified model

Face detection identifies the visual landmarks of human faces and finds their bounding-box locations. It also extracts the face's features and stores them temporarily for up to 24 hours for use in identification. All of this information forms the representation of one face.

The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.

When using the Detect API, assign the model version with the recognitionModel parameter. The available values are:

  • recognition_01
  • recognition_02
  • recognition_03
  • recognition_04

Optionally, you can specify the returnRecognitionModel parameter (default false) to indicate whether recognitionModel should be returned in the response. A request URL for the Detect REST API looks like this:

https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}
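To make the template concrete, here's a minimal Python sketch that only assembles the query string; it doesn't call the service, and the region and parameter values are placeholders you'd replace with your own:

```python
from urllib.parse import urlencode

# Placeholder endpoint; use your own resource's region in practice.
endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

# Query parameters from the URL template above, filled with example values.
params = {
    "detectionModel": "detection_03",
    "recognitionModel": "recognition_04",
    "returnFaceId": "true",
    "returnRecognitionModel": "true",
}

request_url = f"{endpoint}?{urlencode(params)}"
print(request_url)
```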

If you're using the client library, you can assign the value for recognitionModel by passing a string representing the version. If you leave it unassigned, a default model version of recognition_01 will be used. See the following code example for the .NET client library.

// Detect faces in a remote image, requesting the detection_03 and
// recognition_04 models and asking the service to echo the model back.
string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true, returnFaceLandmarks: true, returnRecognitionModel: true);
var faces = response.Value;

Note

The returnFaceId parameter must be set to true in order to enable the face recognition scenarios in later steps.

Identify faces with the specified model

The Face service can extract face data from an image and associate it with a Person object (through the Add Person Group Person Face API call, for example), and multiple Person objects can be stored together in a PersonGroup. Then, a new face can be compared against a PersonGroup (with the Identify From Person Group call), and the matching person within that group can be identified.

A PersonGroup should have one unique recognition model for all of its Persons. You can specify the model with the recognitionModel parameter when you create the group (Create Person Group or Create Large Person Group); if you don't, the original recognition_01 model is used. A group always uses the recognition model it was created with, and new faces are associated with that model when they're added to it. This can't be changed after the group's creation. To see which model a PersonGroup is configured with, use the Get Person Group API with the returnRecognitionModel parameter set to true.

See the following .NET code example.

// Create an empty PersonGroup with "recognition_04" model
string personGroupId = "mypersongroupid";
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object>
{
    ["name"] = "My Person Group Name",
    ["recognitionModel"] = "recognition_04"
}))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
}

In this code, a PersonGroup with ID mypersongroupid is created, and it's set up to use the recognition_04 model to extract face features.

Correspondingly, you need to specify which model to use when detecting faces to compare against this PersonGroup (through the Detect API). The model you use should always be consistent with the PersonGroup's configuration; otherwise, the operation will fail due to incompatible models.

There is no change in the Identify From Person Group API; you only need to specify the model version in detection.
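For reference, the Identify From Person Group request body pairs the detected face IDs with the target group. This Python sketch only constructs the JSON payload; the IDs and threshold are placeholder values, and the face IDs must come from a Detect call that used the same recognition model as the PersonGroup:

```python
import json

# Placeholder values; faceIds come from a prior Detect call made with the
# same recognition model as the PersonGroup (recognition_04 here).
payload = {
    "personGroupId": "mypersongroupid",
    "faceIds": ["c5c24a82-6845-4031-9d5d-978df9175426"],
    "maxNumOfCandidatesReturned": 1,
    "confidenceThreshold": 0.5,
}

body = json.dumps(payload)
print(body)
```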

Find similar faces with the specified model

You can also specify a recognition model for similarity search. Assign the model version with the recognitionModel parameter when you create the FaceList with the Create Face List or Create Large Face List API; if you don't, the recognition_01 model is used by default. A FaceList always uses the recognition model it was created with, and new faces are associated with that model when they're added to the list; you can't change this after creation. To see which model a FaceList is configured with, use the Get Face List API with the returnRecognitionModel parameter set to true.

See the following .NET code example.

using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object>
{
    ["name"] = "My face collection",
    ["recognitionModel"] = "recognition_04"
}))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content);
}

This code creates a FaceList called My face collection, using the recognition_04 model for feature extraction. When you search this FaceList for faces similar to a newly detected face, that face must have been detected (Detect) with the recognition_04 model. As in the previous section, the models need to be consistent.

There is no change in the Find Similar API; you only need to specify the model version in detection.
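For reference, the Find Similar request body names the query face and the list to search. This Python sketch only constructs the JSON payload; the IDs are placeholders, and the faceId must come from a Detect call that used the same recognition model as the FaceList:

```python
import json

# Placeholder IDs; the faceId must come from a Detect call that used the
# same recognition model as the FaceList (recognition_04 here).
payload = {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceListId": "myfacelistid",
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchPerson",
}

body = json.dumps(payload)
print(body)
```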

Verify faces with the specified model

The Verify Face To Face API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
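For reference, the Verify Face To Face request body is just the two face IDs. This Python sketch only constructs the JSON payload; the IDs are placeholders, and both must come from Detect calls made with the same recognition model:

```python
import json

# Placeholder face IDs; both faces must have been detected with the same
# recognition model for the comparison to be valid.
payload = {
    "faceId1": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceId2": "3bfd2e84-9f04-4a36-8b6b-6b54f3e5a9f1",
}

body = json.dumps(payload)
print(body)
```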

Evaluate different models

If you'd like to compare the performances of different recognition models on your own data, you'll need to:

  1. Create four PersonGroups using recognition_01, recognition_02, recognition_03, and recognition_04 respectively.
  2. Use your image data to detect faces and register them to Persons within these four PersonGroups.
  3. Train your PersonGroups using the Train Person Group API.
  4. Test with Identify From Person Group on all four PersonGroups and compare the results.
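The four parallel groups in step 1 can be generated mechanically. This Python sketch only builds per-model group IDs and Create Person Group payloads (the ID naming scheme and names are illustrative, not required by the service):

```python
import json

models = ["recognition_01", "recognition_02", "recognition_03", "recognition_04"]

# One PersonGroup per model; the "eval-group-" ID scheme is just an example.
groups = {
    f"eval-group-{m}": json.dumps({
        "name": f"Evaluation group ({m})",
        "recognitionModel": m,
    })
    for m in models
}

for group_id, body in groups.items():
    print(group_id, body)
```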

If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold tuned for one model isn't meant to be shared with another and won't necessarily produce the same results.
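As a simple illustration of keeping thresholds per model (the threshold values below are arbitrary placeholders, not recommendations; tune each on your own data):

```python
# Hypothetical per-model thresholds; each must be tuned independently.
thresholds = {
    "recognition_01": 0.60,
    "recognition_04": 0.55,
}

def is_identified(model: str, confidence: float) -> bool:
    """Accept a candidate only if it clears the threshold for that model."""
    return confidence >= thresholds[model]

# The same confidence can pass one model's threshold and fail another's.
print(is_identified("recognition_04", 0.58))
print(is_identified("recognition_01", 0.58))
```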

Next steps

In this article, you learned how to specify the recognition model to use with different Face service APIs. Next, follow a quickstart to get started with face detection.