Hi, we value your feedback. The Face API currently doesn't support this scenario (creating a person from an already-detected faceId). You'd need to add persons to the PersonDirectory and then add face images to each of them. However, this is a good optimization use case, and I'll forward your feedback to the product group. Sorry for any inconvenience.
Azure FaceClient / PersonDirectory API usage
Hi there,
I've just upgraded my code to use the new PersonDirectory API, via the Microsoft.Azure.CognitiveServices.Vision.Face v2.8.0-preview-1 .NET package. I have a question about the use of the API.
What I want to do in my code is the following:
- Post an image to the 'Detect' API, to have it detect the faces in the image.
- Get the list of faceIDs returned from the Detect API, and pass them into the Identify API to get a corresponding PersonID, which I have stored in my local DB. I can then display the person's name, etc.
- For any faces that are not identified, I then create a new Person via the CreatePersonAsync API. There is no way to pass a FaceID to this API.
- At this point, the next thing I have to do is call AddFaceFromStreamAsync to upload a face image for the newly created person, which associates that face with the person in my PersonDirectory and triggers the back end to train.
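To make the round trips concrete, the four steps above look roughly like this in C#. Detect and Identify use the familiar IFaceClient surface; the `client.PersonDirectory` property and the exact CreatePersonAsync/AddFaceFromStreamAsync signatures are paraphrased from my description, so treat this as a sketch of the preview SDK rather than exact signatures:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class CurrentFlow
{
    static async Task ProcessAsync(IFaceClient client, string imagePath)
    {
        // 1. Detect: first upload of the image; returns faceIds + rectangles.
        IList<DetectedFace> faces;
        using (var image = File.OpenRead(imagePath))
        {
            faces = await client.Face.DetectWithStreamAsync(image, returnFaceId: true);
        }
        var faceIds = faces.Select(f => f.FaceId.Value).ToList();

        // 2. Identify: map the detected faceIds to persons I already know.
        //    (Shown with the group-based overload; the PersonDirectory
        //    identify overload in the preview SDK may differ.)
        var results = await client.Face.IdentifyAsync(
            faceIds, largePersonGroupId: "my-group");
        var identified = results.Where(r => r.Candidates.Any())
                                .Select(r => r.FaceId)
                                .ToHashSet();

        // 3 + 4. For each unknown face: create a person, then RE-UPLOAD the
        //        same image, scoped to the rectangle Detect already gave me.
        foreach (var face in faces.Where(f => !identified.Contains(f.FaceId.Value)))
        {
            // No overload here accepts the already-detected faceId.
            var person = await client.PersonDirectory.CreatePersonAsync("unknown");

            var r = face.FaceRectangle;
            using (var image = File.OpenRead(imagePath))
            {
                // Second upload of the same bytes, cropped via targetFace.
                await client.PersonDirectory.AddFaceFromStreamAsync(
                    person.PersonId, image,
                    targetFace: new[] { r.Left, r.Top, r.Width, r.Height });
            }
        }
    }
}
```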
Clearly, this isn't very efficient. I have to upload the original image (the one I already passed to Detect) a second time, passing in the rectangle that the Detect API returned, to tell Azure Face that this region is a face for that person. Alternatively, I can upload a cropped image containing just that person's face. But all of this seems entirely pointless, given that I uploaded the original photo in step one above.
Is there any way that I can combine the operations so that I can just do the following:
- Post an image to the 'Detect' API, to have it detect the faces in the image.
- Get the list of faceIDs returned from the Detect API, and pass them into the Identify API to get a corresponding PersonID from my local DB
- For any faces that are not identified, I then create a new Person via the CreatePersonAsync API, passing the already-detected FaceID and having the recognition engine store the characteristics of that face against the new person entry?
Doing this would a) reduce the amount of ingress needed to MSFT's servers, and b) reduce the number of transactions I need to process, since I could call Detect once, Identify once, and then CreatePersonAsync once (passing the list of faceIds), meaning 3 transactions per image. Currently, I have to do 2 + 2n per image, where n is the number of new faces in a photograph, since each new face costs a CreatePersonAsync call plus an AddFaceFromStreamAsync call.
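In code terms, what I'm hoping for is something like the sketch below. To be clear, the `faceIds` parameter on CreatePersonAsync is hypothetical and doesn't exist in the current SDK; that missing overload is exactly what I'm asking about:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class DesiredFlow
{
    static async Task ProcessAsync(IFaceClient client, string imagePath)
    {
        // Transaction 1: Detect -- the only upload of the image.
        IList<DetectedFace> faces;
        using (var image = File.OpenRead(imagePath))
        {
            faces = await client.Face.DetectWithStreamAsync(image, returnFaceId: true);
        }
        var faceIds = faces.Select(f => f.FaceId.Value).ToList();

        // Transaction 2: Identify the detected faces.
        var results = await client.Face.IdentifyAsync(
            faceIds, largePersonGroupId: "my-group");
        var unidentified = results.Where(r => !r.Candidates.Any())
                                  .Select(r => r.FaceId)
                                  .ToList();

        // Transaction 3 (HYPOTHETICAL overload): create person entries
        // directly from the already-detected faceIds, letting the
        // recognition engine persist those faces' features -- no re-upload.
        if (unidentified.Any())
        {
            await client.PersonDirectory.CreatePersonAsync(
                name: "unknown", faceIds: unidentified);
        }
    }
}
```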