Hello Joseph Wan,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that you would like to know how you can lower the confidence threshold value.
This is not just a matter of adjusting the confidence threshold parameter; it is about understanding how facial recognition systems interpret and apply it. Platforms such as AWS Rekognition, Azure Face API, and Face++ treat the confidence threshold as a filter: it sets the minimum score a candidate must reach to be returned as a match, but it does not influence or reduce the confidence score the model actually produces. So if you are still seeing confidence scores above 80%, the system is confident in those matches and the threshold is working as intended. If no match is returned, it means no face met the threshold criteria, not that the system failed entirely.
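To make that filtering behavior concrete, here is a small Python sketch using made-up confidence values (not taken from any real API response): candidates below the threshold are dropped, and the scores of the ones that pass are returned unchanged.

# Illustration only: a confidence threshold acts as a filter.
# The candidate scores below are invented for this example.
candidates = [
    {"faceId": "A", "confidence": 0.92},
    {"faceId": "B", "confidence": 0.81},
    {"faceId": "C", "confidence": 0.64},
]

threshold = 0.70  # raising or lowering this only changes which candidates survive
matches = [c for c in candidates if c["confidence"] >= threshold]
print(matches)  # A (0.92) and B (0.81) pass with their scores unchanged; C is filtered out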
Therefore, the inconsistency you're experiencing over time is more likely caused by changes in facial appearance, lighting conditions, or pose than by a misconfigured threshold. Facial recognition models are sensitive to such variations, and even subtle changes can significantly affect match accuracy; this is a common challenge across platforms and not unique to your implementation. To mitigate it, it is best practice to register multiple images of each person under different lighting, angles, and expressions, which helps the service build a more robust representation of the individual (see the sketch below).
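For example, here is a minimal Python (boto3) sketch of registering several images of the same person into a Rekognition collection with IndexFaces. The collection name, bucket, file names, and ExternalImageId are placeholders you would replace with your own values.

import boto3

rekognition = boto3.client("rekognition")

# Several photos of the same person taken under different lighting and angles.
images_for_person = ["person1-front.jpg", "person1-lowlight.jpg", "person1-side.jpg"]

for image_name in images_for_person:
    response = rekognition.index_faces(
        CollectionId="my-face-collection",
        Image={"S3Object": {"Bucket": "my-bucket", "Name": image_name}},
        ExternalImageId="person1",  # ties every indexed face back to the same person
        MaxFaces=1,                 # index only the most prominent face in each photo
        QualityFilter="AUTO",       # let Rekognition skip blurry or low-quality detections
    )
    for record in response["FaceRecords"]:
        print(image_name, record["Face"]["FaceId"])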
Additionally, make sure you're using the API correctly. For example, in AWS Rekognition, the FaceMatchThreshold parameter in the SearchFacesByImage call should be set like this:
{
  "CollectionId": "my-face-collection",
  "Image": {
    "S3Object": {
      "Bucket": "my-bucket",
      "Name": "input.jpg"
    }
  },
  "FaceMatchThreshold": 70,
  "MaxFaces": 1
}
If you're using the Azure Face API, the confidenceThreshold parameter in the Identify call works similarly: it filters out candidates below the threshold but does not alter the confidence score itself. (For the AWS Rekognition side, the face search documentation is here: https://docs.aws.amazon.com/rekognition/latest/dg/faces.html)
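For illustration, here is a hedged Python sketch of the Identify REST call. It assumes a person group named "my-person-group" that has already been created and trained, and a faceId obtained from a prior Detect call; the endpoint and key are placeholders for your own Azure Face resource. Note that Azure expects confidenceThreshold on a 0-1 scale, whereas Rekognition's FaceMatchThreshold is 0-100.

import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-face-api-key>"

body = {
    "faceIds": ["<faceId-from-a-prior-Detect-call>"],
    "personGroupId": "my-person-group",
    "maxNumOfCandidatesReturned": 1,
    "confidenceThreshold": 0.7,  # 0-1 scale; candidates scoring below this are filtered out
}

response = requests.post(
    f"{endpoint}/face/v1.0/identify",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
print(response.json())  # returned candidate confidences are not rescaled by the threshold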
Now, to improve long-term recognition reliability, consider periodically updating the registered face data, especially if users' appearances change. Also, log and analyze failed matches to identify patterns; this can help you refine your registration and matching strategies, as in the sketch below.
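As a starting point, a simple Python sketch like the following records both matches and non-matches so recurring failure conditions show up in later analysis. It assumes the SearchFacesByImage response shape; the log file name and function are only illustrative.

import logging

logging.basicConfig(filename="face_match_audit.log", level=logging.INFO)

def log_match_result(image_name, response, threshold=70):
    # "response" is the dict returned by SearchFacesByImage.
    matches = response.get("FaceMatches", [])
    if not matches:
        # No face scored above FaceMatchThreshold; record it rather than
        # treating it as a system failure.
        logging.warning("NO_MATCH image=%s threshold=%s", image_name, threshold)
    else:
        best = matches[0]
        logging.info(
            "MATCH image=%s faceId=%s similarity=%.2f",
            image_name,
            best["Face"]["FaceId"],
            best["Similarity"],
        )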
I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.
Please don't forget to close the thread by upvoting and accepting this as an answer if it was helpful.