Quickstart: Use the Face service

Important

If you use Microsoft products or services to process Biometric Data, you are responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the Biometric Data, all as appropriate and required under applicable Data Protection Requirements. "Biometric Data" will have the meaning set forth in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see Data and privacy for Face.

Caution

Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. The Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Get started with facial recognition using the Face client library for .NET. The Azure AI Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or current version of .NET Core.
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint on the Keys and Endpoint page, under Resource Management. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options, such as Azure Key Vault.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.
setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>
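
The setx commands shown apply to Windows. As a minimal sketch, assuming a bash shell on Linux or macOS, the equivalent commands are:

export VISION_KEY=<your_key>
export VISION_ENDPOINT=<your_endpoint>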

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new C# application

    Using Visual Studio, create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.Face. Select the latest version, and then select Install.
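
    Alternatively, a sketch of the equivalent install from a terminal, assuming the .NET CLI is available on your PATH:

    dotnet add package Microsoft.Azure.CognitiveServices.Vision.Face --prerelease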

  2. Add the following code to your Program.cs file.

    Note

    Some of the features won't work if you haven't received access to the Face service using the intake form.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    
    using Microsoft.Azure.CognitiveServices.Vision.Face;
    using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
    
    namespace FaceQuickstart
    {
        class Program
        {
            static string personGroupId = Guid.NewGuid().ToString();
    
            // URL path for the images.
            const string IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("VISION_KEY");
            static readonly string ENDPOINT = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
    
    
            static void Main(string[] args)
            {
                // Recognition model 4 was released in 2021 February.
                // It is recommended since its accuracy is improved
                // on faces wearing masks compared with model 3,
                // and its overall accuracy is improved compared
                // with models 1 and 2.
                const string RECOGNITION_MODEL4 = RecognitionModel.Recognition04;
    
                // Authenticate.
                IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
                // Identify - recognize a face(s) in a person group (a person group is created in this example).
                IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4).Wait();
    
                Console.WriteLine("End of quickstart.");
            }
    
            /*
             * AUTHENTICATE
             * Uses subscription key and endpoint to create a client.
             */
            public static IFaceClient Authenticate(string endpoint, string key)
            {
                return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
            }
    
            // Detect faces from an image URL for recognition purposes. This is a helper method for other functions in this quickstart.
            // Parameter `returnFaceId` of `DetectWithUrlAsync` must be set to `true` (the default) for recognition purposes.
            // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute,
            // which requires the recognition model to be set to recognition_03 or recognition_04.
            // Faces with insufficient quality for recognition are filtered out.
            // The `faceId` field in the returned `DetectedFace`s is used in Face - Verify and Face - Identify.
            // It expires 24 hours after the detection call.
            private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
            {
                // Detect faces from the image URL, using the recognition model passed in by the caller.
                // Use detection model 3 because it supports the QualityForRecognition attribute.
                IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection03, returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
                List<DetectedFace> sufficientQualityFaces = new List<DetectedFace>();
                foreach (DetectedFace detectedFace in detectedFaces){
                    var faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
                    if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value >= QualityForRecognition.Medium)){
                        sufficientQualityFaces.Add(detectedFace);
                    }
                }
                Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
    
                return sufficientQualityFaces.ToList();
            }
    
            /*
             * IDENTIFY FACES
             * To identify faces, you need to create and define a person group.
             * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a PersonGroup and returns 
             * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, 
             * which have a prediction confidence value.
             */
            public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
            {
                Console.WriteLine("========IDENTIFY FACES========");
                Console.WriteLine();
    
                // Create a dictionary for all your images, grouping similar ones under the same key.
                Dictionary<string, string[]> personDictionary =
                    new Dictionary<string, string[]>
                        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
                          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
                          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } },
                          { "Family1-Daughter", new[] { "Family1-Daughter1.jpg", "Family1-Daughter2.jpg" } },
                          { "Family2-Lady", new[] { "Family2-Lady1.jpg", "Family2-Lady2.jpg" } },
                          { "Family2-Man", new[] { "Family2-Man1.jpg", "Family2-Man2.jpg" } }
                        };
                // A group photo that includes some of the persons you seek to identify from your dictionary.
                string sourceImageFileName = "identification1.jpg";
    
                // Create a person group. 
                Console.WriteLine($"Create a person group ({personGroupId}).");
                await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
                // The similar faces will be grouped into a single person group person.
                foreach (var groupedFace in personDictionary.Keys)
                {
                    // Limit TPS
                    await Task.Delay(250);
                    Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
                    Console.WriteLine($"Create a person group person '{groupedFace}'.");
    
                    // Add face to the person group person.
                    foreach (var similarImage in personDictionary[groupedFace])
                    {
                        Console.WriteLine($"Check whether image is of sufficient quality for recognition");
                        IList<DetectedFace> detectedFaces1 = await client.Face.DetectWithUrlAsync($"{url}{similarImage}", 
                            recognitionModel: recognitionModel, 
                            detectionModel: DetectionModel.Detection03,
                            returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.QualityForRecognition });
                        bool sufficientQuality = true;
                        foreach (var face1 in detectedFaces1)
                        {
                            var faceQualityForRecognition = face1.FaceAttributes.QualityForRecognition;
                            //  Only "high" quality images are recommended for person enrollment
                            if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High)){
                                sufficientQuality = false;
                                break;
                            }
                        }
    
                        if (!sufficientQuality){
                            continue;
                        }
    
                        // add face to the person group
                        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                        PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
                            $"{url}{similarImage}", similarImage);
                    }
                }
    
                // Start to train the person group.
                Console.WriteLine();
                Console.WriteLine($"Train person group {personGroupId}.");
                await client.PersonGroup.TrainAsync(personGroupId);
    
                // Wait until the training is completed.
                while (true)
                {
                    await Task.Delay(1000);
                    var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
                    Console.WriteLine($"Training status: {trainingStatus.Status}.");
                    if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
                }
                Console.WriteLine();
    
                List<Guid> sourceFaceIds = new List<Guid>();
                // Detect faces from source image url.
                List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
    
                // Add detected faceId to sourceFaceIds.
                foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
                
                // Identify the faces in a person group. 
                var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);
    
                foreach (var identifyResult in identifyResults)
                {
                    if (identifyResult.Candidates.Count==0) {
                        Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId},");
                        continue;
                    }
                    Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
                    Console.WriteLine($"Person '{person.Name}' is identified for the face in: {sourceImageFileName} - {identifyResult.FaceId}," +
                        $" confidence: {identifyResult.Candidates[0].Confidence}.");
    
                    VerifyResult verifyResult = await client.Face.VerifyFaceToPersonAsync(identifyResult.FaceId, person.PersonId, personGroupId);
                    Console.WriteLine($"Verification result: is a match? {verifyResult.IsIdentical}. confidence: {verifyResult.Confidence}");
                }
                Console.WriteLine();
            }
        }
    }
    
  3. Run the application

    Run the application by selecting the Debug button at the top of the IDE window.
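
    Alternatively, assuming the .NET CLI is available, you can build and run from a terminal in the project directory:

    dotnet run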

Output

========IDENTIFY FACES========

Create a person group (3972c063-71b3-4328-8579-6d190ee76f99).
Create a person group person 'Family1-Dad'.
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`
Create a person group person 'Family1-Daughter'.
Create a person group person 'Family2-Lady'.
Add face to the person group person(Family2-Lady) from image `Family2-Lady1.jpg`
Add face to the person group person(Family2-Lady) from image `Family2-Lady2.jpg`
Create a person group person 'Family2-Man'.
Add face to the person group person(Family2-Man) from image `Family2-Man1.jpg`
Add face to the person group person(Family2-Man) from image `Family2-Man2.jpg`

Train person group 3972c063-71b3-4328-8579-6d190ee76f99.
Training status: Succeeded.

4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for face in: identification1.jpg - 994bfd7a-0d8f-4fae-a5a6-c524664cbee7, confidence: 0.96725.
Person 'Family1-Mom' is identified for face in: identification1.jpg - 0c9da7b9-a628-429d-97ff-cebe7c638fb5, confidence: 0.96921.
No person is identified for face in: identification1.jpg - a881259c-e811-4f7e-a35e-a453e95ca18f,
Person 'Family1-Son' is identified for face in: identification1.jpg - 53772235-8193-46eb-bdfc-1ebc25ea062e, confidence: 0.92886.

End of quickstart.

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service runs). The results that a model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

To delete the PersonGroup you created in this quickstart, run the following code in your program:

// At end, delete person groups in both regions (since testing only)
Console.WriteLine("========DELETE PERSON GROUP========");
Console.WriteLine();
DeletePersonGroup(client, personGroupId).Wait();

Define the deletion method with the following code:

/*
 * DELETE PERSON GROUP
 * After this entire example is executed, delete the person group in your Azure account,
 * otherwise you cannot recreate one with the same name (if running example repeatedly).
 */
public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
{
    await client.PersonGroup.DeleteAsync(personGroupId);
    Console.WriteLine($"Deleted the person group {personGroupId}.");
}

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Python. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check whether pip is installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint on the Keys and Endpoint page, under Resource Management. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options, such as Azure Key Vault.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.
setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Install the client library

    After installing Python, you can install the client library with:

    pip install --upgrade azure-cognitiveservices-vision-face
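
    The sample code below also imports the requests and Pillow modules. If they aren't already present in your environment, you can install them the same way:

    pip install requests Pillow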
    
  2. Create a new Python application

    Create a new Python script—for example, quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.

    Note

    Some of the features won't work if you haven't received access to the Face service using the intake form.

    import asyncio
    import io
    import os
    import sys
    import time
    import uuid
    import requests
    from urllib.parse import urlparse
    from io import BytesIO
    # To install this module, run:
    # python -m pip install Pillow
    from PIL import Image, ImageDraw
    from azure.cognitiveservices.vision.face import FaceClient
    from msrest.authentication import CognitiveServicesCredentials
    from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, QualityForRecognition
    
    
    # This key will serve all examples in this document.
    KEY = os.environ["VISION_KEY"]
    
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = os.environ["VISION_ENDPOINT"]
    
    # Base url for the Verify and Facelist/Large Facelist operations
    IMAGE_BASE_URL = 'https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/'
    
    # Used in the Person Group Operations and Delete Person Group examples.
    # You can call list_person_groups to print a list of preexisting PersonGroups.
    # SOURCE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
    PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
    
    # Used for the Delete Person Group example.
    TARGET_PERSON_GROUP_ID = str(uuid.uuid4()) # assign a random ID (or name it anything)
    
    # Create an authenticated FaceClient.
    face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))
    
    '''
    Create the PersonGroup
    '''
    # Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
    print('Person group:', PERSON_GROUP_ID)
    face_client.person_group.create(person_group_id=PERSON_GROUP_ID, name=PERSON_GROUP_ID, recognition_model='recognition_04')
    
    # Define woman friend
    woman = face_client.person_group_person.create(PERSON_GROUP_ID, name="Woman")
    # Define man friend
    man = face_client.person_group_person.create(PERSON_GROUP_ID, name="Man")
    # Define child friend
    child = face_client.person_group_person.create(PERSON_GROUP_ID, name="Child")
    
    '''
    Detect faces and register them to each person
    '''
    # URLs of jpeg enrollment images for each person, pulled from the sample data repository.
    woman_images = ["https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom1.jpg", "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom2.jpg"]
    man_images = ["https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg", "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad2.jpg"]
    child_images = ["https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son1.jpg", "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son2.jpg"]
    
    # Add to woman person
    for image in woman_images:
        # Check whether the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_url(url=image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                break
            face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, woman.person_id, image)
            print("face {} added to person {}".format(face.face_id, woman.person_id))
    
    
    # Add to man person
    for image in man_images:
        # Check whether the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_url(url=image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                break
            face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, man.person_id, image)
            print("face {} added to person {}".format(face.face_id, man.person_id))
    
    
    # Add to child person
    for image in child_images:
        # Check whether the image is of sufficient quality for recognition.
        sufficientQuality = True
        detected_faces = face_client.face.detect_with_url(url=image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
        for face in detected_faces:
            if face.face_attributes.quality_for_recognition != QualityForRecognition.high:
                sufficientQuality = False
                print("{} has insufficient quality".format(face))
                break
            face_client.person_group_person.add_face_from_url(PERSON_GROUP_ID, child.person_id, image)
            print("face {} added to person {}".format(face.face_id, child.person_id))
    
    
    '''
    Train PersonGroup
    '''
    # Train the person group
    print("pg resource is {}".format(PERSON_GROUP_ID))
    rawresponse = face_client.person_group.train(PERSON_GROUP_ID, raw=True)
    print(rawresponse)
    
    while (True):
        training_status = face_client.person_group.get_training_status(PERSON_GROUP_ID)
        print("Training status: {}.".format(training_status.status))
        print()
        if (training_status.status is TrainingStatusType.succeeded):
            break
        elif (training_status.status is TrainingStatusType.failed):
            face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
            sys.exit('Training the person group has failed.')
        time.sleep(5)
    
    '''
    Identify a face against a defined PersonGroup
    '''
    # Group image for testing against
    test_image = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg"
    
    print('Pausing for 10 seconds to avoid triggering rate limit on free account...')
    time.sleep(10)
    
    # Detect faces
    face_ids = []
    # We use detection model 3 to get better performance, recognition model 4 to support quality for recognition attribute.
    faces = face_client.face.detect_with_url(test_image, detection_model='detection_03', recognition_model='recognition_04', return_face_attributes=['qualityForRecognition'])
    for face in faces:
        # Only take the face if it is of sufficient quality.
        if face.face_attributes.quality_for_recognition == QualityForRecognition.high or face.face_attributes.quality_for_recognition == QualityForRecognition.medium:
            face_ids.append(face.face_id)
    
    # Identify faces
    results = face_client.face.identify(face_ids, PERSON_GROUP_ID)
    print('Identifying faces in image')
    if not results:
        print('No person identified in the person group')
    for identifiedFace in results:
        if len(identifiedFace.candidates) > 0:
            print('Person is identified for face ID {} in image, with a confidence of {}.'.format(identifiedFace.face_id, identifiedFace.candidates[0].confidence)) # Get topmost confidence score
    
            # Verify faces
            verify_result = face_client.face.verify_face_to_person(identifiedFace.face_id, identifiedFace.candidates[0].person_id, PERSON_GROUP_ID)
            print('verification result: {}. confidence: {}'.format(verify_result.is_identical, verify_result.confidence))
        else:
            print('No person identified for face ID {} in image.'.format(identifiedFace.face_id))
     
    
    print()
    print('End of quickstart.')
    
    
  3. Run the face recognition app from the application directory with the python command.

    python quickstart-file.py
    

    Tip

    The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service runs). The results that a model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Output

Person group: c8e679eb-0b71-43b4-aa91-ab8200cae7df
face 861d769b-d014-40e8-8b4a-7fd3bc9b425b added to person f80c1cfa-b8cb-46f8-9f7f-e72fbe402bc3
face e3c356a4-1ac3-4c97-9219-14648997f195 added to person f80c1cfa-b8cb-46f8-9f7f-e72fbe402bc3
face f9119820-c374-4c4d-b795-96ae2fec5069 added to person be4084a7-0c7b-4cf9-9463-3756d2e28e17
face 67d626df-3f75-4801-9364-601b63c8296a added to person be4084a7-0c7b-4cf9-9463-3756d2e28e17
face 19e2e8cc-5029-4087-bca0-9f94588fb850 added to person 3ff07c65-6193-4d3e-bf18-d7c106393cd5
face dcc61e80-16b1-4241-ae3f-9721597bae4c added to person 3ff07c65-6193-4d3e-bf18-d7c106393cd5
pg resource is c8e679eb-0b71-43b4-aa91-ab8200cae7df
<msrest.pipeline.ClientRawResponse object at 0x00000240DAD47310>
Training status: running.

Training status: succeeded.

Pausing for 10 seconds to avoid triggering rate limit on free account...
Identifying faces in image
Person for face ID 40582995-d3a8-41c4-a9d1-d17ae6b46c5c is identified in image, with a confidence of 0.96725.
Person for face ID 7a0368a2-332c-4e7a-81c4-2db3d74c78c5 is identified in image, with a confidence of 0.96921.
No person identified for face ID c4a3dd28-ef2d-457e-81d1-a447344242c4 in image.
Person for face ID 360edf1a-1e8f-402d-aa96-1734d0c21c1c is identified in image, with a confidence of 0.92886.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

To delete the PersonGroup you created in this quickstart, run the following code in your script:

# Delete the main person group.
face_client.person_group.delete(person_group_id=PERSON_GROUP_ID)
print("Deleted the person group {} from the source location.".format(PERSON_GROUP_ID))
print()

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for JavaScript. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Package (npm) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The latest version of Node.js
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint on the Keys and Endpoint page, under Resource Management. Your resource key isn't the same as your Azure subscription ID.

Tip

Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options, such as Azure Key Vault.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  1. To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  2. To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.
setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a node application with a package.json file.

    npm init
    
  2. Install the @azure-rest/ai-vision-face and @azure/core-auth npm packages, which the sample code below imports:

    npm install @azure-rest/ai-vision-face @azure/core-auth
    

    Your app's package.json file is updated with the dependencies.

  3. Create a file named index.js, open it in a text editor, and paste in the following code:

    Note

    Some of the features won't work if you haven't received access to the Face service using the intake form.

    const { randomUUID } = require("crypto");
    
    const { AzureKeyCredential } = require("@azure/core-auth");
    
    const createFaceClient = require("@azure-rest/ai-vision-face").default,
      { FaceAttributeTypeRecognition04, getLongRunningPoller } = require("@azure-rest/ai-vision-face");
    
    /**
     * NOTE This sample might not work with the free tier of the Face service because it might exceed the rate limits.
     * If that happens, try inserting calls to sleep() between calls to the Face service.
     */
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
    
    const main = async () => {
      const endpoint = process.env["VISION_ENDPOINT"] ?? "<endpoint>";
      const apikey = process.env["VISION_KEY"] ?? "<apikey>";
      const credential = new AzureKeyCredential(apikey);
      const client = createFaceClient(endpoint, credential);
    
      const imageBaseUrl =
        "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
      const personGroupId = randomUUID();
    
      console.log("========IDENTIFY FACES========");
      console.log();
    
      // Create a dictionary for all your images, grouping similar ones under the same key.
      const personDictionary = {
        "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
        "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
        "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
        "Family1-Daughter": ["Family1-Daughter1.jpg", "Family1-Daughter2.jpg"],
        "Family2-Lady": ["Family2-Lady1.jpg", "Family2-Lady2.jpg"],
        "Family2-Man": ["Family2-Man1.jpg", "Family2-Man2.jpg"],
      };
    
      // A group photo that includes some of the persons you seek to identify from your dictionary.
      const sourceImageFileName = "identification1.jpg";
    
      // Create a person group.
      console.log(`Creating a person group with ID: ${personGroupId}`);
      await client.path("/persongroups/{personGroupId}", personGroupId).put({
        body: {
          name: personGroupId,
          recognitionModel: "recognition_04",
        },
      });
    
      // The similar faces will be grouped into a single person group person.
      console.log("Adding faces to person group...");
      await Promise.all(
        Object.keys(personDictionary).map(async (name) => {
          console.log(`Create a persongroup person: ${name}`);
          const createPersonGroupPersonResponse = await client
            .path("/persongroups/{personGroupId}/persons", personGroupId)
            .post({
              body: { name },
            });
    
          const { personId } = createPersonGroupPersonResponse.body;
    
          await Promise.all(
            personDictionary[name].map(async (similarImage) => {
              // Check if the image is of sufficient quality for recognition.
              const detectResponse = await client.path("/detect").post({
                contentType: "application/json",
                queryParameters: {
                  detectionModel: "detection_03",
                  recognitionModel: "recognition_04",
                  returnFaceId: false,
                  returnFaceAttributes: [FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION],
                },
                body: { url: `${imageBaseUrl}${similarImage}` },
              });
    
              const sufficientQuality = detectResponse.body.every(
                (face) => face.faceAttributes?.qualityForRecognition === "high",
              );
              if (!sufficientQuality) {
                return;
              }
    
              // Quality is sufficient, add to group.
              console.log(
                `Add face to the person group person: (${name}) from image: (${similarImage})`,
              );
              await client
                .path(
                  "/persongroups/{personGroupId}/persons/{personId}/persistedfaces",
                  personGroupId,
                  personId,
                )
                .post({
                  queryParameters: { detectionModel: "detection_03" },
                  body: { url: `${imageBaseUrl}${similarImage}` },
                });
            }),
          );
        }),
      );
      console.log("Done adding faces to person group.");
    
      // Start to train the person group.
      console.log();
      console.log(`Training person group: ${personGroupId}`);
      const trainResponse = await client
        .path("/persongroups/{personGroupId}/train", personGroupId)
        .post();
      const poller = await getLongRunningPoller(client, trainResponse);
      await poller.pollUntilDone();
      console.log(`Training status: ${poller.getOperationState().status}`);
      if (poller.getOperationState().status !== "succeeded") {
        return;
      }
    
      // Detect faces from source image url and only take those with sufficient quality for recognition.
      const detectResponse = await client.path("/detect").post({
        contentType: "application/json",
        queryParameters: {
          detectionModel: "detection_03",
          recognitionModel: "recognition_04",
          returnFaceId: true,
        },
        body: { url: `${imageBaseUrl}${sourceImageFileName}` },
      });
      const faceIds = detectResponse.body.map((face) => face.faceId);
    
      // Identify the faces in a person group.
      const identifyResponse = await client.path("/identify").post({
        body: { faceIds, personGroupId },
      });
      await Promise.all(
        identifyResponse.body.map(async (result) => {
          try {
            const getPersonGroupPersonResponse = await client
              .path(
                "/persongroups/{personGroupId}/persons/{personId}",
                personGroupId,
                result.candidates[0].personId,
              )
              .get();
            const person = getPersonGroupPersonResponse.body;
            console.log(
              `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${result.candidates[0].confidence}`,
            );
    
            // Verification:
            const verifyResponse = await client.path("/verify").post({
              body: {
                faceId: result.faceId,
                personGroupId,
                personId: person.personId,
              },
            });
            console.log(
              `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`,
            );
          } catch (error) {
            console.log(`No persons identified for face with ID ${result.faceId}`);
          }
        }),
      );
      console.log();
    
      // Delete person group.
      console.log(`Deleting person group: ${personGroupId}`);
      await client.path("/persongroups/{personGroupId}", personGroupId).delete();
      console.log();
    
      console.log("Done.");
    };
    
    main().catch(console.error);
    
  4. Run the application with the node command on your quickstart file.

    node index.js
    

Output

========IDENTIFY FACES========

Creating a person group with ID: c08484e0-044b-4610-8b7e-c957584e5d2d
Adding faces to person group...
Create a persongroup person: Family1-Dad.
Create a persongroup person: Family1-Mom.
Create a persongroup person: Family2-Lady.
Create a persongroup person: Family1-Son.
Create a persongroup person: Family1-Daughter.
Create a persongroup person: Family2-Man.
Add face to the person group person: (Family1-Son) from image: Family1-Son2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man1.jpg.
Add face to the person group person: (Family1-Son) from image: Family1-Son1.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady2.jpg.
Add face to the person group person: (Family1-Mom) from image: Family1-Mom2.jpg.
Add face to the person group person: (Family1-Dad) from image: Family1-Dad1.jpg.
Add face to the person group person: (Family2-Man) from image: Family2-Man2.jpg.
Add face to the person group person: (Family2-Lady) from image: Family2-Lady1.jpg.
Done adding faces to person group.

Training person group: c08484e0-044b-4610-8b7e-c957584e5d2d.
Waiting 10 seconds...
Training status: succeeded.

Person: Family1-Mom is identified for face in: identification1.jpg with ID: b7f7f542-c338-4a40-ad52-e61772bc6e14. Confidence: 0.96921.
Person: Family1-Son is identified for face in: identification1.jpg with ID: 600dc1b4-b2c4-4516-87de-edbbdd8d7632. Confidence: 0.92886.
Person: Family1-Dad is identified for face in: identification1.jpg with ID: e83b494f-9ad2-473f-9d86-3de79c01e345. Confidence: 0.96725.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for JavaScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.

Prerequisites

  • Azure subscription - Create one for free
  • Your Azure account must have a Cognitive Services Contributor role assigned in order for you to agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the commands below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0+, or a similar command-line application.
  • cURL installed.

Identify and verify faces

Note

Some of the features won't work if you haven't received access to the Face service using the intake form.

  1. First, call the Detect API on the source face. This is the face that we'll try to identify from the larger group. Copy the following command to a text editor, insert your own key, and then copy it into a shell window and run it.

    curl -v -X POST "https://{resource endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg\"}"
    

    Save the returned face ID string to a temporary location. You'll use it again in later steps.
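
    For example, one way to capture just the ID—an illustration that assumes you have the jq JSON processor installed—is to pipe the response through it (the Detect API returns a JSON array of face objects):

    curl -s -X POST "https://{resource endpoint}/face/v1.0/detect?returnFaceId=true&recognitionModel=recognition_04&detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg\"}" | jq -r '.[0].faceId'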

  2. Next, you'll need to create a LargePersonGroup. This object stores the aggregated face data of several persons. Run the following command, inserting your own key. Optionally, change the group's name and metadata in the request body.

    curl -v -X PUT "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \
        \"name\": \"large-person-group-name\", \
        \"userData\": \"User-provided data attached to the large person group.\", \
        \"recognitionModel\": \"recognition_04\" \
    }"
    

    Save the ID of the created group to a temporary location.

  3. Next, you'll create Person objects that belong to the group. Run the following command, inserting your own key and the ID of the LargePersonGroup from the previous step. This command creates a Person named "Family1-Dad".

    curl -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \
        \"name\": \"Family1-Dad\", \
        \"userData\": \"User-provided data attached to the person.\" \
    }"
    

    After you run this command, run it again with different input data to create more Person objects: "Family1-Mom", "Family1-Son", "Family1-Daughter", "Family2-Lady", and "Family2-Man".

    Save the ID of each Person created; it's important to keep track of which ID belongs to which person.

  4. Next, you'll need to detect new faces and associate them with the Person objects that exist. The following command detects a face from the image Family1-Dad1.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the "Family1-Dad" Person object. The image name corresponds to the name of the created Person. Also enter the LargePersonGroup ID and your key in the appropriate fields.

    curl -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{\"url\":\"https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg\"}"
    

    Then, run the above command again with a different source image and target Person. The images available are: Family1-Dad1.jpg, Family1-Dad2.jpg, Family1-Mom1.jpg, Family1-Mom2.jpg, Family1-Son1.jpg, Family1-Son2.jpg, Family1-Daughter1.jpg, Family1-Daughter2.jpg, Family2-Lady1.jpg, Family2-Lady2.jpg, Family2-Man1.jpg, and Family2-Man2.jpg. Be sure that the Person whose ID you specify in the API call matches the name of the image file in the request body.

    At the end of this step, you should have multiple Person objects that each have one or more corresponding faces, detected directly from the provided images.

  5. Next, train the LargePersonGroup with the current face data. The training operation teaches the model how to associate facial features, sometimes aggregated from multiple source images, to each person. Insert the LargePersonGroup ID and your key before running the command.

    curl -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data ""
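
    Training is asynchronous. As a sketch, you can poll the LargePersonGroup - Get Training Status endpoint until it reports "succeeded":

    curl -v -X GET "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/training" -H "Ocp-Apim-Subscription-Key: {subscription key}"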
    
  6. Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your key.

    curl -v -X POST "https://{resource endpoint}/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{ \
        \"largePersonGroupId\": \"INSERT_PERSONGROUP_ID\", \
        \"faceIds\": [ \
            \"INSERT_SOURCE_FACE_ID\" \
        ], \
        \"maxNumOfCandidatesReturned\": 1, \
        \"confidenceThreshold\": 0.5 \
    }"
    

    The response should give you a Person ID indicating the person identified with the source face. It should be the ID that corresponds to the "Family1-Dad" person, because the source face is of that person.
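
    An abridged, illustrative response (the IDs and confidence value are placeholders) looks like this:

    [{"faceId": "INSERT_SOURCE_FACE_ID", "candidates": [{"personId": "INSERT_PERSON_ID", "confidence": 0.96}]}]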

  7. To do face verification, you'll use the person ID returned in the previous step, the LargePersonGroup ID, and the source face ID. Insert these values into the fields in the request body, and insert your key.

    curl -v -X POST "https://{resource endpoint}/face/v1.0/verify" \
    -H "Content-Type: application/json" \
    -H "Ocp-Apim-Subscription-Key: {subscription key}" \
    --data-ascii "{ \
        \"faceId\": \"INSERT_SOURCE_FACE_ID\", \
        \"personId\": \"INSERT_PERSON_ID\", \
        \"largePersonGroupId\": \"INSERT_PERSONGROUP_ID\" \
    }"
    

    The response should give you a Boolean verification result along with a confidence value.
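
    An abridged, illustrative response looks like this:

    {"isIdentical": true, "confidence": 0.96}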

Clean up resources

To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.

curl -v -X DELETE "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.