Quickstart: Use the Face service

Important

If you use Microsoft products or services to process Biometric Data, you're responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the Biometric Data, all as appropriate and required under applicable Data Protection Requirements. "Biometric Data" has the meaning set forth in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see Data and privacy for Face.

Warning

Face service access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. The Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Get started with facial recognition using the Face client library for .NET. The Azure AI Face service provides you with access to advanced algorithms that detect and recognize human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • An Azure subscription - Create one for free
  • The Visual Studio IDE or the current version of .NET Core
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>
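
The setx commands above apply to Windows. On Linux or macOS, a minimal equivalent (assuming a bash-style shell; add the lines to a profile file such as ~/.bashrc to persist them) is:

export FACE_APIKEY=<your_key>
export FACE_ENDPOINT=<your_endpoint>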

After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new C# application

    Using Visual Studio, create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking on the project solution in Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Azure.AI.Vision.Face. Select the latest version, and then select Install.

  2. Add the following code to the Program.cs file.

    Note

    If you haven't received access to the Face service by using the intake form, some of these functions won't work.

    using System.Net.Http.Headers;
    using System.Text;
    
    using Azure;
    using Azure.AI.Vision.Face;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;
    
    namespace FaceQuickstart
    {
        class Program
        {
            static readonly string largePersonGroupId = Guid.NewGuid().ToString();
    
            // URL path for the images.
            const string IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_APIKEY") ?? "<apikey>";
            static readonly string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT") ?? "<endpoint>";
    
            static void Main(string[] args)
            {
                // Recognition model 4 was released in 2021 February.
                // It is recommended since its accuracy is improved
                // on faces wearing masks compared with model 3,
                // and its overall accuracy is improved compared
                // with models 1 and 2.
                FaceRecognitionModel RECOGNITION_MODEL4 = FaceRecognitionModel.Recognition04;
    
                // Authenticate.
                FaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
                // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
                IdentifyInLargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4).Wait();
    
                Console.WriteLine("End of quickstart.");
            }
    
            /*
             *	AUTHENTICATE
             *	Uses subscription key and region to create a client.
             */
            public static FaceClient Authenticate(string endpoint, string key)
            {
                return new FaceClient(new Uri(endpoint), new AzureKeyCredential(key));
            }
    
            // Detect faces from image url for recognition purposes. This is a helper method for other functions in this quickstart.
            // Parameter `returnFaceId` of `DetectAsync` must be set to `true` (by default) for recognition purposes.
            // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute. 
            // Recognition model must be set to recognition_03 or recognition_04 as a result.
            // Result faces with insufficient quality for recognition are filtered out. 
            // The field `faceId` in returned `DetectedFace`s will be used in Verify and Identify.
            // It will expire 24 hours after the detection call.
            private static async Task<List<FaceDetectionResult>> DetectFaceRecognize(FaceClient faceClient, string url, FaceRecognitionModel recognition_model)
            {
                // Detect faces from image URL.
                Response<IReadOnlyList<FaceDetectionResult>> response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, recognition_model, returnFaceId: true, [FaceAttributeType.QualityForRecognition]);
                IReadOnlyList<FaceDetectionResult> detectedFaces = response.Value;
                List<FaceDetectionResult> sufficientQualityFaces = new List<FaceDetectionResult>();
                foreach (FaceDetectionResult detectedFace in detectedFaces)
                {
                    var faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
                    if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.Low))
                    {
                        sufficientQualityFaces.Add(detectedFace);
                    }
                }
                Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
    
                return sufficientQualityFaces;
            }
    
            /*
             * IDENTIFY FACES
             * To identify faces, you need to create and define a large person group.
             * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns 
             * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, 
             * which have a prediction confidence value.
             */
            public static async Task IdentifyInLargePersonGroup(FaceClient client, string url, FaceRecognitionModel recognitionModel)
            {
                Console.WriteLine("========IDENTIFY FACES========");
                Console.WriteLine();
    
                // Create a dictionary for all your images, grouping similar ones under the same key.
                Dictionary<string, string[]> personDictionary =
                    new Dictionary<string, string[]>
                        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
                          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
                          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } }
                        };
                // A group photo that includes some of the persons you seek to identify from your dictionary.
                string sourceImageFileName = "identification1.jpg";
    
                // Create a large person group.
                Console.WriteLine($"Create a person group ({largePersonGroupId}).");
                HttpClient httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
                using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = largePersonGroupId, ["recognitionModel"] = recognitionModel.ToString() }))))
                {
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}", content);
                }
                // The similar faces will be grouped into a single large person group person.
                foreach (var groupedFace in personDictionary.Keys)
                {
                    // Limit TPS
                    await Task.Delay(250);
                    string? personId = null;
                    using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = groupedFace }))))
                    {
                        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                        using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons", content))
                        {
                            string contentString = await response.Content.ReadAsStringAsync();
                            personId = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
                        }
                    }
                    Console.WriteLine($"Create a person group person '{groupedFace}'.");
    
                    // Add face to the large person group person.
                    foreach (var similarImage in personDictionary[groupedFace])
                    {
                        Console.WriteLine($"Check whether image is of sufficient quality for recognition");
                        Response<IReadOnlyList<FaceDetectionResult>> response = await client.DetectAsync(new Uri($"{url}{similarImage}"), FaceDetectionModel.Detection03, recognitionModel, returnFaceId: false, [FaceAttributeType.QualityForRecognition]);
                        IReadOnlyList<FaceDetectionResult> detectedFaces1 = response.Value;
                        bool sufficientQuality = true;
                        foreach (var face1 in detectedFaces1)
                        {
                            var faceQualityForRecognition = face1.FaceAttributes.QualityForRecognition;
                            //  Only "high" quality images are recommended for person enrollment
                            if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High))
                            {
                                sufficientQuality = false;
                                break;
                            }
                        }
    
                        if (!sufficientQuality)
                        {
                            continue;
                        }
    
                        if (detectedFaces1.Count != 1)
                        {
                            continue;
                        }
    
                        // add face to the large person group
                        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                        using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = $"{url}{similarImage}" }))))
                        {
                            content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                            await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content);
                        }
                    }
                }
    
                // Start to train the large person group.
                Console.WriteLine();
                Console.WriteLine($"Train person group {largePersonGroupId}.");
                await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/train", null);
    
                // Wait until the training is completed.
                while (true)
                {
                    await Task.Delay(1000);
                    string? trainingStatus = null;
                    using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/training"))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        trainingStatus = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["status"]);
                    }
                    Console.WriteLine($"Training status: {trainingStatus}.");
                    if ("succeeded".Equals(trainingStatus)) { break; }
                }
                Console.WriteLine();
    
                Console.WriteLine("Pausing for 60 seconds to avoid triggering rate limit on free account...");
                await Task.Delay(60000);
    
                List<Guid> sourceFaceIds = new List<Guid>();
                // Detect faces from source image url.
                List<FaceDetectionResult> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
    
                // Add detected faceId to sourceFaceIds.
                foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
    
                // Identify the faces in a large person group.
                List<Dictionary<string, object>> identifyResults = new List<Dictionary<string, object>>();
                using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceIds"] = sourceFaceIds, ["largePersonGroupId"] = largePersonGroupId }))))
                {
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                    using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/identify", content))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        identifyResults = JsonConvert.DeserializeObject<List<Dictionary<string, object>>>(contentString) ?? [];
                    }
                }
    
                foreach (var identifyResult in identifyResults)
                {
                    string faceId = (string)identifyResult["faceId"];
                    List<Dictionary<string, object>> candidates = JsonConvert.DeserializeObject<List<Dictionary<string, object>>>(((JArray)identifyResult["candidates"]).ToString()) ?? [];
                    if (candidates.Count == 0)
                    {
                        Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {faceId},");
                        continue;
                    }
    
                    string? personName = null;
                    using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{candidates.First()["personId"]}"))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        personName = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["name"]);
                    }
                    Console.WriteLine($"Person '{personName}' is identified for the face in: {sourceImageFileName} - {faceId}," +
                        $" confidence: {candidates.First()["confidence"]}.");
    
                    Dictionary<string, object> verifyResult = new Dictionary<string, object>();
                    using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = faceId, ["personId"] = candidates.First()["personId"], ["largePersonGroupId"] = largePersonGroupId }))))
                    {
                        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                        using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/verify", content))
                        {
                            string contentString = await response.Content.ReadAsStringAsync();
                            verifyResult = JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString) ?? [];
                        }
                    }
                    Console.WriteLine($"Verification result: is a match? {verifyResult["isIdentical"]}. confidence: {verifyResult["confidence"]}");
                }
                Console.WriteLine();
    
                // Delete large person group.
                Console.WriteLine("========DELETE PERSON GROUP========");
                Console.WriteLine();
                await httpClient.DeleteAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}");
                Console.WriteLine($"Deleted the person group {largePersonGroupId}.");
                Console.WriteLine();
            }
        }
    }
    
  3. Run the application

    Run the application by clicking the Debug button at the top of the IDE window.
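
    Alternatively, if you work from the command line, a typical way to build and run the project (assuming the .NET SDK is installed) is to execute the following from the project directory:

    dotnet run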

Output

========IDENTIFY FACES========

Create a person group (18d1c443-a01b-46a4-9191-121f74a831cd).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 18d1c443-a01b-46a4-9191-121f74a831cd.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for the face in: identification1.jpg - ad813534-9141-47b4-bfba-24919223966f, confidence: 0.96807.
Verification result: is a match? True. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 1a39420e-f517-4cee-a898-5d968dac1a7e, confidence: 0.96902.
Verification result: is a match? True. confidence: 0.96902
No person is identified for the face in: identification1.jpg - 889394b1-e30f-4147-9be1-302beb5573f3,
Person 'Family1-Son' is identified for the face in: identification1.jpg - 0557d87b-356c-48a8-988f-ce0ad2239aa5, confidence: 0.9281.
Verification result: is a match? True. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 18d1c443-a01b-46a4-9191-121f74a831cd.

End of quickstart.

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service runs). The results the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
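
For example, the REST calls shown earlier in this quickstart already pass the recognition model when the group is created, so moving to a newer model amounts to re-creating the group with that model, re-adding the same enrollment images, and training again. A minimal curl sketch of those two calls (<your_endpoint>, <your_key>, and <group_id> are placeholders for your own resource and group):

curl -X PUT "<your_endpoint>/face/v1.0/largepersongroups/<group_id>" \
  -H "Ocp-Apim-Subscription-Key: <your_key>" \
  -H "Content-Type: application/json" \
  -d '{"name": "<group_id>", "recognitionModel": "recognition_04"}'

curl -X POST "<your_endpoint>/face/v1.0/largepersongroups/<group_id>/train" \
  -H "Ocp-Apim-Subscription-Key: <your_key>"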

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Python. The Face service provides you with access to advanced algorithms that detect and recognize human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • An Azure subscription - Create one for free
  • Python 3.x
    • Your installation of Python should include pip. You can check whether pip is installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Install the client library

    After installing Python, you can install the client library with:

    pip install --upgrade azure-ai-vision-face
    
  2. Create a new Python application

    Create a new Python script—for example, quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.

    Note

    If you haven't received access to the Face service by using the intake form, some of these functions won't work.

    import os
    import time
    import uuid
    import requests
    
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.vision.face import FaceClient
    from azure.ai.vision.face.models import (
        FaceAttributeTypeRecognition04,
        FaceDetectionModel,
        FaceRecognitionModel,
        QualityForRecognition,
    )
    
    
    # This key will serve all examples in this document.
    KEY = os.environ["FACE_APIKEY"]
    
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = os.environ["FACE_ENDPOINT"]
    
    # Used in the Large Person Group Operations and Delete Large Person Group examples.
    # LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
    LARGE_PERSON_GROUP_ID = str(uuid.uuid4())  # assign a random ID (or name it anything)
    
    HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    
    # Create an authenticated FaceClient.
    with FaceClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY)) as face_client:
        '''
        Create the LargePersonGroup
        '''
        # Create empty Large Person Group. Large Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
        print("Person group:", LARGE_PERSON_GROUP_ID)
        response = requests.put(
            ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}",
            headers=HEADERS,
            json={"name": LARGE_PERSON_GROUP_ID, "recognitionModel": "recognition_04"})
        response.raise_for_status()
    
        # Define woman friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Woman"})
        response.raise_for_status()
        woman = response.json()
        # Define man friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Man"})
        response.raise_for_status()
        man = response.json()
        # Define child friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Child"})
        response.raise_for_status()
        child = response.json()
    
        '''
        Detect faces and register them to each person
        '''
        # Image URLs for each person, hosted in the Azure-Samples data repository.
        woman_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom2.jpg",  # noqa: E501
        ]
        man_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad2.jpg",  # noqa: E501
        ]
        child_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son2.jpg",  # noqa: E501
        ]
    
        # Add to woman person
        for image in woman_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
    
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{woman['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {woman['personId']}")
    
    
        # Add to man person
        for image in man_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
    
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{man['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {man['personId']}")
    
        # Add to child person
        for image in child_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{child['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {child['personId']}")
    
        '''
        Train LargePersonGroup
        '''
        # Train the large person group
        print(f"Train the person group {LARGE_PERSON_GROUP_ID}")
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/train", headers=HEADERS)
        response.raise_for_status()
    
        while (True):
            response = requests.get(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/training", headers=HEADERS)
            response.raise_for_status()
            training_status = response.json()["status"]
            if training_status == "succeeded":
                break
        print(f"The person group {LARGE_PERSON_GROUP_ID} is trained successfully.")
    
        '''
        Identify a face against a defined LargePersonGroup
        '''
        # Group image for testing against
        test_image = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg"  # noqa: E501
    
        print("Pausing for 60 seconds to avoid triggering rate limit on free account...")
        time.sleep(60)
    
        # Detect faces
        face_ids = []
        # We use detection model 03 to get better performance, recognition model 04 to support quality for
        # recognition attribute.
        faces = face_client.detect_from_url(
            url=test_image,
            detection_model=FaceDetectionModel.DETECTION_03,
            recognition_model=FaceRecognitionModel.RECOGNITION_04,
            return_face_id=True,
            return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
        for face in faces:
            # Only take the face if it is of sufficient quality.
            if face.face_attributes.quality_for_recognition != QualityForRecognition.LOW:
                face_ids.append(face.face_id)
    
        # Identify faces
        response = requests.post(
            ENDPOINT + f"/face/v1.0/identify",
            headers=HEADERS,
            json={"faceIds": face_ids, "largePersonGroupId": LARGE_PERSON_GROUP_ID})
        response.raise_for_status()
        results = response.json()
        print("Identifying faces in image")
        if not results:
            print("No person identified in the person group")
        for identifiedFace in results:
            if len(identifiedFace["candidates"]) > 0:
                print(f"Person is identified for face ID {identifiedFace['faceId']} in image, with a confidence of "
                      f"{identifiedFace['candidates'][0]['confidence']}.")  # Get topmost confidence score
    
                # Verify faces
                response = requests.post(
                    ENDPOINT + f"/face/v1.0/verify",
                    headers=HEADERS,
                    json={"faceId": identifiedFace["faceId"], "personId": identifiedFace["candidates"][0]["personId"], "largePersonGroupId": LARGE_PERSON_GROUP_ID})
                response.raise_for_status()
                verify_result = response.json()
                print(f"verification result: {verify_result['isIdentical']}. confidence: {verify_result['confidence']}")
            else:
                print(f"No person identified for face ID {identifiedFace['faceId']} in image.")
    
        print()
    
        # Delete the large person group
        response = requests.delete(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}", headers=HEADERS)
        response.raise_for_status()
        print(f"The person group {LARGE_PERSON_GROUP_ID} is deleted.")
    
        print()
        print("End of quickstart.")
    
    
  3. Run your face recognition app from the application directory with the python command.

    python quickstart-file.py
    

    Tip

    The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service runs). The results the model produces might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer version of a model, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Output

Person group: ad12b2db-d892-48ec-837a-0e7168c18224
face 335a2cb1-5211-4c29-9c45-776dd014b2af added to person 9ee65510-81a5-47e5-9e50-66727f719465
face df57eb50-4a13-4f93-b804-cd108327ad5a added to person 9ee65510-81a5-47e5-9e50-66727f719465
face d8b7b8b8-3ca6-4309-b76e-eeed84f7738a added to person 00651036-4236-4004-88b9-11466c251548
face dffbb141-f40b-4392-8785-b6c434fa534e added to person 00651036-4236-4004-88b9-11466c251548
face 9cdac36e-5455-447b-a68d-eb1f5e2ec27d added to person 23614724-b132-407a-aaa0-67003987ce93
face d8208412-92b7-4b8d-a2f8-3926c839c87e added to person 23614724-b132-407a-aaa0-67003987ce93
Train the person group ad12b2db-d892-48ec-837a-0e7168c18224
The person group ad12b2db-d892-48ec-837a-0e7168c18224 is trained successfully.
Pausing for 60 seconds to avoid triggering rate limit on free account...
Identifying faces in image
Person is identified for face ID bc52405a-5d83-4500-9218-557468ccdf99 in image, with a confidence of 0.96726.
verification result: True. confidence: 0.96726
Person is identified for face ID dfcc3fc8-6252-4f3a-8205-71466f39d1a7 in image, with a confidence of 0.96925.
verification result: True. confidence: 0.96925
No person identified for face ID 401c581b-a178-45ed-8205-7692f6eede88 in image.
Person is identified for face ID 8809d9c7-e362-4727-8c95-e1e44f5c2e8a in image, with a confidence of 0.92898.
verification result: True. confidence: 0.92898

The person group ad12b2db-d892-48ec-837a-0e7168c18224 is deleted.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Java. The Face service provides you with access to advanced algorithms that detect and recognize human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (Maven) | Samples

Prerequisites

  • An Azure subscription - Create one for free
  • The current version of the Java Development Kit (JDK)
  • The Apache Maven build tool. On Linux, install from the distribution repositories if available.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Install the client library

    Open a console window and create a new folder for your quickstart application. Copy the following content to a new file. Save the file as pom.xml in your project directory.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>my-application-name</artifactId>
      <version>1.0.0</version>
      <dependencies>
        <!-- https://mvnrepository.com/artifact/com.azure/azure-ai-vision-face -->
        <dependency>
          <groupId>com.azure</groupId>
          <artifactId>azure-ai-vision-face</artifactId>
          <version>1.0.0-beta.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
        <dependency>
          <groupId>org.apache.httpcomponents</groupId>
          <artifactId>httpclient</artifactId>
          <version>4.5.13</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
        <dependency>
          <groupId>com.google.code.gson</groupId>
          <artifactId>gson</artifactId>
          <version>2.11.0</version>
        </dependency>
      </dependencies>
    </project>
    

    Install the SDK and dependencies by running the following in your project directory:

    mvn clean dependency:copy-dependencies
    
  2. Create a new Java application

    Create a file named Quickstart.java, open it in a text editor, and paste in the following code:

    Note

    If you haven't received access to the Face service by using the intake form, some of these functions won't work.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.UUID;
    
    import com.azure.ai.vision.face.FaceClient;
    import com.azure.ai.vision.face.FaceClientBuilder;
    import com.azure.ai.vision.face.models.DetectOptions;
    import com.azure.ai.vision.face.models.FaceAttributeType;
    import com.azure.ai.vision.face.models.FaceDetectionModel;
    import com.azure.ai.vision.face.models.FaceDetectionResult;
    import com.azure.ai.vision.face.models.FaceRecognitionModel;
    import com.azure.ai.vision.face.models.QualityForRecognition;
    import com.azure.core.credential.KeyCredential;
    import com.google.gson.Gson;
    import com.google.gson.reflect.TypeToken;
    
    import org.apache.http.HttpHeaders;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpDelete;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.client.methods.HttpPut;
    import org.apache.http.client.utils.URIBuilder;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.message.BasicHeader;
    import org.apache.http.util.EntityUtils;
    
    public class Quickstart {
        // LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
        private static final String LARGE_PERSON_GROUP_ID = UUID.randomUUID().toString();
    
        // URL path for the images.
        private static final String IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
        // From your Face subscription in the Azure portal, get your subscription key and endpoint.
        private static final String SUBSCRIPTION_KEY = System.getenv("FACE_APIKEY");
        private static final String ENDPOINT = System.getenv("FACE_ENDPOINT");
    
        public static void main(String[] args) throws Exception {
            // Recognition model 4 was released in 2021 February.
            // It is recommended since its accuracy is improved
            // on faces wearing masks compared with model 3,
            // and its overall accuracy is improved compared
            // with models 1 and 2.
            FaceRecognitionModel RECOGNITION_MODEL4 = FaceRecognitionModel.RECOGNITION_04;
    
            // Authenticate.
            FaceClient client = authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
            // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
            identifyInLargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4);
    
            System.out.println("End of quickstart.");
        }
    
        /*
         *	AUTHENTICATE
         *	Uses subscription key and region to create a client.
         */
        public static FaceClient authenticate(String endpoint, String key) {
            return new FaceClientBuilder().endpoint(endpoint).credential(new KeyCredential(key)).buildClient();
        }
    
    
        // Detect faces from image url for recognition purposes. This is a helper method for other functions in this quickstart.
        // Parameter `returnFaceId` of `DetectOptions` must be set to `true` (by default) for recognition purposes.
        // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute. 
        // Recognition model must be set to recognition_03 or recognition_04 as a result.
        // Result faces with insufficient quality for recognition are filtered out. 
        // The field `faceId` in returned `DetectedFace`s will be used in Verify and Identify.
        // It will expire 24 hours after the detection call.
        private static List<FaceDetectionResult> detectFaceRecognize(FaceClient faceClient, String url, FaceRecognitionModel recognitionModel) {
            // Detect faces from image URL.
            DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, true).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
            List<FaceDetectionResult> detectedFaces = faceClient.detect(url, options);
            List<FaceDetectionResult> sufficientQualityFaces = detectedFaces.stream().filter(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.LOW).collect(Collectors.toList());
            System.out.println(detectedFaces.size() + " face(s) with " + sufficientQualityFaces.size() + " having sufficient quality for recognition.");
    
            return sufficientQualityFaces;
        }
    
        /*
         * IDENTIFY FACES
         * To identify faces, you need to create and define a large person group.
         * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns
         * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects,
         * which have a prediction confidence value.
         */
        public static void identifyInLargePersonGroup(FaceClient client, String url, FaceRecognitionModel recognitionModel) throws Exception {
            System.out.println("========IDENTIFY FACES========");
            System.out.println();
    
            // Create a dictionary for all your images, grouping similar ones under the same key.
            Map<String, String[]> personDictionary = new LinkedHashMap<String, String[]>();
            personDictionary.put("Family1-Dad", new String[]{"Family1-Dad1.jpg", "Family1-Dad2.jpg"});
            personDictionary.put("Family1-Mom", new String[]{"Family1-Mom1.jpg", "Family1-Mom2.jpg"});
            personDictionary.put("Family1-Son", new String[]{"Family1-Son1.jpg", "Family1-Son2.jpg"});
            // A group photo that includes some of the persons you seek to identify from your dictionary.
            String sourceImageFileName = "identification1.jpg";
    
            // Create a large person group.
            System.out.println("Create a person group (" + LARGE_PERSON_GROUP_ID + ").");
            List<BasicHeader> headers = Arrays.asList(new BasicHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY), new BasicHeader(HttpHeaders.CONTENT_TYPE, "application/json"));
            HttpClient httpClient = HttpClients.custom().setDefaultHeaders(headers).build();
            createLargePersonGroup(httpClient, recognitionModel);
            // The similar faces will be grouped into a single large person group person.
            for (String groupedFace : personDictionary.keySet()) {
                // Limit TPS
                Thread.sleep(250);
                String personId = createLargePersonGroupPerson(httpClient, groupedFace);
                System.out.println("Create a person group person '" + groupedFace + "'.");
    
                // Add face to the large person group person.
                for (String similarImage : personDictionary.get(groupedFace)) {
                    System.out.println("Check whether image is of sufficient quality for recognition");
                    DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, false).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
                    List<FaceDetectionResult> detectedFaces1 = client.detect(url + similarImage, options);
                    if (detectedFaces1.stream().anyMatch(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.HIGH)) {
                        continue;
                    }
    
                    if (detectedFaces1.size() != 1) {
                        continue;
                    }
    
                    // add face to the large person group
                    System.out.println("Add face to the person group person(" + groupedFace + ") from image `" + similarImage + "`");
                    addFaceToLargePersonGroup(httpClient, personId, url + similarImage);
                }
            }
    
            // Start to train the large person group.
            System.out.println();
            System.out.println("Train person group " + LARGE_PERSON_GROUP_ID + ".");
            trainLargePersonGroup(httpClient);
    
            // Wait until the training is completed.
            while (true) {
                Thread.sleep(1000);
                String trainingStatus = getLargePersonGroupTrainingStatus(httpClient);
                System.out.println("Training status: " + trainingStatus + ".");
                if ("succeeded".equals(trainingStatus)) {
                    break;
                }
            }
            System.out.println();
    
            System.out.println("Pausing for 60 seconds to avoid triggering rate limit on free account...");
            Thread.sleep(60000);
    
            // Detect faces from source image url.
            List<FaceDetectionResult> detectedFaces = detectFaceRecognize(client, url + sourceImageFileName, recognitionModel);
            // Add detected faceId to sourceFaceIds.
            List<String> sourceFaceIds = detectedFaces.stream().map(FaceDetectionResult::getFaceId).collect(Collectors.toList());
    
            // Identify the faces in a large person group.
            List<Map<String, Object>> identifyResults = identifyFacesInLargePersonGroup(httpClient, sourceFaceIds);
    
            for (Map<String, Object> identifyResult : identifyResults) {
                String faceId = identifyResult.get("faceId").toString();
                List<Map<String, Object>> candidates = new Gson().fromJson(new Gson().toJson(identifyResult.get("candidates")), new TypeToken<List<Map<String, Object>>>(){});
                if (candidates.isEmpty()) {
                    System.out.println("No person is identified for the face in: " + sourceImageFileName + " - " + faceId + ".");
                    continue;
                }
    
                Map<String, Object> candidate = candidates.stream().findFirst().orElseThrow();
                String personName = getLargePersonGroupPersonName(httpClient, candidate.get("personId").toString());
                System.out.println("Person '" + personName + "' is identified for the face in: " + sourceImageFileName + " - " + faceId + ", confidence: " + candidate.get("confidence") + ".");
    
                Map<String, Object> verifyResult = verifyFaceWithLargePersonGroupPerson(httpClient, faceId, candidate.get("personId").toString());
                System.out.println("Verification result: is a match? " + verifyResult.get("isIdentical") + ". confidence: " + verifyResult.get("confidence"));
            }
            System.out.println();
    
            // Delete large person group.
            System.out.println("========DELETE PERSON GROUP========");
            System.out.println();
            deleteLargePersonGroup(httpClient);
            System.out.println("Deleted the person group " + LARGE_PERSON_GROUP_ID + ".");
            System.out.println();
        }
    
        private static void createLargePersonGroup(HttpClient httpClient, FaceRecognitionModel recognitionModel) throws Exception {
            HttpPut request = new HttpPut(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID).build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("name", LARGE_PERSON_GROUP_ID, "recognitionModel", recognitionModel.toString()))));
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static String createLargePersonGroupPerson(HttpClient httpClient, String name) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("name", name))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("personId").toString();
        }
    
        private static void addFaceToLargePersonGroup(HttpClient httpClient, String personId, String url) throws Exception {
            URIBuilder builder = new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons/" + personId + "/persistedfaces");
            builder.setParameter("detectionModel", "detection_03");
            HttpPost request = new HttpPost(builder.build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("url", url))));
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static void trainLargePersonGroup(HttpClient httpClient) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/train").build());
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static String getLargePersonGroupTrainingStatus(HttpClient httpClient) throws Exception {
            HttpGet request = new HttpGet(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/training").build());
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("status").toString();
        }
    
        private static List<Map<String, Object>> identifyFacesInLargePersonGroup(HttpClient httpClient, List<String> sourceFaceIds) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/identify").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("faceIds", sourceFaceIds, "largePersonGroupId", LARGE_PERSON_GROUP_ID))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<List<Map<String, Object>>>(){});
        }
    
        private static String getLargePersonGroupPersonName(HttpClient httpClient, String personId) throws Exception {
            HttpGet request = new HttpGet(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons/" + personId).build());
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("name").toString();
        }
    
        private static Map<String, Object> verifyFaceWithLargePersonGroupPerson(HttpClient httpClient, String faceId, String personId) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/verify").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("faceId", faceId, "personId", personId, "largePersonGroupId", LARGE_PERSON_GROUP_ID))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){});
        }
    
        private static void deleteLargePersonGroup(HttpClient httpClient) throws Exception {
            HttpDelete request = new HttpDelete(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID).build());
            httpClient.execute(request);
            request.releaseConnection();
        }
    }
    
  3. Run your face recognition app from the application directory with the javac and java commands.

    javac -cp target\dependency\* Quickstart.java
    java -cp .;target\dependency\* Quickstart
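
    The classpath separator and path style above are for Windows. On Linux or macOS, the equivalent commands (assuming the same target/dependency layout produced by the earlier Maven step) would typically be:

    javac -cp "target/dependency/*" Quickstart.java
    java -cp ".:target/dependency/*" Quickstart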
    

Output

========IDENTIFY FACES========

Create a person group (3761e61a-16b2-4503-ad29-ed34c58ba676).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 3761e61a-16b2-4503-ad29-ed34c58ba676.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition.
Person 'Family1-Dad' is identified for the face in: identification1.jpg - d7995b34-1b72-47fe-82b6-e9877ed2578d, confidence: 0.96807.
Verification result: is a match? true. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 844da0ed-4890-4bbf-a531-e638797f96fc, confidence: 0.96902.
Verification result: is a match? true. confidence: 0.96902
No person is identified for the face in: identification1.jpg - c543159a-57f3-4872-83ce-2d4a733d71c9.
Person 'Family1-Son' is identified for the face in: identification1.jpg - 414fac6c-7381-4dba-9c8b-fd26d52e879b, confidence: 0.9281.
Verification result: is a match? true. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 3761e61a-16b2-4503-ad29-ed34c58ba676.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Java to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for JavaScript. The Face service provides you with access to advanced algorithms that detect and recognize human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • An Azure subscription - Create one for free
  • The latest version of Node.js
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select the Go to resource button under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.

Identify and verify faces

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a node application with a package.json file.

    npm init
    
  2. Install the @azure-rest/ai-vision-face npm package:

    npm install @azure-rest/ai-vision-face
    

    Your app's package.json file is updated with the dependencies.

  3. Create a file named index.js, open it in a text editor, and paste in the following code:

    Note

    If you haven't received access to the Face service by using the intake form, some of these functions won't work.

    const { randomUUID } = require("crypto");
    
    const { AzureKeyCredential } = require("@azure/core-auth");
    
    const createFaceClient = require("@azure-rest/ai-vision-face").default,
      { getLongRunningPoller } = require("@azure-rest/ai-vision-face");
    
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
    
    const main = async () => {
      const endpoint = process.env["FACE_ENDPOINT"] ?? "<endpoint>";
      const apikey = process.env["FACE_APIKEY"] ?? "<apikey>";
      const credential = new AzureKeyCredential(apikey);
      const client = createFaceClient(endpoint, credential);
    
      const imageBaseUrl =
        "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
      const largePersonGroupId = randomUUID();
    
      console.log("========IDENTIFY FACES========");
      console.log();
    
      // Create a dictionary for all your images, grouping similar ones under the same key.
      const personDictionary = {
        "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
        "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
        "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
      };
    
      // A group photo that includes some of the persons you seek to identify from your dictionary.
      const sourceImageFileName = "identification1.jpg";
    
      // Create a large person group.
      console.log(`Creating a person group with ID: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).put({
        body: {
          name: largePersonGroupId,
          recognitionModel: "recognition_04",
        },
      });
    
      // The similar faces will be grouped into a single large person group person.
      console.log("Adding faces to person group...");
      await Promise.all(
        Object.keys(personDictionary).map(async (name) => {
          console.log(`Create a persongroup person: ${name}`);
          const createLargePersonGroupPersonResponse = await client
            .path("/largepersongroups/{largePersonGroupId}/persons", largePersonGroupId)
            .post({
              body: { name },
            });
    
          const { personId } = createLargePersonGroupPersonResponse.body;
    
          await Promise.all(
            personDictionary[name].map(async (similarImage) => {
              // Check if the image is of sufficient quality for recognition.
              const detectResponse = await client.path("/detect").post({
                contentType: "application/json",
                queryParameters: {
                  detectionModel: "detection_03",
                  recognitionModel: "recognition_04",
                  returnFaceId: false,
                  returnFaceAttributes: ["qualityForRecognition"],
                },
                body: { url: `${imageBaseUrl}${similarImage}` },
              });
    
              const sufficientQuality = detectResponse.body.every(
                (face) => face.faceAttributes?.qualityForRecognition === "high",
              );
              if (!sufficientQuality) {
                return;
              }
    
              if (detectResponse.body.length != 1) {
                return;
              }
    
              // Quality is sufficient, add to group.
              console.log(
                `Add face to the person group person: (${name}) from image: (${similarImage})`,
              );
              await client
                .path(
                  "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces",
                  largePersonGroupId,
                  personId,
                )
                .post({
                  queryParameters: { detectionModel: "detection_03" },
                  body: { url: `${imageBaseUrl}${similarImage}` },
                });
            }),
          );
        }),
      );
      console.log("Done adding faces to person group.");
    
      // Start to train the large person group.
      console.log();
      console.log(`Training person group: ${largePersonGroupId}`);
      const trainResponse = await client
        .path("/largepersongroups/{largePersonGroupId}/train", largePersonGroupId)
        .post();
      const poller = await getLongRunningPoller(client, trainResponse);
      await poller.pollUntilDone();
      console.log(`Training status: ${poller.getOperationState().status}`);
      if (poller.getOperationState().status !== "succeeded") {
        return;
      }
    
      console.log("Pausing for 60 seconds to avoid triggering rate limit on free account...");
      await sleep(60000);
    
      // Detect faces from source image url and only take those with sufficient quality for recognition.
      const detectResponse = await client.path("/detect").post({
        contentType: "application/json",
        queryParameters: {
          detectionModel: "detection_03",
          recognitionModel: "recognition_04",
          returnFaceId: true,
          returnFaceAttributes: ["qualityForRecognition"],
        },
        body: { url: `${imageBaseUrl}${sourceImageFileName}` },
      });
      const faceIds = detectResponse.body
        .filter((face) => face.faceAttributes?.qualityForRecognition !== "low")
        .map((face) => face.faceId);
    
      // Identify the faces in a large person group.
      const identifyResponse = await client.path("/identify").post({
        body: { faceIds, largePersonGroupId: largePersonGroupId },
      });
      await Promise.all(
        identifyResponse.body.map(async (result) => {
          try {
            const getLargePersonGroupPersonResponse = await client
              .path(
                "/largepersongroups/{largePersonGroupId}/persons/{personId}",
                largePersonGroupId,
                result.candidates[0].personId,
              )
              .get();
            const person = getLargePersonGroupPersonResponse.body;
            console.log(
              `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${result.candidates[0].confidence}`,
            );
    
            // Verification:
            const verifyResponse = await client.path("/verify").post({
              body: {
                faceId: result.faceId,
                largePersonGroupId: largePersonGroupId,
                personId: person.personId,
              },
            });
            console.log(
              `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`,
            );
          } catch (error) {
            console.log(`No persons identified for face with ID ${result.faceId}`);
          }
        }),
      );
      console.log();
    
      // Delete large person group.
      console.log(`Deleting person group: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).delete();
      console.log();
    
      console.log("Done.");
    };
    
    main().catch(console.error);
    
  4. Run the application with the node command on your quickstart file.

    node index.js
    

Output

========IDENTIFY FACES========

Creating a person group with ID: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Adding faces to person group...
Create a persongroup person: Family1-Dad
Create a persongroup person: Family1-Mom
Create a persongroup person: Family1-Son
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad1.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom1.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son1.jpg)
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad2.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom2.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son2.jpg)
Done adding faces to person group.

Training person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Training status: succeeded
Pausing for 60 seconds to avoid triggering rate limit on free account...
No persons identified for face with ID 56380623-8bf0-414a-b9d9-c2373386b7be
Person: Family1-Dad is identified for face in: identification1.jpg with ID: c45052eb-a910-4fd3-b1c3-f91ccccc316a. Confidence: 0.96807
Person: Family1-Son is identified for face in: identification1.jpg with ID: 8dce9b50-513f-4fe2-9e19-352acfd622b3. Confidence: 0.9281
Person: Family1-Mom is identified for face in: identification1.jpg with ID: 75868da3-66f6-4b5f-a172-0b619f4d74c1. Confidence: 0.96902
Verification result between face c45052eb-a910-4fd3-b1c3-f91ccccc316a and person 35a58d14-fd58-4146-9669-82ed664da357: true with confidence: 0.96807
Verification result between face 8dce9b50-513f-4fe2-9e19-352acfd622b3 and person 2d4d196c-5349-431c-bf0c-f1d7aaa180ba: true with confidence: 0.9281
Verification result between face 75868da3-66f6-4b5f-a172-0b619f4d74c1 and person 35d5de9e-5f92-4552-8907-0d0aac889c3e: true with confidence: 0.96902

Deleting person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f

Done.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for JavaScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0+, or a similar command-line application.
  • cURL installed.

Identify and verify faces

Note

If you haven't received access to the Face service using the intake form, some of these functions won't work.

  1. First, call the Detect API on the source face. This is the face that you'll try to identify from the larger group. Copy the following command to a text editor, insert your own key and endpoint, and then copy it into a shell window and run it.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg""}"
    

    Save the returned face ID string to a temporary location. You'll use it again at the end.
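
    For reference, a successful Detect call returns a JSON array with one entry per detected face. A trimmed, hypothetical response (the ID and coordinates are placeholders) looks similar to the following; the faceId value is the string to save:

    [
        {
            "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
            "faceRectangle": {
                "top": 131,
                "left": 177,
                "width": 162,
                "height": 162
            }
        }
    ]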

  2. Next, you'll need to create a LargePersonGroup and give it an arbitrary ID that matches the regex pattern ^[a-z0-9-_]+$. This object stores the aggregated face data of several persons. Run the following command, inserting your own key. Optionally, change the group's name and metadata in the request body.

    curl.exe -v -X PUT "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""large-person-group-name"",
        ""userData"": ""User-provided data attached to the large person group."",
        ""recognitionModel"": ""recognition_04""
    }"
    

    Save the specified ID of the created group to a temporary location.

  3. Next, you'll create Person objects that belong to the group. Run the following command, inserting your own key and the ID of the LargePersonGroup from the previous step. This command creates a Person named "Family1-Dad".

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""Family1-Dad"",
        ""userData"": ""User-provided data attached to the person.""
    }"
    

    After you run this command, run it again with different input data to create more Person objects: "Family1-Mom", "Family1-Son", "Family1-Daughter", "Family2-Lady", and "Family2-Man".

    Save the ID of each Person that's created; it's important to keep track of which person ID belongs to which person.
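
    Each successful Person creation call returns a small JSON body with the new person's ID. A hypothetical response (the GUID is a placeholder) looks similar to the following; the personId value is what you record for each person:

    {
        "personId": "25985303-c537-4467-b41d-bdb45cd95ca1"
    }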

  4. Next, you'll need to detect new faces and associate them with the Person objects that exist. The following command detects a face from the image Family1-Dad1.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the "Family1-Dad" Person object. The image name corresponds to the name of the created Person. Also enter the LargePersonGroup ID and your key in the appropriate fields.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg""}"
    

    Then, run the above command again with a different source image and target Person. The images available are: Family1-Dad1.jpg, Family1-Dad2.jpg, Family1-Mom1.jpg, Family1-Mom2.jpg, Family1-Son1.jpg, Family1-Son2.jpg, Family1-Daughter1.jpg, Family1-Daughter2.jpg, Family2-Lady1.jpg, Family2-Lady2.jpg, Family2-Man1.jpg, and Family2-Man2.jpg. Be sure that the Person whose ID you specify in the API call matches the name of the image file in the request body.

    At the end of this step, you should have multiple Person objects that each have one or more corresponding faces, detected directly from the provided images.
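
    For reference, each call that adds a face returns a JSON body with the ID of the stored face; a hypothetical response (the GUID is a placeholder) looks similar to this:

    {
        "persistedFaceId": "e93e0db1-9a12-46b3-8a5e-5b87bba9c8f1"
    }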

  5. Next, train the LargePersonGroup with the current face data. The training operation teaches the model how to associate facial features, sometimes aggregated from multiple source images, with each person. Insert the LargePersonGroup ID and your key before running the command.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data ""
    
  6. Check whether the training status succeeded. If it hasn't, wait a while and then query again.

    curl.exe -v "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/training" -H "Ocp-Apim-Subscription-Key: {subscription key}"
    
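    The training status call returns a JSON document whose status field reports the state of the operation. A hypothetical successful response (timestamps are placeholders, and the exact set of fields can vary) looks similar to this:

    {
        "status": "succeeded",
        "createdDateTime": "2024-01-01T00:00:00.0000000Z",
        "lastActionDateTime": "2024-01-01T00:00:30.0000000Z",
        "message": null
    }
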
  7. Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID"",
        ""faceIds"": [
            ""INSERT_SOURCE_FACE_ID""
        ],
        ""maxNumOfCandidatesReturned"": 1,
        ""confidenceThreshold"": 0.5
    }"
    

    The response should give you a Person ID indicating the person identified with the source face. It should be the ID that corresponds to the "Family1-Dad" person, because the source face is of that person.
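
    A hypothetical Identify response (the IDs and confidence value are placeholders) contains one entry per queried face ID, each with a list of candidate persons ranked by confidence:

    [
        {
            "faceId": "INSERT_SOURCE_FACE_ID",
            "candidates": [
                {
                    "personId": "INSERT_PERSON_ID",
                    "confidence": 0.92
                }
            ]
        }
    ]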

  8. To do face verification, use the person ID returned in the previous step, the LargePersonGroup ID, and the source face ID. Insert these values into the fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/verify" `
    -H "Content-Type: application/json" `
    -H "Ocp-Apim-Subscription-Key: {subscription key}" `
    --data-ascii "{
        ""faceId"": ""INSERT_SOURCE_FACE_ID"",
        ""personId"": ""INSERT_PERSON_ID"",
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID""
    }"
    

    The response should give you a boolean verification result along with a confidence value.
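
    A hypothetical Verify response looks similar to the following; isIdentical reports whether the face and the person match, and confidence is the score you can compare against your own threshold:

    {
        "isIdentical": true,
        "confidence": 0.9
    }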

Clean up resources

To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.

curl.exe -v -X DELETE "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.