Azure Text Analytics client library for Python - version 5.3.0

The Azure Cognitive Service for Language is a cloud-based service that provides Natural Language Processing (NLP) features for understanding and analyzing text, and includes the following main features:

  • Sentiment Analysis
  • Named Entity Recognition
  • Language Detection
  • Key Phrase Extraction
  • Entity Linking
  • Multiple Analysis
  • Personally Identifiable Information (PII) Detection
  • Text Analytics for Health
  • Custom Named Entity Recognition
  • Custom Text Classification
  • Extractive Text Summarization
  • Abstractive Text Summarization

| Source code | Package (PyPI) | Package (Conda) | API reference documentation | Product documentation | Samples |

Getting started

Prerequisites

Create a Cognitive Services or Language service resource

The Language service supports both multi-service and single-service access. Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Language service access only, create a Language service resource. You can create the resource using the Azure Portal or Azure CLI by following the steps in this document.

Interaction with the service using the client library begins with a client. To create a client object, you will need the Cognitive Services or Language service endpoint to your resource, and a credential that allows you access:

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

credential = AzureKeyCredential("<api_key>")
text_analytics_client = TextAnalyticsClient(endpoint="https://<resource-name>.cognitiveservices.azure.com/", credential=credential)

Note that for some Cognitive Services resources the endpoint might look different from the above code snippet. For example: https://<region>.api.cognitive.microsoft.com/

Install the package

Install the Azure Text Analytics client library for Python with pip:

pip install azure-ai-textanalytics

Note that 5.2.X and newer targets the Azure Cognitive Service for Language APIs. These APIs include the text analysis and natural language processing features found in the previous versions of the Text Analytics client library. In addition, the service API has changed from semantic to date-based versioning. This version of the client library defaults to the latest supported API version, which currently is 2023-04-01.

This table shows the relationship between SDK versions and supported API versions of the service:

SDK version                      Supported API version of service
5.3.X - latest stable release    3.0, 3.1, 2022-05-01, 2023-04-01 (default)
5.2.X                            3.0, 3.1, 2022-05-01 (default)
5.1.0                            3.0, 3.1 (default)
5.0.0                            3.0

The API version can be selected by passing the api_version keyword argument into the client. For the latest Language service features, consider selecting the most recent beta API version. For production scenarios, the latest stable version is recommended. Setting to an older version may result in reduced feature compatibility.

Authenticate the client

Get the endpoint

You can find the endpoint for your Language service resource using the Azure Portal or Azure CLI:

# Get the endpoint for the Language service resource
az cognitiveservices account show --name "resource-name" --resource-group "resource-group-name" --query "properties.endpoint"

Get the API key

You can get the API key from the Cognitive Services or Language service resource in the Azure Portal. Alternatively, you can use the Azure CLI snippet below to get the API key of your resource.

az cognitiveservices account keys list --name "resource-name" --resource-group "resource-group-name"

Create a TextAnalyticsClient with an API key credential

Once you have the value for the API key, you can pass it as a string into an instance of AzureKeyCredential. Use the key as the credential parameter to authenticate the client:

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))

Create a TextAnalyticsClient with an Azure Active Directory credential

To use an Azure Active Directory (AAD) token credential, provide an instance of the desired credential type obtained from the azure-identity library. Note that regional endpoints do not support AAD authentication. Create a custom subdomain name for your resource in order to use this type of authentication.

Authentication with AAD requires some initial setup:

After setup, you can choose which type of credential from azure.identity to use. As an example, DefaultAzureCredential can be used to authenticate the client:

Set the values of the client ID, tenant ID, and client secret of the AAD application as environment variables: AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_CLIENT_SECRET

Use the returned token credential to authenticate the client:

import os
from azure.ai.textanalytics import TextAnalyticsClient
from azure.identity import DefaultAzureCredential

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
credential = DefaultAzureCredential()

text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)

Key concepts

TextAnalyticsClient

The Text Analytics client library provides a TextAnalyticsClient to do analysis on batches of documents. It provides both synchronous and asynchronous operations to access a specific use of text analysis, such as language detection or key phrase extraction.

Input

A document is a single unit to be analyzed by the predictive models in the Language service. The input for each operation is passed as a list of documents.

Each document can be passed as a string in the list, e.g.

documents = ["I hated the movie. It was so slow!", "The movie made it into my top ten favorites. What a great movie!"]

or, if you wish to pass in a per-item document id or language/country_hint, they can be passed as a list of DetectLanguageInput or TextDocumentInput, or a dict-like representation of the object:

documents = [
    {"id": "1", "language": "en", "text": "I hated the movie. It was so slow!"},
    {"id": "2", "language": "en", "text": "The movie made it into my top ten favorites. What a great movie!"},
]

See the service limits for the input, including document length limits, maximum batch size, and supported text encodings.

Return Value

The return value for a single document can be a result or error object. A heterogeneous list containing a collection of result and error objects is returned from each operation. These results/errors are index-matched with the order of the provided documents.

A result, such as AnalyzeSentimentResult, is the result of a text analysis operation and contains a prediction or predictions about a document input.

The error object, DocumentError, indicates that the service had trouble processing the document and contains the reason it was unsuccessful.

Document Error Handling

You can filter for a result or error object in the list by using the is_error attribute. For a result object this is always False, and for a DocumentError it is True.

For example, to filter out all DocumentErrors you might use list comprehension:

response = text_analytics_client.analyze_sentiment(documents)
successful_responses = [doc for doc in response if not doc.is_error]

You can also use the kind attribute to filter between result types:

poller = text_analytics_client.begin_analyze_actions(documents, actions)
response = poller.result()
for result in response:
    if result.kind == "SentimentAnalysis":
        print(f"Sentiment is {result.sentiment}")
    elif result.kind == "KeyPhraseExtraction":
        print(f"Key phrases: {result.key_phrases}")
    elif result.is_error is True:
        print(f"Document error: {result.code}, {result.message}")

Long-Running Operations

Long-running operations are operations which consist of an initial request sent to the service to start an operation, followed by polling the service at intervals to determine whether the operation has completed or failed, and if it has succeeded, to get the result.

Methods that support healthcare analysis, custom text analysis, or multiple analyses are modeled as long-running operations. The client exposes a begin_<method-name> method that returns a poller object. Callers should wait for the operation to complete by calling result() on the poller object returned from the begin_<method-name> method. Sample code snippets are provided to illustrate using long-running operations.

Examples

The following section provides several code snippets covering some of the most common Language service tasks, including:

Analyze Sentiment

analyze_sentiment looks at its input text and determines whether its sentiment is positive, negative, neutral or mixed. Its response includes per-sentence sentiment analysis and confidence scores.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    """I had the best day of my life. I decided to go sky-diving and it made me appreciate my whole life so much more.
    I developed a deep-connection with my instructor as well, and I feel as if I've made a life-long friend in her.""",
    """This was a waste of my time. All of the views on this drop are extremely boring, all I saw was grass. 0/10 would
    not recommend to any divers, even first timers.""",
    """This was pretty good! The sights were ok, and I had fun with my instructors! Can't complain too much about my experience""",
    """I only have one word for my experience: WOW!!! I can't believe I have had such a wonderful skydiving company right
    in my backyard this whole time! I will definitely be a repeat customer, and I want to take my grandmother skydiving too,
    I know she'll love it!"""
]


result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True)
docs = [doc for doc in result if not doc.is_error]

print("Let's visualize the sentiment of each of these documents")
for idx, doc in enumerate(docs):
    print(f"Document text: {documents[idx]}")
    print(f"Overall sentiment: {doc.sentiment}")

The returned response is a heterogeneous list of result and error objects: list[AnalyzeSentimentResult, DocumentError]

Please refer to the service documentation for a conceptual discussion of sentiment analysis. To see how to conduct more granular analysis into the opinions related to individual aspects (such as attributes of a product or service) in a text, see here.

Recognize Entities

recognize_entities recognizes and categorizes entities in its input text as people, places, organizations, date/time, quantities, percentages, currencies, and more.

import os
import typing
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
reviews = [
    """I work for Foo Company, and we hired Contoso for our annual founding ceremony. The food
    was amazing and we all can't say enough good words about the quality and the level of service.""",
    """We at the Foo Company re-hired Contoso after all of our past successes with the company.
    Though the food was still great, I feel there has been a quality drop since their last time
    catering for us. Is anyone else running into the same problem?""",
    """Bar Company is over the moon about the service we received from Contoso, the best sliders ever!!!!"""
]

result = text_analytics_client.recognize_entities(reviews)
result = [review for review in result if not review.is_error]
organization_to_reviews: typing.Dict[str, typing.List[str]] = {}

for idx, review in enumerate(result):
    for entity in review.entities:
        print(f"Entity '{entity.text}' has category '{entity.category}'")
        if entity.category == 'Organization':
            organization_to_reviews.setdefault(entity.text, [])
            organization_to_reviews[entity.text].append(reviews[idx])

for organization, reviews in organization_to_reviews.items():
    print(
        "\n\nOrganization '{}' has left us the following review(s): {}".format(
            organization, "\n\n".join(reviews)
        )
    )

The returned response is a heterogeneous list of result and error objects: list[RecognizeEntitiesResult, DocumentError]

Please refer to the service documentation for a conceptual discussion of named entity recognition and supported types.

Recognize Linked Entities

recognize_linked_entities recognizes and disambiguates the identity of each entity found in its input text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends,
    Steve Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped
    down as CEO of Microsoft, and was succeeded by Satya Nadella.
    Microsoft originally moved its headquarters to Bellevue, Washington in January 1979, but is now
    headquartered in Redmond.
    """
]

result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]

print(
    "Let's map each entity to it's Wikipedia article. I also want to see how many times each "
    "entity is mentioned in a document\n\n"
)
entity_to_url = {}
for doc in docs:
    for entity in doc.entities:
        print("Entity '{}' has been mentioned '{}' time(s)".format(
            entity.name, len(entity.matches)
        ))
        if entity.data_source == "Wikipedia":
            entity_to_url[entity.name] = entity.url

The returned response is a heterogeneous list of result and error objects: list[RecognizeLinkedEntitiesResult, DocumentError]

Please refer to the service documentation for a conceptual discussion of entity linking and supported types.

Recognize PII Entities

recognize_pii_entities recognizes and categorizes Personally Identifiable Information (PII) entities in its input text, such as Social Security Numbers, bank account information, credit card numbers, and more.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint, credential=AzureKeyCredential(key)
)
documents = [
    """Parker Doe has repaid all of their loans as of 2020-04-25.
    Their SSN is 859-98-0987. To contact them, use their phone number
    555-555-5555. They are originally from Brazil and have Brazilian CPF number 998.214.865-68"""
]

result = text_analytics_client.recognize_pii_entities(documents)
docs = [doc for doc in result if not doc.is_error]

print(
    "Let's compare the original document with the documents after redaction. "
    "I also want to comb through all of the entities that got redacted"
)
for idx, doc in enumerate(docs):
    print(f"Document text: {documents[idx]}")
    print(f"Redacted document text: {doc.redacted_text}")
    for entity in doc.entities:
        print("...Entity '{}' with category '{}' got redacted".format(
            entity.text, entity.category
        ))

The returned response is a heterogeneous list of result and error objects: list[RecognizePiiEntitiesResult, DocumentError]

Please refer to the service documentation for supported PII entity types.

Note: The Recognize PII Entities service is available in API version v3.1 and newer.

Extract Key Phrases

extract_key_phrases determines the main talking points in its input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns: "food" and "wonderful staff".

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
articles = [
    """
    Washington, D.C. Autumn in DC is a uniquely beautiful season. The leaves fall from the trees
    in a city chock-full of forests, leaving yellow leaves on the ground and a clearer view of the
    blue sky above...
    """,
    """
    Redmond, WA. In the past few days, Microsoft has decided to further postpone the start date of
    its United States workers, due to the pandemic that rages with no end in sight...
    """,
    """
    Redmond, WA. Employees at Microsoft can be excited about the new coffee shop that will open on campus
    once workers no longer have to work remotely...
    """
]

result = text_analytics_client.extract_key_phrases(articles)
for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Key phrases in article #{}: {}".format(
            idx + 1,
            ", ".join(doc.key_phrases)
        ))

The returned response is a heterogeneous list of result and error objects: list[ExtractKeyPhrasesResult, DocumentError]

Please refer to the service documentation for a conceptual discussion of key phrase extraction.

Detect Language

detect_language determines the language of its input text, including the confidence score of the predicted language.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    The concierge Paulette was extremely helpful. Sadly when we arrived the elevator was broken, but with Paulette's help we barely noticed this inconvenience.
    She arranged for our baggage to be brought up to our room with no extra charge and gave us a free meal to refurbish all of the calories we lost from
    walking up the stairs :). Can't say enough good things about my experience!
    """,
    """
    最近由于工作压力太大,我们决定去富酒店度假。那儿的温泉实在太舒服了,我跟我丈夫都完全恢复了工作前的青春精神!加油!
    """
]

result = text_analytics_client.detect_language(documents)
reviewed_docs = [doc for doc in result if not doc.is_error]

print("Let's see what language each review is in!")

for idx, doc in enumerate(reviewed_docs):
    print("Review #{} is in '{}', which has ISO639-1 name '{}'\n".format(
        idx, doc.primary_language.name, doc.primary_language.iso6391_name
    ))

The returned response is a heterogeneous list of result and error objects: list[DetectLanguageResult, DocumentError]

Please refer to the service documentation for a conceptual discussion of language detection, and language and regional support.

Healthcare Entities Analysis

The long-running operation begin_analyze_healthcare_entities extracts entities recognized within the healthcare domain, and identifies relationships between entities within the input document, as well as links to known sources of information in various well-known databases, such as UMLS, CHV, MSH, etc.

import os
import typing
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, HealthcareEntityRelation

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)

documents = [
    """
    Patient needs to take 100 mg of ibuprofen, and 3 mg of potassium. Also needs to take
    10 mg of Zocor.
    """,
    """
    Patient needs to take 50 mg of ibuprofen, and 2 mg of Coumadin.
    """
]

poller = text_analytics_client.begin_analyze_healthcare_entities(documents)
result = poller.result()

docs = [doc for doc in result if not doc.is_error]

print("Let's first visualize the outputted healthcare result:")
for doc in docs:
    for entity in doc.entities:
        print(f"Entity: {entity.text}")
        print(f"...Normalized Text: {entity.normalized_text}")
        print(f"...Category: {entity.category}")
        print(f"...Subcategory: {entity.subcategory}")
        print(f"...Offset: {entity.offset}")
        print(f"...Confidence score: {entity.confidence_score}")
        if entity.data_sources is not None:
            print("...Data Sources:")
            for data_source in entity.data_sources:
                print(f"......Entity ID: {data_source.entity_id}")
                print(f"......Name: {data_source.name}")
        if entity.assertion is not None:
            print("...Assertion:")
            print(f"......Conditionality: {entity.assertion.conditionality}")
            print(f"......Certainty: {entity.assertion.certainty}")
            print(f"......Association: {entity.assertion.association}")
    for relation in doc.entity_relations:
        print(f"Relation of type: {relation.relation_type} has the following roles")
        for role in relation.roles:
            print(f"...Role '{role.name}' with entity '{role.entity.text}'")
    print("------------------------------------------")

print("Now, let's get all of medication dosage relations from the documents")
dosage_of_medication_relations = [
    entity_relation
    for doc in docs
    for entity_relation in doc.entity_relations if entity_relation.relation_type == HealthcareEntityRelation.DOSAGE_OF_MEDICATION
]

Note: Healthcare Entities Analysis is only available with API version v3.1 and newer.

Multiple Analysis

The long-running operation begin_analyze_actions performs multiple analyses over one set of documents in a single request. Currently it is supported using any combination of the following Language APIs in a single request:

  • Entities Recognition
  • PII Entities Recognition
  • Linked Entity Recognition
  • Key Phrase Extraction
  • Sentiment Analysis
  • Custom Entity Recognition (API version 2022-05-01 and newer)
  • Custom Single Label Classification (API version 2022-05-01 and newer)
  • Custom Multi Label Classification (API version 2022-05-01 and newer)
  • Healthcare Entities Analysis (API version 2022-05-01 and newer)
  • Extractive Summarization (API version 2023-04-01 and newer)
  • Abstractive Summarization (API version 2023-04-01 and newer)

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import (
    TextAnalyticsClient,
    RecognizeEntitiesAction,
    RecognizeLinkedEntitiesAction,
    RecognizePiiEntitiesAction,
    ExtractKeyPhrasesAction,
    AnalyzeSentimentAction,
)

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)

documents = [
    'We went to Contoso Steakhouse located at midtown NYC last week for a dinner party, and we adore the spot! '
    'They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) '
    'and he is super nice, coming out of the kitchen and greeted us all.'
    ,

    'We enjoyed very much dining in the place! '
    'The Sirloin steak I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their '
    'online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com! '
    'The only complaint I have is the food didn\'t come fast enough. Overall I highly recommend it!'
]

poller = text_analytics_client.begin_analyze_actions(
    documents,
    display_name="Sample Text Analysis",
    actions=[
        RecognizeEntitiesAction(),
        RecognizePiiEntitiesAction(),
        ExtractKeyPhrasesAction(),
        RecognizeLinkedEntitiesAction(),
        AnalyzeSentimentAction(),
    ],
)

document_results = poller.result()
for doc, action_results in zip(documents, document_results):
    print(f"\nDocument text: {doc}")
    for result in action_results:
        if result.kind == "EntityRecognition":
            print("...Results of Recognize Entities Action:")
            for entity in result.entities:
                print(f"......Entity: {entity.text}")
                print(f".........Category: {entity.category}")
                print(f".........Confidence Score: {entity.confidence_score}")
                print(f".........Offset: {entity.offset}")

        elif result.kind == "PiiEntityRecognition":
            print("...Results of Recognize PII Entities action:")
            for pii_entity in result.entities:
                print(f"......Entity: {pii_entity.text}")
                print(f".........Category: {pii_entity.category}")
                print(f".........Confidence Score: {pii_entity.confidence_score}")

        elif result.kind == "KeyPhraseExtraction":
            print("...Results of Extract Key Phrases action:")
            print(f"......Key Phrases: {result.key_phrases}")

        elif result.kind == "EntityLinking":
            print("...Results of Recognize Linked Entities action:")
            for linked_entity in result.entities:
                print(f"......Entity name: {linked_entity.name}")
                print(f".........Data source: {linked_entity.data_source}")
                print(f".........Data source language: {linked_entity.language}")
                print(
                    f".........Data source entity ID: {linked_entity.data_source_entity_id}"
                )
                print(f".........Data source URL: {linked_entity.url}")
                print(".........Document matches:")
                for match in linked_entity.matches:
                    print(f"............Match text: {match.text}")
                    print(f"............Confidence Score: {match.confidence_score}")
                    print(f"............Offset: {match.offset}")
                    print(f"............Length: {match.length}")

        elif result.kind == "SentimentAnalysis":
            print("...Results of Analyze Sentiment action:")
            print(f"......Overall sentiment: {result.sentiment}")
            print(
                f"......Scores: positive={result.confidence_scores.positive}; \
                neutral={result.confidence_scores.neutral}; \
                negative={result.confidence_scores.negative} \n"
            )

        elif result.is_error is True:
            print(
                f"...Is an error with code '{result.error.code}' and message '{result.error.message}'"
            )

    print("------------------------------------------")

The returned response is an object encapsulating multiple iterables, each representing results of individual analyses.

Note: Multiple analysis is available in API version v3.1 and newer.

Optional Configuration

Optional keyword arguments can be passed in at the client and per-operation level. The azure-core reference documentation describes available configurations for retries, logging, transport protocols, and more.

Troubleshooting

General

The Text Analytics client will raise exceptions defined in Azure Core.

Logging

This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.

Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable keyword argument:

import sys
import logging
from azure.identity import DefaultAzureCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Create a logger for the 'azure' SDK
logger = logging.getLogger('azure')
logger.setLevel(logging.DEBUG)

# Configure a console output
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
credential = DefaultAzureCredential()

# This client will log detailed information about its HTTP sessions, at DEBUG level
text_analytics_client = TextAnalyticsClient(endpoint, credential, logging_enable=True)
result = text_analytics_client.analyze_sentiment(["I did not like the restaurant. The food was too spicy."])

Similarly, logging_enable can enable detailed logging for a single operation, even when it isn't enabled for the client:

result = text_analytics_client.analyze_sentiment(documents, logging_enable=True)

Next steps

More sample code

These code samples show common scenario operations with the Azure Text Analytics client library.

Authenticate the client with a Cognitive Services/Language service API key or a token credential from azure-identity:

Common scenarios

Advanced scenarios

Additional documentation

For more extensive documentation on Azure Cognitive Services for Language, see the Language Service documentation on docs.microsoft.com.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.