TextAnalyticsAsyncClient Class
- java.lang.Object
- com.azure.ai.textanalytics.TextAnalyticsAsyncClient
public final class TextAnalyticsAsyncClient
This class provides an asynchronous client that contains all the operations that apply to Azure Text Analytics. Operations allowed by the client are language detection, entities recognition, linked entities recognition, key phrases extraction, and sentiment analysis of a document or a list of documents.
Getting Started
To interact with the Text Analytics features of the Azure AI Language service, you'll need to create an instance of the TextAnalyticsAsyncClient. This requires the service's key credential. Alternatively, you can use Azure Active Directory (AAD) authentication via Azure Identity to connect to the service.
- Azure Key Credential, see credential(AzureKeyCredential keyCredential).
- Azure Active Directory, see credential(TokenCredential tokenCredential).
Sample: Construct Asynchronous Text Analytics Client with Azure Key Credential
The following code sample demonstrates the creation of a TextAnalyticsAsyncClient, using the TextAnalyticsClientBuilder to configure it with a key credential.
TextAnalyticsAsyncClient textAnalyticsAsyncClient = new TextAnalyticsClientBuilder()
.credential(new AzureKeyCredential("{key}"))
.endpoint("{endpoint}")
.buildAsyncClient();
View TextAnalyticsClientBuilder for additional ways to construct the client.
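The following is a minimal sketch of constructing the client with AAD authentication instead of a key credential; it assumes the com.azure:azure-identity dependency is on the classpath and that DefaultAzureCredential suits your environment.
TokenCredential tokenCredential = new DefaultAzureCredentialBuilder().build();
TextAnalyticsAsyncClient textAnalyticsAsyncClient = new TextAnalyticsClientBuilder()
    .credential(tokenCredential) // AAD token credential instead of AzureKeyCredential
    .endpoint("{endpoint}")
    .buildAsyncClient();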
See the methods below to explore all of the features that the library provides.
Extract information
The Text Analytics client can use Natural Language Understanding (NLU) to extract information from unstructured text, such as key phrases or Personally Identifiable Information (PII) entities. The samples below show how to use it.
Key Phrases Extraction
The extractKeyPhrases(String document) method can be used to extract key phrases, which returns a list of strings denoting the key phrases in the document.
textAnalyticsAsyncClient.extractKeyPhrases("Bonjour tout le monde").subscribe(keyPhrase ->
System.out.printf("%s.%n", keyPhrase));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Named Entities Recognition (NER): Prebuilt Model
The recognizeEntities(String document) method can be used to recognize entities, which returns a list of general categorized entities in the provided document.
String document = "Satya Nadella is the CEO of Microsoft";
textAnalyticsAsyncClient.recognizeEntities(document)
.subscribe(entityCollection -> entityCollection.forEach(entity ->
System.out.printf("Recognized categorized entity: %s, category: %s, confidence score: %f.%n",
entity.getText(),
entity.getCategory(),
entity.getConfidenceScore())));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Custom Named Entities Recognition (NER): Custom Model
The beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName) method can be used to recognize custom entities, which returns a list of custom entities for the provided list of documents.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
textAnalyticsAsyncClient.beginRecognizeCustomEntities(documents, "{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
RecognizeCustomEntitiesOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (RecognizeCustomEntitiesResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (RecognizeEntitiesResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (CategorizedEntity entity : documentResult.getEntities()) {
System.out.printf(
"\tText: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Linked Entities Recognition
The recognizeLinkedEntities(String document) method can be used to find linked entities, which returns a list of recognized entities with links to a well-known knowledge base for the provided document.
String document = "Old Faithful is a geyser at Yellowstone Park.";
textAnalyticsAsyncClient.recognizeLinkedEntities(document).subscribe(
linkedEntityCollection -> linkedEntityCollection.forEach(linkedEntity -> {
System.out.println("Linked Entities:");
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
}));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Personally Identifiable Information (PII) Entities Recognition
The recognizePiiEntities(String document) method can be used to recognize PII entities, which returns a list of Personally Identifiable Information (PII) entities in the provided document.
For a list of supported entity types, see this.
String document = "My SSN is 859-98-0987";
textAnalyticsAsyncClient.recognizePiiEntities(document).subscribe(piiEntityCollection -> {
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
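As a variation, the following sketch narrows results to a single PII domain; it assumes the recognizePiiEntities overload that accepts RecognizePiiEntitiesOptions with a domain filter, available in recent library versions.
textAnalyticsAsyncClient.recognizePiiEntities("My SSN is 859-98-0987", "en",
    new RecognizePiiEntitiesOptions().setDomainFilter(PiiEntityDomain.PROTECTED_HEALTH_INFORMATION))
    .subscribe(piiEntityCollection -> piiEntityCollection.forEach(entity ->
        // Only entities from the protected-health-information domain are returned
        System.out.printf("PII entity: %s, category: %s.%n", entity.getText(), entity.getCategory())));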
Text Analytics for Health: Prebuilt Model
The beginAnalyzeHealthcareEntities(Iterable<String> documents) method can be used to analyze healthcare entities, entity data sources, and entity relations in a list of documents.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add("The patient is a 54-year-old gentleman with a history of progressive angina "
+ "over the past several months.");
}
textAnalyticsAsyncClient.beginAnalyzeHealthcareEntities(documents)
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeHealthcareEntitiesResultCollection -> {
analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
System.out.println("document id = " + healthcareEntitiesResult.getId());
System.out.println("Document entities: ");
AtomicInteger ct = new AtomicInteger();
healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
System.out.printf(
"\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
healthcareEntity.getConfidenceScore());
IterableStream<EntityDataSource> healthcareEntityDataSources =
healthcareEntity.getDataSources();
if (healthcareEntityDataSources != null) {
healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
"\t\tEntity ID in data source: %s, data source: %s.%n",
healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
}
});
// Healthcare entity relation groups
healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
entityRelation.getRoles().forEach(role -> {
final HealthcareEntity entity = role.getEntity();
System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
entity.getText(), entity.getCategory(), role.getName());
});
System.out.printf("\tRelation confidence score: %f.%n",
entityRelation.getConfidenceScore());
});
});
}));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Summarize text-based content: Document Summarization
The Text Analytics client can use Natural Language Understanding (NLU) to summarize lengthy documents, with both extractive and abstractive summarization. The samples below show how to use them.
Extractive summarization
The beginExtractSummary(Iterable<String> documents) method returns a list of extractive summaries for the provided list of documents.
This method is supported since service API version V2023_04_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
textAnalyticsAsyncClient.beginExtractSummary(documents)
.flatMap(result -> {
ExtractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // flattens the nested PagedFlux<T> so that its elements flow downstream
.subscribe(
resultCollection -> {
for (ExtractiveSummaryResult documentResult : resultCollection) {
for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
System.out.printf(
"Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
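Because summarization requires service API version V2023_04_01 or newer, you can pin the version when building the client. The following is a minimal sketch; it assumes the TextAnalyticsServiceVersion enum exposed by the builder.
TextAnalyticsAsyncClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .serviceVersion(TextAnalyticsServiceVersion.V2023_04_01) // pin the service API version
    .buildAsyncClient();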
Abstractive summarization
The beginAbstractSummary(Iterable<String> documents) method returns a list of abstractive summaries for the provided list of documents.
This method is supported since service API version V2023_04_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
textAnalyticsAsyncClient.beginAbstractSummary(documents)
.flatMap(result -> {
AbstractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // flattens the nested PagedFlux<T> so that its elements flow downstream
.subscribe(
resultCollection -> {
for (AbstractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tAbstractive summary sentences:");
for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
System.out.printf("\t\t offset: %d, length: %d%n",
abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Classify Text
The Text Analytics client can use Natural Language Understanding (NLU) to detect the language of text or classify its sentiment, including language detection, sentiment analysis, and custom text classification. The samples below show how to use it.
Analyze Sentiment and Mine Text for Opinions
The analyzeSentiment(String document, String language, AnalyzeSentimentOptions options) method can be used to analyze sentiment on a given input text, which returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If the includeOpinionMining of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).
textAnalyticsAsyncClient.analyzeSentiment("The hotel was dark and unclean.", "en",
new AnalyzeSentimentOptions().setIncludeOpinionMining(true))
.subscribe(documentSentiment -> {
for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
sentenceSentiment.getOpinions().forEach(opinion -> {
TargetSentiment targetSentiment = opinion.getTarget();
System.out.printf("\tTarget sentiment: %s, target text: %s%n",
targetSentiment.getSentiment(), targetSentiment.getText());
for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
assessmentSentiment.isNegated());
}
});
}
});
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Detect Language
The detectLanguage(String document) method returns the detected language and a confidence score between zero and one. Scores close to one indicate near-certainty that the identified language is correct.
This method will use the default country hint that can be set by using defaultCountryHint(String countryHint). If none is specified, the service uses 'US' as the country hint.
String document = "Bonjour tout le monde";
textAnalyticsAsyncClient.detectLanguage(document).subscribe(detectedLanguage ->
System.out.printf("Detected language name: %s, ISO 6391 Name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore()));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
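The default country hint mentioned above is configured at client construction; a minimal sketch, assuming the builder's defaultCountryHint setter:
TextAnalyticsAsyncClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .defaultCountryHint("FR") // used when detectLanguage is called without a country hint
    .buildAsyncClient();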
Single-Label Classification
The beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName) method returns a list of single-label classifications for the provided list of documents.
Note: this method is supported since service API version V2022_05_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
// See the service documentation for regional support and how to train a model to classify your documents:
// https://aka.ms/azsdk/textanalytics/customfunctionalities
textAnalyticsAsyncClient.beginSingleLabelClassify(documents,
"{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFluxAsyncPollResponse -> pagedFluxAsyncPollResponse.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Multi-Label Classification
The beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName) method returns a list of multi-label classifications for the provided list of documents.
Note: this method is supported since service API version V2022_05_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"I need a reservation for an indoor restaurant in China. Please don't stop the music."
+ " Play music and add it to my playlist");
}
textAnalyticsAsyncClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFluxAsyncPollResponse -> pagedFluxAsyncPollResponse.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Execute multiple actions
The beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions) method executes actions, such as entities recognition, PII entities recognition, and key phrases extraction, for a list of documents.
List<String> documents = Arrays.asList(
"Elon Musk is the CEO of SpaceX and Tesla.",
"1", "My SSN is 859-98-0987"
);
textAnalyticsAsyncClient.beginAnalyzeActions(documents,
new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
.setRecognizeEntitiesActions(new RecognizeEntitiesAction())
.setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()))
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(analyzeActionsResultPagedFlux -> analyzeActionsResultPagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeActionsResult -> {
analyzeActionsResult.getRecognizeEntitiesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(
entitiesResult -> entitiesResult.getEntities().forEach(
entity -> System.out.printf(
"Recognized entity: %s, entity category: %s, entity subcategory: %s,"
+ " confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(),
entity.getConfidenceScore())));
}
});
analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases()
.forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
});
}
});
}));
See this for supported languages in the Text Analytics API.
Note: For a synchronous sample, refer to TextAnalyticsClient.
Method Details
analyzeSentiment
public Mono<DocumentSentiment> analyzeSentiment(String document)
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. This method will use the default language that can be set by using defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
Analyze the sentiment in a document. Subscribes to the call asynchronously and prints out the sentiment details when a response is received.
String document = "The hotel was dark and unclean.";
textAnalyticsAsyncClient.analyzeSentiment(document).subscribe(documentSentiment -> {
System.out.printf("Recognized document sentiment: %s.%n", documentSentiment.getSentiment());
for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
System.out.printf(
"Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, "
+ "negative score: %.2f.%n",
sentenceSentiment.getSentiment(),
sentenceSentiment.getConfidenceScores().getPositive(),
sentenceSentiment.getConfidenceScores().getNeutral(),
sentenceSentiment.getConfidenceScores().getNegative());
}
});
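The default language mentioned above is configured at client construction; a minimal sketch, assuming the builder's defaultLanguage setter:
TextAnalyticsAsyncClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .defaultLanguage("en") // used when a per-call language is not provided
    .buildAsyncClient();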
analyzeSentiment
public Mono<DocumentSentiment> analyzeSentiment(String document, String language)
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.
Code Sample
Analyze the sentiments in a document with a provided language representation. Subscribes to the call asynchronously and prints out the sentiment details when a response is received.
String document = "The hotel was dark and unclean.";
textAnalyticsAsyncClient.analyzeSentiment(document, "en")
.subscribe(documentSentiment -> {
System.out.printf("Recognized sentiment label: %s.%n", documentSentiment.getSentiment());
for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
System.out.printf("Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, "
+ "negative score: %.2f.%n",
sentenceSentiment.getSentiment(),
sentenceSentiment.getConfidenceScores().getPositive(),
sentenceSentiment.getConfidenceScores().getNeutral(),
sentenceSentiment.getConfidenceScores().getNegative());
}
});
analyzeSentiment
public Mono<DocumentSentiment> analyzeSentiment(String document, String language, AnalyzeSentimentOptions options)
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If the includeOpinionMining of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).
Code Sample
Analyze the sentiment and mine the opinions for each sentence in a document with a provided language representation and AnalyzeSentimentOptions options. Subscribes to the call asynchronously and prints out the sentiment and sentence opinions details when a response is received.
textAnalyticsAsyncClient.analyzeSentiment("The hotel was dark and unclean.", "en",
new AnalyzeSentimentOptions().setIncludeOpinionMining(true))
.subscribe(documentSentiment -> {
for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
sentenceSentiment.getOpinions().forEach(opinion -> {
TargetSentiment targetSentiment = opinion.getTarget();
System.out.printf("\tTarget sentiment: %s, target text: %s%n",
targetSentiment.getSentiment(), targetSentiment.getText());
for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
assessmentSentiment.isNegated());
}
});
}
});
analyzeSentimentBatch
public Mono<AnalyzeSentimentResultCollection> analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options)
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If the includeOpinionMining of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).
Code Sample
Analyze the sentiments and mine the opinions for each sentence in a list of documents with a provided language representation and AnalyzeSentimentOptions options. Subscribes to the call asynchronously and prints out the sentiment and sentence opinions details when a response is received.
List<String> documents = Arrays.asList(
    "The hotel was dark and unclean.",
    "The restaurant had amazing gnocchi."
);
textAnalyticsAsyncClient.analyzeSentimentBatch(documents, "en",
    new AnalyzeSentimentOptions().setIncludeOpinionMining(true))
    .subscribe(resultCollection -> resultCollection.forEach(analyzeSentimentResult -> {
        System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
        DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
        documentSentiment.getSentences().forEach(sentenceSentiment -> {
            System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
            sentenceSentiment.getOpinions().forEach(opinion -> {
                TargetSentiment targetSentiment = opinion.getTarget();
                System.out.printf("\t\tTarget sentiment: %s, target text: %s%n",
                    targetSentiment.getSentiment(), targetSentiment.getText());
                for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                    System.out.printf(
                        "\t\t\t'%s' assessment sentiment because of \"%s\". Is the assessment negated: %s.%n",
                        assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
                        assessmentSentiment.isNegated());
                }
            });
        });
    }));
analyzeSentimentBatch
@Deprecated
public Mono<AnalyzeSentimentResultCollection> analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Deprecated
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.
Code Sample
Analyze sentiment in a list of documents with a provided language code and request options. Subscribes to the call asynchronously and prints out the sentiment details when a response is received.
List<String> documents = Arrays.asList(
"The hotel was dark and unclean.",
"The restaurant had amazing gnocchi."
);
textAnalyticsAsyncClient.analyzeSentimentBatch(documents, "en",
new TextAnalyticsRequestOptions().setIncludeStatistics(true)).subscribe(
response -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = response.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
response.forEach(analyzeSentimentResult -> {
System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
System.out.printf("Recognized document sentiment: %s.%n", documentSentiment.getSentiment());
documentSentiment.getSentences().forEach(sentenceSentiment ->
System.out.printf("Recognized sentence sentiment: %s, positive score: %.2f, "
+ "neutral score: %.2f, negative score: %.2f.%n",
sentenceSentiment.getSentiment(),
sentenceSentiment.getConfidenceScores().getPositive(),
sentenceSentiment.getConfidenceScores().getNeutral(),
sentenceSentiment.getConfidenceScores().getNegative()));
});
});
analyzeSentimentBatchWithResponse
public Mono<Response<AnalyzeSentimentResultCollection>> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options)
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If the includeOpinionMining of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).
Code Sample
Analyze sentiment and mine the opinions for each sentence in a list of TextDocumentInput with provided AnalyzeSentimentOptions options. Subscribes to the call asynchronously and prints out the sentiment and sentence opinions details when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "The hotel was dark and unclean.").setLanguage("en"),
new TextDocumentInput("1", "The restaurant had amazing gnocchi.").setLanguage("en"));
AnalyzeSentimentOptions options = new AnalyzeSentimentOptions()
.setIncludeOpinionMining(true).setIncludeStatistics(true);
textAnalyticsAsyncClient.analyzeSentimentBatchWithResponse(textDocumentInputs1, options)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
AnalyzeSentimentResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(),
batchStatistics.getValidDocumentCount());
resultCollection.forEach(analyzeSentimentResult -> {
System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
documentSentiment.getSentences().forEach(sentenceSentiment -> {
System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
sentenceSentiment.getOpinions().forEach(opinion -> {
TargetSentiment targetSentiment = opinion.getTarget();
System.out.printf("\t\tTarget sentiment: %s, target text: %s%n",
targetSentiment.getSentiment(), targetSentiment.getText());
for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
System.out.printf(
"\t\t\t'%s' assessment sentiment because of \"%s\". Is the assessment negated: %s.%n",
assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
assessmentSentiment.isNegated());
}
});
});
});
});
analyzeSentimentBatchWithResponse
@Deprecated
public Mono<Response<AnalyzeSentimentResultCollection>> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options)
Deprecated
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.
Analyze sentiment in a list of TextDocumentInput with provided request options. Subscribes to the call asynchronously and prints out the sentiment details when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "The hotel was dark and unclean.").setLanguage("en"),
new TextDocumentInput("1", "The restaurant had amazing gnocchi.").setLanguage("en"));
TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.analyzeSentimentBatchWithResponse(textDocumentInputs1, requestOptions)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
AnalyzeSentimentResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(),
batchStatistics.getValidDocumentCount());
resultCollection.forEach(analyzeSentimentResult -> {
System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
System.out.printf("Recognized document sentiment: %s.%n", documentSentiment.getSentiment());
documentSentiment.getSentences().forEach(sentenceSentiment ->
System.out.printf("Recognized sentence sentiment: %s, positive score: %.2f, "
+ "neutral score: %.2f, negative score: %.2f.%n",
sentenceSentiment.getSentiment(),
sentenceSentiment.getConfidenceScores().getPositive(),
sentenceSentiment.getConfidenceScores().getNeutral(),
sentenceSentiment.getConfidenceScores().getNegative()));
});
});
beginAbstractSummary
public PollerFlux<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedFlux> beginAbstractSummary(Iterable<TextDocumentInput> documents, AbstractiveSummaryOptions options)
Returns a list of abstractive summaries for the provided list of TextDocumentInput with provided request options.
This method is supported since service API version V2023_04_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks."));
}
AbstractiveSummaryOptions options = new AbstractiveSummaryOptions().setSentenceCount(4);
textAnalyticsAsyncClient.beginAbstractSummary(documents, options)
.flatMap(result -> {
AbstractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // flattens the nested PagedFlux<T> so that its elements flow downstream
.subscribe(
resultCollection -> {
for (AbstractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tAbstractive summary sentences:");
for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
System.out.printf("\t\t offset: %d, length: %d%n",
abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
beginAbstractSummary
public PollerFlux<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedFlux> beginAbstractSummary(Iterable<String> documents)
Returns a list of abstractive summaries for the provided list of documents.
This method is supported since service API version V2023_04_01.
This method will use the default language that can be set by using defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
textAnalyticsAsyncClient.beginAbstractSummary(documents)
.flatMap(result -> {
AbstractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // flattens the nested PagedFlux<T> so that its elements flow downstream
.subscribe(
resultCollection -> {
for (AbstractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tAbstractive summary sentences:");
for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
System.out.printf("\t\t offset: %d, length: %d%n",
abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
beginAbstractSummary
public PollerFlux<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedFlux> beginAbstractSummary(Iterable<String> documents, String language, AbstractiveSummaryOptions options)
Returns a list of abstractive summaries for the provided list of documents with provided request options.
This method is supported since service API version V2023_04_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
AbstractiveSummaryOptions options = new AbstractiveSummaryOptions().setSentenceCount(4);
textAnalyticsAsyncClient.beginAbstractSummary(documents, "en", options)
.flatMap(result -> {
AbstractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // flattens the nested PagedFlux<T> so that its elements flow downstream
.subscribe(
resultCollection -> {
for (AbstractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tAbstractive summary sentences:");
for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
System.out.printf("\t\t offset: %d, length: %d%n",
abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
beginAnalyzeActions
public PollerFlux<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedFlux> beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options)
Executes actions, such as entities recognition, PII entities recognition, and key phrases extraction, for a list of TextDocumentInput with provided request options. See this for supported languages in the Language service API.
Code Sample
List<TextDocumentInput> documents = Arrays.asList(
new TextDocumentInput("0", "Elon Musk is the CEO of SpaceX and Tesla.").setLanguage("en"),
new TextDocumentInput("1", "My SSN is 859-98-0987").setLanguage("en")
);
textAnalyticsAsyncClient.beginAnalyzeActions(documents,
new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
.setRecognizeEntitiesActions(new RecognizeEntitiesAction())
.setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
new AnalyzeActionsOptions().setIncludeStatistics(false))
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(analyzeActionsResultPagedFlux -> analyzeActionsResultPagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeActionsResult -> {
System.out.println("Entities recognition action results:");
analyzeActionsResult.getRecognizeEntitiesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(
entitiesResult -> entitiesResult.getEntities().forEach(
entity -> System.out.printf(
"Recognized entity: %s, entity category: %s, entity subcategory: %s,"
+ " confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(),
entity.getConfidenceScore())));
}
});
System.out.println("Key phrases extraction action results:");
analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases()
.forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
});
}
});
}));
beginAnalyzeActions
public PollerFlux<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedFlux> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions)
Executes actions, such as entities recognition, PII entities recognition, and key phrases extraction, for a list of documents. This method will use the default language that can be set by using defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = Arrays.asList(
"Elon Musk is the CEO of SpaceX and Tesla.",
"1", "My SSN is 859-98-0987"
);
textAnalyticsAsyncClient.beginAnalyzeActions(documents,
new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
.setRecognizeEntitiesActions(new RecognizeEntitiesAction())
.setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()))
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(analyzeActionsResultPagedFlux -> analyzeActionsResultPagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeActionsResult -> {
analyzeActionsResult.getRecognizeEntitiesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(
entitiesResult -> entitiesResult.getEntities().forEach(
entity -> System.out.printf(
"Recognized entity: %s, entity category: %s, entity subcategory: %s,"
+ " confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(),
entity.getConfidenceScore())));
}
});
analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases()
.forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
});
}
});
}));
beginAnalyzeActions
public PollerFlux<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedFlux> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options)
Executes actions, such as entities recognition, PII entities recognition, and key phrases extraction, for a list of documents with provided request options. See this for supported languages in the Language service API.
Code Sample
List<String> documents = Arrays.asList(
"Elon Musk is the CEO of SpaceX and Tesla.",
"1", "My SSN is 859-98-0987"
);
textAnalyticsAsyncClient.beginAnalyzeActions(documents,
new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
.setRecognizeEntitiesActions(new RecognizeEntitiesAction())
.setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
"en",
new AnalyzeActionsOptions().setIncludeStatistics(false))
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(analyzeActionsResultPagedFlux -> analyzeActionsResultPagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeActionsResult -> {
analyzeActionsResult.getRecognizeEntitiesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(
entitiesResult -> entitiesResult.getEntities().forEach(
entity -> System.out.printf(
"Recognized entity: %s, entity category: %s, entity subcategory: %s,"
+ " confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(),
entity.getConfidenceScore())));
}
});
analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases()
.forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
});
}
});
}));
beginAnalyzeHealthcareEntities
public PollerFlux<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedFlux> beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options)
Analyze healthcare entities, entity data sources, and entity relations in a list of TextDocumentInput with provided request options to show statistics. Subscribes to the call asynchronously and prints out the entity details when a response is received. See this for supported languages in the Language service API.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"The patient is a 54-year-old gentleman with a history of progressive angina "
+ "over the past several months."));
}
AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
.setIncludeStatistics(true);
textAnalyticsAsyncClient.beginAnalyzeHealthcareEntities(documents, options)
.flatMap(pollResult -> {
AnalyzeHealthcareEntitiesOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(analyzeActionsResultPagedFlux -> analyzeActionsResultPagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeHealthcareEntitiesResultCollection -> {
// Model version
System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
analyzeHealthcareEntitiesResultCollection.getModelVersion());
TextDocumentBatchStatistics healthcareTaskStatistics =
analyzeHealthcareEntitiesResultCollection.getStatistics();
// Batch statistics
System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
+ " transaction count = %d, valid document count = %d.%n",
healthcareTaskStatistics.getDocumentCount(),
healthcareTaskStatistics.getInvalidDocumentCount(),
healthcareTaskStatistics.getTransactionCount(),
healthcareTaskStatistics.getValidDocumentCount());
analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
System.out.println("document id = " + healthcareEntitiesResult.getId());
System.out.println("Document entities: ");
AtomicInteger ct = new AtomicInteger();
healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
System.out.printf(
"\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
healthcareEntity.getConfidenceScore());
IterableStream<EntityDataSource> healthcareEntityDataSources =
healthcareEntity.getDataSources();
if (healthcareEntityDataSources != null) {
healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
"\t\tEntity ID in data source: %s, data source: %s.%n",
healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
}
});
// Healthcare entity relation groups
healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
entityRelation.getRoles().forEach(role -> {
final HealthcareEntity entity = role.getEntity();
System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
entity.getText(), entity.getCategory(), role.getName());
});
System.out.printf("\tRelation confidence score: %f.%n",
entityRelation.getConfidenceScore());
});
});
}));
beginAnalyzeHealthcareEntities
public PollerFlux<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedFlux> beginAnalyzeHealthcareEntities(Iterable<String> documents)
Analyze healthcare entities, entity data sources, and entity relations in a list of documents. This method will use the default language that can be set by using defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add("The patient is a 54-year-old gentleman with a history of progressive angina "
+ "over the past several months.");
}
textAnalyticsAsyncClient.beginAnalyzeHealthcareEntities(documents)
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeHealthcareEntitiesResultCollection -> {
analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
System.out.println("document id = " + healthcareEntitiesResult.getId());
System.out.println("Document entities: ");
AtomicInteger ct = new AtomicInteger();
healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
System.out.printf(
"\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
healthcareEntity.getConfidenceScore());
IterableStream<EntityDataSource> healthcareEntityDataSources =
healthcareEntity.getDataSources();
if (healthcareEntityDataSources != null) {
healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
"\t\tEntity ID in data source: %s, data source: %s.%n",
healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
}
});
// Healthcare entity relation groups
healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
entityRelation.getRoles().forEach(role -> {
final HealthcareEntity entity = role.getEntity();
System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
entity.getText(), entity.getCategory(), role.getName());
});
System.out.printf("\tRelation confidence score: %f.%n",
entityRelation.getConfidenceScore());
});
});
}));
Parameters:
Returns:
beginAnalyzeHealthcareEntities
public PollerFlux
Analyze healthcare entities, entity data sources, and entity relations in a list of documents with provided request options. See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add("The patient is a 54-year-old gentleman with a history of progressive angina "
+ "over the past several months.");
}
AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
.setIncludeStatistics(true);
textAnalyticsAsyncClient.beginAnalyzeHealthcareEntities(documents, "en", options)
.flatMap(AsyncPollResponse::getFinalResult)
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
pagedResponse -> pagedResponse.getElements().forEach(
analyzeHealthcareEntitiesResultCollection -> {
// Model version
System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
analyzeHealthcareEntitiesResultCollection.getModelVersion());
TextDocumentBatchStatistics healthcareTaskStatistics =
analyzeHealthcareEntitiesResultCollection.getStatistics();
// Batch statistics
System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
+ " transaction count = %d, valid document count = %d.%n",
healthcareTaskStatistics.getDocumentCount(),
healthcareTaskStatistics.getInvalidDocumentCount(),
healthcareTaskStatistics.getTransactionCount(),
healthcareTaskStatistics.getValidDocumentCount());
analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
System.out.println("document id = " + healthcareEntitiesResult.getId());
System.out.println("Document entities: ");
AtomicInteger ct = new AtomicInteger();
healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
System.out.printf(
"\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
healthcareEntity.getConfidenceScore());
IterableStream<EntityDataSource> healthcareEntityDataSources =
healthcareEntity.getDataSources();
if (healthcareEntityDataSources != null) {
healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
"\t\tEntity ID in data source: %s, data source: %s.%n",
healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
}
});
// Healthcare entity relation groups
healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
entityRelation.getRoles().forEach(role -> {
final HealthcareEntity entity = role.getEntity();
System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
entity.getText(), entity.getCategory(), role.getName());
});
System.out.printf("\tRelation confidence score: %f.%n",
entityRelation.getConfidenceScore());
});
});
}));
Parameters:
Returns:
beginExtractSummary
public PollerFlux
Returns a list of extractive summaries for the provided list of TextDocumentInput with provided request options.
This method is supported since service API version V2023_04_01.
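Extractive summarization requires a client that targets this service version or later. If needed, the version can be pinned on the builder; the following is a minimal sketch, reusing the {key} and {endpoint} placeholders and assuming the TextAnalyticsServiceVersion.V2023_04_01 value available in current SDK releases:
TextAnalyticsAsyncClient textAnalyticsAsyncClient = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .serviceVersion(TextAnalyticsServiceVersion.V2023_04_01)
    .buildAsyncClient();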
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks."));
}
ExtractiveSummaryOptions options =
new ExtractiveSummaryOptions().setMaxSentenceCount(4).setOrderBy(ExtractiveSummarySentencesOrder.RANK);
textAnalyticsAsyncClient.beginExtractSummary(documents, options)
.flatMap(result -> {
ExtractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // unwraps the Mono<PagedFlux<T>> to return the PagedFlux<T>
.subscribe(
resultCollection -> {
for (ExtractiveSummaryResult documentResult : resultCollection) {
for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
System.out.printf(
"Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginExtractSummary
public PollerFlux
Returns a list of extractive summaries for the provided list of documents.
This method is supported since service API version V2023_04_01.
This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
textAnalyticsAsyncClient.beginExtractSummary(documents)
.flatMap(result -> {
ExtractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // unwraps the Mono<PagedFlux<T>> to return the PagedFlux<T>
.subscribe(
resultCollection -> {
for (ExtractiveSummaryResult documentResult : resultCollection) {
for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
System.out.printf(
"Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginExtractSummary
public PollerFlux
Returns a list of extractive summaries for the provided list of documents with provided request options.
This method is supported since service API version V2023_04_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there\u2019s magic\u2014what we call XYZ-code"
+ " as illustrated in Figure 1\u2014a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
ExtractiveSummaryOptions options =
new ExtractiveSummaryOptions().setMaxSentenceCount(4).setOrderBy(ExtractiveSummarySentencesOrder.RANK);
textAnalyticsAsyncClient.beginExtractSummary(documents, "en", options)
.flatMap(result -> {
ExtractiveSummaryOperationDetail operationDetail = result.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationDetail.getCreatedAt(), operationDetail.getExpiresAt());
return result.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux) // unwraps the Mono<PagedFlux<T>> to return the PagedFlux<T>
.subscribe(
resultCollection -> {
for (ExtractiveSummaryResult documentResult : resultCollection) {
for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
System.out.printf(
"Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginMultiLabelClassify
public PollerFlux
Returns a list of multi-label classification results for the provided list of TextDocumentInput with provided request options.
This method is supported since service API version V2022_05_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"I need a reservation for an indoor restaurant in China. Please don't stop the music."
+ " Play music and add it to my playlist"));
}
MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.beginMultiLabelClassify(documents, "{project_name}",
"{deployment_name}", options)
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginMultiLabelClassify
public PollerFlux
Returns a list of multi-label classification results for the provided list of documents.
This method is supported since service API version V2022_05_01.
This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"I need a reservation for an indoor restaurant in China. Please don't stop the music."
+ " Play music and add it to my playlist");
}
textAnalyticsAsyncClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginMultiLabelClassify
public PollerFlux
Returns a list of multi-label classification results for the provided list of documents with provided request options.
This method is supported since service API version V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"I need a reservation for an indoor restaurant in China. Please don't stop the music."
+ " Play music and add it to my playlist");
}
MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.beginMultiLabelClassify(documents, "{project_name}",
"{deployment_name}", "en", options)
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginRecognizeCustomEntities
public PollerFlux
Returns a list of custom entities for the provided list of TextDocumentInput with provided request options.
This method is supported since service API version V2022_05_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."));
}
RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.beginRecognizeCustomEntities(documents, "{project_name}",
"{deployment_name}", options)
.flatMap(pollResult -> {
RecognizeCustomEntitiesOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (RecognizeCustomEntitiesResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (RecognizeEntitiesResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (CategorizedEntity entity : documentResult.getEntities()) {
System.out.printf(
"\tText: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginRecognizeCustomEntities
public PollerFlux
Returns a list of custom entities for the provided list of documents.
This method is supported since service API version V2022_05_01.
This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
textAnalyticsAsyncClient.beginRecognizeCustomEntities(documents, "{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
RecognizeCustomEntitiesOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (RecognizeCustomEntitiesResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (RecognizeEntitiesResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (CategorizedEntity entity : documentResult.getEntities()) {
System.out.printf(
"\tText: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginRecognizeCustomEntities
public PollerFlux
Returns a list of custom entities for the provided list of documents with provided request options.
This method is supported since service API version V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.beginRecognizeCustomEntities(documents, "{project_name}",
"{deployment_name}", "en", options)
.flatMap(pollResult -> {
RecognizeCustomEntitiesOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (RecognizeCustomEntitiesResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (RecognizeEntitiesResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (CategorizedEntity entity : documentResult.getEntities()) {
System.out.printf(
"\tText: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginSingleLabelClassify
public PollerFlux
Returns a list of single-label classification results for the provided list of TextDocumentInput with provided request options.
This method is supported since service API version V2022_05_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(new TextDocumentInput(Integer.toString(i),
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."));
}
SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
// See the service documentation for regional support and how to train a model to classify your documents:
// https://aka.ms/azsdk/textanalytics/customfunctionalities
textAnalyticsAsyncClient.beginSingleLabelClassify(documents,
"{project_name}", "{deployment_name}", options)
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginSingleLabelClassify
public PollerFlux
Returns a list of single-label classification results for the provided list of documents.
This method is supported since service API version V2022_05_01.
This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
// See the service documentation for regional support and how to train a model to classify your documents:
// https://aka.ms/azsdk/textanalytics/customfunctionalities
textAnalyticsAsyncClient.beginSingleLabelClassify(documents,
"{project_name}", "{deployment_name}")
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
beginSingleLabelClassify
public PollerFlux
Returns a list of single-label classification results for the provided list of documents with provided request options.
This method is supported since service API version V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
// See the service documentation for regional support and how to train a model to classify your documents:
// https://aka.ms/azsdk/textanalytics/customfunctionalities
textAnalyticsAsyncClient.beginSingleLabelClassify(documents,
"{project_name}", "{deployment_name}", "en", options)
.flatMap(pollResult -> {
ClassifyDocumentOperationDetail operationResult = pollResult.getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
operationResult.getCreatedAt(), operationResult.getExpiresAt());
return pollResult.getFinalResult();
})
.flatMap(pagedFlux -> pagedFlux.byPage())
.subscribe(
perPage -> {
System.out.printf("Response code: %d, Continuation Token: %s.%n",
perPage.getStatusCode(), perPage.getContinuationToken());
for (ClassifyDocumentResultCollection documentsResults : perPage.getElements()) {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
}
},
ex -> System.out.println("Error listing pages: " + ex.getMessage()),
() -> System.out.println("Successfully listed all pages"));
Parameters:
Returns:
detectLanguage
public Mono
Returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct. This method uses the default country hint that can be set with defaultCountryHint(String countryHint). If none is specified, the service uses 'US' as the country hint.
Code sample
Detects language in a document. Subscribes to the call asynchronously and prints out the detected language details when a response is received.
String document = "Bonjour tout le monde";
textAnalyticsAsyncClient.detectLanguage(document).subscribe(detectedLanguage ->
System.out.printf("Detected language name: %s, ISO 6391 Name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore()));
Parameters:
Returns:
detectLanguage
public Mono
Returns the detected language and a confidence score between zero and one for the document, using the provided country hint. Scores close to one indicate 100% certainty that the identified language is correct.
Code sample
Detects the language of a document with a provided country hint. Subscribes to the call asynchronously and prints out the detected language details when a response is received.
String document = "This text is in English";
String countryHint = "US";
textAnalyticsAsyncClient.detectLanguage(document, countryHint).subscribe(detectedLanguage ->
System.out.printf("Detected language name: %s, ISO 6391 Name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore()));
Parameters:
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To disable this behavior, set the value to an empty string (countryHint = "") or "none".
Returns:
detectLanguageBatch
public Mono
Returns the detected language for each document in the list, with the provided country hint and request options.
Code sample
Detects language in a list of documents with a provided country hint and request options for the batch. Subscribes to the call asynchronously and prints out the detected language details when a response is received.
List<String> documents = Arrays.asList(
"This is written in English",
"Este es un documento escrito en Espa�ol."
);
textAnalyticsAsyncClient.detectLanguageBatch(documents, "US", null).subscribe(
batchResult -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = batchResult.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
// Batch result of languages
for (DetectLanguageResult detectLanguageResult : batchResult) {
DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
System.out.printf("Detected language name: %s, ISO 6391 Name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
detectedLanguage.getConfidenceScore());
}
});
Parameters:
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To disable this behavior, set the value to an empty string (countryHint = "") or "none".
Returns:
detectLanguageBatchWithResponse
public Mono
Returns the detected language for a batch of DetectLanguageInput with provided request options.
Code sample
Detects language in a batch of DetectLanguageInput with provided request options. Subscribes to the call asynchronously and prints out the detected language details when a response is received.
List<DetectLanguageInput> detectLanguageInputs1 = Arrays.asList(
new DetectLanguageInput("1", "This is written in English.", "US"),
new DetectLanguageInput("2", "Este es un documento escrito en Espa�ol.", "ES")
);
TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.detectLanguageBatchWithResponse(detectLanguageInputs1, requestOptions)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
DetectLanguageResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
// Batch result of languages
for (DetectLanguageResult detectLanguageResult : resultCollection) {
DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
System.out.printf("Detected language name: %s, ISO 6391 Name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
detectedLanguage.getConfidenceScore());
}
});
Parameters:
Returns:
extractKeyPhrases
public Mono
Returns a list of strings denoting the key phrases in the document. This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Extract key phrases in a document. Subscribes to the call asynchronously and prints out the key phrases when a response is received.
textAnalyticsAsyncClient.extractKeyPhrases("Bonjour tout le monde").subscribe(keyPhrase ->
System.out.printf("%s.%n", keyPhrase));
Parameters:
Returns:
extractKeyPhrases
public Mono
Returns a list of strings denoting the key phrases in the document. See this for the list of enabled languages.
Extract key phrases in a document with a provided language code. Subscribes to the call asynchronously and prints out the key phrases when a response is received.
System.out.println("Extracted phrases:");
textAnalyticsAsyncClient.extractKeyPhrases("Bonjour tout le monde", "fr")
.subscribe(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
Parameters:
Returns:
extractKeyPhrasesBatch
public Mono
Returns a list of strings denoting the key phrases in the documents, with the provided language code and request options. See this for the list of enabled languages.
Extract key phrases in a list of documents with a provided language and request options. Subscribes to the call asynchronously and prints out the key phrases when a response is received.
List<String> documents = Arrays.asList(
"Hello world. This is some input text that I love.",
"Bonjour tout le monde");
textAnalyticsAsyncClient.extractKeyPhrasesBatch(documents, "en", null).subscribe(
extractKeyPhraseResults -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = extractKeyPhraseResults.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
extractKeyPhraseResults.forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
});
});
Parameters:
Returns:
extractKeyPhrasesBatchWithResponse
public Mono
Returns a list of strings denoting the key phrases in the documents with provided request options. See this for the list of enabled languages.
Extract key phrases in a list of TextDocumentInput with provided request options. Subscribes to the call asynchronously and prints out the key phrases when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "I had a wonderful trip to Seattle last week.").setLanguage("en"),
new TextDocumentInput("1", "I work at Microsoft.").setLanguage("en"));
TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.extractKeyPhrasesBatchWithResponse(textDocumentInputs1, requestOptions)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
ExtractKeyPhrasesResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
for (ExtractKeyPhraseResult extractKeyPhraseResult : resultCollection) {
System.out.println("Extracted phrases:");
for (String keyPhrase : extractKeyPhraseResult.getKeyPhrases()) {
System.out.printf("%s.%n", keyPhrase);
}
}
});
Parameters:
Returns:
getDefaultCountryHint
public String getDefaultCountryHint()
Gets the default country hint code.
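For example, the hint configured on the builder is what this accessor returns. A minimal sketch, using the defaultCountryHint(String countryHint) builder option with an illustrative "FR" value:
TextAnalyticsAsyncClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .defaultCountryHint("FR")
    .buildAsyncClient();
// Prints "FR", the hint applied when detectLanguage is called without one.
System.out.println("Default country hint: " + client.getDefaultCountryHint());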
Returns:
getDefaultLanguage
public String getDefaultLanguage()
Gets the default language that was set when the client was built.
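Similarly, a minimal sketch using the defaultLanguage(String language) builder option with an illustrative "fr" value:
TextAnalyticsAsyncClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .defaultLanguage("fr")
    .buildAsyncClient();
// Prints "fr", the language applied when a method is called without a language code.
System.out.println("Default language: " + client.getDefaultLanguage());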
Returns:
recognizeEntities
public Mono
Returns a list of general categorized entities in the provided document. For a list of supported entity types, see this. For a list of enabled languages, see this. This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code sample
Recognize entities in a document. Subscribes to the call asynchronously and prints out the recognized entity details when a response is received.
String document = "Satya Nadella is the CEO of Microsoft";
textAnalyticsAsyncClient.recognizeEntities(document)
.subscribe(entityCollection -> entityCollection.forEach(entity ->
System.out.printf("Recognized categorized entity: %s, category: %s, confidence score: %f.%n",
entity.getText(),
entity.getCategory(),
entity.getConfidenceScore())));
Parameters:
Returns:
recognizeEntities
public Mono
Returns a list of general categorized entities in the provided document. For a list of supported entity types, see this. For a list of enabled languages, see this.
Code sample
Recognize entities in a document with provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
String document = "Satya Nadella is the CEO of Microsoft";
textAnalyticsAsyncClient.recognizeEntities(document, "en")
.subscribe(entityCollection -> entityCollection.forEach(entity ->
System.out.printf("Recognized categorized entity: %s, category: %s, confidence score: %f.%n",
entity.getText(),
entity.getCategory(),
entity.getConfidenceScore())));
Parameters:
Returns:
recognizeEntitiesBatch
public Mono
Returns a list of general categorized entities for the provided list of documents with the provided language code and request options.
Code sample
Recognize entities in a list of documents with the provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<String> documents = Arrays.asList(
"I had a wonderful trip to Seattle last week.", "I work at Microsoft.");
textAnalyticsAsyncClient.recognizeEntitiesBatch(documents, "en", null)
.subscribe(batchResult -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = batchResult.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
// Batch Result of entities
batchResult.forEach(recognizeEntitiesResult ->
recognizeEntitiesResult.getEntities().forEach(entity -> System.out.printf(
"Recognized categorized entity: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore())));
});
Parameters:
Returns:
recognizeEntitiesBatchWithResponse
public Mono
Returns a list of general categorized entities for the provided list of TextDocumentInput with provided request options.
Code sample
Recognize entities in a list of TextDocumentInput. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "I had a wonderful trip to Seattle last week.").setLanguage("en"),
new TextDocumentInput("1", "I work at Microsoft.").setLanguage("en"));
TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.recognizeEntitiesBatchWithResponse(textDocumentInputs1, requestOptions)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
RecognizeEntitiesResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
resultCollection.forEach(recognizeEntitiesResult ->
recognizeEntitiesResult.getEntities().forEach(entity -> System.out.printf(
"Recognized categorized entity: %s, category: %s, confidence score: %f.%n",
entity.getText(),
entity.getCategory(),
entity.getConfidenceScore())));
});
Parameters:
Returns:
recognizeLinkedEntities
public Mono
Returns a list of recognized entities with links to a well-known knowledge base for the provided document. See this for supported languages in the Text Analytics API. This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Recognize linked entities in a document. Subscribes to the call asynchronously and prints out the entity details when a response is received.
String document = "Old Faithful is a geyser at Yellowstone Park.";
textAnalyticsAsyncClient.recognizeLinkedEntities(document).subscribe(
linkedEntityCollection -> linkedEntityCollection.forEach(linkedEntity -> {
System.out.println("Linked Entities:");
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
}));
Parameters:
Returns:
recognizeLinkedEntities
public Mono
Returns a list of recognized entities with links to a well-known knowledge base for the provided document. See this for supported languages in Text Analytics API.
Recognize linked entities in a text with provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
String document = "Old Faithful is a geyser at Yellowstone Park.";
textAnalyticsAsyncClient.recognizeLinkedEntities(document, "en").subscribe(
linkedEntityCollection -> linkedEntityCollection.forEach(linkedEntity -> {
System.out.println("Linked Entities:");
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
}));
Parameters:
Returns:
recognizeLinkedEntitiesBatch
public Mono
Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with provided language code and request options. See this for supported languages in Text Analytics API.
Recognize linked entities in a list of documents with provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<String> documents = Arrays.asList(
"Old Faithful is a geyser at Yellowstone Park.",
"Mount Shasta has lenticular clouds."
);
textAnalyticsAsyncClient.recognizeLinkedEntitiesBatch(documents, "en", null)
.subscribe(batchResult -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = batchResult.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
batchResult.forEach(recognizeLinkedEntitiesResult ->
recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
System.out.println("Linked Entities:");
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
}));
});
Parameters:
Returns:
recognizeLinkedEntitiesBatchWithResponse
public Mono
Returns a list of recognized entities with links to a well-known knowledge base for the list of TextDocumentInput with provided request options. See this for supported languages in the Language service API.
Recognize linked entities in a list of TextDocumentInput and provided request options to show statistics. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "Old Faithful is a geyser at Yellowstone Park.").setLanguage("en"),
new TextDocumentInput("1", "Mount Shasta has lenticular clouds.").setLanguage("en"));
TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions().setIncludeStatistics(true);
textAnalyticsAsyncClient.recognizeLinkedEntitiesBatchWithResponse(textDocumentInputs1, requestOptions)
.subscribe(response -> {
// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
RecognizeLinkedEntitiesResultCollection resultCollection = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
resultCollection.forEach(recognizeLinkedEntitiesResult ->
recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
System.out.println("Linked Entities:");
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %.2f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
}));
});
Parameters:
Returns:
recognizePiiEntities
public Mono
Returns a list of Personally Identifiable Information (PII) entities in the provided document. For a list of supported entity types, see this. For a list of enabled languages, see this. This method uses the default language that can be set with defaultLanguage(String language). If none is specified, the service uses 'en' as the language.
Code sample
Recognize PII entity details in a document. Subscribes to the call asynchronously and prints out the recognized entity details when a response is received.
String document = "My SSN is 859-98-0987";
textAnalyticsAsyncClient.recognizePiiEntities(document).subscribe(piiEntityCollection -> {
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
Parameters:
Returns:
recognizePiiEntities
public Mono
Returns a list of Personally Identifiable Information (PII) entities in the provided document with the provided language code. For a list of supported entity types, see this. For a list of enabled languages, see this.
Code sample
Recognize PII entity details in a document with a provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
String document = "My SSN is 859-98-0987";
textAnalyticsAsyncClient.recognizePiiEntities(document, "en")
.subscribe(piiEntityCollection -> {
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
Parameters:
Returns:
recognizePiiEntities
public Mono
Returns a list of Personally Identifiable Information (PII) entities in the provided document, with the provided language code and RecognizePiiEntitiesOptions. For a list of supported entity types, see this. For a list of enabled languages, see this.
Code sample
Recognize PII entity details in a document with a provided language code and RecognizePiiEntitiesOptions. Subscribes to the call asynchronously and prints out the entity details when a response is received.
String document = "My SSN is 859-98-0987";
textAnalyticsAsyncClient.recognizePiiEntities(document, "en",
new RecognizePiiEntitiesOptions().setDomainFilter(PiiEntityDomain.PROTECTED_HEALTH_INFORMATION))
.subscribe(piiEntityCollection -> {
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
Parameters:
Returns:
recognizePiiEntitiesBatch
public Mono
Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with the provided language code and request options.
Code sample
Recognize Personally Identifiable Information entities in a list of documents with the provided language code. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<String> documents = Arrays.asList(
"My SSN is 859-98-0987.",
"Visa card 0111 1111 1111 1111."
);
// Show statistics and model version
RecognizePiiEntitiesOptions requestOptions = new RecognizePiiEntitiesOptions().setIncludeStatistics(true)
.setModelVersion("latest");
textAnalyticsAsyncClient.recognizePiiEntitiesBatch(documents, "en", requestOptions)
.subscribe(piiEntitiesResults -> {
// Batch statistics
TextDocumentBatchStatistics batchStatistics = piiEntitiesResults.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
piiEntitiesResults.forEach(recognizePiiEntitiesResult -> {
PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
});
Parameters:
Returns:
recognizePiiEntitiesBatchWithResponse
public Mono
Returns a list of Personally Identifiable Information entities for the provided list of TextDocumentInput with provided request options.
Code sample
Recognize PII entity details, with the HTTP response, in a list of TextDocumentInput with provided request options. Subscribes to the call asynchronously and prints out the entity details when a response is received.
List<TextDocumentInput> textDocumentInputs1 = Arrays.asList(
new TextDocumentInput("0", "My SSN is 859-98-0987."),
new TextDocumentInput("1", "Visa card 0111 1111 1111 1111."));
// Show statistics and model version
RecognizePiiEntitiesOptions requestOptions = new RecognizePiiEntitiesOptions().setIncludeStatistics(true)
.setModelVersion("latest");
textAnalyticsAsyncClient.recognizePiiEntitiesBatchWithResponse(textDocumentInputs1, requestOptions)
.subscribe(response -> {
RecognizePiiEntitiesResultCollection piiEntitiesResults = response.getValue();
// Batch statistics
TextDocumentBatchStatistics batchStatistics = piiEntitiesResults.getStatistics();
System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
piiEntitiesResults.forEach(recognizePiiEntitiesResult -> {
PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
});
Parameters:
Returns: