
@azure-rest/ai-vision-image-analysis package

Interfaces

AnalyzeFromImageData
AnalyzeFromImageData200Response

The request has succeeded.

AnalyzeFromImageDataBodyParam
AnalyzeFromImageDataDefaultHeaders
AnalyzeFromImageDataDefaultResponse
AnalyzeFromImageDataMediaTypesParam
AnalyzeFromImageDataQueryParam
AnalyzeFromImageDataQueryParamProperties
AnalyzeFromUrl200Response

The request has succeeded.

AnalyzeFromUrlBodyParam
AnalyzeFromUrlDefaultHeaders
AnalyzeFromUrlDefaultResponse
AnalyzeFromUrlMediaTypesParam
AnalyzeFromUrlQueryParam
AnalyzeFromUrlQueryParamProperties
CaptionResultOutput

Represents a generated phrase that describes the content of the whole image.

CropRegionOutput

A region at the desired aspect ratio that can be used as an image thumbnail. The region preserves as much content as possible from the analyzed image, with priority given to detected faces.

DenseCaptionOutput

Represents a generated phrase that describes the content of the whole image or a region in the image.

DenseCaptionsResultOutput

Represents a list of up to 10 image captions for different regions of the image. The first caption always applies to the whole image.

DetectedObjectOutput

Represents a physical object detected in an image.

DetectedPersonOutput

Represents a person detected in an image.

DetectedTagOutput

A content entity observation in the image. A tag can be a physical object, living being, scenery, or action that appears in the image.

DetectedTextBlockOutput

Represents a single block of detected text in the image.

DetectedTextLineOutput

Represents a single line of text in the image.

DetectedTextWordOutput

A word object consisting of a contiguous sequence of characters. For non-space delimited languages, such as Chinese, Japanese, and Korean, each character is represented as its own word.

ImageAnalysisClientOptions

The optional parameters for the client.

ImageAnalysisResultOutput

Represents the outcome of an Image Analysis operation.

ImageBoundingBoxOutput

A basic rectangle specifying a sub-region of the image.

ImageMetadataOutput

Metadata associated with the analyzed image.

ImagePointOutput

Represents the coordinates of a single pixel in the image.

ImageUrl

An object holding the publicly reachable URL of an image to analyze.

ObjectsResultOutput

Represents a list of physical objects detected in an image and their locations.

PeopleResultOutput

Represents a list of people detected in an image and their locations.

ReadResultOutput

The results of a Read (OCR) operation.

Routes
SmartCropsResultOutput

Smart cropping result. A list of crop regions at the desired aspect ratios (if provided) that can be used as image thumbnails. These regions preserve as much content as possible from the analyzed image, with priority given to detected faces.

TagsResultOutput

A list of entities observed in the image. Tags can be physical objects, living beings, scenery, or actions that appear in the image.
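Several of the output interfaces above nest inside ImageAnalysisResultOutput. As an illustration, here is a simplified local model of a few of these shapes and how a caller might read them. The field names (captionResult, text, confidence, width, height, modelVersion) follow the service's documented output schema, but treat this as a sketch rather than the package's actual type definitions.

```typescript
// Simplified local models of a few output shapes (not the package's own types).
interface CaptionResultOutput {
  confidence: number; // confidence score in [0, 1] for the caption
  text: string;       // the generated caption phrase
}

interface ImageMetadataOutput {
  height: number; // image height in pixels
  width: number;  // image width in pixels
}

interface ImageAnalysisResultOutput {
  captionResult?: CaptionResultOutput; // present only when "caption" was requested
  metadata: ImageMetadataOutput;
  modelVersion: string;
}

// Reading a result: guard optional feature results before using them.
function describe(result: ImageAnalysisResultOutput): string {
  const size = `${result.metadata.width}x${result.metadata.height}`;
  return result.captionResult
    ? `${result.captionResult.text} (${result.captionResult.confidence.toFixed(2)}, ${size})`
    : `no caption (${size})`;
}

const sample: ImageAnalysisResultOutput = {
  captionResult: { confidence: 0.87, text: "a dog playing in a park" },
  metadata: { height: 600, width: 800 },
  modelVersion: "2023-10-01",
};

console.log(describe(sample)); // → "a dog playing in a park (0.87, 800x600)"
```

Because features like captioning are opt-in per request, the corresponding result fields are optional; checking for their presence before access keeps the code safe for any feature combination.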

Function Details

default(string, TokenCredential | KeyCredential, ImageAnalysisClientOptions)

Initializes a new instance of ImageAnalysisClient.

function default(endpointParam: string, credentials: TokenCredential | KeyCredential, options?: ImageAnalysisClientOptions): ImageAnalysisClient

Parameters

endpointParam

string

Azure AI Computer Vision endpoint (protocol and hostname, for example: https://<resource-name>.cognitiveservices.azure.com).

credentials

TokenCredential | KeyCredential

The credential used to authenticate the client.

options
ImageAnalysisClientOptions

Optional parameters for configuring the client.

Returns

ImageAnalysisClient

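As a sketch, the factory above is typically imported as the package's default export (often named createClient in samples) and paired with a key or token credential. The package-dependent lines are commented out here so the snippet stands alone; the endpoint and key values are placeholders, not real resources.

```typescript
// Uncomment once "@azure-rest/ai-vision-image-analysis" and "@azure/core-auth"
// are installed; the import names follow the package's published samples.
// import createClient from "@azure-rest/ai-vision-image-analysis";
// import { AzureKeyCredential } from "@azure/core-auth";

// Placeholder values: replace with your resource's endpoint and key.
const endpoint = "https://my-resource.cognitiveservices.azure.com";
const key = process.env["VISION_KEY"] ?? "<your-key>";

// const client = createClient(endpoint, new AzureKeyCredential(key), {
//   // ImageAnalysisClientOptions (e.g. retry or logging policies) go here.
// });

console.log(`Would create client for ${endpoint}`);
```

Reading the key from an environment variable rather than hard-coding it is the usual practice for Azure samples.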
isUnexpected(AnalyzeFromImageData200Response | AnalyzeFromImageDataDefaultResponse)

function isUnexpected(response: AnalyzeFromImageData200Response | AnalyzeFromImageDataDefaultResponse): response is AnalyzeFromImageDataDefaultResponse

Parameters

response

AnalyzeFromImageData200Response | AnalyzeFromImageDataDefaultResponse

The response from the service to check.

Returns

response is AnalyzeFromImageDataDefaultResponse

isUnexpected(AnalyzeFromUrl200Response | AnalyzeFromUrlDefaultResponse)

function isUnexpected(response: AnalyzeFromUrl200Response | AnalyzeFromUrlDefaultResponse): response is AnalyzeFromUrlDefaultResponse

Parameters

response

AnalyzeFromUrl200Response | AnalyzeFromUrlDefaultResponse

The response from the service to check.

Returns

response is AnalyzeFromUrlDefaultResponse
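The isUnexpected overloads are TypeScript type guards: they narrow a union of success and error responses by checking the HTTP status, so each branch of an if/else sees the correct body type. The following is a simplified local re-implementation of that pattern using stand-in types, not the package's generated ones; the real function keys off the operation's expected status codes.

```typescript
// Local stand-ins for the generated response types (not the package's own).
interface Analyze200Response {
  status: "200";
  body: { modelVersion: string };
}

interface AnalyzeDefaultResponse {
  status: string; // any non-200 status
  body: { error: { code: string; message: string } };
}

// A type guard in the same shape as the package's isUnexpected: it returns
// true for the error branch, narrowing the union on both sides of the check.
function isUnexpected(
  response: Analyze200Response | AnalyzeDefaultResponse
): response is AnalyzeDefaultResponse {
  return response.status !== "200";
}

function handle(response: Analyze200Response | AnalyzeDefaultResponse): string {
  if (isUnexpected(response)) {
    // Narrowed to AnalyzeDefaultResponse: the error body is available.
    return `error ${response.body.error.code}: ${response.body.error.message}`;
  }
  // Narrowed to Analyze200Response: the success body is available.
  return `ok, model ${response.body.modelVersion}`;
}

console.log(handle({ status: "200", body: { modelVersion: "2023-10-01" } }));
// → "ok, model 2023-10-01"
console.log(handle({ status: "404", body: { error: { code: "NotFound", message: "no image" } } }));
// → "error NotFound: no image"
```

Calling isUnexpected immediately after each request is the intended usage for this family of REST clients: it turns the union returned by the operation into two well-typed branches without casts.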