If you're using the client SDK, you have the appropriate SDK package installed and you have a running quickstart application. You'll modify this quickstart application based on the code examples here.
If you're using 4.0 REST API calls directly, you have successfully made a curl.exe call to the service (or used an alternative tool). You'll modify the curl.exe call based on the examples here.
The quickstart shows you how to extract visual features from an image; however, the concepts are similar to background removal, so you benefit from starting with the quickstart and making modifications.
Important
Background removal is only available in the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
Authenticate against the service
To authenticate against the Image Analysis service, you need an Azure AI Vision key and endpoint URL.
Tip
Don't include the key directly in your code, and never post it publicly. See the Azure AI services security article for more authentication options like Azure Key Vault.
The SDK example assumes that you defined the environment variables VISION_KEY and VISION_ENDPOINT with your key and endpoint.
Start by creating a VisionServiceOptions object using one of the constructors. For example:
var serviceOptions = new VisionServiceOptions(
Environment.GetEnvironmentVariable("VISION_ENDPOINT"),
new AzureKeyCredential(Environment.GetEnvironmentVariable("VISION_KEY")));
Start by creating a VisionServiceOptions object using one of the constructors. For example:
import azure.ai.vision as sdk
service_options = sdk.VisionServiceOptions(os.environ["VISION_ENDPOINT"],
os.environ["VISION_KEY"])
Start by creating a VisionServiceOptions object using one of the constructors. For example:
VisionServiceOptions serviceOptions = new VisionServiceOptions(
new URL(System.getenv("VISION_ENDPOINT")),
System.getenv("VISION_KEY"));
At the start of your code, use one of the static constructor methods VisionServiceOptions::FromEndpoint to create a VisionServiceOptions object. For example:
auto serviceOptions = VisionServiceOptions::FromEndpoint(
GetEnvironmentVariable("VISION_ENDPOINT"),
GetEnvironmentVariable("VISION_KEY"));
Here, GetEnvironmentVariable is a helper function that reads the value of an environment variable.
Authentication is done by adding the HTTP request header Ocp-Apim-Subscription-Key and setting it to your vision key. The call is made to the URL https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview, where <endpoint> is your unique Azure AI Vision endpoint URL. See the Select a mode section for another query string parameter that you add to this URL.
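As a sketch of what that looks like in code, the following Python snippet builds the request URL and authentication header. It assumes the same VISION_ENDPOINT and VISION_KEY environment variables used by the SDK examples; the fallback values are placeholders, not real credentials.

```python
import os

# Placeholder endpoint; in practice VISION_ENDPOINT holds your resource's URL.
endpoint = os.environ.get("VISION_ENDPOINT", "https://myresource.cognitiveservices.azure.com")

# The segment operation URL, with the mandatory api-version query parameter.
url = endpoint + "/computervision/imageanalysis:segment?api-version=2023-02-01-preview"

headers = {
    # Authentication header carrying your Azure AI Vision key.
    "Ocp-Apim-Subscription-Key": os.environ.get("VISION_KEY", "<your-key>"),
}
```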
Select the image to analyze
The code in this guide uses remote images referenced by URL. You may want to try different images on your own to see the full capability of the Image Analysis features.
Create a new VisionSource object from the URL of the image you want to analyze, using the static constructor VisionSource.FromUrl.
using var imageSource = VisionSource.FromUrl(
new Uri("https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"));
VisionSource implements IDisposable, therefore create the object with a using statement or explicitly call the Dispose method after analysis completes.
You can also analyze a local image by passing in the full-path image file name to the VisionSource constructor instead of the image URL (see argument name filename). Alternatively, you can analyze an image in a memory buffer by constructing VisionSource using the argument image_source_buffer. For more details, see Call the Analyze API.
Create a new VisionSource object from the URL of the image you want to analyze, using the static constructor VisionSource.fromUrl.
VisionSource imageSource = VisionSource.fromUrl(
new URL("https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"));
VisionSource implements AutoCloseable, therefore create the object in a try-with-resources block, or explicitly call the close method on this object when you're done analyzing the image.
When analyzing a remote image, you specify the image's URL by formatting the request body like this: {"url":"https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg"}. The Content-Type should be application/json.
To analyze a local image, you'd put the binary image data in the HTTP request body. The Content-Type should be application/octet-stream or multipart/form-data.
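A minimal Python sketch of the two request-body variants described above. The image URL is the one used elsewhere in this guide; my-image.jpg is a placeholder local path, so the file read is shown as a comment rather than executed.

```python
import json

# Variant 1: remote image. JSON body with the image URL;
# Content-Type: application/json.
body_remote = json.dumps({
    "url": "https://learn.microsoft.com/azure/ai-services/computer-vision/images/windows-kitchen.jpg"
})
headers_remote = {"Content-Type": "application/json"}

# Variant 2: local image. Raw binary bytes in the body;
# Content-Type: application/octet-stream (or multipart/form-data).
# "my-image.jpg" is a placeholder path, so the read is not executed here:
# body_local = open("my-image.jpg", "rb").read()
headers_local = {"Content-Type": "application/octet-stream"}
```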
Select a mode
Create a new ImageAnalysisOptions object and set the segmentation mode. Setting this mode is mandatory if you want to do image segmentation. For example:
auto analysisOptions = ImageAnalysisOptions::Create();
analysisOptions->SetSegmentationMode(ImageSegmentationMode::BackgroundRemoval);
Set the query string parameter mode to one of these two values. This query string is mandatory if you want to do image segmentation.
URL parameter | Value | Description
mode | backgroundRemoval | Outputs an image of the detected foreground object with a transparent background.
mode | foregroundMatting | Outputs a gray-scale alpha matte image showing the opacity of the detected foreground object.
A populated URL for backgroundRemoval would look like this: https://<endpoint>/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval
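If you build this URL in code, a small helper like the following Python sketch keeps the query string consistent. The myresource endpoint host is a placeholder for your own resource's endpoint.

```python
from urllib.parse import urlencode

def build_segment_url(endpoint: str, mode: str) -> str:
    # Compose the segment URL with the mandatory api-version and mode parameters.
    query = urlencode({"api-version": "2023-02-01-preview", "mode": mode})
    return "{}/computervision/imageanalysis:segment?{}".format(endpoint, query)

# "myresource" is a placeholder for your resource's endpoint host.
url = build_segment_url("https://myresource.cognitiveservices.azure.com", "backgroundRemoval")
```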
Get results from the service
This section shows you how to make the API call and parse the results.
The following code calls the Image Analysis API and saves the resulting segmented image to a file named output.png. It also displays some metadata about the segmented image.
using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);

var result = analyzer.Analyze();

if (result.Reason == ImageAnalysisResultReason.Analyzed)
{
    using var segmentationResult = result.SegmentationResult;

    var imageBuffer = segmentationResult.ImageBuffer;
    Console.WriteLine($" Segmentation result:");
    Console.WriteLine($" Output image buffer size (bytes) = {imageBuffer.Length}");
    Console.WriteLine($" Output image height = {segmentationResult.ImageHeight}");
    Console.WriteLine($" Output image width = {segmentationResult.ImageWidth}");

    string outputImageFile = "output.png";
    using (var fs = new FileStream(outputImageFile, FileMode.Create))
    {
        fs.Write(imageBuffer.Span);
    }
    Console.WriteLine($" File {outputImageFile} written to disk");
}
else
{
    var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
    Console.WriteLine(" Analysis failed.");
    Console.WriteLine($" Error reason : {errorDetails.Reason}");
    Console.WriteLine($" Error code : {errorDetails.ErrorCode}");
    Console.WriteLine($" Error message: {errorDetails.Message}");
    Console.WriteLine(" Did you set the computer vision endpoint and key?");
}
SegmentationResult implements IDisposable, therefore create the object with a using statement or explicitly call the Dispose method after analysis completes.
The following code calls the Image Analysis API and saves the resulting segmented image to a file named output.png. It also displays some metadata about the segmented image.
image_analyzer = sdk.ImageAnalyzer(service_options, vision_source, analysis_options)

result = image_analyzer.analyze()

if result.reason == sdk.ImageAnalysisResultReason.ANALYZED:
    image_buffer = result.segmentation_result.image_buffer
    print(" Segmentation result:")
    print(" Output image buffer size (bytes) = {}".format(len(image_buffer)))
    print(" Output image height = {}".format(result.segmentation_result.image_height))
    print(" Output image width = {}".format(result.segmentation_result.image_width))

    output_image_file = "output.png"
    with open(output_image_file, 'wb') as binary_file:
        binary_file.write(image_buffer)
    print(" File {} written to disk".format(output_image_file))

else:
    error_details = sdk.ImageAnalysisErrorDetails.from_result(result)
    print(" Analysis failed.")
    print(" Error reason: {}".format(error_details.reason))
    print(" Error code: {}".format(error_details.error_code))
    print(" Error message: {}".format(error_details.message))
    print(" Did you set the computer vision endpoint and key?")
The following code calls the Image Analysis API and saves the resulting segmented image to a file named output.png. It also displays some metadata about the segmented image.
try (
    ImageAnalyzer analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
    ImageAnalysisResult result = analyzer.analyze()) {

    if (result.getReason() == ImageAnalysisResultReason.ANALYZED) {

        SegmentationResult segmentationResult = result.getSegmentationResult();

        // Get the resulting output image buffer (PNG format)
        byte[] imageBuffer = segmentationResult.getImageBuffer();
        System.out.println(" Segmentation result:");
        System.out.println(" Output image buffer size (bytes) = " + imageBuffer.length);

        // Get output image size
        System.out.println(" Output image height = " + segmentationResult.getImageHeight());
        System.out.println(" Output image width = " + segmentationResult.getImageWidth());

        // Write the buffer to a file
        String outputImageFile = "output.png";
        try (FileOutputStream fos = new FileOutputStream(outputImageFile)) {
            fos.write(imageBuffer);
        }
        System.out.println(" File " + outputImageFile + " written to disk");

    } else { // result.getReason() == ImageAnalysisResultReason.ERROR
        ImageAnalysisErrorDetails errorDetails = ImageAnalysisErrorDetails.fromResult(result);
        System.out.println(" Analysis failed.");
        System.out.println(" Error reason: " + errorDetails.getReason());
        System.out.println(" Error code: " + errorDetails.getErrorCode());
        System.out.println(" Error message: " + errorDetails.getMessage());
        System.out.println(" Did you set the computer vision endpoint and key?");
    }
} catch (Exception e) {
    e.printStackTrace();
}
SegmentationResult implements AutoCloseable, therefore create the object in a try-with-resources block, or explicitly call the close method on this object when you're done analyzing the image.
The following code calls the Image Analysis API and saves the resulting segmented image to a file named output.png. It also displays some metadata about the segmented image.
The service returns a 200 HTTP response on success with Content-Type: image/png, and the body contains the returned PNG image in the form of a binary stream.
As an example, assume background removal is run on the following image:
On a successful background removal call, the following four-channel PNG image is the response for the backgroundRemoval mode:
The following one-channel PNG image is the response for the foregroundMatting mode:
The API returns an image the same size as the original for the foregroundMatting mode, but at most 16 megapixels (preserving image aspect ratio) for the backgroundRemoval mode.
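A sketch of handling that response in Python. It assumes you already have the status code, Content-Type header, and raw body bytes from whichever HTTP client you use; the placeholder bytes at the end stand in for a real PNG payload (they're just the PNG file-signature bytes, not a complete image).

```python
def save_segmentation_result(status: int, content_type: str, body: bytes,
                             path: str = "output.png") -> int:
    # A 200 response with Content-Type image/png carries the segmented image;
    # anything else is treated as an error here.
    if status != 200 or not content_type.startswith("image/png"):
        raise RuntimeError("Segmentation failed: HTTP {} ({})".format(status, content_type))
    with open(path, "wb") as f:
        f.write(body)
    return len(body)

# Placeholder bytes standing in for a real PNG payload:
written = save_segmentation_result(200, "image/png", b"\x89PNG\r\n\x1a\n")
```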
The sample code for getting analysis results shows how to handle errors and get the ImageAnalysisErrorDetails object that contains the error information. The error information includes:
Error code and error message. Select the REST API tab to see a list of some common error codes and messages.
In addition to those errors, the SDK has a few other error messages, including:
Missing Image Analysis options: You must set at least one visual feature (or model name) for the 'analyze' operation. Or set segmentation mode for the 'segment' operation
Invalid combination of Image Analysis options: You cannot set both visual features (or model name), and segmentation mode
To help resolve issues, look at the Image Analysis Samples repository and run the closest sample to your scenario. Search the GitHub issues to see if your issue was already addressed. If not, create a new one.
On error, the Image Analysis service response contains a JSON payload that includes an error code and error message. It may also include other details in the form of an inner error code and message. For example:
{
"error":
{
"code": "InvalidRequest",
"message": "Analyze query is invalid.",
"innererror":
{
"code": "NotSupportedVisualFeature",
"message": "Specified feature type is not valid"
}
}
}
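For illustration, here's a small Python sketch that pulls the outer and (optional) inner codes out of such an error payload; the payload string is the example shown above.

```python
import json

# The example error payload from the service:
error_payload = """
{
  "error": {
    "code": "InvalidRequest",
    "message": "Analyze query is invalid.",
    "innererror": {
      "code": "NotSupportedVisualFeature",
      "message": "Specified feature type is not valid"
    }
  }
}
"""

def describe_error(payload: str) -> str:
    # Flatten the outer and (optional) inner error into one readable string.
    err = json.loads(payload)["error"]
    parts = ["{}: {}".format(err["code"], err["message"])]
    inner = err.get("innererror")
    if inner:
        parts.append("inner {}: {}".format(inner["code"], inner["message"]))
    return "; ".join(parts)
```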
Following is a list of common errors and their causes. List items are presented in the following format:
HTTP response code
Error code and message in the JSON response
[Optional] Inner error code and message in the JSON response
List of common errors:
400 Bad Request
InvalidRequest - Image URL is badly formatted or not accessible. Make sure the image URL is valid and publicly accessible.
InvalidRequest - The image size is not allowed to be zero or larger than 20971520 bytes. Reduce the size of the image by compressing it and/or resizing, and resubmit your request.
InvalidRequest - The feature 'Caption' is not supported in this region. The feature is only supported in specific Azure regions. See Quickstart prerequisites for the list of supported Azure regions.
InvalidRequest - The provided image content type ... is not supported. The HTTP header Content-Type in the request isn't an allowed type:
For an image URL, Content-Type should be application/json
For a binary image data, Content-Type should be application/octet-stream or multipart/form-data
InvalidRequest - Either 'features' or 'model-name' needs to be specified in the query parameter.
InvalidRequest - Image format is not valid
InvalidImageFormat - Image format is not valid. See the Image requirements section for supported image formats.
InvalidRequest - Analyze query is invalid
NotSupportedVisualFeature - Specified feature type is not valid. Make sure the features query string has a valid value.
NotSupportedLanguage - The input language is not supported. Make sure the language query string has a valid value for the selected visual feature, based on the following table.
BadArgument - 'smartcrops-aspect-ratios' aspect ratio is not in allowed range [0.75 to 1.8]
401 PermissionDenied
401 - Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
404 Resource Not Found
404 - Resource not found. The service couldn't find the custom model based on the name provided by the model-name query string.
Tip
While working with Azure AI Vision, you might encounter transient failures caused by rate limits enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see Retry pattern in the Cloud Design Patterns guide, and the related Circuit Breaker pattern.
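As a sketch of the Retry pattern described in that guidance, the following Python helper retries a callable with exponential backoff and jitter. TransientError is a hypothetical stand-in for whatever exception your HTTP client or the SDK raises on a transient failure (HTTP 429, 5xx, or a network outage).

```python
import random
import time

class TransientError(Exception):
    """Hypothetical stand-in for a transient failure (HTTP 429/5xx, network outage)."""

def with_retries(call, max_attempts=4, base_delay=1.0):
    # Retry `call` with exponential backoff plus jitter, re-raising the
    # exception after the final attempt. A sketch, not production code.
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))
```

You could wrap an analyze call with it, for example with_retries(lambda: image_analyzer.analyze(), max_attempts=3), after mapping the errors your client surfaces to TransientError.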