Get started with the Custom Vision client library for .NET. Follow these steps to install the package and try out the example code for building an image classification model. You can create a project, add tags, train the project, and use the project's prediction endpoint URL to test it programmatically. Use this example as a template for building your own image recognition app.
Note
If you want to build and train a classification model without writing code, see the browser-based guidance.
Reference documentation | Library source code for training and prediction | Package (NuGet) for training and prediction | Samples
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

In this example, you'll write your credentials to environment variables on the local machine running the application.
Go to the Azure portal. If the Custom Vision resources you created in the Prerequisites section deployed successfully, select the Go to Resource button under Next steps. You can find your keys and endpoints in the resources' Keys and Endpoint pages, under Resource Management. You'll need to get the keys for both your training resource and prediction resource, along with the API endpoints.
You can find the prediction resource ID on the prediction resource's Properties tab in the Azure portal, listed as Resource ID.
Tip
You can also use https://www.customvision.ai to get these values. After you sign in, select the Settings icon at the top right. On the Settings pages, you can view all the keys, resource IDs, and endpoints.
To set the environment variables, open a console window and follow the instructions for your operating system and development environment.
To set the VISION_TRAINING_KEY environment variable, replace <your-training-key> with one of the keys for your training resource.
To set the VISION_TRAINING_ENDPOINT environment variable, replace <your-training-endpoint> with the endpoint for your training resource.
To set the VISION_PREDICTION_KEY environment variable, replace <your-prediction-key> with one of the keys for your prediction resource.
To set the VISION_PREDICTION_ENDPOINT environment variable, replace <your-prediction-endpoint> with the endpoint for your prediction resource.
To set the VISION_PREDICTION_RESOURCE_ID environment variable, replace <your-resource-id> with the resource ID for your prediction resource.
Important
We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use API keys, store them securely in Azure Key Vault, rotate them regularly, and restrict access to Azure Key Vault by using role-based access control and network access restrictions. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export VISION_TRAINING_KEY=<your-training-key>
export VISION_TRAINING_ENDPOINT=<your-training-endpoint>
export VISION_PREDICTION_KEY=<your-prediction-key>
export VISION_PREDICTION_ENDPOINT=<your-prediction-endpoint>
export VISION_PREDICTION_RESOURCE_ID=<your-resource-id>
After you add the environment variables, run source ~/.bashrc from your console window to make the changes take effect.
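The export commands above apply to bash. On Windows, the equivalent uses setx; note that setx persists the variables, but only console windows opened afterward pick up the new values:

```shell
setx VISION_TRAINING_KEY "<your-training-key>"
setx VISION_TRAINING_ENDPOINT "<your-training-endpoint>"
setx VISION_PREDICTION_KEY "<your-prediction-key>"
setx VISION_PREDICTION_ENDPOINT "<your-prediction-endpoint>"
setx VISION_PREDICTION_RESOURCE_ID "<your-resource-id>"
```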
Using Visual Studio, create a new .NET Core application.
After you create a new project, install the client library by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. Select Browse in the package manager that opens, then check Include prerelease, and search for Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training
and Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction
. Select the latest version and then choose Install.
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
From the project directory, open the program.cs file and add the following using
directives:
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
At the top of the Program class, create variables that retrieve your resource's keys and endpoints from environment variables. You'll also declare some basic objects to use later.
// Retrieve the environment variables for your credentials:
private static string trainingEndpoint = Environment.GetEnvironmentVariable("VISION_TRAINING_ENDPOINT");
private static string trainingKey = Environment.GetEnvironmentVariable("VISION_TRAINING_KEY");
private static string predictionEndpoint = Environment.GetEnvironmentVariable("VISION_PREDICTION_ENDPOINT");
private static string predictionKey = Environment.GetEnvironmentVariable("VISION_PREDICTION_KEY");
private static string predictionResourceId = Environment.GetEnvironmentVariable("VISION_PREDICTION_RESOURCE_ID");
private static List<string> hemlockImages;
private static List<string> japaneseCherryImages;
private static Tag hemlockTag;
private static Tag japaneseCherryTag;
private static Iteration iteration;
private static string publishedModelName = "treeClassModel";
private static MemoryStream testImage;
In the application's Main method, add calls for the methods used in this quickstart. You'll implement them later.
CustomVisionTrainingClient trainingApi = AuthenticateTraining(trainingEndpoint, trainingKey);
CustomVisionPredictionClient predictionApi = AuthenticatePrediction(predictionEndpoint, predictionKey);
Project project = CreateProject(trainingApi);
AddTags(trainingApi, project);
UploadImages(trainingApi, project);
TrainProject(trainingApi, project);
PublishIteration(trainingApi, project);
TestIteration(predictionApi, project);
DeleteProject(trainingApi, project);
In a new method, instantiate training and prediction clients using your endpoint and keys.
private static CustomVisionTrainingClient AuthenticateTraining(string endpoint, string trainingKey)
{
// Create the Api, passing in the training key
CustomVisionTrainingClient trainingApi = new CustomVisionTrainingClient(new Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.ApiKeyServiceClientCredentials(trainingKey))
{
Endpoint = endpoint
};
return trainingApi;
}
private static CustomVisionPredictionClient AuthenticatePrediction(string endpoint, string predictionKey)
{
// Create a prediction endpoint, passing in the obtained prediction key
CustomVisionPredictionClient predictionApi = new CustomVisionPredictionClient(new Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction.ApiKeyServiceClientCredentials(predictionKey))
{
Endpoint = endpoint
};
return predictionApi;
}
This next bit of code creates an image classification project. The created project shows up on the Custom Vision website. See the CreateProject method to specify other options when you create your project (explained in the Build a classifier web portal guide).
private static Project CreateProject(CustomVisionTrainingClient trainingApi)
{
// Create a new project
Console.WriteLine("Creating new project:");
return trainingApi.CreateProject("My New Project");
}
This method defines the tags that you train the model on.
private static void AddTags(CustomVisionTrainingClient trainingApi, Project project)
{
// Make two tags in the new project
hemlockTag = trainingApi.CreateTag(project.Id, "Hemlock");
japaneseCherryTag = trainingApi.CreateTag(project.Id, "Japanese Cherry");
}
First, download the sample images for this project. Save the contents of the sample Images folder to your local device.
Then define a helper method to upload the images in this directory. You might need to edit the GetFiles
argument to point to the location where your images are saved.
private static void LoadImagesFromDisk()
{
// this loads the images to be uploaded from disk into memory
hemlockImages = Directory.GetFiles(Path.Combine("Images", "Hemlock")).ToList();
japaneseCherryImages = Directory.GetFiles(Path.Combine("Images", "Japanese_Cherry")).ToList();
testImage = new MemoryStream(File.ReadAllBytes(Path.Combine("Images", "Test", "test_image.jpg")));
}
Next, define a method to upload the images, applying tags according to their folder location. The images are already sorted. You can upload and tag images iteratively, or in a batch (up to 64 per batch). This code snippet contains examples of both.
private static void UploadImages(CustomVisionTrainingClient trainingApi, Project project)
{
// Add some images to the tags
Console.WriteLine("\tUploading images");
LoadImagesFromDisk();
// Images can be uploaded one at a time
foreach (var image in hemlockImages)
{
using (var stream = new MemoryStream(File.ReadAllBytes(image)))
{
trainingApi.CreateImagesFromData(project.Id, stream, new List<Guid>() { hemlockTag.Id });
}
}
// Or uploaded in a single batch
var imageFiles = japaneseCherryImages.Select(img => new ImageFileCreateEntry(Path.GetFileName(img), File.ReadAllBytes(img))).ToList();
trainingApi.CreateImagesFromFiles(project.Id, new ImageFileCreateBatch(imageFiles, new List<Guid>() { japaneseCherryTag.Id }));
}
This method creates the first training iteration in the project. It queries the service until training is completed.
private static void TrainProject(CustomVisionTrainingClient trainingApi, Project project)
{
// Now that there are images with tags, start training the project
Console.WriteLine("\tTraining");
iteration = trainingApi.TrainProject(project.Id);
// The returned iteration will be in progress, and can be queried periodically to see when it has completed
while (iteration.Status == "Training")
{
Console.WriteLine("Waiting 10 seconds for training to complete...");
Thread.Sleep(10000);
// Re-query the iteration to get its updated status
iteration = trainingApi.GetIteration(project.Id, iteration.Id);
}
}
Tip
Train with selected tags
You can optionally train on only a subset of your applied tags. You might want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. In the TrainProject call, use the trainingParameters
parameter. Construct a TrainingParameters object and set its SelectedTags
property to a list of the IDs of the tags you want to use. The model trains to recognize only the tags on that list.
This method makes the current iteration of the model available for querying. You can use the model name as a reference to send prediction requests. You need to enter your own value for predictionResourceId
. You can find the prediction resource ID on the resource's Properties tab in the Azure portal, listed as Resource ID.
private static void PublishIteration(CustomVisionTrainingClient trainingApi, Project project)
{
trainingApi.PublishIteration(project.Id, iteration.Id, publishedModelName, predictionResourceId);
Console.WriteLine("Done!\n");
// Now there is a trained endpoint, it can be used to make a prediction
}
This part of the script loads the test image, queries the model endpoint, and outputs prediction data to the console.
private static void TestIteration(CustomVisionPredictionClient predictionApi, Project project)
{
// Make a prediction against the new project
Console.WriteLine("Making a prediction:");
var result = predictionApi.ClassifyImage(project.Id, publishedModelName, testImage);
// Loop over each prediction and write out the results
foreach (var c in result.Predictions)
{
Console.WriteLine($"\t{c.TagName}: {c.Probability:P1}");
}
}
Run the application by clicking the Debug button at the top of the IDE window.
As the application runs, it should open a console window and write the following output:
Creating new project:
Uploading images
Training
Done!
Making a prediction:
Hemlock: 95.0%
Japanese Cherry: 0.0%
You can then verify that the test image (found in Images/Test/) is tagged appropriately. Press any key to exit the application. You can also go back to the Custom Vision website and see the current state of your newly created project.
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
Now you've seen how every step of the image classification process can be done in code. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
This guide provides instructions and sample code to help you get started using the Custom Vision client library for Go to build an image classification model. You'll create a project, add tags, train the project, and use the project's prediction endpoint URL to programmatically test it. Use this example as a template for building your own image recognition app.
Note
If you want to build and train a classification model without writing code, see the browser-based guidance.
Use the Custom Vision client library for Go to:
Reference documentation for (training) and (prediction)
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

In this example, you'll write your credentials to environment variables on the local machine running the application.
Go to the Azure portal. If the Custom Vision resources you created in the Prerequisites section deployed successfully, select the Go to Resource button under Next steps. You can find your keys and endpoints in the resources' Keys and Endpoint pages, under Resource Management. You'll need to get the keys for both your training resource and prediction resource, along with the API endpoints.
You can find the prediction resource ID on the prediction resource's Properties tab in the Azure portal, listed as Resource ID.
Tip
You can also use https://www.customvision.ai to get these values. After you sign in, select the Settings icon at the top right. On the Settings pages, you can view all the keys, resource IDs, and endpoints.
To set the environment variables, open a console window and follow the instructions for your operating system and development environment.
To set the VISION_TRAINING_KEY environment variable, replace <your-training-key> with one of the keys for your training resource.
To set the VISION_TRAINING_ENDPOINT environment variable, replace <your-training-endpoint> with the endpoint for your training resource.
To set the VISION_PREDICTION_KEY environment variable, replace <your-prediction-key> with one of the keys for your prediction resource.
To set the VISION_PREDICTION_ENDPOINT environment variable, replace <your-prediction-endpoint> with the endpoint for your prediction resource.
To set the VISION_PREDICTION_RESOURCE_ID environment variable, replace <your-resource-id> with the resource ID for your prediction resource.
Important
We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use API keys, store them securely in Azure Key Vault, rotate them regularly, and restrict access to Azure Key Vault by using role-based access control and network access restrictions. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export VISION_TRAINING_KEY=<your-training-key>
export VISION_TRAINING_ENDPOINT=<your-training-endpoint>
export VISION_PREDICTION_KEY=<your-prediction-key>
export VISION_PREDICTION_ENDPOINT=<your-prediction-endpoint>
export VISION_PREDICTION_RESOURCE_ID=<your-resource-id>
After you add the environment variables, run source ~/.bashrc from your console window to make the changes take effect.
To write an image analysis app with Custom Vision for Go, you need the Custom Vision service client library. Run the following command in PowerShell:
go get -u github.com/Azure/azure-sdk-for-go/...
Or if you use dep
, within your repo run:
dep ensure -add github.com/Azure/azure-sdk-for-go
This example uses the images from the Azure AI services Python SDK Samples repository on GitHub. Clone or download this repository to your development environment. Remember its folder location for a later step.
Create a new file called sample.go in your preferred project directory, and open it in your preferred code editor.
Add the following code to your script to create a new Custom Vision service project.
See the CreateProject method to specify other options when you create your project (explained in the Build a classifier web portal guide).
import (
"context"
"bytes"
"fmt"
"io/ioutil"
"log"
"os"
"path"
"time"
"github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v3.0/customvision/training"
"github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v3.0/customvision/prediction"
)
var (
training_key string = os.Getenv("VISION_TRAINING_KEY")
prediction_key string = os.Getenv("VISION_PREDICTION_KEY")
prediction_resource_id = os.Getenv("VISION_PREDICTION_RESOURCE_ID")
endpoint string = os.Getenv("VISION_TRAINING_ENDPOINT")
project_name string = "Go Sample Project"
iteration_publish_name = "classifyModel"
sampleDataDirectory = "<path to sample images>"
)
func main() {
fmt.Println("Creating project...")
ctx := context.Background()
trainer := training.New(training_key, endpoint)
project, err := trainer.CreateProject(ctx, project_name, "sample project", nil, string(training.Multilabel))
if err != nil {
log.Fatal(err)
}
To add classification tags to your project, add the following code to the end of sample.go:
// Make two tags in the new project
hemlockTag, _ := trainer.CreateTag(ctx, *project.ID, "Hemlock", "Hemlock tree tag", string(training.Regular))
cherryTag, _ := trainer.CreateTag(ctx, *project.ID, "Japanese Cherry", "Japanese cherry tree tag", string(training.Regular))
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. You can upload up to 64 images in a single batch.
Note
You'll need to change the path to the images based on where you downloaded the Azure AI services Python SDK Samples repository earlier.
fmt.Println("Adding images...")
japaneseCherryImages, err := ioutil.ReadDir(path.Join(sampleDataDirectory, "Japanese Cherry"))
if err != nil {
fmt.Println("Error finding Sample images")
}
hemLockImages, err := ioutil.ReadDir(path.Join(sampleDataDirectory, "Hemlock"))
if err != nil {
fmt.Println("Error finding Sample images")
}
for _, file := range hemLockImages {
imageFile, _ := ioutil.ReadFile(path.Join(sampleDataDirectory, "Hemlock", file.Name()))
imageData := ioutil.NopCloser(bytes.NewReader(imageFile))
trainer.CreateImagesFromData(ctx, *project.ID, imageData, []string{ hemlockTag.ID.String() })
}
for _, file := range japaneseCherryImages {
imageFile, _ := ioutil.ReadFile(path.Join(sampleDataDirectory, "Japanese Cherry", file.Name()))
imageData := ioutil.NopCloser(bytes.NewReader(imageFile))
trainer.CreateImagesFromData(ctx, *project.ID, imageData, []string{ cherryTag.ID.String() })
}
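The loops above send one request per image. If you adapt this code to batch uploads instead, keep in mind the 64-image limit per batch noted earlier: larger sets have to be split first. A minimal, SDK-independent sketch; the `chunkFiles` helper is hypothetical, not part of the SDK:

```go
package main

import "fmt"

// chunkFiles splits a slice of file paths into batches of at most size items,
// matching the Custom Vision limit of 64 images per upload batch.
func chunkFiles(files []string, size int) [][]string {
	var batches [][]string
	for len(files) > 0 {
		n := size
		if len(files) < n {
			n = len(files)
		}
		batches = append(batches, files[:n])
		files = files[n:]
	}
	return batches
}

func main() {
	files := make([]string, 150)
	for i := range files {
		files[i] = fmt.Sprintf("image_%d.jpg", i)
	}
	// 150 images split into batches of 64, 64, and 22.
	for _, batch := range chunkFiles(files, 64) {
		fmt.Println(len(batch))
	}
}
```

Each batch would then be sent with a single upload call instead of one call per file.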
This code creates the first iteration of the prediction model and then publishes that iteration to the prediction endpoint. The name given to the published iteration can be used to send prediction requests. An iteration isn't available in the prediction endpoint until it's published.
fmt.Println("Training...")
iteration, _ := trainer.TrainProject(ctx, *project.ID)
for {
if *iteration.Status != "Training" {
break
}
fmt.Println("Training status: " + *iteration.Status)
time.Sleep(1 * time.Second)
iteration, _ = trainer.GetIteration(ctx, *project.ID, *iteration.ID)
}
fmt.Println("Training status: " + *iteration.Status)
trainer.PublishIteration(ctx, *project.ID, *iteration.ID, iteration_publish_name, prediction_resource_id)
To send an image to the prediction endpoint and retrieve the prediction, add the following code to the end of the file:
fmt.Println("Predicting...")
predictor := prediction.New(prediction_key, endpoint)
testImageData, _ := ioutil.ReadFile(path.Join(sampleDataDirectory, "Test", "test_image.jpg"))
results, _ := predictor.ClassifyImage(ctx, *project.ID, iteration_publish_name, ioutil.NopCloser(bytes.NewReader(testImageData)), "")
for _, prediction := range *results.Predictions {
fmt.Printf("\t%s: %.2f%%", *prediction.TagName, *prediction.Probability * 100)
fmt.Println("")
}
}
Run the application by using the following command:
go run sample.go
The output of the application should be similar to the following text:
Creating project...
Adding images...
Training...
Training status: Training
Training status: Training
Training status: Training
Training status: Completed
Done!
Hemlock: 93.53%
Japanese Cherry: 0.01%
You can then verify that the test image (found in <base_image_url>/Images/Test/) is tagged appropriately. You can also go back to the Custom Vision website and see the current state of your newly created project.
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
Now you've seen how every step of the image classification process can be done in code. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
Get started using the Custom Vision client library for Java to build an image classification model. Follow these steps to install the package and try out the example code for basic tasks. Use this example as a template for building your own image recognition app.
Note
If you want to build and train a classification model without writing code, see the browser-based guidance.
Use the Custom Vision client library for Java to:
Reference documentation | Library source code for (training) and (prediction) | Artifact (Maven) for (training) and (prediction) | Samples
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

In this example, you'll write your credentials to environment variables on the local machine running the application.
Go to the Azure portal. If the Custom Vision resources you created in the Prerequisites section deployed successfully, select the Go to Resource button under Next steps. You can find your keys and endpoints in the resources' Keys and Endpoint pages, under Resource Management. You'll need to get the keys for both your training resource and prediction resource, along with the API endpoints.
You can find the prediction resource ID on the prediction resource's Properties tab in the Azure portal, listed as Resource ID.
Tip
You can also use https://www.customvision.ai to get these values. After you sign in, select the Settings icon at the top right. On the Settings pages, you can view all the keys, resource IDs, and endpoints.
To set the environment variables, open a console window and follow the instructions for your operating system and development environment.
To set the VISION_TRAINING_KEY environment variable, replace <your-training-key> with one of the keys for your training resource.
To set the VISION_TRAINING_ENDPOINT environment variable, replace <your-training-endpoint> with the endpoint for your training resource.
To set the VISION_PREDICTION_KEY environment variable, replace <your-prediction-key> with one of the keys for your prediction resource.
To set the VISION_PREDICTION_ENDPOINT environment variable, replace <your-prediction-endpoint> with the endpoint for your prediction resource.
To set the VISION_PREDICTION_RESOURCE_ID environment variable, replace <your-resource-id> with the resource ID for your prediction resource.
Important
We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use API keys, store them securely in Azure Key Vault, rotate them regularly, and restrict access to Azure Key Vault by using role-based access control and network access restrictions. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export VISION_TRAINING_KEY=<your-training-key>
export VISION_TRAINING_ENDPOINT=<your-training-endpoint>
export VISION_PREDICTION_KEY=<your-prediction-key>
export VISION_PREDICTION_ENDPOINT=<your-prediction-endpoint>
export VISION_PREDICTION_RESOURCE_ID=<your-resource-id>
After you add the environment variables, run source ~/.bashrc from your console window to make the changes take effect.
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the gradle init
command from your working directory. This command creates essential build files for Gradle, including build.gradle.kts, which is used at runtime to create and configure your application.
gradle init --type basic
When prompted to choose a DSL, select Kotlin.
Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in the following build configuration. This configuration defines the project as a Java application whose entry point is the class CustomVisionQuickstart
. It imports the Custom Vision libraries.
plugins {
java
application
}
application {
mainClassName = "CustomVisionQuickstart"
}
repositories {
mavenCentral()
}
dependencies {
compile(group = "com.azure", name = "azure-cognitiveservices-customvision-training", version = "1.1.0-preview.2")
compile(group = "com.azure", name = "azure-cognitiveservices-customvision-prediction", version = "1.1.0-preview.2")
}
From your working directory, run the following command to create a project source folder:
mkdir -p src/main/java
Navigate to the new folder and create a file called CustomVisionQuickstart.java. Open it in your preferred editor or IDE and add the following import
statements:
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.UUID;
import com.google.common.io.ByteStreams;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Classifier;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Domain;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.DomainType;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.ImageFileCreateBatch;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.ImageFileCreateEntry;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Iteration;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Project;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Region;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.TrainProjectOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.CustomVisionTrainingClient;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.Trainings;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.CustomVisionTrainingManager;
import com.microsoft.azure.cognitiveservices.vision.customvision.prediction.models.ImagePrediction;
import com.microsoft.azure.cognitiveservices.vision.customvision.prediction.models.Prediction;
import com.microsoft.azure.cognitiveservices.vision.customvision.prediction.CustomVisionPredictionClient;
import com.microsoft.azure.cognitiveservices.vision.customvision.prediction.CustomVisionPredictionManager;
import com.microsoft.azure.cognitiveservices.vision.customvision.training.models.Tag;
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
In the application's CustomVisionQuickstart
class, create variables that retrieve your resource's keys and endpoint from environment variables.
// retrieve environment variables
final static String trainingApiKey = System.getenv("VISION_TRAINING_KEY");
final static String trainingEndpoint = System.getenv("VISION_TRAINING_ENDPOINT");
final static String predictionApiKey = System.getenv("VISION_PREDICTION_KEY");
final static String predictionEndpoint = System.getenv("VISION_PREDICTION_ENDPOINT");
final static String predictionResourceId = System.getenv("VISION_PREDICTION_RESOURCE_ID");
Important
Remember to remove the keys from your code when you're done, and never post them publicly. For production, use a secure way of storing and accessing your credentials like Azure Key Vault. See the Azure AI services security article for more information.
In the application's main
method, add calls for the methods used in this quickstart. You'll define these later.
Project project = createProject(trainClient);
addTags(trainClient, project);
uploadImages(trainClient, project);
trainProject(trainClient, project);
publishIteration(trainClient, project);
testProject(predictor, project);
The following classes and interfaces handle some of the major features of the Custom Vision Java client library.
Name | Description |
---|---|
CustomVisionTrainingClient | This class handles the creation, training, and publishing of your models. |
CustomVisionPredictionClient | This class handles the querying of your models for image classification predictions. |
ImagePrediction | This class defines a single prediction on a single image. It includes properties for the object ID and name, and a confidence score. |
These code snippets show you how to do the following tasks with the Custom Vision client library for Java:
In your main
method, instantiate training and prediction clients using your endpoint and keys.
// Authenticate
CustomVisionTrainingClient trainClient = CustomVisionTrainingManager
.authenticate(trainingEndpoint, trainingApiKey)
.withEndpoint(trainingEndpoint);
CustomVisionPredictionClient predictor = CustomVisionPredictionManager
.authenticate(predictionEndpoint, predictionApiKey)
.withEndpoint(predictionEndpoint);
This next method creates an image classification project. The created project will show up on the Custom Vision website that you visited earlier. See the CreateProject method overloads to specify other options when you create your project (explained in the Build a classifier web portal guide).
public static Project createProject(CustomVisionTrainingClient trainClient) {
System.out.println("ImageClassification Sample");
Trainings trainer = trainClient.trainings();
System.out.println("Creating project...");
Project project = trainer.createProject().withName("Sample Java Project").execute();
return project;
}
This method defines the tags that you'll train the model on. It returns the created tags so that later steps can reference their IDs.
public static Tag[] addTags(CustomVisionTrainingClient trainClient, Project project) {
    Trainings trainer = trainClient.trainings();
    // Create a tag for each tree species in the sample image set.
    Tag hemlockTag = trainer.createTag().withProjectId(project.id()).withName("Hemlock").execute();
    Tag cherryTag = trainer.createTag().withProjectId(project.id()).withName("Japanese Cherry").execute();
    return new Tag[] { hemlockTag, cherryTag };
}
First, download the sample images for this project. Save the contents of the sample Images folder to your local device.
This method uploads the sample images, applying the corresponding tag to each one. Pass in the tags returned by addTags.
public static void uploadImages(CustomVisionTrainingClient trainClient, Project project, Tag hemlockTag, Tag cherryTag) {
    Trainings trainer = trainClient.trainings();
    System.out.println("Adding images...");
    for (int i = 1; i <= 10; i++) {
        String fileName = "hemlock_" + i + ".jpg";
        byte[] contents = GetImage("/Hemlock", fileName);
        AddImageToProject(trainer, project, fileName, contents, hemlockTag.id(), null);
    }
    for (int i = 1; i <= 10; i++) {
        String fileName = "japanese_cherry_" + i + ".jpg";
        byte[] contents = GetImage("/Japanese_Cherry", fileName);
        AddImageToProject(trainer, project, fileName, contents, cherryTag.id(), null);
    }
}
The previous code snippet makes use of two helper functions that retrieve the images as resource streams and upload them to the service (you can upload up to 64 images in a single batch).
private static void AddImageToProject(Trainings trainer, Project project, String fileName, byte[] contents,
UUID tag, double[] regionValues) {
System.out.println("Adding image: " + fileName);
ImageFileCreateEntry file = new ImageFileCreateEntry().withName(fileName).withContents(contents);
ImageFileCreateBatch batch = new ImageFileCreateBatch().withImages(Collections.singletonList(file));
// If an optional region is specified, attach it to the image entry;
// otherwise, apply the tag to the whole batch.
if (regionValues != null) {
Region region = new Region().withTagId(tag).withLeft(regionValues[0]).withTop(regionValues[1])
.withWidth(regionValues[2]).withHeight(regionValues[3]);
file = file.withRegions(Collections.singletonList(region));
} else {
batch = batch.withTagIds(Collections.singletonList(tag));
}
trainer.createImagesFromFiles(project.id(), batch);
}
private static byte[] GetImage(String folder, String fileName) {
try {
return ByteStreams.toByteArray(CustomVisionSamples.class.getResourceAsStream(folder + "/" + fileName));
} catch (Exception e) {
System.out.println(e.getMessage());
e.printStackTrace();
}
return null;
}
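The helpers above send one image per request, but as noted earlier the service accepts up to 64 images in a single batch. For larger datasets it's common to chunk the image list first; a minimal, language-agnostic sketch of that chunking (shown here in Python; the 64-image limit comes from the text above):

```python
def chunk(items, size=64):
    """Split a list into consecutive batches no larger than size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 150 prepared image entries become batches of 64, 64, and 22.
batch_sizes = [len(b) for b in chunk(list(range(150)))]
print(batch_sizes)
```

The same slicing pattern applies regardless of language: upload each chunk as one ImageFileCreateBatch instead of one request per image.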
This method creates the first training iteration in the project. It polls the service until training is complete, then returns the trained iteration.
public static Iteration trainProject(CustomVisionTrainingClient trainClient, Project project) throws InterruptedException {
    System.out.println("Training...");
    Trainings trainer = trainClient.trainings();
    Iteration iteration = trainer.trainProject(project.id(), new TrainProjectOptionalParameter());
    while (iteration.status().equals("Training")) {
        System.out.println("Training Status: " + iteration.status());
        // Wait one second before polling again; Thread.sleep can throw InterruptedException.
        Thread.sleep(1000);
        iteration = trainer.getIteration(project.id(), iteration.id());
    }
    System.out.println("Training Status: " + iteration.status());
    return iteration;
}
This method makes the current iteration of the model available for querying. You can use the returned model name as a reference to send prediction requests. You need to supply your own value for predictionResourceId. You can find the prediction resource ID on the resource's Properties tab in the Azure portal, listed as Resource ID.
public static String publishIteration(CustomVisionTrainingClient trainClient, Project project, Iteration iteration, String predictionResourceId) {
    Trainings trainer = trainClient.trainings();
    // The iteration is now trained. Publish it to the prediction endpoint.
    String publishedModelName = "myModel";
    trainer.publishIteration(project.id(), iteration.id(), publishedModelName, predictionResourceId);
    return publishedModelName;
}
This method loads the test image, queries the model endpoint, and outputs prediction data to the console. Pass in the model name returned by publishIteration.
public static void testProject(CustomVisionPredictionClient predictor, Project project, String publishedModelName) {
    // Load the test image from resources.
    byte[] testImage = GetImage("/Test", "test_image.jpg");
    // Send the image to the published model and print each tag's probability.
    ImagePrediction results = predictor.predictions().classifyImage().withProjectId(project.id())
            .withPublishedName(publishedModelName).withImageData(testImage).execute();
    for (Prediction prediction : results.predictions()) {
        System.out.println(String.format("\t%s: %.2f%%", prediction.tagName(), prediction.probability() * 100.0f));
    }
}
You can build the app with:
gradle build
Run the application with the gradle run command:
gradle run
If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
Now you've seen how every step of the image classification process can be done in code. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
This guide provides instructions and sample code to help you get started using the Custom Vision client library for Node.js to build an image classification model. You can create a project, add tags, train the project, and use the project's prediction endpoint URL to programmatically test it. Use this example as a template for building your own image recognition app.
Note
If you want to build and train a classification model without writing code, see the browser-based guidance.
Use the Custom Vision client library for Node.js to:
Reference documentation for training and prediction | Package (npm) for training and prediction | Samples
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
In this example, you'll write your credentials to environment variables on the local machine running the application.
Go to the Azure portal. If the Custom Vision resources you created in the Prerequisites section deployed successfully, select the Go to Resource button under Next steps. You can find your keys and endpoints in the resources' Keys and Endpoint pages, under Resource Management. You'll need to get the keys for both your training resource and prediction resource, along with the API endpoints.
You can find the prediction resource ID on the prediction resource's Properties tab in the Azure portal, listed as Resource ID.
Tip
You can also use https://www.customvision.ai to get these values. After you sign in, select the Settings icon at the top right. On the Settings pages, you can view all the keys, resource ID, and endpoints.
To set the environment variables, open a console window and follow the instructions for your operating system and development environment.
To set the VISION_TRAINING_KEY environment variable, replace <your-training-key> with one of the keys for your training resource.
To set the VISION_TRAINING_ENDPOINT environment variable, replace <your-training-endpoint> with the endpoint for your training resource.
To set the VISION_PREDICTION_KEY environment variable, replace <your-prediction-key> with one of the keys for your prediction resource.
To set the VISION_PREDICTION_ENDPOINT environment variable, replace <your-prediction-endpoint> with the endpoint for your prediction resource.
To set the VISION_PREDICTION_RESOURCE_ID environment variable, replace <your-resource-id> with the resource ID for your prediction resource.
Important
We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use API keys, store them securely in Azure Key Vault, rotate them regularly, and restrict access to Azure Key Vault by using role-based access control and network access restrictions. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export VISION_TRAINING_KEY=<your-training-key>
export VISION_TRAINING_ENDPOINT=<your-training-endpoint>
export VISION_PREDICTION_KEY=<your-prediction-key>
export VISION_PREDICTION_ENDPOINT=<your-prediction-endpoint>
export VISION_PREDICTION_RESOURCE_ID=<your-resource-id>
After you add the environment variables, run source ~/.bashrc from your console window to make the changes effective.
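A missing variable usually surfaces later as a confusing authentication error. As a convenience (not part of the original sample), you could validate all five variables up front before making any service calls; a sketch in Python using the variable names from this article:

```python
import os

REQUIRED = [
    "VISION_TRAINING_KEY",
    "VISION_TRAINING_ENDPOINT",
    "VISION_PREDICTION_KEY",
    "VISION_PREDICTION_ENDPOINT",
    "VISION_PREDICTION_RESOURCE_ID",
]

def load_config(env=None):
    """Return the required settings, reporting every missing variable at once."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED}
```

Collecting all missing names in one pass saves a fix-rerun cycle per variable.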
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the npm init command to create a Node.js application with a package.json file. Press ENTER multiple times to complete the process.
npm init
To write an image analysis app with Custom Vision for Node.js, you need the Custom Vision npm packages. To install them, run the following command in PowerShell:
npm install @azure/cognitiveservices-customvision-training
npm install @azure/cognitiveservices-customvision-prediction
Your app's package.json file is updated with the dependencies.
Create a file named index.js and import the following libraries:
const util = require('util');
const fs = require('fs');
const TrainingApi = require("@azure/cognitiveservices-customvision-training");
const PredictionApi = require("@azure/cognitiveservices-customvision-prediction");
const msRest = require("@azure/ms-rest-js");
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
Create variables for your resource's Azure endpoint and keys.
// retrieve environment variables
const trainingKey = process.env["VISION_TRAINING_KEY"];
const trainingEndpoint = process.env["VISION_TRAINING_ENDPOINT"];
const predictionKey = process.env["VISION_PREDICTION_KEY"];
const predictionResourceId = process.env["VISION_PREDICTION_RESOURCE_ID"];
const predictionEndpoint = process.env["VISION_PREDICTION_ENDPOINT"];
Also add fields for your published iteration name and a timeout parameter for asynchronous calls.
const publishIterationName = "classifyModel";
const setTimeoutPromise = util.promisify(setTimeout);
Name | Description |
---|---|
TrainingAPIClient | This class handles the creation, training, and publishing of your models. |
PredictionAPIClient | This class handles the querying of your models for image classification predictions. |
Prediction | This interface defines a single prediction on a single image. It includes properties for the object ID and name, and a confidence score. |
These code snippets show you how to do the following tasks with the Custom Vision client library for JavaScript:
Instantiate client objects with your endpoint and key. Create an ApiKeyCredentials object with your key, and use it with your endpoint to create a TrainingAPIClient and PredictionAPIClient object.
const credentials = new msRest.ApiKeyCredentials({ inHeader: { "Training-key": trainingKey } });
const trainer = new TrainingApi.TrainingAPIClient(credentials, trainingEndpoint);
const predictor_credentials = new msRest.ApiKeyCredentials({ inHeader: { "Prediction-key": predictionKey } });
const predictor = new PredictionApi.PredictionAPIClient(predictor_credentials, predictionEndpoint);
Start a new function to contain all of your Custom Vision function calls. Add the following code to create a new Custom Vision service project.
(async () => {
console.log("Creating project...");
const sampleProject = await trainer.createProject("Sample Project");
To add classification tags to your project, add the following code to your function:
const hemlockTag = await trainer.createTag(sampleProject.id, "Hemlock");
const cherryTag = await trainer.createTag(sampleProject.id, "Japanese Cherry");
First, download the sample images for this project. Save the contents of the sample Images folder to your local device.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag.
const sampleDataRoot = "Images";
console.log("Adding images...");
let fileUploadPromises = [];
const hemlockDir = `${sampleDataRoot}/Hemlock`;
const hemlockFiles = fs.readdirSync(hemlockDir);
hemlockFiles.forEach(file => {
fileUploadPromises.push(trainer.createImagesFromData(sampleProject.id, fs.readFileSync(`${hemlockDir}/${file}`), { tagIds: [hemlockTag.id] }));
});
const cherryDir = `${sampleDataRoot}/Japanese_Cherry`;
const japaneseCherryFiles = fs.readdirSync(cherryDir);
japaneseCherryFiles.forEach(file => {
fileUploadPromises.push(trainer.createImagesFromData(sampleProject.id, fs.readFileSync(`${cherryDir}/${file}`), { tagIds: [cherryTag.id] }));
});
await Promise.all(fileUploadPromises);
Important
You need to change the path to the images (sampleDataRoot) based on where you downloaded the sample images.
This code creates the first iteration of the prediction model.
console.log("Training...");
let trainingIteration = await trainer.trainProject(sampleProject.id);
// Wait for training to complete
console.log("Training started...");
while (trainingIteration.status == "Training") {
console.log("Training status: " + trainingIteration.status);
await setTimeoutPromise(1000, null);
trainingIteration = await trainer.getIteration(sampleProject.id, trainingIteration.id)
}
console.log("Training status: " + trainingIteration.status);
This code publishes the trained iteration to the prediction endpoint. The name given to the published iteration can be used to send prediction requests. An iteration isn't available in the prediction endpoint until it's published.
// Publish the iteration to the end point
await trainer.publishIteration(sampleProject.id, trainingIteration.id, publishIterationName, predictionResourceId);
To send an image to the prediction endpoint and retrieve the prediction, add the following code to your function.
const testFile = fs.readFileSync(`${sampleDataRoot}/Test/test_image.jpg`);
const results = await predictor.classifyImage(sampleProject.id, publishIterationName, testFile);
// Show results
console.log("Results:");
results.predictions.forEach(predictedResult => {
console.log(`\t ${predictedResult.tagName}: ${(predictedResult.probability * 100.0).toFixed(2)}%`);
});
Then, close your Custom Vision function and call it.
})()
Run the application with the node command on your quickstart file.
node index.js
The output of the application should be similar to the following text:
Creating project...
Adding images...
Training...
Training started...
Training status: Training
Training status: Training
Training status: Training
Training status: Completed
Results:
Hemlock: 94.97%
Japanese Cherry: 0.01%
You can then verify that the test image (found in <sampleDataRoot>/Test/) is tagged appropriately. You can also go back to the Custom Vision website and see the current state of your newly created project.
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
This guide shows how every step of the image classification process can be done in code. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
Get started with the Custom Vision client library for Python. Follow these steps to install the package and try out the example code for building an image classification model. You'll create a project, add tags, train the project, and use the project's prediction endpoint URL to programmatically test it. Use this example as a template for building your own image recognition app.
Note
If you want to build and train a classification model without writing code, see the browser-based guidance.
Use the Custom Vision client library for Python to:
Reference documentation | Library source code | Package (PyPI) | Samples
To check whether pip is installed, run pip --version on the command line. Get pip by installing the latest version of Python.
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
In this example, you'll write your credentials to environment variables on the local machine running the application.
Go to the Azure portal. If the Custom Vision resources you created in the Prerequisites section deployed successfully, select the Go to Resource button under Next steps. You can find your keys and endpoints in the resources' Keys and Endpoint pages, under Resource Management. You'll need to get the keys for both your training resource and prediction resource, along with the API endpoints.
You can find the prediction resource ID on the prediction resource's Properties tab in the Azure portal, listed as Resource ID.
Tip
You can also use https://www.customvision.ai to get these values. After you sign in, select the Settings icon at the top right. On the Settings pages, you can view all the keys, resource ID, and endpoints.
To set the environment variables, open a console window and follow the instructions for your operating system and development environment.
To set the VISION_TRAINING_KEY environment variable, replace <your-training-key> with one of the keys for your training resource.
To set the VISION_TRAINING_ENDPOINT environment variable, replace <your-training-endpoint> with the endpoint for your training resource.
To set the VISION_PREDICTION_KEY environment variable, replace <your-prediction-key> with one of the keys for your prediction resource.
To set the VISION_PREDICTION_ENDPOINT environment variable, replace <your-prediction-endpoint> with the endpoint for your prediction resource.
To set the VISION_PREDICTION_RESOURCE_ID environment variable, replace <your-resource-id> with the resource ID for your prediction resource.
Important
We recommend Microsoft Entra ID authentication with managed identities for Azure resources to avoid storing credentials with your applications that run in the cloud.
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use API keys, store them securely in Azure Key Vault, rotate them regularly, and restrict access to Azure Key Vault by using role-based access control and network access restrictions. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export VISION_TRAINING_KEY=<your-training-key>
export VISION_TRAINING_ENDPOINT=<your-training-endpoint>
export VISION_PREDICTION_KEY=<your-prediction-key>
export VISION_PREDICTION_ENDPOINT=<your-prediction-endpoint>
export VISION_PREDICTION_RESOURCE_ID=<your-resource-id>
After you add the environment variables, run source ~/.bashrc from your console window to make the changes effective.
To write an image analysis app with Custom Vision for Python, you need the Custom Vision client library. After installing Python, run the following command in PowerShell or a console window:
pip install azure-cognitiveservices-vision-customvision
Create a new Python file and import the following libraries.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from azure.cognitiveservices.vision.customvision.training.models import ImageFileCreateBatch, ImageFileCreateEntry, Region
from msrest.authentication import ApiKeyCredentials
import os, time, uuid
Tip
Want to view the whole quickstart code file at once? You can find it on GitHub, which contains the code examples in this quickstart.
Create variables for your resource's Azure endpoint and keys.
# retrieve environment variables
ENDPOINT = os.environ["VISION_TRAINING_ENDPOINT"]
training_key = os.environ["VISION_TRAINING_KEY"]
prediction_key = os.environ["VISION_PREDICTION_KEY"]
prediction_resource_id = os.environ["VISION_PREDICTION_RESOURCE_ID"]
Name | Description |
---|---|
CustomVisionTrainingClient | This class handles the creation, training, and publishing of your models. |
CustomVisionPredictionClient | This class handles the querying of your models for image classification predictions. |
ImagePrediction | This class defines a single object prediction on a single image. It includes properties for the object ID and name, the bounding box location of the object, and a confidence score. |
These code snippets show you how to do the following with the Custom Vision client library for Python:
Instantiate a training and prediction client with your endpoint and keys. Create ApiKeyCredentials objects with your keys, and use them with your endpoint to create a CustomVisionTrainingClient and CustomVisionPredictionClient object.
credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(ENDPOINT, prediction_credentials)
Add the following code to your script to create a new Custom Vision service project.
See the create_project method to specify other options when you create your project (explained in the Build a classifier web portal guide).
publish_iteration_name = "classifyModel"
credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
# Create a new project
print ("Creating project...")
project_name = uuid.uuid4()
project = trainer.create_project(project_name)
To add classification tags to your project, add the following code:
# Make two tags in the new project
hemlock_tag = trainer.create_tag(project.id, "Hemlock")
cherry_tag = trainer.create_tag(project.id, "Japanese Cherry")
First, download the sample images for this project. Save the contents of the sample Images folder to your local device.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. You can upload up to 64 images in a single batch.
base_image_location = os.path.join (os.path.dirname(__file__), "Images")
print("Adding images...")
image_list = []
for image_num in range(1, 11):
file_name = "hemlock_{}.jpg".format(image_num)
with open(os.path.join (base_image_location, "Hemlock", file_name), "rb") as image_contents:
image_list.append(ImageFileCreateEntry(name=file_name, contents=image_contents.read(), tag_ids=[hemlock_tag.id]))
for image_num in range(1, 11):
file_name = "japanese_cherry_{}.jpg".format(image_num)
with open(os.path.join (base_image_location, "Japanese_Cherry", file_name), "rb") as image_contents:
image_list.append(ImageFileCreateEntry(name=file_name, contents=image_contents.read(), tag_ids=[cherry_tag.id]))
upload_result = trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=image_list))
if not upload_result.is_batch_successful:
print("Image batch upload failed.")
for image in upload_result.images:
print("Image status: ", image.status)
exit(-1)
Note
You need to change the path to the images based on where you downloaded the Azure AI services Python SDK Samples repo.
This code creates the first iteration of the prediction model.
print ("Training...")
iteration = trainer.train_project(project.id)
while (iteration.status != "Completed"):
iteration = trainer.get_iteration(project.id, iteration.id)
print ("Training status: " + iteration.status)
print ("Waiting 10 seconds...")
time.sleep(10)
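The loop above polls every 10 seconds and relies on training eventually reaching Completed. If you adapt the pattern elsewhere, a generic helper with a timeout guard avoids waiting forever on a stalled or failed run; a sketch (not part of the original sample; the get_status callable stands in for the trainer.get_iteration call):

```python
import time

def poll_until(get_status, done_states=("Completed", "Failed"), interval=10, timeout=1800):
    """Call get_status() until it returns a terminal state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    status = get_status()
    while status not in done_states:
        if time.monotonic() >= deadline:
            raise TimeoutError(f"status still {status!r} after {timeout} seconds")
        time.sleep(interval)
        status = get_status()
    return status
```

For example, poll_until(lambda: trainer.get_iteration(project.id, iteration.id).status) would mirror the loop above while bounding the total wait.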
Tip
Train with selected tags
You can optionally train on only a subset of your applied tags. You might want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. In the train_project call, set the optional parameter selected_tags to a list of the ID strings of the tags you want to use. The model will train to only recognize the tags on that list.
An iteration isn't available in the prediction endpoint until it's published. The following code makes the current iteration of the model available for querying.
# The iteration is now trained. Publish it to the project endpoint
trainer.publish_iteration(project.id, iteration.id, publish_iteration_name, prediction_resource_id)
print ("Done!")
To send an image to the prediction endpoint and retrieve the prediction, add the following code to the end of the file:
# Now there is a trained endpoint that can be used to make a prediction
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(ENDPOINT, prediction_credentials)
with open(os.path.join (base_image_location, "Test/test_image.jpg"), "rb") as image_contents:
results = predictor.classify_image(
project.id, publish_iteration_name, image_contents.read())
# Display the results.
for prediction in results.predictions:
print("\t" + prediction.tag_name +
": {0:.2f}%".format(prediction.probability * 100))
Run the application by using the following command:
python CustomVisionQuickstart.py
The output of the application should be similar to the following text:
Creating project...
Adding images...
Training...
Training status: Training
Training status: Completed
Done!
Hemlock: 93.53%
Japanese Cherry: 0.01%
You can then verify that the test image (found in <base_image_location>/Test/) is tagged appropriately. You can also go back to the Custom Vision website and see the current state of your newly created project.
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
Now you've seen how every step of the image classification process can be done in code. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.
Get started with the Custom Vision REST API. Follow these steps to call the API and build an image classification model. You'll create a project, add tags, train the project, and use the project's prediction endpoint URL to programmatically test it. Use this example as a template for building your own image recognition app.
Note
Custom Vision is most easily used through a client library SDK or through the browser-based guidance.
Use the Custom Vision REST API to:
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
You'll use a command like the following to create an image classification project. The created project will show up on the Custom Vision website.
curl -v -X POST -H "Training-key: {subscription key}" "https://{endpoint}/customvision/v3.3/Training/projects?name={name}"
Copy the command to a text editor and make the following changes:
Replace {subscription key} with your valid key.
Replace {endpoint} with the endpoint that corresponds to your key.
Note
New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Azure AI services.
Replace {name} with the name of your project.
A JSON response like the following example appears. Save the "id" value of your project to a temporary location.
{
"id": "00000000-0000-0000-0000-000000000000",
"name": "string",
"description": "string",
"settings": {
"domainId": "00000000-0000-0000-0000-000000000000",
"classificationType": "Multiclass",
"targetExportPlatforms": [
"CoreML"
],
"useNegativeSet": true,
"detectionParameters": "string",
"imageProcessingSettings": {
"augmentationMethods": {}
}
},
"created": "string",
"lastModified": "string",
"thumbnailUri": "string",
"drModeEnabled": true,
"status": "Succeeded"
}
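If you script these calls rather than paste them into a terminal, note that the project name must be URL-encoded in the query string. A small sketch of building the create-project URL from the command above, using Python's standard library (the endpoint value is a hypothetical example):

```python
from urllib.parse import quote

def create_project_url(endpoint, name):
    # Build the v3.3 create-project URL shown in the curl command above.
    return f"https://{endpoint}/customvision/v3.3/Training/projects?name={quote(name)}"

print(create_project_url("westus2.api.cognitive.microsoft.com", "My Sample Project"))
```

Spaces and other reserved characters in the name would otherwise produce a malformed request.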
Use the following command to define the tags that you'll train the model on.
curl -v -X POST -H "Training-key: {subscription key}" "https://{endpoint}/customvision/v3.3/Training/projects/{projectId}/tags?name={name}"
Replace {projectId} with your own project ID.
Replace {name} with the name of the tag you want to use.
Repeat this process for all the tags you'd like to use in your project. If you're using the example images provided, add the tags "Hemlock" and "Japanese Cherry".
A JSON response like the following example appears. Save the "id" value of each tag to a temporary location.
{
"id": "00000000-0000-0000-0000-000000000000",
"name": "string",
"description": "string",
"type": "Regular",
"imageCount": 0
}
Next, download the sample images for this project. Save the contents of the sample Images folder to your local device.
Use the following command to upload the images and apply tags; once for the "Hemlock" images, and separately for the "Japanese Cherry" images. See the Create Images From Data API for more options.
curl -v -X POST -H "Content-Type: multipart/form-data" -H "Training-key: {subscription key}" "https://{endpoint}/customvision/v3.3/Training/projects/{projectId}/images?tagIds={tagArray}"
--data-ascii "{binary data}"
Replace {projectId} with your own project ID.
Replace {tagArray} with the ID of a tag.
This command trains the model on the tagged images you uploaded and returns an ID for the current project iteration.
curl -v -X POST -H "Content-Type: application/json" -H "Training-key: {subscription key}" "https://{endpoint}/customvision/v3.3/Training/projects/{projectId}/train"
Replace {projectId} with your own project ID.
Tip
Train with selected tags
You can optionally train on only a subset of your applied tags. You might want to do this if you haven't applied enough of certain tags yet, but you do have enough of others. Add the optional JSON content to the body of your request. Populate the "selectedTags" array with the IDs of the tags you want to use.
{
"selectedTags": [
"00000000-0000-0000-0000-000000000000"
]
}
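If you generate this request body in a script, it can be built from the tag IDs you saved earlier; a minimal sketch in Python (the ID below is the placeholder from the example above):

```python
import json

# Hypothetical tag IDs saved from the earlier tag-creation step.
tag_ids = ["00000000-0000-0000-0000-000000000000"]

# Serialize the optional training body with the selected tag IDs.
body = json.dumps({"selectedTags": tag_ids})
print(body)
```

Serializing with json.dumps avoids hand-quoting errors in the request body.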
The JSON response contains information about your trained project, including the iteration ID ("id"). Save this value for the next step.
{
"id": "00000000-0000-0000-0000-000000000000",
"name": "string",
"status": "string",
"created": "string",
"lastModified": "string",
"trainedAt": "string",
"projectId": "00000000-0000-0000-0000-000000000000",
"exportable": true,
"exportableTo": [
"CoreML"
],
"domainId": "00000000-0000-0000-0000-000000000000",
"classificationType": "Multiclass",
"trainingType": "Regular",
"reservedBudgetInHours": 0,
"trainingTimeInMinutes": 0,
"publishName": "string",
"originalPublishResourceId": "string"
}
This method makes the current iteration of the model available for querying. You use the returned model name as a reference to send prediction requests.
curl -v -X POST -H "Training-key: {subscription key}" "https://{endpoint}/customvision/v3.3/Training/projects/{projectId}/iterations/{iterationId}/publish?publishName={publishName}&predictionId={predictionId}"
Replace {projectId} with your own project ID.
Replace {iterationId} with the ID returned in the previous step.
Replace {publishName} with the name you'd like to assign to your prediction model.
Replace {predictionId} with your own prediction resource ID. You can find the prediction resource ID on the resource's Properties tab in the Azure portal, listed as Resource ID.
Finally, use this command to test your trained model by uploading a new image for it to classify with tags. You can use the image in the Test folder of the sample files you downloaded earlier.
curl -v -X POST -H "Prediction-key: {subscription key}" -H "Content-Type: application/octet-stream" --data-binary @{image path} "https://{endpoint}/customvision/v3.0/Prediction/{projectId}/classify/iterations/{publishName}/image"
Replace {projectId} with your own project ID.
Replace {publishName} with the name you used in the previous step.
The returned JSON response lists each of the tags that the model applied to your image, along with probability scores for each tag.
{
"id": "00000000-0000-0000-0000-000000000000",
"project": "00000000-0000-0000-0000-000000000000",
"iteration": "00000000-0000-0000-0000-000000000000",
"created": "string",
"predictions": [
{
"probability": 0.0,
"tagId": "00000000-0000-0000-0000-000000000000",
"tagName": "string",
"boundingBox": {
"left": 0.0,
"top": 0.0,
"width": 0.0,
"height": 0.0
},
"tagType": "Regular"
}
]
}
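In a script, you usually want just the highest-probability tag from this response. A minimal sketch that parses a body shaped like the example above with Python's standard library (the values are illustrative placeholders):

```python
import json

# A trimmed-down stand-in for the prediction response shown above.
response_body = """
{
  "predictions": [
    {"probability": 0.9497, "tagName": "Hemlock"},
    {"probability": 0.0001, "tagName": "Japanese Cherry"}
  ]
}
"""

def best_tag(body):
    """Return the (tagName, probability) pair with the highest probability."""
    predictions = json.loads(body)["predictions"]
    top = max(predictions, key=lambda p: p["probability"])
    return top["tagName"], top["probability"]

print(best_tag(response_body))
```

The same selection works on the full response, which carries extra fields such as tagId and boundingBox that can simply be ignored for classification.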
If you wish to implement your own image classification project (or try an object detection project instead), you might want to delete the tree identification project from this example. A free subscription allows for two Custom Vision projects.
On the Custom Vision website, navigate to Projects and select the trash can under My New Project.
Now you've done every step of the image classification process using the REST API. This sample executes a single training iteration, but often you'll need to train and test your model multiple times in order to make it more accurate.