Quickstart: Use the Bing Visual Search client library
Warning
On October 30, 2020, the Bing Search APIs moved from Azure AI services to Bing Search Services. This documentation is provided for reference only. For updated documentation, see the Bing search API documentation. For instructions on creating new Azure resources for Bing search, see Create a Bing Search resource through the Azure Marketplace.
Use this quickstart to begin getting image insights from the Bing Visual Search service, using the C# client library. While Bing Visual Search has a REST API compatible with most programming languages, the client library provides an easy way to integrate the service into your applications. The source code for this sample can be found on GitHub.
Reference documentation | Library source code | Package (NuGet) | Samples
Prerequisites
- Visual Studio 2019. If you are using Linux/macOS, this application can be run using Mono.
- The NuGet Visual Search package. From the Solution Explorer in Visual Studio, right-click on your project and select **Manage NuGet Packages** from the menu. Install the `Microsoft.Azure.CognitiveServices.Search.VisualSearch` package. Installing the NuGet package also installs the following:
  - Microsoft.Rest.ClientRuntime
  - Microsoft.Rest.ClientRuntime.Azure
  - Newtonsoft.Json
Create an Azure resource
Start using the Bing Visual Search API by creating one of the following Azure resources:
- Bing Search v7 resource
  - Available through the Azure portal until you delete the resource.
  - Select the S9 pricing tier.
- Multi-service resource
  - Available through the Azure portal until you delete the resource.
  - Use the same key and endpoint for your applications, across multiple Azure AI services.
Create and initialize the application
In Visual Studio, create a new project. Then add the following directives.
```csharp
using Microsoft.Azure.CognitiveServices.Search.VisualSearch;
using Microsoft.Azure.CognitiveServices.Search.VisualSearch.Models;
```
Instantiate the client with your subscription key.
```csharp
var client = new VisualSearchClient(new ApiKeyServiceClientCredentials("YOUR-ACCESS-KEY"));
```
Send a search request
Create a `FileStream` to your image (in this case `TestImages/image.jpg`). Then use the client to send a search request using `client.Images.VisualSearchMethodAsync()`.

```csharp
System.IO.FileStream stream = new FileStream(Path.Combine("TestImages", "image.jpg"), FileMode.Open);

// The knowledgeRequest parameter is not required if an image binary is passed in the request body
var visualSearchResults = client.Images.VisualSearchMethodAsync(image: stream, knowledgeRequest: (string)null).Result;
```
Parse the results of the previous query:
```csharp
// Visual Search results
if (visualSearchResults.Image?.ImageInsightsToken != null)
{
    Console.WriteLine($"Uploaded image insights token: {visualSearchResults.Image.ImageInsightsToken}");
}
else
{
    Console.WriteLine("Couldn't find image insights token!");
}

// List of tags
if (visualSearchResults.Tags.Count > 0)
{
    var firstTagResult = visualSearchResults.Tags[0];
    Console.WriteLine($"Visual search tag count: {visualSearchResults.Tags.Count}");

    // List of actions in first tag
    if (firstTagResult.Actions.Count > 0)
    {
        var firstActionResult = firstTagResult.Actions[0];
        Console.WriteLine($"First tag action count: {firstTagResult.Actions.Count}");
        Console.WriteLine($"First tag action type: {firstActionResult.ActionType}");
    }
    else
    {
        Console.WriteLine("Couldn't find tag actions!");
    }
}
```
Next steps
Use this quickstart to begin getting image insights from the Bing Visual Search service, using the Java client library. While Bing Visual Search has a REST API compatible with most programming languages, the client library provides an easy way to integrate the service into your applications. The source code for this quickstart can be found on GitHub.
Use the Bing Visual Search client library for Java to:
- Upload an image to send a visual search request.
- Get the image insight token and visual search tags.
Reference documentation | Library source code | Artifact (Maven) | Samples
Prerequisites
- Azure subscription - Create one for free
- The current version of the Java Development Kit (JDK)
- The Gradle build tool, or another dependency manager
Create an Azure resource
Start using the Bing Visual Search API by creating one of the following Azure resources:
- Bing Search v7 resource
  - Available through the Azure portal until you delete the resource.
  - Select the S9 pricing tier.
- Multi-service resource
  - Available through the Azure portal until you delete the resource.
  - Use the same key and endpoint for your applications, across multiple Azure AI services.
After you get a key from your resource, create an environment variable for the key, named `BING_SEARCH_V7_SUBSCRIPTION_KEY`.
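As a sketch of that step, the environment variable can be set from a terminal; this example assumes a Bash shell, and `your-key-here` is a placeholder for your real key:

```shell
# Set the subscription key for the current Bash session (replace the placeholder with your real key)
export BING_SEARCH_V7_SUBSCRIPTION_KEY="your-key-here"

# Verify the variable is visible to child processes
echo "$BING_SEARCH_V7_SUBSCRIPTION_KEY"
```

On Windows, `setx BING_SEARCH_V7_SUBSCRIPTION_KEY "your-key-here"` persists the variable instead, but it only becomes visible in shells opened afterward.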
Create a new Gradle project
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
```console
mkdir myapp && cd myapp
```
Run the `gradle init` command from your working directory. This command will create essential build files for Gradle, including *build.gradle.kts*, which is used at runtime to create and configure your application.

```console
gradle init --type basic
```
When prompted to choose a DSL, select Kotlin.
Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in this build configuration:
```kotlin
plugins {
    java
    application
}

application {
    mainClassName = "main.java.BingVisualSearchSample"
}

repositories {
    mavenCentral()
}

dependencies {
    compile("org.slf4j:slf4j-simple:1.7.25")
    compile("com.microsoft.azure.cognitiveservices:azure-cognitiveservices-visualsearch:1.0.2-beta")
    compile("com.google.code.gson:gson:2.8.5")
}
```
Create a folder for your sample app. From your working directory, run the following command:
```console
mkdir -p src/main/java
```
Create a folder for the image you want to upload to the API:

```console
mkdir -p src/main/resources
```

Place the image inside the resources folder.
Navigate to the new folder and create a file called *BingVisualSearchSample.java*. Open it in your preferred editor or IDE and add the following `import` statements:
```java
package main.java;

import com.google.common.io.ByteStreams;
import com.google.gson.Gson;
import com.microsoft.azure.cognitiveservices.search.visualsearch.BingVisualSearchAPI;
import com.microsoft.azure.cognitiveservices.search.visualsearch.BingVisualSearchManager;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.CropArea;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.ErrorResponseException;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.Filters;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.ImageInfo;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.ImageKnowledge;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.ImageTag;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.KnowledgeRequest;
import com.microsoft.azure.cognitiveservices.search.visualsearch.models.VisualSearchRequest;
```
Then create a new class.
```java
public class BingVisualSearchSample {
}
```
In the application's `main` method, create variables for your resource's Azure endpoint and key. If you created the environment variable after you launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the variable. Then create a `byte[]` for the image you'll be uploading. Create a `try` block for the methods you'll define later, and load the image and convert it to bytes using `toByteArray()`.
```java
// IMPORTANT: MAKE SURE TO USE S9 PRICING TIER OF THE BING SEARCH V7 API KEY FOR VISUAL SEARCH.
// Otherwise, you will get an invalid subscription key error.
public static void main(String[] args) {
    // Set the BING_SEARCH_V7_SUBSCRIPTION_KEY environment variable with your subscription key,
    // then reopen your command prompt or IDE. If not, you may get an API key not found exception.
    final String subscriptionKey = System.getenv("BING_SEARCH_V7_SUBSCRIPTION_KEY");
    BingVisualSearchAPI client = BingVisualSearchManager.authenticate(subscriptionKey);
    //runSample(client);
    byte[] imageBytes;

    try {
        imageBytes = ByteStreams.toByteArray(ClassLoader.getSystemClassLoader().getResourceAsStream("image.jpg"));
        visualSearch(client, imageBytes);
        searchWithCropArea(client, imageBytes);
        // wait 1 second to avoid rate limiting
        Thread.sleep(1000);
        searchWithFilter(client);
        searchUsingCropArea(client);
        searchUsingInsightToken(client);
    }
    catch (java.io.IOException f) {
        System.out.println(f.getMessage());
        f.printStackTrace();
    }
    catch (java.lang.InterruptedException f) {
        f.printStackTrace();
    }
}
```
Install the client library
This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the Maven Central Repository.
In your project's *build.gradle.kts* file, be sure to include the client library as an `implementation` statement.
```kotlin
dependencies {
    compile("org.slf4j:slf4j-simple:1.7.25")
    compile("com.microsoft.azure.cognitiveservices:azure-cognitiveservices-visualsearch:1.0.2-beta")
    compile("com.google.code.gson:gson:2.8.5")
}
```
Code examples
These code snippets show you how to do the following tasks with the Bing Visual Search client library and Java:
- Authenticate the client
- Send a visual search request
- Print the image insight token and visual search tags
Authenticate the client
Note
This quickstart assumes you've created an environment variable for your Bing Visual Search key, named `BING_SEARCH_V7_SUBSCRIPTION_KEY`.
In your main method, be sure to use your subscription key to instantiate a BingVisualSearchAPI object.
```java
BingVisualSearchAPI client = BingVisualSearchManager.authenticate(subscriptionKey);
```
Send a visual search request
In a new method, send the image byte array (which was created in the `main()` method) using the client's `bingImages().visualSearch()` method.
```java
public static void visualSearch(BingVisualSearchAPI client, byte[] imageBytes) {
    System.out.println("Calling Bing Visual Search with image binary");
    ImageKnowledge visualSearchResults = client.bingImages().visualSearch()
            .withImage(imageBytes)
            .execute();
    PrintVisualSearchResults(visualSearchResults);
}
```
Print the image insight token and visual search tags
Check whether the `ImageKnowledge` object is null. If it isn't, print the image insights token, the number of tags, the number of actions, and the first action type.
```java
static void PrintVisualSearchResults(ImageKnowledge visualSearchResults) {
    if (visualSearchResults == null) {
        System.out.println("No visual search result data.");
    } else {
        // Print token
        if (visualSearchResults.image() != null && visualSearchResults.image().imageInsightsToken() != null) {
            System.out.println("Found uploaded image insights token: " + visualSearchResults.image().imageInsightsToken());
        } else {
            System.out.println("Couldn't find image insights token!");
        }

        // List tags
        if (visualSearchResults.tags() != null && visualSearchResults.tags().size() > 0) {
            System.out.format("Found visual search tag count: %d\n", visualSearchResults.tags().size());
            ImageTag firstTagResult = visualSearchResults.tags().get(0);

            // List of actions in first tag
            if (firstTagResult.actions() != null && firstTagResult.actions().size() > 0) {
                System.out.format("Found first tag action count: %d\n", firstTagResult.actions().size());
                System.out.println("Found first tag action type: " + firstTagResult.actions().get(0).actionType());
            }
        } else {
            System.out.println("Couldn't find image tags!");
        }
    }
}
```
Run the application
You can build the app with:

```console
gradle build
```

Run the application with the `run` goal:

```console
gradle run
```
Clean up resources
If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next steps
Use this quickstart to begin getting image insights from the Bing Visual Search service, using the JavaScript client library. While Bing Visual Search has a REST API compatible with most programming languages, the client library provides an easy way to integrate the service into your applications. The source code for this sample can be found on GitHub.
Reference documentation | Package (NPM) | Samples
Prerequisites
- The latest version of Node.js.
- The Bing Visual Search SDK for JavaScript. To install, run `npm install @azure/cognitiveservices-visualsearch`.
- The `CognitiveServicesCredentials` class from the `@azure/ms-rest-azure-js` package to authenticate the client. To install, run `npm install @azure/ms-rest-azure-js`.
Create an Azure resource
Start using the Bing Visual Search API by creating one of the following Azure resources:
- Bing Search v7 resource
  - Available through the Azure portal until you delete the resource.
  - Select the S9 pricing tier.
- Multi-service resource
  - Available through the Azure portal until you delete the resource.
  - Use the same key and endpoint for your applications, across multiple Azure AI services.
Create and initialize the application
Create a new JavaScript file in your favorite IDE or editor, and add the following requirements. Then create variables for your subscription key, Custom Configuration ID, and file path to the image you want to upload.
```javascript
const os = require("os");
const async = require('async');
const fs = require('fs');
const Search = require('@azure/cognitiveservices-visualsearch');
const CognitiveServicesCredentials = require('@azure/ms-rest-azure-js').CognitiveServicesCredentials;

let keyVar = 'YOUR-VISUAL-SEARCH-ACCESS-KEY';
let credentials = new CognitiveServicesCredentials(keyVar);
let filePath = "../Data/image.jpg";
```
Instantiate the client.
```javascript
let visualSearchClient = new Search.VisualSearchClient(credentials);
```
Search for images
Use `fs.createReadStream()` to read in your image file, and create variables for your search request and results. Then use the client to search images.

```javascript
let fileStream = fs.createReadStream(filePath);
let visualSearchRequest = JSON.stringify({});
let visualSearchResults;
try {
    visualSearchResults = await visualSearchClient.images.visualSearch({
        image: fileStream,
        knowledgeRequest: visualSearchRequest
    });
    console.log("Search visual search request with binary of image");
} catch (err) {
    console.log("Encountered exception. " + err.message);
}
```
Parse the results of the previous query:
```javascript
// Visual Search results
if (visualSearchResults.image.imageInsightsToken) {
    console.log(`Uploaded image insights token: ${visualSearchResults.image.imageInsightsToken}`);
} else {
    console.log("Couldn't find image insights token!");
}

// List of tags
if (visualSearchResults.tags.length > 0) {
    let firstTagResult = visualSearchResults.tags[0];
    console.log(`Visual search tag count: ${visualSearchResults.tags.length}`);

    // List of actions in first tag
    if (firstTagResult.actions.length > 0) {
        let firstActionResult = firstTagResult.actions[0];
        console.log(`First tag action count: ${firstTagResult.actions.length}`);
        console.log(`First tag action type: ${firstActionResult.actionType}`);
    } else {
        console.log("Couldn't find tag actions!");
    }
} else {
    console.log("Couldn't find image tags!");
}
```
Next steps
Use this quickstart to begin getting image insights from the Bing Visual Search service, using the Python client library. While Bing Visual Search has a REST API compatible with most programming languages, the client library provides an easy way to integrate the service into your applications. The source code for this sample can be found on GitHub.
Reference documentation | Library source code | Package (PyPI) | Samples
Prerequisites
- Python 2.x or 3.x
- It is recommended to use a virtual environment. Install and initialize the virtual environment with the venv module.
- The Bing Visual Search client library for Python. You can install it with the following commands:

```console
cd mytestenv
python -m pip install azure-cognitiveservices-search-visualsearch
```
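The virtual-environment step above can be sketched as follows, assuming `python3` is on your PATH and using the macOS/Linux activation path (`mytestenv` matches the folder name used later in this quickstart):

```shell
# Create a virtual environment named mytestenv with the standard-library venv module
python3 -m venv mytestenv

# Activate it for the current shell (on Windows, run mytestenv\Scripts\activate instead)
source mytestenv/bin/activate
```

Once activated, `python` and `pip` resolve to the environment's own interpreter, so the install command above affects only this project.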
Create an Azure resource
Start using the Bing Visual Search API by creating one of the following Azure resources:
- Bing Search v7 resource
  - Available through the Azure portal until you delete the resource.
  - Select the S9 pricing tier.
- Multi-service resource
  - Available through the Azure portal until you delete the resource.
  - Use the same key and endpoint for your applications, across multiple Azure AI services.
Create and initialize the application
Create a new Python file in your favorite IDE or editor, and add the following import statements.
```python
import http.client, urllib.parse
import json
import os.path

from azure.cognitiveservices.search.visualsearch import VisualSearchClient
from azure.cognitiveservices.search.visualsearch.models import (
    VisualSearchRequest,
    CropArea,
    ImageInfo,
    Filters,
    KnowledgeRequest,
)
from msrest.authentication import CognitiveServicesCredentials
```
Create variables for your subscription key, Custom Configuration ID, and the image you want to upload.
```python
subscription_key = 'YOUR-VISUAL-SEARCH-ACCESS-KEY'
PATH = 'C:\\Users\\USER\\azure-cognitive-samples\\mytestenv\\TestImages\\'
image_path = os.path.join(PATH, "image.jpg")
```
Instantiate the client.

```python
client = VisualSearchClient(
    endpoint="https://api.cognitive.microsoft.com",
    credentials=CognitiveServicesCredentials(subscription_key)
)
```
Send the search request
With the image file open, serialize `VisualSearchRequest()`, and pass it as the `knowledge_request` parameter for `visual_search()`.

```python
with open(image_path, "rb") as image_fd:
    # You need to pass the serialized form of the model
    knowledge_request = json.dumps(VisualSearchRequest().serialize())

    print("\r\nSearch visual search request with binary of dog image")

    result = client.images.visual_search(image=image_fd, knowledge_request=knowledge_request)
```
If any results were returned, print them, the tags, and the actions in the first tag.
```python
if not result:
    print("No visual search result data.")

# Visual Search results
if result.image.image_insights_token:
    print("Uploaded image insights token: {}".format(result.image.image_insights_token))
else:
    print("Couldn't find image insights token!")

# List of tags
if result.tags:
    first_tag = result.tags[0]
    print("Visual search tag count: {}".format(len(result.tags)))

    # List of actions in first tag
    if first_tag.actions:
        first_tag_action = first_tag.actions[0]
        print("First tag action count: {}".format(len(first_tag.actions)))
        print("First tag action type: {}".format(first_tag_action.action_type))
    else:
        print("Couldn't find tag actions!")
else:
    print("Couldn't find image tags!")
```