Azure AI Projects client library samples for JavaScript (Beta)

These sample programs show how to use the JavaScript client libraries for Azure AI Projects in some common scenarios.

File Name Description
agents/agentBasic.js This sample demonstrates how to create an agent, create a conversation, generate responses using the agent, and clean up resources (a minimal sketch of this lifecycle follows the table).
agents/agentBasicWithDefaultProject.js This sample demonstrates the same basic agent lifecycle (create an agent, create a conversation, generate responses, and clean up resources) using the default project.
agents/agentCodeInterpreter.js This sample demonstrates how to create a response with the code interpreter tool to solve mathematical equations.
agents/tools/agentAgentToAgent.js This sample demonstrates how to create an agent with A2A tool capabilities, enable inter-agent communication, and process streaming responses.
agents/tools/agentAiSearch.js This sample demonstrates how to create an agent with Azure AI Search tool capabilities, send queries to search indexed content, and process streaming responses with citations.
agents/tools/agentBingCustomSearch.js This sample demonstrates how to create an agent with Bing Custom Search tool capabilities, search custom search instances, and process streaming responses with citations.
agents/tools/agentBingGrounding.js This sample demonstrates how to create an agent with Bing grounding tool capabilities, search the web for current information, and process streaming responses with source citations.
agents/tools/agentBrowserAutomation.js This sample demonstrates how to create an agent with the Browser Automation tool, perform web browsing tasks, and process streaming responses with browser automation events.
agents/tools/agentComputerUse.js This sample demonstrates how to create a Computer Use Agent that can interact with computer interfaces through simulated actions and screenshots.
agents/tools/agentFabric.js This sample demonstrates how to create an agent with Microsoft Fabric tool capabilities, send queries to Fabric data sources, and clean up resources.
agents/tools/agentFileSearch.js This sample demonstrates how to create a vector store, upload a file, create an agent with file search capabilities, generate responses, and clean up resources.
agents/tools/agentMcp.js This sample demonstrates how to create an agent with MCP tool capabilities, send requests that trigger MCP approval workflows, handle approval requests, and clean up resources.
agents/tools/agentMcpConnectionAuth.js This sample demonstrates how to create an agent with MCP tool capabilities using project connection authentication, send requests that trigger MCP approval workflows, handle approval requests, and clean up resources.
agents/tools/agentMemorySearch.js Create an agent with Memory Search, capture memories from a conversation, and retrieve them in a new conversation.
agents/tools/agentOpenApi.js This sample demonstrates how to create an agent with OpenAPI tool capabilities, load OpenAPI specifications from local assets, and process streaming responses that may include tool outputs.
agents/tools/agentOpenApiConnectionAuth.js Demonstrates how to create an OpenAPI-enabled agent that uses a project connection for authentication and stream responses that include tool invocation details.
agents/tools/agentSharepoint.js This sample demonstrates how to create an agent with SharePoint tool capabilities, search SharePoint content, and process streaming responses with citations.
agents/tools/agentWebSearch.js This sample demonstrates how to create an agent with web search capabilities, send a query to search the web, and clean up resources.
responses/responseBasic.js This sample demonstrates how to create responses with and without conversation context.
responses/responseStream.js This sample demonstrates how to create a non-streaming response and then use streaming for a follow-up response with conversation context.
agents/agentFunctionTool.js Demonstrates how to create an agent with function tools, handle function calls, and provide function results to get the final response.
agents/agentConversationCurd.js This sample demonstrates basic conversation CRUD operations.
agents/agentCurd.js This sample demonstrates basic agent CRUD operations.
agents/agentStreamEvents.js This sample demonstrates how to create an agent, create a conversation, and stream responses using the agent with event handling.
agents/agentStructureOutput.js This sample demonstrates how to create an agent with structured output, create a conversation, generate responses using the agent, and clean up resources.
agents/tools/agentFileSearchStream.js This sample demonstrates how to create an agent with file search capabilities, upload documents to a vector store, and stream responses that include file search results.
agents/tools/agentImageGeneration.js This sample demonstrates how to create an agent with ImageGenTool configured for image generation, make requests to generate images from text prompts, extract base64-encoded image data from the response, decode and save the generated image to a local file, and clean up created resources.
agents/tools/agentWebSearchStream.js This sample demonstrates how to create an agent with web search capabilities, send queries to search the web, and stream responses that include web search results.
agents/tools/computerUseUtil.js Shared utility functions and helper classes for the Computer Use Agent samples.
agents/workflowMultiAgent.js This sample demonstrates how to create a multi-agent workflow with a student agent and a teacher agent, and process streaming responses with workflow action events.
connections/connectionsBasics.js Given an AIProjectClient, this sample demonstrates how to enumerate the properties of all connections, get the properties of a default connection, and get the properties of a connection by its name.
conversations/conversationsBasics.js This sample demonstrates how to create, retrieve, update, list, and delete conversations using the OpenAI client.
datasets/datasetsBasics.js Given an AIProjectClient, this sample demonstrates how to enumerate the properties of datasets, upload files/folders, create datasets, manage dataset versions, and delete datasets.
deployments/deploymentsBasics.js Given an AIProjectClient, this sample demonstrates how to enumerate the properties of all deployments, get the properties of a deployment by its name, and delete a deployment.
evaluations/agentEvaluation.js This sample demonstrates how to create an agent, create an evaluation, run the evaluation with agent target, and clean up resources.
evaluations/agentResponseEvaluation.js This sample demonstrates how to create an agent, generate a response, create an evaluation, run the evaluation with agent response target, and clean up resources.
evaluations/agentResponseEvaluationWithFunctionTool.js This sample demonstrates how to create an agent with function tools, generate responses with tool calls, create an evaluation, run the evaluation with agent response target, and clean up resources.
evaluations/agentic_evaluators/coherenceEvaluation.js This sample demonstrates how to create an evaluation using the builtin coherence evaluator, run it with inline JSONL data, and retrieve results.
evaluations/agentic_evaluators/fluencyEvaluation.js This sample demonstrates how to create an evaluation with the fluency evaluator, run it with inline data, and retrieve results.
evaluations/agentic_evaluators/groundednessEvaluation.js This sample demonstrates how to use the openai.evals.* methods to create, get, and list evaluations and eval runs for the Groundedness evaluator using inline dataset content. Requires the FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_NAME environment variables.
evaluations/agentic_evaluators/intentResolutionEvaluation.js This sample demonstrates how to create an evaluation for the Intent Resolution evaluator with inline data and retrieve the results. Requires the FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_NAME environment variables.
evaluations/agentic_evaluators/relevanceEvaluation.js This sample demonstrates how to create an evaluation for the Relevance evaluator with inline data, run the evaluation, and retrieve results. Requires the FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_NAME environment variables.
evaluations/agentic_evaluators/responseCompletenessEvaluation.js This sample demonstrates how to create an evaluation for the Response Completeness evaluator with inline data, run the evaluation, and retrieve results. Requires the FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_NAME environment variables.
evaluations/agentic_evaluators/taskAdherenceEvaluation.js This sample demonstrates how to create an evaluation for the Task Adherence evaluator and run it with inline data containing various query/response scenarios. Requires the FOUNDRY_PROJECT_ENDPOINT and FOUNDRY_MODEL_NAME environment variables.
evaluations/agentic_evaluators/taskCompletionEvaluation.js This sample demonstrates how to create an evaluation, run it with inline data for Task Completion evaluator, and retrieve the results.
evaluations/agentic_evaluators/taskNavigationEfficiencyEvaluation.js This sample demonstrates how to create an evaluation for Task Navigation Efficiency evaluator, run it with inline data, and retrieve results.
evaluations/agentic_evaluators/toolCallAccuracyEvaluation.js This sample demonstrates how to create an evaluation for Tool Call Accuracy with inline data, run the evaluation, and retrieve results.
evaluations/agentic_evaluators/toolCallSuccessEvaluation.js This sample demonstrates how to create an evaluation for the Tool Call Success evaluator with inline data, run the evaluation, and retrieve results.
evaluations/agentic_evaluators/toolInputAccuracyEvaluation.js This sample demonstrates how to create an evaluation for Tool Input Accuracy using inline dataset content with various query and response formats.
evaluations/agentic_evaluators/toolOutputUtilizationEvaluation.js This sample demonstrates how to create an evaluation, run it with inline data for tool output utilization, and retrieve the results.
evaluations/agentic_evaluators/toolSelectionEvaluation.js This sample demonstrates how to create an evaluation for Tool Selection with inline data, run the evaluation, and retrieve results.
evaluations/continuousEvaluationRule.js This sample demonstrates how to create an agent, create an evaluation, create a continuous evaluation rule that runs on agent response completions, and clean up resources.
evaluations/evaluationAIAssisted.js This sample demonstrates how to create an evaluation using built-in AI-assisted evaluators (Similarity, ROUGE, METEOR, GLEU, F1, BLEU), upload a dataset, run the evaluation, monitor its progress, and clean up resources.
evaluations/evaluationBuiltInWithDatasetId.js This sample demonstrates how to create an evaluation using built-in evaluators (Violence, F1 Score, Coherence), upload a dataset file, run the evaluation using the dataset ID, monitor its progress, and clean up resources.
evaluations/evaluationBuiltInWithInlineData.js This sample demonstrates how to create an evaluation using built-in evaluators (Violence, F1 Score, Coherence), run the evaluation with inline data, monitor its progress, and clean up resources.
evaluations/evaluationClusterInsight.js This sample demonstrates how to create an evaluation with sentiment analysis, run it on a dataset, and generate cluster insights from the results.
evaluations/evaluationCompareInsight.js This sample demonstrates how to create an evaluation, run it multiple times, and then compare the runs using the insights API to generate comparison insights.
evaluations/evaluationGraders.js This sample demonstrates how to create an evaluation using OpenAI graders, upload a dataset file, run the evaluation, monitor its progress, and clean up resources.
evaluations/evaluatorsCatalog.js This sample demonstrates how to create prompt-based and code-based custom evaluators, retrieve them, update them, list them, and clean up resources.
evaluations/evaluatorsCatalogCode.js This sample demonstrates how to create a custom code-based evaluator, create an evaluation with inline data, run the evaluation, monitor its progress, and clean up resources.
evaluations/evaluatorsCatalogPromptBased.js This sample demonstrates how to create a custom prompt-based evaluator, create an evaluation with inline data, run the evaluation, monitor its progress, and clean up resources. The prompt definition must produce JSON output of the form { "result": ..., "reason": "<brief explanation for the score>" }, where result is an integer (for example, 1 to 5) for an ordinal metric, a float (for example, 0 to 1) for a continuous metric, or a boolean, depending on the metric type.
evaluations/intentResolutionEvaluation.js This sample demonstrates how to evaluate intent resolution with inline data, including simple string examples and complex conversation examples with tool calls.
evaluations/modelEvaluation.js This sample demonstrates how to create an evaluation with a custom data source configuration, run the evaluation against an Azure AI model with inline test data, monitor its progress, and retrieve the results.
evaluations/redTeamEvaluation.js This sample demonstrates how to create an agent, define testing criteria for red teaming, create evaluation taxonomies, run the red teaming evaluation, and clean up resources.
evaluations/scheduledDatasetEvaluation.js This sample demonstrates how to create a scheduled evaluation using built-in evaluators (Violence, F1 Score, Coherence), upload a dataset file, create a schedule to run the evaluation daily, and clean up resources.
files/filesBasic.js Using an OpenAI client, this sample demonstrates how to perform file operations: create, retrieve, read content, list, and delete.
finetuning/finetuningDpoJob.js Using an OpenAI client, this sample demonstrates how to create and cancel DPO (direct preference optimization) fine-tuning jobs.
finetuning/finetuningOssModelsSupervisedJob.js Using an OpenAI client, this sample demonstrates how to create and cancel supervised fine-tuning (SFT) jobs for OSS models.
finetuning/finetuningReinforcementJob.js Using an OpenAI client, this sample demonstrates how to create and cancel reinforcement fine-tuning jobs.
finetuning/finetuningSupervisedJob.js Using an OpenAI client, this sample demonstrates how to perform SFT operations: create, retrieve, list, pause, resume, list events, list checkpoints, deploy, infer, and cancel.
indexes/indexesBasics.js Given an AIProjectClient, this sample demonstrates how to enumerate the properties of all indexes, get the properties of an index by its name, and delete an index.
mcpTools/mcpToolsBasic.js This sample demonstrates how to interact with MCP tools using the MCP client library.
memories/memoriesBasics.js Create a memory store, add user memories, search for stored memories, and clean up resources using the Memory Store APIs in the Azure AI Projects client.
memories/memoryCrud.js Create, get, update, list, and delete a memory store using the Memory Store APIs in the Azure AI Projects client.
redTeam/redTeamBasic.js Given an AIProjectClient, this sample demonstrates how to create, get, and list Red Team scans.
responses/responseImageInput.js This sample demonstrates how to create a response with image input.
responses/responseStreamManager.js This sample demonstrates how to use the responses stream manager for streaming responses.
responses/responseStructureOutput.js This sample demonstrates how to create responses with structured output using a JSON schema.
telemetry/remoteTelemetry.js Demonstrates sending telemetry data with AIProjectClient and Azure Monitor OpenTelemetry.
telemetry/telemetryBasics.js Given the AIProjectClient, this sample shows how to get the connection string for telemetry.
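
Most of the agent samples above follow the same lifecycle as agents/agentBasic.js: create an agent, converse with it, and clean up. The sketch below shows that shape only; the client construction is real, but the agents/conversations/responses method names are illustrative placeholders rather than the library's confirmed API, so consult agents/agentBasic.js for the actual calls:

import { DefaultAzureCredential } from "@azure/identity";
import { AIProjectClient } from "@azure/ai-projects";

const project = new AIProjectClient(process.env.FOUNDRY_PROJECT_ENDPOINT, new DefaultAzureCredential());

// 1. Create an agent (placeholder method name; see agents/agentBasic.js for the real call).
const agent = await project.agents.create({
  model: process.env.FOUNDRY_MODEL_NAME,
  name: "my-agent",
  instructions: "You are a helpful assistant.",
});

// 2. Create a conversation and generate a response with the agent (placeholder method names).
const conversation = await project.conversations.create();
const response = await project.responses.create({
  agent: agent.name,
  conversation: conversation.id,
  input: "Hello!",
});
console.log(response.output_text); // placeholder property name

// 3. Clean up (placeholder method name).
await project.agents.delete(agent.name);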

Prerequisites

The sample programs are compatible with LTS versions of Node.js.

You need an Azure subscription to run these sample programs.

Samples retrieve credentials to access the service endpoint from environment variables. Alternatively, edit the source code to include the appropriate credentials. See each individual sample for details on which environment variables/credentials it requires to function.
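
For example, a typical sample builds its client along these lines (a minimal sketch; it assumes the samples use dotenv to load .env and the FOUNDRY_PROJECT_ENDPOINT variable described in the table above):

import "dotenv/config"; // load variables from .env, if present (assumes dotenv)
import { DefaultAzureCredential } from "@azure/identity";
import { AIProjectClient } from "@azure/ai-projects";

// FOUNDRY_PROJECT_ENDPOINT has the form:
// https://<account_name>.services.ai.azure.com/api/projects/<project_name>
const endpoint = process.env.FOUNDRY_PROJECT_ENDPOINT;
const project = new AIProjectClient(endpoint, new DefaultAzureCredential());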

Adapting the samples to run in the browser may require some additional consideration. For details, please see the package README.

Setup

To run the samples using the published version of the package:

  1. Install the dependencies using npm:

npm install

  2. Edit the file sample.env, adding the correct credentials to access the Azure service and run the samples. Then rename the file from sample.env to just .env. The sample programs will read this file automatically.

  3. Run whichever samples you like (note that some samples may require additional setup; see the table above):

node agents/agentBasic.js

Alternatively, run a single sample with the required environment variables set (setting up the .env file is not required if you do this), for example (cross-platform):

npx cross-env FOUNDRY_PROJECT_ENDPOINT="<azure ai project endpoint>" FOUNDRY_MODEL_NAME="<model deployment name>" node agents/agentBasic.js

Next Steps

Take a look at our API Documentation for more information about the APIs that are available in the clients.