# Azure AI Inference client library samples for JavaScript (Beta)
These sample programs show how to use the JavaScript client libraries for Azure AI Inference in some common scenarios.
| File Name | Description |
| --- | --- |
| chatCompletions.js | Get chat completions. |
| chatCompletionsWithStructuredOutput.js | Get chat completions with structured output. |
| embeddings.js | Get embeddings. |
| getModelInfo.js | Get model info. |
| imageEmbeddings.js | Get image embeddings. |
| imageFileCompletions.js | Get chat completions with an image file. |
| streamChatCompletions.js | Get chat completions with streaming. |
| streamingToolCall.js | Get chat completions with streaming and a function call. |
| telemetry.js | Get instrumentation with OpenTelemetry. |
| telemetryWithToolCall.js | Get chat completions with a function call and instrumentation. |
| toolCall.js | Get chat completions with a function call. |
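The chat-based samples above share one request pattern. The following is a minimal sketch of that pattern, not a copy of any sample: it assumes the published `@azure-rest/ai-inference` and `@azure/core-auth` packages and the `ENDPOINT`, `KEY`, and `MODEL_NAME` environment variables used elsewhere in this README; `buildChatRequest` and the prompt text are illustrative helpers, not part of the SDK.

```javascript
// Build the request body shape used by the /chat/completions route.
// (buildChatRequest is a hypothetical helper for this sketch.)
function buildChatRequest(userPrompt, model) {
  return {
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: userPrompt },
      ],
      model,
    },
  };
}

async function main() {
  // Lazy require so buildChatRequest stays usable without the SDK installed.
  const ModelClient = require("@azure-rest/ai-inference").default;
  const { AzureKeyCredential } = require("@azure/core-auth");

  const client = ModelClient(
    process.env.ENDPOINT,
    new AzureKeyCredential(process.env.KEY)
  );
  const response = await client
    .path("/chat/completions")
    .post(buildChatRequest("How many feet are in a mile?", process.env.MODEL_NAME));

  if (response.status !== "200") {
    throw new Error("Request failed with status " + response.status);
  }
  console.log(response.body.choices[0].message.content);
}
```

Each sample in the table varies this skeleton: the streaming samples read the response as a stream, and the tool-call samples add a `tools` array to the body.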
## Prerequisites
The sample programs are compatible with LTS versions of Node.js.
You need an Azure subscription to run these sample programs.
Samples retrieve credentials to access the service endpoint from environment variables. Alternatively, edit the source code to include the appropriate credentials. See each individual sample for details on which environment variables/credentials it requires to function.
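As a sketch of that convention, a sample might read and validate its environment variables before constructing a client. The variable names below (`ENDPOINT`, `KEY`, `MODEL_NAME`) are taken from the cross-platform example later in this README; `requireEnv` and `loadConfig` are hypothetical helpers, not part of the samples themselves.

```javascript
// Fail fast with a clear message when a required variable is missing.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Collect everything a chat sample needs before creating a client.
function loadConfig() {
  return {
    endpoint: requireEnv("ENDPOINT"),
    key: requireEnv("KEY"),
    modelName: requireEnv("MODEL_NAME"),
  };
}
```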
Adapting the samples to run in the browser may require some additional consideration. For details, please see the package README.
## Setup
To run the samples using the published version of the package:
- Install the dependencies using `npm`:

  ```bash
  npm install
  ```
- Edit the file `sample.env`, adding the correct credentials to access the Azure service and run the samples. Then rename the file from `sample.env` to just `.env`. The sample programs will read this file automatically.

- Run whichever samples you like (note that some samples may require additional setup, see the table above):
  ```bash
  node chatCompletions.js
  ```
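For reference, a filled-in `.env` file for the chat samples might look like the fragment below. The variable names are taken from the cross-platform example later in this section; confirm them against the `sample.env` file shipped with the package, since other samples may require additional variables.

```
ENDPOINT="<endpoint>"
KEY="<key>"
MODEL_NAME="<model name>"
```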
Alternatively, run a single sample with the correct environment variables set (setting up the `.env` file is not required if you do this), for example (cross-platform):

```bash
npx dev-tool run vendored cross-env ENDPOINT="<endpoint>" KEY="<key>" MODEL_NAME="<model name>" node chatCompletions.js
```
## Next Steps
Take a look at our API Documentation for more information about the APIs that are available in the clients.