Region availability for models in serverless API endpoints | Azure AI Studio

Important

Some of the features described in this article might only be available in preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In this article, you learn about which regions are available for each of the models supporting serverless API endpoint deployments.

Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
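Once deployed, a serverless API endpoint is consumed over plain HTTPS with a key. The sketch below shows what building such a request can look like using only the Python standard library. The hostname, URL path (`/v1/chat/completions`), payload shape, and header names are illustrative assumptions based on the common pattern for these endpoints; always copy the exact scoring URL and key from your endpoint's details page.

```python
import json
import urllib.request

def build_chat_request(endpoint_url: str, api_key: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an HTTP request for a serverless chat endpoint.

    The URL path and header names below follow a common pattern for
    serverless API endpoints, but treat them as assumptions and verify
    them against your endpoint's details pane.
    """
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{endpoint_url}/v1/chat/completions",  # hypothetical path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # key from the endpoint's Keys pane
        },
        method="POST",
    )

# Example with a placeholder endpoint hostname (hypothetical, not from this article):
req = build_chat_request(
    "https://my-mistral-small.eastus2.models.ai.azure.com",
    "<your-api-key>",
    [{"role": "user", "content": "Hello"}],
)
# Sending is intentionally omitted: urllib.request.urlopen(req) would perform the call.
```

Because the endpoint is just an HTTPS URL, this code runs the same way regardless of which Azure region the calling infrastructure lives in.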

Region availability

The availability of serverless API endpoints for select models is listed in the following tables:

Cohere models

| Region | Cohere Command R | Cohere Command R+ | Cohere Embed v3 |
| --- | --- | --- | --- |
| East US 2 | Available | Available | Available |
| Sweden Central | Available | Available | Available |

Mistral models

| Region | Mistral-Small | Mistral-Large |
| --- | --- | --- |
| East US 2 | Available | Available |
| France Central | Not available | Available |
| Sweden Central | Available | Available |

Meta Llama models

| Region | Llama-2 | Llama-3 |
| --- | --- | --- |
| East US 2 | Available | Available |
| West US 3 | Available | Not available |

Nixtla TimeGEN-1 model

| Region | Nixtla TimeGEN-1 |
| --- | --- |
| East US | Available |
| East US 2 | Available |
| North Central US | Available |
| South Central US | Available |
| West US | Available |
| West US 3 | Available |
| Sweden Central | Available |

Phi 3 models

| Region | Phi-3-mini | Phi-3-medium |
| --- | --- | --- |
| East US 2 | Available | Available |
| Sweden Central | Available | Available |

Alternatives to region availability

If most of your infrastructure is in a particular region and you want to take advantage of models available only as serverless API endpoints, you can create a hub or project in a supported region and then consume the endpoint from another region.

Read Consume serverless API endpoints from a different hub or project to learn how to configure an existing serverless API endpoint in a different hub or project than the one where it was deployed.