Overview: Deploy AI models in Azure AI Studio

The model catalog in Azure AI Studio is the hub where you discover and use a wide range of models for building generative AI applications. Before a model can receive inference requests, it must be deployed; the process of interacting with a deployed model is called inferencing. Azure AI Studio offers a comprehensive suite of deployment options for these models, depending on your needs and model requirements.
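
As a concrete illustration of inferencing, the sketch below sends a chat completion request to an already-deployed model. It's a minimal example, assuming the azure-ai-inference Python package and two hypothetical environment variables, AZURE_INFERENCE_ENDPOINT and AZURE_INFERENCE_KEY, that hold your deployment's endpoint URL and key:

```python
# A minimal sketch of sending an inference (chat completion) request to a
# deployed model, using the azure-ai-inference package
# (pip install azure-ai-inference). Endpoint and key are placeholders taken
# from your own deployment's details page in Azure AI Studio.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize the deployment options in Azure AI Studio."),
    ],
)
print(response.choices[0].message.content)
```

This client library targets endpoints that implement the Azure AI model inference API, so the same calling code can work across several of the deployment options described below.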

Deploying models

Deployment options vary depending on the model type:

  • Azure OpenAI models: The latest OpenAI models, with enterprise features from Azure.
  • Models as a Service (MaaS) models: These models don't require compute quota from your subscription. You deploy them through a serverless API and are billed per token in a pay-as-you-go fashion.
  • Open and custom models: The model catalog offers access to a large variety of open-access models across modalities. You can host open models in your own subscription on managed infrastructure, with virtual machines and the number of instances configured for capacity management (see the sketch after this list). The catalog includes a wide range of models from Azure OpenAI, Hugging Face, and NVIDIA.
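
As an illustration of that last option, the sketch below hosts a catalog model on managed compute using the azure-ai-ml Python package. It's an assumption-laden sketch, not a definitive recipe: the subscription, project, endpoint name, model ID, and VM size are all placeholders you'd replace with your own values.

```python
# A minimal sketch of hosting an open model on managed compute, assuming the
# azure-ai-ml package (pip install azure-ai-ml azure-identity). All names and
# IDs below are illustrative placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

# Connect to the AI project that will own the endpoint.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<ai-project-name>",
)

# Create the endpoint that will receive inference requests.
endpoint = ManagedOnlineEndpoint(name="my-open-model-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy a catalog model to it, choosing the VM size and instance count
# (the capacity management mentioned above). The model ID is illustrative;
# copy the real one from the model card in the catalog.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-open-model-endpoint",
    model="azureml://registries/HuggingFace/models/<model-name>/versions/<version>",
    instance_type="Standard_NC24ads_A100_v4",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```

Because you pick the VM size and instance count yourself, billing for this option is based on compute core hours rather than tokens, as the comparison below shows.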

Azure AI Studio offers four different deployment options:

  • Azure OpenAI Service
    ◦ Models that can be deployed: Azure OpenAI models
    ◦ Deployment resource: Azure OpenAI service
    ◦ Best suited when: you plan to use only OpenAI models
    ◦ Billing basis: token usage
    ◦ Deployment instructions: Deploy to Azure OpenAI Service

  • Azure AI model inference service
    ◦ Models that can be deployed: Azure OpenAI models and Models as a Service
    ◦ Deployment resource: Azure AI services
    ◦ Best suited when: you plan to take advantage of the flagship models in the Azure AI catalog, including OpenAI
    ◦ Billing basis: token usage
    ◦ Deployment instructions: Deploy to Azure AI model inference

  • Serverless API
    ◦ Models that can be deployed: Models as a Service
    ◦ Deployment resource: AI project
    ◦ Best suited when: you plan to use a single model from a specific provider (excluding OpenAI)
    ◦ Billing basis: token usage¹
    ◦ Deployment instructions: Deploy to Serverless API

  • Managed compute
    ◦ Models that can be deployed: open and custom models
    ◦ Deployment resource: AI project
    ◦ Best suited when: you plan to use open models and have enough compute quota available in your subscription
    ◦ Billing basis: compute core hours²
    ◦ Deployment instructions: Deploy to Managed compute

¹ A minimal endpoint infrastructure is billed per minute. You aren't billed for the infrastructure that hosts the model in pay-as-you-go deployments. After you delete the endpoint, no further charges accrue.

² Billing is per minute, depending on the product tier and the number of instances used in the deployment, from the moment of creation. After you delete the endpoint, no further charges accrue.
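
Serverless API deployments can also be created programmatically. The sketch below, assuming the azure-ai-ml Python package, creates one; the endpoint name is a hypothetical placeholder, and the exact model ID comes from the model card in the catalog:

```python
# A minimal sketch of creating a serverless API (pay-as-you-go) deployment
# with the azure-ai-ml package. Names and the model ID are illustrative.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ServerlessEndpoint
from azure.identity import DefaultAzureCredential

# Connect to the AI project (a serverless API is a project-level resource).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<ai-project-name>",
)

# The model ID below is a placeholder; copy the real one from the catalog.
endpoint = ServerlessEndpoint(
    name="my-serverless-endpoint",
    model_id="azureml://registries/azureml/models/<model-name>/versions/<version>",
)
created = ml_client.serverless_endpoints.begin_create_or_update(endpoint).result()
print(created.scoring_uri)
```

Since the deployment is serverless, you're billed per token plus the minimal per-minute endpoint infrastructure noted in footnote ¹; deleting the endpoint stops further charges.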

Tip

To learn more about how to track costs, see Monitor costs for models offered through Azure Marketplace.

How should I think about deployment options?

Azure AI Studio encourages customers to explore the deployment options and pick the one that best suits their business and technical needs. In general, you can use the following thought process:

  1. Start with the deployment option that has the broadest scope. This allows you to iterate and prototype faster in your application without having to rebuild your architecture each time you decide to change something. The Azure AI model inference service is a deployment target that supports all the flagship models in the Azure AI catalog, including the latest innovations from Azure OpenAI (see the sketch after this list).

  2. When you're looking to use a specific model:

    1. If you're interested in OpenAI models, use the Azure OpenAI Service, which is designed for them and offers a wide range of capabilities around them.

    2. If you're interested in a particular model from Models as a Service, and you don't expect to use any other type of model, use Serverless API endpoints. They allow deployment of a single model under a unique endpoint URL and set of keys.

  3. When your model isn't available in Models as a Service and you have compute quota available in your subscription, use Managed compute, which supports deployment of open and custom models. It also allows a high level of customization of the inference server, protocols, and deployment configuration.
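
To make step 1 concrete: the Azure AI model inference service exposes many models behind a single endpoint, and each request selects one with the model parameter, so swapping models doesn't force changes to URLs, keys, or architecture. A minimal sketch, again assuming the azure-ai-inference package; the endpoint, key, and model deployment names are placeholders:

```python
# A minimal sketch of the "start broad" approach: one Azure AI model inference
# endpoint serving several flagship models, selected per request with `model`.
# Endpoint, key, and model names are placeholders for your own deployments.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_SERVICES_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_AI_SERVICES_KEY"]),
)

prompt = [UserMessage(content="Explain pay-as-you-go billing in one sentence.")]

# Prototype against one model, then switch by changing only the model name;
# the endpoint URL, keys, and client code stay the same.
for model_name in ("gpt-4o", "Mistral-large"):  # illustrative deployment names
    response = client.complete(messages=prompt, model=model_name)
    print(model_name, "->", response.choices[0].message.content)
```

If you later settle on a single non-OpenAI model, the same request shape carries over to a Serverless API endpoint, which serves one model and doesn't need the model parameter.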

Tip

Each deployment option may offer different capabilities in terms of networking, security, and additional features such as content safety. Review the documentation for each option to understand its limitations.