Model Serving limits and regions

This article summarizes the limitations and region availability for Mosaic AI Model Serving and supported endpoint types.

Limitations

Mosaic AI Model Serving imposes default limits to ensure reliable performance. If you have feedback on these limits, please reach out to your Databricks account team.

The following table summarizes resource and payload limitations for model serving endpoints.

| Feature | Granularity | Limit |
| --- | --- | --- |
| Payload size | Per request | 16 MB |
| Queries per second (QPS) | Per workspace | 200 by default; can be increased to 3,000 or more by reaching out to your Databricks account team |
| Model execution duration | Per request | 120 seconds |
| CPU endpoint model memory usage | Per endpoint | 4 GB |
| GPU endpoint model memory usage | Per endpoint | Greater than or equal to the assigned GPU memory; depends on the GPU workload size |
| Provisioned concurrency | Per workspace | 200 concurrency; can be increased by reaching out to your Databricks account team |
| Overhead latency | Per request | Less than 50 milliseconds |
| Foundation Model APIs (pay-per-token) rate limits | Per workspace | DBRX Instruct: 1 query per second. Other chat and completion models: 2 queries per second by default. Embedding models: 300 embedding inputs per second by default. Reach out to your Databricks account team to increase these limits. |
| Foundation Model APIs (provisioned throughput) rate limits | Per workspace | Same as the Model Serving QPS limit listed above |
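
These limits surface to clients as ordinary HTTP behavior: oversized payloads are rejected, and exceeding the QPS limit typically returns HTTP 429. The following is a minimal client sketch that checks the 16 MB payload limit and backs off on 429 responses. It assumes the standard `serving-endpoints/<name>/invocations` REST path; the workspace URL, endpoint name, and token are hypothetical placeholders.

```python
import json
import time

import requests

# Placeholder values for illustration; substitute your own workspace URL,
# endpoint name, and access token.
WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"
ENDPOINT_NAME = "my-endpoint"  # hypothetical endpoint name
TOKEN = "<personal-access-token>"


def query_endpoint(payload: dict, max_retries: int = 5) -> dict:
    """Query a serving endpoint while respecting the limits in the table above."""
    body = json.dumps(payload).encode("utf-8")
    if len(body) > 16 * 1024 * 1024:
        raise ValueError("Payload exceeds the 16 MB per-request limit")

    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations"
    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    }
    for attempt in range(max_retries):
        # The 120 s timeout mirrors the per-request model execution duration limit.
        resp = requests.post(url, headers=headers, data=body, timeout=120)
        if resp.status_code == 429:
            # Workspace QPS limit hit: back off exponentially and retry.
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError("Still rate limited after retries; consider requesting a QPS increase")
```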

Model Serving endpoints are protected by access control and respect networking-related ingress rules configured on the workspace, such as IP allowlists and Private Link.

The following additional limitations apply:

  • It is possible for a workspace to be deployed in a supported region but be served by a control plane in a different region. These workspaces do not support Model Serving, and attempting to use it results in an error message stating that your workspace is not supported. Reach out to your Azure Databricks account team for more information.
  • Model Serving does not support init scripts.
  • By default, Model Serving does not support Private Link to external endpoints (such as Azure OpenAI). Support for this functionality is evaluated and implemented on a per-region basis. Reach out to your Azure Databricks account team for more information.

Foundation Model APIs limits

Note

As part of providing the Foundation Model APIs, Databricks may process your data outside of the region where your data originated, but not outside of the relevant geographical location.

The following are limits relevant to Foundation Model APIs workloads:

  • Provisioned throughput supports the HIPAA compliance profile and should be used for workloads that require compliance certifications. Pay-per-token workloads are neither HIPAA compliant nor compliance security profile compliant.
  • For Foundation Model APIs endpoints, only workspace admins can change governance settings, such as rate limits. To change rate limits, use the following steps (a scripted alternative is sketched after this list):
    1. Open the Serving UI in your workspace to see your serving endpoints.
    2. From the kebab menu on the Foundation Model APIs endpoint you want to edit, select View details.
    3. From the kebab menu on the upper-right side of the endpoint details page, select Change rate limit.
  • To use the DBRX model architecture for a provisioned throughput workload, your serving endpoint must be in one of the following regions:
    • eastus
    • eastus2
    • westus
    • centralus
    • westeurope
    • northeurope
    • australiaeast
    • canadacentral
    • brazilsouth
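
Workspace admins who prefer to script the rate-limit change rather than use the UI can do so through the Databricks REST API. The sketch below assumes the `PUT /api/2.0/serving-endpoints/{name}/rate-limits` route and the `rate_limits` payload shape shown; verify both against the current Databricks REST API reference before relying on them. The workspace URL, token, and endpoint name are hypothetical placeholders.

```python
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder; must belong to a workspace admin
ENDPOINT_NAME = "databricks-dbrx-instruct"  # hypothetical endpoint name

# Assumed route and payload shape; confirm against the Databricks REST API reference.
resp = requests.put(
    f"{WORKSPACE_URL}/api/2.0/serving-endpoints/{ENDPOINT_NAME}/rate-limits",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "rate_limits": [
            # Allow 100 calls per minute across the whole endpoint.
            {"calls": 100, "key": "endpoint", "renewal_period": "minute"},
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```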

Region availability

Note

If you require an endpoint in an unsupported region, reach out to your Azure Databricks account team.

For more information on regional availability of features, see Features with limited regional availability.