Azure DocumentDB uses Premium SSD v2 disks to deliver significantly higher performance for I/O-intensive workloads by decoupling storage capacity from IOPS and bandwidth settings.
With Premium SSD v2 storage on Azure DocumentDB, the maximum configurable IOPS and bandwidth are available by default, regardless of the storage capacity configured for the cluster. The IOPS and bandwidth capacity of the compute tier determines the achievable IOPS and bandwidth in the storage layer, with no need to scale up storage capacity.
You only need to select the required storage capacity; Azure DocumentDB automatically configures the highest achievable IOPS and bandwidth at no added cost, with no further intervention needed for optimal performance. The result is up to a 12x performance boost at no added cost.
Previously, a jump from 5,000 IOPS to 20,000 IOPS required increasing the disk size from 1 TB to 20 TB, even in the absence of higher storage needs. With Premium SSD v2, 20,000 IOPS can be achieved on the same 1 TB disk, as long as the cluster's compute tier has the capacity to push and sustain 20,000 IOPS. Moreover, Premium SSD v2 disks support up to 80,000 IOPS, a 4x increase over Premium SSD.
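The sizing contrast above can be sketched in a few lines. This is an illustrative calculation only: the size/IOPS pairs are copied from the example in this article, not a full Premium SSD tier table.

```python
# Illustrative sketch: with Premium SSD (v1), IOPS was coupled to disk size,
# so an IOPS target could force a much larger disk than the data required.
# Size/IOPS pairs below come from the article's example, not a full tier list.

def v1_disk_tb_for_iops(target_iops: int) -> int:
    """Smallest Premium SSD size (TB) from the example that reaches target_iops."""
    tiers = [(1, 5_000), (20, 20_000)]  # (size_tb, max_iops)
    for size_tb, max_iops in tiers:
        if max_iops >= target_iops:
            return size_tb
    raise ValueError("target exceeds listed tiers")

# With Premium SSD v2, a 1 TB disk can instead reach 20,000 IOPS,
# provided the compute tier can sustain it.
print(v1_disk_tb_for_iops(5_000))   # → 1 (1 TB was enough for 5,000 IOPS)
print(v1_disk_tb_for_iops(20_000))  # → 20 (20,000 IOPS required 20 TB)
```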
Guidance
The maximum performance of your Azure DocumentDB cluster now depends only on the compute tier, not the storage size. Start by choosing just the storage size the cluster needs, then select a compute tier that provides the required IOPS and throughput (MBps) for your workload. Tabulated below are the highest achievable and sustainable IOPS and bandwidth limits per compute tier.
IOPS and throughput caps
With Premium SSD v2 disks, the cluster is automatically configured with the upper-bound values tabulated below, at no added cost.
| Compute Tier | Max IOPS | Max bandwidth (MBps) |
|---|---|---|
| M30 (2 core) | 3,750 | 85 |
| M40 (4 core) | 6,400 | 145 |
| M50 (8 core) | 12,800 | 290 |
| M60 (16 core) | 25,600 | 600 |
| M80 (32 core) | 51,200 | 865 |
| M200 (64 core) | 80,000 | 1,200 |
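The table above can be turned into a simple selection helper when sizing a cluster. This is an illustrative sketch, not part of any Azure SDK; the tier names and limits are copied from the table.

```python
# Illustrative helper: pick the smallest compute tier whose Premium SSD v2
# limits cover a workload's IOPS and bandwidth targets.
# (name, max_iops, max_mbps), copied from the table above.
TIERS = [
    ("M30", 3_750, 85),
    ("M40", 6_400, 145),
    ("M50", 12_800, 290),
    ("M60", 25_600, 600),
    ("M80", 51_200, 865),
    ("M200", 80_000, 1_200),
]

def smallest_tier(iops: int, mbps: int) -> str:
    """Return the first (smallest) tier that meets both targets."""
    for name, max_iops, max_mbps in TIERS:
        if max_iops >= iops and max_mbps >= mbps:
            return name
    raise ValueError("workload exceeds the largest listed tier")

print(smallest_tier(20_000, 300))  # → M60, the first tier meeting both limits
```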
Prerequisites
- An Azure subscription
  - If you don't have an Azure subscription, create a free account
- An existing Azure DocumentDB cluster
  - If you don't have a cluster, create a new cluster
Use the Bash environment in Azure Cloud Shell. For more information, see Get started with Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Authenticate to Azure using Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use and manage extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
- Terraform 1.2.0 or later.
Create a cluster with high performance storage
Configure a cluster using Premium SSD v2 (high performance) storage as part of the cluster creation step.
Sign in to the Azure portal (https://portal.azure.com).
From the Azure portal menu or the Home page, select Create a resource.
On the New page, search for and select Azure DocumentDB.
On the Create Azure DocumentDB cluster page, in the Basics section, select the Configure option within the Cluster tier section.
On the Configure page, choose the cluster tier and storage size as required. Select the storage type as Premium SSD v2 to enable high-performance storage, then select Save to apply the changes.
Fill in the remaining details and then select Review + create.
Review the settings you provided, and then select Create. It takes a few minutes to create the cluster. Wait until the resource deployment is complete.
Finally, select Go to resource to navigate to the Azure DocumentDB cluster in the portal.
Open a new terminal.
Sign in to Azure CLI.
Create a new Bicep file to define your role definition. Name the file main.bicep.
Add this template to the file's content. Replace the `<cluster-name>`, `<location>`, `<username>`, and `<password>` placeholders with appropriate values.

```bicep
resource cluster 'Microsoft.DocumentDB/mongoClusters@2025-09-01' = {
  name: '<cluster-name>'
  location: '<location>'
  properties: {
    administrator: {
      userName: '<username>'
      password: '<password>'
    }
    serverVersion: '8.0'
    storage: {
      sizeGb: 32
      type: 'PremiumSSDv2'
    }
    compute: {
      tier: 'M30'
    }
    sharding: {
      shardCount: 1
    }
    highAvailability: {
      targetMode: 'Disabled'
    }
  }
}
```

Deploy the Bicep template using `az deployment group create`. Specify the name of the Bicep template and replace the `<resource-group>` placeholder with the name of your target Azure resource group.

```azurecli
az deployment group create \
  --resource-group "<resource-group>" \
  --template-file main.bicep
```

Wait for the deployment to complete. Review the output from the deployment.
Open a new terminal.
Sign in to Azure CLI.
Check your target Azure subscription.
```azurecli
az account show
```

Define your cluster in a new Terraform file. Name the file cluster.tf.

Add this resource configuration to the file's content. Replace the `<cluster-name>`, `<resource-group>`, and `<location>` placeholders with appropriate values.

```terraform
variable "admin_username" {
  type        = string
  description = "Administrator username for the cluster."
  sensitive   = true
}

variable "admin_password" {
  type        = string
  description = "Administrator password for the cluster."
  sensitive   = true
}

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_resource_group" "existing" {
  name = "<resource-group>"
}

resource "azurerm_mongo_cluster" "cluster" {
  name                   = "<cluster-name>"
  resource_group_name    = data.azurerm_resource_group.existing.name
  location               = "<location>"
  administrator_username = var.admin_username
  administrator_password = var.admin_password
  shard_count            = "1"
  compute_tier           = "M30"
  high_availability_mode = "Disabled"
  storage_size_in_gb     = "32"
  storage_type           = "PremiumSSDv2"
  version                = "8.0"
}
```

Tip

For more information on options for the `azurerm_mongo_cluster` resource, see the `azurerm` provider documentation in the Terraform Registry.

Initialize the Terraform deployment.
```azurecli
terraform init --upgrade
```

Create an execution plan and save it to a file named cluster.tfplan. Provide values when prompted for the `admin_username` and `admin_password` variables.

```azurecli
ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv) terraform plan --out "cluster.tfplan"
```

Note

This command sets the `ARM_SUBSCRIPTION_ID` environment variable temporarily. This setting is required for the `azurerm` provider starting with version 4.0. For more information, see subscription ID in `azurerm`.

Apply the execution plan to deploy the cluster to Azure.

```azurecli
ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv) terraform apply "cluster.tfplan"
```

Wait for the deployment to complete. Review the output from the deployment.
Open a new terminal.
Sign in to Azure CLI.
Create a new JSON file named cluster.json.
Add this document to the file's content. Replace the `<location>`, `<username>`, and `<password>` placeholders with appropriate values.

```json
{
  "location": "<location>",
  "properties": {
    "administrator": {
      "userName": "<username>",
      "password": "<password>"
    },
    "serverVersion": "8.0",
    "storage": {
      "sizeGb": 32,
      "type": "PremiumSSDv2"
    },
    "compute": {
      "tier": "M30"
    },
    "sharding": {
      "shardCount": 1
    },
    "highAvailability": {
      "targetMode": "Disabled"
    }
  }
}
```

Use the `az rest` Azure CLI command to create a new cluster with the configuration specified in the JSON file. Specify the name of the JSON file as the body of the request and replace the following placeholders:

| Placeholder | Description |
|---|---|
| `<subscription-id>` | The unique identifier of your target Azure subscription |
| `<resource-group>` | The name of your target Azure resource group |
| `<cluster-name>` | The unique name of your new Azure DocumentDB cluster |

```azurecli
az rest \
  --method "PUT" \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/mongoClusters/<cluster-name>?api-version=2025-09-01" \
  --body @cluster.json
```

Tip

Use `az account show` to get the unique identifier of your target Azure subscription.

Wait for the deployment to complete. Review the output from the deployment.
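If you script many cluster deployments, the request body can be generated rather than edited by hand. A minimal sketch, assuming the payload shape shown in the JSON document above (the helper itself is illustrative, not an Azure SDK function):

```python
import json

def cluster_body(location: str, username: str, password: str,
                 tier: str = "M30", size_gb: int = 32) -> dict:
    """Build the az rest request body used above.
    Mirrors the JSON document from this article; not an SDK helper."""
    return {
        "location": location,
        "properties": {
            "administrator": {"userName": username, "password": password},
            "serverVersion": "8.0",
            "storage": {"sizeGb": size_gb, "type": "PremiumSSDv2"},
            "compute": {"tier": tier},
            "sharding": {"shardCount": 1},
            "highAvailability": {"targetMode": "Disabled"},
        },
    }

# Write cluster.json for use with: az rest --body @cluster.json
with open("cluster.json", "w") as f:
    json.dump(cluster_body("<location>", "<username>", "<password>"), f, indent=2)
```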
Current limitations of high performance storage (Premium SSD v2 storage)
Customer-managed keys (CMK) aren't supported with Premium SSD v2 storage.
Storage capacity settings on Premium SSD v2 disks can be adjusted up to four times within a 24-hour period. For newly created clusters, a maximum of three storage capacity adjustments can be made during the first 24 hours.
Replication from Premium SSD to Premium SSD v2 is supported only for migration scenarios. Ongoing replication isn't supported because Premium SSD can't match the performance of Premium SSD v2 and may result in higher latency.
Online migration from Premium SSD to Premium SSD v2 isn't currently supported. To upgrade from Premium SSD to Premium SSD v2, you can perform a point-in-time restore to a new cluster that uses Premium SSD v2. Alternatively, you can create a read replica from a Premium SSD cluster to a Premium SSD v2 cluster and promote it after replication completes.
If you perform any operation that requires disk hydration, the following error might occur. This error occurs because Premium SSD v2 disks don't support any operation while the disk is still hydrating.
- Error message: Unable to complete the operation because the disk is still being hydrated. Retry after some time.
- Operations that can trigger this behavior include:
- Performing compute scaling, storage scaling, or enabling high availability (HA) in quick succession.
  - This also includes service-triggered failovers to guarantee high availability.
- Using point-in-time restore (PITR) to create a new cluster and immediately enabling HA while the disk is still being hydrated.
- As a best practice, when using Premium SSD v2 disks, space out these operations or complete them sequentially, ensuring disk hydration finishes between actions.
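When automating these operations, the hydration error above can be handled with a simple wait-and-retry guard between actions. A hedged sketch: the exception class, retry policy, and `enable_ha` stand-in below are assumptions for illustration, not a documented service or SDK behavior.

```python
import time

class DiskHydratingError(Exception):
    """Stand-in for the 'disk is still being hydrated' failure above."""

def run_with_hydration_retry(operation, attempts: int = 5, wait_s: float = 1.0):
    """Retry an operation that may fail while a Premium SSD v2 disk hydrates.
    Illustrative only: real code would inspect the service's error message."""
    for attempt in range(attempts):
        try:
            return operation()
        except DiskHydratingError:
            if attempt == attempts - 1:
                raise
            time.sleep(wait_s * (2 ** attempt))  # back off between actions

# Example: a hypothetical operation that succeeds once hydration finishes.
state = {"calls": 0}
def enable_ha():
    state["calls"] += 1
    if state["calls"] < 3:
        raise DiskHydratingError()
    return "HA enabled"

print(run_with_hydration_retry(enable_ha, wait_s=0.01))  # → HA enabled
```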