High performance storage in Azure DocumentDB

Azure DocumentDB uses Premium SSD v2 disks to deliver significantly higher performance for I/O-intensive workloads by decoupling storage capacity from IOPS and bandwidth settings.

With Premium SSD v2 storage on Azure DocumentDB, the maximum configurable IOPS and bandwidth settings are available by default regardless of the storage capacity configured for the cluster. The IOPS and bandwidth capacity of the Compute tier determines the achievable IOPS and bandwidth in the storage layer without the need to scale up storage capacity.

Only the required storage capacity needs to be selected; the highest achievable IOPS and bandwidth are configured automatically by Azure DocumentDB. No additional user intervention is needed to ensure the cluster is set up for optimal performance. The result is a 12x performance boost at no added cost.

Previously, a jump from 5,000 IOPS to 20,000 IOPS required increasing the size of the disk from 1 TB to 20 TB, even in the absence of higher storage needs. With Premium SSD v2, 20,000 IOPS can be achieved on the same 1 TB disk so long as the cluster's compute tier has the capacity to push and maintain 20,000 IOPS. Moreover, Premium SSD v2 disks can support up to 80,000 IOPS, a 4x increase over Premium SSD.

Guidance

The maximum performance for your Azure DocumentDB cluster is now dependent only on the compute tier, not the storage size. Start by choosing just the desired storage size needed for the cluster, then select a compute tier that provides the required IOPS and throughput (MBps) for your workload. Tabulated below are the highest achievable and sustainable IOPS and bandwidth limits per compute tier.

IOPS and throughput caps

With Premium SSD v2 disks, the cluster is automatically configured with the upper-bound values tabulated below, at no added cost.

Compute tier    Max IOPS    Max bandwidth (MBps)
M30 (2 core)    3,750       85
M40 (4 core)    6,400       145
M50 (8 core)    12,800      290
M60 (16 core)   25,600      600
M80 (32 core)   51,200      865
M200 (64 core)  80,000      1,200
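The table can be used to pick the smallest compute tier that satisfies a workload's performance target. A minimal sketch of that lookup, using the limits above (the `smallest_tier` helper is illustrative, not part of any Azure SDK):

```python
# Max IOPS and max bandwidth (MBps) per compute tier, from the table above.
TIER_LIMITS = {
    "M30": (3_750, 85),
    "M40": (6_400, 145),
    "M50": (12_800, 290),
    "M60": (25_600, 600),
    "M80": (51_200, 865),
    "M200": (80_000, 1_200),
}

def smallest_tier(required_iops: int, required_mbps: int) -> str:
    """Return the smallest compute tier meeting both performance targets."""
    for tier, (max_iops, max_mbps) in TIER_LIMITS.items():
        if max_iops >= required_iops and max_mbps >= required_mbps:
            return tier
    raise ValueError("No single compute tier meets the requested targets")

# Example: a workload needing 20,000 IOPS and 400 MBps fits on M60.
print(smallest_tier(20_000, 400))  # M60
```

Because storage performance no longer depends on disk size, the storage capacity can then be sized purely for data volume.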

Prerequisites

  • An Azure subscription

    • If you don't have an Azure subscription, create a free account
  • An existing Azure DocumentDB cluster

Create a cluster with high performance storage

Configure a cluster using Premium SSD v2 (high performance) storage as part of the cluster creation step.

  1. Sign in to the Azure portal (https://portal.azure.com).

  2. From the Azure portal menu or the Home page, select Create a resource.

  3. On the New page, search for and select Azure DocumentDB.

    Screenshot of the Azure portal search feature to locate Azure DocumentDB.

  4. On the Create Azure DocumentDB cluster page and within the Basics section, select the Configure option within the Cluster tier section.

    Screenshot of the options available to configure an Azure DocumentDB cluster.

  5. On the Configure page, choose the cluster tier and storage size as required. Select the storage type as Premium SSD v2 to enable high-performance storage, then select Save to apply the changes.

    Screenshot of the configuration option specific to premium SSD v2 disks in Azure DocumentDB.

  6. Fill in the remaining details and then select Review + create.

  7. Review the settings you provided, and then select Create. It takes a few minutes to create the cluster. Wait until the resource deployment is complete.

  8. Finally, select Go to resource to navigate to the Azure DocumentDB cluster in the portal.

Screenshot of the deployment completion step with an option to navigate to the new Azure DocumentDB cluster.

You can also create a cluster with high performance storage by deploying a Bicep template with the Azure CLI.

  1. Open a new terminal.

  2. Sign in to Azure CLI.

  3. Create a new Bicep file to define your cluster. Name the file main.bicep.

  4. Add this template to the file's content. Replace the <cluster-name>, <location>, <username>, and <password> placeholders with appropriate values.

    resource cluster 'Microsoft.DocumentDB/mongoClusters@2025-09-01' = {
      name: '<cluster-name>'
      location: '<location>'
      properties: {
        administrator: {
          userName: '<username>'
          password: '<password>'
        }
        serverVersion: '8.0'
        storage: {
          sizeGb: 32
          type: 'PremiumSSDv2'
        }
        compute: {
          tier: 'M30'
        }
        sharding: {
          shardCount: 1
        }
        highAvailability: {
          targetMode: 'Disabled'
        }
      }
    }
    
  5. Deploy the Bicep template using az deployment group create. Specify the name of the Bicep template and replace the <resource-group> placeholder with the name of your target Azure resource group.

    az deployment group create \
        --resource-group "<resource-group>" \
        --template-file main.bicep
    
  6. Wait for the deployment to complete. Review the output from the deployment.
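A deployment fails if any of the `<...>` placeholders in main.bicep were left unreplaced. A quick local pre-flight check can catch them before running az deployment group create (this is a hypothetical helper, not an Azure tool):

```python
import re

def unresolved_placeholders(text: str) -> list[str]:
    """Return any <placeholder> tokens still present in the template text."""
    return re.findall(r"<[a-z-]+>", text)

# Example: this template fragment still has two placeholders left.
sample = "name: '<cluster-name>'\nlocation: 'eastus'\npassword: '<password>'"
print(unresolved_placeholders(sample))  # ['<cluster-name>', '<password>']
```

Run the check against the contents of main.bicep; an empty list means all placeholders were replaced.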

Alternatively, you can define and deploy the cluster with Terraform.

  1. Open a new terminal.

  2. Sign in to Azure CLI.

  3. Check your target Azure subscription.

    az account show
    
  4. Define your cluster in a new Terraform file. Name the file cluster.tf.

  5. Add this resource configuration to the file's content. Replace the <cluster-name>, <resource-group>, and <location> placeholders with appropriate values.

    variable "admin_username" {
      type        = string
      description = "Administrator username for the cluster."
      sensitive   = true
    }
    
    variable "admin_password" {
      type        = string
      description = "Administrator password for the cluster."
      sensitive   = true
    }
    
    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~> 4.0"
        }
      }
    }
    
    provider "azurerm" {
      features {}
    }
    
    data "azurerm_resource_group" "existing" {
      name = "<resource-group>"
    }
    
    resource "azurerm_mongo_cluster" "cluster" {
      name                   = "<cluster-name>"
      resource_group_name    = data.azurerm_resource_group.existing.name
      location               = "<location>"
      administrator_username = var.admin_username
      administrator_password = var.admin_password
      shard_count            = 1
      compute_tier           = "M30"
      high_availability_mode = "Disabled"
      storage_size_in_gb     = 32
      storage_type           = "PremiumSSDv2"
      version                = "8.0"
    }
    

    Tip

    For more information on options using the azurerm_mongo_cluster resource, see azurerm provider documentation in Terraform Registry.

  6. Initialize the Terraform deployment.

    terraform init -upgrade
    
  7. Create an execution plan and save it to a file named cluster.tfplan. Provide values when prompted for the admin_username and admin_password variables.

    ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv) terraform plan -out "cluster.tfplan"
    

    Note

    This command sets the ARM_SUBSCRIPTION_ID environment variable temporarily. This setting is required for the azurerm provider starting with version 4.0. For more information, see subscription ID in azurerm.

  8. Apply the execution plan to deploy the cluster to Azure.

    ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv) terraform apply "cluster.tfplan"
    
  9. Wait for the deployment to complete. Review the output from the deployment.

Alternatively, you can create the cluster by calling the Azure REST API with the az rest Azure CLI command.

  1. Open a new terminal.

  2. Sign in to Azure CLI.

  3. Create a new JSON file named cluster.json.

  4. Add this document to the file's content. Replace the <location>, <username>, and <password> placeholders with appropriate values.

    {
      "location": "<location>",
      "properties": {
        "administrator": {
          "userName": "<username>",
          "password": "<password>"
        },
        "serverVersion": "8.0",
        "storage": {
          "sizeGb": 32,
          "type": "PremiumSSDv2"
        },
        "compute": {
          "tier": "M30"
        },
        "sharding": {
          "shardCount": 1
        },
        "highAvailability": {
          "targetMode": "Disabled"
        }
      }
    }
    
  5. Use the az rest Azure CLI command to create a new cluster with the configuration specified in the JSON file. Specify the name of the JSON file as the body of the request and replace the following placeholders:

    Placeholder       Description
    <subscription-id> The unique identifier of your target Azure subscription
    <resource-group>  The name of your target Azure resource group
    <cluster-name>    The unique name of your new Azure DocumentDB cluster
    az rest \
        --method "PUT" \
        --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/mongoClusters/<cluster-name>?api-version=2025-09-01" \
        --body @cluster.json
    

    Tip

    Use az account show to get the unique identifier of your target Azure subscription.

  6. Wait for the deployment to complete. Review the output from the deployment.
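Because the management URL for the az rest call is assembled by hand, it's easy to mistype a path segment or forget a required field in cluster.json. A small sketch that builds the cluster resource URL and sanity-checks the request body (the property names come from the JSON document above; the helper functions are illustrative):

```python
import json

API_VERSION = "2025-09-01"

def cluster_url(subscription_id: str, resource_group: str, cluster_name: str) -> str:
    """Build the ARM resource URL for an Azure DocumentDB (mongoClusters) cluster."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.DocumentDB/mongoClusters/{cluster_name}"
        f"?api-version={API_VERSION}"
    )

def check_body(body: dict) -> None:
    """Fail fast if the request body is missing required top-level fields."""
    assert "location" in body, "body must set a location"
    props = body.get("properties", {})
    for field in ("administrator", "serverVersion", "storage", "compute"):
        assert field in props, f"properties must include {field}"

# Example body, matching the shape of cluster.json above.
body = json.loads(
    '{"location": "eastus", "properties": {"administrator": {}, '
    '"serverVersion": "8.0", "storage": {"sizeGb": 32, "type": "PremiumSSDv2"}, '
    '"compute": {"tier": "M30"}}}'
)
check_body(body)
print(cluster_url("0000", "rg-docs", "my-cluster"))
```

The printed URL can be pasted into the --url argument of az rest after substituting real subscription, resource group, and cluster names.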

Current limitations of high performance storage (Premium SSD v2 storage)

  • Customer-managed keys (CMK) aren't supported with Premium SSD v2 storage.

  • Storage capacity settings on Premium SSD v2 disks can be adjusted up to four times within a 24-hour period. For newly created clusters, a maximum of three storage capacity adjustments can be made during the first 24 hours. 

  • Replication from Premium SSD to Premium SSD v2 is supported only for migration scenarios. Ongoing replication isn't supported because Premium SSD can't match the performance of Premium SSD v2 and may result in higher latency.

  • Online migration from Premium SSD to Premium SSD v2 isn't currently supported. To upgrade from Premium SSD to Premium SSD v2, you can perform a point-in-time restore to a new server using Premium SSD v2. Alternatively, you can create a read replica from a Premium SSD server to a Premium SSD v2 server and promote it after replication completes.

  • If you perform any operation that requires disk hydration, the following error might occur. This error occurs because Premium SSD v2 disks don't support any operation while the disk is still hydrating.

    • Error message: Unable to complete the operation because the disk is still being hydrated. Retry after some time.
    • Operations that can trigger this behavior include:
      • Performing compute scaling, storage scaling, or enabling high availability (HA) in quick succession.
      • Service-triggered failovers that the service performs to guarantee high availability.
      • Using point-in-time restore (PITR) to create a new cluster and immediately enabling high availability while the disk is still being hydrated.
    • As a best practice, when using Premium SSD v2 disks, space out these operations or complete them sequentially, ensuring disk hydration finishes between actions.
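When automating management operations against a cluster on Premium SSD v2, the hydration error above can be handled by waiting and retrying, in line with the best practice of spacing operations out. A minimal sketch (the `enable_ha` operation and the error plumbing are stand-ins; real management calls would go through the Azure SDK or az rest):

```python
import time

HYDRATION_ERROR = "Unable to complete the operation because the disk is still being hydrated"

def run_with_retry(operation, attempts=5, initial_delay=1.0):
    """Retry a management operation while the disk is still hydrating."""
    delay = initial_delay
    for attempt in range(attempts):
        try:
            return operation()
        except RuntimeError as err:
            if HYDRATION_ERROR not in str(err) or attempt == attempts - 1:
                raise  # unrelated error, or out of attempts
            time.sleep(delay)  # give hydration time to finish before retrying
            delay *= 2         # back off between attempts

# Example: a fake operation that fails twice while "hydrating", then succeeds.
calls = {"n": 0}
def enable_ha():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError(HYDRATION_ERROR + ". Retry after some time.")
    return "HA enabled"

print(run_with_retry(enable_ha, initial_delay=0.01))  # HA enabled
```

In a real pipeline, the delay between attempts should be long enough for hydration to complete, rather than the short value used in this example.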