# Scalability and performance targets for Blob storage
This reference details scalability and performance targets for Azure Storage. The scalability and performance targets listed here are high-end targets, but are achievable. In all cases, the request rate and bandwidth achieved by your storage account depend on the size of the objects stored, your access patterns, and the type of workload your application performs.
Make sure to test your service to determine whether its performance meets your requirements. If possible, avoid sudden spikes in the rate of traffic and ensure that traffic is well-distributed across partitions.
When your application reaches the limit of what a partition can handle for your workload, Azure Storage begins to return error code 503 (Server Busy) or error code 500 (Operation Timeout) responses. If 503 errors are occurring, consider modifying your application to use an exponential backoff policy for retries. Exponential backoff allows the load on the partition to decrease and eases spikes in traffic to that partition.
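A minimal sketch of such a retry loop, using only the standard library (the function and parameter names here are illustrative, not part of any Azure SDK; the Azure SDKs ship their own configurable retry policies):

```python
import random
import time

# Status codes the text above identifies as signs of a busy partition.
RETRYABLE_STATUS = {500, 503}

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Retry `operation` with exponential backoff plus jitter.

    `operation` is any callable returning (status_code, body).
    Illustrative helper only, not an SDK function.
    """
    for attempt in range(max_attempts):
        status, body = operation()
        if status not in RETRYABLE_STATUS:
            return status, body
        # Sleep base_delay * 2^attempt, capped at max_delay, with random
        # jitter so concurrent clients don't retry in lockstep against
        # the same partition.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(delay * random.uniform(0.5, 1.0))
    return status, body
```

With jitter, each client backs off on a slightly different schedule, which is what lets the load on the overwhelmed partition actually drain.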
The service-level agreement (SLA) for Azure Storage accounts is available at SLA for Storage Accounts.
## Scale targets for Blob storage
|Resource|Target|
|--|--|
|Maximum size of single blob container|Same as maximum storage account capacity|
|Maximum number of blocks in a block blob or append blob|50,000 blocks|
|Maximum size of a block in a block blob|4000 MiB|
|Maximum size of a block blob|50,000 x 4000 MiB (approximately 190.7 TiB)|
|Maximum size of a block in an append blob|4 MiB|
|Maximum size of an append blob|50,000 x 4 MiB (approximately 195 GiB)|
|Maximum size of a page blob|8 TiB<sup>2</sup>|
|Maximum number of stored access policies per blob container|5|
|Target request rate for a single blob|Up to 500 requests per second|
|Target throughput for a single page blob|Up to 60 MiB per second<sup>2</sup>|
|Target throughput for a single block blob|Up to storage account ingress/egress limits<sup>1</sup>|
<sup>1</sup> Throughput for a single blob depends on several factors. These factors include but aren't limited to: concurrency, request size, performance tier, speed of source for uploads, and destination for downloads. To take advantage of the performance enhancements of high-throughput block blobs, upload larger blobs or blocks. Specifically, call the Put Blob or Put Block operation with a blob or block size that is greater than 4 MiB for standard storage accounts. For premium block blob or for Data Lake Storage Gen2 storage accounts, use a block or blob size that is greater than 256 KiB.

<sup>2</sup> Page blobs aren't yet supported in accounts that have a hierarchical namespace enabled.
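The approximate maximums in the table follow directly from the block limits; a quick check of the arithmetic (using binary units, where 1 GiB = 1024 MiB and 1 TiB = 1024 GiB):

```python
# Verify the scale-target arithmetic from the table above.
MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4

# Block blob: 50,000 blocks x 4000 MiB per block.
max_block_blob = 50_000 * 4000 * MIB
# Append blob: 50,000 blocks x 4 MiB per block.
max_append_blob = 50_000 * 4 * MIB

print(round(max_block_blob / TIB, 1))   # approximately 190.7 TiB
print(round(max_append_blob / GIB, 1))  # approximately 195.3 GiB
```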
The following table describes the maximum block and blob sizes permitted by service version.
|Service version|Maximum block size (via Put Block)|Maximum blob size (via Put Block List)|Maximum blob size via single write operation (via Put Blob)|
|--|--|--|--|
|Version 2019-12-12 and later|4000 MiB|Approximately 190.7 TiB (4000 MiB x 50,000 blocks)|5000 MiB|
|Version 2016-05-31 through version 2019-07-07|100 MiB|Approximately 4.75 TiB (100 MiB x 50,000 blocks)|256 MiB|
|Versions prior to 2016-05-31|4 MiB|Approximately 195 GiB (4 MiB x 50,000 blocks)|64 MiB|
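These per-version limits determine how a large upload must be chunked into Put Block calls. A small sketch of that calculation (the dictionary and helper below are illustrative, not an SDK API):

```python
import math

# Maximum block sizes by service version, in MiB, from the table above.
MAX_BLOCK_MIB = {
    "2019-12-12": 4000,
    "2016-05-31": 100,
    "pre-2016-05-31": 4,
}

MAX_BLOCKS = 50_000  # block blobs allow at most 50,000 blocks

def blocks_needed(blob_size_mib, service_version):
    """Return the minimum number of Put Block calls needed to upload a
    blob of `blob_size_mib` MiB at the version's maximum block size.
    Illustrative helper only, not an SDK function."""
    max_block = MAX_BLOCK_MIB[service_version]
    count = math.ceil(blob_size_mib / max_block)
    if count > MAX_BLOCKS:
        raise ValueError("blob exceeds the maximum size for this service version")
    return count
```

For example, a 5000 MiB blob needs two 4000 MiB blocks under version 2019-12-12 but fifty 100 MiB blocks under version 2016-05-31, which is why newer service versions matter for very large blobs.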