Reaching Azure disk storage limit on General Purpose Azure SQL Database Managed Instance
Azure SQL Database Managed Instance is a SQL Server implementation in the Azure cloud that keeps all database files on Azure storage. In this post you will see how Managed Instance allocates disks in the storage layer and why this is important.
Azure SQL Database Managed Instance has a General Purpose tier that separates the compute and storage layers and places the database files on Azure Premium disks.
Managed Instance uses pre-defined Azure disk sizes (128 GB, 256 GB, 512 GB, etc.) for every file: each file is placed on a single disk of the smallest size that can fit the file at its current size.
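To make this allocation rule concrete, here is a small illustrative sketch that computes the disk-size bucket for a given file size. The @file_size_gb variable and the rounding expression are my own; they just mimic the documented 128 GB / 256 GB / 512 GB / ... progression, where each size is double the previous one:

```sql
-- Illustrative sketch: pick the smallest pre-defined disk size that fits a file.
-- Disk sizes double starting from 128 GB, so this is a power-of-two rounding.
DECLARE @file_size_gb decimal(12,4) = 200;

SELECT azure_disk_size_gb =
    IIF(@file_size_gb <= 128,
        128,                                               -- smallest disk size
        POWER(2, CEILING(LOG(@file_size_gb) / LOG(2))));   -- next power of two
-- A 200 GB file lands on a 256 GB disk.
```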
In the General Purpose tier you can have up to 8 TB of storage space for your database files. This limit includes both data and log files in user and system databases. However, a General Purpose Managed Instance uses up to 35 TB of internal storage where it places these disks. This means that once you provision a Managed Instance you have two storage limits:
- Managed Instance user storage: the storage size that you choose on the Azure portal and that you pay for. The total size of your user databases cannot exceed this limit, which can be at most 8 TB.
- Internal physically allocated Azure Premium disk storage, which cannot exceed 35 TB. As a result, you cannot have more than 280 files on a General Purpose instance, because even 280 files placed on the smallest 128 GB disks reach the 35 TB limit (280 × 128 GB = 35,840 GB = 35 TB).
When you create database files, they are allocated on Azure Premium disks whose sizes are greater than or equal to the file sizes, so Managed Instance has some "internal fragmentation" of files. This happens because Azure Premium Disk storage offers a fixed set of disk sizes, so Managed Instance fits each database file on the smallest disk that can hold it.
The sum of the allocated disk sizes cannot be greater than 35 TB. If you reach this limit, you might start getting errors even if you have not reached the user-defined Managed Instance storage limit.
In this post, you will see some scripts that can help you determine whether you are approaching this storage limit.
First, we will create a schema and a view that wraps the standard sys.master_files view and returns the allocated disk size for every file:
CREATE SCHEMA mi;
GO
CREATE OR ALTER VIEW mi.master_files
AS
WITH mi_master_files AS
( SELECT *, size_gb = CAST(size * 8. / 1024 / 1024 AS decimal(12,4))
  FROM sys.master_files WHERE physical_name LIKE 'https:%' )
SELECT *, azure_disk_size_gb = IIF(
    database_id <> 2, -- tempdb (database_id = 2) is placed on local SSD, not on Azure Premium disks
    CASE WHEN size_gb <= 128 THEN 128
         WHEN size_gb > 128 AND size_gb <= 256 THEN 256
         WHEN size_gb > 256 AND size_gb <= 512 THEN 512
         WHEN size_gb > 512 AND size_gb <= 1024 THEN 1024
         WHEN size_gb > 1024 AND size_gb <= 2048 THEN 2048
         WHEN size_gb > 2048 AND size_gb <= 4096 THEN 4096
         ELSE 8192
    END, 0)
FROM mi_master_files;
GO
Now we can see the size allocated on the underlying Azure Premium disks for every database file:
SELECT db = db_name(database_id), name, size_gb, azure_disk_size_gb
FROM mi.master_files;
The sum of the Azure disk sizes should not exceed 35 TB; otherwise you will start getting Azure storage limit errors. You can check the total allocated Azure storage space using the following query:
SELECT storage_size_tb = SUM(azure_disk_size_gb) / 1024.
FROM mi.master_files;
Using this information, you can find out how many additional files you can add to a managed instance (assuming that each new file will be smaller than 128 GB):
SELECT remaining_number_of_128gb_files =
    (35 - ROUND(SUM(azure_disk_size_gb) / 1024., 0)) * 8
FROM mi.master_files;
This is an important check because if this count reaches zero, you will not be able to add more database files to the instance.
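As a final sanity check, it can be useful to compare the space your files actually use with the space allocated on the underlying disks, per database. The following query is a sketch built on the mi.master_files view from above; the column aliases are my own:

```sql
-- Per-database comparison of used file size vs. allocated Azure disk size.
-- The difference is the "internal fragmentation" described earlier.
SELECT db = db_name(database_id),
       used_gb = SUM(size_gb),
       allocated_disk_gb = SUM(azure_disk_size_gb),
       internal_fragmentation_gb = SUM(azure_disk_size_gb) - SUM(size_gb)
FROM mi.master_files
GROUP BY database_id;
```

Databases where the allocated disk size is much larger than the used size are the best candidates for consolidating files if you need to reclaim internal storage.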