Hi Team,
One of my Azure SQL databases has almost reached 3.5 TB, and we anticipate another 3-4 TB of data growth in the next couple of weeks.
Configuration Details:
Compute tier: Provisioned
Hardware configuration: Gen5, up to 80 vCores, up to 408 GB memory.
Under this configuration the Data Max Size is 4 TB.
In this case, what are the options?
One option is to move to Azure Hyperscale, where the size limit is 100 TB.
On Azure SQL Managed Instance you can store up to 16 TB of data, as stated here.
You can also decide to leave Azure SQL (PaaS) and go back to Azure SQL Server VMs (IaaS).
@Vijay Kumar
Agreed with @Alberto Morillo 's answer.
You can use the Hyperscale database tier, which lets the database grow beyond 4 TB and supports database sizes up to 100 TB.
Hyperscale databases aren't created with a defined max size. A Hyperscale database grows as needed - and you're billed only for the capacity you use. For read-intensive workloads, the Hyperscale service tier provides rapid scale-out by provisioning additional replicas as needed for offloading read workloads.
Reference link: What is the Hyperscale service tier? - Azure SQL Database | Microsoft Learn
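To make this concrete, a minimal sketch of moving an existing database to Hyperscale with T-SQL, run from the logical server's master database. The database name and service objective below are placeholders; pick the vCore count you need, and note that moving a database back out of Hyperscale is restricted, so test on a copy first.

```sql
-- Move an existing Azure SQL database to the Hyperscale tier.
-- [MyDatabase] and HS_Gen5_8 (8 vCores) are placeholder values.
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8');
```

The same change can also be made from the Azure portal (Compute + storage blade) or the Azure CLI if you prefer not to use T-SQL.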
Regards,
Oury
Is there any other option?
For example, moving old data to cold storage and retrieving it as and when required?
Also, can the data be compressed before storing it in the DB?
I had the same problem: one database on BC_Gen5 with 20 vCores reached 3.5 TB.
Because my database is used for BI reporting/analysis purposes, I applied page compression to the large tables.
After all the large tables were compressed, the size was down to 1.5 TB.
Of course, CPU consumption is higher than before.
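The approach above can be sketched in T-SQL as follows. The table name `dbo.FactSales` is a placeholder; you would typically target your largest tables first. Estimating the savings before rebuilding is optional but helps decide whether the extra CPU cost is worth it.

```sql
-- Optional: estimate how much space page compression would save
-- for a given table before committing to the rebuild.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'FactSales',   -- placeholder table
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Rebuild the table (heap or clustered index) with page compression.
ALTER TABLE dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Compress the nonclustered indexes as well.
ALTER INDEX ALL ON dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Rebuilds on multi-TB tables can take a long time and consume log space, so consider running them per table (or per partition) during a low-traffic window.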