Azure SQL DBCC ShrinkDatabase takes too long

RJ 326 Reputation points
2024-10-01T14:53:18.8733333+00:00

Hi there,

I ran DBCC SHRINKDATABASE on an Azure SQL database provisioned with 10 vCores.

One database is 500 GB and the other is 800 GB. Both ran for 12 hours and progress was only 30 percent.

Sometimes it gets killed because ETL jobs run every 12 hours.

Is there anything I'm doing wrong?

Thanks

Azure SQL Database

1 answer

  1. Alberto Morillo 34,671 Reputation points MVP Volunteer Moderator
    2024-10-01T15:13:08.06+00:00

    When database allocated space is in hundreds of gigabytes or higher, shrink might require a significant time to complete, often measured in hours, or days for multi-terabyte databases. Some suggestions to speed up the process:

    1. Rebuild indexes that have low page density, so that used pages are consolidated before the shrink.
    2. It is recommended to first execute shrink for each data file with the TRUNCATEONLY parameter. This way, if there is any allocated but unused space at the end of the file, it is removed quickly and without any data movement.
    3. If the database has multiple data files, you can speed up the process by shrinking multiple data files in parallel. You do this by opening multiple database sessions and using DBCC SHRINKFILE in each session with a different file_id value.
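
    Steps 2 and 3 above can be sketched in T-SQL roughly as follows. The file IDs and the target size are placeholders, not values from the question; replace them with the actual file_id values returned by sys.database_files and a target that leaves headroom for your workload:

    ```sql
    -- List the database's data files to find their file_id values.
    SELECT file_id, name, type_desc,
           size * 8 / 1024 AS size_mb
    FROM sys.database_files;

    -- Step 2: reclaim allocated-but-unused space at the end of each file,
    -- quickly and without data movement. File IDs 1 and 2 are placeholders.
    DBCC SHRINKFILE (1, TRUNCATEONLY);
    DBCC SHRINKFILE (2, TRUNCATEONLY);

    -- Step 3: shrink each file toward a target size in MB, running each
    -- command in a SEPARATE session so the files shrink in parallel.
    -- 100000 MB (~100 GB) is a placeholder target.
    -- Session 1:
    DBCC SHRINKFILE (1, 100000);
    -- Session 2:
    DBCC SHRINKFILE (2, 100000);
    ```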

    For more information, read this documentation.
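
    On the 30 percent figure in the question: shrink progress can be monitored from another session, since sys.dm_exec_requests populates percent_complete for DBCC shrink commands. A minimal sketch (the LIKE filter on the command name is an assumption; the exact command text reported can vary):

    ```sql
    -- Monitor a running shrink from a separate session.
    SELECT r.session_id,
           r.command,
           r.percent_complete,
           r.total_elapsed_time / 60000 AS elapsed_minutes
    FROM sys.dm_exec_requests AS r
    WHERE r.command LIKE 'Dbcc%';
    ```

    This also helps decide whether to pause the shrink before the next ETL window rather than letting the job kill it.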

