Increase in database size

Sam 1,476 Reputation points
2024-06-19T05:36:23.6066667+00:00

Hi All,

We have a 2 TB disk that stores the SQL Server database files. The database has 3 data files and 1 log file; the log file is stored on a different drive.

The issue is with the data file drive: current usage on the E:\ drive has reached 1.20 TB out of 2 TB.

After monitoring for the last 5 days, we are seeing the data for one of the indexes grow as follows:

Monday - 100 GB
Tuesday - 20 GB
Wednesday - 150 GB
Thursday - 50 GB
Friday - 60 GB

So I am a little worried the disk will be full very soon.

Questions

  1. At this rate of growth, how much more disk should I add?
  2. How many days will this drive last before it is completely full? I want to know the calculation behind it (a rough estimate is sketched below).
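
For reference, a simple linear estimate from the numbers above, assuming growth continues at the same average rate (an assumption, not a guarantee), can be computed like this:

    -- Back-of-envelope estimate; all figures taken from the post above.
    DECLARE @disk_gb   decimal(10, 2) = 2048.0;  -- 2 TB data drive
    DECLARE @used_gb   decimal(10, 2) = 1228.8;  -- 1.20 TB currently used
    DECLARE @growth_gb decimal(10, 2) = 100 + 20 + 150 + 50 + 60;  -- 5 observed days = 380 GB

    SELECT @growth_gb / 5.0                           AS avg_daily_growth_gb,  -- 76 GB/day
           @disk_gb - @used_gb                        AS free_gb,              -- ~819 GB
           (@disk_gb - @used_gb) / (@growth_gb / 5.0) AS days_until_full;      -- ~10.8 days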

The problem is that we have already reached the data disk limit for this Azure VM size. The only options are to move to the next VM size, or to create a new VM and move some databases to it.

Before that, we need to show management and the app team data demonstrating that the database is growing and that we may soon run out of disk space and hit database downtime.

Alternatively, we are looking at archiving some of the unwanted backup tables and the old data in large tables within the databases. We are also trying to identify unused indexes and disable or drop them.

Regards,

Sam

SQL Server

Accepted answer
  1. Olaf Helper 45,106 Reputation points
    2024-06-19T06:13:36.8866667+00:00

    No one can really answer your post.

    You have to dig deeper into which table is increasing, and whether that growth is wanted or is only "trash data" or so-called "unused space" caused by heap tables, etc.

    You can query table/index sizes with:

    -- List all database tables and their indexes with
    -- detailed information about row count and
    -- used + reserved data space.
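    -- Note: the join to sys.dm_db_index_physical_stats below (last argument
    -- NULL = LIMITED mode) still has to scan every index, so allow for some
    -- runtime on a database of this size.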
    SELECT DISTINCT SCH.name AS SchemaName
          ,OBJ.name AS ObjName
          ,OBJ.type_desc AS ObjType
          ,INDX.name AS IndexName
          ,INDX.type_desc AS IndexType
          ,PART.partition_number AS PartitionNumber
          ,PART.rows AS PartitionRows
          ,STAT.row_count AS StatRowCount
          ,STAT.used_page_count * 8 AS UsedSizeKB
          ,STAT.reserved_page_count * 8 AS ReservedSizeKB
          ,PART.data_compression_desc
          ,DS.name AS FilegroupName
          ,(STAT.reserved_page_count - STAT.used_page_count) * 8 AS Unused
          ,CONVERT(decimal(5, 2), FRG.avg_fragmentation_in_percent) AS FragPercent, FRG.fragment_count AS FragCnt
    FROM sys.partitions AS PART
         INNER JOIN sys.dm_db_partition_stats AS STAT
             ON PART.partition_id = STAT.partition_id
                AND PART.partition_number = STAT.partition_number
         INNER JOIN sys.objects AS OBJ
             ON STAT.object_id = OBJ.object_id
         INNER JOIN sys.schemas AS SCH
             ON OBJ.schema_id = SCH.schema_id
         INNER JOIN sys.indexes AS INDX
             ON STAT.object_id = INDX.object_id
                AND STAT.index_id = INDX.index_id
         INNER JOIN sys.data_spaces AS DS
             ON INDX.data_space_id = DS.data_space_id
         INNER JOIN
         sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) AS FRG
            ON FRG.object_id = OBJ.object_id           
               AND FRG.index_id = INDX.index_id
    ORDER BY OBJ.name
            ,INDX.name
            ,PART.partition_number
    
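    To see which object is actually growing day to day, one option is to snapshot per-index sizes daily (e.g. from a SQL Server Agent job) and diff the snapshots. A minimal sketch, where the dbo.SizeSnapshot table is a hypothetical name, not an existing object:

    CREATE TABLE dbo.SizeSnapshot
    (
        capture_date date    NOT NULL DEFAULT (CONVERT(date, GETDATE())),
        object_name  sysname NOT NULL,
        index_name   sysname NULL,          -- NULL for heaps
        reserved_kb  bigint  NOT NULL
    );

    -- Run once per day:
    INSERT INTO dbo.SizeSnapshot (object_name, index_name, reserved_kb)
    SELECT OBJECT_NAME(STAT.object_id), INDX.name, SUM(STAT.reserved_page_count) * 8
    FROM sys.dm_db_partition_stats AS STAT
         INNER JOIN sys.indexes AS INDX
             ON STAT.object_id = INDX.object_id
                AND STAT.index_id = INDX.index_id
    GROUP BY STAT.object_id, INDX.name;

    -- Fastest-growing indexes between yesterday and today:
    SELECT cur.object_name, cur.index_name,
           (cur.reserved_kb - prev.reserved_kb) / 1024 AS growth_mb
    FROM dbo.SizeSnapshot AS cur
         INNER JOIN dbo.SizeSnapshot AS prev
             ON prev.object_name = cur.object_name
                AND ISNULL(prev.index_name, '') = ISNULL(cur.index_name, '')
                AND prev.capture_date = DATEADD(day, -1, cur.capture_date)
    WHERE cur.capture_date = CONVERT(date, GETDATE())
    ORDER BY growth_mb DESC;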

2 additional answers
  1. MikeyQiaoMSFT-0444 3,190 Reputation points
    2024-06-19T09:51:58.54+00:00

    Hello Sam,

    The data file size and growth rate you mention are common in large-scale applications, because data file growth is closely tied to how the client applications use the database.

    If you find this growth unacceptable, you must identify which tables are occupying the file space, why they are so large, and how to free up space quickly.

    You can shrink the database to reduce the size of the MDF file, but this does not fundamentally solve the issue of your large, incrementally growing file.
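
    For completeness, a sketch of what a shrink looks like; the database and logical file names below are placeholders, and note that shrinking fragments indexes, so rebuild or reorganize them afterwards:

    USE YourDatabase;  -- placeholder database name
    -- Target size is in MB; shrink in moderate steps rather than in one go.
    DBCC SHRINKFILE (N'YourDataFile1', 1100000);  -- placeholder logical file name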

    Here are some commands to help you troubleshoot abnormal data file sizes:

    1. Calculate the total number of extents and the number of used extents in a database data file:

    DBCC SHOWFILESTATS
    
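    Each extent is 8 pages of 8 KB, i.e. 64 KB, so the UsedExtents column can be converted to a size by hand. For example (the extent count here is made up purely for illustration):

    -- 19,660,800 used extents * 64 KB ≈ 1,200 GB
    SELECT 19660800 * 64.0 / 1024 / 1024 AS used_gb;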

    2. Use the dynamic management view sys.dm_db_partition_stats together with the sys.objects table. This calculates and aggregates the different types of page counts and row counts for each table or index object, with the results ordered by reserved page count in descending order:

    SELECT o.name ,
             SUM (p.reserved_page_count) AS reserved_page_count,
             SUM (p.used_page_count) AS used_page_count,
             SUM (
                CASE
                    WHEN (p.index_id < 2) THEN (p.in_row_data_page_count + 
                      p.lob_used_page_count + p.row_overflow_used_page_count)
                    ELSE p.lob_used_page_count + p.row_overflow_used_page_count
                END
                ) AS DataPages,
             SUM (
                CASE
                    WHEN (p.index_id < 2) THEN row_count
                    ELSE 0
                END
                ) AS rowCounts
        FROM sys.dm_db_partition_stats p INNER JOIN sys.objects o
            ON p.object_id = o.object_id
        GROUP BY o.name
        ORDER BY reserved_page_count DESC
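
    The page counts above are in 8 KB pages, so multiply by 8 for KB (or by 8 and divide by 1024 for MB); the next query does that conversion.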
    

    3. Calculate the storage usage for each table in the database and display it in MB:

    SELECT 
        schema_name(o.schema_id) AS SchemaName,
        o.name AS TableName,
        SUM(p.reserved_page_count) * 8 / 1024 AS reserved_mb,
        SUM(p.used_page_count) * 8 / 1024 AS used_mb,
        SUM(p.in_row_data_page_count + p.lob_used_page_count + p.row_overflow_used_page_count) * 8 / 1024 AS data_mb
    FROM 
        sys.dm_db_partition_stats p
    JOIN 
        sys.objects o ON p.object_id = o.object_id
    WHERE 
        o.type = 'U'  
    GROUP BY 
        schema_name(o.schema_id), 
        o.name
    ORDER BY 
        reserved_mb DESC;
    

    In summary, please do not forget to balance business needs with storage space limitations, and discard unnecessary data in a timely manner.


  2. yunus emre ISIK 181 Reputation points
    2024-06-27T06:53:35.4166667+00:00

    You should consider index optimization along with increasing the disk size. Find redundant, duplicate, and unused indexes and delete them; a starting-point query is sketched below.
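
    A minimal sketch of the usual approach, using sys.dm_db_index_usage_stats (note that these counters reset when the instance restarts, so judge them only after a representative workload period):

    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
    FROM sys.indexes AS i
         LEFT JOIN sys.dm_db_index_usage_stats AS us
             ON us.object_id = i.object_id
                AND us.index_id = i.index_id
                AND us.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
          AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
          -- never read (no seeks/scans/lookups) but still maintained on writes
          AND ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0) + ISNULL(us.user_lookups, 0) = 0
    ORDER BY ISNULL(us.user_updates, 0) DESC;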

