Data Drive Full

Pds 46 Reputation points
2021-11-08T21:20:49.207+00:00

Hello,
Currently I am investigating a vendor application database whose DATA disk drive is almost full.
Right now it is hard to clean up data from the database, since it belongs to a vendor application.
I was looking at the file growth and sizes from the database side and have some questions.

I was looking at the file sizes:
Database  FileName   CurrentSize (MB)  FreeSpace (MB)
msdb      MSDBData   23
msdb      MSDBLog    1918
LSSql     LSSqlData  338882            15871.125
LSSql     LSSqlLog   38191             37977.82031
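
Output like the table above can be produced with a query along these lines (a minimal sketch using sys.database_files and FILEPROPERTY; the column aliases are mine, and it has to be run in each database):

SELECT DB_NAME()    AS DatabaseName,
       name         AS FileName,
       size / 128.0 AS CurrentSizeMB,   -- size is stored in 8 KB pages
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeSpaceMB
FROM sys.database_files;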

Why does MSDBLog show a larger size than MSDBData?
LSSqlData and LSSqlLog belong to our main database.
All the files are configured for autogrowth in MB.

I was also looking at one of the big tables:

TableName  No_Of_Rows  Reserved (KB)  Data (KB)  Index (KB)  Unused (KB)
MSGLog     2640599     56448216       55755552   662008      30656
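
The table figures above are in the shape of sp_spaceused output; a minimal way to get them for a single table (assuming the table lives in the dbo schema) is:

EXEC sp_spaceused N'dbo.MSGLog';   -- returns rows, reserved, data, index_size and unused (KB)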

From the database side, is there any indication at the file level of what is taking up the space?

TempDB is also not currently showing any significant space usage:

Database  FileName   CurrentSize (MB)  FreeSpace (MB)
msdb      MSDBData   23
msdb      MSDBLog    1918
LSSql     LSSqlData  338882            15871.125
LSSql     LSSqlLog   38191             37977.82031
tempdb    tempdev    8192
tempdb    templog    1024
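
To double-check that tempdb really is mostly idle, a breakdown along these lines can be used (a sketch based on sys.dm_db_file_space_usage; values come back as 8 KB pages and are converted to MB here):

SELECT SUM(user_object_reserved_page_count)     / 128.0 AS UserObjectsMB,     -- temp tables, table variables
       SUM(internal_object_reserved_page_count) / 128.0 AS InternalObjectsMB, -- sorts, spools, hash work
       SUM(version_store_reserved_page_count)   / 128.0 AS VersionStoreMB,
       SUM(unallocated_extent_page_count)       / 128.0 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage;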

Accepted answer
  1. origae 76 Reputation points
    2021-11-27T06:41:24.273+00:00

    Cathy,
    Thanks for the follow-up.
    Moving the files to another disk or adding disk space is not feasible right now, so we are thinking of deleting records from the tables based on certain criteria, and running that periodically as a job to clean up data.
    Is there any downside to regularly running a DELETE that removes records older than 365 days?
    Will deleting records from the table help to regain disk space?
    Is there any other database maintenance you would suggest to avoid fragmentation in the table after the deletes?
    I am not in favor of shrinking the database or the files, as that will cause heavy fragmentation.

    If I delete records older than 365 days with one of the statements below, would that be the right way?
    The table is big, around 2,651,000 rows.

    DELETE FROM SqlTab_1
    WHERE CreateDate < DATEADD(dd,-365,GETDATE())

    or the following:

    DELETE FROM SqlTab_1
    WHERE CreateDate < GETDATE() - 365
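
    Both forms should give the same cutoff value, because GETDATE() returns datetime and datetime minus an integer subtracts that many days; DATEADD is just the more explicit form (the shortcut would fail with a datetime2 source such as SYSDATETIME()). A quick sanity check, not tied to SqlTab_1:

    SELECT DATEADD(dd, -365, GETDATE()) AS DateAddCutoff,   -- explicit date arithmetic
           GETDATE() - 365              AS ShortcutCutoff;  -- datetime-only shortcut, same value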


32 additional answers

  1. Pds 46 Reputation points
    2021-11-18T22:36:59.997+00:00

    Thanks Erland.
    I am currently using a batch size of 5000 and committing the transaction after each batch.
    We are taking log backups hourly.


  2. Pds 46 Reputation points
    2021-11-18T22:51:56.477+00:00

    Thanks Erland.
    Agreed. I am currently running this against a table that is smaller than the others to check how it works, but you are absolutely right that frequent log backups are needed.

    0 comments No comments

  3. Pds 46 Reputation points
    2021-11-18T22:57:26.927+00:00

    I am running the following query; please let me know if you can think of something better that would be more helpful.
    I have also created a non-clustered index on Status and CreateDate (a sketch of that index is shown after the script).

    DECLARE @PurgeDate datetime;
    DECLARE @BATCHSIZE int,
            @ITERATION int,
            @TOTALROWS int,
            @MSG       varchar(500);

    SET DEADLOCK_PRIORITY LOW;                    -- prefer this session as the deadlock victim over application sessions
    SET @PurgeDate = DATEADD(d, -365, GETDATE()); -- purge rows older than 365 days
    SET @BATCHSIZE = 5000;
    SET @ITERATION = 0;
    SET @TOTALROWS = 0;

    WHILE @BATCHSIZE > 0                          -- stop once a batch deletes no rows
    BEGIN
        BEGIN TRANSACTION;

        DELETE TOP (@BATCHSIZE)
        FROM RptEvnt
        WHERE Status = 'A' AND createdt <= @PurgeDate;

        SET @BATCHSIZE = @@ROWCOUNT;              -- rows deleted in this batch; 0 ends the loop
        SET @ITERATION = @ITERATION + 1;
        SET @TOTALROWS = @TOTALROWS + @BATCHSIZE;

        COMMIT TRANSACTION;

        SET @MSG = 'Iteration: ' + CAST(@ITERATION AS varchar) + ' Total deletes: ' + CAST(@TOTALROWS AS varchar);
        RAISERROR(@MSG, 0, 1) WITH NOWAIT;        -- progress message, flushed immediately
    END;
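
    For reference, the supporting non-clustered index mentioned above was created roughly like this (a sketch; the index name, schema, and key order are my assumptions):

    CREATE NONCLUSTERED INDEX IX_RptEvnt_Status_createdt
        ON dbo.RptEvnt (Status, createdt);   -- matches the WHERE clause of the batched DELETE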
    

  4. Pds 46 Reputation points
    2021-11-18T23:08:39.183+00:00

    Thanks once again, Erland.
    Do you suggest modifying this query before I run it against the bigger table? It would be almost the same query, just against a much larger table.
    Currently it is taking on average 20 seconds to delete 1000 records.
    So for about 67,000 records it took roughly 27 minutes.