Hi Deepak Patil,
Thank you for sharing the details and the screenshot from the Azure Portal. I’ll address your queries in a structured manner to ensure clarity.
Locating the SKU ID in the Azure Portal
I’m sorry to hear you couldn’t find the SKU ID.
To find the SKU ID of the SQL Database:
Go to the Azure Portal and select your database (e.g., SQL Database).
In the left-hand menu, click on Overview or Settings.
Look for the Pricing tier or Compute + storage section. The SKU ID (e.g., S0, S1, or a specific DTU/vCore configuration) is usually displayed there.
If you’re still unable to locate it, check the Resource JSON view (available under the Automation or Export template section), where the SKU details are explicitly listed. Let me know if you need further assistance with this!
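Alternatively, you can check the service tier directly with T-SQL from within the database — the service objective corresponds to the SKU shown in the portal (e.g., S0). A minimal query:

```sql
-- Run against the Azure SQL Database itself.
-- Returns the current edition and service objective (e.g., Standard / S0),
-- which correspond to the SKU shown in the portal.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```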
Observations on Storage and Cost
I’m glad to hear the graphs in the Azure Portal reflect the adjusted storage size correctly. The slight decrease in cost, as you noted, aligns with the reduction of the PITR retention period from 7 days to 1 day. This change typically reduces the backup storage overhead, which seems to have had the expected partial impact on cost.
Database Size Increasing Despite Record Balance
You mentioned that despite reducing the storage from 55 GB to around 10 GB, the database size is increasing again, even though 4,860 old records are deleted and 4,860 new records are inserted daily. Ideally, this churn should keep the data size steady, but as you observed, new data appears to be inserted into new space rather than reusing freed space.
From the screenshot:
Used space: 9.48 GB
Remaining space: 5.52 GB
Allocated space: 9.54 GB
Max storage: 15 GB
Usage: 63.21%
The allocated space (9.54 GB) being slightly higher than the used space (9.48 GB) indicates that the database has reserved additional space for growth, which is expected behavior in SQL Server. However, the issue of new data not reusing freed space is likely due to how SQL Server manages its data files and fragmentation.
Addressing Your Points:
1. Is it necessary to regularly run DBCC SHRINKDATABASE to reclaim space and prevent unnecessary growth of the database files?
Running DBCC SHRINKDATABASE can reclaim unused space, but running it regularly is not a recommended practice. Here’s why:
When you delete records, the space they occupied is marked as free within the data file, but the file itself doesn’t automatically shrink. SQL Server keeps this space available for future inserts to avoid frequent file growth operations, which can be resource-intensive.
In your case, since new records are being inserted into new space rather than reusing the freed space, this could be due to index fragmentation or the way the data is organized within the database.
Running DBCC SHRINKDATABASE will reduce the file size, but it can lead to fragmentation, which may degrade performance over time. It also doesn’t address the root cause of why the space isn’t being reused.
Instead of relying on frequent shrinking, I’d recommend addressing the underlying issue (e.g., fragmentation or fill factor) to ensure space is reused efficiently.
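If you do decide on a one-off shrink after a major cleanup, a minimal sketch is below. The database and table names are placeholders for your own objects, and 10 is an illustrative target of 10% free space left in the files:

```sql
-- One-off shrink after a major cleanup: leave 10% free space in the files.
DBCC SHRINKDATABASE (N'YourDatabase', 10);

-- Shrinking fragments indexes, so follow it with a rebuild.
-- Replace dbo.YourTable with the tables that matter most.
ALTER INDEX ALL ON dbo.YourTable REBUILD;
```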
2. Is there a recommended best practice to avoid frequent shrinking and still keep the database size optimized?
Yes, there are several best practices to optimize database size and avoid frequent shrinking.
Rebuild or Reorganize Indexes: Fragmentation can prevent SQL Server from reusing freed space efficiently. Run INDEX REBUILD or INDEX REORGANIZE operations periodically to defragment your indexes.
You can check fragmentation levels using the sys.dm_db_index_physical_stats DMV.
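As a starting point, the query below lists fragmentation per index using that DMV (the page-count filter and thresholds are common rules of thumb, not hard requirements):

```sql
-- Check fragmentation for all indexes in the current database.
-- Rule of thumb: > 30% -> REBUILD, 5-30% -> REORGANIZE.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 100              -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```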
Adjust Fill Factor: If your database has a high rate of inserts and deletes, consider setting a lower fill factor (e.g., 70-80%) on your indexes. This leaves more free space on each page, reducing fragmentation and allowing SQL Server to reuse space more effectively.
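For example, a rebuild with a lower fill factor looks like this (the index and table names are placeholders, and 80 is an illustrative value):

```sql
-- Rebuild an index with a lower fill factor, leaving ~20% free space
-- per page for future inserts/updates on a high-churn table.
ALTER INDEX IX_YourIndex ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 80);
```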
Monitor and Manage Log Files: Ensure the transaction log isn’t growing unnecessarily. In Azure SQL Database, log backups are handled automatically by the service; on a self-managed SQL Server in the Full Recovery Model, you would need to take regular log backups to truncate the log. Based on your screenshot, the growth you’re seeing is in the data file rather than the log, so this is likely not the culprit, but it’s worth keeping an eye on.
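You can confirm the log is not the source of growth with a quick check:

```sql
-- Current log file usage for the database.
SELECT total_log_size_in_bytes / 1048576.0 AS total_log_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```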
Partitioning (if applicable): If your database supports it, consider table partitioning to manage large datasets more efficiently. This can help with both performance and space management.
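As a rough sketch of what that could look like for date-based data (all names and boundary values below are illustrative, not taken from your setup):

```sql
-- Sketch: monthly range partitioning on a date column.
CREATE PARTITION FUNCTION pf_monthly (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_monthly
AS PARTITION pf_monthly ALL TO ([PRIMARY]);

-- Tables created ON ps_monthly(date_column) can then have old
-- partitions truncated or switched out cheaply instead of row deletes.
```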
Avoid Frequent Shrinking: Instead of shrinking, allow the database to maintain some free space for growth. Shrinking and growing data files repeatedly can lead to performance overhead and fragmentation. If you must shrink, do it sparingly (e.g., after a one-time major cleanup) and follow it with an index rebuild.
Monitor Growth Trends: Continue monitoring the database size using the Azure Portal graphs, as you’ve done. If the size keeps increasing despite these measures, there might be other factors (e.g., unmaintained statistics, inefficient queries, or temporary data accumulation) that need investigation.
Please do let us know if you need further assistance with implementing any of these solutions or have additional queries.
If this comment is helpful, please click "Upvote".