Azure SQL Database Cost Not Decreasing After Reducing Storage Size

Deepak Patil 20 Reputation points
2025-05-05T07:53:17.3566667+00:00

I am using an Azure SQL Database on the Free – General Purpose – Serverless: Gen5 tier with 6 vCores. Over time, the database grew to 55 GB in size. To optimize storage and reduce costs, I performed the following actions:

  • Deleted over 2 years of old records, retaining only the last 1 month of data

  • Reindexed all tables

  • Executed DBCC SHRINKDATABASE

  • Reduced the data size to approximately 5 GB

  • Adjusted the maximum storage size to 10 GB for future usage

  • Changed the backup retention period from 7 days to 1 day

Despite these actions, the Cost Management dashboard still reflects the same cost as when the storage was at 50 GB. I expected a reduction in the billed amount after these optimizations, but there has been no visible change.

Request:

Could you please clarify:

Why the cost has not decreased even though the actual data size is now around 10 GB?

Is there a delay in reflecting cost changes?

Are there any hidden storage components (e.g., transaction logs, backup snapshots, temp storage) that may still be contributing to the cost?

I would appreciate your guidance on how to ensure that the billing reflects the actual reduced usage.

Azure SQL Database

Accepted answer
  1. Manasa Akula 680 Reputation points Microsoft External Staff Moderator
    2025-05-06T06:35:32.8566667+00:00

    Hi Deepak Patil,

    Thank you for sharing the details and the screenshot from the Azure Portal. I’ll address your queries in a structured manner to ensure clarity.

    Locating the SKU ID in the Azure Portal

    I’m sorry to hear you couldn’t find the SKU ID.

    To find the SKU ID of the SQL database:

    1. Go to the Azure Portal and select your database (e.g., SQL Database).

    2. In the left-hand menu, click Overview or Settings.

    3. Look for the Pricing tier or Compute + storage section. The SKU ID (e.g., S0, S1, or a specific DTU/vCore configuration) is usually displayed there.

    If you’re still unable to locate it, check the Resource JSON view (available under the Automation or Export template section), where the SKU details are explicitly listed. Let me know if you need further assistance with this!
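    If you prefer to query it directly, the service tier and objective are also exposed through standard Azure SQL Database properties. A minimal sketch (run against your database, not master):

    ```sql
    -- Standard DATABASEPROPERTYEX properties in Azure SQL Database
    SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,            -- e.g. GeneralPurpose
           DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective,  -- e.g. a serverless Gen5 objective
           DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes')   AS max_size_bytes;     -- configured max data size
    ```

    The service objective string corresponds to the SKU you see billed in Cost Management.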

    Observations on Storage and Cost

    I’m glad to hear the graphs in the Azure Portal reflect the adjusted storage size correctly. The slight decrease in cost, as you noted, aligns with the reduction of the PITR retention period from 7 days to 1 day. This change typically reduces the backup storage overhead, which seems to have had the expected partial impact on cost.

    Database Size Increasing Despite Balanced Inserts and Deletes

    You mentioned that despite reducing the storage from 55 GB to around 10 GB, the database size is increasing again, even though 4,860 old records are deleted and 4,860 new records are inserted daily. Ideally, this would keep the data size steady, but as you observed, new data appears to be written to newly allocated space rather than reusing the freed space.

    From the screenshot:

    Used space: 9.48 GB

    Remaining space: 5.52 GB

    Allocated space: 9.54 GB

    Max storage: 15 GB

    Usage: 63.21%

    The allocated space (9.54 GB) being slightly higher than the used space (9.48 GB) indicates that the database has reserved additional space for growth, which is expected behavior in SQL Server. However, the issue of new data not reusing freed space is likely due to how SQL Server manages its data files and fragmentation.
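    You can confirm the allocated-versus-used split yourself without the portal. A quick sketch using standard catalog views (sizes are stored in 8 KB pages):

    ```sql
    -- Allocated vs. actually used space per database file
    SELECT name,
           CAST(size * 8.0 / 1024 AS decimal(10, 2))                            AS allocated_mb,
           CAST(FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS decimal(10, 2)) AS used_mb
    FROM sys.database_files;
    ```

    A large gap between allocated_mb and used_mb is the reserved-but-free space discussed above; it is not released to the service until the file is shrunk.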

    Addressing Your Points:

    1. Is it necessary to regularly run DBCC SHRINKDATABASE to reclaim space and prevent unnecessary growth of the database files?

    Running DBCC SHRINKDATABASE can reclaim unused space, but it isn’t a good practice to run it regularly. Here’s why:

    When you delete records, the space they occupied is marked as free within the data file, but the file itself doesn’t automatically shrink. SQL Server keeps this space available for future inserts to avoid frequent file growth operations, which can be resource-intensive.

    In your case, since new records are being inserted into new space rather than reusing the freed space, this could be due to index fragmentation or the way the data is organized within the database.

    Running DBCC SHRINKDATABASE will reduce the file size, but it can lead to fragmentation, which may degrade performance over time. It also doesn’t address the root cause of why the space isn’t being reused.

    Instead of relying on frequent shrinking, I’d recommend addressing the underlying issue (e.g., fragmentation or fill factor) to ensure space is reused efficiently.
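    If a one-off shrink is still warranted after a major cleanup, a more targeted approach than DBCC SHRINKDATABASE is DBCC SHRINKFILE with an explicit target size, followed by an index rebuild to repair the fragmentation the shrink introduces. A minimal sketch (the file id, target size, and table name below are placeholders for your environment):

    ```sql
    -- One-time cleanup: shrink the primary data file (file_id 1) to a ~6 GB target
    DBCC SHRINKFILE (1, 6144);  -- target size in MB; leave headroom above used space

    -- Shrinking moves pages and fragments indexes, so rebuild afterwards
    ALTER INDEX ALL ON dbo.MyTable REBUILD;
    ```

    Note that the rebuild itself needs free space to work in, so don’t shrink all the way down to the used size.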

    2. Is there a recommended best practice to avoid frequent shrinking and still keep the database size optimized?

    Yes, there are several best practices to optimize database size and avoid frequent shrinking.

    Rebuild or Reorganize Indexes: Fragmentation can prevent SQL Server from reusing freed space efficiently. Run INDEX REBUILD or INDEX REORGANIZE operations periodically to defragment your indexes.

    You can check fragmentation levels using the sys.dm_db_index_physical_stats DMV.
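    As a sketch, the fragmentation check with that DMV typically looks like this (the 10%/1,000-page thresholds are a common rule of thumb, not fixed requirements):

    ```sql
    -- Indexes worth defragmenting, worst first
    SELECT OBJECT_NAME(ips.object_id)          AS table_name,
           i.name                              AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND ips.page_count > 1000   -- ignore tiny indexes where fragmentation is irrelevant
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    A common guideline is REORGANIZE for roughly 10–30% fragmentation and REBUILD above 30%.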

    Adjust Fill Factor: If your database has a high rate of inserts and deletes, consider setting a lower fill factor (e.g., 70-80%) on your indexes. This leaves more free space on each page, reducing fragmentation and allowing SQL Server to reuse space more effectively.
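    The fill factor is applied as part of an index rebuild. A hedged example (the index and table names are placeholders):

    ```sql
    -- Rebuild leaving ~20% free space on each leaf page to absorb future inserts
    ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);
    ```

    ONLINE = ON keeps the table available during the rebuild, which Azure SQL Database supports in the vCore tiers.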

    Monitor and Manage Log Files: Ensure the transaction log isn’t growing unnecessarily. Note that in Azure SQL Database the recovery model is always FULL and log backups are taken automatically by the service, so you don’t need to manage log truncation yourself; you can still monitor log space consumption. In your screenshot the focus is on data-file usage, so the log doesn’t appear to be the main contributor here.
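    To check log consumption yourself, there is a standard DMV for it:

    ```sql
    -- Current transaction-log size and usage for this database
    SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
           used_log_space_in_bytes / 1048576.0 AS used_log_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;
    ```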

    Partitioning (if applicable): If your database supports it, consider table partitioning to manage large datasets more efficiently. This can help with both performance and space management.
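    Given your daily delete-and-insert pattern, partitioning by date would let you drop a whole period instantly instead of deleting row by row. A hypothetical sketch (table, column, and boundary values are placeholders; Azure SQL Database supports partitioning, with all partitions on PRIMARY):

    ```sql
    -- Monthly partitions on a date column
    CREATE PARTITION FUNCTION pf_monthly (date)
        AS RANGE RIGHT FOR VALUES ('2025-03-01', '2025-04-01', '2025-05-01');

    CREATE PARTITION SCHEME ps_monthly
        AS PARTITION pf_monthly ALL TO ([PRIMARY]);

    -- After rebuilding the table's clustered index on ps_monthly(OrderDate),
    -- an old month can be removed as a metadata-only operation:
    TRUNCATE TABLE dbo.Orders WITH (PARTITIONS (1));
    ```

    Truncating a partition deallocates its pages immediately, which avoids the freed-space-reuse problem entirely for aged-out data.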

    Avoid Frequent Shrinking: Instead of shrinking, allow the database to maintain some free space for growth. Shrinking and growing data files repeatedly can lead to performance overhead and fragmentation. If you must shrink, do it sparingly (e.g., after a one-time major cleanup) and follow it with an index rebuild.

    Monitor Growth Trends: Continue monitoring the database size using the Azure Portal graphs, as you’ve done. If the size keeps increasing despite these measures, there might be other factors (e.g., unmaintained statistics, inefficient queries, or temporary data accumulation) that need investigation.

    Please do let us know if you need further assistance with implementing any of these solutions or have additional queries.

    If the comment is helpful, please click "Upvote it".


1 additional answer

  1. Shikha Ghildiyal 6,615 Reputation points Microsoft Employee Moderator
    2025-05-05T08:35:24.7633333+00:00

    Hi Deepak Patil

    Thanks for reaching out to Microsoft Q&A.

    Azure SQL Database (except for serverless) is billed at a predictable, hourly rate. If the database is active for less than one hour, you are still billed for the full hour: the highest service tier selected, provisioned storage, and IO that applied during that hour, regardless of actual usage.

    Billing depends on the SKU of your product, the generation hardware of your SKU, and the meter category. Azure SQL Database has the following possible SKUs:

    • Basic (B)
    • Standard (S)
    • Premium (P)
    • General Purpose (GP)
    • Business Critical (BC)
    • And for storage: geo-redundant storage (GRS), locally redundant storage (LRS), and zone-redundant storage (ZRS)
    • It's also possible to have a deprecated SKU from deprecated resource offerings

    For more information, see vCore-based purchasing model, DTU-based purchasing model, or compare purchasing models.

    The following table shows the most common billing meters and their possible SKUs for single databases:

    | Measurement | Possible SKU(s) | Description |
    | --- | --- | --- |
    | Backup* | GP/BC/HS | Measures the consumption of storage used by backups, billed by the amount of storage utilized in GB per month. |
    | Backup (LTR) | GRS/LRS/ZRS/GF | Measures the consumption of storage used by long-term backups configured via long-term retention, billed by the amount of storage utilized. |
    | Compute | B/S/P/GP/BC | Measures the consumption of your compute resources per hour. |
    | Compute (primary/named replica) | HS | Measures the consumption of your compute resources per hour of your primary HS replica. |
    | Compute (HA replica) | HS | Measures the consumption of your compute resources per hour of your secondary HS replica. |
    | Compute (ZR add-on) | GP | Measures the consumption of your compute resources per minute of your zone-redundant add-on replica. |
    | Compute (serverless) | GP | Measures the consumption of your serverless compute resources per minute. |
    | License | GP/BC/HS | The billing for your SQL Server license, accrued per month. |
    | Storage | B/S*/P*/GP/BC/HS | Billed monthly, by the amount of data stored per hour. |

    For more details, kindly check the reference doc: https://learn.microsoft.com/en-us/azure/azure-sql/database/cost-management?view=azuresql. Hope it helps!

    Please do not forget to "Accept the answer” and “up-vote” wherever the information provided helps you, this can be beneficial to other community members.

