How can we scale the read operations on our SQL DB instance?

Shea Martin 0 Reputation points
2025-04-23T21:14:01.88+00:00

We have a web application with a SQL Server DB. The application sees about 40-75 daily active users, with only half of those performing more than a couple of tasks. So it's a pretty light load at this point.

Our production Azure SQL DB is a "Basic" tier running at 200 DTUs, with about 25 GB of data allocated. I have experimented with scaling the DTUs back down to 50, and as a user you can't tell the difference until someone runs a report that involves analyzing historical data. But as soon as users run reports we get the odd timeout.

We have optimized the reports pretty well. Could they be better? I'm sure they could, but they weren't written by a first-year junior dev, that's for sure.

We are in cost-saving mode, and our SQL instance is 50% of our Azure costs right now (about $500 CAD/month for the SQL instance).

I started looking into creating a read-only replica, but it seems the Data Sync feature is being retired: https://azure.microsoft.com/en-us/updates?id=sql-data-sync-retirement

The recommendation is to use Hyperscale or move to a higher pricing tier that has automatic read-only scale-out. But it is getting confusing, and I could use some advice. (I guess our 'Developer' support plan now only gives us access to this Q&A instead of talking to someone.)

Hyperscale with "serverless" compute could potentially save us money, since we have such low usage other than the odd spike. Is there any way to estimate the monthly cost of a serverless Hyperscale instance?

Any insights on optimizing our bill and performance would be helpful. There has to be a more efficient way to support so few users without compromising performance.

Azure SQL Database

2 answers

  1. Sai Raghunadh M 4,640 Reputation points Microsoft External Staff Moderator
    2025-04-23T22:29:30.5633333+00:00

    Hi @Shea Martin

    Given your scenario with 40–75 daily active users, a 25 GB Azure SQL Database, and report-related timeouts, here are some strategies to scale read operations and optimize your ~$500 CAD/month cost. Since Azure SQL Data Sync is being retired, a couple of options might help you scale reads without increasing cost unnecessarily:

    Hyperscale with Serverless Compute:

    Hyperscale is designed for scalability and can handle large databases efficiently. The serverless compute tier automatically scales based on workload demand and pauses during inactivity, which could save costs for your low-usage scenario.

    You can estimate the monthly cost using the Azure Pricing Calculator. It allows you to input your expected usage patterns and compute requirements to get a tailored cost estimate.
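    Beyond the calculator, you can sanity-check the numbers yourself: serverless compute is billed per vCore-second while the database is active (at least the configured minimum vCores), plus storage. The sketch below is a rough back-of-the-envelope estimate, not a quote -- the unit prices are illustrative placeholders, and actual Hyperscale serverless rates vary by region and currency, so confirm them in the Azure Pricing Calculator.

```python
# Rough monthly cost sketch for Azure SQL serverless compute.
# The unit prices below are illustrative placeholders -- check the
# Azure Pricing Calculator for current Hyperscale serverless rates
# in your region and currency.

VCORE_SECOND_PRICE = 0.000145   # assumed $/vCore-second (placeholder)
STORAGE_GB_PRICE = 0.12         # assumed $/GB-month (placeholder)

def estimate_monthly_cost(active_hours_per_day: float,
                          avg_vcores_while_active: float,
                          min_vcores: float,
                          storage_gb: float,
                          days: int = 30) -> float:
    """Serverless bills max(actual vCores used, configured minimum)
    per second while the database is active, plus storage."""
    billed_vcores = max(avg_vcores_while_active, min_vcores)
    compute_seconds = active_hours_per_day * 3600 * days
    compute_cost = billed_vcores * compute_seconds * VCORE_SECOND_PRICE
    storage_cost = storage_gb * STORAGE_GB_PRICE
    return compute_cost + storage_cost

# e.g. ~3 busy hours/day at ~1 vCore, 0.5 vCore minimum, 25 GB of data
print(round(estimate_monthly_cost(3, 1.0, 0.5, 25), 2))  # → 49.98
```

    The key variable for your workload is how many hours per day the database is actually active: with mostly idle time and the occasional report spike, serverless can undercut a provisioned tier, but a database that never idles can end up costing more than provisioned compute.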

    Read-Only Replicas:

    While the Azure SQL Data Sync feature is being retired, Hyperscale includes automatic read scale-out, which provides read-only replicas for read-heavy workloads. This could improve report performance without requiring a higher pricing tier.
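    Note that routing to a replica is controlled from the client side: adding `ApplicationIntent=ReadOnly` to the connection string tells the gateway to direct that session to a readable secondary when read scale-out is enabled. A minimal sketch of the idea, assuming hypothetical server and database names (in a real app the string would be passed to pyodbc or another SQL Server driver):

```python
# Sketch: build two connection strings -- one routed to the read-only
# replica for reports, one for normal read-write app traffic.
# Server/database names are hypothetical placeholders.

BASE = {
    "Server": "tcp:myserver.database.windows.net,1433",  # hypothetical
    "Database": "mydb",                                  # hypothetical
    "Encrypt": "yes",
}

def connection_string(read_only: bool = False) -> str:
    parts = dict(BASE)
    if read_only:
        # Sessions with ReadOnly intent are routed to a replica
        # when read scale-out is enabled on the tier.
        parts["ApplicationIntent"] = "ReadOnly"
    return ";".join(f"{k}={v}" for k, v in parts.items())

# Reports use the replica; normal app traffic keeps the default.
reports_conn = connection_string(read_only=True)
app_conn = connection_string()
```

    Pointing only the reporting code path at the read-only connection keeps the historical-data queries from competing with the transactional workload on the primary.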

    Elastic Pools:

    If you have multiple databases with varying usage, consider elastic pools. They allow you to share resources across databases, optimizing costs.

    Optimize Query Performance:

    Since reports are causing timeouts, further optimization of queries might help. Techniques like indexing, partitioning, or caching frequently accessed data could reduce the load on the database.
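    Caching in particular can be cheap to add: if the historical reports tolerate slightly stale data, serving a cached result for a few minutes avoids re-running the expensive query for every user. A minimal application-side sketch (the `historical_report` function is a hypothetical stand-in for your report query):

```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Cache a function's results for a limited time, so repeated
    report requests don't hit the database each time."""
    def decorator(fn):
        store = {}  # args -> (expiry_timestamp, result)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]        # still fresh: skip the query
            result = fn(*args)
            store[args] = (now + seconds, result)
            return result
        return wrapper
    return decorator

@ttl_cache(seconds=300)  # serve the same historical report for 5 minutes
def historical_report(year: int):
    # Placeholder for the expensive query; in the real app this
    # would run the report SQL against the database.
    return f"report-{year}"
```

    Since historical data rarely changes, even a short TTL can absorb most of the report spikes that are currently causing timeouts.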

    Azure Hybrid Benefit:

    If you have existing SQL Server licenses, you can use Azure Hybrid Benefit to reduce costs.

    Please go through these Documentations that might help you:

    https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?view=azuresql&tabs=general-purpose

    https://learn.microsoft.com/en-us/azure/azure-sql/database/cost-management?view=azuresql

    Hope this helps. Do let us know if you have any further queries.



  2. Alberto Morillo 34,671 Reputation points MVP Volunteer Moderator
    2025-04-26T04:59:45.3533333+00:00

    The storage of the Basic, S0, and S1 tiers is not intended for production databases. A production application using those tiers is destined for poor performance.

    "Regarding data storage, the Standard S0 and S1 service tiers are placed on Standard Page Blobs. Standard Page Blobs use hard disk drive (HDD)-based storage media and are best suited for development, testing, and other infrequently accessed workloads that are less sensitive to performance variability" Source here.

    The rest of the Standard tiers use remote SSD storage; Premium tiers use locally attached SSD storage.

    About the serverless option, please know the following: "Memory for serverless databases is reclaimed more frequently than for provisioned compute databases. This behavior is important to control costs in serverless and can impact performance." (Source). This means queries that retrieve a good amount of data may take considerable time to reload data into memory after it has been reclaimed, resulting in very poor performance.

    The DTU Premium tiers include a free read-only replica that allows you to redirect read workloads to it (read scale-out).

