July 2014

Volume 29 Number 7

Azure Web Sites : Scaling Your Web Application with Azure Web Sites

Yochay Kiriaty

Surprisingly, scale is an often overlooked aspect of Web application development. Typically, the scale of a Web application becomes a concern only when things start to fail and the user experience is compromised by slowness or timeouts at the presentation layer. When a Web application starts exhibiting such performance deficits, it has reached its scalability point—the point at which a lack of resources, such as CPU, memory or bandwidth, rather than a logical bug in the code, hampers its ability to function.

This is the time to scale your Web application and give it extra resources, whether more compute, additional storage or a stronger database back end. The most common form of scaling in the cloud is horizontal scaling—adding compute instances that allow a Web application to run simultaneously on multiple Web servers (instances). Cloud platforms, such as Microsoft Azure, make it very easy to scale the underlying infrastructure that supports your Web application by supplying any number of Web servers, in the form of virtual machines (VMs), at the flip of a switch. However, if your Web application isn’t designed to scale and run across multiple instances, it won’t be able to take advantage of the extra resources and won’t yield the expected results.

This article takes a look at key design concepts and patterns for scaling Web applications. The implementation details and examples focus on Web applications running on Microsoft Azure Web Sites. 

Before I start, it’s important to note that scaling a Web application is very much dependent on the context and on the way your application is architected. The Web application used in this article is simple, yet it touches the fundamentals of scaling a Web application, specifically addressing scale when running on Azure Web Sites.

There are different levels of scale that cater to different business needs. In this article, I’ll look at four different levels of scaling capabilities, from a Web application that can’t run on multiple instances, to one that can scale across multiple instances—even across multiple geographical regions and datacenters.

Step One: Meet the Application

I’m going to start by reviewing the limitations of the sample Web application. This step sets the baseline from which I’ll make the modifications needed to enhance the application’s scalability. I chose to modify an existing application because, in real life, that’s often what you’re asked to do, rather than creating a brand-new application and design from scratch.

The application I’ll use for this article is the WebMatrix Photo Gallery Template for ASP.NET Web Pages (bit.ly/1llAJdQ). This template is a great way to learn how to use ASP.NET Web Pages to create real-world Web applications. It is a fully functional Web application that enables users to create photo albums and upload images. Anyone can view images, and logged-in users can leave comments. The Photo Gallery Web application can be deployed to Azure Web Sites from WebMatrix, or directly from the Azure Portal via the Azure Web Sites Gallery.

Looking closely at the Web application code reveals at least three significant architecture issues that limit the application’s scalability: the use of a local SQL Server Compact database; the use of an In-Process (local Web server memory) session state; and the use of the local file system to store photos.

I’ll review each of these limitations in-depth.

The PhotoGallery.sdf file, found in the App_Data folder, is the default SQL Server Compact database that’s distributed with the application. SQL Server Compact makes it easy to start developing an application and offers a great learning experience, but it also imposes serious restrictions on the application’s ability to scale. A SQL Server Compact database is, essentially, a file in the file system. The Photo Gallery application in its current state can’t safely scale across multiple instances. Trying to scale out can leave you with multiple copies of the database file, one local to each instance and likely out of sync with the others. Even if all Web server instances share the same file system, the database file can get locked by any one instance at different times, causing the other instances to fail.

The Photo Gallery application is also limited by the way it manages a user’s session state. A session is defined as a series of requests issued by the same user within a certain period of time, and is managed by associating a session ID with each unique user. The ID is used for each subsequent HTTP request and is provided by the client, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, a SQL Server database or ASP.NET State Server.

The Photo Gallery application uses the WebMatrix WebSecurity class to manage a user’s login and state, and WebSecurity uses the default ASP.NET membership provider session state. By default, the session state mode of the ASP.NET membership provider is in-process (InProc). In this mode, session state values and variables are stored in memory on a local Web server instance (VM). Having user session state stored locally per Web server limits the ability of the application to run on multiple instances, because subsequent HTTP requests from a single user can end up on different Web server instances. Because each Web server instance keeps its own copy of the state in its own local memory, you can end up with different InProc session state objects on different instances for the same user. This can lead to unexpected and inconsistent user experiences. Here, you can see the WebSecurity class being used to manage a user’s state:

_AppStart.cshtml

@{
  WebSecurity.InitializeDatabaseConnection
    ("PhotoGallery", "UserProfiles", "UserId", "Email", true);
}

Upload.cshtml

@{
  WebSecurity.RequireAuthenticatedUser();
  ...
  ...
}

The WebSecurity class is a helper, a component that simplifies programming in ASP.NET Web Pages. Behind the scenes, the WebSecurity class interacts with an ASP.NET membership provider, which in turn performs the lower-level work required to perform security tasks. The default membership provider in ASP.NET Web Pages is the SimpleMembershipProvider class and, by default, its session state mode is InProc.

Finally, the current version of the Photo Gallery Web application stores photos in the database, each photo as an array of bytes. Essentially, because the application is using SQL Server Compact, photos are saved on the local disk. For a photo gallery application, one of the main scenarios is viewing photos, so the application may need to handle and show a great many photo requests. Reading photos from a database is less than ideal. Even using a more sophisticated database, such as SQL Server or Azure SQL Database, isn’t ideal, mainly because retrieving photos is such an expensive operation.

In short, this version of Photo Gallery is a stateful application, and stateful applications do not scale well across multiple instances.

Now that I’ve explained some of the limitations of the Photo Gallery application with regard to scaling, I’ll address them one by one to improve the application’s scale capabilities. In Step Two, I’ll make the necessary changes to convert the Photo Gallery from stateful to stateless. At the end of Step Two, the updated Photo Gallery application will be able to safely scale and run across multiple Web server instances (VMs).

First, I’ll replace SQL Server Compact with a more powerful database server—Azure SQL Database, a cloud-based service from Microsoft that offers data-storage capabilities as part of the Azure services platform. Azure SQL Database Standard and Premium SKUs offer advanced business continuity features I’ll use in Step Four. For now, I’ll just migrate the database from SQL Server Compact to Azure SQL Database. You can do this easily using the WebMatrix database migration tool, or any other tool you want, to convert the SDF file into the Azure SQL Database format.

As long as I’m already migrating the database, it’s a good opportunity to make some schema modifications that, though small, will have significant impact on the application’s scaling capabilities.

First, I’ll convert the ID column type of some of the tables (Galleries, Photos, UserProfiles and so on) from INT to GUID. This change will prove useful in Step Four, when I update the application to run across multiple regions and need to keep the database and photo content in sync. It’s important to note that this modification doesn’t force any code changes on the application; all SQL queries in the application remain the same.

Next, I’ll stop storing photos as arrays of bytes in the database. This change involves both schema and code modifications. I’ll remove the FileContents and FileSize columns from the Photos table, store photos directly to disk, and use the photo ID, which is now a GUID, as means of distinguishing photos.

The following code snippet shows the INSERT statement before the change (note that both fileBytes and fileBytes.Length are stored directly in the database):

db.Execute(@"INSERT INTO Photos
  (Id, GalleryId, UserName, Description, FileTitle, FileExtension,
  ContentType, FileSize, UploadDate, FileContents, Likes)
  VALUES (@0, @1, @2, @3, @4, @5, @6, @7, @8, @9, @10)",
  guid.ToString(), galleryId, Request.GetCurrentUser(Response), "",
  fileTitle, fileExtension, fileUpload.ImageFormat, fileBytes.Length,
  DateTime.Now, fileBytes, 0);

And here’s the code after the database changes:

using (var db = Database.Open("PhotoGallery"))
{
  db.Execute(@"INSERT INTO Photos
  (Id, GalleryId, UserName, Description, FileTitle, FileExtension,
  UploadDate, Likes)
  VALUES (@0, @1, @2, @3, @4, @5, @6, @7)", imageId, galleryId,
  userName, "", imageId, extension,
  DateTime.UtcNow, 0);
}

In Step Three, I’ll explore in more detail how I modified the application. For now, suffice it to say the photos are saved to a central location, such as a shared disk, that all Web server instances can access.

The last change I’ll make in Step Two is to stop using the InProc session state. As noted earlier, WebSecurity is a helper class that interacts with ASP.NET membership providers. By default, ASP.NET SimpleMembership session state mode is InProc. There are several out-of-process options you can use with SimpleMembership, including SQL Server and ASP.NET State Server service. These two options enable session state to be shared among multiple Web server instances and avoid server affinity; that is, they don’t require the session to be tied to one specific Web server.

My approach also manages state out of process, specifically using a database and a cookie. However, I rely on my own implementation rather than ASP.NET’s, mainly because I want to keep things simple. The implementation uses a cookie and stores the session ID and its state in the database. Once a user logs in, I assign a new GUID as a session ID, which I store in the database. That GUID is also returned to the user in the form of a cookie. The following code shows the CreateNewUser method, which is called every time a user logs in:

private static string CreateNewUser()
{
  var newUser = Guid.NewGuid();
  var db = Database.Open("PhotoGallery");
  db.Execute(@"INSERT INTO GuidUsers (UserName, TotalLikes) VALUES (@0, @1)",
    newUser.ToString(), 0);
  return newUser.ToString();
}

When responding to an HTTP request, the GUID is embedded in the HTTP response as a cookie. The username passed to the AddUser method is the product of the CreateNewUser function just shown, like so:

public static class ResponseExtensions
{
  public static void AddUser(this HttpResponseBase response, 
    string userName)
  {
    var userCookie = new HttpCookie("GuidUser")
    {
      Value = userName,
      Expires = DateTime.UtcNow.AddYears(1)
    };
    response.Cookies.Add(userCookie);
  }
}

When handling an incoming HTTP request, first I try to extract the user ID, represented as a GUID, from the GuidUser cookie. Next, I look for that user ID (GUID) in the database and extract any user-specific information. Figure 1 shows part of the GetCurrentUser implementation.

Figure 1 GetCurrentUser

public static string GetCurrentUser(this HttpRequestBase request,
  HttpResponseBase response = null)
{
  string userName;
  try
  {
    if (request.Cookies["GuidUser"] != null)
    {
      userName = request.Cookies["GuidUser"].Value;
      var db = Database.Open("PhotoGallery");
      var guidUser = db.QuerySingle(
        "SELECT * FROM GuidUsers WHERE UserName = @0", userName);
      if (guidUser == null || guidUser.TotalLikes > 5)
      {
        userName = CreateNewUser();
      }
    }
    ...
    ...
}

Both CreateNewUser and GetCurrentUser are part of the RequestExtensions class. Similarly, AddUser is part of the ResponseExtensions class. Both classes plug into the ASP.NET request-processing pipeline, handling requests and responses, respectively.

My approach to managing session state is a rather naïve one, as it isn’t secure and doesn’t enforce any authentication. However, it shows the benefit of managing sessions out of process, and it scales. When you implement your own session-state management, whether or not you base it on ASP.NET, make sure you use a secure solution that includes authentication and a secure way to encrypt the cookie you return.
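To make that concrete, here’s a minimal sketch of one way to make the cookie value tamper-resistant by signing it with HMAC-SHA256. The CookieSigner class is hypothetical, not part of the Photo Gallery application; in practice, the key should come from secure configuration and the comparison should be constant-time:

using System;
using System.Security.Cryptography;
using System.Text;

public static class CookieSigner
{
  // Hypothetical secret key; load it from secure configuration in practice.
  private static readonly byte[] Key =
    Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

  // Returns "value|signature" suitable for storing in the GuidUser cookie.
  public static string Sign(string value)
  {
    using (var hmac = new HMACSHA256(Key))
    {
      return value + "|" + Convert.ToBase64String(
        hmac.ComputeHash(Encoding.UTF8.GetBytes(value)));
    }
  }

  // Returns the original value if the signature checks out, or null if not.
  public static string Validate(string signedValue)
  {
    var separatorIndex = signedValue.LastIndexOf('|');
    if (separatorIndex < 0)
      return null;
    var value = signedValue.Substring(0, separatorIndex);
    return Sign(value) == signedValue ? value : null;
  }
}

With a helper like this, AddUser would store CookieSigner.Sign(userName) in the cookie, and GetCurrentUser would call CookieSigner.Validate before trusting the incoming value.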

At this point, I can safely claim the updated Photo Gallery application is now a stateless Web application. By replacing the local SQL Server Compact database with Azure SQL Database, and changing the session state implementation from InProc to out of process, using a cookie and a database, I successfully converted the application from stateful to stateless, as illustrated in Figure 2.

Figure 2 Logical Representation of the Modified Photo Gallery Application

Taking the necessary steps to ensure your Web application is stateless is probably the most meaningful task during the development of a Web application. The ability to safely run across multiple Web server instances, with no concerns about user state, data corruption or functional correctness, is one of the most important factors in scaling a Web application.

Step Three: Additional Improvements That Go a Long Way

Changes made to the Photo Gallery Web application in Step Two ensure the application is stateless and can safely scale across multiple Web server instances. Now, I’ll make some additional improvements that further enhance the application’s scalability, enabling it to handle larger loads with fewer resources. In this step, I’ll review storage strategies and address async design patterns that improve both performance and the UX.

One of the changes discussed in Step Two was saving photos to a central location, such as a shared disk that all Web server instances could access, rather than to a database. Azure Web Sites architecture ensures that all instances of a Web application running across multiple Web servers share the same disk, as Figure 3 illustrates.

Figure 3 With Microsoft Azure Web Sites, All Instances of a Web Application See the Same Shared Disk

From the Photo Gallery Web application perspective, “shared disk” means when a photo gets uploaded by a user, it’s saved to the …/uploaded folder, which looks like a local folder. However, when the image is written to disk, it isn’t saved “locally” on the specific Web server that handles the HTTP request, but instead saved to a central location that all Web servers can access. Therefore, any server can write any photo to the shared disk and all other Web servers can read that image. The photo metadata is stored in the database and used by the application to read the photo ID—a GUID—and return the image URL as part of the HTML response. The following code snippet is part of view.cshtml, which is the page I use to enable viewing images:

<img class="large-photo" src="@ImagePathHelper.GetFullImageUrl(photoId,
  photo.FileExtension.ToString())" alt="@Html.AttributeEncode(photo.FileTitle)" />

The source of the image HTML element is populated by the return value of the GetFullImageUrl helper function, which takes a photo ID and file extension (.jpg, .png and so on) and returns a string representing the URL of the image.

Saving the photos to a central location ensures that the Web application is stateless. However, with the current implementation, a given image is served directly from one of the Web servers running the Web application. Specifically, the URL of each image source points to the Web application’s URL, so the actual bytes of the image are sent as an HTTP response from one of those Web servers. This means your Web server, on top of handling dynamic Web pages, also serves static content, such as images. Web servers can serve a lot of static content at great scale, but doing so imposes a toll on resources, including CPU, IO and memory. If you could ensure static content, such as photos, is served not from the Web servers running your Web application but from somewhere else, you could reduce the number of HTTP requests hitting those servers. By doing so, you’d free resources on the Web servers to handle more dynamic HTTP requests.

The first change to make is to use Azure Blob storage (bit.ly/TOK3yb) to store and serve user photos. When a user asks to view an image, the URL returned by the updated GetFullImageUrl points to an Azure Blob. The end result looks like the following HTML, where the image URL points to Blob storage:

<img class="large-photo" alt="764beb6b-1988-42d7-9900-03ee8a60749b"
  src="https://photogalcontentwestus.blob.core.windows.net/
  full/764beb6b-1988-42d7-9900-03ee8a60749b.jpg">

This means an image is served directly from the Blob storage and not from the Web servers running the Web application.

In contrast, the following shows photos saved to an Azure Web Sites shared disk:

<img class="large-photo" alt="764beb6b-1988-42d7-9900-03ee8a60749b"
  src="https:// builddemophotogal2014.websites.net/
  full/764beb6b-1988-42d7-9900-03ee8a60749b.jpg">

The Photo Gallery Web application uses two containers, full and thumbnail. As you’d expect, full stores photos in their original sizes, while thumbnail stores the smaller images that are shown in the gallery view. Here’s the updated GetFullImageUrl helper:

public static string GetFullImageUrl(string imageId, 
  string imageExtension)
{
  return String.Format("{0}/full/{1}{2}",
    Environment.ExpandEnvironmentVariables("%AZURE_STORAGE_BASE_URL%"),
    imageId, imageExtension);
}

AZURE_STORAGE_BASE_URL is an environment variable that contains the base URL for the Azure Blob storage account, in this case https://photogalcontentwestus.blob.core.windows.net. This environment variable can be set in the Azure Portal at the Site Config tab, or it can be part of the application’s web.config. Setting environment variables from the Azure Portal gives you more flexibility, however, because they’re easier to change without redeploying.

Azure Storage is used in much the same way as a content delivery network (CDN), mainly because HTTP requests for images aren’t being served from the application’s Web servers, but directly from an Azure Storage container. This substantially reduces the amount of static HTTP request traffic that ever reaches your Web servers, allowing the Web servers to handle more dynamic requests. Note also that Azure Storage can handle far more traffic than your average Web server—a single container can scale to serve many tens of thousands of requests per second.

In addition to using Blob storage for static content, you can also add the Microsoft Azure CDN. Adding a CDN on top of your Web application further improves performance, as the CDN will serve all static content. Requests for a photo already cached on the CDN won’t reach the Blob storage. Moreover, a CDN also enhances perceived performance, as the CDN typically has an edge server closer to the end customer. The details of adding a CDN to the sample application are beyond the scope of this article, as the changes are mostly around DNS registration and configuration. But when you’re addressing production at scale and you want to make sure your customers will enjoy a quick and responsive UI, you should consider using the CDN.

I haven’t reviewed the code that handles a user’s uploaded images, but this is a great opportunity to address a basic async pattern that improves both Web application performance and UX. This will also help data synchronization between two different regions, as you’ll see in Step Four.

The next change I’ll make to the Photo Gallery Web application is to add an Azure Storage Queue, as a way to separate the front end of the application (the Web site) from the back-end business logic (WebJob + database). Without a Queue, the Photo Gallery code handled both front end and back end, as the upload code saved the full-size image to storage, created a thumbnail and saved it to storage, and updated the SQL Server database. During that time, the user waited for a response. With the introduction of an Azure Storage Queue, however, the front end just writes a message to the Queue and immediately returns a response to the user. A background process, WebJob (bit.ly/1mw0A3w), picks up the message from the Queue and performs the required back-end business logic. For Photo Gallery, this includes manipulating images, saving them to the correct location and updating the database. Figure 4 illustrates the changes made in Step Three, including using Azure Storage and adding a Queue.

Figure 4 Logical Representation of Photo Gallery Post Step Three

Now that I have a Queue, I need to change the upload.cshtml code. In the following code you can see that instead of performing complicated business logic with image manipulation, I use StorageHelper to enqueue a message (the message includes the photo ID, the photo file extension and gallery ID):

var file = Request.Files[i];
var fileExtension = Path.GetExtension(file.FileName).Trim();
guid = Guid.NewGuid();
using (var fileStream = new FileStream(
  Path.Combine(HostingEnvironment.MapPath("~/App_Data/Upload/"),
  guid + fileExtension), FileMode.Create))
{
  file.InputStream.CopyTo(fileStream);
  StorageHelper.EnqueueUploadAsync(
    Request.GetCurrentUser(Response),
    galleryId, guid.ToString(), fileExtension);
}

StorageHelper.EnqueueUploadAsync simply creates a CloudQueueMessage and asynchronously uploads it to the Azure Storage Queue: 

public static Task EnqueueUploadAsync
  (string userName, string galleryId, string imageId, 
    string imageExtension)
{
  return UploadQueue.AddMessageAsync(
    new CloudQueueMessage(String.Format("{0}, {1}, {2}, {3}",
    userName, galleryId, imageId,
    imageExtension)));
}
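The article doesn’t show how StorageHelper sets up UploadQueue itself. Here’s a minimal sketch of how it might be initialized with the Azure Storage client library; the "StorageConnection" connection string name is an assumption:

using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class StorageHelper
{
  // Shared reference to the uploadqueue Queue used by EnqueueUploadAsync.
  private static readonly CloudQueue UploadQueue = InitializeUploadQueue();

  private static CloudQueue InitializeUploadQueue()
  {
    // "StorageConnection" is a hypothetical connection string name in web.config.
    var account = CloudStorageAccount.Parse(
      ConfigurationManager.ConnectionStrings["StorageConnection"].ConnectionString);
    var queue = account.CreateCloudQueueClient().GetQueueReference("uploadqueue");
    queue.CreateIfNotExists(); // Create the Queue on first use if needed.
    return queue;
  }
}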

WebJob is now responsible for the back-end business logic. The new WebJobs feature of Azure Web Sites provides an easy way to run programs such as services or background tasks on a Web site. The WebJob listens for changes on the Queue and picks up any new message. The ProcessUploadQueueMessages method, shown in Figure 5, is called whenever there’s at least one message in the Queue. The QueueInput attribute is part of the Microsoft Azure WebJobs SDK (bit.ly/1cN9eCx), a framework that simplifies the task of adding background processing to Azure Web Sites. The WebJobs SDK is out of scope for this article, but all you really need to know is that it lets you easily bind to a Queue, in my case uploadqueue, and listen for incoming messages.

Figure 5 Reading a Message from a Queue and Updating the Database

public static void ProcessUploadQueueMessages(
  [QueueInput("uploadqueue")] string queueMessage, IBinder binder)
{
  var split = queueMessage
    .Split(',').Select(m => m.Trim()).ToArray();
  var userName = split[0];
  var galleryId = split[1];
  var imageId = split[2];
  var extension = split[3];
  var filePath = Path.Combine(ImageFolderPath,
    imageId + extension);
  UploadFullImage(filePath, imageId + extension, binder);
  UploadThumbnail(filePath, imageId + extension, binder);
  SafeGuard(() => File.Delete(filePath));
  using (var db = Database.Open("PhotoGallery"))
  {
    db.Execute(@"INSERT INTO Photos (Id, GalleryId, UserName,
      Description, FileTitle, FileExtension, UploadDate, Likes)
      VALUES (@0, @1, @2, @3, @4, @5, @6, @7)", imageId,
      galleryId, userName, "", imageId, extension, DateTime.UtcNow, 0);
  }
}

Each message is decoded by splitting the input string into its individual parts. Next, the method calls two helper functions to manipulate and upload the images to the Blob containers. Last, the database is updated.
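UploadFullImage appears in Step Four; UploadThumbnail isn’t shown in the article, but a plausible sketch of it might look like the following, assuming System.Drawing is used for the resize and a fixed thumbnail width (IBinder and BlobOutputAttribute come from the WebJobs SDK, as in the article’s code):

private static void UploadThumbnail(
  string imagePath, string blobName, IBinder binder)
{
  const int thumbnailWidth = 200; // Assumed thumbnail width.
  using (var original = System.Drawing.Image.FromFile(imagePath))
  {
    // Preserve the aspect ratio when computing the thumbnail height.
    var thumbnailHeight = original.Height * thumbnailWidth / original.Width;
    using (var thumbnail = new System.Drawing.Bitmap(
      original, thumbnailWidth, thumbnailHeight))
    using (var outputStream = binder.Bind<Stream>(new BlobOutputAttribute(
      String.Format("thumbnail/{0}", blobName))))
    {
      // Write the resized image to the thumbnail container in its original format.
      thumbnail.Save(outputStream, original.RawFormat);
    }
  }
}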

At this point, the updated Photo Gallery Web application can handle many millions of HTTP requests per day.

Step Four: Global Reach

I’ve already made tremendous improvements to the scaling ability of the Photo Gallery Web application. As I noted, the application can now handle many millions of HTTP requests using only a few large servers on Azure Web Sites. Currently, all these servers are located in a single Azure datacenter. While running from a single datacenter isn’t exactly a scale limitation—at least not by the standard definition of scale—if your customers from around the globe require low latency, you’ll need to run your Web application from more than one datacenter. This also improves your Web application’s durability and business-continuity capabilities. In the rare case that one datacenter experiences an outage, your Web application will continue to serve traffic from the second location.

In this step, I’ll make the changes to the application that enable it to run across multiple datacenters. For this article, I’ll focus on running from two locations in an active-active mode, where the application in both datacenters allows the user to view photos, as well as upload photos and comments.

Keep in mind that because I understand the context for the Photo Gallery Web application, I know the majority of user operations are read operations, showing photos. Only a small number of user requests involve uploading new photos or updating comments. For the Photo Gallery Web application, I can safely state the read/write ratio is at least 95 percent reads. This allows me to make some assumptions, for example, that having eventual consistency across the system is acceptable, as is a slower response for write operations.

It’s important to understand that these assumptions are context-dependent; they reflect the specific characteristics of a given application and will most likely change from one application to another.

Surprisingly, the amount of work required to run Photo Gallery from two different locations is small, as most of the heavy lifting was done in Step Two and Step Three. Figure 6 shows a high-level block diagram of the application topology running from two different datacenters. The application in West U.S. is the “main” application and basically has the output of Step Three. The application in East U.S. is the “secondary” site, and the Azure Traffic Manager is placed on top of both. Azure Traffic Manager has several configuration options. I’ll use the Performance option, which causes Traffic Manager to monitor both sites for latency in their respective regions and route traffic based on the lowest latency. In this case, customers from New York (east coast) will be directed to the East U.S. site and customers from San Francisco (west coast) will be directed to the West U.S. site. Both sites are active at the same time, serving traffic. Should the application in one region experience performance issues, for whatever reason, the Traffic Manager will route traffic to the other application. Because the data is synced, no data should be lost.

Figure 6 Logical Representation of Photo Gallery Post Step Four

I’ll look at the changes to the West U.S. application. The only code change is to the WebJob listening for messages in the Queue. Instead of saving photos to one Blob, the WebJob saves photos to the local and “remote” Blob store. In Figure 5, UploadFullImage is a helper method that saves photos to Blob storage. To enable copying a photo to a remote Blob as well as a local Blob, I added the ReplicateBlob helper function at the end of UploadFullImage, as you can see here:

private static void UploadFullImage(
  string imagePath, string blobName, IBinder binder)
{
  using (var fileStream = new FileStream(imagePath, FileMode.Open))
  {
    using (var outputStream =
      binder.Bind<Stream>(new BlobOutputAttribute(
      String.Format("full/{0}", blobName))))
    {
      fileStream.CopyTo(outputStream);
    }
  }
  RemoteStorageManager.ReplicateBlob("full", blobName);
}

The ReplicateBlob method in the following code has one important line—the last line calls the StartCopyFromBlob method, which asks the service to copy all the contents, properties and metadata of a Blob to a new Blob (I let the Azure SDK and storage service take care of the rest):

public static void ReplicateBlob(string container, string blob)
{
  if (sourceBlobClient == null || targetBlobClient == null)
    return;
  var sourceContainer = 
    sourceBlobClient.GetContainerReference(container);
  var targetContainer = 
    targetBlobClient.GetContainerReference(container);
  if (targetContainer.CreateIfNotExists())
  {
    targetContainer.SetPermissions(sourceContainer.GetPermissions());
  }
  var targetBlob = targetContainer.GetBlockBlobReference(blob);
  targetBlob.StartCopyFromBlob(sourceContainer.GetBlockBlobReference(blob));
}
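Note that StartCopyFromBlob starts a server-side copy that completes asynchronously, so the call returns before the copy finishes. If you ever need to confirm completion, a hypothetical helper like the following could poll the target Blob’s copy state:

private static void WaitForCopy(CloudBlockBlob targetBlob)
{
  // Refresh the Blob's properties, then poll until the service-side
  // copy is no longer pending.
  targetBlob.FetchAttributes();
  while (targetBlob.CopyState != null &&
    targetBlob.CopyState.Status == CopyStatus.Pending)
  {
    System.Threading.Thread.Sleep(500);
    targetBlob.FetchAttributes();
  }
}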

In East U.S., the ProcessUploadQueueMessages method doesn’t process anything; it simply pushes the message to the West U.S. Queue. The message will be processed in West U.S., images will be replicated as explained earlier, and the database will get synced, as I’ll explain next.
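To illustrate, the East U.S. handler might look something like this sketch, where WestUsUploadQueue is assumed to be a CloudQueue reference configured with the West U.S. storage account’s connection string:

public static void ProcessUploadQueueMessages(
  [QueueInput("uploadqueue")] string queueMessage)
{
  // Don't process locally; forward the raw message to the West U.S. Queue,
  // where the main WebJob performs image processing and database updates.
  WestUsUploadQueue.AddMessage(new CloudQueueMessage(queueMessage));
}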

This is the last missing piece of magic—synchronizing the database. To achieve this, I’ll use the Active Geo-Replication (continuous copy) preview feature of Azure SQL Database. With this feature, you can have secondary read-only replicas of your master database. Data written to the master database is automatically copied to the secondary databases. The master is configured as a read-write database and all secondary databases are read-only, which is why, in my scenario, messages are pushed from the East U.S. Queue to the West U.S. Queue. Once you configure Active Geo-Replication (via the Portal), the databases will be in sync. No code changes are required beyond what I’ve already covered.

Wrapping Up

Microsoft Azure lets you build Web applications that scale with very little effort. In this article, I showed how, in just a few steps, you can take a Web application that can’t scale at all because it can’t run on multiple instances, and turn it into one that runs not only across multiple instances, but across multiple regions, handling tens of millions of HTTP requests a day. The examples are specific to this particular application, but the concepts are general and can be applied to any Web application.


Yochay Kiriaty is a principal program manager lead on the Microsoft Azure team, working on Azure Web Sites. Reach him at yochay@microsoft.com and follow him on Twitter at twitter.com/yochayk.

Thanks to the following Microsoft technical expert for reviewing this article: Mohamed Ameen Ibrahim