January 2010

Volume 25 Number 01

Cloud Patterns - Designing Services for Microsoft Azure

By Arman Kurtagić, Herbjörn Wilhelmsen, Thomas Erl | January 2010

Download the Code Sample

Azure is a new cloud computing platform under development by Microsoft (microsoft.com/windowsazure). Cloud computing allows developers to host applications in an Internet-accessible virtual environment. The environment transparently provides the hardware, software, network and storage needed by the application.

As with other cloud environments, Azure provides a hosted environment for applications. The added benefit of Azure is that .NET Framework applications can be deployed with minimal changes from their desktop siblings.

Applying service-oriented architecture (SOA) patterns and utilizing the experiences collected when implementing service-oriented solutions will be key to success when moving your services and applications into the new arena of cloud computing. To better understand how SOA patterns can be applied to Azure deployments, let’s take a look at a scenario in which a fictional bank moves its services to the cloud.

Cloud Banking

Woodgrove Bank is a small financial institution that has decided to focus on a new online banking initiative branded Woodgrove Bank Online. One of Woodgrove Bank’s most important clients, Fourth Coffee, volunteered to try out the new solution for processing card transactions. A subset of the services planned for the solution is already live, and the availability of these services has generated more interest from other customers. However, as more of the solution’s rollout is planned, challenges emerge.

The first issue pertains to scalability and reliability. Woodgrove Bank never wanted to take responsibility for hosting its IT solutions. Instead, it established a provisioning agreement with a local ISP called the Sesame Hosting Company. To date, Sesame Hosting has fulfilled the Web hosting needs of Woodgrove Bank, but the new card-processing solution has introduced scalability requirements that Sesame Hosting is not prepared to handle.

The Woodgrove Bank technology architecture team suggests redundantly deploying the Woodgrove Bank Online services, as per the Redundant Implementation pattern (descriptions of the patterns discussed here can be found at soapatterns.org). In essence, the pattern suggests an approach whereby services are intentionally deployed redundantly for increased scalability and failover. The Sesame Hosting company investigates this option, but cannot afford to expand its infrastructure in order to accommodate redundant service deployments. It simply doesn’t have the resources or budget to handle the increase in hardware, operational software maintenance and networking appliances that would be required.

The time frame is also a problem. Even if Sesame Hosting could make the necessary infrastructure available, it could not do so in time for Woodgrove Bank to meet its planned rollout schedule. The need to hire and train personnel alone would prolong the infrastructure expansion far beyond Woodgrove Bank’s timetable.

After realizing that Sesame Hosting wouldn’t be able to meet its needs, the Woodgrove Bank team begins to explore the option of hosting its services in a public cloud. Azure provides a way of virtualizing services that naturally applies the Redundant Implementation pattern. This feature of Azure is called On-Demand Application Instance (discussed in the May 2009 issue). This feature, and the ability to use Microsoft datacenters without a long-term commitment, looks promising to the Woodgrove Bank team. Let’s take a closer look at how Woodgrove Bank migrates its solution to Azure.

Deployment Basics

The first order of business is to deploy a Web service by following a contract-first approach that adheres to the Standardized Service Contract principle. The team uses the WSCF.blue tool to generate Windows Communication Foundation (WCF) contracts from WSDL and XSDs that were modeled for optimal interoperability. The service contracts are shown in Figure 1.

Figure 1 The Initial Service Contracts

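In code terms, a contract-first WCF service contract of the kind shown in Figure 1 might look roughly like the following sketch. The namespace URI is an illustrative assumption; the operation and type names are those used later in this article:

using System.ServiceModel;

// Sketch of a contract-first WCF service contract. The namespace is
// illustrative; the operations match names used elsewhere in the article.
[ServiceContract(Namespace = "http://woodgrovebank.example/UserAccount")]
public interface IUserAccountService
{
  [OperationContract]
  TransactionResponse ReliableInsertMoney(
    AccountTransactionRequest accountTransactionRequest);

  [OperationContract]
  TransactionResponse ReliableWithDrawMoney(
    AccountTransactionRequest accountTransactionRequest);
}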

Because services will need to change and evolve over time, the developers also decide to let their data contracts implement the IExtensibleDataObject interface in support of the Forward Compatibility pattern (see Figure 2).

Figure 2 The Initial Data Contracts

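A minimal sketch of such a forward-compatible data contract follows. The members shown are illustrative; the part the pattern requires is the ExtensionData property, where the serializer preserves elements added by future contract versions:

using System;
using System.Runtime.Serialization;

// Sketch of a data contract implementing IExtensibleDataObject. Unknown
// elements from newer contract versions are captured in ExtensionData
// and survive a deserialize/serialize round trip.
[DataContract]
public class UserAccount : IExtensibleDataObject
{
  [DataMember]
  public Guid UserId { get; set; }

  [DataMember]
  public Guid AccountId { get; set; }

  [DataMember]
  public double Balance { get; set; }

  // Required by IExtensibleDataObject; stores unrecognized data.
  public ExtensionDataObject ExtensionData { get; set; }
}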

To store the necessary data, the Woodgrove Bank team wants to use SQL Azure because it already has an existing database structure that the team wants to retain. If the developers were able to use a non-relational store, they might consider Azure Storage instead.

Woodgrove Bank architects proceed to create a Cloud Service project from the Visual Studio template and use Visual Studio to publish it. They then log on to the Azure portal to create their new cloud service (see Figure 3).

Figure 3 Creating a Service in the Azure Portal


Next, they are presented with a screen that allows them to start deploying the service. They click the Deploy button and specify an application package, configuration settings and a deployment name. After a few more clicks, their service is residing in the cloud.

Figure 4 shows an example of the service configuration.

Figure 4 Azure Service Configuration

<Role name="BankWebRole">
  <Instances count="1" />
  <ConfigurationSettings>
    <Setting 
      name="DataConnectionString" 
      value="DefaultEndpointsProtocol=https;AccountName=YOURACCOUNTNAME;AccountKey=YOURKEY" />
    <Setting 
      name="DiagnosticsConnectionString" 
      value="DefaultEndpointsProtocol=https;AccountName=YOURDIAGNOSTICSACCOUNTNAME;AccountKey=YOURKEY" />
  </ConfigurationSettings>
</Role>

The key to making the solution elastic with regard to Woodgrove Bank’s scalability requirements is the following configuration element:

<Instances count="1" />

For example, if the developers want 10 instances, this element would be set to:

<Instances count="10" />

Figure 5 shows the screen that confirms that only one instance is up and running. Clicking the Configure button brings up a screen where they are able to edit the service configuration and change the Instances setting as required.

Figure 5 Instances Running in Azure

Performance and Flexibility

After some stress testing, the Woodgrove Bank development team found that having only one central data store in SQL Azure led to slower and slower response times when traffic increased. The developers decided to address this performance issue by using Azure table storage, which is designed to improve scalability by distributing the partitions across many storage nodes. Azure table storage also provides fast data access because the system monitors usage of the partitions and automatically load-balances them. However, because Azure table storage isn’t a relational data store, the team had to design some new data storage structures and pick a combination of partition and row keys that would provide good response times.

They ended up with three tables as shown in Figure 6. UserAccountBalance will store user account balances. AccountTransactionLogg will be used for storing all transaction messages for specific accounts. The UserAccountTransaction table will be used for storing account transactions. The partition keys for the UserAccountTransaction and AccountTransactionLogg tables were created by concatenating UserId and AccountId because these are a part of all queries and can give quick response times. The partition key for the UserAccountBalance table is UserId and the row key is AccountId. Together they provide a unique identification of a user and his account.

Figure 6 Azure Table Storage Models

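Expressed in code, and assuming the StorageClient library from the current CTP, the UserAccountTransaction entity could be sketched as follows. This is a simplified illustration, not the actual sample class; the row key anticipates the transaction ID discussed later in this article:

using System;
using Microsoft.WindowsAzure.StorageClient;

// Sketch of a table storage entity. The partition key concatenates
// UserId and AccountId because both are part of every query; the row
// key uniquely identifies the transaction within that partition.
public class UserAccountTransaction : TableServiceEntity
{
  public UserAccountTransaction() { }

  public UserAccountTransaction(
    Guid userId, Guid accountId, Guid transactionId)
    : base(string.Format("{0}|{1}", userId, accountId),
           transactionId.ToString()) { }

  public double Amount { get; set; }
  public string Message { get; set; }
}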

Woodgrove Bank considers the project a success thus far and wants more customers to start using the solution. Soon World Wide Importers is ready to join in—though with some new functional requirements.

The request that appears to matter most is that the service interface (or information structure) should be changed. According to World Wide Importers, the information structure that Woodgrove Bank uses is not compatible with theirs. Due to the importance of this particular customer, the Woodgrove Bank development team suggests applying the Data Model Transformation pattern: the developers would create several new services with the interfaces that World Wide Importers requested, and these services would contain logic to translate requests between the World Wide Importers data models and the Woodgrove Bank data models.

To satisfy this requirement, a new structure for the UserAccount is created. The developers are careful to ensure that there is a clear mapping between the UserAccountWwi and UserAccount classes, as shown in Figure 7.

Figure 7 UserAccount Structure for Data Model Transformation


The service contracts need to accept a specific data contract (UserAccountWwi); the service transforms each request to UserAccount before passing the call on to other parts of the solution, and then transforms the result back in the reply. The architects at Woodgrove Bank realize that they can reuse a base service interface when implementing these new requirements. The final design is shown in Figure 8.

Figure 8 Service Contracts for World Wide Importers


The developers choose to implement the data transformations by creating a couple of extension methods for the UserAccount class, including the methods TransformToUserAccountWwi and TransformToUserAccount.

The new service accepts the UserAccountWwi data contract. Prior to sending requests on to other layers, the data is transformed to UserAccount by calling the extension method TransformToUserAccount. Before sending a response back to the consumer, the UserAccount contract is transformed back to UserAccountWwi by calling TransformToUserAccountWwi. For details, see the source code for UserAccountServiceAdvanced in the code download for this article.
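A minimal sketch of those extension methods follows. The individual property mappings shown here are hypothetical placeholders; the real mapping lives in the downloadable source:

// Sketch of the Data Model Transformation extension methods.
// The mapped property names are hypothetical.
public static class UserAccountTransformations
{
  public static UserAccount TransformToUserAccount(
    this UserAccountWwi source) {
    return new UserAccount {
      UserId = source.CustomerId,       // hypothetical mapping
      AccountId = source.AccountNumber, // hypothetical mapping
      Balance = source.CurrentBalance   // hypothetical mapping
    };
  }

  public static UserAccountWwi TransformToUserAccountWwi(
    this UserAccount source) {
    return new UserAccountWwi {
      CustomerId = source.UserId,
      AccountNumber = source.AccountId,
      CurrentBalance = source.Balance
    };
  }
}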

Messaging and Queuing

Although Woodgrove Bank is now up and running and able to facilitate a great number of incoming requests, analysts have noted significant peaks in service usage. Some of these peaks come regularly (specifically, on Monday mornings and Thursday afternoons). However, some fluctuations are unpredictable.

Putting more resources online via Azure configuration would be one easy solution, but now that some large clients such as World Wide Importers are interested in the new services, concurrent usage fluctuations are expected to increase.

The developers at Woodgrove Bank took a closer look at Azure offerings and discovered features that allow for the application of the Reliable Messaging and Asynchronous Queuing patterns. They concluded that Reliable Messaging was not the most suitable choice for communication with customers, as it would restrict the customers’ technical choices; Asynchronous Queuing requires no special technology from the customers, so they focused on that. Inside the Azure cloud, however, Reliable Messaging made perfect sense, since the technology used there is all provided by Microsoft.

The objective is that no message should be lost, even if services are offline due to error conditions or planned maintenance. The Asynchronous Queuing pattern allows this, though it is not suitable for every operation. For example, online card transactions demand a prompt confirmation or denial of a money transfer. In other situations, however, the pattern does fine.

Communication between the Web and Worker roles (see msdn.microsoft.com/magazine/dd727504 for an explanation of these roles) is done with Azure Queues (as of the November CTP version it is possible to communicate directly between role instances), which are by default both asynchronous and reliable. This doesn’t automatically mean that the communication between the end user and Woodgrove Bank’s services is reliable. In fact, the lines of communication between the client and the services residing in the Web role are clearly unreliable. The Woodgrove Bank team decided not to address this because implementing reliability mechanisms all the way down to the customers would in practice require customers to adhere to the same technological choices as Woodgrove Bank. This was considered unrealistic and undesirable.

Putting Queues to Work

As soon as a customer sends a message to UserAccountService, this message is placed in an Azure Queue and the customer receives a confirmation message. UserAccountWorker will then be able to get the message from the queue. Should UserAccountWorker be down, the message will not be lost, as it is stored securely in the queue.

If the processing inside UserAccountWorker goes wrong, the message will not be removed from the queue. To ensure this, the call to the DeleteMessage method of the queue is made only after the work has been completed. If UserAccountWorker didn’t finish processing the message before the timeout elapsed (the timeout is hardcoded to 20 seconds), the message will again be made visible on the queue so that another instance of UserAccountWorker can attempt to process it.

The confirmation message the customer receives is of type TransactionResponse. From the perspective of the customer, Asynchronous Queuing is used. Reliable Messaging is used to communicate between UserAccountStorageAction and AccountStorageWorker, which reside in the Web role and Worker role, respectively. Here’s how the call handler puts messages into the queue:

public TransactionResponse ReliableInsertMoney(
  AccountTransactionRequest accountTransactionRequest) {
  //last parameter (true) means that we want to serialize
  //the message to the queue as XML (serializeAsXml=true)
  return UserAccountHandler.ReliableInsertMoney(
    accountTransactionRequest.UserId, 
    accountTransactionRequest.AccountId, 
    accountTransactionRequest.Amount, true);
}

UserAccountHandler is a property that returns an IUserAccountAction, which is injected at runtime. This makes it easier to separate the implementation from the contract and change the implementation later:

public IUserAccountAction<Models.UserAccount> UserAccountHandler
  { get; set; }
public UserAccountService(
  IUserAccountAction<Models.UserAccount> action) {
  UserAccountHandler = action;
}
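At composition time, the service can then be handed any implementation of the action. For example (UserAccountQueueAction is a hypothetical implementation name, used here only for illustration):

// Hypothetical wiring: the concrete action can be swapped without
// changing UserAccountService itself.
IUserAccountAction<Models.UserAccount> action =
  new UserAccountQueueAction();
var service = new UserAccountService(action);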

After the message is sent to one of the responsible actions, it will be put in the queue. The first method in Figure 9 shows how data can be stored as serializable XML and the second method shows how data can be stored as a string in the queue. Note that there is a limitation in Azure Queues where the maximum message size is 8KB.

Figure 9 Storing Data

public TransactionResponse ReliableHandleMoneyInQueueAsXml( 
  UserAccountTransaction accountTransaction){ 
  TransactionResponse response = new TransactionResponse(); 
  using (MemoryStream m = new MemoryStream()){ 
    XmlSerializer xs = 
      new XmlSerializer(typeof(UserAccountTransaction)); 
    xs.Serialize(m, accountTransaction); 
    try 
    { 
      QueueManager.AccountTransactionsQueue.AddMessage( 
        new CloudQueueMessage(m.ToArray())); 
      response.StatusForTransaction = TransactionStatus.Succeded; 
    } 
    catch(StorageClientException) 
    { 
      response.StatusForTransaction = TransactionStatus.Failed; 
      response.Message = 
        String.Format("Unable to insert message in the account transaction queue userId|AccountId={0}, messageId={1}", 
        accountTransaction.PartitionKey, accountTransaction.RowKey); 
    } 
  } 
  return response; 
} 
public TransactionResponse ReliableHandleMoneyInQueue( 
  UserAccountTransaction accountTransaction){ 
  TransactionResponse response = this.CheckIfTransactionExists( 
    accountTransaction.PartitionKey, accountTransaction.RowKey); 
       
  if (response.StatusForTransaction == TransactionStatus.Proceed) 
  { 
    //userid|accountid is partkey 
    //userid|accountid|transactionid|amount 
    string msg = string.Format("{0}|{1}|{2}", 
      accountTransaction.PartitionKey, 
      accountTransaction.RowKey, 
      accountTransaction.Amount); 
    try 
    { 
      QueueManager.AccountTransactionsQueue.AddMessage( 
        new CloudQueueMessage(msg)); 
      response.StatusForTransaction = TransactionStatus.Succeded; 
    } 
    catch(StorageClientException) 
    { 
      response.StatusForTransaction = TransactionStatus.Failed; 
      response.Message = 
        String.Format("Unable to insert message in the account transaction queue userId|AccountId={0}, messageId={1}", 
        accountTransaction.PartitionKey, accountTransaction.RowKey); 
    } 
  } 
  return response; 
}

The QueueManager class will initialize queues using definitions from the configuration:

CloudQueueClient queueClient = 
  CloudStorageAccount.FromConfigurationSetting(
    "DataConnectionString").CreateCloudQueueClient();

accountTransQueue = queueClient.GetQueueReference(
  Helpers.Queues.AccountTransactionsQueue);
accountTransQueue.CreateIfNotExist();
loggQueue = queueClient.GetQueueReference(
  Helpers.Queues.AccountTransactionLoggQueue);
loggQueue.CreateIfNotExist();

AccountStorageWorker listens for messages on AccountTransactionsQueue and gets them from the queue. To listen for messages, the worker must open the correct queue:

var storageAccount = CloudStorageAccount.FromConfigurationSetting(
  "DataConnectionString");
// initialize queue storage 
CloudQueueClient queueStorage = storageAccount.CreateCloudQueueClient();
accountTransactionQueue = queueStorage.GetQueueReference(
  Helpers.Queues.AccountTransactionsQueue);

After the queue is opened and AccountStorageWorker reads a message, the message becomes invisible in the queue for 20 seconds (the visibility timeout is set to 20 seconds). During that time the worker will try to process the message.

If processing of the message succeeds, the message is deleted from the queue. If processing fails, the message is left in place and becomes visible again once the timeout expires.
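A minimal sketch of the worker’s polling loop ties this together, assuming the StorageClient queue API (the one-second sleep is an illustrative back-off, not taken from the sample):

// Sketch of the Worker role's message loop. GetMessage hides the message
// for 20 seconds; ProcessMessage calls DeleteMessage only on success, so
// a failed or crashed attempt lets the message reappear automatically.
public override void Run()
{
  while (true)
  {
    CloudQueueMessage msg =
      accountTransactionQueue.GetMessage(TimeSpan.FromSeconds(20));
    if (msg != null)
      ProcessMessage(msg);
    else
      Thread.Sleep(1000); // back off while the queue is empty
  }
}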

Processing Messages

The ProcessMessage method first needs to get the content of the message. This can be done in one of two ways. First, the message could be stored as a string in the queue:

//userid|accountid|transactionid|amount
var str = msg.AsString.Split('|');...

Second, the message could be serialized XML:

using (MemoryStream m = 
  new MemoryStream(msg.AsBytes)) {
  if (m != null) {
    XmlSerializer xs = new XmlSerializer(
      typeof(Core.TableStorage.UserAccountTransaction));
    var t = xs.Deserialize(m) as 
      Core.TableStorage.UserAccountTransaction;
    if (t != null) { ....... }
  }
}

Should AccountStorageWorker be down or otherwise unable to process a message, nothing is lost, because the message remains saved in the queue. If processing inside AccountStorageWorker fails, the message is not removed from the queue and becomes visible there again after 20 seconds.

To ensure this behavior, the call to the queue’s DeleteMessage method is made only after the work has been completed. If AccountStorageWorker doesn’t finish processing the message before the timeout elapses, the message is once again made visible on the queue so that another instance of AccountStorageWorker can attempt to process it. The code in Figure 10 works on a message that was stored as a string.

Figure 10 Handling Queued Messages

if (str.Length == 4){
  //userid|accountid|transactionid|amount
  UserAccountSqlAzureAction ds = new UserAccountSqlAzureAction(
    new Core.DataAccess.UserAccountDB("ConnStr"));
  try
  {
    Trace.WriteLine(String.Format("About to insert data to DB:{0}", 
      msg.AsString), "Information");
    ds.UpdateUserAccountBalance(new Guid(str[0]), new Guid(str[1]), 
      double.Parse(str[3]));
    Trace.WriteLine(msg.AsString, "Information");
    accountTransactionQueue.DeleteMessage(msg);
    Trace.WriteLine(String.Format("Deleted:{0}", msg.AsString), 
      "Information");
  }
  catch (Exception ex)
  {
    Trace.WriteLine(String.Format(
      "Failed to insert:{0}, error:{1}", msg.AsString, ex.Message), 
      "Error");
  }
}

Idempotent Capability

What if one of Woodgrove Bank’s customers sends a request to transfer money from one account to another and the message gets lost? If the customer resends the message, it is possible that two or more copies of the request reach the services and get treated separately.

One of the Woodgrove Bank team members immediately identified this scenario as one that requires the Idempotent Capability pattern. This pattern demands that capabilities or operations be implemented in such a way that they are safe to repeat. In short, the solution that Woodgrove Bank wants to implement requires well-behaved clients that attach a unique ID to each request and promise to resend the exact same message, including the same unique ID, in case of a retry. To handle this, the unique ID is saved in Azure table storage. Before processing any request, the service checks whether a message with that ID has already been processed. If it has, a correct reply is created, but the processing associated with the new request does not take place.

Although this means burdening the central data store with extra queries before any other processing can take place, the resulting loss of performance was deemed a reasonable price for meeting Woodgrove Bank’s requirements.

The Woodgrove Bank team updated the methods ReliableInsertMoney and ReliableWithDrawMoney in the IUserAccountAction and their implementations by adding a transaction ID:

TransactionResponse ReliableInsertMoney(
  Guid userId, Guid accountId, Guid transactionId, 
  double amount, bool serializeToQueue);
TransactionResponse ReliableWithDrawMoney(
  Guid userId, Guid accountId, Guid transactionId, 
  double amount, bool serializeToQueue);

The UserAccountTransaction table (Azure Storage) was updated by adding TransactionId as RowKey, so that each insert into the table would have a unique transaction ID.

The responsibility for sending a unique message ID for each unique transaction rests with the client:

WcfClient.Using(new AccountServiceClient(), client =>{ 
  using (new OperationContextScope(client.InnerChannel)) 
  { 
    OperationContext.Current.OutgoingMessageHeaders.MessageId = 
      messageId; 
    client.ReliableInsertMoney(new AccountTransactionRequest { 
      UserId = userId, AccountId = accountId, Amount = 1000 }); 
  } 
});

The helper class used here can be found at soamag.com/I32/0909-4.asp.

The IUserAccountService definition was left unchanged. The only change necessary to implement this functionality is to read the MessageId, sent by the client, from the incoming message headers and use it in the processing behind the scenes (see Figure 11).

Figure 11 Capturing Message IDs

public TransactionResponse ReliableInsertMoney(
  AccountTransactionRequest accountTransactionRequest) {
  var messageId = 
    OperationContext.Current.IncomingMessageHeaders.MessageId;
  Guid messageGuid = Guid.Empty;
  if (messageId.TryGetGuid(out messageGuid))
    //last parameter (true) means that we want to serialize
    //the message to the queue as XML (serializeAsXml=true)
    return UserAccountHandler.ReliableInsertMoney(
      accountTransactionRequest.UserId, 
      accountTransactionRequest.AccountId, messageGuid, 
      accountTransactionRequest.Amount, true);
  else 
    return new TransactionResponse { StatusForTransaction = 
      Core.Types.TransactionStatus.Failed, 
      Message = "MessageId invalid" };      
}

The updated UserAccountAction will now get a transaction ID for each idempotent operation. When the service tries to complete one idempotent operation, it will check to see if the transaction exists in the table storage. If the transaction exists, the service returns the message of the transaction that was stored in the AccountTransactionLogg table. The transaction ID will be saved as RowKey in storage table UserAccountTransaction. To find the correct user and account, the service sends the partition key (userid|accountid). If the transaction ID is not found, the message will be put in the AccountTransactionsQueue for further processing:

public TransactionResponse ReliableHandleMoneyInQueueAsXml(
  UserAccountTransaction accountTransaction) {
  TransactionResponse response = this.CheckIfTransactionExists(
    accountTransaction.PartitionKey, accountTransaction.RowKey);
  if(response.StatusForTransaction == TransactionStatus.Proceed) {
    ...
  }
  return response;
}

The CheckIfTransactionExists method (see Figure 12) is used to ensure that the transaction has not been processed. It will try to find the transaction ID for a specific user account. If the transaction ID is found, the client will get a response message with the details of the already completed transaction.

Figure 12 Checking Transaction Status and ID

private TransactionResponse CheckIfTransactionExists(
  string userIdAccountId, string transId) {
  TransactionResponse transactionResponse = 
    new Core.Models.TransactionResponse();
  var transaction = this.TransactionExists(userIdAccountId, transId);
  if (transaction != null) {
    transactionResponse.Message = 
      String.Format("messageId:{0}, Message={1}, ", 
      transaction.RowKey, transaction.Message);
    transactionResponse.StatusForTransaction = 
      TransactionStatus.Completed;
  }
  else
    transactionResponse.StatusForTransaction = 
      TransactionStatus.Proceed;
  return transactionResponse;
}
private UserAccountTransaction TransactionExists(
  string userIdAccountId, string transId) {
  UserAccountTransaction userAccountTransaction = null;
  using (var db = new UserAccountDataContext()) {
    try {
      userAccountTransaction = 
        db.UserAccountTransactionTable.Where(
        uac => uac.PartitionKey == userIdAccountId && 
        uac.RowKey == transId).FirstOrDefault();
      if (userAccountTransaction != null)
        userAccountTransaction.Message = "Transaction Exists";
    }
    catch (DataServiceQueryException e) {
      HttpStatusCode s;
      if (TableStorageHelpers.EvaluateException(e, out s) && 
        s == HttpStatusCode.NotFound) {
        // this would mean the entity was not found
        userAccountTransaction = null;
      }
    }
  }
  return userAccountTransaction;
}

An interesting aspect of the TransactionExists helper is that if the data you want to find does not exist, Azure table storage returns a 404 HTTP status code (because it uses a REST interface). In that case, the ADO.NET Data Services client library (System.Data.Services.Client) throws an exception, which the code in Figure 12 translates into a null result.

More Info

For more information about the implementation of this proof-of-concept solution, please check the provided source code available online. SOA pattern descriptions are published at soapatterns.org. For questions, contact herbjorn@wilhelmsen.se.


Arman Kurtagić is a consultant focusing on new Microsoft technologies, working at Omegapoint, which provides business-driven, secure IT solutions. He has worked in various roles including developer, architect, mentor and entrepreneur, and has experience in industries such as finance, gaming and media.

Herbjörn Wilhelmsen is a consultant working for Forefront Consulting Group and is based in Stockholm. His main focus areas are service-oriented architecture and business architecture. Wilhelmsen is the chair of the SOA Patterns Review Committee and also currently leads the Business 2 IT group within the Swedish chapter of IASA. He is co-authoring the book SOA with .NET and Azure as part of the Prentice Hall Service-Oriented Computing Series from Thomas Erl.

Thomas Erl is the world’s top-selling SOA author, series editor of the Prentice Hall Service-Oriented Computing Series from Thomas Erl, and editor of SOA Magazine. Erl is the founder of SOA Systems Inc. and the SOASchool.com SOA Certified Professional program. Erl is the founder of the SOA Manifesto working group and is a speaker and instructor for private and public events. For more information, visit thomaserl.com.

Thanks to the following technical expert for reviewing this article: Steve Marx