August 2012

Volume 27 Number 08

Forecast: Cloudy - Decoupling the Cloud with MEF

By Joseph Fultz | August 2012

A colleague and I have been working on a project over the past several months that leverages the Managed Extensibility Framework (MEF). In this article, we’ll look at how you might use MEF to make a cloud deployment a little more manageable and flexible. MEF (and similar frameworks such as Unity) is the software fabric that frees developers from managing dependency resolution, object creation and instantiation. Now and again you might find yourself writing a factory method or creating dependent objects inside of a constructor or required initialization method, but for the most part such work is no longer necessary thanks to frameworks such as MEF.

By using MEF in our deployment in conjunction with the StorageClient API, we can deploy and make available new classes without recycling or redeploying our Web roles. Moreover, we can deploy updated versions of types into the cloud without a full redeploy and simply recycle the application instead. Note that while we’re using MEF here, following a similar structure using Unity, Castle Windsor, StructureMap or any of the other similar containers should net the same results, with the primary differences being syntax and type registration semantics.

Design and Deployment

As the saying goes: To get a little more out, you have to put a little more in. In this case that requires certain construction standards and some additional work around the deployment. First, if you’re used to using a dependency injection (DI) or composition container, chances are you’re pretty keen on keeping implementation and interface separated within your code. We don’t stray from that goal here: all our concrete class implementations have inheritance that traces back to an interface type. That doesn’t mean every class will directly inherit from an interface, but classes will generally have layers of abstraction that follow a pattern like Interface → Virtual → Concrete.

Figure 1 shows that not only does the primary class I’m interested in have such a chain, but in fact one of its required properties is also abstracted. All of the abstraction makes it easier to replace parts or add additional functionality in the form of a new library that exports the desired contract (in this case the interface). Beyond composition, a nice side effect of being a martinet about abstracting your class design is that it better enables testing via mocked interfaces.

Figure 1 Class Diagram
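A minimal sketch of that layering looks something like the following; the Validate member and the class names are illustrative, not the actual members from our project:

public interface IBusinessRule<T>
{
  // Hypothetical member for illustration
  bool Validate(T entity);
}

public abstract class BusinessRuleBase<T> : IBusinessRule<T>
{
  public abstract bool Validate(T entity);
}

public class CustomerPostalCodeRule : BusinessRuleBase<ICustomer>
{
  public override bool Validate(ICustomer entity)
  {
    // Real rule logic would go here
    return true;
  }
}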

The harder part of the requirement is the change in the deployment model for the application. Because we want to build our catalog of imports and exports at run time, and refresh it without having to deploy again, we have to deploy the binaries that hold our concrete classes outside of the Web role deployment. That also forces a little extra work for the application at startup. Figure 2 depicts the startup work in the Global.asax as it calls into a helper class that we’ve created named MEFContext.

Figure 2 Building the Catalog at Startup

Runtime Composition

Because we’re going to be loading the catalog from files in storage, we’ll have to get those files into our cloud storage container. Therefore, getting the files into the Azure Storage location needs to become part of the deployment process. This is probably most easily done using Azure PowerShell cmdlets (wasp.codeplex.com) and some post-build steps. For our purposes, we’ll manually move the binaries using the Azure Storage Explorer (azurestorageexplorer.codeplex.com).
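If you’d rather script that step, a rough sketch using the same StorageClient API might look like the following; the "plugins" container name and the local bin path are assumptions:

// Post-build sketch: push the rule libraries up to blob storage
CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("plugins");
container.CreateIfNotExist();
foreach (string file in Directory.GetFiles(@"bin\Rules", "*.dll"))
{
  CloudBlob blob = container.GetBlobReference(Path.GetFileName(file));
  blob.UploadFile(file);
}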

We created a project that contains a common diagnostics class, a customer entity and a couple of rule libraries. All of the rule libraries have to inherit from and export an interface of type IBusinessRule<T>, where T represents the entity against which rules are enforced. Here are the important parts of the class declaration for a rule:

[Export(typeof(IBusinessRule<ICustomer>))]
public class CustomerNameRule : IBusinessRule<ICustomer>
{
  [Import(typeof(IDiagnostics))]
  IDiagnostics _diagnostics;
    ...
}

You can see the export as well as the diagnostics dependency that MEF will inject for us when we ask for the rule object. It’s important to know what’s being exported, as that will in turn be the contract by which you resolve the instances you want. The Microsoft .NET Framework 4.5 will bring some enhancements to MEF that loosen some of the constraints currently placed on generics in the container. For example, currently you can register and retrieve something such as IBusinessRule<ICustomer>, but not something like IBusinessRule<T>. Sometimes you want all instances of a type regardless of its actual template type. Currently, the easiest way to accomplish this is to register a string contract name that’s an agreed convention in your project or solution. For our sample, a declaration like the preceding will work.
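For illustration, the string-contract workaround might look something like this; the "BusinessRule" contract name and the rule class are hypothetical:

// Export under an agreed contract name, typed as object so the
// entity type parameter doesn't constrain retrieval
[Export("BusinessRule", typeof(object))]
public class SampleCustomerRule : IBusinessRule<ICustomer>
{
  ...
}

// Later, retrieve every rule regardless of its entity type
var allRules = MEFContainer.GetExportedValues<object>("BusinessRule");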

We have two rules, one for phone number and one for name, and a diagnostics library, each of which will be available through the MEF container. The first thing we have to do is to grab the libraries out of Azure Storage and bring them down to a local resource (local directory) so we can load them with a DirectoryCatalog. To do this, we include a couple of function calls in the Application_Start of the Global.asax:

// Store the local directory for later use (directory catalog)
MEFContext.CacheFolderPath = 
  RoleEnvironment.GetLocalResource("ResourceCache").RootPath.ToLower();
MEFContext.InitializeContainer();

We’re just grabbing the needed resource path, which is configured as part of the Web role, and then calling the method to set up the container. That initialization method in turn calls UpdateFromStorage to get the files and BuildContainer to create the catalog and then the MEF container.
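Pieced together from those calls, the skeleton of our MEFContext helper looks roughly like this (a sketch; only the members named in the text are real):

public static class MEFContext
{
  public static string CacheFolderPath { get; set; }
  public static CompositionContainer MEFContainer { get; set; }

  public static void InitializeContainer()
  {
    UpdateFromStorage();  // Download the binaries from blob storage
    BuildContainer();     // Build the catalog and the MEF container
  }

  private static void UpdateFromStorage() { /* Figures 3 and 4 */ }
  private static void BuildContainer() { /* shown later */ }
}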

The UpdateFromStorage method looks in a predetermined container and iterates over the files in the container, downloading each of them into the local resource folder. The first part of this method is shown in Figure 3.

Figure 3 First Half of UpdateFromStorage

// Could also pull from config, etc.
string containerName = CONTAINER_NAME;
// Using development storage account
CloudStorageAccount storageAccount = 
  CloudStorageAccount.DevelopmentStorageAccount;
// Create the blob client and use it to create the container object
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Note that here is where the container name is passed
// in order to get to the files we want
CloudBlobContainer blobContainer = new CloudBlobContainer(
  storageAccount.BlobEndpoint.ToString() + 
  "/" + containerName,
  blobClient);
// Create the options needed to get the blob list
BlobRequestOptions options = new BlobRequestOptions();
options.AccessCondition = AccessCondition.None;
options.BlobListingDetails = BlobListingDetails.All;
options.UseFlatBlobListing = true;
options.Timeout = new TimeSpan(0, 1, 0);
// Fetch the blob list; Figure 4 iterates over this collection
IEnumerable<IListBlobItem> blobs = blobContainer.ListBlobs(options);

In the first half we set up the storage client to fetch what we need. For this scenario, we’re asking for whatever is there. In cases where you’re bringing files down from storage to a local resource, it might be worth doing a full pass and getting everything. For a more targeted fetch of the files, you could assign an IfMatch condition to the options.AccessCondition property. This would require that ETags be set on the blobs when uploaded. Additionally, you could optimize the update side of rebuilding the MEF container by storing the last update time and applying an AccessCondition of IfModifiedSince.
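As a sketch, either condition is a one-liner on the options object; lastUpdateUtc and knownEtag are assumed values you’d track yourself:

// Only fetch blobs changed since the last container rebuild
options.AccessCondition = AccessCondition.IfModifiedSince(lastUpdateUtc);
// Or, if ETags were recorded at upload time:
// options.AccessCondition = AccessCondition.IfMatch(knownEtag);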

Figure 4 shows the second half of UpdateFromStorage.

Figure 4 Second Half of UpdateFromStorage

// Iterate over the collection
// Grab the files and save them locally
foreach (IListBlobItem item in blobs)
{
  string fileAbsPath = item.Uri.AbsolutePath.ToLower();
  // Just want the file name ...
  fileAbsPath = 
    fileAbsPath.Substring(fileAbsPath.LastIndexOf('/') + 1);
  try
  {
    Microsoft.WindowsAzure.StorageClient.CloudPageBlob pageblob =
      new CloudPageBlob(item.Uri.ToString());
    pageblob.DownloadToFile(MEFContext.CacheFolderPath + fileAbsPath, 
      options);
  }
  catch (Exception)
  {
    // Ignore exceptions; if we can't write, it's because
    // we've already got the file, so move on
  }
}

Once the storage client is ready, we simply iterate over the blob items and download them to the resource. Depending on the conditions and goals of the overall download, you could replicate folder structures locally in this operation or build a folder structure based on convention. Sometimes a folder structure is a requirement to avoid name collisions. We’re just going with the shotgun method and grabbing all of the files and sticking them in one place because we know it’s just two or three DLLs for this sample.

With this, we have the files in place and just need to build the container. In MEF, the composition container is built from one or more catalogs. In this case, we’re going to use a DirectoryCatalog because this makes it easy to simply point the catalog to the directory and load the binaries that are available. Thus, the code to register the types and prepare the container is short and simple:

// Store the container for later use (resolve type instances)
var catalog = new DirectoryCatalog(CacheFolderPath);
MEFContainer = new CompositionContainer(catalog);
MEFContainer.ComposeParts();

Now we’ll run the site and we should see a dump of the types available in the container, as shown in Figure 5.

Figure 5 Initial Exports

We’re not dumping the entire container here, but rather asking specifically for the IDiagnostics interface and then all exports of type IBusinessRule<ICustomer>. As you can see, we have one of each of these prior to uploading a new business rule library into the storage container.
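The query behind that dump is just a couple of container calls, along the lines of this sketch (assuming the container is exposed as MEFContext.MEFContainer):

// Resolve the single diagnostics instance and all customer rules
var diagnostics =
  MEFContext.MEFContainer.GetExportedValue<IDiagnostics>();
var rules =
  MEFContext.MEFContainer.GetExportedValues<IBusinessRule<ICustomer>>();
foreach (var rule in rules)
{
  Response.Write(rule.GetType().FullName + "<br/>");
}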

We’ve placed NewRules.dll into the storage location and now need to get it loaded into the application. Ideally, you want to trigger the container rebuild by doing a little bit of file watching on the storage container. Again, this is easily accomplished with a quick poll using the IfModifiedSince AccessCondition. However, we’ve opted for the more manual process of clicking Update Catalog on our test app. Figure 8 shows the results.

Figure 8 Updated Rules Exports
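If you did want to automate the refresh, a simple poll might look something like the following sketch, comparing LastModifiedUtc directly rather than using an AccessCondition; blobContainer and lastBuildUtc are assumed fields:

// Check whether anything changed since the last container build
private static void PollForChanges(object state)
{
  foreach (IListBlobItem item in blobContainer.ListBlobs())
  {
    CloudBlob blob = item as CloudBlob;
    if (blob == null)
      continue;
    blob.FetchAttributes();
    if (blob.Properties.LastModifiedUtc > lastBuildUtc)
    {
      lastBuildUtc = DateTime.UtcNow;
      MEFContext.InitializeContainer();  // Re-download and rebuild
      break;
    }
  }
}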

We just repeat the steps to create the catalog and initialize the container, and now we have a new rule library to enforce. Note that we haven’t restarted the app or redeployed, but we have new code running in the environment. The only loose end here is that some synchronization method is needed, because we can’t have code trying to use the composition container while we’re replacing the reference:

var catalog = new DirectoryCatalog(CacheFolderPath);
CompositionContainer newContainer = 
  new CompositionContainer(catalog);
newContainer.ComposeParts();
lock(MEFContainer)
{
  MEFContainer = newContainer;
}

The primary reason for building a secondary container and then just replacing the reference is to reduce the lock quantum and return the container to use right away.
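One subtlety: because the reference being swapped is also the object being locked on, the writer and any readers can end up locking different instances. A dedicated synchronization object is a more robust variant (a sketch):

private static readonly object _containerLock = new object();

// Writer: swap in the freshly built container
lock (_containerLock)
{
  MEFContainer = newContainer;
}

// Readers: capture the reference once, then use the local copy
CompositionContainer current;
lock (_containerLock)
{
  current = MEFContainer;
}
var rules = current.GetExportedValues<IBusinessRule<ICustomer>>();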

To further evolve the code base, the next step would be to implement your own custom catalog type—for example, AzureStorageCatalog, as shown in Figure 9. Unfortunately, the current object model doesn’t have a proper interface or an easily reusable base defined, so using a bit of inheritance as well as some encapsulation is probably the best bet. Implementing a class similar to the AzureStorageCatalog listing would enable a simple model of instantiating the custom catalog and using it directly in the composition container.

Figure 9 AzureStorageCatalog

public class AzureStorageCatalog : ComposablePartCatalog
{
  private string _localCatalogDirectory = default(string);
  private DirectoryCatalog _directoryCatalog =
    default(DirectoryCatalog);
  public AzureStorageCatalog(string storageSetting, string containerName)
    : base()
  {
    // Pull the files to the local directory
    _localCatalogDirectory =
      GetStorageCatalog(storageSetting, containerName);
    // Load the exports using an encapsulated DirectoryCatalog
    _directoryCatalog = new DirectoryCatalog(_localCatalogDirectory);
  }
  // Return the encapsulated parts
  public override IQueryable<ComposablePartDefinition> Parts
  {
    get { return _directoryCatalog.Parts; }
  }
  private string GetStorageCatalog(string storageSetting,
    string containerName)
  {
    // Download the blobs to a local folder and return its path
    // (implementation elided, as in the original listing)
    throw new NotImplementedException();
  }
}
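With such a catalog in place, the container setup shown earlier collapses to a couple of lines; the connection-setting and container names here are hypothetical:

var catalog =
  new AzureStorageCatalog("DataConnectionString", "RuleLibraries");
MEFContainer = new CompositionContainer(catalog);
MEFContainer.ComposeParts();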

Updating Existing Functionality

Adding new functionality to our deployment was pretty easy, but we don’t have the same good news for updating existing functionality or libraries. Though the process is better than a complete redeployment, it’s still fairly involved because we have to move the files to storage and the relevant Web roles have to update their local resource folders. However, we’ll also recycle the roles, because we need to unload and reload the AppDomain to refresh the type definitions stored in the container. Even if you load the CompositionContainer and types into a secondary AppDomain and try to load from there, the AppDomain in which you’re requesting the type will load it from previously loaded metadata. The only way around this that we could see would be to send the entities to the secondary AppDomain and add some custom marshaling, rather than using the exported types in the primary AppDomain. That pattern seems problematic to us, as does the double AppDomain itself. Thus, a simpler solution is to recycle the roles after the new binaries are made available.

There’s some good news regarding Azure update domains. Take a look at my February 2012 column, “Azure Deployment Domains” (msdn.microsoft.com/magazine/hh781019), which describes walking the update domains and restarting the instances in each. On the positive side, the site stays up with no need for a full redeployment. However, you could potentially see two different behaviors from the site during the refresh, because some instances will be running the old code while others run the new. This is an acceptable risk, though, because the same would be true during a rolling update if you did a full deployment.

You could configure this to happen within the deployment, but the problem is one of coordination. To do this would require that the restarts of the instances be coordinated, so the instances would either need to elect a leader or have some voting system. Rather than writing some artificial intelligence into the Web roles, we feel the task is more easily handled by a monitoring process and the Azure cmdlets referenced earlier.

There are many reasons to use a framework such as MEF that are beyond the narrow bit of functionality we’ve highlighted here. What we wanted to highlight is that, by using the inherent capabilities of Azure in combination with a composition/DI/Inversion of Control-type framework, you could create a dynamic cloud application that could easily respond to the last-minute changes that always seem to pop up.


Joseph Fultz is a software architect at Hewlett-Packard Co., working as part of the HP.com Global IT group. Previously he was a software architect for Microsoft, working with its top-tier enterprise and ISV customers to define architecture and design solutions.

Chris Mabry is a lead developer at Hewlett-Packard Co. with a current focus on leading a team to deliver a rich UI experience based on service-enabled client frameworks.

Thanks to the following technical expert for reviewing this article: Chris Brooks