January 2009

Volume 24 Number 01

Foundations - Easily Apply Transactions To Services

By Juval Lowy | January 2009

Code download available


State Management and Transactions
Per-Call Transactional Services
Instance Management and Transactions
Session-Based Services and VRMs
Transactional Durable Services
Transactional Behavior
Adding Context to the IPC Binding
InProcFactory and Transactions

A fundamental problem in programming is error recovery. After an error, your application must restore itself to the state it had before the error took place. Consider an application that tries to perform an operation comprising several smaller operations, potentially concurrently, where each of the individual operations can fail or succeed independently of the others. An error in any one of the smaller operations means the system is at an inconsistent state.

Take a banking application, for example, that transfers funds between two accounts by crediting one account and debiting the other. Successfully debiting one account but failing to credit the other leaves an inconsistent state, because the funds cannot be in both places at the same time; failing to debit while successfully crediting results in an equally inconsistent state in which the money is gone. It is always up to the application to recover from the error by restoring the system to its original state.

This is far easier said than done, for a number of reasons. First, for a large operation, the sheer number of permutations of partial success and partial failure quickly gets out of hand. This results in fragile code that is very expensive to develop and maintain and quite often does not really work, since developers tend to deal only with the easy recovery cases, the ones they are both aware of and know how to handle. Second, your composite operation could be part of a much larger operation, and even if your code executes flawlessly, you may still have to undo it if something outside your control encounters an error. This implies tight coupling between the participating parties for the management and coordination of the operations. Finally, you also need to isolate your work from anyone else interacting with the system, because if you later recover from an error by rolling back some of your actions, you will implicitly put someone else in an error state.

As you can see, it is practically impossible to write robust error-recovery code by hand. This realization is not new. Ever since software was first used in business contexts (in the 1960s), it was clear there had to be a better way of managing recovery. There is a better way: transactions. A transaction is a set of operations in which the failure of any individual operation causes the entire set to fail as one atomic operation. When using transactions, there is no need to write recovery logic, because there is nothing to recover: either all the operations succeeded, so there is nothing to recover from, or they all failed without affecting the system's state, so again there is nothing to recover.

When using transactions, it is essential to use transactional resource managers. A resource manager is capable of rolling back all changes made during the transaction if the transaction aborts, and of persisting the changes if the transaction commits. The resource manager also provides isolation; that is, while a transaction is in progress, the resource manager prevents all parties outside the transaction from accessing it and seeing changes that could still roll back. This also means that a transaction should never access resources that are not resource managers, since any changes made to those will not roll back if the transaction aborts, and recovery will again be necessary.
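To make the enlistment mechanics concrete, here is a minimal sketch of a volatile resource manager built on System.Transactions. The VolatileValue&lt;T&gt; type and all its member names are my own illustrative choices (this is not the article's Transactional&lt;T&gt; implementation), and for simplicity it tracks one transaction at a time and provides no locking between concurrent transactions:

```csharp
using System.Transactions;

//A minimal volatile resource manager sketch: hypothetical names,
//single-transaction, no isolation between concurrent transactions
public class VolatileValue<T> : IEnlistmentNotification
{
    T m_Committed;             //last committed value
    T m_Tentative;             //value being changed in the current transaction
    Transaction m_Transaction; //the transaction we are enlisted in, if any

    public VolatileValue(T value)
    {
        m_Committed = value;
        m_Tentative = value;
    }
    public T Value
    {
        get
        {
            Enlist();
            return Transaction.Current != null ? m_Tentative : m_Committed;
        }
        set
        {
            Enlist();
            if(Transaction.Current != null)
            {
                m_Tentative = value; //keep the change tentative
            }
            else
            {
                m_Committed = value; //no ambient transaction: apply directly
            }
        }
    }
    void Enlist()
    {
        Transaction current = Transaction.Current;
        if(current == null || current == m_Transaction)
        {
            return; //no ambient transaction, or already enlisted
        }
        m_Transaction = current;
        m_Tentative = m_Committed;
        current.EnlistVolatile(this,EnlistmentOptions.None);
    }
    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared(); //vote to commit
    }
    public void Commit(Enlistment enlistment)
    {
        m_Committed = m_Tentative;      //promote the tentative value
        m_Transaction = null;
        enlistment.Done();
    }
    public void Rollback(Enlistment enlistment)
    {
        m_Tentative = m_Committed;      //discard the tentative value
        m_Transaction = null;
        enlistment.Done();
    }
    public void InDoubt(Enlistment enlistment)
    {
        m_Transaction = null;
        enlistment.Done();
    }
}
```

Disposing a scope without calling Complete aborts the transaction, and the Rollback notification discards the tentative value; completing the scope promotes the tentative value to the committed one.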

Traditionally, resource managers were durable resources such as databases and message queues. However, in the article "Can't Commit?: Volatile Resource Managers in .NET Bring Transactions to the Common Type" in the May 2005 issue of MSDN Magazine, I presented my technique for implementing a general-purpose volatile resource manager (VRM) called Transactional<T>:

public class Transactional<T> : ...
{
    public Transactional(T value);
    public Transactional();

    public T Value
    {get;set;}

    /* Conversion operators to and from T */
}

By specifying any serializable type parameter (such as an int or a string) to Transactional<T>, you turn that type into a full-blown volatile resource manager that auto-enlists in the ambient transaction, commits or rolls back the changes according to the outcome of the transaction, and isolates the current changes from all other transactions.

Figure 1 demonstrates the use of Transactional<T>. Since the scope is not completed, the transaction aborts, and the values of number and city revert to their pre-transaction state.

Figure 1 Using Transactional<T>

Transactional<int> number = new Transactional<int>(3);
Transactional<string> city = new Transactional<string>("New York, ");

city.Value += "NY"; //Can use with or without transactions

using(TransactionScope scope = new TransactionScope())
{
    city.Value = "London, ";
    city.Value += "UK";

    number.Value = 4;
    number.Value++;
}
Debug.Assert(number == 3); //Conversion operators at work
Debug.Assert(city == "New York, NY");

In addition to Transactional<T>, I have also provided a transactional array as well as transactional versions of all the collections in System.Collections.Generic, such as TransactionalDictionary<K,T>. These collections are polymorphic with their non-transactional cousins and are used in exactly the same way.

State Management and Transactions

The sole purpose of transactional programming is to leave the system in a consistent state. In the case of Windows Communication Foundation (WCF), the state of the system consists of the resource managers plus the in-memory state of the service instances. While the resource managers will automatically manage their state as a product of the transaction's outcome, that is not the case with in-memory objects or static variables.

The solution to this state management problem is to develop the service as a state-aware service that proactively manages its state. Between transactions, the service stores its state in a resource manager. At the beginning of each transaction, the service retrieves its state from the resource manager, and by doing so enlists the resource manager in the transaction. At the end of the transaction, the service saves its state back to the resource manager. This technique elegantly provides for state auto-recovery: any changes made to the instance state will commit or roll back as part of the transaction.

If the transaction commits, the next time the service gets its state it will have the post-transaction state. If the transaction aborts, the next time it will have its pre-transaction state. Either way, the service will have a consistent state ready to be accessed by a new transaction.

There are two remaining problems in writing transactional services. The first is how the service can know when transactions start and end, so that it can get and save its state. The service may be part of a much larger transaction that spans multiple services and machines. At any moment between calls, the transaction might end. Who will call the service to let it know to save its state? The second problem has to do with isolation. Different clients might call the service concurrently, on different transactions. How can the service isolate changes made to its state by one transaction from another transaction? If the other transaction were to access its state and operate on those values, that transaction would be inconsistent if the original transaction aborted and the changes rolled back.

The solution to both problems is to equate method boundaries with transaction boundaries. At the beginning of every method call, the service should read its state from the resource manager, and at the end of each method call, the service should save its state to the resource manager. Doing so ensures that if a transaction ends between method calls, the service's state will either persist or roll back with it. In addition, reading and storing the state in the resource manager in each method call addresses the isolation challenge because the service simply lets the resource manager isolate access to the state between concurrent transactions.
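Stripped of WCF, the pattern of equating method boundaries with transaction boundaries looks like the following sketch. The names here are hypothetical, and a plain dictionary stands in for the resource manager; with a real resource manager, getting and saving the state would enlist it in the ambient transaction:

```csharp
using System.Collections.Generic;

class CounterService
{
    //Stands in for the transactional resource manager
    static Dictionary<string,int> m_Store = new Dictionary<string,int>();

    public void Increment(string instanceId)
    {
        //Method start == transaction start: get the state from the store
        int counter;
        m_Store.TryGetValue(instanceId,out counter);

        counter++; //The actual business logic

        //Method end == transaction end: save the state back to the store
        m_Store[instanceId] = counter;
    }
    public int GetCounter(string instanceId)
    {
        //Same pattern: read the state from the store at the method boundary
        int counter;
        m_Store.TryGetValue(instanceId,out counter);
        return counter;
    }
}
```

Because no state lives on the instance between calls, a transaction ending between method calls has nothing in memory to invalidate; the store is the single place where the state commits or rolls back.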

Since the service equates method boundaries with transaction boundaries, the service instance must also vote on the transaction's outcome at the end of every method call. From the service perspective, the transaction completes once the method returns. In WCF, this is done automatically via the TransactionAutoComplete property of the OperationBehavior attribute. When this property is set to true, if there were no unhandled exceptions in the operation, WCF will automatically vote to commit. If there was an unhandled exception, WCF will vote to abort. Since TransactionAutoComplete defaults to true, any transactional method will use auto-completion by default, like so:

//These two definitions are equivalent:
[OperationBehavior(TransactionScopeRequired = true,
                   TransactionAutoComplete = true)]
public void MyMethod(...)
{...}

[OperationBehavior(TransactionScopeRequired = true)]
public void MyMethod(...)
{...}

For more on WCF transactional programming, see my Foundations column "WCF Transaction Propagation" in the May 2007 issue.

Per-Call Transactional Services

With a per-call service, once the call returns, the instance is destroyed. Therefore, the resource manager used to store the state between calls must be outside the scope of the instance. The client and the service must also agree on which operations are responsible for creating or removing the instance from the resource manager.

Because there could be many instances of the same service type accessing the same resource manager, every operation must contain some parameter that allows the service instance to find its state in the resource manager and bind against it. I call that parameter the instance ID. The client must also call a dedicated operation to remove the instance state from the store. Note that the behavioral requirements for a state-aware transactional object and a per-call object are the same: both retrieve and save their state at method boundaries. With a per-call service, any resource manager can be used to store the service state. You might use a database or you might use a VRM, as shown in Figure 2.

Figure 2 Per-Call Service Using a VRM

[ServiceContract]
interface IMyCounter
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void Increment(string instanceId);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void RemoveCounter(string instanceId);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class MyService : IMyCounter
{
    static TransactionalDictionary<string,int> m_StateStore =
                              new TransactionalDictionary<string,int>();

    [OperationBehavior(TransactionScopeRequired = true)]
    public void Increment(string instanceId)
    {
        if(m_StateStore.ContainsKey(instanceId) == false)
        {
            m_StateStore[instanceId] = 0;
        }
        m_StateStore[instanceId]++;
        Trace.WriteLine(m_StateStore[instanceId]);
    }

    [OperationBehavior(TransactionScopeRequired = true)]
    public void RemoveCounter(string instanceId)
    {
        if(m_StateStore.ContainsKey(instanceId))
        {
            m_StateStore.Remove(instanceId);
        }
    }
}

//Client side:
MyCounterClient proxy = new MyCounterClient();

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment("MyInstance");
    scope.Complete();
}

//This transaction will abort since the scope is not completed
using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment("MyInstance");
}

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment("MyInstance");
    proxy.RemoveCounter("MyInstance");
    scope.Complete();
}
proxy.Close();

//Traces:
//1
//2
//2

Instance Management and Transactions

WCF forces the service instance to equate method boundaries with transaction boundaries and to be state-aware, meaning it must purge all its instance state at method boundaries. By default, once the transaction completes, WCF destroys the service instance, ensuring there are no leftovers in memory that might jeopardize consistency.

The lifecycle of any transactional service is controlled by the ReleaseServiceInstanceOnTransactionComplete property of the ServiceBehavior attribute. When ReleaseServiceInstanceOnTransactionComplete is set to true (the default), WCF disposes of the service instance once the method completes the transaction, in effect turning any WCF service into a per-call service as far as the instance programming model is concerned.

This heavy-handed approach did not originate with WCF. Every distributed transactional programming model on the Microsoft platform, from MTS through COM+ and Enterprise Services, has equated a transactional object with a per-call object. The architects of these technologies simply did not trust developers to properly manage the state of the object in the face of transactions, a task that is both intricate and non-intuitive. The main disadvantage is that all developers who want to benefit from transactions must adopt the non-trivial per-call programming model (see Figure 2), while most developers feel much more at ease with the familiar session-based, stateful programming model of regular Microsoft .NET Framework objects.

I personally have always felt that equating transactions with per-call instantiation is a necessary evil, and yet, conceptually, it is distorted. One should only choose the per-call instancing mode when scalability is required, and, ideally, transactions should be separated from the object instance management and the application's regard for scalability.

If your application is required to scale, then choosing per-call and using transactions will work very well together. However, if you do not need scalability (which is probably the common case with most applications) your services should be allowed to be session-based, stateful, and transactional. The rest of this column presents my solution to the problem of enabling and preserving the session-based programming model while using transactions with common services.

Session-Based Services and VRMs

WCF does allow you to maintain the session semantic with a transactional service by setting ReleaseServiceInstanceOnTransactionComplete to false. In this case, WCF stays out of the way and lets the service developer worry about managing the state of the service instance in the face of transactions. The per-session service still must equate method boundaries with transaction boundaries, because every method call may be in a different transaction, and a transaction may end between method calls in the same session. While you could manage that state manually, just as with a per-call service (or use some other advanced WCF features outside the scope of this column), you can instead use VRMs for the service members, as shown in Figure 3.

Figure 3 Using VRMs by Per-Session Transactional Service

[ServiceBehavior(ReleaseServiceInstanceOnTransactionComplete = false)]
class MyService : IMyContract
{
    Transactional<string> m_Text =
                      new Transactional<string>("Some initial value");
    TransactionalArray<int> m_Numbers = new TransactionalArray<int>(3);

    [OperationBehavior(TransactionScopeRequired = true)]
    public void MyMethod()
    {
        m_Text.Value = "This value will roll back if the transaction aborts";

        //These will roll back if the transaction aborts
        m_Numbers[0] = 11;
        m_Numbers[1] = 22;
        m_Numbers[2] = 33;
    }
}

The use of VRMs enables a stateful programming model: the service instance simply accesses its state as if no transactions were involved. Any changes made to the state will commit or roll back with the transaction. However, I consider Figure 3 an expert programming model, and it has its own pitfalls. It requires familiarity with VRMs, meticulous definition of members, and the discipline to always configure all operations to require transactions and to disable releasing the instance upon completion.

Transactional Durable Services

In the October 2008 installment of this column ("Managing State With Durable Services"), I presented the support that WCF offers for durable services. A durable service retrieves its state from the configured store and then saves its state back into that store on every operation. The state store may or may not be a transactional resource manager.

If the service is transactional, it should of course use only transactional storage and enlist it in each operation's transaction. That way, if a transaction aborts, the state store will roll back to its pre-transaction state. However, WCF does not know whether a service is designed to propagate its transactions to the state store, and by default it will not enlist the storage in the transaction even if the storage is a transactional resource manager, such as SQL Server 2005 or SQL Server 2008. To instruct WCF to propagate the transaction and enlist the underlying storage, set the SaveStateInOperationTransaction property of the DurableService attribute to true:

[Serializable]
[DurableService(SaveStateInOperationTransaction = true)]
class MyService : IMyContract
{...}

SaveStateInOperationTransaction defaults to false, so the state storage will not participate in the transaction. Since only a transactional service can benefit from having SaveStateInOperationTransaction set to true, when it is true WCF insists that all operations on the service either have TransactionScopeRequired set to true or have mandatory transaction flow. If the operation is configured with TransactionScopeRequired set to true, the ambient transaction of the operation is the one used to enlist the storage.

Transactional Behavior

In the case of the DurableService attribute, the word durable is a misnomer, since it does not necessarily indicate durable behavior. All it means is that WCF will automatically deserialize the service state from the configured storage and then serialize it back again on every operation. Similarly, the persistence provider behavior does not necessarily mean persistence, since any provider that derives from the prescribed abstract provider class will suffice.

The fact that the durable service infrastructure is, in reality, a serialization infrastructure enabled me to leverage it into a technique for managing service state in the face of transactions, while relying underneath on a volatile resource manager, without having the service instance do anything about it. This further streamlines the transactional programming model of WCF and yields the benefit of the superior programming model of transactions for mere objects and common services.
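In essence, on every operation the infrastructure performs something like the following sketch. The InstanceStore&lt;T&gt; name and shape are mine, purely to illustrate the deserialize-invoke-serialize cycle; the real durable service infrastructure also deals with instance IDs, locking, and the configured persistence providers:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

//A sample serializable service state
[DataContract]
class Counter
{
    [DataMember]
    public int Value;
}

//Illustrative sketch of the per-operation serialization cycle
static class InstanceStore<T> where T : new()
{
    static byte[] m_State; //stands in for the configured storage

    public static void Invoke(Action<T> operation)
    {
        DataContractSerializer serializer =
            new DataContractSerializer(typeof(T));

        //Deserialize the instance state from storage, or create a new one
        T instance;
        if(m_State == null)
        {
            instance = new T();
        }
        else
        {
            using(MemoryStream stream = new MemoryStream(m_State))
            {
                instance = (T)serializer.ReadObject(stream);
            }
        }

        operation(instance); //invoke the actual operation

        //Serialize the instance state back to storage
        using(MemoryStream stream = new MemoryStream())
        {
            serializer.WriteObject(stream,instance);
            m_State = stream.ToArray();
        }
    }
}
```

Because the instance is rehydrated from the store on every call, no in-memory state survives between operations; if the store is a transactional resource manager enlisted in the operation's transaction, the serialized state commits or rolls back with it.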

The first step was to define two transactional in-memory provider factories called TransactionalMemoryProviderFactory and TransactionalInstanceProviderFactory. TransactionalMemoryProviderFactory uses a static TransactionalDictionary<ID,T> to store the service instances. The dictionary is shared among all clients and sessions. As long as the host is running, TransactionalMemoryProviderFactory allows clients to connect and disconnect from the service. When using TransactionalMemoryProviderFactory, you should designate a completing operation that removes the instance state from the store, using the CompletesInstance property of the DurableOperation attribute.

TransactionalInstanceProviderFactory, on the other hand, matches each session with a dedicated instance of Transactional<T>. There is no need for a completing operation since the service state will be garbage collected after the session is closed.

Next, I defined the TransactionalBehavior attribute, shown in Figure 4. TransactionalBehavior is a service behavior attribute that performs these configurations: First, it injects into the service description a DurableService attribute with SaveStateInOperationTransaction set to true. Second, it adds the use of either TransactionalMemoryProviderFactory or TransactionalInstanceProviderFactory for the persistence behavior, according to the value of the AutoCompleteInstance property. If AutoCompleteInstance is set to true (the default), it will use TransactionalInstanceProviderFactory. Finally, if the TransactionRequiredAllOperations property is set to true (the default), TransactionalBehavior sets TransactionScopeRequired to true on all the service operation behaviors, thus providing all operations with an ambient transaction. When it is explicitly set to false, the service developer can choose which operations will be transactional.

Figure 4 The TransactionalBehavior Attribute

[AttributeUsage(AttributeTargets.Class)]
public class TransactionalBehaviorAttribute : Attribute, IServiceBehavior
{
    public bool TransactionRequiredAllOperations
    {get;set;}

    public bool AutoCompleteInstance
    {get;set;}

    public TransactionalBehaviorAttribute()
    {
        TransactionRequiredAllOperations = true;
        AutoCompleteInstance = true;
    }
    void IServiceBehavior.Validate(ServiceDescription description,
                                   ServiceHostBase host)
    {
        DurableServiceAttribute durable = new DurableServiceAttribute();
        durable.SaveStateInOperationTransaction = true;
        description.Behaviors.Add(durable);

        PersistenceProviderFactory factory;
        if(AutoCompleteInstance)
        {
            factory = new TransactionalInstanceProviderFactory();
        }
        else
        {
            factory = new TransactionalMemoryProviderFactory();
        }
        PersistenceProviderBehavior persistenceBehavior =
                                new PersistenceProviderBehavior(factory);
        description.Behaviors.Add(persistenceBehavior);

        if(TransactionRequiredAllOperations)
        {
            foreach(ServiceEndpoint endpoint in description.Endpoints)
            {
                foreach(OperationDescription operation in
                                           endpoint.Contract.Operations)
                {
                    OperationBehaviorAttribute operationBehavior =
                      operation.Behaviors.Find<OperationBehaviorAttribute>();
                    operationBehavior.TransactionScopeRequired = true;
                }
            }
        }
    }
    ...
}

When using the TransactionalBehavior attribute with the default values, the client is not required to manage or interact in any way with the instance ID. All that is necessary for the client to do is use the proxy over one of the context bindings and let the binding manage the instance ID, as shown in Figure 5. Note that the service is interacting with a normal integer as its member variable. The interesting thing is that because of the durable behavior, the instance is still, of course, deactivated like a per-call service on method boundaries, yet the programming model is that of a common .NET object.

Figure 5 Using the TransactionalBehavior Attribute

[ServiceContract]
interface IMyCounter
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void Increment();
}

[Serializable]
[TransactionalBehavior]
class MyService : IMyCounter
{
    int m_Counter = 0;

    public void Increment()
    {
        m_Counter++;
        Trace.WriteLine(m_Counter);
    }
}

//Client side:
MyCounterClient proxy = new MyCounterClient();

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
    scope.Complete();
}

//This transaction will abort since the scope is not completed
using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
}

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
    scope.Complete();
}
proxy.Close();

//Traces:
//1
//2
//2

Adding Context to the IPC Binding

TransactionalBehavior requires a binding that supports the context protocol. While WCF provides context support for the basic, Web services (WS), and TCP bindings, missing from that list is the inter-process communication (IPC, or named pipes) binding. It would be valuable to have that support for the IPC binding, since it would enable the use of TransactionalBehavior over IPC, yielding the benefits of IPC for intimate calls. To that end, I defined the NetNamedPipeContextBinding class:

public class NetNamedPipeContextBinding : NetNamedPipeBinding
{
    /* Same constructors as NetNamedPipeBinding */

    public ProtectionLevel ContextProtectionLevel
    {get;set;}
}

NetNamedPipeContextBinding is used exactly like its base class. You can use this binding programmatically like any other built-in binding. However, when using a custom binding in the application .config file, you need to inform WCF where the custom binding is defined. While you can do this on a per-application basis, the easier option is to reference the helper class NetNamedPipeContextBindingCollectionElement in machine.config to affect every application on the machine, as shown here:

<!--In machine.config-->
<bindingExtensions>
   ...
   <add name = "netNamedPipeContextBinding"
        type = "ServiceModelEx.NetNamedPipeContextBindingCollectionElement,
                ServiceModelEx"
   />
</bindingExtensions>

You can also use NetNamedPipeContextBinding in your workflow applications.

Figure 6 lists an excerpt from the implementation of NetNamedPipeContextBinding and its supporting classes (the full implementation can be found in this month's code download). The constructors of NetNamedPipeContextBinding all delegate the actual construction to the base constructors of NetNamedPipeBinding; the only initialization they do is to default the context protection level to ProtectionLevel.EncryptAndSign.

Figure 6 Implementing NetNamedPipeContextBinding

public class NetNamedPipeContextBinding : NetNamedPipeBinding
{
    internal const string SectionName = "netNamedPipeContextBinding";

    public ProtectionLevel ContextProtectionLevel
    {get;set;}

    public NetNamedPipeContextBinding()
    {
        ContextProtectionLevel = ProtectionLevel.EncryptAndSign;
    }
    public NetNamedPipeContextBinding(NetNamedPipeSecurityMode securityMode)
        : base(securityMode)
    {
        ContextProtectionLevel = ProtectionLevel.EncryptAndSign;
    }
    public NetNamedPipeContextBinding(string configurationName)
    {
        ContextProtectionLevel = ProtectionLevel.EncryptAndSign;
        ApplyConfiguration(configurationName);
    }
    public override BindingElementCollection CreateBindingElements()
    {
        BindingElement element = new ContextBindingElement(
                                    ContextProtectionLevel,
                                    ContextExchangeMechanism.ContextSoapHeader);

        BindingElementCollection elements = base.CreateBindingElements();
        elements.Insert(0,element);
        return elements;
    }
    ... //code excerpted for space
}

The heart of any binding class is the CreateBindingElements method. NetNamedPipeContextBinding accesses its base class's collection of binding elements and adds to it the ContextBindingElement. Inserting this element into the collection adds support for the context protocol.

The rest of the implementation is mere bookkeeping to enable administrative configuration. The ApplyConfiguration method is called by the constructor that takes the binding section configuration name. ApplyConfiguration uses the ConfigurationManager class to parse the netNamedPipeContextBinding section out of the .config file, and from it an instance of NetNamedPipeContextBindingElement. That binding element is then used to configure the binding instance by calling its ApplyConfiguration method.

The constructors of NetNamedPipeContextBindingElement add to its base class's Properties collection of configuration properties a single property for the context protection level. In OnApplyConfiguration (which is called as a result of NetNamedPipeContextBinding.ApplyConfiguration calling ApplyConfiguration), the method first configures its base element and then sets the context protection level according to the configured level.

The NetNamedPipeContextBindingCollectionElement type is used to bind NetNamedPipeContextBinding with NetNamedPipeContextBindingElement. This way, when NetNamedPipeContextBindingCollectionElement is added as a binding extension, the configuration manager knows which type to instantiate and provide with the binding parameters.
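Although the full code is in this month's download, the collection element can be as simple as the following sketch, which is my plausible reconstruction (not necessarily the article's exact code) based on WCF's StandardBindingCollectionElement<TBinding,TBindingConfiguration>:

```csharp
using System.ServiceModel.Configuration;

//The two generic parameters tie the binding type to its configuration
//element type; a plausible reconstruction, not the article's exact code
public class NetNamedPipeContextBindingCollectionElement :
    StandardBindingCollectionElement<NetNamedPipeContextBinding,
                                     NetNamedPipeContextBindingElement>
{}
```

It is precisely this pairing that lets the configuration manager resolve the netNamedPipeContextBinding section registered in machine.config.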

InProcFactory and Transactions

The TransactionalBehavior attribute allows you to treat almost every class in your application as transactional without compromising the familiar .NET programming model. The downside is that WCF was never designed to be used at such a granular level: you would have to create, open, and close multiple hosts, and your application .config file would become unmanageable with scores of service and client sections. To address these issues, in my book Programming WCF Services, 2nd Edition, I defined a class called InProcFactory, which lets you instantiate a service class over WCF:

public static class InProcFactory
{
    public static I CreateInstance<S,I>() where I : class
                                          where S : I;
    public static void CloseProxy<I>(I instance) where I : class;
    //More members
}

When using InProcFactory, you utilize WCF at the class level without ever resorting to explicitly managing the host or having client or service .config files. To make the programming model of TransactionalBehavior accessible at the class level, InProcFactory uses NetNamedPipeContextBinding with transaction flow enabled. Using the definitions of Figure 5, InProcFactory enables the programming model of Figure 7.

Figure 7 Combining TransactionalBehavior with InProcFactory

IMyCounter proxy = InProcFactory.CreateInstance<MyService,IMyCounter>();

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
    scope.Complete();
}

//This transaction will abort since the scope is not completed
using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
}

using(TransactionScope scope = new TransactionScope())
{
    proxy.Increment();
    scope.Complete();
}
InProcFactory.CloseProxy(proxy);

//Traces:
//Counter = 1
//Counter = 2
//Counter = 2

The programming model of Figure 7 is identical to that of plain C# classes, without any ownership overhead, and yet the code fully benefits from transactions. I see it as a fundamental step toward the future, where memory itself will be transactional and it will be possible for every object to be transactional.

Figure 8 shows the implementation of InProcFactory, with some code removed for brevity. InProcFactory's static constructor is called once per app domain, allocating in each a new unique base address using a GUID. This lets InProcFactory be used multiple times on the same machine, across app domains and processes.

Figure 8 The InProcFactory Class

public static class InProcFactory
{
    struct HostRecord
    {
        public HostRecord(ServiceHost host,string address)
        {
            Host = host;
            Address = new EndpointAddress(address);
        }
        public readonly ServiceHost Host;
        public readonly EndpointAddress Address;
    }
    static readonly Uri BaseAddress =
        new Uri("net.pipe://localhost/" + Guid.NewGuid().ToString());
    static readonly Binding Binding;
    static Dictionary<Type,HostRecord> m_Hosts =
        new Dictionary<Type,HostRecord>();

    static InProcFactory()
    {
        NetNamedPipeBinding binding = new NetNamedPipeContextBinding();
        binding.TransactionFlow = true;
        Binding = binding;

        AppDomain.CurrentDomain.ProcessExit += delegate
        {
            foreach(HostRecord hostRecord in m_Hosts.Values)
            {
                hostRecord.Host.Close();
            }
        };
    }
    public static I CreateInstance<S,I>() where I : class
                                          where S : I
    {
        HostRecord hostRecord = GetHostRecord<S,I>();
        return ChannelFactory<I>.CreateChannel(Binding,hostRecord.Address);
    }
    static HostRecord GetHostRecord<S,I>() where I : class
                                           where S : I
    {
        HostRecord hostRecord;
        if(m_Hosts.ContainsKey(typeof(S)))
        {
            hostRecord = m_Hosts[typeof(S)];
        }
        else
        {
            ServiceHost host = new ServiceHost(typeof(S),BaseAddress);
            string address = BaseAddress.ToString() + Guid.NewGuid().ToString();

            hostRecord = new HostRecord(host,address);
            m_Hosts.Add(typeof(S),hostRecord);
            host.AddServiceEndpoint(typeof(I),Binding,address);
            host.Open();
        }
        return hostRecord;
    }
    public static void CloseProxy<I>(I instance) where I : class
    {
        ICommunicationObject proxy = instance as ICommunicationObject;
        Debug.Assert(proxy != null);
        proxy.Close();
    }
}

InProcFactory internally manages a dictionary that maps service types to a particular host instance. When CreateInstance is called to create an instance of a particular type, it looks in the dictionary, using a helper method called GetHostRecord. If the dictionary does not already contain the service type, this helper method creates a host instance for it and adds an endpoint to that host, using a new GUID as the unique pipe name. CreateInstance then grabs the address of the endpoint from the host record and uses ChannelFactory<T> to create the proxy.

In its static constructor, which is called upon the first use of the class, InProcFactory subscribes to the process exit event to close all hosts when the process shuts down. Finally, to help clients close the proxy, InProcFactory provides the CloseProxy method, which queries the proxy for ICommunicationObject and closes it. To learn how you can take advantage of transactional memory, see the Insights sidebar "What Is Transactional Memory?"
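To illustrate how a client puts InProcFactory to work, here is a minimal sketch. The contract and service names (IMyContract, MyService) are hypothetical placeholders, and the fragment assumes a reference to the System.ServiceModel assembly along with the InProcFactory class shown above:

```csharp
// Hypothetical contract and service, shown only to demonstrate usage
[ServiceContract]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}

class MyService : IMyContract
{
   public void MyMethod()
   { }
}

// Client side: obtain a proxy over the named-pipe endpoint,
// call the service, then close the proxy when done
IMyContract proxy = InProcFactory.CreateInstance<MyService,IMyContract>();
proxy.MyMethod();
InProcFactory.CloseProxy(proxy);
```

Note that the client never deals with addresses or bindings; the factory supplies both, so hosting in-process looks much like plain object instantiation.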

What Is Transactional Memory?

You may have heard of transactional memory, the new technology for managing shared data that many claim will solve all the problems you encounter when authoring concurrent code. You may have also heard that transactional memory promises more than it can deliver and is nothing more than a research toy. The truth lies somewhere between these two extremes.

Transactional memory allows you to avoid the management of individual locks. Instead, you can structure your program in well-defined sequential blocks—units of work, or transactions, as they're called in the database world. You can then let the underlying runtime system, compiler, hardware, or a combination provide the desired isolation and consistency guarantees.

Typically, the underlying transactional memory system provides optimistic concurrency control at a fairly fine granularity. Instead of always locking a resource, the transactional memory system assumes that there is no contention, detects when that assumption turns out to be incorrect, and rolls back any tentative changes made in the transaction. Depending on the implementation, the transactional memory system may then re-execute the block of code until it completes without contention. Again, the system detects and manages contention without requiring you to devise creative back-off and retry mechanisms yourself. With optimistic, fine-grained concurrency control, contention management, and no locks to specify and manage, you can think about solving your problem in a serial way while using components that take advantage of concurrency.
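The detect-rollback-retry loop at the heart of optimistic concurrency can be sketched with nothing more than an interlocked compare-and-swap. This is only a minimal illustration of the idea for a single integer, not a transactional memory implementation:

```csharp
using System;
using System.Threading;

static class Counter
{
   static int m_Value;

   public static int Value
   {
      get { return m_Value; }
   }

   // Optimistically compute a new value, then publish it only if no
   // other thread changed m_Value in the meantime; otherwise retry.
   public static void Add(int amount)
   {
      while(true)
      {
         int snapshot = m_Value;          // tentative read, no lock taken
         int proposed = snapshot + amount;

         // CompareExchange commits the change only if m_Value still
         // equals snapshot, i.e., no contention occurred
         if(Interlocked.CompareExchange(ref m_Value,proposed,snapshot)
            == snapshot)
         {
            return; // committed
         }
         // Contention detected: discard the proposed value and re-execute
      }
   }
}
```

A transactional memory system generalizes this pattern from one word of memory to arbitrary blocks of code touching arbitrary data, which is precisely what makes it so much harder to build.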

Transactional memory promises to provide composition, a feat that existing locking mechanisms cannot easily perform. To compose multiple operations or multiple objects together, you generally need to increase the granularity of the lock—usually by wrapping these operations or objects together under one lock. Transactional memory automatically manages fine-grained locking on behalf of your code while providing deadlock avoidance, such that composition is provided without hurting scalability or introducing deadlocks.

There are no large-scale commercial implementations of transactional memory today. Experimental software approaches using libraries, language extensions, or compiler directives have been published in academia and on the Web. Hardware that can provide limited transactional memory does exist in some high-end, highly concurrent environments, but the software that leverages this hardware hides its explicit use. The research community is very excited about transactional memory, and you should expect to see some of this research making it into more accessible products over the next decade.

The accompanying column describes the creation of volatile resource managers that you can use with current transaction technologies to provide atomicity (all-or-nothing execution) along with the manageability, quality, and other advantages that transactional programming provides. Transactional memory provides similar functionality, but for any arbitrary data type, using fairly lightweight runtime software or hardware primitives, and it focuses on providing scalability, isolation, and composition as well as atomicity, all without the need to create your own resource manager. When transactional memory is broadly available, programmers will gain not only the benefits of volatile resource managers, such as a simpler programming model, but also the scalability gains brought about by a transactional memory manager.

— Dana Groff, Senior Program Manager, Microsoft Parallel Computing Platform Team

Send your questions and comments to mmnet30@microsoft.com.

Juval Lowy is a software architect with IDesign, providing WCF training and architecture consulting. His recent book is Programming WCF Services, 2nd Edition (O'Reilly, 2008). He is also the Microsoft Regional Director for Silicon Valley. Contact Juval at www.idesign.net.