Design of the Caching Application Block

The Caching Application Block is designed specifically so that:

  • It provides a set of APIs that are manageable in size.
  • It enables developers to incorporate the standard caching operations into their applications without having to learn the internal workings of the block.
  • It uses the Enterprise Library configuration tools for easy configuration.
  • It performs efficiently.
  • It is thread safe. Code is considered to be thread safe when it can be called from multiple programming threads without unwanted interaction among those threads.
  • It ensures that the backing store remains intact if an exception occurs while it is being accessed.
  • It ensures that the states of the in-memory cache and the backing store remain synchronized.

This topic describes the design of the caching system, covering both the design highlights and specific design details. Other topics in this section include Design of the Expiration Process and Design of the Scavenging Process.

Design Highlights

The following schematic illustrates the interrelationships between the key classes in the Caching Application Block.

(Diagram: the interrelationships between the key classes in the Caching Application Block)

The Caching Application Block contains the class CacheManager, which is the default implementation of the ICacheManager interface. When you initialize an instance of the default CacheManager, it internally creates a Cache object. After the Cache object is created, all data in the backing store is loaded into an in-memory representation that is contained in the Cache object. Applications can then make requests to the default CacheManager object to retrieve cached data, add data to the cache, and remove data from the cache.
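The following minimal sketch shows this flow from the application's point of view; it assumes the default cache manager is defined in the application's configuration, and the key and value used are illustrative only.

    // Assumes a reference to the Enterprise Library Caching assemblies and a
    // default cache manager defined in configuration.
    using Microsoft.Practices.EnterpriseLibrary.Caching;

    class CacheManagerUsage
    {
        static void Main()
        {
            // Resolving the default CacheManager creates the Cache object and loads
            // any items persisted in the backing store into memory.
            ICacheManager cache = CacheFactory.GetCacheManager();

            cache.Add("product:42", "Widget");          // add (or replace) an item
            object value = cache.GetData("product:42"); // retrieve it; null if absent or expired
            cache.Remove("product:42");                 // remove it from memory and the backing store
        }
    }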

When an application uses the GetData method to send a request to the CacheManager object to retrieve an item, the CacheManager object forwards the request to the Cache object. If the item is in the cache, it is returned to the application from the in-memory representation in the cache. If the item is not in the cache, or if it has expired, the request returns null.
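Because GetData returns null both for missing and for expired items, callers commonly follow the retrieve-or-rebuild pattern sketched below; the GetProduct method and its placeholder data-source call are hypothetical and not part of the block.

    using Microsoft.Practices.EnterpriseLibrary.Caching;

    static class ProductLookup
    {
        public static object GetProduct(ICacheManager cache, int id)
        {
            string key = "product:" + id;
            object data = cache.GetData(key);   // null when the item is absent or has expired
            if (data == null)
            {
                data = "Widget " + id;          // placeholder for a real data-source call
                cache.Add(key, data);           // repopulate the cache for later requests
            }
            return data;
        }
    }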

When an application uses the Add method to send a request to the CacheManager object to add an item to the cache, the CacheManager object again forwards the request to the Cache object. If an item with the same key already exists, the Cache object first removes it before adding the new item to the in-memory store and the backing store. If the backing store is the default backing store, NullBackingStore, the data is written only to memory. If the number of cached items exceeds a predetermined limit when the item is added, the BackgroundScheduler object begins scavenging. When adding an item, the application can use an overload of the Add method to specify an array of expiration policies, the scavenging priority, and an object that implements the ICacheItemRefreshAction interface. This object can be used to refresh the item after it expires and is removed from the cache.
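The extended Add overload looks roughly like the following sketch. The key, value, and expiration values are illustrative, and the exact parameter list should be verified against the version of the block you are using.

    using System;
    using Microsoft.Practices.EnterpriseLibrary.Caching;
    using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

    static class AddWithPolicies
    {
        public static void CacheReport(ICacheManager cache, object report)
        {
            cache.Add(
                "report:daily",
                report,
                CacheItemPriority.Normal,                    // scavenging priority
                null,                                        // ICacheItemRefreshAction (none supplied here)
                new AbsoluteTime(TimeSpan.FromHours(1)),     // expire one hour from now
                new SlidingTime(TimeSpan.FromMinutes(10)));  // ...or after 10 minutes without access
        }
    }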

When adding an item that is not already in the in-memory hash table, the Cache object first creates a dummy cache item and adds it to the in-memory hash table. It then locks the cache item in the in-memory hash table, adds the item to the backing store, and finally replaces the existing cache item in the in-memory hash table with the new cache item. (In the case where the item was not already in the in-memory hash table, it replaces the dummy item.) If an exception occurs while writing to the backing store, the Cache object removes the dummy item from the in-memory hash table and does not continue. The Caching Application Block enforces a strong exception safety guarantee: if an Add operation fails, the state of the cache rolls back to what it was before the attempt to add the item. In other words, either the operation completes successfully or the state of the cache remains unchanged. (The same is true for the Remove and Flush methods.)
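The following is not the block's actual source, only a simplified sketch of the rollback pattern described above; it locks the whole synchronized hash table rather than the individual cache item, and the backing-store interface shown is hypothetical.

    using System.Collections;

    class AddWithRollbackSketch
    {
        public interface IStore { void Add(string key, object value); }   // hypothetical backing store

        private readonly Hashtable inMemoryCache = Hashtable.Synchronized(new Hashtable());

        public void Add(string key, object newItem, IStore backingStore)
        {
            bool existedBefore = inMemoryCache.Contains(key);
            if (!existedBefore)
            {
                inMemoryCache[key] = new object();   // dummy placeholder item
            }

            lock (inMemoryCache.SyncRoot)
            {
                try
                {
                    backingStore.Add(key, newItem);  // persist first...
                    inMemoryCache[key] = newItem;    // ...then publish the real item in memory
                }
                catch
                {
                    if (!existedBefore)
                    {
                        inMemoryCache.Remove(key);   // roll back the dummy item
                    }
                    throw;                           // cache state is unchanged on failure
                }
            }
        }
    }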

The BackgroundScheduler object periodically monitors the lifetime of items in the cache. When an item expires, the BackgroundScheduler object first removes it and then, optionally, notifies the application that the item was removed. At this point, it is the responsibility of the application to refresh the cache.
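An application receives that notification by supplying an object that implements ICacheItemRefreshAction when it adds the item. The example below assumes the Refresh signature shown (removed key, expired value, removal reason); refresh actions should normally be serializable so they can be stored with the item in a persistent backing store.

    using System;
    using Microsoft.Practices.EnterpriseLibrary.Caching;

    [Serializable]
    public class ProductRefreshAction : ICacheItemRefreshAction
    {
        public void Refresh(string removedKey, object expiredValue, CacheItemRemovedReason removalReason)
        {
            if (removalReason == CacheItemRemovedReason.Expired)
            {
                // The application decides how to refresh the cache here, for example by
                // re-reading the data source and calling Add on the cache manager again.
                Console.WriteLine("Item '{0}' expired; reload it here.", removedKey);
            }
        }
    }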

Design Details

The CacheManager class, the default implementation of the ICacheManager interface, is the interface between the application and the rest of the Caching Application Block. All caching operations occur through this class. For developers who use the block unmodified, the default CacheManager object provides all the methods needed to add, retrieve, and remove items from the cache. Every method call made through the default CacheManager object is thread safe.

Each named CacheManager instance represents exactly one cache; to create multiple caches, configure multiple named instances. Note that different caches (that is, caches with different names) cannot share the same backing store. There can be only one backing store for each CacheManager object.
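For example, two independently configured caches can be used side by side as sketched below; the cache manager names are assumptions that would have to exist in configuration, each with its own backing store.

    using Microsoft.Practices.EnterpriseLibrary.Caching;

    class NamedCaches
    {
        static void Main()
        {
            ICacheManager products = CacheFactory.GetCacheManager("ProductCache");
            ICacheManager users = CacheFactory.GetCacheManager("UserCache");

            products.Add("sku-100", "Gadget");       // stored in ProductCache's backing store
            users.Add("alice", "session-token");     // stored in UserCache's backing store
        }
    }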

The Cache object receives requests from the CacheManager object and implements all operations between the backing store and in-memory representation of the cached data. It contains a hash table that holds the in-memory representation of the data. (This is the form that users see.) An item of data is packaged as a CacheItem object. This object includes the data itself, together with other information such as the item's key, its priority, the RefreshAction object, and the expiration policy (or array of policies). It is stored in the hash table. The Cache object also uses a synchronized hash table to control access to the items in the cache, both from the application and from the BackgroundScheduler. The Cache object provides thread safety for the entire Caching Application Block.
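The following is not the block's CacheItem class, only a simplified sketch of the information the paragraph says each cached entry carries.

    using System;

    public class CacheEntrySketch
    {
        public string Key { get; set; }                 // the hash table key
        public object Value { get; set; }               // the cached data itself
        public int ScavengingPriority { get; set; }     // relative priority used during scavenging
        public object RefreshAction { get; set; }       // callback invoked after the item is removed
        public object[] Expirations { get; set; }       // one or more expiration policies
        public DateTime LastAccessedTime { get; set; }  // used by sliding expirations and scavenging
    }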

The BackgroundScheduler object is responsible for expiring aging cache items and scavenging lower-priority cache items. A PollTimer object triggers the expiration cycle, and a numeric limit triggers the scavenging process. These are set in the configuration file.

The BackgroundScheduler object is an implementation of the active object pattern. This means that any other object (in this case, the PollTimer) talks to the BackgroundScheduler as if it existed on the thread of the calling object. After it is called, the BackgroundScheduler packages the request as a message and puts it in a queue collection object instead of immediately executing the requested behavior. (Remember that this all occurs in the caller's thread.) This queue is an example of the Producer-Consumer pattern. When the BackgroundScheduler is ready to process the message, an internal thread pulls the message from the queue. In effect, the BackgroundScheduler serializes all scavenging and expiration requests.

From its own thread, the BackgroundScheduler object sequentially removes messages from the queue and then executes the request. The advantage of performing operations serially on a single thread is that it guarantees that the code will run in a single-threaded environment. This makes both the code and its effects simpler to understand.
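The following is not the block's implementation, only a minimal sketch of the producer-consumer arrangement described above: callers enqueue request messages on their own threads, and a single background thread removes and executes them one at a time.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class BackgroundSchedulerSketch
    {
        private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();

        public BackgroundSchedulerSketch()
        {
            var worker = new Thread(ProcessQueue) { IsBackground = true };
            worker.Start();
        }

        // Called on the caller's thread: the request is packaged as a message and
        // queued rather than executed immediately.
        public void ScheduleExpiration()
        {
            queue.Add(() => Console.WriteLine("Expiring aged items..."));
        }

        public void ScheduleScavenging()
        {
            queue.Add(() => Console.WriteLine("Scavenging low-priority items..."));
        }

        // Runs on the scheduler's own thread: messages are removed and executed
        // sequentially, serializing all expiration and scavenging work.
        private void ProcessQueue()
        {
            foreach (Action message in queue.GetConsumingEnumerable())
            {
                message();
            }
        }
    }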

The cache storage classes that are included with the Caching Application Block are the DataBackingStore class, the IsolatedStorageBackingStore class, and the NullBackingStore class. If you are interested in developing your own backing store, your class must either implement the IBackingStore interface or inherit from the abstract BaseBackingStore class, which implements the IBackingStore interface. This class contains implementations of common policies and utilities that can be used by all backing stores.
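A custom backing store might look like the skeleton below. The member list shown (Add, Remove, UpdateLastAccessedTime, Flush, Load, Count, Dispose) is an assumption to be verified against the version of the block you are using, and the in-memory Hashtable stands in for real persistence code.

    using System;
    using System.Collections;
    using Microsoft.Practices.EnterpriseLibrary.Caching;
    using Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations;

    public class CustomBackingStore : IBackingStore
    {
        private readonly Hashtable store = new Hashtable();   // placeholder for real storage

        public int Count
        {
            get { return store.Count; }
        }

        public void Add(CacheItem newCacheItem)
        {
            // Remove any existing entry first so a failed write cannot leave stale data behind.
            store.Remove(newCacheItem.Key);
            store[newCacheItem.Key] = newCacheItem;
        }

        public void Remove(string key)
        {
            store.Remove(key);
        }

        public void UpdateLastAccessedTime(string key, DateTime timestamp)
        {
            // Persist the new last-accessed time so sliding expirations survive a restart.
        }

        public void Flush()
        {
            store.Clear();
        }

        public Hashtable Load()
        {
            // Return all persisted items so the Cache object can rebuild its
            // in-memory representation when the cache manager is created.
            return (Hashtable)store.Clone();
        }

        public void Dispose()
        {
        }
    }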

The DataBackingStore class is used when the backing store is the Data Access Application Block. Using the configuration tools, it is configured to use a named database instance. The IsolatedStorageBackingStore class stores cache items in domain-specific isolated storage. Using the Configuration Console, it is configured to use a named isolated storage area. The Caching Application Block communicates with all backing stores through the IBackingStore interface.

The DataBackingStore and IsolatedStorageBackingStore classes can encrypt cache item data before it is persisted to storage. The encryption of cache item data is enabled through configuration. Using the configuration tools, cache storage can be configured to use a named symmetric encryption algorithm provider. The named provider is also used when reading data from the cache storage to decrypt the data before populating the cache with the item data.