Selecting a Backing Store

Each cache manager can be configured to store data only in memory (using the null backing store) or to store data both in memory and in persistent storage. You specify the type of persistent storage when you configure the backing store. Backing stores let cached data survive application restarts. In its original state, the Caching Application Block supports two types of persistent backing stores, each of which is suited to particular situations:

  • Isolated storage – see Using the Isolated Storage Backing Store.
  • Database cache storage – see Using the Data Access Application Block Backing Store.

If you intend to perform caching in a multiple-server environment, such as a Web farm, see Considerations for Server Scenarios.

To help protect external data stores from unauthorized access, see the Usage Notes later in this topic.

Developers can extend the Caching Application Block to support additional types of backing stores. For more information about this topic, see Extending and Modifying the Caching Application Block.

Note

An application can use more than one cache; each cache will be represented by a cache manager in the application's configuration. The Caching Application Block does not support the use of the same persistent backing store location by multiple cache managers in an application. However, multiple cache managers in an application can have the same partition name.
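
For example, an application might define two cache managers, such as "Products" and "Sessions", in its configuration and obtain each one by name at run time. The following is a minimal sketch; the cache manager names are illustrative, and the ICacheManager interface shown is the type returned by CacheFactory in recent versions of the block (earlier versions return the CacheManager class).

C#
using Microsoft.Practices.EnterpriseLibrary.Caching;

public static class MultipleCachesExample
{
    public static void UseTwoCaches()
    {
        // Each named cache manager is assumed to be defined in the
        // application's configuration; the names are illustrative only.
        ICacheManager productCache = CacheFactory.GetCacheManager("Products");
        ICacheManager sessionCache = CacheFactory.GetCacheManager("Sessions");

        // The caches are independent; the same key can exist in both
        // without conflict.
        productCache.Add("42", "Widget");
        sessionCache.Add("42", "state for user 42");
    }
}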

Usage Notes

The following guidelines help protect external data stores from unauthorized access and highlight resource issues to consider:

  • Protect the cached data in an external data store by using access control lists (ACLs) and by setting appropriate permissions on the caching database (see Permissions Hierarchy (Database Engine)).
  • Protect data in transit between the data block and the external store by taking one of the following steps:
    • Install the database within the trust boundary.
    • Encrypt the data by using the encryption storage provider.
    • Implement Transport Layer Security (TLS) between the data block and the data cache storage.
  • If multiple applications share the same caching database, ensure that each application uses unique partition names; for example, use the configuration tool to generate a GUID for each partition name during configuration.
  • Protect the database by using SQL permissions to limit cache storage access to authorized applications.
  • Whenever possible, use separate caching databases for each application, especially in partial trust environments.
  • The cache manager can experience denial-of-service issues when the computer where the Caching Application Block is deployed runs out of memory. This can happen when a large number of items are added to the cache concurrently, when items are added repeatedly, or when large items are added with the NotRemovable priority. In a shared environment, one application can cause other applications to fail.
  • The data backing store can grow until it runs out of space if a large number of items are added to the cache and scavenging and expiration are not configured appropriately. To avoid this problem, restrict the size of the database during deployment, or restrict the quota size in the security policy so that the isolated storage area does not grow beyond the size that is acceptable for the application, the user, and the deployment scenario. (Scavenging priority and expirations are set when items are added to the cache; a sketch follows this list.)
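
As a point of reference for the last two notes, scavenging priority and expirations are specified on each call that adds an item to the cache. The following is a minimal sketch that contrasts a normally prioritized item with a NotRemovable item; the keys, values, and time spans are illustrative assumptions only.

C#
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

public static class CachePopulationExample
{
    public static void AddItems(ICacheManager cache)
    {
        // A normally prioritized item with a sliding expiration; the
        // scavenger and the expiration process can reclaim it, which
        // limits growth of both the in-memory cache and the backing store.
        cache.Add("report-summary", "cached report text",
                  CacheItemPriority.Normal,
                  null,                                   // no refresh action
                  new SlidingTime(TimeSpan.FromMinutes(10)));

        // A NotRemovable item is never scavenged; adding many such items
        // can exhaust memory, as described in the usage notes above.
        cache.Add("static-lookup-table", "rarely changing data",
                  CacheItemPriority.NotRemovable,
                  null,
                  new AbsoluteTime(TimeSpan.FromHours(12)));
    }
}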

Using the Null Backing Store

The null backing store is the default option when you configure the Caching Application Block. It does not persist cached items. This means that cached data exists in memory only; it does not exist in persistent storage. The null backing store is suitable for situations where you want to refresh cached items from the original data source when the application restarts. It can be used with all the supported application types. For a list of these types, see When Should I Use the Caching Application Block?
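
Because the null backing store does not persist items, the cache starts empty after every restart; a common pattern is to check the cache and fall back to the original data source on a miss. The following is a minimal sketch; the cache manager name and the LoadFromDataSource helper are illustrative assumptions.

C#
using Microsoft.Practices.EnterpriseLibrary.Caching;

public static class InMemoryCacheExample
{
    public static string GetCustomerName(string customerId)
    {
        ICacheManager cache = CacheFactory.GetCacheManager("Customers");

        // With the null backing store, a miss is expected the first time
        // an item is requested after the application restarts.
        string name = (string)cache.GetData(customerId);
        if (name == null)
        {
            name = LoadFromDataSource(customerId);   // hypothetical helper
            cache.Add(customerId, name);
        }
        return name;
    }

    private static string LoadFromDataSource(string customerId)
    {
        // Placeholder for a query against the original data source.
        return "Customer " + customerId;
    }
}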

Using the Isolated Storage Backing Store

Isolated storage is appropriate in the following situations:

  • Persistent storage is required and the number of users is small.
  • The overhead of using a database is significant.
  • No database facility exists.

For more information about when to use isolated storage, see Scenarios for Isolated Storage on MSDN. When configured to use isolated storage, the backing store is isolated by the cache instance name, the user name, the assembly, and the application domain.

Note

The isolated storage backing store is not compliant with Federal Information Processing Standards (FIPS). The .NET Framework isolated storage mechanism relies on non-FIPS-certified cryptography.

Isolated storage is suitable for smart clients and for server applications where each application domain has its own cache. Also note that because isolated storage is always segregated by user, server applications must impersonate the user who is making a request to the application.

The isolated storage backing store can run out of space if a large number of items are added to the cache and scavenging and expiration are not configured appropriately.

Restrict the quota size in the security policy to ensure that the isolated storage area does not grow beyond an acceptable size for the deployment scenario; what is acceptable will be specific to the application and the user.

Note

Applications that have the same privileges to the isolated storage can read each other's cached items, which is a potential security issue. To mitigate this, use the Enterprise Library configuration tool to ensure that each application that uses the isolated storage backing store has a unique partition name, and encrypt the cached data so that a key is required to read from and write to isolated storage.

Using the Data Access Application Block Backing Store

By using the Data Access Application Block, you can store cached data in a database. Currently, the Caching Application Block includes a script to create the required database schema for SQL Server, and the block has been tested against SQL Server databases. Developers can use other database types as backing stores, but they must modify the block source code. Each database type must have a database provider for the Data Access Application Block and include a compatible schema.

The Data Access Application Block backing store option is suitable for smart clients and for server applications where each application domain has its own cache and where you have access to a database.

Each CacheManager object that is running in a single application domain must use a different portion of the database. A portion is defined as a combination of the application name and the cache instance name. The database can be running on the same server as the application using the cache or on a different server. The number of applications using a cache that the database can support depends only on the database's storage limits.

Considerations for Server Scenarios

A single cache manager cannot be shared across application domains. Server applications that are deployed on multiple computers have a unique copy of an in-memory cache on each computer. This is also true for multiple processes running on the same computer, including Enterprise Services components that each run in their own process and use the Caching Application Block. Each process has its own copy of the in-memory cache.

Different applications should not use the same Data Access Application Block backing store instance and partition. Running different applications with the Caching Application Block configured to use the same database instance and partition can cause unpredictable results and is not recommended.

When the same application runs in multiple processes (for example, if the application is deployed on multiple computers in a Web farm), you can configure the Caching Application Block in one of three ways:

  • All instances of the application use the same database instance, but each instance of the application uses a different database partition. For more information, see Scenario One: Partitioned Caches.
  • All instances of the application use the same database instance and the same database partition and all cache managers can read from and write to the cache. For more information, see Scenario Two: Shared Partition.
  • All instances of the application use the same database instance and the same database partition and only one cache manager can write to the cache. All cache managers can read from the cache. For more information, see Scenario Three: Single Writer.

Scenario One: Partitioned Caches

Scenario One is the case where all instances of the application use the same database instance but each instance of the application uses a different database partition. In this scenario, each cache manager operates independently. Although they share the same backing store database instance, each cache manager persists the cache data to a different partition. In effect, there is one cache for each application instance. When an application restarts, each cache manager loads its data from its own partition in the backing store.

If the application preloads the cache, each deployed instance of the application retrieves the data from the original data source. The preloaded data uses backing store storage space for each deployed application instance. This means that in terms of using the cache, deploying the same application to multiple processes is no more efficient than deploying different applications.
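
The following is a minimal sketch of such preloading at application start-up; the cache manager name, the item keys, and the shape of the preloaded data are illustrative assumptions. In Scenario One, every deployed instance of the application runs code like this against the original data source, and each instance's cache manager persists the result to its own backing store partition.

C#
using System.Collections.Generic;
using Microsoft.Practices.EnterpriseLibrary.Caching;

public static class CachePreloader
{
    // Called once when an application instance starts.
    public static void PreloadProducts(IDictionary<string, object> productsFromDataSource)
    {
        ICacheManager cache = CacheFactory.GetCacheManager("Products");
        foreach (KeyValuePair<string, object> product in productsFromDataSource)
        {
            // Each Add is reflected in this instance's own backing store
            // partition, so the preloaded data is stored once per instance.
            cache.Add(product.Key, product.Value);
        }
    }
}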

Deploying the same application to multiple servers, where the Caching Application Block is configured identically on each server (for example, all the blocks have the same expiration policy), does not guarantee that the data in each backing store partition is identical. The data in the backing store partition duplicates the in-memory cache data of the cache manager configured to use that backing store partition. The contents of the in-memory cache vary according to how a particular instance of the application uses the cache. Because application requests are routed to different servers, the in-memory cache on each server is likely to be different. Therefore, the contents of the backing store partitions are also likely to be different. This means that even if all the applications are shut down and restarted at the same time, there is no guarantee that they will have identical data in their in-memory caches after each cache is initialized with data from the backing store.

Scenario Two: Shared Partition

Scenario Two is the case where all instances of the application use the same database instance and the same database partition and all cache managers can read from and write to the cache. In this scenario, each instance of an application operates against a unique in-memory cache. When an application creates a cache manager, the cache manager populates the in-memory cache with the data in the backing store. This means that if an application creates a cache manager when it starts, and if all of the application instances are started at the same time, each in-memory cache will be loaded with identical data. Because the applications are using the same partition, each application instance does not require additional storage in the backing store.

The only time data is loaded from the backing store into the in-memory cache is when the cache manager is created. After this, the in-memory cache contents are determined by the application instance using the cache. The way an instance of the application uses the cache can vary from one instance to another because requests are routed to different servers. Different instances of an executing application can have in-memory caches with different contents.

As an application adds and removes items, the contents of the in-memory cache change. The in-memory cache contents also change when the cache manager removes or scavenges expired items. As the in-memory cache changes, the cache manager updates the backing store to reflect these changes. The backing store does not notify cache manager instances when its contents have changed. Therefore, when one application instance changes the backing store contents, the other application instances will have in-memory caches that do not match the backing store data. This means that after an application restarts, the in-memory cache can have contents that are different from the contents it contained before the application restarted.

Applications can be notified when an item expires by subscribing to events that are provided by the cache manager. An application can use this notification to refresh the cache with data from the original data source. When the application adds the refreshed cache item to the cache, the cache manager also updates the backing store with this data. If the application is deployed to multiple computers, each instance of the application can receive the event and initiate requests to the original data source for the same item. These multiple requests can negatively impact the performance of both the application and the original data source. Therefore, using notifications to monitor expirations for the purpose of refreshing expired cache items is not recommended in this scenario.
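
For reference, the expiration callback mentioned above is supplied when an item is added to the cache. The sketch below assumes the block's ICacheItemRefreshAction interface as the notification mechanism; whether the callback should re-query the original data source is exactly the design decision discussed in this scenario.

C#
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

// The refresh action must be serializable because the block stores it
// alongside the cached item.
[Serializable]
public class ProductRefreshAction : ICacheItemRefreshAction
{
    public void Refresh(string removedKey, object expiredValue,
                        CacheItemRemovedReason removalReason)
    {
        // Called when the item expires or is removed. In a Web farm that
        // shares a partition, every application instance can receive this
        // call, so re-querying the original data source here can multiply
        // the load on it; that is why this pattern is not recommended in
        // this scenario.
    }
}

public static class ProductCacheWriter
{
    public static void AddProduct(ICacheManager cache, string key, object value)
    {
        cache.Add(key, value,
                  CacheItemPriority.Normal,
                  new ProductRefreshAction(),
                  new SlidingTime(TimeSpan.FromMinutes(5)));
    }
}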

Scenario Three: Single Writer

Scenario Three is the case where all instances of the application use the same database instance and the same database partition and only one cache manager can write to the cache. All cache managers can read from the cache. In this scenario, only one instance of the application writes to the cache. All other application instances can only read from the cache. The instance of the application that writes to the cache is the master. The in-memory cache of the master is always identical to the data in the backing store. The in-memory cache in each application instance is populated with data from the backing store at the time the cache manager is created. The application instances that can only read data from the cache receive a snapshot of the data. Because these application instances do not have the ability to refresh their caches, their caches become stale and shrink as items expire.