Caching Architecture Guide for .NET Framework Applications
Understanding Advanced Caching Issues
Summary: This chapter covers some of the more advanced issues that you will face when implementing caching in distributed .NET applications, including security, monitoring, and synchronization.
Now that you have seen what caching technologies are available and how to use them in Microsoft .NET-based applications, you have an overall picture of how to implement caching. In this chapter, you learn about some of the advanced issues related to the subject.
This chapter contains the following sections:
- "Designing a Custom Cache"
- "Securing a Custom Cache"
- "Monitoring a Cache"
- "Synchronizing Caches in a Server Farm"
Designing a Custom Cache
This section of the guide describes the design and implementation of a custom cache framework. The cache design provides a simple yet extensible framework for creating a custom cache. It includes the design goals and a solution blueprint for a generic cache solution. It can easily be used as a building block within your own .NET-based application.
Introducing the Design Goals
The cache is designed to:
- Decouple the front-end application interface from the internal implementation of the cache storage and management functions.
- Provide best practices for a high performance, scalable caching solution.
- Provide support for cache-specific features, such as dependencies and expirations, and enable you to write your own expiration and dependency implementations.
- Provide support for cache management features such as scavenging.
- Enable you to design your own cache storage solution by implementing the storage class interfaces provided.
- Enable you to design your own cache scavenging algorithms by implementing the classes and interfaces provided.
This design should enable you to create reusable cache mechanisms for your .NET distributed applications.
Introducing the Solution Blueprint
The solution blueprint details the design of a cache that meets the goals described in the preceding section. This section describes the main cache components, deployment scenarios, custom cache design, custom cache configuration, and custom cache use cases.
Introducing the Main Cache Components
The design of the custom cache includes the cache blocks shown in Figure 6.1.
Figure 6.1. Custom cache block diagram
These components communicate together to provide the overall caching system. The following list describes each of these cache blocks:
- Cache manager—The cache manager provides the application interface to the cache itself. Classes and interfaces included in this package provide the interfaces required for adding, retrieving, and removing items and metadata to and from the cache.
- Cache service—The cache service is responsible for managing cache metadata. The cache service can be deployed either in the same AppDomain as the cache manager or in a different process, depending on the required storage and its scope.
- Cache storage—The cache storage separates the cache data store from the cache functional implementation. The cache storage handles insertion, retrieval, and deletion of cached items to and from the cache data store.
Understanding Deployment Scenarios
One of the design goals of the custom cache is to enable different types of cache storages, such as static variables, memory-mapped files, and Microsoft SQL Server. To enable optimal use of these types of storages, different deployment methods are required.
Three deployment scenarios that you can use for your custom cache are:
- AppDomain scope deployment
- Machine scope deployment
- Application farm scope deployment
The next sections describe each of these scenarios.
Using AppDomain Scope Deployment
When using a static variable cache, you should deploy the three blocks in the same AppDomain as the application as shown in Figure 6.2.
Figure 6.2. AppDomain scope deployment
Because the storage is accessed only from threads executing in its AppDomain, this deployment option provides optimal performance.
Using Machine Scope Deployment
Storage with a machine scope requires a different deployment method. Because the storage may be shared across multiple applications that reside in different AppDomains on the same computer, the CacheManager is deployed in each application's AppDomain while the CacheService is deployed in a single, separate AppDomain. To enable access to the shared storage, a CacheStorage proxy is deployed in each application AppDomain. This is shown in Figure 6.3.
Figure 6.3. Machine scope deployment
In this configuration, every application can access the storage directly while the metadata is managed separately from a single location.
Using Application Farm Scope Deployment
In this deployment scenario the storage is implemented as a SQL Server durable storage system as shown in Figure 6.4.
Figure 6.4. Application farm scope deployment
In application farm scope deployment, the CacheManager is deployed in each application's AppDomain, and the CacheService is deployed on the same computer as the SQL Server. To enable access to the shared storage, a CacheStorage proxy is deployed in each application's AppDomain.
Understanding the Custom Cache Detailed Design
This section describes the detailed design of the custom cache starting with the class diagram and then looking at the classes and interfaces that constitute the custom cache. Finally, the cache use cases are overviewed with sequence diagrams.
Figure 6.5 shows the three main classes used in a custom cache and the functionality that each one provides.
Figure 6.5. Class diagram
The next sections describe each of these classes in detail.
Using the CacheManager Class
The CacheManager class has references to each of the other classes. It references the:
- CacheStorage for inserting, getting, and removing items from the cache storage.
- CacheService for registering cache item metadata.
The definition of this class is shown in Figure 6.6.
Figure 6.6. The CacheManager class
For each cache operation, the CacheManager first synchronously calls the CacheStorage to update the cache data store. After that, it asynchronously calls the CacheService to update the cached items' expirations, priority, and callback metadata in the cache metadata store. This design provides the quickest possible response times to the cache client by performing any operations on the cache metadata after returning the control to the cache client.
The CacheManager is a static variable within the scope of the application, and as such, the cache can be accessed from any class or thread concurrently without the need to recreate the CacheManager class multiple times. It is important to note that the CacheManager does not hold any state and is simply a front-end interface to the cache.
The CacheManager class implements the following default interface:
- Add—The Add method inserts an item into the cache. The Add method enables you to add items to the cache with or without metadata. In the simplest case, you can use the Add method with a key/value pair.
- Remove—The Remove method removes a specific item and its metadata from the cache identified by the provided key.
- Clear—The Clear method clears all items and their metadata from the cache.
- Get—The Get method retrieves a specific item from the cache identified by the provided key. This method returns an object reference to that item in the cache.
- GetItem—The GetItem method returns a specific CacheItem object identified by the provided key. This object includes the metadata associated with the cached item alongside the cached item value.
The CacheManager class provided in the custom cache does not need to be changed regardless of the storage technology you choose and your deployment model.
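The default interface listed above can be sketched as a C# class declaration. This is an illustrative sketch only; the member names follow the text, but the exact signatures (for example, the metadata arguments to Add) are assumptions rather than the shipped implementation:

```csharp
// Hypothetical sketch of the CacheManager interface described above.
// CacheItemPriority and CacheItemRemovedCallback are assumed supporting
// types (an enumeration and a delegate, respectively).
public class CacheManager
{
    // Simplest case: add a key/value pair without metadata.
    public void Add( string key, object value ) { /* ... */ }

    // Add an item together with its expiration, priority, and callback.
    public void Add( string key, object value,
                     ICacheItemExpiration expiration,
                     CacheItemPriority priority,
                     CacheItemRemovedCallback callback ) { /* ... */ }

    public void Remove( string key ) { /* ... */ }
    public void Clear() { /* ... */ }

    // Returns a reference to the cached value only.
    public object Get( string key ) { /* ... */ }

    // Returns the value together with its metadata.
    public CacheItem GetItem( string key ) { /* ... */ }
}
```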
Using the CacheService Class
The CacheService class is responsible for managing the metadata associated with cached items. This metadata may include expiration policies on cached items, cached item priorities, and callbacks.
The definition of this class is shown in Figure 6.7.
Figure 6.7. The CacheService class
The CacheService class is implemented as a singleton, meaning that only one instance of this class can exist for the cache. The instantiation of the CacheService may change, depending on the scope of your cache data store and the deployment scenario. If the scope of your cache data store is an AppDomain (for example, when using static variable based cache storage systems) the CacheService class can exist within the AppDomain of your application and cache. On the other hand, if your cache store is shared across multiple AppDomains or processes (for example, when using memory-mapped files or SQL Server based cache storage systems), the CacheService class needs to be instantiated by a separate process and it is accessible from all processes using the cache. Note that each AppDomain or process using the cache has its own instance of the CacheManager class regardless of the deployment scenario.
The CacheService implements a default asynchronous remoting interface, which includes the following methods:
- Add—The Add method has the following cache items metadata arguments: key, expiration, priority, and callback. These arguments are used by the CacheService to invalidate expired items and to notify the calling application when items are invalidated.
- Remove—The Remove method removes a specific item's metadata from the cache service identified by the provided key.
- Clear—The Clear method clears all metadata from the CacheService.
- Get—The Get method returns a specified CacheItem object identified by the provided key. This object includes the metadata associated with the cached item.
- Notify—The Notify method is used by the CacheManager to indicate that a cached item was accessed. This notification is passed by the CacheService to the expiration object associated with a cached item. For example, this notification can be used to manage sliding expiration implementations where the expiration time is reset every time the cached item is accessed.
These methods provide all of the functionality that you need to implement a class to handle the metadata associated with cached items.
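The CacheService methods listed above can be summarized in an interface declaration. The argument types below are assumptions inferred from the surrounding text, not the shipped remoting contract:

```csharp
// Hypothetical declaration of the CacheService remoting interface.
internal interface ICacheService
{
    // Registers a cached item's metadata; the value itself stays in storage.
    void Add( string key,
              ICacheItemExpiration expiration,
              CacheItemPriority priority,
              CacheItemRemovedCallback callback );

    void Remove( string key );
    void Clear();

    // Returns the metadata wrapper for the given key.
    CacheItem Get( string key );

    // Signals that a cached item was accessed (for sliding expirations,
    // scavenging statistics, and so on).
    void Notify( string key );
}
```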
Using the CacheStorage Class
The CacheStorage implementation separates the cache functionality from the cache data store. In the custom cache, three cache data stores are presented:
- Static variables—This is the simplest form of cache data store. The static variable data store is an in-process data store that can be used only within the scope of the application domain. For more information about static variables based caching, see Chapter 2, "Understanding Caching Technologies."
- Memory-mapped files—The memory-mapped files data store is a fast access memory data store with cross-process capabilities. A cache based on memory-mapped files is limited to the scope of the computer on which it is running. For more information about caching using memory-mapped files, see Chapter 2, "Understanding Caching Technologies."
- SQL Server—The SQL Server cache data store is the only data store that provides data persistency, meaning that your cached data is not lost if the data store computer fails. The scope of a SQL Server-based cache is the server farm. For more information about SQL Server-based caching, see Chapter 2, "Understanding Caching Technologies."
Every cache storage implementation implements the following interfaces:
- ICacheStorage
- ICacheMetadata
The next sections describe each of these interfaces.
Using the ICacheStorage Interface
The ICacheStorage interface is used to store only the cached data. The interface definition of this is shown in Figure 6.8.
Figure 6.8. ICacheStorage interface
The ICacheStorage has the following interface:
- Add—The Add method inserts a key/value pair into the storage.
- Remove—The Remove method takes a key as input and removes the stored item value identified by this key from the storage.
- Clear—The Clear method clears all items from the storage.
- Get—The Get method takes a cached item's key as input and returns the stored item value identified by this key from the storage.
- GetSize—The GetSize method is used by the IScavengingAlgorithm implementation.
These methods are used by the CacheManager to store cache items.
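The ICacheStorage members described above can be sketched as follows. The signatures are assumptions based on the text (for example, GetSize is assumed to return the storage size in bytes for the scavenger):

```csharp
// Hypothetical sketch of the ICacheStorage interface.
public interface ICacheStorage
{
    void Add( string key, object value );
    void Remove( string key );
    void Clear();
    object Get( string key );

    // Current size of the data store, consumed by the
    // IScavengingAlgorithm implementation.
    long GetSize();
}
```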
Using the ICacheMetadata Interface
The CacheStorage can also implement the ICacheMetadata interface. The interface definition of this is shown in Figure 6.9. This is used by the CacheService to provide persistency for the metadata. This interface should be implemented only by storages that are persistent, such as SQL Server-based storage systems. In such cases, the metadata has to be persisted alongside the cached items.
Figure 6.9. ICacheMetadata interface
The ICacheMetadata interface includes the following methods:
- Add—The Add method inserts a cached item's metadata to the persistent store.
- Remove—The Remove method removes an item's metadata from the persistent storage. This method is used by the CacheService when an item is removed from cache.
- Clear—The Clear method clears all metadata information from the storage.
- Get—The Get method retrieves all metadata from the storage. This method is used by the CacheService to initialize cache metadata from the persistent store after a power fail or process recycle.
You need to implement this interface only when you are using persistent storage systems, for example, SQL Server.
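A persistent storage such as SQL Server would implement the metadata methods along these lines; the exact types are assumptions, and Get is assumed to return the full metadata set keyed by cache key:

```csharp
// Hypothetical sketch of the ICacheMetadata interface.
public interface ICacheMetadata
{
    // Persists a cached item's metadata alongside the item itself.
    void Add( CacheItem itemMetadata );
    void Remove( string key );
    void Clear();

    // Returns all persisted metadata so the CacheService can rebuild
    // its in-memory state after a power failure or process recycle.
    System.Collections.Hashtable Get();
}
```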
Using the CacheItem Class
The CacheItem class is a wrapper class around a cached item's value and metadata. The class declaration is shown in Figure 6.10.
Figure 6.10. CacheItem class
An object of this type is returned by the CacheManager class and the CacheService class when the item's metadata is requested using the GetItem method.
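As a wrapper, the class can be pictured as a simple aggregate of the value and its metadata. The member names below are assumptions consistent with the metadata described elsewhere in this chapter:

```csharp
// Hypothetical sketch of the CacheItem wrapper class.
public class CacheItem
{
    public string Key;
    public object Value;

    // Metadata managed by the CacheService.
    public ICacheItemExpiration Expiration;
    public CacheItemPriority Priority;
    public CacheItemRemovedCallback Callback;
}
```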
Using the ICacheItemExpiration Interface
This interface needs to be implemented by your expiration class. The interface definition is shown in Figure 6.11.
Figure 6.11. ICacheItemExpiration interface
The ICacheItemExpiration interface has the following members:
- HasExpired—This method is periodically called by the CacheService to check whether a cached item has expired. The service scans the cache metadata for each cache item that has an expiration attached to it and checks whether it has expired. If the item has expired, the service removes the item from cache.
- Notify—This method is used by the CacheService to notify the expiration object attached to a cached item that the item has been accessed. For example, this is useful for sliding expiration schemes where the expiration time resets every time the item is accessed.
- OnChange—This event is listened to by the CacheService that, in response, invalidates the cached item and removes it from the cache.
You can use these methods to manage the expiration of your cached items.
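The interface, together with a sliding expiration policy of the kind mentioned under Notify, might look like the following sketch. The SlidingExpiration class is a hypothetical example, not part of the described framework:

```csharp
using System;

// Hypothetical sketch of the ICacheItemExpiration interface.
public interface ICacheItemExpiration
{
    bool HasExpired();           // polled periodically by the CacheService
    void Notify();               // the cached item was accessed
    event EventHandler OnChange; // raised to force invalidation
}

// Example: a sliding expiration that restarts its window on each access.
public class SlidingExpiration : ICacheItemExpiration
{
    private TimeSpan window;
    private DateTime lastAccess;
    public event EventHandler OnChange;

    public SlidingExpiration( TimeSpan window )
    {
        this.window = window;
        this.lastAccess = DateTime.UtcNow;
    }

    public bool HasExpired()
    {
        return DateTime.UtcNow - lastAccess > window;
    }

    public void Notify()
    {
        // Item accessed; restart the expiration window.
        lastAccess = DateTime.UtcNow;
    }
}
```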
Using the IScavengingAlgorithm Interface
This interface needs to be implemented by your scavenging algorithm class. The interface definition of this is shown in Figure 6.12.
Figure 6.12. IScavengingAlgorithm interface
The IScavengingAlgorithm interface includes the following methods:
- Add—The Add method inserts a new key with its priority metadata to the scavenger class.
- Remove—The Remove method removes item metadata from the scavenger class.
- Clear—The Clear method clears all metadata information from the scavenger class.
- Scavenge—The Scavenge method is called by the CacheService and is used to run the scavenging algorithm that removes items from the cache to free cache storage resources.
- Notify—The Notify method is called by the CacheService to notify the scavenging algorithm object that a cached item was accessed. The scavenging algorithm uses this information when performing the scavenging process to decide which items should be removed from cache based on the scavenging policy (LRU, LFU, or any other implementation).
These methods are used by the CacheService class to execute the scavenging process when cache storage resources become scarce.
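The scavenging contract described above can be sketched as a C# interface. The signatures are assumptions; a concrete implementation such as the LruScavenging class referenced in the configuration section would track access order internally:

```csharp
// Hypothetical sketch of the IScavengingAlgorithm interface.
public interface IScavengingAlgorithm
{
    // Registers a new key and its priority with the scavenger.
    void Add( string key, CacheItemPriority priority );
    void Remove( string key );
    void Clear();

    // Runs the scavenging policy (LRU, LFU, or another implementation)
    // and removes items to free cache storage resources.
    void Scavenge();

    // Records that a cached item was accessed, so the policy can decide
    // later which items are the best candidates for removal.
    void Notify( string key );
}
```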
Using Additional Support Classes
In addition to the classes described so far, there are several other classes supporting the design, as shown in Figure 6.13.
Figure 6.13. Supporting classes
These classes include delegates and enumerations.
Configuring a Custom Cache
As part of the custom cache implementation, there is an option to use custom configuration settings to customize the initialization and behavior of your custom cache. The explanations for the purpose and function of each field in the configuration file appear in the file, as shown in the following configuration sections.
<configuration>
<configSections>
<section name="CacheManagerSettings"
type="Microsoft.ApplicationBlocks.Cache.CacheConfigurationHandler,
Microsoft.ApplicationBlocks.Cache" />
</configSections>
<CacheManagerSettings>
<!-- STORAGE SETTINGS
Use StorageInfo to set the assembly and class which implement the storage interfaces for the cache.
Modes: InProc, OutProc, SqlServer
-->
<StorageInfo
AssemblyName="Microsoft.ApplicationBlocks.Cache"
ClassName="Microsoft.ApplicationBlocks.Cache.Storages.SingletonCacheStorage"
Mode="InProc"
ConnectionString="Data Source=localhost;Database=cacheab;User ID=sa;Password=;"
ServiceConnectionString="tcpip=127.0.0.1:8282"
/>
<!-- SCAVENGING SETTINGS
Use the ScavengingAlgorithm to set the class that will be executed when scavenging is performed.
-->
<ScavengingInfo
AssemblyName="Microsoft.ApplicationBlocks.Cache"
ClassName="Microsoft.ApplicationBlocks.Cache.Scavenging.LruScavenging"
UtilizationForScavenging="80"
/>
<!-- EXPIRATION SETTINGS
Use the Interval attribute to change how often the cache checks for expired items. The value is expressed in seconds.
-->
<ExpirationInfo
Interval="10"
/>
</CacheManagerSettings>
</configuration>
Because this is an XML file, it is easy to view and update without recompiling or redistributing sections of your caching system.
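The configSections entry above maps the CacheManagerSettings section to the CacheConfigurationHandler class. A section handler of this kind is assumed to work roughly as follows; the body below is an illustrative sketch, not the shipped implementation:

```csharp
using System;
using System.Configuration;
using System.Reflection;
using System.Xml;

// Hypothetical sketch of a configuration section handler that reads the
// StorageInfo element and instantiates the configured storage class.
public class CacheConfigurationHandler : IConfigurationSectionHandler
{
    public object Create( object parent, object configContext,
                          XmlNode section )
    {
        XmlNode storage = section.SelectSingleNode( "StorageInfo" );

        // Read the assembly and class that implement the storage
        // interfaces, then create the storage via reflection.
        string assemblyName = storage.Attributes[ "AssemblyName" ].Value;
        string className   = storage.Attributes[ "ClassName" ].Value;

        Type storageType =
            Assembly.Load( assemblyName ).GetType( className, true );
        return Activator.CreateInstance( storageType );
    }
}
```

At run time, the application would retrieve the parsed section with ConfigurationSettings.GetConfig("CacheManagerSettings"), which invokes the handler's Create method.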
Understanding Custom Cache Use Cases
This section presents the major use cases of a custom cache.
The use case shown in Figure 6.14 details the sequence of events when the cache is initialized.
Figure 6.14. Cache initialization use case
The use case shown in Figure 6.15 details the sequence of events when an item is added to the cache.
Figure 6.15. Add item use case
The use case shown in Figure 6.16 details the sequence of events when an item is removed from the cache.
Figure 6.16. Remove item use case
The use case shown in Figure 6.17 details the sequence of events when an item is retrieved from the cache.
Figure 6.17. Get item use case
The use case shown in Figure 6.18 details the sequence of events when an item with metadata is retrieved from the cache.
Figure 6.18. Get item and metadata use case
After you design and implement a cache for your application, you may decide that the data in there is sensitive to your organization. If this is the case, you should add some code to secure the contents, just as you would secure them in their natural storage mechanism.
Securing a Custom Cache
This section of the guide describes how to secure your cached items against data tampering and data spoofing. You should ensure that any data that is secured in its original format is also secured when being transmitted to and from the cache and when stored inside the cache.
This section describes two methods of signing and verifying cache items to prevent tampering, and also how to encrypt data to prevent spoofing.
Signing Cache Items
You can use cache item signing to ensure that your cached data is not tampered with. To do this, all cache clients share a secret key that is used to compute the hash code of the cached item. This hash code is stored in the cache alongside the cached item. When a client retrieves the item from the cache, it recomputes the hash using the secret key; if the two hash codes are not identical, the cached item has been tampered with.
The following code samples show how to sign and verify your cache items using different signing techniques.
Implementing MACTripleDES Data Signing and Verifying
Use the following code to sign a cache item.
public byte[] SignDataMAC( byte[] macKey, string item )
{
    // Requires the System.Text and System.Security.Cryptography namespaces.
    // Encode the data to be hashed.
    byte[] hashedData = Encoding.ASCII.GetBytes( item );
    // Compute the keyed hash.
    MACTripleDES mac = new MACTripleDES( macKey );
    byte[] hash = mac.ComputeHash( hashedData, 0, hashedData.Length );
    return hash;
}
Use the following code to verify a MACTripleDES signed item.
public bool VerifySignDataMAC( byte[] macKey,
                               byte[] cachedElementHash,
                               string cachedElement )
{
    byte[] hashedData = Encoding.ASCII.GetBytes( cachedElement );
    // Compute the hash.
    MACTripleDES mac = new MACTripleDES( macKey );
    byte[] newHash = mac.ComputeHash( hashedData, 0, hashedData.Length );
    // Compare the hashes byte by byte.
    if( newHash.Length != cachedElementHash.Length ) return false;
    for( int i = 0; i < newHash.Length; i++ )
    {
        if( newHash[ i ] != cachedElementHash[ i ] ) return false;
    }
    return true;
}
Using this code, you can ensure that your application is aware of any tampering that occurs to items stored in your cache.
Implementing XML Data Signing and Verifying
Use the following code to sign a cache item.
public string SignDataXml( RSA key, XmlNode xmlNode )
{
    // Requires the System.Security.Cryptography and
    // System.Security.Cryptography.Xml namespaces.
    // Create the SignedXml message.
    SignedXml signedXml = new SignedXml();
    signedXml.SigningKey = key;
    // Create a data object to hold the data to sign.
    DataObject dataObject = new DataObject();
    dataObject.Data = xmlNode.ChildNodes;
    dataObject.Id = "SignedCacheElement";
    // Add the data object to the signature.
    signedXml.AddObject( dataObject );
    // Create a reference to package everything into the message.
    Reference reference = new Reference();
    reference.Uri = "#SignedCacheElement";
    // Add it to the message.
    signedXml.AddReference( reference );
    // Add a KeyInfo.
    KeyInfo keyInfo = new KeyInfo();
    keyInfo.AddClause( new RSAKeyValue( key ) );
    signedXml.KeyInfo = keyInfo;
    // Compute the signature.
    signedXml.ComputeSignature();
    return signedXml.GetXml().OuterXml;
}
Use the following code to verify a SignedXML signed item.
public bool VerifySignDataXml( string cachedData )
{
    XmlDocument cachedDoc = new XmlDocument();
    cachedDoc.LoadXml( cachedData );
    // Load the XML document into a SignedXml instance.
    SignedXml signedXml = new SignedXml();
    signedXml.LoadXml( (XmlElement)cachedDoc.ChildNodes[0] );
    // Check the signature.
    return signedXml.CheckSignature();
}
Using this code, you can ensure that your application is aware of any tampering that occurs to items stored in your cache.
Encrypting Cached Items
You can use encryption to protect your cached data against data spoofing. Before inserting your item into the cache, you encrypt it using a secret key shared between all cache clients. When a client gets a cached item, it decrypts the item using that same key prior to using it.
The following code samples show how to encrypt and decrypt your cached data.
Implementing General Type Definitions
The following types are used in the encryption and decryption code.
#region P/Invoke structures
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Unicode)]
internal struct DATA_BLOB
{
public int cbData;
public IntPtr pbData;
}
[StructLayout(LayoutKind.Sequential, CharSet=CharSet.Unicode)]
internal struct CRYPTPROTECT_PROMPTSTRUCT
{
public int cbSize;
public int dwPromptFlags;
public IntPtr hwndApp;
public String szPrompt;
}
#endregion
#region External methods
[DllImport("Crypt32.dll", SetLastError=true, CharSet=CharSet.Auto)]
private static extern bool CryptProtectData(
ref DATA_BLOB pDataIn,
String szDataDescr,
ref DATA_BLOB pOptionalEntropy,
IntPtr pvReserved,
ref CRYPTPROTECT_PROMPTSTRUCT pPromptStruct,
int dwFlags,
ref DATA_BLOB pDataOut);
[DllImport("Crypt32.dll", SetLastError=true, CharSet=CharSet.Auto)]
private static extern bool CryptUnprotectData(
ref DATA_BLOB pDataIn,
String szDataDescr,
ref DATA_BLOB pOptionalEntropy,
IntPtr pvReserved,
ref CRYPTPROTECT_PROMPTSTRUCT pPromptStruct,
int dwFlags,
ref DATA_BLOB pDataOut);
[DllImport("kernel32.dll", CharSet=CharSet.Auto)]
private unsafe static extern int FormatMessage(int dwFlags,
ref IntPtr lpSource,
int dwMessageId,
int dwLanguageId,
ref String lpBuffer,
int nSize,
IntPtr *Arguments);
#endregion
#region Constants
public enum Store {Machine = 1, User};
static private IntPtr NullPtr = ((IntPtr)((int)(0)));
private const int CRYPTPROTECT_UI_FORBIDDEN = 0x1;
private const int CRYPTPROTECT_LOCAL_MACHINE = 0x4;
#endregion
These types must be declared for the encryption and decryption code to function.
Implementing Item Data Encryption
Use the following code to encrypt your data before storing it in the cache.
public byte[] Encrypt(byte[] plainText, byte[] optionalEntropy)
{
    bool retVal = false;
    DATA_BLOB plainTextBlob = new DATA_BLOB();
    DATA_BLOB cipherTextBlob = new DATA_BLOB();
    DATA_BLOB entropyBlob = new DATA_BLOB();
    CRYPTPROTECT_PROMPTSTRUCT prompt = new CRYPTPROTECT_PROMPTSTRUCT();
    prompt.cbSize = Marshal.SizeOf(typeof(CRYPTPROTECT_PROMPTSTRUCT));
    prompt.dwPromptFlags = 0;
    prompt.hwndApp = NullPtr;
    prompt.szPrompt = null;
    int dwFlags;
    try
    {
        try
        {
            int bytesSize = plainText.Length;
            plainTextBlob.pbData = Marshal.AllocHGlobal(bytesSize);
            if(IntPtr.Zero == plainTextBlob.pbData)
                throw new Exception("Unable to allocate plaintext buffer.");
            plainTextBlob.cbData = bytesSize;
            Marshal.Copy(plainText, 0, plainTextBlob.pbData, bytesSize);
        }
        catch(Exception ex)
        {
            throw new Exception("Exception marshalling data. " +
                ex.Message);
        }
        dwFlags = CRYPTPROTECT_LOCAL_MACHINE|CRYPTPROTECT_UI_FORBIDDEN;
        // Check to see if the entropy is null.
        if(null == optionalEntropy)
        {
            // Allocate something.
            optionalEntropy = new byte[0];
        }
        try
        {
            int bytesSize = optionalEntropy.Length;
            entropyBlob.pbData = Marshal.AllocHGlobal(bytesSize);
            if(IntPtr.Zero == entropyBlob.pbData)
                throw new Exception("Unable to allocate entropy data buffer.");
            Marshal.Copy(optionalEntropy, 0, entropyBlob.pbData, bytesSize);
            entropyBlob.cbData = bytesSize;
        }
        catch(Exception ex)
        {
            throw new Exception("Exception entropy marshalling data. " +
                ex.Message);
        }
        retVal = CryptProtectData( ref plainTextBlob, "", ref entropyBlob,
            IntPtr.Zero, ref prompt, dwFlags, ref cipherTextBlob);
        if(false == retVal) throw new Exception("Encryption failed.");
    }
    catch(Exception ex)
    {
        throw new Exception("Exception encrypting. " + ex.Message);
    }
    finally
    {
        // Free the unmanaged input buffers.
        if(IntPtr.Zero != plainTextBlob.pbData)
            Marshal.FreeHGlobal(plainTextBlob.pbData);
        if(IntPtr.Zero != entropyBlob.pbData)
            Marshal.FreeHGlobal(entropyBlob.pbData);
    }
    byte[] cipherText = new byte[cipherTextBlob.cbData];
    Marshal.Copy(cipherTextBlob.pbData, cipherText, 0,
        cipherTextBlob.cbData);
    // Note: cipherTextBlob.pbData is allocated by CryptProtectData and
    // should be released with LocalFree when no longer needed.
    return cipherText;
}
This code ensures that the data stored in the cache cannot be read without access to the shared key.
Implementing Item Data Decryption
Use the following code to decrypt your data before using it in the client.
public byte[] Decrypt(byte[] cipherText, byte[] optionalEntropy)
{
    bool retVal = false;
    DATA_BLOB plainTextBlob = new DATA_BLOB();
    DATA_BLOB cipherBlob = new DATA_BLOB();
    CRYPTPROTECT_PROMPTSTRUCT prompt = new CRYPTPROTECT_PROMPTSTRUCT();
    prompt.cbSize = Marshal.SizeOf(typeof(CRYPTPROTECT_PROMPTSTRUCT));
    prompt.dwPromptFlags = 0;
    prompt.hwndApp = NullPtr;
    prompt.szPrompt = null;
    try
    {
        try
        {
            int cipherTextSize = cipherText.Length;
            cipherBlob.pbData = Marshal.AllocHGlobal(cipherTextSize);
            if(IntPtr.Zero == cipherBlob.pbData)
                throw new Exception("Unable to allocate cipherText buffer.");
            cipherBlob.cbData = cipherTextSize;
            Marshal.Copy(cipherText, 0, cipherBlob.pbData,
                cipherBlob.cbData);
        }
        catch(Exception ex)
        {
            throw new Exception("Exception marshalling data. " +
                ex.Message);
        }
        DATA_BLOB entropyBlob = new DATA_BLOB();
        int dwFlags;
        dwFlags = CRYPTPROTECT_LOCAL_MACHINE|CRYPTPROTECT_UI_FORBIDDEN;
        // Check to see if the entropy is null.
        if(null == optionalEntropy)
        {
            // Allocate something.
            optionalEntropy = new byte[0];
        }
        try
        {
            int bytesSize = optionalEntropy.Length;
            entropyBlob.pbData = Marshal.AllocHGlobal(bytesSize);
            if(IntPtr.Zero == entropyBlob.pbData)
                throw new Exception("Unable to allocate entropy buffer.");
            entropyBlob.cbData = bytesSize;
            Marshal.Copy(optionalEntropy, 0, entropyBlob.pbData,
                bytesSize);
        }
        catch(Exception ex)
        {
            throw new Exception("Exception entropy marshalling data. " +
                ex.Message);
        }
        retVal = CryptUnprotectData(ref cipherBlob, null, ref entropyBlob,
            IntPtr.Zero, ref prompt, dwFlags, ref plainTextBlob);
        if(false == retVal) throw new Exception("Decryption failed.");
        // Free the blob and entropy buffers.
        if(IntPtr.Zero != cipherBlob.pbData)
            Marshal.FreeHGlobal(cipherBlob.pbData);
        if(IntPtr.Zero != entropyBlob.pbData)
            Marshal.FreeHGlobal(entropyBlob.pbData);
    }
    catch(Exception ex)
    {
        throw new Exception("Exception decrypting. " + ex.Message);
    }
    byte[] plainText = new byte[plainTextBlob.cbData];
    Marshal.Copy(plainTextBlob.pbData, plainText, 0,
        plainTextBlob.cbData);
    // Note: plainTextBlob.pbData is allocated by CryptUnprotectData and
    // should be released with LocalFree when no longer needed.
    return plainText;
}
This code uses the shared key to decrypt the data that is encrypted in the cache.
After you implement your cache, and optionally secure it, you are ready to start using it. As with all pieces of software, you should monitor the performance of your cache and be prepared to tune any aspects that do not behave as expected.
Monitoring a Cache
Monitoring your cache usage and performance can help you understand whether your cache is performing as expected and helps you to fine tune your cache solution. This section of the guide describes which cache parameters should be monitored and what to look for when monitoring your cache.
Implementing Performance Counters
Implementing performance counters in your custom cache is the most effective way to monitor it. This section explains which performance counters to implement in your custom cache and what to look for when monitoring these counters.
Using the Total Cache Size Counter
This counter indicates the overall size of your cache in bytes. As the size of your cache increases, unused items usually need to be removed. Removal is implemented by a process referred to as scavenging, which performs remove operations on the cache. Scavenging locks items in the cache and may degrade performance while the scavenging algorithm is running.
Using the Total Cache Entries Counter
This counter indicates the overall number of items in your cache. This performance counter on its own does not provide enough information to reach any conclusion regarding your cache performance. However, when combined with other counters, it can provide valuable information.
Using the Cache Hit/Miss Rate Counter
A cache hit occurs when you request an item from the cache and that item is available and returned to you. A cache miss occurs when you request an item from the cache and that item is not available. Obviously, your cache hit rate should be as high as possible and cache miss rate as low as possible. The higher your hit rate, the more effective your cache is performing because the cache can serve more of the items that your application needs. These counters indicate whether your application is using the cache effectively. If you observe low hit rates or high miss rates, consider the following points to improve your cache hit/miss ratio:
- Check your cache loading technique. If required items are not readily available in the cache, you may be using an inappropriate loading mechanism. For more information about the mechanisms available, see Chapter 5, "Managing the Contents of a Cache".
- Increase the expiration time of your cached items. This retains items in cache for a longer period, resulting in an increased hit rate.
After you tune the caching mechanism to improve the hit rate, you should continue to monitor these figures for any unexpected changes.
Using the Cache Turnover Counter
The cache turnover rate refers to the number of insertions and deletions of items from the cache per second. A high cache turnover rate, together with a low cache hit rate, indicates that items are added and removed from cache frequently but are seldom used from cache, resulting in inefficient use of the cache.
Using the Cache Insert Time Counter
The cache insert time refers to the time it takes the cache to store your item and its metadata in the cache data store. This counter should be as low as possible so that the cache is responsive to the application. High cache insert times adversely affect the performance of your application and defeat the purpose of caching in the first place.
High cache insert times can indicate a problem with your cache implementation. You should note that the cache insert time should be constant regardless of the number of items in cache.
Using the Cache Retrieve Time Counter
The cache retrieve time refers to the time it takes the cache to get an item from the cache data store and return it to the client. Note that this time is equally important in cache misses where the requested item is not available in cache. This number should be as low as possible for improved performance of your application. High cache retrieve times limit the usefulness of your cache because they degrade your application's performance.
Cache retrieve times should remain constant regardless of the number of items in your cache.
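Both insert and retrieve times can be captured with the System.Diagnostics.Stopwatch class around the store operations. The sketch below shows the idea; the cache class and its property names are illustrative, and in a real cache each sample would feed an averaging performance counter rather than a single property:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Wraps a simple store and records how long inserts and retrieves take.
public class TimedCache
{
    private readonly Dictionary<string, object> _store = new Dictionary<string, object>();

    public double LastInsertMs { get; private set; }
    public double LastRetrieveMs { get; private set; }

    public void Insert(string key, object value)
    {
        Stopwatch sw = Stopwatch.StartNew();
        lock (_store) { _store[key] = value; }
        sw.Stop();
        LastInsertMs = sw.Elapsed.TotalMilliseconds;
    }

    public bool TryGet(string key, out object value)
    {
        Stopwatch sw = Stopwatch.StartNew();
        bool found;
        lock (_store) { found = _store.TryGetValue(key, out value); }
        sw.Stop();
        // Misses are timed too: retrieve time matters even when
        // the requested item is not in the cache.
        LastRetrieveMs = sw.Elapsed.TotalMilliseconds;
        return found;
    }
}
```

If either timing grows with the number of cached items, the underlying data structure is the likely culprit, since both operations should run in constant time.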
Note To implement performance counters in your custom cache, use the System.Diagnostics.PerformanceCounter class in the .NET Framework. For samples, see "How To: Diagnostics" in the MSDN Library.
Monitoring Your Cache Performance
After you implement performance counters into your custom cache, you can use the Windows performance monitor application (Perfmon) to view and analyze your cache performance data.
You will usually decide to monitor your cache when it is not delivering the expected performance. The following steps present a recommended scenario for monitoring your cache performance.
Note Chapter 7, "Appendix," includes cache performance data and graphs that were collected and analyzed under varying cache loads.
To monitor cache performance
- Monitor the cache insert and retrieve times under different cache loads (for example, number of items and size of cache) to identify where your performance problem is coming from.
- Check your cache hit/miss ratio. If this is low, it indicates that items are rarely in cache when you need them. Possible causes for this include:
- Your cache loading technique is not effective.
- Your maximum allowed cache size is too small, causing frequent scavenging operations, which results in cached items being removed to free up memory.
- Check your cache turnover rate. If this is high, it indicates that items are inserted and removed from cache at a high rate. Possible causes for this include:
- Your maximum allowed cache size is too small, causing frequent scavenging operations, which result in cached items being removed to free up memory.
- Faulty application design, resulting in improper use of the cache.
Regular monitoring of your cache should highlight any changes in data use and any bottlenecks that these might introduce. This is the main management task associated with the post-deployment phase of using a caching system.
Synchronizing Caches in a Server Farm
A common problem for developers of distributed applications is how to synchronize cached data across all servers in the farm. Generally speaking, if your cache needs to be synchronized across the server farm, it almost always means that your original design is faulty. You should design your application with clustering in mind and avoid such situations in the first place.
However, if you have one of those rare situations where such synchronization is absolutely required, you should use file dependencies to invalidate the cache when the information in the main data store changes.
To create file dependencies for cache synchronization
- Create a database trigger that is activated when a record in your data store is changed.
- Implement this trigger to create an empty file in the file system to be used for notification. This file should be placed on the computer running SQL Server, on a Storage Area Network (SAN), or on another central server.
- Use Application Center replication services to activate a service that copies the file from the central server to all disks in the server farm.
- Make the creation of the file on each server trigger a dependency event to expire the cached item in the ASP.NET cache on each of the servers in the farm.
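Within ASP.NET, the last step is handled by passing a System.Web.Caching.CacheDependency on the notification file to Cache.Insert; the cached item then expires when the file appears or changes. The self-contained sketch below imitates that behavior outside ASP.NET by checking the notification file's last-write time on every retrieval. It is a simplified stand-in for CacheDependency, and the class and member names are our own:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Caches items together with the notification file they depend on.
// An item is treated as expired once the file is created or rewritten
// after the item was cached -- a simplified stand-in for
// System.Web.Caching.CacheDependency.
public class FileDependentCache
{
    private class Entry
    {
        public object Value;
        public string DependencyFile;
        public DateTime CachedAtUtc;
    }

    private readonly Dictionary<string, Entry> _store = new Dictionary<string, Entry>();

    public void Insert(string key, object value, string dependencyFile)
    {
        _store[key] = new Entry
        {
            Value = value,
            DependencyFile = dependencyFile,
            CachedAtUtc = DateTime.UtcNow
        };
    }

    public bool TryGet(string key, out object value)
    {
        value = null;
        Entry entry;
        if (!_store.TryGetValue(key, out entry)) return false;

        // If the notification file was written after the item was cached,
        // the underlying data has changed: evict the item and report a miss.
        if (File.Exists(entry.DependencyFile) &&
            File.GetLastWriteTimeUtc(entry.DependencyFile) >= entry.CachedAtUtc)
        {
            _store.Remove(key);
            return false;
        }

        value = entry.Value;
        return true;
    }
}
```

Polling the timestamp on every read is cheap because the operating system caches file metadata; an event-driven alternative is to watch the notification directory with System.IO.FileSystemWatcher and evict items as soon as the file arrives.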
This method has several advantages, including:
- Using the file system is very efficient because the operating system is doing a lot of disk caching anyway.
- It is very flexible and easy to fine-tune. For example, you can publish the files at timed intervals (instead of on every change) to invalidate several cache items at the same time.
However, because replicating a file across the server farm can take time, it is inefficient in cases where the cached data changes every few seconds.
Summary
This chapter has described the design details for a custom cache framework. You can use this solution as a basis for your own caching systems. It has also introduced how you can secure the contents of a cache and the items as they are transmitted to and from the cache, and it has described methods for monitoring a deployed caching solution.