About In-Role Cache for Azure Cache
Microsoft recommends that all new development use Azure Redis Cache. For current documentation and guidance on choosing an Azure Cache offering, see Which Azure Cache offering is right for me?
In-Role Cache supports hosting caching services on Azure roles. In this model, the cache is part of your cloud service. One role within the cloud service is selected to host In-Role Cache. The running instances of that role join memory resources to form a cache cluster. This private cache cluster is available only to the roles within the same deployment. There are two main deployment topologies for In-Role Cache: co-located and dedicated. Co-located roles also host other non-caching application code and services. Dedicated roles are worker roles that are used only for caching. The following topics discuss these caching topologies in more detail.
For a step-by-step walkthrough of role-based In-Role Cache, see How to Use Azure In-Role Cache. For downloadable samples, see Azure In-Role Cache Samples.
In-Role Cache Concepts
This section provides an overview of three key concepts related to role-based In-Role Cache.
Azure roles have one or more instances. Each instance is a virtual machine that is configured to host the specified role. When a role that has In-Role Cache enabled runs on multiple instances, a cache cluster is formed. A cache cluster is a distributed caching service that uses the combined memory from all of the machines in the cluster. Applications can add and retrieve items from the cache cluster without having to know which machine the item is stored on. If high availability is enabled, a backup copy of the item is automatically stored on a different virtual machine instance.
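The add-and-retrieve pattern described above can be sketched as follows. This is a minimal sketch, assuming a cache client configured as shown later in this topic (a dataCacheClient section named default) and the default cache; the key and value are illustrative.

```csharp
using Microsoft.ApplicationServer.Caching;

// Connect to the "default" cache. The client does not need to know
// which role instance in the cache cluster actually holds an item.
DataCache cache = new DataCache("default");

// Store an item; the cache cluster decides where it is placed (and,
// with high availability enabled, where its backup copy lives).
cache.Put("customer:42", "Contoso");

// Retrieve the item by key from any instance in the cluster.
string value = (string)cache.Get("customer:42");
```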
Only one cache cluster is supported for each cloud service. Although it is possible to set up multiple cache clusters in a cloud service by specifying separate storage accounts for each role, this configuration is not supported.
When you enable In-Role Cache on an Azure role, you specify the amount of memory that can be used for caching. In a co-located scenario, you choose a percentage of the available memory on the virtual machines that host the role. In a dedicated scenario, all of the available memory on the virtual machines is used for caching. However, the available memory is always less than the total physical memory on the virtual machine, because of operating system memory requirements.
Therefore, the total amount of caching memory is the memory reserved for caching on each instance multiplied by the number of running instances. For example, if each instance reserves approximately 1 GB for caching, a role that runs four instances provides approximately 4 GB of caching memory. You can effectively scale the total caching memory up or down by increasing or decreasing the number of running instances for that role.
When scaling down the running instances of the role that hosts In-Role Cache, reduce the instance count by no more than three at a time. After that change completes, you can remove up to three more running instances; repeat until you reach the required number of running instances. Scaling down by more than three instances at a time causes cache cluster instability.
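The stepwise scale-down procedure above can be sketched as a simple loop. This is only an illustration of the guidance, not SDK code: SetInstanceCount and WaitForDeploymentUpdate are hypothetical placeholders for whatever management action you use (the management portal, a management API call, or a script), and the instance counts are examples.

```csharp
// Reduce the caching role from 12 instances to 4, in steps of at most
// three, waiting for each change to complete before the next step.
int current = 12;
int target = 4;
while (current > target)
{
    current = Math.Max(target, current - 3);
    SetInstanceCount("CachingRole1", current);  // hypothetical helper
    WaitForDeploymentUpdate();                  // hypothetical helper
}
```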
Each cache cluster maintains shared information about the cluster's runtime state in Azure storage. During development, you can use the Azure storage emulator. Deployed roles must specify a valid Azure storage account. In Visual Studio, you can specify the appropriate storage account on the Caching tab of the role properties.
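For reference, the storage account chosen on the Caching tab is written into the caching role's service configuration (.cscfg) as the ConfigStoreConnectionString setting of the Caching plugin. The fragment below is a sketch; the role name is an example, and the commented-out value shows the shape of a real storage connection string with placeholder account values.

```xml
<Role name="CachingRole1">
  <ConfigurationSettings>
    <!-- During development, the Azure storage emulator can be used: -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="UseDevelopmentStorage=true" />
    <!-- For a deployed role, specify a valid Azure storage account instead, e.g.
         value="DefaultEndpointsProtocol=https;AccountName=[account];AccountKey=[key]" -->
  </ConfigurationSettings>
</Role>
```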
Every cache cluster has at least one cache named default. With role-based In-Role Cache, you can also add additional named caches. There are various settings that can be changed for each cache. The following screenshot shows the Named Cache Settings section of the Caching tab in the Visual Studio role settings.
In Visual Studio, click the Add Named Cache button to add additional named caches. In the previous example, two additional caches were added, NamedCache1 and NamedCache2. Each cache has different settings. Change the settings by selecting and modifying the specific fields in the table.
Named caches provide flexibility to application designers. Each named cache has its own properties. For example, one cache could enable high availability. Other caches might not require this setting, and high availability doubles the memory required for each cached item. It is a better use of resources to enable high availability only on the caches that require it. There are other similar scenarios where multiple caches could be used with varying properties to meet application requirements.
A cache client is any application code that stores and retrieves items from the cache cluster. With In-Role Cache on roles, cache clients must be part of the same caching role or incorporated into other roles in the deployment. Configure cache clients by using the application or web configuration files. For more information, see How to: Prepare Visual Studio to Use Azure In-Role Cache. The following example shows the dataCacheClient element in a configuration file.
<dataCacheClients>
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="CachingRole1" />
    <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
  </dataCacheClient>
</dataCacheClients>
In the previous example, the autoDiscover element has an identifier attribute set to CachingRole1. This specifies that the CachingRole1 role has In-Role Cache enabled and provides the location of the cache cluster. The cache client uses CachingRole1 automatically in any caching operations.
Once the cache client has been configured, it can access any cache by name. The following example accesses the NamedCache1 cache and adds an item to it.
DataCache cache = new DataCache("NamedCache1", "default");
cache.Put("testkey", "testobject");
The DataCache constructor takes two parameters: the cache name and the dataCacheClient section name. For information about the cache name, see the previous section on Named Caches.
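When the configuration file contains a single dataCacheClient section named default, the section name can typically be omitted. The following sketch assumes that configuration; the key and value are the same illustrative ones used above.

```csharp
using Microsoft.ApplicationServer.Caching;

// With one dataCacheClient section named "default", the single-parameter
// constructor resolves the client configuration automatically.
DataCache cache = new DataCache("NamedCache1");
cache.Put("testkey", "testobject");

// Get returns null when the key is not present in the cache.
object item = cache.Get("testkey");
```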