
Caching Architecture Guide for .NET Framework Applications

 

patterns and practices home

Understanding Caching Concepts

Summary: This chapter introduces the concepts involved in caching, and includes background information to help you understand the issues you face when implementing caching systems.

This chapter introduces the concepts of caching. It is important to be familiar with these concepts before trying to understand the technologies and mechanisms you can use to implement caching in your applications.

This chapter contains the following sections:

  • "Introducing the Problems that Caching Solves"
  • "Understanding State"
  • "Understanding Why Data Should Be Cached"
  • "Understanding Where Data Should Be Cached"
  • "Introducing Caching Considerations"

Introducing the Problems that Caching Solves

When building enterprise-scale distributed applications, architects and developers are faced with many challenges. Caching mechanisms can be used to help you overcome some of these challenges, including:

  • Performance—Caching techniques are commonly used to improve application performance by storing relevant data as close as possible to the data consumer, thus avoiding repetitive data creation, processing, and transportation.

    For example, storing data that does not change, such as a list of countries, in a cache can improve performance by minimizing data access operations and eliminating the need to recreate the same data for each request.

  • Scalability—The same data, business functionality, and user interface fragments are often required by many users and processes in an application. If this information is processed for each request, valuable resources are wasted recreating the same output. Instead, you can store the results in a cache and reuse them for each request. This improves the scalability of your application because as the user base increases, the demand for server resources for these tasks remains constant.

    For example, in a Web application the Web server is required to render the user interface for each user request. You can cache the rendered page in the ASP.NET output cache to be used for future requests, freeing resources to be used for other purposes.

    Caching data can also help scale the resources of your database server. By storing frequently used data in a cache, fewer database requests are made, meaning that more users can be served.

  • Availability—Occasionally the services that provide information to your application may be unavailable. By storing that data in another place, your application may be able to survive system problems such as network failures, Web service outages, or hardware failures.

    For example, each time a user requests information from your data store, you can return the information and also cache the results, updating the cache on each request. If the data store then becomes unavailable, you can still service requests using the cached data until the data store comes back online.
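This update-on-read pattern can be sketched in a few lines. The following Python sketch is language-neutral and illustrative only (the guide itself targets the .NET Framework); the `DataStoreUnavailable` exception and the `fetch` function are hypothetical stand-ins for your data access code:

```python
class DataStoreUnavailable(Exception):
    """Raised when the backing store cannot be reached (hypothetical)."""

class AvailabilityCache:
    """Read-through cache that falls back to the last good value
    if the data store is unavailable."""
    def __init__(self, fetch):
        self._fetch = fetch          # function that reads the master data
        self._snapshot = {}          # last successfully fetched values

    def get(self, key):
        try:
            value = self._fetch(key)         # try the master store first
            self._snapshot[key] = value      # refresh the cache on success
            return value
        except DataStoreUnavailable:
            # Store is down: serve the (possibly stale) cached copy.
            if key in self._snapshot:
                return self._snapshot[key]
            raise                            # no cached copy to fall back on
```

Because the snapshot is refreshed on every successful read, the fallback data is never older than the last successful request.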

To successfully design an application that uses caching, you need to thoroughly understand the caching techniques provided by the Microsoft® .NET Framework and the Microsoft Windows® operating system, and you also need to be able to address questions such as:

  • When and why should a custom cache be created?
  • Which caching technique provides the best performance and scalability for a specific scenario and configuration?
  • Which caching technology complies with the application's requirements for security, monitoring, and management?
  • How can the cached data be kept up to date?

It is important to remember that caching isn't something you can just add to your application at any point in the development cycle; the application should be designed with caching in mind. This ensures that the cache can be used during the development of the application to help tune its performance, scalability, and availability.

Now that you have seen the types of issues that caching can help avoid, you are ready to look at the types of information that may be cached. This information is commonly called state.

Understanding State

Before diving into caching technologies and techniques, it is important to understand state, because caching is merely a framework for state management. Knowing what state is, and being aware of its characteristics such as lifetime and scope, helps you make better decisions about whether to cache it.

State refers to data, and the status or condition of that data, being used within a system at a certain point in time. That data may be permanently stored in a database, may be held in memory for a short time while a user executes a certain function, or the data may exist for some other defined length of time. It may be shared across a whole organization, it may be specific to an individual user, or it may be available to any grouping in between these extremes.

Understanding the Lifetime of State

The lifetime of state is the term used to refer to the time period during which that state is valid, that is, from when it is created to when it is removed. Common lifetime periods include:

  • Permanent state—persistent data used in an application
  • Process state—valid only for the duration of a process
  • Session state—valid for a particular user session
  • Message state—exists for the processing period of a message

For more details and examples of the lifetime of state, see Chapter 7, "Appendix."

Understanding the Scope of State

Scope is the term used to refer to the accessibility of an application's state, whether that scope is physical or logical.

Understanding Physical Scope

Physical scope refers to the physical locations from which the state can be accessed. Common physical scoping levels include:

  • Organization—state that is accessible from any application within an organization
  • Farm—state that is accessible from any computer within an application farm
  • Machine—state that is shared across all applications on a single computer
  • Process—state that is accessible across multiple AppDomains in a single process
  • AppDomain—state that is available only inside a single AppDomain

For more details and examples of physical scope, see Chapter 7, "Appendix."

Understanding Logical Scope

Logical scope refers to the logical locations from which the state can be accessed. Common logical scoping levels include:

  • Application—state that is valid within a certain application
  • Business process—state that is valid within a logical business process
  • Role—state that is available to a subset of the application's users
  • User—state that is available to a single user

For more details and examples of logical scope, see Chapter 7, "Appendix."

Understanding State Staleness

Cached state is a snapshot of the master state; therefore, its data is potentially stale (obsolete) as soon as it is retrieved by the consuming process. This is because the original data may have been changed by another process. Minimizing the staleness of data and the impact of staleness on your application is one of the tasks involved when caching state.

State staleness is defined as the difference between the master state, from which the cached state was created, and the current version of the cached state.

You can define staleness in terms of:

  • Likelihood of changes—Staleness might increase with time because as time goes by there is an increasing chance that other processes have updated the master data.
  • Relevancy of changes—It is possible that master state changes will not have an adverse effect on the usability of a process. For example, changing a Web page style does not affect the business process operation itself.

Depending on the use of the state within a system, staleness may, or may not, be an issue.

Understanding State Staleness Tolerance

The effect of the staleness of state on the business process is termed the staleness tolerance. A system can be defined as having no staleness tolerance or some staleness tolerance:

  • No tolerance—In some scenarios, any staleness of state is unacceptable. A good example of this is the caching of short-lived transactional data. When working with data that has ACID (atomic, consistent, isolated, durable) characteristics, you cannot tolerate any staleness of state. For example, if a banking system is using a customer balance to approve a loan, the balance must be guaranteed to be 100 percent accurate and up to date. Using cached data in this instance could result in a loan being approved against the business rules implemented by the bank.
  • Some tolerance—In some application scenarios, a varying degree of staleness is acceptable. There are cases where a known and acceptable period for updating the cached items exists (for example, once a day, once a week, or once a month). For example, an online catalog displaying banking products available upon completion of an application form does not necessarily need to be consistent with all of the products the bank offers. In this situation, a daily or weekly update is sufficient.

When selecting the state to be cached, one of the major considerations is how soon the state will become stale and what effect on the system that staleness will have. The goal is to cache state that either never becomes stale (static state) or that has a known period of validity (semi-static state). For more information about how you can define the period of validity for semi-static state, see Chapter 5, "Managing the Contents of a Cache."

Understanding the State Transformation Pipeline

Another attribute of state is its representation during the different stages in the transformation pipeline. While data is in the data store, it is classed as being in its raw format; at the different transformation levels during business logic processing, it is classed as processed; and at the ready-to-display stage, it is classed as rendered data (for example, HTML or Windows Forms controls). Figure 1.1 shows these various stages.


Figure 1.1. The transformation pipeline

Table 1.1 describes and gives examples of the representations of state during the different stages in the transformation pipeline.

Table 1.1: Data representation types

  • Raw—Data in its raw format. Example: a dataset reflecting data in a database.
  • Processed—Data that has gone through business logic processing. At this stage of the pipeline, the same data may undergo several different transformations. Example: different representations of the same dataset.
  • Rendered—Data that is rendered and ready to be displayed in the user interface. Examples: a rendered combo-box Web control containing a list of countries; a rendered Windows Forms TreeView control.

When you plan the caching of state in your application, you need to decide which of the state representation types is the best one to be cached. Use the following guidelines to aid you in this decision:

  • Cache raw data when any staleness of the cache can be tolerated by your business logic. Raw data should be cached in:
    • Data access components.
    • Service agents.
  • Cache processed data to save processing time and resources within the system. Processed data should be cached in:
    • Business logic components.
    • Service interfaces.
  • Cache rendered data when the amount of data to be displayed is large and the rendering time of the control is long (for example, the contents of a large TreeView control). Rendered data should be cached in user interface (UI) components.
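As an illustration of caching at the rendered stage, the following language-neutral Python sketch caches the output of an expensive rendering step, keyed by the raw data it was produced from; `render_country_list` is a hypothetical stand-in for a costly UI rendering operation:

```python
_rendered_cache = {}

def render_country_list(countries):
    """Stand-in for an expensive rendering step (e.g. building a combo box)."""
    return "<select>" + "".join(f"<option>{c}</option>" for c in countries) + "</select>"

def get_country_list_html(countries):
    # Key the rendered fragment by its input so a change in the raw
    # data produces a fresh rendering.
    key = tuple(countries)
    if key not in _rendered_cache:
        _rendered_cache[key] = render_country_list(countries)
    return _rendered_cache[key]
```

Repeated requests for the same country list return the already-rendered fragment instead of rendering it again.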

For a summarized review of .NET-based application architecture, its common elements, and their roles, see Chapter 3, "Caching in Distributed Applications."

State is used in one form or another in all types of applications. Because it is time consuming to access state, it is often wise to cache the state to improve overall application performance.

Understanding Why Data Should Be Cached

You use caching to store data that is consumed frequently by an application; you can also use it to store data that is expensive to create, obtain, or transform. There are three main benefits to implementing caching in your applications:

  • Reduction of the amount of data that is transferred between processes and computers
  • Reduction of the amount of data processing that occurs within the system
  • Reduction of the number of disk access operations that occur

The benefits that are important to you vary depending on the type of application that you are developing.

Reducing Interprocess Communication

One result of building distributed applications is that different elements of the application may reside in different processes, either on the same computer or on different computers. For example, an ASP.NET application executes in the Aspnet_wp.exe process, while a COM+ server application that it may be using executes in a different process. This can be less efficient when the application requires a large amount of data to be moved between the processes or when one process is making chatty (that is, numerous small) calls to the other to obtain data.

Making calls between processes requires using remote procedure calls (RPCs) and data serialization, both of which can result in a performance hit. By using a cache to store static or semi-static data in the consuming process, instead of retrieving it each time from the provider process, the RPC overhead decreases and the application's performance improves.

Reducing Data Access and Processing

Another aspect of distributed applications is data processing. Before data is sent to a consumer, there is always some degree of processing required. This may vary from simple database querying to complex operations on the data, such as report generation and data analysis. By storing the resultant processed data in a cache for later reuse, you can save valuable computing power and achieve better application performance and scalability.

Reducing Disk Access Operations

Input/output (I/O) operations are still a major performance cost; loading XML files, schemas, and configuration files is expensive. By using a cache to store these files in the consuming process instead of reading them from disk each time, applications can benefit from performance improvements.
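A common variation of this technique keeps the parsed file in memory and re-reads it only when the file changes on disk. The following Python sketch is illustrative only; it uses the file's modification time as a simple change check:

```python
import json
import os

_file_cache = {}   # path -> (mtime, parsed contents)

def load_config(path):
    """Return the parsed file, re-reading it only when it changes on disk."""
    mtime = os.path.getmtime(path)
    cached = _file_cache.get(path)
    if cached and cached[0] == mtime:
        return cached[1]                  # serve from memory, no disk read
    with open(path) as f:
        parsed = json.load(f)
    _file_cache[path] = (mtime, parsed)
    return parsed
```

On the second and subsequent calls for an unchanged file, the disk read and parsing are both avoided.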

These benefits can only truly be realized if you cache your state in an appropriate place in your application architecture.

Understanding Where Data Should Be Cached

Now that you have seen the issues that arise in distributed application design, you understand why you should use caching techniques in your systems. A cache is simply a copy of the master data, stored in memory or on disk at different transformation levels, as close as possible to the data consumer.

Therefore, in addition to selecting the data to be cached, another major consideration is where it should be cached. The considerations divide into two main categories:

  • Storage types—where the cache should be physically located
  • Layered architecture elements—where the cache should be logically located

You have many different options when deciding the physical location and logical location for the cache. The following sections describe some of the options.

Understanding Storage Types

Many caching implementations are available, all of which can be categorized as using either a memory resident cache or a disk resident cache. The following describes these categories:

  • Memory resident cache—This category contains techniques which implement in-memory temporary data storage. Memory-based caching is usually used when:

    • An application is frequently using the same data.
    • An application often needs to reacquire the data.

    You can reduce the number of expensive disk operations that need to be made by storing the data in memory, and you can minimize the amount of data that needs to be moved between different processes by storing the data in the memory of the consumer process.

  • Disk resident cache—This category contains caching technologies that use disk-based data stores such as files or databases. Disk-based caching is useful when:

    • You are handling large amounts of data.
    • Data in the application services (for example, a database) may not always be available for reacquisition (for example, in offline scenarios).
    • Cached data lifetime must survive process recycles and computer reboots.

    You can reduce the overhead of data processing by storing data that has already been transformed or rendered, and you can reduce interprocess communications by storing the data nearer to the consumer.
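To illustrate the disk resident category, the following Python sketch persists every cache entry to a file so that entries survive process recycles. It is a minimal illustration, not a production design; a real implementation would also handle concurrency and file corruption:

```python
import os
import pickle

class DiskCache:
    """Minimal disk-resident cache: entries survive process restarts
    because they are persisted to a file."""
    def __init__(self, path):
        self._path = path
        if os.path.exists(path):
            with open(path, "rb") as f:
                self._data = pickle.load(f)     # reload previous entries
        else:
            self._data = {}

    def put(self, key, value):
        self._data[key] = value
        with open(self._path, "wb") as f:
            pickle.dump(self._data, f)          # persist on every write

    def get(self, key, default=None):
        return self._data.get(key, default)
```

Constructing a second `DiskCache` against the same file, as a new process would, recovers the entries written by the first.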

Understanding Layered Architecture Elements

Each component in your application deals with different types of data and state. This section refers to the application components described in "Application Architecture for .NET: Designing Applications and Services" (in the MSDN Library) because they are representative of most distributed applications.

Figure 1.2 shows a schematic view of the layered architecture's elements.


Figure 1.2. Layered architecture elements

When you explore these different elements, you need to address new considerations, including deciding the type of state that should be cached in each element.

For more information about caching considerations of layered architecture elements, see Chapter 3, "Caching in Distributed Applications."

After you decide that your application architecture can benefit from caching and you decide where you cache state, there are still many considerations before you implement caching.

Introducing Caching Considerations

In addition to understanding the types of state to cache, and where to cache them, there are several other factors that need to be considered when designing an application and deciding whether state should be cached. These considerations are described in more detail later in this guide, but the topics are introduced here so that you can bear them in mind as you read about the technologies and techniques involved in caching.

Introducing Format and Access Patterns

When you decide whether an object should be cached, you need to consider three main issues regarding data format and access mechanisms:

  • Thread safety—When the contents of a cache can be accessed from multiple threads, use some form of locking mechanism to protect one thread from interfering with the data being accessed by another thread.
  • Serialization—If the cache storage serializes data in order to store it, any object placed in the cache must support serialization.
  • Normalizing cached data—When storing state in a cache, make sure that it is stored in a format optimized for its intended usage.

For more information about formatting cached data, see the "Planning .NET Framework Element Caching" section in Chapter 4, "Caching .NET Framework Elements."
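The thread-safety point can be illustrated with a minimal, language-neutral Python sketch: a cache that holds a lock while it checks for, creates, and returns an entry, so that a value is created at most once even under concurrent access:

```python
import threading

class ThreadSafeCache:
    """Dictionary-backed cache guarded by a lock so that concurrent
    readers and writers do not interfere with each other."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get_or_add(self, key, factory):
        # The check and the insert happen under one lock, so two threads
        # cannot both decide the key is missing and create it twice.
        with self._lock:
            if key not in self._data:
                self._data[key] = factory()   # created at most once per key
            return self._data[key]
```

Holding the lock across both the check and the insert is what prevents the classic check-then-act race.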

Introducing Content Loading

Before you can use cached data, you must first load data into the cache. There are two methods you can use for loading data:

  • Proactive loading—When using this method, you retrieve all of the required state, usually when the application or process starts, and then you cache it for the lifetime of the application or the process.
  • Reactive loading—When using this method, you retrieve data when it is requested by the application and then cache it for future requests.

For more information about content loading, see the "Loading a Cache" section in Chapter 5, "Managing the Contents of a Cache."
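The two loading methods can be sketched side by side. This Python sketch is illustrative only; `load_all` and `load_one` are hypothetical data access functions:

```python
class ProactiveCache:
    """Loads the whole data set up front, typically at application start."""
    def __init__(self, load_all):
        self._data = dict(load_all())     # one bulk load, kept for the app lifetime

    def get(self, key):
        return self._data[key]

class ReactiveCache:
    """Loads each item on first request, then serves it from the cache."""
    def __init__(self, load_one):
        self._load_one = load_one
        self._data = {}

    def get(self, key):
        if key not in self._data:
            self._data[key] = self._load_one(key)   # fetched on demand
        return self._data[key]
```

Proactive loading pays the full cost up front; reactive loading only ever fetches the items that are actually requested.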

Introducing Expiration Policies

An important aspect of caching state is the way in which you keep it consistent with the master data (for example, the database or files) and other application resources. You can use expiration policies to define the contents of a cache that are invalid based on the amount of time that the data has been in the cache or on notification from another resource, such as a database, file, or other cached items.

For more information about expiration policies, see the "Determining a Cache Expiration Policy" section in Chapter 5, "Managing the Contents of a Cache."
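As a minimal illustration of a time-based expiration policy, the following Python sketch treats any entry older than a fixed time-to-live as a cache miss. It is illustrative only; notification-based expiration (from a database, file, or other cached item) is not shown:

```python
import time

class ExpiringCache:
    """Cache whose entries become invalid after a fixed time-to-live."""
    def __init__(self, ttl_seconds):
        self._ttl = ttl_seconds
        self._data = {}    # key -> (stored_at, value)

    def put(self, key, value):
        self._data[key] = (time.monotonic(), value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self._ttl:
            del self._data[key]        # expired: treat as a cache miss
            return None
        return value
```

A miss caused by expiration gives the caller the chance to reload fresh data from the master store.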

Introducing Security

When caching any type of information, you need to be aware of various potential security threats. Data that is stored in a cache may be accessed or altered by a process that is not permitted access to the master data. This can happen because the security mechanisms that protect the information in its permanent store do not automatically extend to the cache. When taking data outside traditional trust boundaries, you must ensure that equivalent security mechanisms protect both the transmission of data to the cache and the storage of data in it.

For more information about security issues when caching, see the "Securing a Custom Cache" section in Chapter 6, "Understanding Advanced Caching Issues."

Introducing Management

When you use caching technologies, the maintenance needs of your application increase. During the application deployment, you need to configure things such as the maximum size of the cache and the flushing mechanisms. You also need to ensure that the cache performance is monitored, using some of the techniques made available in Windows and the .NET Framework (for example, event logging and performance counters).

For more information about maintenance issues, see the "Monitoring a Cache" section in Chapter 6, "Understanding Advanced Caching Issues."
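The maximum-size and flushing configuration mentioned above can be illustrated with a small Python sketch that flushes the least recently used entry once a configured limit is exceeded. It is illustrative only; monitoring hooks such as event logging and performance counters are omitted:

```python
from collections import OrderedDict

class BoundedCache:
    """Cache with a configurable maximum size; the least recently
    used entry is flushed when the limit is exceeded."""
    def __init__(self, max_entries):
        self._max = max_entries
        self._data = OrderedDict()

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._max:
            self._data.popitem(last=False)   # flush the oldest entry

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)      # mark as recently used
            return self._data[key]
        return None
```

Bounding the cache size is one of the deployment-time settings a maintainable cache should expose.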

Summary

In this chapter, you have been introduced to some of the problems involved in developing distributed applications and to how implementing caching within an application can help alleviate some of these issues. You have also been introduced to some of the considerations that you need to take into account when planning caching mechanisms for different types of applications.

Now that you have an overall understanding of the concepts involved in caching, you are ready to begin learning about the different caching technologies available.
