Chapter 17: Crosscutting Concerns

For more details of the topics covered in this guide, see Contents of the Guide.

Contents

  • Overview
  • General Design Considerations
  • Specific Design Issues
  • Design Steps for Caching
  • Design Steps for Exception Management
  • Design Steps for Validating Input and Data
  • Relevant Design Patterns
  • patterns & practices Solution Assets
  • Additional Resources

Overview

The majority of applications you design will contain common functionality that spans layers and tiers. This functionality typically supports operations such as authentication, authorization, caching, communication, exception management, logging and instrumentation, and validation. Such functionality is generally described as crosscutting concerns because it affects the entire application, and should be centralized in one location in the code where possible. For example, if code that generates log entries and writes to the application logs is scattered throughout your layers and tiers, and the requirements related to these concerns change (such as logging to a different location), you may have to update the relevant code throughout the entire system. Instead, if you centralize the logging code, you can change the behavior by changing the code in just one location.

This chapter will help you to understand the role that crosscutting concerns play in applications, identify the areas where they occur in your applications, and learn about the common issues faced when designing for crosscutting concerns. There are several different approaches to handling this functionality, from common libraries such as the patterns & practices Enterprise Library, to Aspect Oriented Programming (AOP) techniques, where metadata is used to weave crosscutting code into the compiled output or inject it at run time.

General Design Considerations

The following guidelines will help you to understand the main factors for managing crosscutting concerns:

  • Examine the functions required in each layer, and look for cases where you can abstract that functionality into common components, perhaps even general purpose components that you configure depending on the specific requirements of each layer of the application. It is likely that these kinds of components will be reusable in other applications.
  • Depending on how you physically distribute the components and layers of your application, you may need to install the crosscutting components on more than one physical tier. However, you still benefit from reusability and reduced development time and cost.
  • Consider using the Dependency Injection pattern to inject instances of crosscutting components into your application based on configuration information. This allows you to change the crosscutting components that each section uses easily, without requiring recompilation and redeployment of your application. The patterns & practices Unity library provides comprehensive support for the Dependency Injection pattern. Other popular Dependency Injection libraries include StructureMap, Ninject, and Castle Windsor (see Additional Resources at the end of this chapter for more information). A minimal sketch of constructor-based injection appears after this list.
  • Consider using a third-party library of components that are highly configurable and can reduce development time. An example is the patterns & practices Enterprise Library, which contains application blocks designed to help you implement caching, exception handling, authentication and authorization, logging, validation, and cryptography functions. It also contains mechanisms that implement policy injection and a dependency injection container that make it easier to implement solutions for a range of crosscutting concerns. For more information about Enterprise Library, see Appendix F "patterns & practices Enterprise Library." Another common library is provided by the Castle Project (see Additional Resources at the end of this chapter for more information).
  • Consider using Aspect Oriented Programming (AOP) techniques to weave the crosscutting concerns into your application, rather than having explicit calls in the code. The patterns & practices Unity mechanism and the Enterprise Library Policy Injection Application Block support this approach. Other examples include Castle Windsor and PostSharp (see Additional Resources at the end of this chapter for more information).
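
The Dependency Injection approach described above can be illustrated with a minimal sketch of constructor injection written in plain C#. The ILogger, FileLogger, and OrderService names, and the log file path, are hypothetical examples; in practice a container such as Unity, StructureMap, Ninject, or Castle Windsor would resolve the dependency from configuration rather than the manual wiring shown at the end.

    // A crosscutting concern (logging) abstracted behind an interface.
    public interface ILogger
    {
        void Write(string message);
    }

    // One possible implementation; others (database, event log) can be substituted
    // without recompiling the classes that depend on ILogger.
    public class FileLogger : ILogger
    {
        public void Write(string message)
        {
            System.IO.File.AppendAllText(@"C:\Logs\app.log",
                message + System.Environment.NewLine);
        }
    }

    // The consuming class receives its dependency through the constructor,
    // so it never references a concrete logger type directly.
    public class OrderService
    {
        private readonly ILogger logger;

        public OrderService(ILogger logger)
        {
            this.logger = logger;
        }

        public void PlaceOrder(int orderId)
        {
            logger.Write("Placing order " + orderId);
            // ... business logic ...
        }
    }

    // Manual composition, shown only for clarity:
    // ILogger logger = new FileLogger();
    // OrderService service = new OrderService(logger);

With a container, the mapping from ILogger to FileLogger is registered in code or configuration, and the container constructs OrderService with whichever logger is currently configured.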

Specific Design Issues

The following sections list the key areas to consider as you develop your architecture, and contain guidelines to help you avoid the common issues in each area:

  • Authentication
  • Authorization
  • Caching
  • Communication
  • Configuration Management
  • Exception Management
  • Logging and Instrumentation
  • State Management
  • Validation

Authentication

Designing a good authentication strategy is important for the security and reliability of your application. Failure to design and implement a good authentication strategy can leave your application vulnerable to spoofing attacks, dictionary attacks, session hijacking, and other types of attacks. Consider the following guidelines when designing an authentication strategy:

  • Identify your trust boundaries and authenticate users and calls across the trust boundaries. Consider that calls might need to be authenticated from the client as well as from the server (mutual authentication).
  • Enforce the use of strong passwords or password phrases.
  • If you have multiple systems within the application or users must be able to access multiple applications with the same credentials, consider a single sign-on strategy.
  • Do not transmit passwords over the wire in plain text, and do not store passwords in a database or data store as plain text. Instead, store a hash of the password.
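
As a minimal illustration of the last point, the following sketch hashes a password with a random salt using the .NET Rfc2898DeriveBytes (PBKDF2) class. The salt size, hash size, iteration count, and the "salt:hash" storage format are illustrative assumptions, not recommendations from this guide.

    using System;
    using System.Security.Cryptography;

    public static class PasswordHasher
    {
        // Illustrative values; choose sizes and an iteration count to suit your requirements.
        private const int SaltSize = 16;
        private const int HashSize = 32;
        private const int Iterations = 10000;

        // Returns "salt:hash" encoded as Base64 for storage; the plain text password is never stored.
        public static string HashPassword(string password)
        {
            var deriveBytes = new Rfc2898DeriveBytes(password, SaltSize, Iterations);
            byte[] salt = deriveBytes.Salt;
            byte[] hash = deriveBytes.GetBytes(HashSize);
            return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
        }

        // Recomputes the hash from the stored salt and compares it with the stored hash.
        public static bool VerifyPassword(string password, string stored)
        {
            string[] parts = stored.Split(':');
            byte[] salt = Convert.FromBase64String(parts[0]);
            string expected = parts[1];

            var deriveBytes = new Rfc2898DeriveBytes(password, salt, Iterations);
            string actual = Convert.ToBase64String(deriveBytes.GetBytes(HashSize));
            return actual == expected;
        }
    }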

For more information about designing an authentication strategy, and techniques for implementing it, see "Additional Resources" at the end of this chapter.

Authorization

Designing a good authorization strategy is important for the security and reliability of your application. Failure to design and implement a good authorization strategy can make your application vulnerable to information disclosure, data tampering, and elevation of privileges. Consider the following guidelines when designing an authorization strategy:

  • Identify your trust boundaries and authorize users and callers across the trust boundary.
  • Protect resources by applying authorization to callers based on their identity, groups, or roles. Minimize granularity by limiting the number of roles you use where possible.
  • Consider using role-based authorization for business decisions. Role-based authorization is used to subdivide users into groups (roles) and then set permissions on each role rather than on individual users. This eases management by allowing you to administer a smaller set of roles rather than a larger set of users. A minimal sketch of a role check appears after this list.
  • Consider using resource-based authorization for system auditing. Resource-based authorization sets permissions on the resource itself; for example, an access control list (ACL) on a Windows resource uses the identity of the original caller to determine access rights to the resource. If you use resource-based authorization in WCF, you must impersonate the original caller through the client or presentation layer, through the WCF service layer, and to the business logic code that accesses the resource.
  • Consider using claims-based authorization when you must support federated authorization based on a mixture of information such as identity, role, permissions, rights, and other factors. Claims-based authorization provides additional layers of abstraction that make it easier to separate authorization rules from the authorization and authentication mechanism. For example, you can authenticate a user with a certificate or with username/password credentials and then pass that claim-set to the service to determine access to resources.
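
The role-based approach described in this list can be illustrated with a minimal sketch that uses the .NET IPrincipal abstraction; the PayrollService class, its method, and the "Manager" role name are hypothetical examples, and the current principal must already have been established by your authentication mechanism.

    using System.Security;
    using System.Security.Principal;
    using System.Threading;

    public class PayrollService
    {
        // Authorize the caller based on role membership rather than individual identity.
        public void ApproveTimesheet(int timesheetId)
        {
            IPrincipal caller = Thread.CurrentPrincipal;

            if (caller == null || !caller.IsInRole("Manager"))
            {
                throw new SecurityException("The caller is not authorized to approve timesheets.");
            }

            // ... approval logic runs only for callers in the Manager role ...
        }
    }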

For more information about designing an authorization strategy, and techniques for implementing it, see "Additional Resources" at the end of this chapter.

Caching

Caching can improve the performance and responsiveness of your application. However, a poorly designed caching strategy can degrade performance and responsiveness. You should use caching to optimize reference data lookups, avoid network round trips, and avoid unnecessary and duplicate processing. When you implement caching, you must decide when to load the cache data and how and when to remove expired cached data. Try to preload frequently used data into the cache asynchronously or by using a batch process to avoid client delays. Consider the following guidelines when designing a caching strategy:

  • Choose an appropriate location for the cache. If your application is deployed in a Web farm, avoid using local caches that must be synchronized; instead consider using a transactional resource manager such as Microsoft® SQL Server® or a product that supports distributed caching, such as "Memcached" from Danga Interactive or the Microsoft project code named “Velocity” (see Additional Resources at the end of this chapter for more information).
  • Consider caching data in a ready-to-use format when working with an in-memory cache. For example, use a specific object instead of caching raw database data. Consider using Microsoft Velocity to implement in-memory caching.
  • Do not cache volatile data, and do not cache sensitive data unless you encrypt it.
  • Do not depend on data still being in your cache; it may have been removed. Implement a mechanism to handle cache failures, perhaps by reloading the item from the source.
  • Be especially careful when accessing the cache from multiple threads. If you are using multiple threads, ensure that all access to the cache is thread-safe to maintain consistency.
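
The last two points can be combined into a simple "get or reload" pattern. The following is a minimal sketch that serializes access with a lock and reloads missing items from the original source; it is illustrative only and is not a substitute for a full caching library.

    using System;
    using System.Collections.Generic;

    public class SimpleCache<TKey, TValue>
    {
        private readonly object syncRoot = new object();
        private readonly Dictionary<TKey, TValue> items = new Dictionary<TKey, TValue>();

        // Returns the cached value if present; otherwise reloads it from the original
        // source, caches it, and returns it. Callers never assume the item is still cached.
        public TValue GetOrLoad(TKey key, Func<TValue> loadFromSource)
        {
            lock (syncRoot)
            {
                TValue value;
                if (!items.TryGetValue(key, out value))
                {
                    // Cache miss: reacquire from the source. Holding the lock during the
                    // load keeps the sketch simple; a production cache would avoid this.
                    value = loadFromSource();
                    items[key] = value;
                }
                return value;
            }
        }

        // Removes an item explicitly, for example when it is known to be stale.
        public void Remove(TKey key)
        {
            lock (syncRoot)
            {
                items.Remove(key);
            }
        }
    }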

For more information about designing a caching strategy, see "Design Steps for Caching" later in this chapter.

Communication

Communication is concerned with the interaction between components across layers and tiers. The mechanism you choose depends on the deployment scenarios your application must support. Consider the following guidelines when designing communication mechanisms:

  • Consider using message-based communication when crossing physical or process boundaries, and object-based communication when in process (when crossing only logical boundaries). To reduce round trips and improve communication performance across physical and process boundaries, design coarse-grained (chunky) interfaces that communicate less often but with more information in each communication. However, where appropriate, consider exposing a fine-grained (chatty) interface for use by in-process calls and wrapping these calls in a coarse-grained façade for use by processes that access it across physical or process boundaries. A sketch of this facade approach appears after this list.
  • If your messages do not need to be received in a specific order and do not have dependencies on each other, consider using asynchronous communication to avoid blocking processing or UI threads.
  • Consider using Microsoft Message Queuing to queue messages for later delivery in case of system or network interruption or failure. Message Queuing can perform transacted message delivery and supports reliable once-only delivery.
  • Choose an appropriate transport protocol, such as HTTP for Internet communication and TCP for intranet communication. Consider how you will determine the appropriate message exchange patterns, connection based or connectionless communication, reliability guarantees (such as service level agreements), and authentication mechanism.
  • Ensure that you protect messages and sensitive data during communication by using encryption, digital certificates, and channel security features.
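
To illustrate the chunky-versus-chatty guidance in the first bullet, the following sketch wraps a fine-grained, in-process interface behind a coarse-grained facade that remote callers invoke in a single round trip; all of the type and member names are hypothetical.

    // Fine-grained (chatty) interface, appropriate for in-process callers only.
    public interface ICustomerEditor
    {
        void SetName(int customerId, string name);
        void SetEmail(int customerId, string email);
        void SetPhone(int customerId, string phone);
    }

    // Data transfer object carrying all of the values in a single message.
    public class CustomerUpdate
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Phone { get; set; }
    }

    // Coarse-grained (chunky) facade: one call carries the whole update, which is
    // what callers across a physical or process boundary should use.
    public class CustomerFacade
    {
        private readonly ICustomerEditor editor;

        public CustomerFacade(ICustomerEditor editor)
        {
            this.editor = editor;
        }

        public void UpdateCustomer(CustomerUpdate update)
        {
            editor.SetName(update.CustomerId, update.Name);
            editor.SetEmail(update.CustomerId, update.Email);
            editor.SetPhone(update.CustomerId, update.Phone);
        }
    }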

For more information about designing a communication strategy, see Chapter 18 "Communication and Messaging."

Configuration Management

Designing a good configuration management mechanism is important for the security and flexibility of your application. Failure to do so can make your application vulnerable to a variety of attacks, and also leads to an administrative overhead for your application. Consider the following guidelines when designing for configuration management:

  • Carefully consider which settings must be externally configurable. Verify that there is an actual business need for each configurable setting, and provide the minimal configuration options necessary to meet these requirements. Excessive configurability can result in systems that are more complicated, and may leave the system vulnerable to security breaches and malfunctions due to incorrect configuration.
  • Decide if you will store configuration information centrally and have it downloaded or applied to users at startup (for example, through Active Directory Group Policy). Consider how you will restrict access to your configuration information. Consider using least privileged process and service accounts, and encrypt sensitive information in your configuration store.
  • Categorize the configuration items into logical sections based on whether they apply to users, application settings, or environmental settings. This makes it easier to divide configuration when you must support different settings for different sets of users, or multiple environments. The sketch after this list illustrates one simple way to group settings by prefix.
  • Categorize the configuration items into logical sections if your application has multiple tiers. If your server application runs in a Web farm, decide which parts of the configuration are shared and which parts are specific to the machine on which the application is running. Then choose an appropriate configuration store for each section.
  • Provide a separate administrative UI for editing configuration information.
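
As a simple illustration of grouping settings into logical sections, the following sketch reads prefixed keys from the standard appSettings configuration section; the key names, default values, and the AppConfiguration helper are hypothetical, and larger applications might define custom configuration sections instead.

    using System.Configuration;   // requires a reference to System.Configuration.dll

    public static class AppConfiguration
    {
        // Hypothetical prefixes separating user settings from environment settings.
        public static string UserDefaultTheme
        {
            get { return ReadSetting("User.DefaultTheme", "Standard"); }
        }

        public static string EnvironmentServiceUrl
        {
            get { return ReadSetting("Environment.ServiceUrl", "http://localhost/service"); }
        }

        // Reads a value from <appSettings> and falls back to a default when the key is absent.
        private static string ReadSetting(string key, string defaultValue)
        {
            string value = ConfigurationManager.AppSettings[key];
            return string.IsNullOrEmpty(value) ? defaultValue : value;
        }
    }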

Exception Management

Designing a good exception management strategy is important for the security and reliability of your application. Failure to do so can make it very difficult to diagnose and solve problems with your application. It can also leave your application vulnerable to Denial of Service (DoS) attacks, and it may reveal sensitive and critical information. Raising and handling exceptions is an expensive process, so it is important that the design also takes into account performance issues. A good approach is to design a centralized exception management mechanism for your application, and to consider providing access points within your exception management system (such as WMI events) to support enterprise level monitoring systems such as Microsoft System Center. Consider the following guidelines when designing an exception management strategy:

  • Design an appropriate exception propagation strategy that wraps or replaces exceptions, or adds extra information as required. For example, allow exceptions to bubble up to boundary layers where they can be logged and transformed as necessary before passing them to the next layer. Consider including a context identifier so that related exceptions can be associated across layers to assist in performing root cause analysis of errors and faults. Also ensure that the design deals with unhandled exceptions. Do not catch internal exceptions unless you can handle them or you must add more information, and do not use exceptions to control application flow.
  • Ensure that a failure does not leave the application in an unstable state, and that exceptions do not allow the application to reveal sensitive information or process details. If you cannot guarantee correct recovery, allow the application to halt with an unhandled exception in preference to leaving it running in an unknown and possibly corrupted state.
  • Design an appropriate logging and notification strategy for critical errors and exceptions that stores sufficient details about the exception to allow support staff to recreate the scenario, but does not reveal sensitive information in exception messages and log files.

For more information about designing an exception management strategy, see "Design Steps for Exception Management" later in this chapter.

Logging and Instrumentation

Designing a good logging and instrumentation strategy is important for the security and reliability of your application. Failure to do so can make your application vulnerable to repudiation threats, where users deny their actions; log files may also be required as evidence in legal proceedings to prove wrongdoing. You should audit and log activity across the layers of your application, which can help to detect suspicious activity and provide early indication of a serious attack. Auditing is usually considered most authoritative if the audits are generated at the precise time of resource access, and by the same routines that access the resource. Instrumentation can be implemented by using performance counters and events to give administrators information about the state, performance, and health of an application. Consider the following guidelines when designing a logging and instrumentation strategy:

  • Design a centralized logging and instrumentation mechanism that captures system- and business-critical events. Avoid logging and instrumentation that is too fine grained, but consider additional logging and instrumentation that is configurable at run time for obtaining extra information and to aid debugging.
  • Create secure log file management policies. Do not store sensitive information in the log files, and protect log files from unauthorized access. Consider how you will access and pass auditing and logging data securely across application layers, and ensure that you suppress but correctly handle logging failures.
  • Consider allowing your log sinks, or trace listeners, to be configurable so that they can be modified at run time to meet deployment environment requirements. Libraries such as the patterns & practices Enterprise Library are useful for implementing logging and instrumentation in your applications. Other popular libraries include NLog and log4net (see Additional Resources at the end of this chapter for more information).
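
The configurable listener approach described in the last bullet can be sketched with the built-in System.Diagnostics TraceSource type, whose listeners and switch level are set in the application configuration file; the "MyApplication" source name and the AppLog wrapper are illustrative assumptions, and a library such as the Enterprise Library Logging Application Block, NLog, or log4net would provide richer features.

    using System.Diagnostics;

    public static class AppLog
    {
        // Listeners and the switch level for this source can be changed in the
        // application configuration file without recompiling the application.
        private static readonly TraceSource Source = new TraceSource("MyApplication");

        public static void Error(string message)
        {
            Source.TraceEvent(TraceEventType.Error, 0, message);
        }

        public static void Info(string message)
        {
            Source.TraceEvent(TraceEventType.Information, 0, message);
        }
    }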

For more information about logging and instrumentation, see "Design Steps for Exception Management" later in this chapter, which covers logging and notification strategies as part of exception management.

State Management

State management concerns the persistence of data that represents the state of a component, operation, or step in a process. State data can be persisted using different formats and stores. The design of a state management mechanism can affect the performance of your application; maintaining even small volumes of state information can adversely affect performance and the ability to scale out your application. You should only persist data that is required, and you must understand the options that are available for managing state. Consider the following guidelines when designing a state management mechanism:

  • Keep your state management as lean as possible; persist the minimum amount of data required to maintain state.
  • Make sure that your state data is serializable if it must be persisted or shared across process and network boundaries (see the sketch after this list).
  • Choose an appropriate state store. Storing state in process and in memory is the technique that can offer the best performance, but only if your state does not have to survive process or system restarts. Persist your state to a local disk or local SQL Server if you want it available after a process or system restart. Consider storing state in a central location such as a central SQL Server if state is critical for your application, or if you want to share state between several machines.
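
As a minimal illustration of the serialization point above, the sketch below marks a hypothetical WizardState type as serializable so that it can be persisted to a local disk or SQL Server, or shared across process and network boundaries; only the data needed to resume the process is included, in line with the first bullet.

    using System;

    // Marking the type as serializable allows the state to be persisted or passed
    // across process and network boundaries.
    [Serializable]
    public class WizardState
    {
        // Keep state lean: persist only what is required to resume the process.
        public int CurrentStep { get; set; }
        public string CustomerName { get; set; }
        public DateTime StartedUtc { get; set; }
    }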

Validation

Designing an effective validation mechanism is important for the usability and reliability of your application. Failure to do so can leave your application open to data inconsistencies, business rule violations, and a poor user experience. In addition, failing to adequately validate input may leave your application vulnerable to security issues such as cross-site scripting attacks, SQL injection attacks, buffer overflows, and other types of input attacks. Unfortunately there is no standard definition that can differentiate valid input from malicious input. In addition, how your application actually uses the input influences the risks associated with exploitation of the vulnerability. Consider the following guidelines when designing a validation mechanism:

  • Whenever possible, design your validation system to use allow lists that define specifically what is acceptable input, rather than trying to define what comprises invalid input. It is much easier to widen the scope of an allow list later than it is to narrow a block list.
  • Do not rely on only client-side validation for security checks. Instead, use client-side validation to give the user feedback and improve the user experience. Always use server-side validation to check for incorrect or malicious input because client-side validation can be easily bypassed.
  • Centralize your validation approach in separate components if it can be reused, or consider using a third-party library such as the patterns & practices Enterprise Library Validation Block. Doing so will allow you to apply a consistent validation mechanism across the layers and tiers of your application.
  • Be sure to constrain, reject, and sanitize user input. In other words, assume that all user input is malicious. Identify your trust boundaries and validate all input data for length, format, type, and range as it crosses trust boundaries.
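
The last point can be illustrated with a minimal server-side sketch that constrains input for length, format, type, and range; the field names, regular expression, and limits are hypothetical assumptions rather than recommended values.

    using System;
    using System.Text.RegularExpressions;

    public static class OrderInputValidator
    {
        // Allow list for the product code: 3 to 10 upper-case letters or digits only.
        private static readonly Regex ProductCodePattern = new Regex(@"^[A-Z0-9]{3,10}$");

        public static bool IsValidProductCode(string input)
        {
            // Length and format are checked together by the anchored expression.
            return !string.IsNullOrEmpty(input) && ProductCodePattern.IsMatch(input);
        }

        public static bool IsValidQuantity(string input)
        {
            int quantity;
            // Type check: the value must parse as an integer.
            if (!int.TryParse(input, out quantity))
            {
                return false;
            }
            // Range check: a hypothetical business limit.
            return quantity >= 1 && quantity <= 1000;
        }
    }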

For more information about designing a validation strategy, see "Design Steps for Validating Input and Data" later in this chapter.

Design Steps for Caching

Caching can play a vital role in maximizing performance. However, it is important to design an appropriate strategy for caching, as you can reduce performance by applying inappropriate techniques. The following steps will help you to design an appropriate caching strategy for your application.

Step 1 — Determine the Data to Cache

It is important to determine, as part of your application design, the data that is suitable for caching. Create a list of the data to cache in each layer of your application. Consider caching the following types of data:

  • Application-wide data. Consider caching relatively static data that applies to all users of the application. Examples are product lists and product information.
  • Relatively static data. Consider caching data that is fully static, or which does not change frequently. Examples are constants and fixed values read from configuration or a database.
  • Relatively static Web pages. Consider caching the output of Web pages or sections of Web pages that do not change frequently.
  • Stored procedure parameters and query results. Consider caching frequently used query parameters and query results.

Step 2 — Determine Where to Cache Data

When deciding on where to cache data, there are typically two things you must consider: the physical location of the cache, and the logical location of the cache.

The physical location will either be in-memory, or disk-based using files or a database. In-memory caching may be performed using the ASP.NET cache mechanism, Enterprise Library Caching Application Block, or a distributed in-memory caching mechanism such as Microsoft project code named “Velocity” or the Danga Interactive "Memcached" technology. An in-memory cache is a good choice when the data is used frequently by the application, the cached data is relatively volatile and must be frequently reacquired, and the volume of cached data is relatively small. A file system-based or database cache is a good choice when accessing data from the cache store is more efficient when compared to acquiring the data from the original store, the cached data is relatively less volatile, and the services for reacquiring the data are not always available. The disk-based approach is also ideal when the volume of cached data is relatively large, or the cached data must survive process and machine restarts.

The logical location of the cache describes the location within the application logic. It is important to cache the data as close as possible to the location where it will be used to minimize processing and network round trips, and to maximize the performance and responsiveness of the application. Consider the following guidelines when deciding on the logical location of the cache data:

  • Consider caching on the client when the data is page specific or user specific, does not contain sensitive information, and is lightweight.
  • Consider caching on a proxy server or Web server (for Web applications) when you have relatively static pages that are requested frequently by clients, your pages are updated with a known frequency, or the results are returned from Web services. Also, consider this approach where you have pages that can generate different output based on HTTP parameters, and those parameters do not often change. This is particularly useful when the range of outputs is small.
  • Consider caching data in the presentation layer when you have relatively static page outputs, you have small volumes of data related to user preferences for a small set of users, or you have UI controls that are expensive to create. Also consider this approach when you have data that must be displayed to the user and is expensive to create; for example, product lists and product information.
  • Consider caching data in the business layer when you must maintain state for a service, business process, or workflow; or when relatively static data is required to process requests from the presentation layer and this data is expensive to create.
  • Consider caching data in the data layer when you have input parameters for a frequently called stored procedure in a collection, or you have small volumes of raw data that are returned from frequently executed queries. Consider caching schemas for typed datasets in the data layer.
  • Consider caching in a separate table inside the database any data that requires considerable query processing to obtain the result set. This may also be appropriate when you have very large volumes of data to cache, where you implement a paging mechanism to read sections of the data for display in order to improve performance.

Step 3 — Determine the Format of Your Data to Cache

After you have determined the data that you must cache and decided where to cache it, the next important task is to identify the format for the cached data. When you are caching data, store it in a format optimized for the intended use so that it does not require additional or repeated processing or transformation. Caching data in this ready-to-use format is a good choice when you are using an in-memory cache, you do not need to share the cache across processes or computers, you do not need to transport cached data between memory locations, and you are caching raw data such as DataSets, DataTables, and Web pages.

If you must store or transport the cached data, consider serialization requirements. Serializing the cached data is a good choice when you will cache data in a disk-based cache, or you will store session state on a separate server or in a SQL Server database. It is also a good approach when you must share the cache across process or computers, transport the cached data between memory locations, or cache custom objects. You can choose to serialize your data using an XML serializer or a binary serializer. An XML serializer is a good choice when interoperability is your key concern. If performance is your key concern, consider using a binary serializer.
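
As a minimal sketch of this serialization choice on the .NET Framework, the following shows the same hypothetical cache item serialized with XmlSerializer (when interoperability matters) and with BinaryFormatter (when performance matters).

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using System.Xml.Serialization;

    [Serializable]
    public class ProductSummary
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CacheSerialization
    {
        // XML serialization: readable and interoperable across platforms.
        public static string ToXml(ProductSummary item)
        {
            var serializer = new XmlSerializer(typeof(ProductSummary));
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, item);
                return writer.ToString();
            }
        }

        // Binary serialization: more compact and faster, but .NET-specific.
        public static byte[] ToBinary(ProductSummary item)
        {
            var formatter = new BinaryFormatter();
            using (var stream = new MemoryStream())
            {
                formatter.Serialize(stream, item);
                return stream.ToArray();
            }
        }
    }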

Step 4 — Determine a Suitable Cache Management Strategy

You must determine an appropriate cache expiration and cache flushing policy. Expiration and flushing relate to the removal of cached data from the cache store. The difference is that flushing might remove valid cache items to make space for more frequently used items, whereas expiration removes invalid and expired items. Check the capabilities of your underlying cache system; not all of these options are available in all cache implementations.

Design a cache expiration strategy that will maintain the validity of the data and items in the cache. When deciding on the cache expiration policy, consider both time-based expiration and notification-based expiration as follows:

  • In a time-based expiration policy, the cached data is expired or invalidated based on relative or absolute time intervals. This is a good choice when the cache data is volatile, the cached data is regularly updated, or the cached data is valid for only a specific time or interval. When choosing a time-based expiration policy, you can choose an absolute time expiration policy or a sliding time expiration policy. An absolute time expiration policy allows you to define the lifetime of cached data by specifying the time at which it will expire. A sliding time expiration policy allows you to define the lifetime of cached data by specifying the interval between the last access and the time at which it will expire. A sketch of both options appears after this list.
  • In a notification-based expiration policy, the cached data is expired or invalidated based on notifications from internal or external sources. This is a good choice when you are working with nonvolatile cache data, the cached data is updated at irregular intervals, or the data is valid unless changed by external or internal systems. Common sources of notifications are disk file writes, WMI events, SQL dependency notifications, and business logic operations. A notification will expire or invalidate the dependent cache item(s).
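
A minimal sketch of time-based expiration using the ASP.NET cache is shown below; the cache keys, durations, and the choice between absolute and sliding expiration are illustrative assumptions. Note that a single Insert call specifies either an absolute or a sliding expiration, not both.

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ProductCache
    {
        // Absolute expiration: the item is invalidated ten minutes after it is added,
        // regardless of how often it is accessed.
        public static void CacheWithAbsoluteExpiration(object productList)
        {
            HttpRuntime.Cache.Insert(
                "ProductList",
                productList,
                null,                                // no cache dependency
                DateTime.UtcNow.AddMinutes(10),      // absolute expiration time
                Cache.NoSlidingExpiration);
        }

        // Sliding expiration: the item is invalidated five minutes after it was last accessed.
        public static void CacheWithSlidingExpiration(object userPreferences)
        {
            HttpRuntime.Cache.Insert(
                "UserPreferences",
                userPreferences,
                null,                                // no cache dependency
                Cache.NoAbsoluteExpiration,
                TimeSpan.FromMinutes(5));
        }
    }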

Design a cache flushing strategy so that storage, memory, and other resources are used efficiently. When deciding on the cache flushing strategy, you can choose explicit flushing or scavenging as follows:

  • Explicit flushing requires you to determine when an item should be flushed and then remove it. This is a good choice when you must support the scenario of removing damaged or obsolete cached data, you are working with custom stores that do not support scavenging, or you are working with a disk-based cache.
  • Scavenging requires you to determine the conditions and heuristics under which an item should be scavenged. This is a good choice when you want to activate scavenging automatically when system resources become scarce, you want to remove seldom used or unimportant items from the cache automatically, or you are working with a memory-based cache.

Common scavenging heuristics include the following:

  • The Least Recently Used algorithm scavenges the items that have not been used for the longest period of time (a minimal sketch of this heuristic appears after this list).
  • The Least Frequently Used algorithm scavenges the items that have been used least frequently since they were loaded.
  • The Priority algorithm instructs the cache to assign a priority to cached items and attempt to preserve those with the highest priority when it scavenges the cache.
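
To make the Least Recently Used heuristic concrete, the following is a minimal, illustrative sketch of an LRU cache built on a dictionary and a linked list; it is not the scavenging implementation of any particular caching product, and the capacity is assumed to be at least one.

    using System.Collections.Generic;

    public class LruCache<TKey, TValue>
    {
        private readonly int capacity;
        private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> map =
            new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
        private readonly LinkedList<KeyValuePair<TKey, TValue>> usageOrder =
            new LinkedList<KeyValuePair<TKey, TValue>>();

        public LruCache(int capacity)
        {
            this.capacity = capacity;   // assumed to be at least 1
        }

        public bool TryGet(TKey key, out TValue value)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if (map.TryGetValue(key, out node))
            {
                // Move the item to the front to record that it was used most recently.
                usageOrder.Remove(node);
                usageOrder.AddFirst(node);
                value = node.Value.Value;
                return true;
            }
            value = default(TValue);
            return false;
        }

        public void Add(TKey key, TValue value)
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> existing;
            if (map.TryGetValue(key, out existing))
            {
                // Replace an existing entry so it does not appear twice in the usage list.
                usageOrder.Remove(existing);
                map.Remove(key);
            }

            if (map.Count >= capacity)
            {
                // Scavenge: evict the least recently used item from the back of the list.
                LinkedListNode<KeyValuePair<TKey, TValue>> oldest = usageOrder.Last;
                usageOrder.RemoveLast();
                map.Remove(oldest.Value.Key);
            }

            var node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
                new KeyValuePair<TKey, TValue>(key, value));
            usageOrder.AddFirst(node);
            map[key] = node;
        }
    }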

Step 5 — Determine How to Load the Cache Data

Choosing the appropriate option for loading your cache helps to maximize the performance and responsiveness of your application. When determining how to populate the cache, consider how much of the data you want to be available when the application starts or when you initially load the cache, and the implications on application startup time and performance. For example, you may decide to pre-load data into the cache when the application initializes, or to acquire and cache data only when it is requested. Loading data into the cache at application startup can increase an application's responsiveness, but also increases its startup time. On the other hand, loading data into the cache only when it is first accessed decreases startup time but can also reduce initial responsiveness.

You can use either proactive or reactive loading when designing your cache population strategy, as follows:

  • Choose proactive loading to retrieve all of the data for the application when it starts and then cache it for the lifetime of the application. Proactive loading is a good choice if your cached data is relatively static or has a known update frequency, a known lifetime, and a known size. If you do not know the size of the data, you might exhaust system resources loading it all. It is also a good choice if the source for your cached data is a slow database; or data is retrieved across a slow network or from an unreliable Web service.
  • Choose reactive loading to retrieve data as it is requested by the application and cache it for future requests. Reactive loading is a good choice if your cached data is relatively volatile, you are not sure of your cache data lifetime, your cached data volume is large, and your cache data source is reliable and responsive.

Design Steps for Exception Management

A robust and well designed exception management strategy can simplify application design, and improve security and manageability. It can also make it easier for developers to create the application, and reduces development time and cost. The following steps will help you to design an appropriate exception management strategy for your application.

Step 1 — Identify Exceptions That You Want to Handle

When designing exception management for your application, it is important to identify the exceptions that you want to handle. You should handle system or application exceptions such as those raised by users accessing system resources for which they do not have permission; and system failures due to disk, CPU, or memory issues. You must also identify the business exceptions that you want to handle. These are exceptions caused by actions such as violations of business rules.

Step 2 — Determine Your Exception Detection Strategy

Your design should mandate that structured exception handling is used consistently throughout the entire application. This creates a more robust application that is less likely to be left in an inconsistent state. Structured exception handling provides a way to manage exceptions using try, catch, and finally blocks to detect errors occurring within your code, and react to them appropriately.

The key considerations when detecting exceptions are to only catch the exception when you must gather exception details for logging, add relevant extra information to the exception, clean up any resources used in the code block, or retry the operation to recover from the exception. Do not catch an exception and then allow it to propagate up the call stack if you do not need to carry out any of these tasks.

Step 3 — Determine Your Exception Propagation Strategy

Consider the following exception propagation strategies. Your application can (and should) use a mixture of any or all of these strategies depending on the requirements of each context:

  • Allow exceptions to propagate. This strategy is useful when you do not need to gather exception details for logging, add relevant extra information to the exception, clean up any resources used in the code block, or retry the operation to recover from the exception. You simply allow the exception to propagate up through the code stack.
  • Catch and rethrow exceptions. In this strategy, you catch the exception, carry out some other processing, and then rethrow it. Usually, in this approach, the exception information remains unaltered. This strategy is useful when you have to clean up resources, log exception information, or if you need to attempt to recover from the error.
  • Catch, wrap, and throw exceptions. In this strategy, you catch generic exceptions and react to them by cleaning up resources or performing any other relevant processing. If you cannot recover from the error, you wrap the exception within another exception that is more relevant to the caller and then throw the new exception so that it can be handled by code higher in the code stack. This strategy is useful when you want to keep the exception relevancy and/or provide additional information to the code that will handle the exception. A sketch of this strategy appears after this list.
  • Catch and discard exceptions. This is not the recommended strategy, but might be suitable in some specific scenarios. You catch the exception and proceed with normal application execution. If required, you can log the exception and perform resource cleanup. This strategy may be useful for system exceptions that do not affect user operations, such as an exception raised when a log is full.
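
The catch, wrap, and throw strategy can be illustrated with the minimal sketch below; the OrderReader class, the file-based data source, and the DataAccessException type are hypothetical examples (Step 4 discusses designing such custom exceptions).

    using System;

    public class DataAccessException : Exception
    {
        public DataAccessException(string message, Exception innerException)
            : base(message, innerException)
        {
        }
    }

    public class OrderReader
    {
        // Catch a low-level exception, add context that is relevant to the caller,
        // wrap it, and throw the new exception up the call stack.
        public string ReadOrderXml(string path)
        {
            try
            {
                return System.IO.File.ReadAllText(path);
            }
            catch (System.IO.IOException ex)
            {
                throw new DataAccessException(
                    "The order document '" + path + "' could not be read.", ex);
            }
        }
    }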

Step 4 — Determine Your Custom Exception Strategy

Consider whether you need to design custom exceptions or if you can use the standard .NET Framework exception types. Do not use a custom exception if a suitable exception is already available in your exception hierarchy or within the .NET Framework. However, use a custom exception if your application must identify and handle a specific exception in order to avoid using conditional logic, or if it must include additional information to suit a specific requirement.

If you do need to create custom exception classes, ensure that the class name ends with "Exception," and implement the standard constructors for your custom exception class, including the serialization constructor. This is important in order to integrate with the standard exception mechanism. Implement a custom exception by deriving from a suitable, more general exception class in order to specialize it to meet your requirements.
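
A minimal sketch of a custom exception class that follows these conventions is shown below; the BusinessRuleViolationException name and the choice of Exception as the base class are illustrative.

    using System;
    using System.Runtime.Serialization;

    // The name ends with "Exception" and the type derives from a suitable,
    // more general exception class.
    [Serializable]
    public class BusinessRuleViolationException : Exception
    {
        public BusinessRuleViolationException()
        {
        }

        public BusinessRuleViolationException(string message)
            : base(message)
        {
        }

        public BusinessRuleViolationException(string message, Exception innerException)
            : base(message, innerException)
        {
        }

        // Serialization constructor, required so the exception can be marshaled
        // across application domain and remoting boundaries.
        protected BusinessRuleViolationException(SerializationInfo info, StreamingContext context)
            : base(info, context)
        {
        }
    }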

In general, when designing your exception management strategy, you should create an exception hierarchy and organize your custom exceptions within it. This helps users to quickly analyze and trace problems. Your custom exceptions should indicate the layer in which the exception occurred, the component in which the exception might have occurred, and the type of exception that occurred (such as a security, system, or business exception).

Consider storing your application's exception hierarchy in a single assembly that can be referenced throughout your application code. This helps to centralize the management and deployment of your exception classes. Also, consider how you will marshal exceptions across boundaries. The .NET Framework Exception classes support serialization. When you are designing custom exception classes, ensure that they also support serialization.

Step 5 — Determine Appropriate Information to Gather

When handling exceptions, one of the most important aspects is a sound strategy for gathering exception information. The information captured should accurately represent the exception condition. It should also be relevant and informative to the audience. Audiences usually fall into one of three categories: end users, application developers, and operators. Analyze the audience you are addressing by considering the specific scenario and context.

End users require a meaningful and well-presented description. When gathering exception information for end users, consider providing a user-friendly message that indicates the nature of the error, and information on how to recover from the error if this is appropriate. Application developers require more detailed information in order to assist with problem diagnosis.

When gathering exception information for application developers, make sure you provide the precise location in the code where the exception occurred, and exception details such as the exception type and the state of the system when the exception occurred. Operators require relevant information that allows them to react appropriately and take the necessary recovery steps. When gathering exception information for operators, consider providing exception details and knowledge that will assist operators in locating the people to notify and the information they will require to solve the problem.

Irrespective of the audience that will receive the exception information, it is useful to provide rich exception information. Store the information in a log file for later examination and analysis of exception frequency and details. By default, you should capture at least the date and time, machine name, exception source and type, exception message, stack and call traces, application domain name, assembly name and version, thread ID, and user details.
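
The sketch below assembles that default set of details into a single log entry; the exact fields and their formatting are assumptions that you would adapt to your chosen logging store, and the original caller's identity may differ from the process identity returned by Environment.UserName.

    using System;
    using System.Reflection;
    using System.Text;
    using System.Threading;

    public static class ExceptionReporter
    {
        // Builds a detailed log entry for an exception. The resulting text is intended
        // for a log file or database, not for display to end users.
        public static string BuildLogEntry(Exception ex)
        {
            AssemblyName assembly = Assembly.GetExecutingAssembly().GetName();

            var entry = new StringBuilder();
            entry.AppendLine("Timestamp (UTC): " + DateTime.UtcNow.ToString("o"));
            entry.AppendLine("Machine:         " + Environment.MachineName);
            entry.AppendLine("App domain:      " + AppDomain.CurrentDomain.FriendlyName);
            entry.AppendLine("Assembly:        " + assembly.Name + " " + assembly.Version);
            entry.AppendLine("Thread ID:       " + Thread.CurrentThread.ManagedThreadId);
            entry.AppendLine("User:            " + Environment.UserName);
            entry.AppendLine("Exception type:  " + ex.GetType().FullName);
            entry.AppendLine("Source:          " + ex.Source);
            entry.AppendLine("Message:         " + ex.Message);
            entry.AppendLine("Stack trace:     " + ex.StackTrace);
            return entry.ToString();
        }
    }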

Step 6 — Determine Your Exception Logging Strategy

There is a range of options available for logging exception information. The following key considerations will help you to choose a logging option:

  • Choose Windows Event Log or Windows Eventing 6.0 when your application is deployed on a single machine, you need to leverage existing tools to view the log, or reliability is a prime concern.
  • Choose a SQL Database when your application is deployed in a farm or cluster, you need to centralize your logging, or you need flexibility as to how the exception information is structured and logged.
  • Choose a custom log file when your application is deployed on a single machine, you need complete flexibility for choosing the log format, or you want a simple and easy to implement log store. Ensure that you limit the size of the log file by trimming or consolidating the log periodically to prevent it from becoming too large.
  • Choose Message Queuing as a delivery mechanism to pass exception messages to their final destination when reliability is your prime concern, your applications are deployed in a farm or cluster, or you must centralize logging.

For any application, you can choose a mix of these options depending upon your scenario and exception policy. For example, security exceptions may be logged to the Security Event Log and business exceptions may be logged to a database.

Step 7 — Determine Your Exception Notification Strategy

As part of your exception management design, you must also decide on your notification strategy. Exception management and logging are often not sufficient in enterprise applications. You should consider complementing them with notifications to ensure that administrators and operators are made aware of exceptions. You can use technologies such as WMI events, SMTP e-mail, SMS text messages, or other custom notification systems.

Consider using external notification mechanisms such as log monitoring systems or a third-party environment that detects the error conditions in the log data and raises appropriate notifications. This is a good choice when you want to decouple your monitoring and notification system from your application code and have just logging code inside your applications. Alternatively, consider adding custom notification mechanisms inside your application when you want to generate immediate notifications without relying on external monitoring systems.

Step 8 — Determine How to Handle Unhandled Exceptions

When an exception remains unhandled at the last point or boundary, and there is no way to recover from it before returning control to the user, your application must deal with it as an unhandled exception. For unhandled exceptions, you should gather the required information, write it to a log or audit file, send any notifications required for the exception, perform any cleanup required, and finally communicate the error information to the user.

Do not expose all of the exception details. Instead, provide a user-friendly generic error message. In the case of clients that have no user interface, such as Web services, you might choose to throw a generic exception in place of the detailed exception. This prevents system details from being exposed to the end user.
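
As a minimal sketch of a last-chance handler for a console or service host, the code below subscribes to the AppDomain.UnhandledException event, records the full details, and surfaces only a generic message; the logging and message text are placeholders, and ASP.NET and WPF applications would instead use their own mechanisms (such as Application_Error or DispatcherUnhandledException).

    using System;

    public static class Program
    {
        public static void Main()
        {
            // Register a last-chance handler before the rest of the application runs.
            AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;

            // ... application start-up ...
        }

        private static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            var ex = e.ExceptionObject as Exception;

            // Log the full details for developers and operators (placeholder; see the
            // logging options discussed in Step 6).
            Console.Error.WriteLine(ex != null ? ex.ToString() : e.ExceptionObject.ToString());

            // Show only a generic, user friendly message; never expose internal details.
            Console.WriteLine("An unexpected error occurred. Please contact support.");
        }
    }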

Consider using the patterns & practices Exception Handling Application Block and the patterns & practices Logging Application Block to implement a consistent exception management, logging, and notification strategy for your applications. The Exception Handling Application Block supports a range of exception handling options, and the Logging Application Block can receive, format, and send log messages and notifications to a wide range of logs and other destinations. For more information, see Appendix F "patterns & practices Enterprise Library."

Design Steps for Validating Input and Data

The following steps will help you to design an appropriate validation strategy for your application. When designing input and data validation for your application, the first task is to identify the trust boundaries and key scenarios when data should be validated. Next, identify the data to be validated and the location where it should be validated. You should also determine how to implement a reusable validation strategy. Finally, determine the validation strategy appropriate for your application.

Step 1 — Identify Your Trust Boundaries

Trust boundaries define the separation between trusted and untrusted data. Data on one side of the trust boundary is trusted and, on the other side, it is not trusted. You should first identify data that is crossing trust boundaries to determine what you must validate. Consider the use of input validation and data validation at every trust boundary to mitigate security threats such as cross-site scripting and code injection. Examples of trust boundaries are a perimeter firewall, the boundary between the Web server and database server, and the boundary between your application and a third-party service.

Identify the systems and subsystems that your application communicates with, and the outer system boundaries that are crossed when writing to files on a server, making calls to the database server, or calling a Web service. Identify the entry points at the trust boundaries and the exit points where you write data from client input or from untrusted sources such as shared databases.

Step 2 — Identify Key Scenarios

After you identify the trust boundaries within your application, you should define the key scenarios where you must validate data. All user entered data should be considered malicious until validated. For example, in a Web application, data in the presentation layer that should be validated includes values in form fields, query strings, and hidden fields; parameters sent in GET and POST requests; uploaded data (malicious users can intercept HTTP requests and modify the contents); and cookies (which reside on the client machine and could be modified).

In the business layer, business rules impose a constraint on the data. Any violation of these rules is assumed to be a validation error, and the business layer should raise an error to represent the violation. If you use a rules engine or workflow, ensure that it validates the results for each rule based upon the information required for that rule and the conclusions made from the evaluation of previous rules.

Step 3 — Determine Where to Validate

In this step, you determine where to perform validation—on the client, or on both the server and the client. Never depend on client-side validation alone. Use client-side validation to provide a more interactive UI, but always implement server-side validation to validate the data securely within your trust boundary. Data and business rules validation on the client can reduce round trips to the server and improve user experience. In a Web application, the client browser should support DHTML and JavaScript, ideally implemented in a separate .js file to provide reusability and to allow the browser to cache it. The simplest approach in a Web application is to use the ASP.NET validation controls. This is a set of server controls that can validate data client side, and will automatically validate server side as well.

Server-side data and business rules validation can be implemented using ASP.NET validation controls in a Web application. Alternatively, for both Web and other types of applications, consider using the patterns & practices Validation Application Block to create validation logic that can be reused across layers. The Validation Application Block can be used in Windows Forms, ASP.NET, and WPF applications. For more information about the Validation Application Block, see Appendix F "patterns & practices Enterprise Library."

Step 4 — Identify Validation Strategies

The common strategies for data validation are:

  • Accept known good (allow list or positive validation): Accept only data that satisfies specific criteria, and reject all other input. Use this strategy where possible, as it is the most secure approach (see the sketch after this list).
  • Reject known bad (block list or negative validation): Reject data that matches criteria known to be bad (such as data containing a specified set of characters), and accept everything else. Use this strategy cautiously and as a secondary line of defense as it is very difficult to create a complete list of criteria for all known invalid input.
  • Sanitize: Eliminate or translate characters in an effort to make the input safe. As with the block list (negative validation) approach, use this strategy cautiously and as a secondary line of defense as it is very difficult to create a complete list of criteria for all known invalid input.
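
The first and third strategies can be illustrated with the minimal sketch below; the user name rules and the regular expressions are hypothetical examples, and sanitizing remains a secondary line of defense behind the allow list.

    using System.Text.RegularExpressions;

    public static class UserNameInput
    {
        // Accept known good: only 3 to 20 letters, digits, or underscores are allowed.
        private static readonly Regex AllowedPattern = new Regex(@"^[A-Za-z0-9_]{3,20}$");

        public static bool IsValid(string input)
        {
            return input != null && AllowedPattern.IsMatch(input);
        }

        // Sanitize (secondary defense only): strip every character that is not
        // explicitly allowed, rather than trying to enumerate what is bad.
        public static string Sanitize(string input)
        {
            return input == null
                ? string.Empty
                : Regex.Replace(input, @"[^A-Za-z0-9_]", string.Empty);
        }
    }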

Relevant Design Patterns

Key patterns connected with crosscutting concerns can be organized into categories, as shown in the following list. Consider using these patterns when making design decisions for each category.

Caching

  • Cache Dependency. Use external information to determine the state of data stored in a cache.
  • Page Cache. Improve the response time for dynamic Web pages that are accessed frequently, but that change less often and consume a large amount of system resources to construct.

Communication

  • Intercepting Filter. A chain of composable filters (independent modules) that implement common pre-processing and post-processing tasks during a Web page request.
  • Pipes and Filters. Route messages through pipes and filters that can modify or examine the message as it passes through the pipe.
  • Service Interface. A programmatic interface that other systems can use to interact with the service.

For more information on the Page Cache, Intercepting Filter, and Service Interface patterns, see "Enterprise Solution Patterns Using Microsoft .NET" at https://msdn.microsoft.com/en-us/library/ms998469.aspx.

For more information on the Pipes and Filters pattern, see "Integration Patterns" at https://msdn.microsoft.com/en-us/library/ms978729.aspx.

patterns & practices Solution Assets

For more information on related solution assets available from the Microsoft patterns & practices group, see the following resources:

  • Enterprise Library provides a series of application blocks that simplify common tasks such as caching, exception handling, validation, logging, cryptography, and credential management, and provides facilities for implementing design patterns such as Inversion of Control and Dependency Injection. For more information, see the "Microsoft Enterprise Library" at https://msdn.microsoft.com/en-us/library/cc467894.aspx.
  • Unity Application Block is a lightweight, extensible dependency injection container that helps you to build loosely coupled applications. For more information, see "Unity Application Block" at https://msdn.microsoft.com/en-us/library/cc468366.aspx.

Additional Resources

For more information on authentication and authorization, see the following articles:

For more information on the remaining topics covered in this chapter, see the following articles:

For information on some of the popular third party libraries and frameworks that you might find useful for managing crosscutting concerns, see the following resources: