Fast Track — A Guide for Getting Started and Applying the Guidance

 

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

Improving .NET Application Performance and Scalability

J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Microsoft Corporation

May 2004

Related Links

Home Page for Improving .NET Application Performance and Scalability

Send feedback to Scale@microsoft.com

patterns & practices Library

Summary: This fast track shows you how to prepare to apply the guidance in your organization. This chapter is particularly relevant for managers who are planning to introduce and implement the guidelines in Improving .NET Application Performance and Scalability.

Contents

Goal and Scope
The Approach
Set Performance Objectives
Design for Performance
Measuring Performance
Testing Performance
Tuning Performance
Applying the Guidance to Your Application Life Cycle
Who Does What?
Implementing the Guidance
Summary

Goal and Scope

The goal of this guide is to help you design, implement, and tune Microsoft .NET applications to meet your performance objectives. The guide provides a principle-based approach for addressing performance and scalability throughout your application life cycle.

The scope of the guide is shown in Figure 1.

Figure 1: The scope of the guide

The guidance is organized by categories, principles, roles, and stages of the life cycle:

  • Performance objectives enable you to know when your application meets your performance goals.
  • Performance modeling provides a structured and repeatable approach to meeting your performance objectives.
  • Architecture and design guidelines enable you to engineer for performance from an early stage.
  • A performance and scalability frame enables you to organize and prioritize performance issues.
  • Measuring lets you see whether your application is trending toward or away from the performance objectives.

The Approach

Performance must be given due consideration up front and throughout the life cycle. The guide promotes a structured and repeatable approach to performance that you can embed into your application life cycle. This enables you to mitigate performance risk from the onset of your project. You work toward defined performance objectives, design for performance, and test, measure, and tune throughout the life cycle. This approach is summarized in Figure 2.

Figure 2: A life cycle-based approach to performance: set objectives and measure

The performance and scalability frame promoted in this guide provides you with a logical structure to help organize and prioritize performance issues.

Set Performance Objectives

Think carefully about the performance objectives for your application early during requirements analysis. Include performance objectives with your functional requirements and other nonfunctional requirements, such as security and maintainability.

Performance Objectives

Performance objectives should include the following (the sketch after this list shows one way to record them in code):

  • Response time. This is the time it takes your system to complete a particular operation, such as a user transaction.
  • Throughput. This is the amount of work your system can support. Throughput can be measured in terms of requests per second, transactions per second, or bytes per second.
  • Resource utilization. This is the percentage of system resources that are used by particular operations and how long they are used. This is the cost of server and network resources, including CPU, memory, disk I/O, and network I/O.
  • Workload. This is usually derived from marketing data and includes total numbers of users, concurrently active users, data volumes, transaction volumes, and transaction mix.
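One way to make these objectives actionable is to record them in code so that automated tests can assert against them. The following C# sketch (C# 2.0 or later) is purely illustrative; the class name, scenario, and values are hypothetical and are not taken from the guide.

  using System;

  // Hypothetical sketch: performance objectives recorded as named
  // constants so that load tests can assert against them.
  public static class PerformanceObjectives
  {
      // Response time: maximum acceptable time for a key scenario
      // (the scenario and value are illustrative).
      public static readonly TimeSpan MaxOrderSubmitTime = TimeSpan.FromSeconds(3);

      // Throughput: minimum requests per second to sustain at peak.
      public const int MinRequestsPerSecond = 100;

      // Resource utilization: CPU budget under peak load (75 percent).
      public const double MaxCpuUtilization = 0.75;

      // Workload: concurrently active users the system must support.
      public const int PeakConcurrentUsers = 500;
  }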

Quality of Service Attributes

Performance and scalability are quality of service attributes. You need to balance performance with other quality of service attributes, including security, maintainability, and interoperability. The various factors to consider are shown in Figure 3.

Figure 3: Balancing performance objectives with other quality of service attributes

Your performance objectives and other quality of service attributes are derived from your business requirements. Metrics (captured by measuring) tell you whether you are trending toward or away from your performance objectives.

Design for Performance

Give performance due consideration up front. Performance modeling is a structured approach that supports performance engineering, in contrast to the haphazard approaches that characterize many projects. The performance and scalability frame promoted by this guide also enables you to apply structure and organization to the performance problem domain.

Performance and Scalability Frame

The guide uses a performance and scalability frame to help you organize and prioritize performance and scalability issues. The performance categories used in this guide are shown in Table 1.

Table 1: Performance Categories

Category                         Key Considerations
Coupling and Cohesion            Loose coupling, high cohesion among components and layers
Communication                    Transport mechanism, boundaries, remote interface design, round trips, serialization, bandwidth
Concurrency                      Transactions, locks, threading, queuing
Resource Management              Allocating, creating, destroying, pooling
Caching                          Per user, application-wide, data volatility
State Management                 Per user, application-wide, persistence, location
Data Structures and Algorithms   Choice of algorithm; arrays vs. collections

The categories in the frame are a prioritized set of technology-agnostic common denominators that are pervasive across applications. You can use these categories as evaluation criteria for the areas where performance and scalability decisions have the most impact.

Performance Modeling

Performance modeling helps you evaluate your design decisions against your objectives early on, before committing time and resources. Invalid design assumptions and poor design practices may mean that your application can never achieve its performance objectives. The performance modeling process model presented in this guide is summarized in Figure 4.

Figure 4: Eight-step performance modeling process

The performance modeling process consists of the following steps (the sketch after the list illustrates steps 4 through 6):

  1. Identify key scenarios. Identify those scenarios in which performance is important and the ones that pose the most risk to your performance objectives.
  2. Identify workloads. Identify how many users, and how many concurrent users, your system needs to support.
  3. Identify performance objectives. Define performance objectives for each of your key scenarios. Performance objectives reflect business requirements.
  4. Identify budget. Identify your budget or constraints. This includes the maximum execution time in which an operation must be completed and resource utilization such as CPU, memory, disk I/O, and network I/O constraints.
  5. Identify processing steps. Break your scenarios down into component processing steps.
  6. Allocate budget. Spread your budget determined in Step 4 across your processing steps determined in Step 5 to meet the performance objectives you defined in Step 3.
  7. Evaluate. Evaluate your design against objectives and budget. You may need to modify design or spread your response time and resource utilization budget differently to meet your performance objectives.
  8. Validate. Validate your model and estimates. This is an ongoing activity and includes prototyping, testing, and measuring.
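To illustrate steps 4 through 6, the sketch below treats the execution-time budget as a quantity that is spread across the processing steps of one scenario and then checked against the scenario's objective. The scenario, step names, and numbers are hypothetical, and the example uses C# 3.0 or later syntax.

  using System;
  using System.Collections.Generic;

  // Hypothetical sketch: allocate an execution-time budget across the
  // processing steps of one scenario (steps 5 and 6), then evaluate the
  // allocation against the scenario's objective (step 7).
  class ScenarioBudget
  {
      public string Scenario = "Submit order";
      public TimeSpan Objective = TimeSpan.FromSeconds(3); // from step 3

      // Step 5: processing steps; step 6: budget allocated to each.
      public Dictionary<string, TimeSpan> Allocations =
          new Dictionary<string, TimeSpan>
          {
              { "Validate input",  TimeSpan.FromMilliseconds(200)  },
              { "Process order",   TimeSpan.FromMilliseconds(1300) },
              { "Update database", TimeSpan.FromMilliseconds(1000) },
              { "Render response", TimeSpan.FromMilliseconds(500)  },
          };

      // Step 7: evaluate whether the allocation fits the objective.
      public bool WithinBudget()
      {
          TimeSpan total = TimeSpan.Zero;
          foreach (TimeSpan step in Allocations.Values) total += step;
          return total <= Objective;
      }
  }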

Measuring Performance

You need to measure to know whether your application operates within its budget allocation, and whether it is trending toward or away from its performance objectives.

Know the Cost

You need to measure to know the cost of your tools. For example, how much does a certain application programming interface (API), library, or choice of technology cost you? If necessary, use prototypes to obtain metrics. As soon as development begins and you have real code to use, start measuring it and refine your performance models.
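For example, a quick way to estimate the relative cost of two coding techniques is to time each over many iterations. The sketch below compares string concatenation with StringBuilder; it assumes .NET 2.0 or later for System.Diagnostics.Stopwatch, and its output is indicative only, so always measure in the context of your own application.

  using System;
  using System.Diagnostics;
  using System.Text;

  // Rough cost probe: time two techniques over many iterations.
  class CostProbe
  {
      static void Main()
      {
          const int iterations = 10000;

          Stopwatch watch = Stopwatch.StartNew();
          string s = string.Empty;
          for (int i = 0; i < iterations; i++) s += "x";
          watch.Stop();
          Console.WriteLine("String concatenation: {0} ms", watch.ElapsedMilliseconds);

          watch = Stopwatch.StartNew();
          StringBuilder sb = new StringBuilder();
          for (int i = 0; i < iterations; i++) sb.Append("x");
          watch.Stop();
          Console.WriteLine("StringBuilder.Append: {0} ms", watch.ElapsedMilliseconds);
      }
  }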

Validate

Validate your model and estimates. Continue to create prototypes and measure the performance of your application scenarios by capturing metrics. This is an ongoing activity; continue validating until your performance goals are met.

The further along you are in the application life cycle, the more accurate the validation will be. Early on, validation is based on available benchmarks and prototype code, or even proof-of-concept code. Later, you can measure your actual code as your application develops.

Testing Performance

Performance testing is used to verify that an application is able to perform under expected and peak load conditions, and that it can scale sufficiently to handle increased capacity.

Load Testing

Use load testing to verify application behavior under normal and peak load conditions. This allows you to capture metrics and verify that your application can meet its performance objectives. Load testing is a six-step process, as shown in Figure 5.

Figure 5: The load testing process

The load testing process involves the following steps:

  1. Identify key scenarios. Identify application scenarios that are critical for performance.
  2. Identify workload. Distribute the total application load among the key scenarios identified in Step 1.
  3. Identify metrics. Identify the metrics you want to collect about the application when running the test.
  4. Create test cases. Create the test cases, in which you define steps for conducting a single test along with the expected results.
  5. Simulate load. Use test tools to simulate load in accordance with the test cases. Capture the resulting metric data.
  6. Analyze results. Analyze the metric data captured during the test.

You begin load testing with a total number of users distributed according to your user profile, and then incrementally increase the load for each test cycle, analyzing the results each time.
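In practice you would drive the load from a dedicated load testing tool, but the mechanics can be sketched in code. The following hypothetical C# example simulates a number of concurrent users issuing HTTP requests and records their response times; the URL, user count, and request count are placeholders.

  using System;
  using System.Collections.Generic;
  using System.Diagnostics;
  using System.Net;
  using System.Threading;

  // Hypothetical load-simulation sketch: each simulated user issues a
  // series of HTTP requests while response times are collected.
  class LoadSimulator
  {
      const string TargetUrl = "http://localhost/myapp/default.aspx"; // placeholder
      const int Users = 10;
      const int RequestsPerUser = 20;

      static readonly List<long> ResponseTimesMs = new List<long>();

      static void Main()
      {
          List<Thread> threads = new List<Thread>();
          for (int i = 0; i < Users; i++)
          {
              Thread thread = new Thread(SimulateUser);
              threads.Add(thread);
              thread.Start();
          }
          foreach (Thread thread in threads) thread.Join();

          long total = 0;
          foreach (long ms in ResponseTimesMs) total += ms;
          if (ResponseTimesMs.Count > 0)
          {
              Console.WriteLine("Requests: {0}  Average response time: {1} ms",
                  ResponseTimesMs.Count, total / ResponseTimesMs.Count);
          }
      }

      static void SimulateUser()
      {
          for (int i = 0; i < RequestsPerUser; i++)
          {
              Stopwatch watch = Stopwatch.StartNew();
              WebRequest request = WebRequest.Create(TargetUrl);
              using (WebResponse response = request.GetResponse()) { }
              watch.Stop();
              lock (ResponseTimesMs) ResponseTimesMs.Add(watch.ElapsedMilliseconds);
          }
      }
  }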

Stress Testing

Use stress testing to evaluate your application's behavior when it is pushed beyond its breaking point, and to unearth application bugs that surface only under high load conditions.

Stress testing is a six-step process, as shown in Figure 6.

Figure 6: The stress testing process

The stress testing process involves the following steps:

  1. Identify key scenarios. Identify the application scenarios that need to be stress tested to identify potential problems.
  2. Identify workload. Identify the workload that you want to apply to the scenarios identified in Step 1. This is based on the workload and peak load capacity inputs.
  3. Identify metrics. Identify the metrics that you want to collect about the application when you run the test, based on the potential problems identified for your scenarios.
  4. Create test cases. Create the test cases, in which you define steps for conducting a single test along with the expected results.
  5. Simulate load. Use test tools to simulate the required load for each test case. Capture the resulting metric data.
  6. Analyze results. Analyze the metric data captured during the test.

The load you apply to a particular scenario should stress the system sufficiently beyond its threshold limits. You can incrementally increase the load and observe the application behavior over various load conditions.
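Building on the load-simulation idea above, a stress run steps the user count upward in increments well past the expected peak and examines the captured metrics and error counts at each level before continuing. The minimal sketch below is hypothetical; RunLoadLevel stands in for a load driver like the one in the previous example.

  using System;

  // Hypothetical stress-stepping sketch: apply increasing load levels
  // that deliberately exceed the expected peak (here, 200 users).
  class StressRunner
  {
      static void Main()
      {
          int[] userLevels = { 100, 200, 400, 800 }; // 200 = expected peak (illustrative)
          foreach (int users in userLevels)
          {
              Console.WriteLine("Applying load: {0} users", users);
              RunLoadLevel(users);
              // Analyze metrics and error counts here before stepping up.
          }
      }

      static void RunLoadLevel(int users)
      {
          // Placeholder: drive 'users' simulated clients against the
          // system and capture metrics, as in the load-testing sketch.
      }
  }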

Tuning Performance

Performance tuning is an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives. You establish a baseline and then collect data, analyze the results, identify bottlenecks, make configuration changes, and measure again. Figure 7 shows the basic performance tuning process.

Figure 7: The performance tuning process

Performance tuning consists of the following set of activities (the sketch after this list illustrates step 2):

  1. Establish a baseline. Ensure that you have a well-defined set of performance objectives, test plans, and baseline metrics.
  2. Collect data. Simulate load and capture metrics.
  3. Analyze results. Identify performance issues and bottlenecks.
  4. Configure. Tune your application setup by applying new system, platform, or application configuration settings.
  5. Test and measure. Test and measure to verify that your configuration changes have been beneficial.
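For the data collection step, Windows performance counters are one readily available source of metrics. The sketch below samples processor and memory counters once per second while a test runs; it uses the System.Diagnostics.PerformanceCounter class with standard Windows counter names, and assumes it runs on Windows with permission to read those counters.

  using System;
  using System.Diagnostics;
  using System.Threading;

  // Sketch for the "collect data" activity: sample a few Windows
  // performance counters once per second while the load test runs.
  class MetricsCollector
  {
      static void Main()
      {
          using (PerformanceCounter cpu =
                     new PerformanceCounter("Processor", "% Processor Time", "_Total"))
          using (PerformanceCounter memory =
                     new PerformanceCounter("Memory", "Available MBytes"))
          {
              cpu.NextValue(); // the first sample always reads 0; prime the counter
              for (int i = 0; i < 10; i++)
              {
                  Thread.Sleep(1000);
                  Console.WriteLine("CPU: {0,5:F1} %   Available memory: {1:F0} MB",
                      cpu.NextValue(), memory.NextValue());
              }
          }
      }
  }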

Applying the Guidance to Your Application Life Cycle

Performance should be pervasive throughout your application life cycle. This section explains how the component parts of the guide relate to the various functions associated with a typical application life cycle.

Functional Mapping

Different parts of the guide apply to different functional areas. The sequence of the chapters corresponds to typical functional areas in an application life cycle. This relationship is shown in Figure 8.

Figure 8: Relationship of chapters to application life cycle

Note that development methodologies tend to be characterized as either linear ("waterfall" approaches) or iterative ("spiral" approaches). Figure 8 does not signify one approach or the other, but simply shows the typical functions that are performed and how the guidance relates to those functions.

Performance Throughout the Life Cycle

Performance begins during requirements gathering and continues throughout the application life cycle. The parallel activities are shown in Figure 9.

Figure 9: Performance tasks performed throughout the life cycle

The following list summarizes how performance is integrated throughout the entire life cycle:

  • Requirements gathering. You start to define performance objectives, workflow, and key scenarios, and begin to consider workloads and estimated volumes for each scenario. You begin the performance modeling process at this stage, using early prototyping if necessary.
  • Design. Working within your architectural constraints, you start to generate specifications for the construction of code. Design decisions should be based on proven principles and patterns, and your design should be reviewed from a performance perspective. Measuring should continue throughout the life cycle, starting from the design phase.
  • Development. You start reviewing your code early during the implementation phase to identify inefficient coding practices that could lead to potential performance bottlenecks. You can start to capture "real" metrics to validate the assumptions made in the design phase.
  • Testing. You conduct load and stress testing to generate metrics and to verify application behavior and performance under normal and peak load conditions.
  • Deployment. During the deployment phase, you validate your model using production metrics. You can validate workload estimates as well as resource utilization levels, response time, and throughput.
  • Maintenance. You should continue to measure and monitor when your application is deployed in the production environment. Changes that can impact system performance include increased user loads, deployment of new applications on shared infrastructure, system software revisions, and updates to your application to provide enhanced or new functionality.

Who Does What?

Performance is a collaborative effort involving multiple roles.

RACI Chart

RACI stands for the following:

  • Responsible (the role responsible for performing the task)
  • Accountable (the role with overall responsibility for the task)
  • Consulted (people who provide input to help perform the task)
  • Informed (people with a vested interest who should be kept informed)

Table 2 illustrates a simple RACI chart for this guide. The RACI chart helps illustrate who does what by showing who owns, contributes to, and reviews each performance task.

Table 2: RACI Chart

Tasks                                      Architect   Administrator   Developer   Tester
Performance Goals                          A           R               C           I
Performance Modeling                       A           I               I           I
Performance Design Principles              A           I               I           -
Performance Architecture                   A           C               I           -
Architecture and Design Review             R           I               I           -
Code Development                           -           -               A           -
Technology-Specific Performance Issues     -           -               A           -
Code Review                                -           -               R           I
Performance Testing                        C           C               I           A
Tuning                                     C           R               -           -
Troubleshooting                            C           A               I           -
Deployment Review                          C           R               I           I

A dash indicates that the role is not involved in that task.

You can use a RACI chart at the beginning of your project to identify the key performance-related tasks together with the roles that should perform each task.

Implementing the Guidance

The guidance throughout the guide is task-based and modular, and each chapter relates to the various stages of the product development life cycle and the various roles involved. These roles include architect, developer, administrator, and performance analyst. You can pick a specific chapter to perform a particular task or use a series of chapters for a phase of the product development life cycle.

The checklist shown in Table 3 highlights the areas covered by this guide that are required to improve your application's performance and scalability.

Table 3: Implementation Checklist

  • Performance Modeling. Create performance models for your application. For more information, see Chapter 2, "Performance Modeling."
  • Prototyping. Prototype early to validate your design assumptions. Measure prototype performance to determine whether your design approach enables you to meet your performance objectives.
  • Architecture and Design Review. Review the designs of new and existing applications for performance and scalability problems. For more information, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
  • Code Review. Educate developers about how to conduct performance-based code reviews. Perform code reviews for applications in development. For more information, see Chapter 13, "Code Review: .NET Application Performance."
  • Measuring. Know the cost of design decisions, technology choices, and implementation techniques. For more information, see Chapter 15, "Measuring .NET Application Performance."
  • Load Testing. Perform load testing to verify application behavior under normal and peak load conditions. For more information, see Chapter 16, "Testing .NET Application Performance."
  • Stress Testing. Perform stress testing to evaluate your application's behavior when it is pushed beyond its breaking point. For more information, see Chapter 16, "Testing .NET Application Performance."
  • Capacity Testing. Perform capacity testing to plan for future growth, such as an increased user base or increased volume of data. For more information, see Chapter 16, "Testing .NET Application Performance."
  • Tuning. Tune your application to eliminate performance bottlenecks. For more information, see Chapter 17, "Tuning .NET Application Performance."

Summary

This fast track has highlighted the basic approach taken by the guide to help you design and develop .NET applications that meet your performance objectives. It has also shown how to prepare to apply the guidance in your organization, and how the guidance maps to the roles and stages of your application life cycle.

© Microsoft Corporation. All rights reserved.