Chapter 1 — Fundamentals of Engineering for Performance

 

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.


Improving .NET Application Performance and Scalability

J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Microsoft Corporation

May 2004

Related Links

Home Page for Improving .NET Application Performance and Scalability

Send feedback to Scale@microsoft.com

patterns & practices Library

Summary: This chapter introduces performance and scalability fundamentals, and it explains how performance and scalability considerations must be balanced with other quality-of-service requirements such as availability, manageability, and security. It also introduces performance engineering and the common terminology used throughout this guide.

Contents

Overview
Managing Performance
Engineering for Performance
Set Objectives and Measure
Design for Performance
Measure
Life Cycle
Where to Go from Here
Terms You Need to Know
Summary

Overview

Whether you design, build, test, maintain, or manage applications, you need to consider performance. If your software does not meet its performance objectives, your application is unlikely to be a success. If you do not know your performance objectives, it is unlikely that you will meet them.

Performance affects different roles in different ways:

  • As an architect, you need to balance performance and scalability with other quality-of-service (QoS) attributes such as manageability, interoperability, security, and maintainability.
  • As a developer, you need to know where to start, how to proceed, and when you have optimized your software enough.
  • As a tester, you need to validate whether the application supports expected workloads.
  • As an administrator, you need to know when an application no longer meets its service level agreements, and you need to be able to create effective growth plans.
  • As an organization, you need to know how to manage performance throughout the software life cycle, and how to lower the total cost of ownership of the software that your organization creates.

Managing Performance

Performance is about risk management. You need to decide just how important performance is to the success of your project. The more important you consider performance to be, the greater the need to reduce the risk of failure and the more time you should spend addressing performance.

Quality-of-Service Requirements

Performance and scalability are two QoS requirements. Other QoS requirements include availability, manageability, and security. The trick is to be able to balance your performance objectives with these other QoS requirements and be prepared to make tradeoffs. Responsiveness is not necessarily the only measure of success, particularly if it means sacrificing manageability or security.

Reactive vs. Proactive Approach

Performance is frequently neglected until a customer reports a problem. In other cases, performance is not evaluated until system test or initial deployment. In either case, you may not be able to fix the issue by throwing more hardware at the problem.

There are several problems with a reactive approach to performance. Performance problems are frequently introduced early in the design phase, and design issues cannot always be fixed through tuning or more efficient coding. Fixing architectural or design issues later in the cycle is not always possible; at best, it is inefficient, and it is usually very expensive. Table 1.1 summarizes the characteristics of a reactive approach versus a proactive approach.

Table 1.1: Reactive vs. Proactive Approach

| Approach | Characteristics |
| --- | --- |
| Reactive | You generally cannot tune a poorly designed system to perform as well as a system that was well designed from the start. You experience increased hardware expense and an increased total cost of ownership. |
| Proactive | You know where to focus your optimization efforts. You decrease the need to tune and redesign, and therefore you save money. You can save money with less expensive hardware or less frequent hardware upgrades, and you reduce operational costs. |

Engineering for Performance

To engineer for performance, you need to embed a performance culture in your development life cycle, and you need a process to follow. When you have a process to follow, you know exactly where to start and how to proceed, and you know when you are finished. Performance modeling helps you apply engineering discipline to the performance process. The fundamental approach is to set objectives and to measure your progress toward those objectives. Performance modeling helps you set objectives for your application scenarios. Measuring continues throughout the life cycle and helps you determine whether you are moving towards your performance objectives or away from them.

Figure 1.1 shows the main elements required for performance engineering, which reflect the scope of this guide.


Figure 1.1: Engineering for performance

Engineering for performance is broken down into the following actionable categories and areas of responsibility:

  • Performance objectives enable you to know when your application meets your performance goals.
  • Performance modeling provides a structured and repeatable approach to meeting your performance objectives.
  • Architecture and design guidelines enable you to engineer for performance from an early stage.
  • A performance and scalability frame enables you to organize and prioritize performance issues.
  • Measuring lets you see whether your application is trending toward or away from the performance objectives.
  • Providing clear role segmentation helps architects, developers, testers, and administrators understand their responsibilities within the application life cycle. Different parts of this guide map to the various stages of the product development life cycle and to the various roles.

Set Objectives and Measure

Performance must be given due consideration from the beginning. If you determine that performance is important, then you must consider it throughout the life cycle. This guide promotes a structured and repeatable approach to performance that you can embed into your application life cycle. This enables you to mitigate performance risk at the start of your project. You work toward defined performance objectives, design for performance, and test, measure, and tune performance throughout the life cycle. This approach is summarized in Figure 1.2.


Figure 1.2: Performance approach

Set Performance Objectives

Your project goals must include measurable performance objectives. From the very beginning, design so that you are likely to meet those objectives, but do not over-research your design; use the planning phase to manage project risk to the right level for your project. To identify your objectives, ask questions such as the following:

  • How fast does your application need to run?
  • At what point does the performance of your application become unacceptable?
  • How much CPU or memory can your application consume?

Your answers to these questions are your performance objectives. They establish a baseline for your application's performance and help you determine whether the application is fast enough.

Performance objectives are usually specified in terms of the following:

  • Response time. Response time is the amount of time that it takes for a server to respond to a request.
  • Throughput. Throughput is the number of requests that can be served by your application per unit time. Throughput is frequently measured as requests or logical transactions per second.
  • Resource utilization. Resource utilization is the measure of how much server and network resources are consumed by your application. Resources include CPU, memory, disk I/O, and network I/O.
  • Workload. Workload includes the total number of users and concurrent active users, data volumes, and transaction volumes.

You can identify resource costs on a per-scenario basis. Scenarios might include browsing a product catalog, adding items to a shopping cart, or placing an order. You can measure resource costs for a certain user load, or you can average resource costs when you test the application by using a certain workload profile. A workload profile consists of a representative mix of clients performing various operations.
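
To make the objectives concrete and testable, you can record them alongside the workload mix. The following sketch shows one possible shape; the scenario names, objective values, and workload shares are hypothetical examples, not recommendations.

```csharp
// Illustrative sketch: recording per-scenario performance objectives and
// a workload profile. All names and numbers here are hypothetical.
public class ScenarioObjective
{
    public string Scenario;            // for example, "Place order"
    public double MaxResponseTimeSec;  // response time objective
    public double MinRequestsPerSec;   // throughput objective
    public double MaxCpuPercent;       // resource utilization objective
    public double WorkloadShare;       // fraction of the workload mix

    public ScenarioObjective(string scenario, double maxResponseTimeSec,
                             double minRequestsPerSec, double maxCpuPercent,
                             double workloadShare)
    {
        Scenario = scenario;
        MaxResponseTimeSec = maxResponseTimeSec;
        MinRequestsPerSec = minRequestsPerSec;
        MaxCpuPercent = maxCpuPercent;
        WorkloadShare = workloadShare;
    }
}

public class WorkloadProfile
{
    // A representative mix: 80 percent browsing, 15 percent cart
    // operations, and 5 percent order placement.
    public static readonly ScenarioObjective[] Scenarios =
    {
        new ScenarioObjective("Browse product catalog", 3.0, 100.0, 75.0, 0.80),
        new ScenarioObjective("Add item to shopping cart", 3.0, 20.0, 75.0, 0.15),
        new ScenarioObjective("Place order", 5.0, 5.0, 75.0, 0.05)
    };
}
```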

Metrics

Metrics are the criteria you use to measure your scenarios against your performance objectives. For example, you might use response time, throughput, and resource utilization as your metrics. The performance objective for each metric is the value that is acceptable. You match the actual value of the metrics to your objectives to verify that you are meeting, exceeding, or failing to meet your performance objectives.
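
For example, the following sketch samples a single system-level metric (processor utilization, by using the standard Windows performance counter) and compares it to an objective. The 75 percent objective is an example value, not a recommendation.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Illustrative sketch: sample a system-level metric and check it against
// a performance objective. The 75 percent objective is an example value.
public class MetricCheck
{
    public static void Main()
    {
        const double cpuObjectivePercent = 75.0;

        PerformanceCounter cpu =
            new PerformanceCounter("Processor", "% Processor Time", "_Total");

        cpu.NextValue();        // the first sample always reads 0; prime it
        Thread.Sleep(1000);     // sample over a one-second interval
        float observed = cpu.NextValue();

        Console.WriteLine("CPU: {0:F1}% (objective: <= {1}%): {2}",
            observed, cpuObjectivePercent,
            observed <= cpuObjectivePercent
                ? "meets objective" : "exceeds objective");
    }
}
```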

Know Your Budgets

Your budgets represent the maximum cost that a particular feature or unit in your project can afford to pay against each of your key performance objectives. Do not confuse budgets with performance objectives. For example, you might have a budget of a 10-second response time; if you go past that budget, your software has failed. However, you should set a performance objective of three to five seconds to leave room for increased load from other sources. Also, you need to spread your budget among the different functions involved with processing a request. For example, to achieve your 10-second response time, how much time can you afford for accessing the database, rendering results, or accessing a downstream Web service?

Budgets are specified in terms of execution time and resource utilization, but they also include less tangible factors such as project resource costs. A budget is likely to include the following:

  • Network. Network considerations include bandwidth.
  • Hardware. Hardware considerations include items such as servers, memory, and CPUs.
  • Resource dependencies. Resource dependency considerations include items such as the number of available database connections and Web service connections.
  • Shared resources. Shared resource considerations include items such as the amount of bandwidth you have, the amount of CPU you get if you share a server with other applications, and the amount of memory you get.
  • Project resources. From a project perspective, budgets also include constraints such as available time and cost.

You need to measure to find out if your application operates within its budget allocation. The budgeting exercise actually helps you determine if you can realistically meet your performance objectives.
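
As an illustration of the budgeting exercise, the following sketch spreads a 10-second response time budget across the functions that process a request and flags any function whose measured time exceeds its allocation. The function names, allocations, and measured times are invented for the example.

```csharp
using System;
using System.Collections;

// Illustrative sketch: spread a 10-second response time budget across
// the functions that process a request, and then compare measured times
// to each allocation. All names and values here are hypothetical.
public class BudgetCheck
{
    public static void Main()
    {
        // Allocations must not exceed the overall 10-second budget.
        Hashtable budgetSec = new Hashtable();
        budgetSec["Database access"] = 4.0;
        budgetSec["Downstream Web service"] = 3.0;
        budgetSec["Rendering results"] = 2.0;
        // One second is left as headroom.

        // Measured times would come from instrumentation or tests.
        Hashtable measuredSec = new Hashtable();
        measuredSec["Database access"] = 4.6;         // over budget
        measuredSec["Downstream Web service"] = 2.1;  // within budget
        measuredSec["Rendering results"] = 1.2;       // within budget

        foreach (string phase in budgetSec.Keys)
        {
            double budget = (double)budgetSec[phase];
            double actual = (double)measuredSec[phase];
            Console.WriteLine("{0}: {1:F1}s of {2:F1}s budget: {3}",
                phase, actual, budget,
                actual <= budget ? "OK" : "OVER BUDGET");
        }
    }
}
```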

Design for Performance

Many, if not most, performance problems are introduced by specific architecture, design, and technology choices that you make very early in the development cycle, often in the design stage.

Give Performance Due Consideration from the Start

"If you're very lucky, performance problems can be fixed after the fact. But, as often as not, it will take a great deal of effort to get your code to where it needs to be for acceptable performance. This is a very bad trap to fall into. At its worst, you'll be faced with a memorable and sometimes job-ending quote: 'This will never work. You're going to have to start all over.'"

Rico Mariani, Architect, Microsoft

Performance and Scalability Frame

This guide uses a performance and scalability frame to help you organize and prioritize performance and scalability issues. Table 1.2 shows the categories used in this guide.

Table 1.2: Performance Categories

| Category | Key Considerations |
| --- | --- |
| Coupling and cohesion | Loose coupling and high cohesion |
| Communication | Transport mechanism, boundaries, remote interface design, round trips, serialization, bandwidth |
| Concurrency | Transactions, locks, threading, queuing |
| Resource management | Allocating, creating, destroying, pooling |
| Caching | Per user, application-wide, data volatility |
| State management | Per user, application-wide, persistence, location |
| Data structures and algorithms | Choice of algorithm; arrays versus collections |

The categories in the frame are a prioritized set of technology-agnostic common denominators that are pervasive across applications. You can use the categories as evaluation criteria for the areas where performance and scalability decisions can have the largest impact.

Measure

Good engineering requires you to understand your raw materials. You must understand the key properties of your framework, your processor, and your target system. Perform early research to identify the cost of particular services and features. If necessary, build prototypes to verify the cost of specific features.

Your project schedules should allow for contingencies and include time in case you need to change your approach. Do not be afraid to cancel features or approaches that are clearly not going to work within your specified objectives.

Know the Cost

When you engineer solutions, you need to know the cost of your materials. You know the cost by measuring under the appropriate workload. If the technology, application programming interface (API), or library does not meet your performance objectives, do not use it. Getting the best performance from your platform is often intrinsically tied to your knowledge of the platform. While this guide provides a great deal of platform knowledge, it is no replacement for measuring and determining the actual cost for your scenarios.
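
The following minimal harness shows one way to measure cost: time a candidate operation over many iterations, and then derive its per-call cost and throughput. The operation shown (building a string by using StringBuilder) is only a stand-in for whatever API or library you are evaluating.

```csharp
using System;
using System.Text;

// Illustrative measurement harness: estimate the per-call cost and
// throughput of an operation by timing it over many iterations. The
// StringBuilder work is a placeholder for the API under evaluation.
public class CostProbe
{
    public static void Main()
    {
        const int iterations = 1000000;

        DateTime start = DateTime.Now;
        for (int i = 0; i < iterations; i++)
        {
            StringBuilder sb = new StringBuilder();
            sb.Append("order-");
            sb.Append(i);
            string id = sb.ToString();   // result discarded; timing only
        }
        TimeSpan elapsed = DateTime.Now - start;

        Console.WriteLine("Per call: {0:F5} ms; throughput: {1:F0} calls/sec",
            elapsed.TotalMilliseconds / iterations,
            iterations / elapsed.TotalSeconds);
    }
}
```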

Validate Assumptions

You need to validate your assumptions. The further you are in your project's life cycle, the greater the accuracy of the validation. Early on, validation is based on available benchmarks and prototype code, or on just proof-of-concept code. Later, you can measure the actual code as your application develops.

Scenarios

Scenarios are important from a performance perspective because they help you to identify priorities and to define and apply your workloads. If you have documented use cases or user stories, use them to help you define your scenarios. Critical scenarios may have specific performance objectives, or they might affect other critical scenarios.

For more information about scenarios, see "Step 1 – Identify Key Scenarios" in Chapter 2, "Performance Modeling."

Life Cycle

This guide uses a life cycle–based approach to performance and provides guidance that applies to all of the roles involved in the life cycle, including architects, designers, developers, testers, and administrators. Regardless of your chosen development process or methodology, Figure 1.3 shows how the guidance applies to the broad categories associated with an application life cycle.


Figure 1.3: Life cycle mapping

Performance is integrated into the main stages shown in Figure 1.3 as follows:

  • Gathering requirements. You start to define performance objectives, workflow, and key scenarios. You begin to consider workloads and estimated volumes for each scenario. You begin the performance modeling process at this stage by using early prototyping, if necessary.
  • Design. Working within your architectural constraints, you start to generate specifications for the construction of code. Design decisions should be based on proven principles and patterns. Your design should be reviewed from a performance perspective. Measuring should continue throughout the life cycle, starting with the design phase.
  • Development. Start reviewing your code early in the implementation phase to identify inefficient coding practices that could lead to performance bottlenecks. You can start to capture real metrics to validate the assumptions made in the design phase. Be careful to maintain a balanced approach during development; micro-optimization at an early stage is not likely to be helpful.
  • Testing. Load and stress testing are used to generate metrics and to verify application behavior and performance under normal and peak load conditions (a minimal load-generation sketch follows this list).
  • Deployment. During the deployment phase, you validate your model by using production metrics. You can validate workload estimates, resource utilization levels, response time, and throughput.
  • Maintenance. You should continue to measure and monitor when your application is deployed in the production environment. Changes that may affect system performance include increased user loads, deployment of new applications on shared infrastructure, system software revisions, and updates to your application to provide enhanced or new functionality. Use your performance metrics to guide your capacity and scaling plans.
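
The following sketch shows the shape of a minimal load test: several client threads issue requests against a placeholder URL, and the harness derives average response time and throughput. Real load and stress tests use a dedicated tool and a realistic workload profile; the URL and the load numbers here are hypothetical, and error handling is omitted for brevity.

```csharp
using System;
using System.Net;
using System.Threading;

// Illustrative load-generation sketch: client threads issue requests
// against a placeholder URL and the harness derives average response
// time and throughput. Error handling is omitted for brevity.
public class MiniLoadTest
{
    const string Url = "http://localhost/app/catalog.aspx"; // placeholder
    const int Clients = 10;
    const int RequestsPerClient = 20;

    static double totalSeconds = 0;
    static int completed = 0;
    static object gate = new object();

    static void ClientLoop()
    {
        for (int i = 0; i < RequestsPerClient; i++)
        {
            DateTime start = DateTime.Now;
            WebRequest request = WebRequest.Create(Url);
            using (WebResponse response = request.GetResponse()) { }
            double seconds = (DateTime.Now - start).TotalSeconds;

            lock (gate)
            {
                totalSeconds += seconds;
                completed++;
            }
        }
    }

    public static void Main()
    {
        DateTime testStart = DateTime.Now;
        Thread[] threads = new Thread[Clients];
        for (int i = 0; i < Clients; i++)
        {
            threads[i] = new Thread(new ThreadStart(ClientLoop));
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        double wall = (DateTime.Now - testStart).TotalSeconds;
        Console.WriteLine("Avg response: {0:F3}s; throughput: {1:F1} req/sec",
            totalSeconds / completed, completed / wall);
    }
}
```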

For more information about ownership of the tasks by architects, administrators, developers, and testers, see "Who Does What?" in "Fast Track – A Guide for Getting Started and Applying the Guidance."

Where to Go from Here

This section outlines the parts of this guide that are directly relevant to specific roles:

  • Architects and lead developers. Architects and lead developers should start by reading Part II, "Designing for Performance," to learn about principles and best practice design guidelines. They should also read Chapter 2, "Performance Modeling," and they should use the prescribed performance modeling process to help assess design choices before committing to a solution.
  • Developers. Developers should read the in-depth technical guidance in Part III, "Application Performance and Scalability," to help design and implement efficient code.
  • Testers. Testers should read the chapters in Part V, "Measuring, Testing, and Tuning," for guidance on how to load, stress, and capacity test applications.
  • Administrators. Administrators should use the tuning process and techniques described in Part V, "Measuring, Testing, and Tuning," to tune performance with appropriate application, platform, and system configuration.
  • Performance analysts. Performance analysts should use the whole guide, and specifically the deep technical information on the Microsoft® .NET Framework technologies, to understand performance characteristics and to determine the cost of various technologies. This helps them analyze how applications that fail to meet their performance objectives can be improved.

Terms You Need to Know

Table 1.3 explains the main terms and concepts used throughout this guide.

Table 1.3: Terms and Concepts

| Term/Concept | Description |
| --- | --- |
| Performance | Performance is concerned with achieving response times, throughput, and resource utilization levels that meet your performance objectives. |
| Scalability | Scalability refers to the ability to handle additional workload, without adversely affecting performance, by adding resources such as CPU, memory, and storage capacity. |
| Throughput | Throughput is the number of requests that can be served by your application per unit time. Throughput varies depending on the load and is typically measured in terms of requests per second. |
| Resource utilization | Resource utilization is the cost in terms of system resources. The primary resources are CPU, memory, disk I/O, and network I/O. |
| Latency | Server latency is the time the server takes to complete the execution of a request; it does not include network latency, which is the additional time that it takes for a request and a response to cross a network. Client latency is the time that it takes for a request to reach a server and for the response to travel back. |
| Performance objectives | Performance objectives are usually specified in terms of response times, throughput (transactions per second), and resource utilization levels. Resource utilization levels include the amount of CPU capacity, memory, disk I/O, and network I/O that your application consumes. |
| Metrics | Metrics are the actual measurements obtained by running performance tests. They include system-related metrics such as CPU, memory, disk I/O, and network I/O, as well as application-specific metrics such as performance counters and timing data. |
| Performance budgets | Performance budgets are your constraints. They specify the amount of resources that you can use for specific scenarios and operations and still be successful. |
| Scenarios | A scenario is a sequence of steps in your application. It can represent a use case or a business function such as searching a product catalog, adding an item to a shopping cart, or placing an order. |
| Workload | Workload is typically derived from marketing data. It includes the total number of users, concurrent active users, data volumes, and transaction volumes, along with the transaction mix. For performance modeling, you associate a workload with an individual scenario. |

Summary

Performance and scalability mean different things to different people, but performance and scalability are fundamentally about meeting your objectives. Your objectives state how long particular operations must take and how many resources it is acceptable for those operations to consume under varying load levels.

The conventional approach to performance is to ignore it until deployment time. However, many, if not most, performance problems are introduced by specific architecture, design, and technology choices that you make very early in the development cycle. After the choices are made and the application is built, these problems are very difficult and expensive to fix. This guide promotes a holistic, life cycle–based approach to performance where you engineer for performance from the early stages of the design phase throughout development, testing, and deployment.

The engineering approach revolves around the principle of setting objectives and measuring. When you measure performance throughout the life cycle, you know whether you are trending toward your target objectives or away from them. A key tool to help you with the performance process is performance modeling. Performance modeling provides a structured and repeatable discipline for modeling the performance characteristics of your software. Throughout your planning, a balanced approach is necessary. It is unwise to spend your time optimizing tiny details until you have a clear understanding of the bigger picture. A risk management–based approach helps you decide how deep to go into any given area and helps you decide the point at which further analysis is premature.


© Microsoft Corporation. All rights reserved.