Methods and Formulas Used to Determine Server Capacity

You can understand a server's workload and capacity by determining the kinds of tasks carried out on that server. The performance statistics that are calculated by System Monitor reveal the effects of those tasks. You can use these statistics with a number of standard mathematical formulas to help determine server size and plan for capacity and growth.

Basic Model of System Capacity

Three variables form the basic model of system capacity:

  • Observation time (T), the amount of time that the server is monitored for activity

  • Busy time (B), the amount of time that the server was active during the observation time

  • Completions (C), the number of transactions completed during the observation period

With these three variables, you can calculate six significant values, described in Table 9.4, that are used to develop a capacity planning model.

Table 9.4 Capacity Planning Data Formulas




CPU Utilization

The percentage of CPU capacity used during a specific period of time.

U = B/T

Average transaction throughput of the system

The average number of transactions completed per second during the observation period.

X = C/T

Average service time

The average time to complete a transaction.

S = B/C

Transaction capacity of the server

The maximum number of transactions per second that the server can handle.

Cp = 1/S

Average queue length

The average number of transactions in queue.

Q = U/(1-U)

Average response time

The average time to respond to a transaction.

R = (Q×S)+S
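The formulas in Table 9.4 can be collected into a small Python helper. This is an illustrative sketch; the function and variable names are not part of SMS or System Monitor:

```python
def capacity_model(T, B, C):
    """Compute the six capacity planning values from observation
    time T (sec), busy time B (sec), and completions C."""
    U = B / T         # CPU utilization (fraction of T the server was busy)
    X = C / T         # average transaction throughput (transactions/sec)
    S = B / C         # average service time per transaction (sec)
    Cp = 1 / S        # transaction capacity (transactions/sec)
    Q = U / (1 - U)   # average queue length (transactions)
    R = (Q * S) + S   # average response time (sec)
    return {"U": U, "X": X, "S": S, "Cp": Cp, "Q": Q, "R": R}
```

Calling `capacity_model(60, 48, 90)` reproduces the example that follows in Table 9.5.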

Here is an example of how to use these formulas to size a server. Suppose that you observe the server for 60 seconds (T), during which time there are 90 completed transactions (C), and the server is actually busy processing that workload for 48 seconds (B). Table 9.5 shows the resulting data values using this information.

Table 9.5 Capacity Planning Resource Formula Results




CPU Utilization

U = B/T

48/60 = 80 percent utilization

Average transaction throughput of the system

X = C/T

90/60 = 1.5 transactions/sec

Average service time

S = B/C

48/90 ≈ .533 seconds

Transaction capacity of the server

Cp = 1/S

1/.533 = 1.875 transactions/sec

Average queue length

Q = U/(1-U)

.8/(1 - .8) = 4 transactions

Average response time

R = (Q×S)+S

(4 × .533) + .533 ≈ 2.67 seconds

The CPU was 80 percent utilized and handled an average of 1.5 transactions per second. The average service time for these transactions was .533 seconds, and transactions were completed in an average time of 2.67 seconds. On average, there were four transactions waiting to be processed at any given point in time during the observation period, and the server had the capacity to process 1.875 transactions per second.

If the SLA states that during any given 60-second period, the server should not utilize more than 85 percent of the processor and should be capable of handling at least 100 transactions, the calculated values shown in Table 9.5 indicate that the SLA is being met. If the SLA instead states that during any 60-second period, the server should not utilize more than 75 percent of the processor or should not have more than three transactions waiting in queue, then the calculated values shown in Table 9.5 indicate that the server cannot perform within the limits of the SLA and probably must be upgraded.
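An SLA check like the ones described above can be sketched as follows. The function name and threshold parameters are illustrative, not part of any SMS tool:

```python
def sla_met(U, Cp, Q, max_util, min_tx_per_minute=None, max_queue=None):
    """Return True if measured utilization U, transaction capacity Cp
    (transactions/sec), and queue length Q satisfy the SLA limits.
    Limits left as None are not checked."""
    if U > max_util:
        return False
    # Capacity per 60-second period must meet the transaction minimum.
    if min_tx_per_minute is not None and Cp * 60 < min_tx_per_minute:
        return False
    if max_queue is not None and Q > max_queue:
        return False
    return True
```

With the Table 9.5 values, `sla_met(0.80, 1.875, 4, max_util=0.85, min_tx_per_minute=100)` returns True (112.5 transactions of capacity per minute, 80 percent utilization), while the stricter `sla_met(0.80, 1.875, 4, max_util=0.75, max_queue=3)` returns False.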

Use these formulas as tools to help you to determine current server performance levels, to develop acceptable and reasonable SLAs given current and expected server hardware configurations, and to identify where upgrades or new equipment is necessary.

End-to-End Response Time

When you consider response time, do not think only in terms of a single server's response time and performance; think of all the data components that make up the service chain for that transaction. The first step in determining end-to-end response time, then, is identifying the data components that make up the service chain.

For example, consider that information flows from an SMS client to a CAP or management point, and then to the site server. The service chain that emerges from this flow has five data components associated with it as shown in Figure 9.2:

  • Client Q, R, and S values

  • Network connection between client and CAP or management point Q, R, and S values

  • CAP or management point Q, R, and S values

  • Network connection between CAP or management point and site server Q, R, and S values

  • Site server Q, R, and S values

Figure 9.2 A service chain and the computation of end-to-end response time


The end-to-end response time, then, is the sum of each of the R values for each component in the service chain. Use this information to develop SLAs for service chain performance, and to determine when there are performance aberrations.
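Because the end-to-end figure is simply the sum of each component's R value, the computation is a one-liner. The component names and R values below are hypothetical, chosen only to mirror the five-component service chain in Figure 9.2:

```python
def end_to_end_response(components):
    """Sum the R (response time) values, in seconds, of each
    component in the service chain."""
    return sum(r for _name, r in components)

# Hypothetical R values for the service chain of Figure 9.2.
chain = [
    ("client", 0.4),
    ("client-to-CAP network", 0.2),
    ("CAP or management point", 0.9),
    ("CAP-to-site-server network", 0.3),
    ("site server", 1.2),
]
```

Summing these illustrative values gives an end-to-end response time of about 3.0 seconds; comparing that total against the SLA target is what flags a performance aberration.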

There are no standard metrics for SMS performance. Your organization might want to consult its SLAs and perform a cost-to-benefit analysis to determine how fast the SMS site servers must run. Your organization might have time requirements. For example, mission-critical applications might require updating on 95 percent of desktops in an eight-hour period. Another SLA might state that critical virus signature update files must be distributed to all desktops within a two-hour period.

After running a pilot project and discovering the cost of distributing the package to all desktops on the network in four hours, you might compromise on a reduced hardware configuration and accept a window of five hours to complete the distribution. In general, faster response times require more expensive hardware, and slower acceptable response times allow for less expensive hardware.

Because many SMS service requests come in surges, most SMS sites have service request backlogs that last for at least a few minutes. The two most common surges occur during the user logon cycle and when you send package advertisements.

While you experiment to find the least expensive hardware configuration that meets your needs, consider future growth requirements and the potential for change, and monitor the SMS site for backlogs. If a site is backlogged most of the day and catches up only between 3:00 A.M. and 4:00 A.M., there is a risk that the site cannot catch up if the weekly load increases. Plan for extra capacity so that you can quickly meet unexpected software distribution or other feature demands. Also, as SMS users and administrators become more familiar with SMS, their usage levels typically increase.

Determining Load Signatures

The combination of business objectives and operational styles in every organization creates unique load signatures. However, if an organization has ten remote offices with the same number of workers, the same software, and the same hardware, and you manage them all similarly, then they all might have a similar load signature. Grouping computers with similar load signatures can reduce planning time.

By determining the load signature of servers in the SMS site, you can plan for an appropriate hardware component capacity. Then, by changing hardware capacity, you can increase or decrease the responsiveness of SMS and the time required to accomplish specific tasks. The load signature is determined by several factors, including:

  • Number of optional SMS features installed and in use on the computer

  • Location of site server in the SMS hierarchy (whether it communicates with parent or child sites)

  • Number of objects in the site

  • Size of objects being processed

  • Frequency of scheduled events

  • Frequency of feature use

To successfully determine server sizes for an SMS hierarchy:

  1. Define the load signature for each site component server.

  2. Determine throughput requirements using the formulas documented in this section.

  3. Use the throughput requirements to estimate hardware requirements.

  4. Use the hardware requirements to construct sample SMS configurations to test in your isolated test lab and later in the pilot project.
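Assuming each server's load signature has been reduced to a (T, B, C) observation, steps 1 through 3 above can be sketched as follows. The function name, site names, and threshold are illustrative:

```python
def size_servers(observations, min_tx_per_sec):
    """Given per-server load signatures as (T, B, C) observations,
    compute utilization and transaction capacity, and flag servers
    whose capacity falls below the required throughput."""
    results = {}
    for name, (T, B, C) in observations.items():
        U = B / T       # CPU utilization
        Cp = C / B      # transaction capacity (1/S, where S = B/C)
        results[name] = {
            "utilization": U,
            "capacity": Cp,
            "needs_upgrade": Cp < min_tx_per_sec,
        }
    return results
```

For example, with `{"site_a": (60, 48, 90), "site_b": (60, 57, 60)}` and a requirement of 1.5 transactions per second, site_a (capacity 1.875) passes while site_b (capacity of about 1.05) is flagged for upgrade.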

Testing your hardware configuration and conducting a successful pilot project helps ensure that your organization's deployment progresses smoothly, because the deployment itself is based on site designs customized for your organization's data and tested in your environment.
