Managing Capacity in Distributed MS Windows NT-Based Systems

Updated: January 1, 1998

Abstract

A shift in computing architectures is upon us. The move to true distributed architectures, for both systems and applications, has begun. The Microsoft Windows NT operating system is one of the fundamental technologies enabling this shift. This white paper describes the changing architectures, the current approaches to managing capacity, and a new solution called active measurement that is particularly well-suited to capacity management in distributed systems.

This document contains the following sections:

Introduction

Changing Architectures
Current Approaches to Managing Capacity

Terms
Indirection

The Active Measurement Solution
The Process

Step 1: Define objectives
Step 2: Inventory existing infrastructure
Step 3: Determine requirements
Step 4: Assess existing infrastructure
Step 5: Evaluate new technology
Step 6: Deploy and validate
Step 7: Monitor deployment

Baselining

Example baseline
Which measures?
Role of response time measurement

Dynameasure, an Active Measurement Product

Manager
Client Components
Advanced Features

Dynameasure availability
Summary

Introduction

A shift in computing architectures is upon us. The move to true distributed architectures, for both systems and applications, has begun. The Microsoft Windows NT operating system is one of the fundamental technologies enabling this shift.

There are undeniable benefits to the new architectures:

  • Individual components are much easier to use.

  • Component-based architectures are more flexible and, ultimately, more scalable.

  • Capacity can be deployed where needed.

  • Information workers are increasingly empowered and productive, and organizations are more competitive.

However, the greater aggregate power and flexibility can lead to greater management complexity. The problems stemming from mismatched capacity have been with us since computing began. In every organization, users seem to complain daily that systems are too slow or simply broken. How can we better plan and manage computing capacity to avoid constant problems?

This paper describes the changing architectures, the current approaches to managing capacity, and a new solution called active measurement that is particularly well-suited to capacity management in distributed systems.

Changing Architectures

For the first time in the history of computing, a true architectural shift is beginning. This is the conversion from centralized computing to distributed computing. To understand the impact this change has on capacity management, it's necessary to examine the natural divisions in architectures.

Figure 1: Mainframe environment; vertical partitions bound related applications and hardware

Mainframe environments are vertically partitioned. That is, it's possible to separate application/hardware sets by drawing vertical lines through a representation of a mainframe-based architecture. Figure 1 shows the use of vertical partitioning to separate related applications and hardware. Each mainframe has been a world unto itself, with all capacity issues and factors contained within easily recognized boundaries.

The vertically partitioned model has largely held true for the mainframe's successors, mid-range and open systems. These environments usually have well-defined mappings between applications and hardware, and relatively little resource sharing.

Vertically partitioned environments, with well-defined boundaries between resource sets and tight links between applications and hardware, are relatively easy to measure. This is changing with distributed systems, which by their nature can't be effectively viewed through vertical partitions; they are better managed with horizontal partitioning.

Figure 2: Mainframe meltdown

Figure 2 illustrates the changing architecture. The mainframe represents a valuable, identifiable entity that virtually defines a computing architecture. Distributed architectures, built out of many components (servers, links, network devices) at multiple locations, are equally valuable, but they can't be viewed in the same way as the mainframe. Yet the aggregate investment in a distributed architecture can be very great, and it's surely worth identifying this new computing entity so that it can be effectively managed. In fact, the distributed architecture can be viewed as a discrete entity by adopting a horizontal view.

Fortunately, the horizontal view is also consistent with other elements of the distributed architecture and the contexts in which these architectures are constructed. Besides the changing hardware architectures, changes are occurring in applications and in the technology markets. A complete picture has to factor in all of these changes.

Applications are also shifting away from vertical partitioning. A number of changes in the ways applications are built, deployed, and used impact capacity management:

  • Applications, particularly Internet-based applications, are crossing organizational boundaries.

  • Applications are being developed and deployed as collections of components.

  • More types of applications (SQL, file, Web, and e-mail) are being deployed on a broader scale.

The shifting nature of the application mix in use at an organization leads to treating the set of applications as a single, large virtual application, best viewed in a horizontal manner, consistent with the horizontal view of the supporting hardware and system components.

Factors external to the information architecture, but that affect the architecture, are important as well. The increasingly rapid pace of technological innovation, the increasing number of choices in both technologies and suppliers, and the generally increasing number of demands placed on information systems all lead to a situation in which each organization's information architecture is increasingly unique and in continuous flux.

Figure 3: The layered architecture of distributed systems

Figure 3 illustrates the resulting picture, one that takes all factors affecting the distributed architecture into account. This picture, which recognizes the horizontal nature of distributed systems, is the basis for understanding a new approach to capacity management.

Current Approaches to Managing Capacity

Terms

One problem with the topic of capacity is that its terms are often used in vague, confusing ways. The following key definitions are presented to provide a more concrete basis for discussion and to differentiate measures that refer to quite different quantities.

Infrastructure

The computing infrastructure is the virtual platform or computer on which distributed applications execute. Thus, the term "infrastructure" refers to all hardware and software components below the level of the applications. The infrastructure hardware includes servers, networks (devices and connections), and clients. The infrastructure software includes operating systems (server and client), communications stacks, and service engines (database, electronic mail, file, Web, and so on).

Capacity

Capacity is a measure of the ability of an infrastructure to process work in response to user requests. This processing takes a variety of forms: executing program instructions in a CPU, reading data from disk, transmitting data through a network, and so on. Capacity measures are usually in terms of throughput, that is, units of user work (e-mail messages sent, SQL transactions executed, files copied, and so forth) per unit time. Note that this definition is actually of "processing capacity," as separate from "storage capacity."

Performance

Performance refers to the throughput achieved by some application (transactions executed per second, for example) on some infrastructure. Application performance is thus separable from infrastructure capacity. Application performance depends, to varying degrees, on two main factors: the processing capacity of the infrastructure, and the design of the application.

Utilization

Utilization is a measure of the extent to which some hardware resource is consumed by the various requests that are directed against the resource. Utilization is best expressed as a percentage. When a resource is 100 percent utilized, it has no remaining capacity. Percentage measures are not always easily arrived at; disks, for instance, can be difficult to measure precisely in utilization percentages.
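
To make these definitions concrete, the following short sketch (in Python) computes throughput and utilization from hypothetical measurement figures. The workload names and numbers are assumptions for illustration only.

    # Illustrative only: computing throughput and utilization from
    # hypothetical measurement data (names and numbers are made up).

    # Throughput: units of user work completed per unit of time.
    transactions_completed = 9_000     # e.g. SQL transactions in a session
    measurement_seconds = 600          # a 10-minute measurement window
    throughput_tps = transactions_completed / measurement_seconds
    print(f"Throughput: {throughput_tps:.1f} transactions/sec")

    # Utilization: fraction of a resource consumed by the offered load,
    # expressed as a percentage (100% means no remaining capacity).
    cpu_busy_seconds = 420             # time the CPU spent doing work
    utilization_pct = 100.0 * cpu_busy_seconds / measurement_seconds
    print(f"CPU utilization: {utilization_pct:.0f}%")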

Indirection

Current approaches to capacity measurement fall into two classes: application-specific and application-ignorant. Application-specific approaches include benchmarks and application load testing. Application-ignorant approaches include monitoring, network modeling, and network load testing. All of these approaches fail to address capacity effectively because they are indirect, trying to frame capacity through application performance or resource utilization, rather than through a direct measure of capacity. What's worse, current techniques are usually too slow and too complex, so they tend to see little use.

Application-specific approaches rely on testing an infrastructure using a workload that reflects the behavior of a single application. Benchmarking or application load testing can be useful in estimating application performance and ensuring the quality of the finished application. However, these techniques are not suitable for general capacity management because they are application-specific. These methods miss the forest for the trees. Attempting to view and manage a distributed infrastructure (the "forest") through the lens of individual applications (the "trees") is to attempt to manage the infrastructure from multiple narrow contexts. This approach has not worked well, even when only a handful of monolithic applications were present. It breaks down completely when one considers a future with far greater numbers of applications and components.

Application-ignorant approaches have the opposite problem. These approaches tend to work below the level of the applications, focusing on the ground below the trees and missing both trees and forest. Monitoring, a passive measurement approach, attempts to understand capacity by looking at physical resources, treating each as a separate entity. This approach fails to capture the resource sets whose behavior actually determines effective capacity.

Both application-specific and application-ignorant approaches have a number of drawbacks in common. The most critical of these limitations is their inability to tie functions and processes together. Both of these classes of activities tend to grow out of specific disciplines (application development and network management) and do little to close the gap between these areas. Other drawbacks include:

  • Extensive knowledge may be required about the infrastructure, about models, about mathematics, or about programming, sharply limiting these approaches from broad use.

  • There's a significant mapping problem: low-level statistics and measures, abstract models, and mathematics do not relate directly to user activity.

  • Monitors may have to be operated for months before useful knowledge is developed; this kind of time requirement crosses evolutionary steps in the infrastructure.

  • Models and benchmarks must be tested and validated even more rigorously than applications; this is very costly.

The Active Measurement Solution

Active measurement is intended to satisfy the following requirements:

  • Ease of operation

  • A good fit to distributed capacity

  • Proactive operation

Figure 4 shows active measurement fitting into the layered view of distributed architectures. Active measurement tempers more theoretical approaches with a focus on the practical. Active measurement is a form of controlled stress-testing that provides four main high-level benefits:

  • Active measurement fits distributed architectures well, allowing easy identification of the infrastructure and direct measurement of capacity in user-oriented terms.

  • Fast and simple, active measurement supports the broad use of Windows NT-based systems because it is similarly accessible to a broad user base.

  • By tying together different functions and processes, active measurement eliminates the gaps caused by difficulties in translation with other methods.

  • The use of controlled stress allows an accelerated form of measurement that quickly delivers quantitative results.

Figure 4: Active Measurement in the Distributed Scheme

The Process

This section provides a brief overview of the application of active measurement to manage infrastructure capacity and reliability.

The process of applying active measurement to the problems of managing capacity and reliability in a distributed computing infrastructure can be broken into a number of steps. Each step is designed to offer independent value. Like the underlying infrastructure, the active measurement process can be implemented incrementally, with each additional step combining with an existing step to further increase the total value.

At the highest level, the process steps outlined here allow for the management of infrastructure capacity and reliability in two ways: 1) by proactively managing existing infrastructure, and 2) by ensuring that infrastructure changes are properly designed and validated.

Figure 5: Active measurement process

Step 1: Define objectives

Defining objectives is a critical first step. Objectives should be in terms of infrastructure requirements. Examples include:

  • Provide infrastructure needed to support the corporate adoption of Microsoft Exchange Server.

  • Upgrade a server operating system to Windows NT 4.0.

  • Provide dial-up access to the sales force.

  • Deploy a new, unified financial-management application suite.

  • Support the addition of image data to an existing application.

  • Improve the performance of file servers.

Objectives should be business-driven. Sometimes, however, objectives aren't direct implementations of business strategy (such as a new financial application), but are driven instead by needed changes to the existing infrastructure (such as an operating system upgrade). Even these upgrades, however, should be selected and accomplished in a manner consistent with business plans. Defining or clarifying objectives is a required step. The step may be quite simple in some cases, but a clearly stated objective, or set of objectives, is necessary to define the project.

Step 2: Inventory existing infrastructure

This step is a straightforward inventory and mapping exercise to develop an understanding of the current infrastructure. The inventory should help to identify resources that can be surveyed for reserve capacity. This step may need little effort if ongoing capacity monitoring is in place.

The first decision is to determine which elements of the current infrastructure, if any, are available to support new deployments. Generally this entails gathering information on the physical inventory and configurations of available client machines, network hardware and software (including WAN and remote links), and server systems. How much information is gathered here depends on your plans for using existing technology.

Step 3: Determine requirements

This step explores and quantifies the objective. The outputs from this step are specific infrastructure capacity and reliability requirements, preferably expressed as specific active measurements, along with confirmation that the objective is feasible. The requirements can then be used to assess the ability of existing or new infrastructure components to achieve the objective. This step should be present in most cases, since it translates objectives into quantitative requirements.

This step introduces the use of active measurement tools to implement some elements of the process steps. Two types of tool functions are needed to accomplish the active measurement approach: resource utilization monitoring and managed stress testing.

Once the objectives and the environment have been as fully characterized as possible, active measurement can be used to determine the feasibility of the objective. This phase is accomplished by inducing representative stress on representative infrastructures. The active measurement requirements output from this step are then specific stress-testing requirements with specific result requirements.

Step 4: Assess existing infrastructure

This step assesses the capacity and reliability of the existing infrastructure with respect to the requirements, using active measurement to make the assessment. The requirements from step 3 provide the contextual basis for the assessment. The extent to which this step is necessary depends on the extent to which use of existing infrastructure is being considered. In some cases, the objectives may preclude the use of existing infrastructure, and this step may be unnecessary. However, there's almost always some infrastructure component shared with new deployments, especially networks. This step serves not only to assess the extent to which new requirements can be met by the existing infrastructure, but also to identify the impact of any new deployment on existing environments.

This step requires two main uses of active measurement. In the previous step, specific active measurement requirements have been developed; these measures should be run against existing infrastructure and the results compared to the required results. This will help determine any capacity gap.

A second use of active measurement concerns the environmental impact assessment. If the existing infrastructure analysis is done in whole, or in part, in a lab, using an analogue of the existing infrastructure, then active measurement can also be used to provide the background environmental load. If the impact is to be measured by using production infrastructure, then active measurement capacity monitoring techniques can be used.
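
As an illustration of how the requirements from step 3 and the assessment results from this step might be compared, the following sketch contrasts hypothetical required throughput figures with measured baseline results to expose any capacity gap. The service names and numbers are assumptions, not output from any real measurement.

    # Hypothetical example: comparing required capacity (step 3) with
    # measured baseline capacity of the existing infrastructure (step 4).
    required_tps = {"sql": 45.0, "file": 30.0, "email": 12.0}   # assumed requirements
    measured_tps = {"sql": 38.5, "file": 33.2, "email": 12.4}   # assumed baseline results

    for service, needed in required_tps.items():
        measured = measured_tps[service]
        gap = needed - measured
        if gap > 0:
            print(f"{service}: capacity gap of {gap:.1f} units/sec "
                  f"(measured {measured:.1f}, required {needed:.1f})")
        else:
            print(f"{service}: requirement met with {-gap:.1f} units/sec to spare")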

Step 5: Evaluate new technology

This step will be used in most projects and may be quite broad. It usually serves one of two basic purposes: testing to design the components that fill the gap between existing infrastructure and requirements, or testing new technology (such as an upgrade) before deployment. Now that requirements have been established (step 3) and the contributions of existing infrastructure determined (step 4), this step finalizes the deployment design. Active measurement techniques are the essential instrument of this step. Phases of this step include:

  • Develop plans to fill the gap left by existing infrastructure.

  • Quantify upgrade effects.

  • Develop configuration plans.

  • Validate deployment design.

  • Validate change plan.

Determining and evaluating which technologies will provide the most capacity, performance, and reliability for your infrastructure is a critical and valuable step in the process. The value of this step can be recognized in properly purchasing the right technologies (not overbuying or underbuying), and in providing the proper capacity and reliability to run your organization.

There are many technologies that you may want to evaluate, including:

  • Network - You may want to know the differences between 10 megabit per second (Mbps), 100 Mbps, and ATM technologies. You may be interested in new hub, router, or switching technologies, or you may want to evaluate various WAN technologies or remote access service options.

  • Server - You may want to assess the viability of running a certain number of users on specific server configurations. You may want to evaluate the impact of multiple workloads on one server, or the impact of splitting workloads across multiple servers. You may want to understand the impact of various server configurations (RAM, disk, CPU).

  • Storage technologies - SCSI, RAID, Fibre Channel.

  • Client systems.

  • Operating system software.

  • Network software.

  • Service engines and their configuration.

Step 6: Deploy and validate

This step is strongly recommended in all but very small, low-risk deployments. It uses active measurement to ensure that all deployment components meet established baselines, and that the deployment infrastructure meets requirements for capacity and reliability. The two phases can be likened to component and integration testing. Various metrics can be applied to determine the extent of testing: number of components, complexity of the infrastructure, number of locations, extent to which deployments are remote, and so on. This step helps avoid deployment surprises.

After the new components are selected, use active measurement to acceptance-test the key elements individually for capacity and reliability. You should stress-test each infrastructure element prior to deployment. Any complex component (a server, for example) has enough parts to merit burn-in testing. Multiple servers, supposedly identical and delivered against a single purchase order, may vary significantly due to the substitution of parts from different suppliers. Consider requiring suppliers to use active measurement to conduct these tests.

After testing individual key components, you should configure your infrastructure and run active measurement tests on the collective pieces. This will in effect "blow the pipes" as a collective group to make sure that components work properly together, and that they are performing up to capacity.

This step can help you reduce expensive field failures and the diagnostic time involved in locating the problem component(s), and it provides a solid foundation for adding additional technologies and applications.
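
One way to automate the acceptance check described above is sketched below: each delivered component's measured baseline is compared against a reference baseline and flagged if it falls more than a chosen tolerance short. The tolerance and the baseline figures are assumptions chosen for illustration.

    # Illustrative acceptance check: flag delivered servers whose measured
    # baseline falls more than a chosen tolerance below the reference unit.
    REFERENCE_TPS = 52.0   # baseline of the prototype/reference server (assumed)
    TOLERANCE = 0.05       # accept units within 5% of the reference (assumed)

    delivered_baselines = {    # hypothetical per-server baseline results
        "server-01": 51.4,
        "server-02": 47.8,
        "server-03": 52.6,
    }

    for name, tps in delivered_baselines.items():
        shortfall = (REFERENCE_TPS - tps) / REFERENCE_TPS
        status = "ACCEPT" if shortfall <= TOLERANCE else "REJECT (re-test or return)"
        print(f"{name}: {tps:.1f} tps -> {status}")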

Step 7: Monitor deployment

This step goes beyond any single deployment project and calls for the use of active measurement in the ongoing monitoring of the infrastructure, both new and old. It helps ensure that the infrastructure supports all uses reliably and with adequate capacity. The capacity- and reliability-monitoring process is a proactive approach that provides early warning of developing infrastructure problems in a controlled, bandwidth-efficient form. This step both provides input into other steps that are part of specific deployment projects, and allows the ongoing monitoring of earlier deployments.

Monitoring with an active measurement tool has three main advantages over more conventional passive monitoring.

  • Active measurement joins together resource sets similar to those underlying user applications and is thus more sensitive to resource constraints that impact users.

  • Active measurement doesn't have to run all the time, but only in focused measurement sessions, so resource consumption is kept to a minimum.

  • Active measurement provides simple, user-level metrics rather than large quantities of low-level statistical data.

Start with a manageable application of active measurement by gauging the capacity of one segment of a network. Run active measurement tests at low-usage times, when the infrastructure is under light load. The goal is to establish infrastructure reliability and determine the point of maximum throughput using baselining techniques. If this is a first use of active measurement, piloting it in a lab environment can provide more freedom to experiment and to select a test suite of interest.

Once the capacity of a segment of the infrastructure has been established using active measurement, expand active measurement coverage to an additional segment or another endpoint to establish another point of comparison. Gauging the capacity of several endpoints within the infrastructure provides:

  • The ability to perform routine checks, perhaps daily, on the reliability of the selected infrastructure and connections.

  • The ability to perform routine measurements, perhaps daily, of the selected infrastructure capacity and any changes in this capacity.

  • An "early-warning" system to alert you to problems in the infrastructure ahead of user complaints or system failures.

  • The ability to rapidly isolate trouble segments and systems when problems do occur.
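
A minimal sketch of this kind of routine check and early warning follows: a scheduled off-hours measurement result is compared against the established baseline, and a warning is raised when capacity drops by more than a chosen threshold. The threshold, segment names, and figures are assumptions.

    # Illustrative early-warning check: compare a routine off-hours measurement
    # against the established baseline for each monitored segment.
    ALERT_DROP = 0.15   # warn when capacity falls more than 15% below baseline (assumed)

    baseline_tps = {"segment-a": 48.0, "segment-b": 36.5}   # established baselines
    latest_tps   = {"segment-a": 47.1, "segment-b": 29.8}   # last night's measurement

    for segment, base in baseline_tps.items():
        drop = (base - latest_tps[segment]) / base
        if drop > ALERT_DROP:
            print(f"WARNING: {segment} capacity down {drop:.0%} from baseline")
        else:
            print(f"{segment}: within {ALERT_DROP:.0%} of baseline ({drop:.0%} change)")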

Baselining

Baselining is the essential technique of active measurement. A baseline is a measure of the maximum throughput capacity of an infrastructure, and it constitutes a point of reference that can be compared to other baselines. Baselining is the technique by which baselines are obtained.

Baselines might be compared for many reasons; for example:

  • Baselines may be used to decide between two different servers. Each server would be baselined, and the baselines compared. The same approach applies to any infrastructure options, including network equipment, client systems, operating systems, or service engines.

  • Baselines may be used to detect changes in the capacity of an infrastructure over time. Baselines of the infrastructure would be taken periodically and compared.

  • Baselines may be used to measure the impact of a hardware or software upgrade. Baselines taken before and after the upgrade would be compared. In general, any configuration options, in software or hardware, can be compared with baselines.

  • Baselining can ensure that additional infrastructure deployments are burned-in and meet the capacity requirements established by the prototype deployment.

The measurement of capacity using active measurement baselining is conceptually both simple and powerful since the impact of virtually any infrastructure change can be quantified using a single, straightforward approach.

Baselining has many parallels in other disciplines. Doctors, for example, use stress tests to measure cardiovascular fitness. A treadmill stress test can detect severe problems immediately, and repeated stress testing can, through comparison of measurements over time, identify problems before they become dangerous.

Example baseline

Figure 6: Classic baseline

Baselines are frequently presented in chart form, as in Figure 6. The baseline in Figure 6 displays many of the ideal characteristics of a throughput curve that has properly measured infrastructure capacity, including:

  • The saturation point is labeled in Figure 6. This is the maximum capacity of the subject infrastructure. In this case, the saturation point shows up at the 200-user level.

  • Linear throughput growth is seen until infrastructure capacity is approached. When the number of users doubles from 50 to 100, throughput also doubles, a nice (but not always seen) match between transaction arrival and processing rates.

  • There's a gentle drop in throughput after the saturation point. At 250 and 300 users, throughput drops off, but the drop is not precipitous, suggesting that the measured infrastructure bends but doesn't break under overload conditions.

The saturation point is the main goal of the baseline exercise. The throughput achieved at the saturation point is the rated capacity of the infrastructure.

Sometimes, the initial attempt to obtain a baseline on an infrastructure doesn't find a saturation point. Figure 7 portrays a situation in which a baselining effort has failed to find a saturation point. This can be easily observed by noting that the curve never turns down. The infrastructure being tested here has more capacity than was called for. In such a case, increase the amount of stress applied during the baselining, either by increasing the number of users, the amount of data referenced, or the rate at which service requests are presented.

Figure 7: Saturation point not found

Post-saturation curves can be quite irregular, particularly if the measure being used retains a realistic dependency on network and client elements of the infrastructure.
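
As a sketch of how a saturation point might be read off a measured baseline curve, the following snippet scans a set of (users, throughput) results for the point at which throughput turns down, and reports when no saturation point is found (the Figure 7 case). The data points are invented to roughly follow the shape of Figure 6.

    # Illustrative only: locate the saturation point in a measured baseline curve.
    # The (users, throughput) pairs are invented to match the shape of Figure 6.
    baseline = [(50, 10.2), (100, 20.1), (150, 28.7), (200, 33.5),
                (250, 31.9), (300, 30.4)]

    saturation = None
    for i in range(1, len(baseline)):
        prev_users, prev_tps = baseline[i - 1]
        users, tps = baseline[i]
        if tps < prev_tps:              # throughput turned down: passed saturation
            saturation = (prev_users, prev_tps)
            break

    if saturation:
        print(f"Saturation point: {saturation[1]:.1f} units/sec at {saturation[0]} users")
    else:
        print("No saturation point found; increase the applied stress and re-run")

With these invented figures, the saturation point would be reported at 200 users, matching the labeled point in Figure 6.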

Which measures?

General-purpose infrastructure measurement should use general measures. Determine the service types of interest (SQL, file, e-mail, and so on) and select a broad-based representative of each service class.

In cases where general capacity is not the issue, the selection of a measure may be narrower. For example, suppose the question is one of application feasibility. In such a case, select a measure that reasonably represents the application type in question. However, try to avoid overly narrow measurement, since:

  • The application definition is usually not precise.

  • Applications, their use, and data change over time.

  • Infrastructure elements are usually shared, a trend that is increasing.

Role of response time measurement

Besides throughput, there is a second metric of interest, response time. Response time is a measure of the capacity experienced at the client. While throughput indicates the overall capacity of the infrastructure to process work, response time tells us how long the infrastructure took to process one unit of work. In general capacity measurement, response time is usually of secondary importance. However, in some cases, response time may be a significant issue.

Figure 8, below, shows a sample response time curve that might accompany the TPS measurement in Figure 6. Response time is charted as ART (average response time per transaction). This sample is nicely behaved, which is not always the case in reality. Here, the average transaction completes in a usually reasonable sub-0.05 seconds. Note that response time climbs more quickly once the saturation point (at 200 users) is reached; in many cases the climb is much steeper. In other cases, response time at the saturation point may already be too long for users to accept. When testing application feasibility, response time requirements may be specified; if so, feasibility might be driven as much by response time measures as by throughput measures.

Figure 8: Sample response time results
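
To make the role of a response time requirement concrete, the following minimal sketch checks hypothetical ART figures against an assumed 0.05-second target. The numbers are invented to echo Figure 8, not output from any real tool.

    # Illustrative check of average response time (ART) against a requirement.
    # The ART figures and the 0.05-second target are invented for this example.
    ART_REQUIREMENT = 0.05   # seconds per transaction (assumed requirement)

    art_by_load = {50: 0.021, 100: 0.024, 150: 0.031, 200: 0.046,
                   250: 0.078, 300: 0.115}   # hypothetical measured ART values

    for users, art in art_by_load.items():
        flag = "OK" if art <= ART_REQUIREMENT else "exceeds requirement"
        print(f"{users:3d} users: ART {art * 1000:5.1f} ms  ({flag})")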

Dynameasure, an Active Measurement Product

Implementing an active measurement process requires an active measurement tool. Dynameasure, from Bluecurve, Inc., is a client/server-based application designed from the ground up to support active measurement. Its components include a test dataset and multiple application scenarios (see Figure 9). All Dynameasure application components run on 32-bit Microsoft Windows operating systems.

Figure 9: Dynameasure installation architecture

The test dataset contains the Dynameasure test schema and data. The dataset is scalable: Dynameasure base products support test datasets from about 10 MB to 10 GB. The test dataset is designed around a schema typical for a given target service (an order-entry schema for a SQL database, for example).

Manager

The Manager application is the operational heart of Dynameasure. The Manager consists of three modules: Builder, Dispatcher, and Analyzer. The Builder is used to construct and verify the test environment and provides three classes of objects: Test Datasets, Test Specifications, and Control Structures. The Test Datasets, as discussed above, are the test schema and test data that are installed on the test servers. The Control Structures comprise the contents of the Control Database. The Test Specifications are the workload definitions; these specifications determine the amount, type, and duration of active measurement sessions.

In the Builder, you can create your own versions of Dynameasure's standard test specifications. The Test Specification dialog provides access to test properties; here you adjust the number of user processes in a test, the rate at which user processes join a test, the length of test steps, the length of step phases, think time, transaction weights, and a variety of other options. Dynameasure allows an extensive range of workloads to be developed with point-and-click input.
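
As an illustration of the kind of workload parameters such a test specification captures, here is a hypothetical specification written as a plain Python data structure. The field names and values are assumptions for discussion; they do not reflect Dynameasure's actual file format or interface.

    # Hypothetical test specification expressed as a plain data structure.
    # Field names and values are illustrative; this is not Dynameasure's format.
    test_spec = {
        "name": "sql-oltp-ramp",
        "user_processes": 300,        # peak number of simulated users
        "ramp_step_users": 50,        # users added at each test step
        "step_length_seconds": 300,   # duration of each load step
        "think_time_seconds": 2.0,    # pause between a user's transactions
        "transaction_weights": {      # relative mix of transaction types
            "order_entry": 0.6,
            "status_query": 0.3,
            "report": 0.1,
        },
    }
    print(f"{test_spec['name']}: up to {test_spec['user_processes']} users, "
          f"{test_spec['ramp_step_users']} added per {test_spec['step_length_seconds']}s step")

In Dynameasure itself, these properties are set through the Test Specification dialog rather than edited by hand.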

The Dispatcher controls test execution. From the Dispatcher you launch measurements, manage target servers and clients, and track measurement progress.

The Analyzer module aids in viewing results of measurements. The Analyzer supports tabular or graphic comparisons of throughput, response time, and load. All of these summary statistics may be exported or printed. In addition, the Analyzer can print or export detailed reports on every facet of a measurement.

Client Components

The remaining application components are the Operator and Motor. These components must be present on any computer that is to function as a measurement client. The Operator represents the machine to the Dispatcher, collects machine attributes, and manages the motor population on a machine.

Motors execute transactions against the test dataset. Motors thus represent users doing work. Dynameasure allows up to 100 motors to be operated from a single computer. Motors are implemented as individual processes to accurately reflect the characteristics of a typical application.
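
The sketch below illustrates this per-user-process pattern in general terms, with each simulated user running as its own process and reporting its average response time. It is not Dynameasure's implementation; the timings are simulated stand-ins for real service work.

    # Illustrative sketch of the "motor" pattern described above: each simulated
    # user runs as its own process issuing timed requests against a service.
    # This is NOT Dynameasure's implementation, only the general pattern.
    import multiprocessing as mp
    import random
    import time

    def motor(user_id, transactions, results):
        """Simulate one user executing a fixed number of transactions."""
        latencies = []
        for _ in range(transactions):
            start = time.perf_counter()
            time.sleep(random.uniform(0.01, 0.03))  # stand-in for real service work
            latencies.append(time.perf_counter() - start)
        results.put((user_id, sum(latencies) / len(latencies)))

    if __name__ == "__main__":
        results = mp.Queue()
        motors = [mp.Process(target=motor, args=(i, 20, results)) for i in range(5)]
        for m in motors:
            m.start()
        for m in motors:
            m.join()
        while not results.empty():
            user_id, art = results.get()
            print(f"user {user_id}: average response time {art * 1000:.1f} ms")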

Advanced Features

Dynameasure has a number of advanced features designed to improve the active measurement process.

  • Scalability. Optional versions can scale the test dataset to 10 terabytes. Additional motors are also optionally available.

  • Load Balancing. Dynameasure automatically balances motors across a machine population, or you can manually adjust the balance, all from the Dispatcher. Tests are automatically balanced across the available motors.

  • Repeatability. Dynameasure can automatically rebuild the test dataset after any test that modifies the dataset, thus giving each test a clean starting point.

  • Self-verifying. Dynameasure can automatically audit the results of a SQL write test, verifying that motor-reported results match changes in the database, and that Dynameasure and SQL Server operate properly.

  • Environment capture. Dynameasure captures information about the test environment and stores this information with test results. If you want to know later how the Windows NT-based machines did against the Windows 95-based machines, or how the Microsoft SQL Server SMP option was set, the information is there.

  • Working set size control. Dynameasure has control over its working set size and offers the option of scaling the working set to fit into SQL Server's data cache.

  • Concurrent test groups. Multiple tests can be grouped together and run concurrently. This allows the effect of different background workloads to be analyzed.

Dynameasure availability

Dynameasure currently supports measuring SQL and file workloads, and is available in an enterprise version, a SQL-OLTP-only version, and a file-only version. The enterprise product allows measuring File and SQL services simultaneously, and includes SQL Decision Support measures as well.

Dynameasure 2.0, due to be released by the end of 1997, adds support for Microsoft Exchange measurement, integrates the collection of resource utilization data, and allows users to define their own transactions and measures.

More information on Dynameasure and Bluecurve is available from Bluecurve's Web site at www.bluecurve.com.

Summary

Active measurement has been developed in recognition that existing techniques for managing capacity haven't been working well, and that the migration to distributed architectures further diminishes the validity of those techniques. The central value of active measurement is its ease of use, which allows a quantitative, consistent standard of measurement to be maintained across disparate processes and functions.

For More Information

For the latest information on Windows NT Server, check out our World Wide Web site at https://www.microsoft.com/ntserver