Chapter 10: Stabilizing Phase

This chapter describes the suggested strategy for stabilizing an application that has been migrated from UNIX to the Microsoft® Windows® operating system. The Stabilizing Phase involves testing the application for the expected functionality and improving the quality of the application to meet the acceptance criteria set for the project.

This chapter describes the objectives of testing in the Stabilizing Phase. It introduces testing processes, methodologies, and tools that can be used to test applications with different architectures.


On This Page

Goals for the Stabilizing Phase
Testing the Solution
Resolving the Solution Defects
Conducting the Solution Pilot
Closing the Stabilizing Phase: Release Readiness Approved
Tuning
Testing and Optimization Tools
Further Reading

Goals for the Stabilizing Phase

The primary goal of the Stabilizing Phase is to improve the quality of the solution so that it meets the acceptance criteria and can be released to the production environment. During this phase, the team tests the feature-complete migrated application by subjecting it to various tests, such as User Acceptance Testing (UAT), regression testing, and bug tracking based on the application requirements. The build must demonstrate that it reaches the defined quality and performance levels and is ready for full production deployment.

Testing during the Stabilizing Phase is an extension of the testing that was conducted during the development of the application in the Developing Phase. Testing in the Stabilizing Phase exercises the usage and operation of the application under realistic conditions. Test plans include testing the functionality of the migrated application and comparing it with the functionality provided by the original application. Test plans must also include test cases for the new features added to the application.

After a build is stabilized, the solution is deployed. This phase ends with the Release Readiness Approved Milestone, indicating that the team and customer agree that all the outstanding issues have been addressed.

Major Tasks and Deliverables

Table 10.1 describes the tasks that must be completed during the Stabilizing Phase and lists the owners responsible for achieving them.

Table 10.1. Major Stabilizing Phase Tasks and Owners

Major Tasks

Testing the solution

The team executes the test cases that were created during the Planning Phase and enhanced and tested during the Developing Phase. Testing includes comparing the test results of the parent application with those of the migrated application as well as testing the application from different perspectives.

Test

Resolving defects

The team triages the identified defects and resolves them. New tests are developed to reproduce issues reported from other sources. The new test cases are integrated into the test suite.

Development, Test

Conducting the solution pilot

This task involves setting up the deployment environment and testing the migrated application in the staging area before it is deployed. The team moves a solution pilot from the development area to a staging area in order to test the solution with actual users and real scenarios. It also includes testing the solution in a live environment. The solution pilot is conducted before starting the Deploying Phase.

Release Management

Closing the Stabilizing Phase

The team documents the results of the tasks performed in this phase and solicits management approval at the Release Readiness Approved Milestone meeting.


Table 10.2 lists the tasks described in Table 10.1 and considers the tasks from the perspective of the team roles. The primary team roles driving the Stabilizing Phase are Test and Release Management.

Table 10.2. Role Cluster Focuses and Responsibilities in Stabilizing Phase

Role Cluster

Focus and Responsibility

Product Management

Execute communications plan and launch test phase.

Program Management

Track project and bug triage.

Release Management

Preparation for deployment of the application and setting up the production environment.

Development

Bug triage and resolution, code optimization, and hardware or service reconfiguration.

User Experience

Stabilization of user documentation and training materials.

Test

Generate build and triage plan.
Track test schedule.
Review bugs entered in the bug-tracking tool and monitor their status during triage meetings.
Generate weekly status reports.
Escalate issues that are blocking progress, review impact analysis, and generate change management documents.
Ensure that the appropriate level of testing is achieved for a particular release.
Lead the actual Build Acceptance Test (BAT) execution.
Execute test cases and generate test reports.

Testing the Solution

This section describes the testing activities that are performed in the Stabilizing Phase. Because all features and functions of the solution are now complete and all solution elements have been built, testing in this phase is performed on the solution as a whole, not just on individual components. The testing that began during the Developing Phase, according to the test plan created during the Planning Phase, continues with further testing, tracking, documentation, and reporting activities during the Stabilizing Phase. This mainly involves user acceptance testing (UAT) and regression testing, as explained in detail in the following subsections.

User Acceptance Testing

The emphasis of user acceptance testing (UAT) during the Stabilizing Phase is on ensuring that the migrated solution meets the business needs. UAT is performed on a collection of business functions in a production environment after the completion of functional testing. This is the final stage in the testing process before the system is accepted for operational use. It involves testing the system with data supplied by the actual user or customer instead of the simulated data developed as part of the testing process. UAT helps to validate the solution against the overall user requirements and also determines the release readiness status of the system. Running a pilot for a select set of users helps to identify areas where users have trouble understanding, learning, and using the solution.

For migration projects, UAT involves testing the migrated application and identifying its defects. These defects are addressed, and regression testing is conducted for each fixed defect to ensure that the fix does not break any other functionality of the migrated application. The UAT Summary confirms that the solution meets the customer’s acceptance criteria, thereby assisting in customer acceptance of the solution.

Regression Testing

Regression testing refers to retesting previously tested components and functionality of the system to ensure that they function properly even after a change has been made to parts of the system. For migration projects, this is the most important class of tests. As defects are discovered in a component, modifications should be made to correct them. This may require retesting of other components or the entire solution.

Regression testing helps in the following areas:

  • To ensure that no new problems are introduced and that the operational performance has not been degraded because of modifications.

  • To ensure that the effects of the changes are transparent to other areas of the application and other components that interact with the application.

  • To reuse the original test data and test cases from other testing activities, modifying them as needed.
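The retesting cycle described above can be sketched as a small harness that replays recorded test cases against both the original (parent) implementation and the migrated one and reports any divergence. The function names and the case list below are illustrative stand-ins, not part of the guide.

```python
# Minimal regression-test harness sketch: replay recorded cases against
# the original (reference) and migrated implementations and compare.
# Both implementations and the case list are hypothetical placeholders.

def original_impl(x):
    return x * 2          # stands in for the parent (UNIX) behavior

def migrated_impl(x):
    return x + x          # stands in for the migrated (Windows) behavior

test_cases = [0, 1, -5, 1024]   # recorded inputs from earlier test activities

def run_regression(cases, reference, candidate):
    """Return the list of (input, expected, actual) tuples that diverge."""
    failures = []
    for case in cases:
        expected = reference(case)
        actual = candidate(case)
        if expected != actual:
            failures.append((case, expected, actual))
    return failures

if __name__ == "__main__":
    diffs = run_regression(test_cases, original_impl, migrated_impl)
    print("regression failures:", diffs)   # an empty list means no regressions
```

In a real migration project the reference results would typically be captured once on the original platform and stored, so the harness compares the migrated build against recorded outputs rather than live calls.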

Resolving the Solution Defects

In order to resolve defects, they must be reproduced and tested in the test environment. Each reproduced defect in the test environment should be tracked with its status and severity. An important aspect of such tests involves test tracking and test reporting. Test tracking and reporting occurs at frequent intervals during the Developing and Stabilizing Phases. During the Stabilizing Phase, this reporting is driven by the bug count. Regular communication of the test status to the team and other key stakeholders ensures that the project runs smoothly. After fixing the defects, test cases and test data should be updated and integrated with the test suite.
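As a sketch of the tracking described above, each reproduced defect can carry a status and severity, with reporting driven by the active bug count. The field names and severity labels are assumptions for illustration, not a prescribed schema.

```python
# Minimal defect-tracking sketch: each reproduced defect carries a
# status and severity, and the status report is driven by the count
# of active (unresolved) bugs.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    id: int
    severity: str           # e.g. "critical", "major", "minor" (illustrative)
    status: str = "active"  # "active" or "resolved"

def active_bug_count(defects):
    return sum(1 for d in defects if d.status == "active")

def status_report(defects):
    """Summarize active bugs overall and by severity for team reporting."""
    by_severity = Counter(d.severity for d in defects if d.status == "active")
    return {"active": active_bug_count(defects), "by_severity": dict(by_severity)}

bugs = [Defect(1, "critical"), Defect(2, "minor"), Defect(3, "major", "resolved")]
print(status_report(bugs))
```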

Bug Convergence

Bug convergence is the point at which the team makes visible progress against the active bug count. At bug convergence, the rate of bugs resolved exceeds the rate of bugs found, thus the actual number of active bugs decreases. After bug convergence, the number of bugs should continue to decrease until the zero bug bounce task, as explained in the next sections.
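The convergence point can be detected mechanically from daily counts of bugs found and resolved; the daily figures below are invented for illustration.

```python
# Detect the bug-convergence point from daily found/resolved counts:
# the first day from which the resolve rate exceeds the find rate for
# the rest of the period, so the active bug count keeps decreasing.

def convergence_day(found, resolved):
    """found/resolved: bugs found and resolved per day (same length).
    Returns the index of the first day where, from that day onward,
    more bugs are resolved than found each day, or None if never."""
    for day in range(len(found)):
        if all(r > f for f, r in zip(found[day:], resolved[day:])):
            return day
    return None

found_per_day    = [9, 7, 6, 4, 2, 1]   # invented daily figures
resolved_per_day = [3, 5, 5, 6, 5, 4]

print(convergence_day(found_per_day, resolved_per_day))  # -> 3
```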

Interim Milestone: Bug Convergence

Bug convergence tells the team that most of the bugs have been addressed and that the rate of bugs resolved is higher than the rate of new bugs found. This can be considered an interim milestone, after which the migrated application can be considered for zero bug bounce verification.

Zero Bug Bounce

Zero bug bounce is the point in the project when development finally catches up to testing and no active bugs currently exist. After zero bug bounce, the number of bugs should continue to decrease until the product is sufficiently stable for the team to build the first release candidate.

Interim Milestone: Zero Bug Bounce

Achieving zero bug bounce is a clear sign that the solution is near to being considered a stable release candidate.

Release Candidates

After the first achievement of zero bug bounce, a series of release candidates is prepared for release to the pilot group. Each release is marked as an interim milestone.

Guidelines for declaring a build as a release candidate include the following:

  • Each release candidate has all the required elements to qualify for release to production.

  • The test period that follows determines whether a release candidate is ready to release to production or if the team must generate a new release candidate with appropriate fixes.

  • Testing the release candidates, performed internally by the team, requires highly focused, intensive efforts and concentrates heavily on discovering critical bugs.

Interim Milestone: Release Candidate

As each new release candidate is built, there should be fewer bugs reported, classified, and resolved. Each release candidate marks significant progress in the team’s approach toward deployment. With each new candidate, the team must focus on maintaining tight control on quality.

Interim Milestone: Preproduction Test Complete

Eventually, a release candidate is prepared in which no defects are found within the isolated staging environment. At this stage, all testing that can be done before putting the migrated component into production has been completed.

Conducting the Solution Pilot

This section describes the best practices to adopt for conducting a pilot of the migrated application. This section provides you with information regarding various points to be considered while conducting a pilot and deciding the next steps after the pilot.

A pilot release is a deployment into a subset of the live production environment or user group. During the pilot, the team tests as much of the entire solution as possible in a true production environment. Depending on the context of the project, the pilot can take various forms:

  • In an enterprise, a pilot can be a group of users or a set of servers in a data center.

  • For migration projects, the pilot might involve testing the most demanding application or database that is being migrated with a sophisticated group of users who can provide helpful feedback.

The common element in all piloting scenarios is testing under live conditions. The pilot is not complete until the team ensures that the solution is viable in the production environment and that the solution is ready for deployment.

Some of the best practices that should be followed while conducting a pilot are:

  • Before beginning a pilot, the team and the pilot participants must clearly identify and agree upon the success criteria for the pilot. These should map back to the success criteria for the development effort.

  • Any issues identified during a pilot must be resolved either by further development, by documenting resolutions and workarounds for the installation team and production support staff, or by incorporating them as supplemental material in training or Help documentation.

  • Before the pilot is started, a support structure and an issue-resolution process must be in place. This may require that the support staff receive training in the application area that is being piloted.

  • In order to determine any issues and confirm that the deployment process will work, it is necessary to implement a trial run or a rehearsal of all the elements of the deployment prior to the actual deployment.

After you collect and evaluate the pilot data, a corresponding strategy should be selected based on the findings from the analysis of pilot data. The next strategy could be one of the following:

  • Stagger forward. Deploy a new release to the pilot group.

  • Roll back. Execute the rollback plan and revert the pilot group to the stable state it had before the pilot started.

  • Suspend. Suspend the entire pilot.

  • Fix and continue. If you find an issue during the pilot, fix the issue and continue with the next steps.

  • Proceed. Advance to the Deploying Phase.

After the pilot has been completed, the pilot team must prepare a report detailing each lesson learned and how new information was incorporated and issues were resolved.

Interim Milestone: Pilot Complete

This milestone signifies that the pilot has been successfully completed and that the team is ready to proceed to the Deploying Phase.

Closing the Stabilizing Phase: Release Readiness Approved

The Stabilizing Phase culminates with the Release Readiness Approved Milestone. The team builds a release candidate (with all the major defects fixed) that satisfies the necessary quality policy of the organization. All rounds of testing must be completed, meaning that all test plans have been executed and test cases satisfied before the migrated component can be moved into the production environment. Then the release is approved with a formal sign-off marking that the Release Readiness Approved Milestone has been reached.

Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the solution is complete and approved for release. The sign-off document becomes a project deliverable and is archived for future reference.

The performance of the application following deployment in the production environment is a key criterion indicating a successful application migration. The following sections will help you optimize the performance of the application following deployment and introduce tools that support this work.


Tuning

This section discusses tuning of the solution in detail, including how to performance-tune the migrated application and how to scale the application up and out. In addition, the section discusses multiprocessor considerations for applications and network utilization. You can use this information to identify the parameters that affect application performance and the steps to consider when scaling applications.

Performance Tuning

Performance management starts with gathering a baseline of data that indicates what system performance should look like. After the baseline is established, it is used to evaluate the performance of the application. Performance problems typically do not become apparent until the application is placed under an increased load.

Measuring the performance of an application when placed under ever increasing loads determines the scalability of that application. When the performance begins to fall below the stated minimum performance requirements, you have reached the limit of scalability of the application. For more information about scaling, refer to the "Scaling Up and Scaling Out" section later in this chapter.
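The scalability limit described above can be sketched numerically: increase the load step by step until measured performance falls below the stated minimum requirement. The throughput model and numbers here are synthetic stand-ins; in practice they would come from actual load-test runs.

```python
# Find the scalability limit: the highest load level at which measured
# performance still meets the stated minimum requirement. The throughput
# model is a synthetic stand-in for real load-test measurements.

def throughput(concurrent_users):
    """Synthetic model: throughput degrades as concurrent load grows."""
    return 1000.0 / (1.0 + 0.002 * concurrent_users ** 1.5)

MIN_REQUIRED_TPS = 400.0   # assumed minimum performance requirement

def scalability_limit(step=50, max_users=10_000):
    """Step up the load; return the last level that still met the minimum."""
    last_ok = 0
    for users in range(step, max_users + 1, step):
        if throughput(users) < MIN_REQUIRED_TPS:
            break
        last_ok = users
    return last_ok

print("scalability limit:", scalability_limit(), "concurrent users")
```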

Performance tuning can be done in the following ways:

  • Tuning the computer hardware by adding more memory, upgrading CPUs, adding disk controllers, or upgrading network controllers. This is often the most efficient approach and can improve application performance without requiring code changes.

  • Application rearchitecture to remove bottlenecks such as poor threading and looping and checking for other loops that use too much CPU time. This step also helps considerably in performance tuning.

  • Operating system parameter tuning, which involves adjusting the amount of page store and tweaking network stack parameters.

  • Tuning the configurations on a database server, application server, or Web server.

In UNIX, performance is monitored using a type of kernel-level instrumentation, along with rudimentary tools for monitoring the CPU, disk, and memory usage. Windows Server™ 2003 is designed such that it exposes a great deal of performance data. Tools like Windows Performance Monitor (PerfMon) can be used to export detailed information about the processor, memory, disk, and network usage. Performance Monitor support is integrated throughout Windows. Administrators can gather a variety of performance data from many computers simultaneously.

UNIX kernels tend to have many configurable parameters that can be fine-tuned for specific applications. By contrast, the Windows kernel is largely self-tuning. The virtual memory, thread scheduling, and I/O subsystems all dynamically adjust their resource usage and priority to maximize throughput. The difference between these two approaches is that the UNIX approach is to tweak kernel parameters for maximum advantage in benchmarks, even if those tweaks adversely affect real-world performance, whereas the Windows approach is to let the kernel tune itself for whatever load is placed on it.

More information on improving performance is available at
More information on writing high-performance managed applications is available at

Scaling Up and Scaling Out

Scalability is a measure of how easy it is to modify the application infrastructure and architecture to meet variances in utilization. As with other application capabilities, the decisions you make during the design and early coding phases largely dictate the scalability of your application.

Application scalability requires a balanced partnership between two distinct domains: software and hardware. Because scalability is not a design concern of stand-alone applications, the applications discussed here are distributed applications.

Scaling up involves achieving scalability with the use of better, faster, and more expensive hardware to move the processing capacity limit from one part of the computer to another. Scaling up includes adding more memory, adding more or faster processors, or just migrating the application to a more powerful, single computer. Typically, this method allows for an increase in capacity without requiring changes to source code. However, adding CPUs does not add performance in a linear fashion. Instead, the performance gain curve slowly tapers off as each additional processor is added.
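The tapering gain from additional processors is commonly modeled with Amdahl's law; a quick calculation (assuming, for illustration, that 90 percent of the workload parallelizes) shows the effect.

```python
# Amdahl's law: the speedup from n processors when a fraction p of the
# work is parallelizable. The gain tapers off as n grows because the
# serial fraction (1 - p) comes to dominate the run time.

def amdahl_speedup(p, n):
    """Speedup with n processors, parallelizable fraction p (0..1)."""
    return 1.0 / ((1.0 - p) + p / n)

P = 0.9   # assumption for illustration: 90% of the workload parallelizes
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} CPUs -> speedup {amdahl_speedup(P, n):.2f}")
# Even with unlimited CPUs the speedup is bounded by 1 / (1 - p) = 10x.
```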

Scaling out distributes the processing load across more than one server by dedicating several computers to a common task. This also increases the fault tolerance of the application. However, scaling out presents a greater management challenge because of the increased number of computers.

Developers and administrators use a variety of load-balancing techniques to scale out with the Windows platform. Load balancing allows an application site to scale out across a cluster of servers, making it easy to add capacity by adding replicated servers. It provides redundancy, giving the site failover capabilities so that it remains available to users even if one or more servers fail or are taken down.
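The simplest of these load-balancing techniques, round-robin dispatch with failover, can be sketched as follows; the server names are hypothetical, and a production deployment would use a platform facility (such as Network Load Balancing) rather than hand-rolled code.

```python
# Sketch of round-robin load balancing with failover: requests rotate
# across a replicated server pool, skipping servers marked as down.
from itertools import count

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()
        self._turn = count()   # monotonically increasing request counter

    def mark_down(self, server):
        self.down.add(server)

    def next_server(self):
        """Return the next healthy server, rotating round-robin."""
        for _ in range(len(self.servers)):
            server = self.servers[next(self._turn) % len(self.servers)]
            if server not in self.down:
                return server
        raise RuntimeError("no healthy servers available")

pool = RoundRobinBalancer(["web1", "web2", "web3"])   # hypothetical hosts
print([pool.next_server() for _ in range(4)])  # -> web1, web2, web3, web1
pool.mark_down("web2")
print([pool.next_server() for _ in range(3)])  # web2 is now skipped
```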

Scaling out provides a method of scalability that is not hampered by hardware limitations. Each additional server provides a near linear increase in scalability.

The key to successfully scaling out an application is location transparency. If any of the application code depends on knowing which server is running the code, location transparency has not been achieved and scaling out will be difficult. This situation requires code changes to scale out an application from one server to many, which is seldom an economical option. If you design the application with location transparency in mind, scaling out becomes an easier task.

More information on scaling is available at
Microsoft Application Center 2000 reduces the complexity and the cost of scaling out. More information on "Application Center 2000" is available at
More information on scaling network-aware applications is available at

Multiprocessor Considerations

Application performance improves by having multiple processors perform the same task. You can distribute the processing load across several processors.

Computationally intensive tasks are characterized by intensive processor usage with relatively few I/O operations. The ongoing challenge with these applications is to improve their performance. You can do this with a faster computer, a more efficient algorithm, an improved implementation, or more processors. Tuning techniques can also help improve performance.

Using more processors can mean taking advantage of an SMP computer or by using distributed computing with multiple networked computers. However, adding CPUs does not add performance in a linear fashion. Instead, the performance gain curve slowly tapers off as each additional processor is added. For computers with SMP configurations, each additional processor incurs system overhead. After you have upgraded each hardware component to its maximum capacity, you will eventually reach the real limit of the processing capacity of the computer. At that point, the next step is to move to another computer.

Multiprocessor optimization can be achieved by making use of threads.
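As a sketch of this thread-based decomposition (shown here with a standard thread pool; a native Windows application would typically use the Win32 threading APIs), a computationally intensive task can be split into chunks that an SMP machine can schedule across processors:

```python
# Split a computational task across worker threads so that an SMP
# machine can schedule the chunks on multiple processors. The task
# (sum of squares) is an arbitrary stand-in for real computation.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    """Divide [0, n) into one chunk per worker and sum the partial results."""
    chunk = n // workers
    bounds = [(i * chunk, (i + 1) * chunk if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, bounds))

print(parallel_sum_squares(10_000))  # same answer as the serial loop
```

Note that the decomposition pattern, not the language, is the point: the work must be divisible into independent chunks whose results can be cheaply combined, or the coordination overhead will erase the multiprocessor gain.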

Note   More information on multiprocessor optimizations is available at

Network Utilizations

Network resources, such as available bandwidth and latency, must be predicted and managed on computers and devices throughout the network.

Optimal network utilization is achieved through cooperation among the end nodes, switches, routers, and wide area network (WAN) links through which data must pass. Preferential treatment must be given to certain data as it traverses the network so that critical components are better serviced during congestion. Tools are available that analyze network traffic and provide network statistics and packet information, helping you make better use of the network by identifying areas of congestion.

Quality of Service (QoS), an industry-wide initiative, achieves a more efficient use of network resources by differentiating between data subsets. Windows 2000 implements QoS by including a number of components that can cooperate with one another.

Note   More information on QoS on Windows is available at

Note   Network Monitor captures network traffic for display and analysis. More information on Network Monitor is available at

Note   Network Probe is another tool for traffic-level network monitoring and for analysis and visualization. More information on Network Probe is available at

Testing and Optimization Tools

This section lists some of the useful tools that can be used for testing and monitoring your applications.

Visual Studio .NET 2003 Tools

Microsoft Visual Studio® .NET 2003 includes tools for analyzing the performance of applications. These include:

Platform SDK Tools

The Platform SDK includes debugging tools, file management tools, performance tools, and testing tools. These tools are available with the latest Platform SDK.

Debugging Tools

Platform SDK includes the following debugging tools:

File Management Tools
Performance Tools

Performance tools can be used to measure application performance and resolve some performance issues. Platform SDK includes the following performance tools:

Testing Tools

Other Commonly Used Tools

This section lists other commonly used tools that are useful in testing and monitoring applications.

Monitoring Tools
  • Diskmon. This tool captures all hard disk activity or acts as a software disk activity light in your system tray. This tool is available for download at

  • Filemon. This monitoring tool allows you to view all file system activity in real-time. This tool works on all versions of Windows NT, Windows 2000, Windows Server 2003, and Windows XP. It also works with the Windows XP 64-bit edition. This tool is available for download at

  • PMon. This is a Windows NT GUI/device driver program that monitors process and thread creation and deletion, as well as context swaps if it is running on a multiprocessing or checked kernel. This tool is available for download at

  • Portmon. You can monitor serial and parallel port activity with this advanced monitoring tool. It knows about all standard serial and parallel IOCTLs and even shows you a portion of the data being sent and received. This tool is available for download at

  • Regmon. This monitoring tool allows you to view all registry activity in real-time. This tool is available for download at

  • TCPView. You can view all the open TCP and UDP endpoints. TCPView even displays the name of the process that owns each endpoint. This tool is available for download at

  • Task Manager. Task Manager provides run-time information on processes. The Task Manager tool is available as part of Windows.

Testing Tools
Source Test Tools

Tools for Win64:

  • VTune Performance Analyzer. Intel VTune Performance Analyzer helps locate and remove software performance bottlenecks by collecting, analyzing, and displaying performance data from the system-wide level down to the source level. More information about VTune Performance Analyzer is available at

Further Reading

