Performance Development Lifecycle for IT - Part 3

This post continues the series on PDL-IT (part-1, part-2). Part 1 gave a basic overview of PDL-IT, while Part 2 discussed the action items PDL-IT proposes during the Envision and Design phases of the SDLC. This post briefly discusses the action items, from a performance standpoint, for the Build, Stabilize, Deploy, and Production phases of the SDLC.

Build Phase

The Build phase of the SDLC consists of coding the application and generating test plans. It is essential for developers to follow coding best practices in order to build a highly performant application.

The action items proposed by PDL-IT during this phase are:

1. Ensure that coding best practices are followed from a performance standpoint by establishing best-practice workshops/training programs for developers.

2. Conduct code profiling - the process of identifying and fixing slow-running functions, expensive operations, etc.
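For a .NET application this is typically done with the Visual Studio profiler; as a language-neutral illustration (the code below is a sketch, not part of PDL-IT), Python's built-in cProfile shows the basic idea of ranking functions by cumulative time to find the hot spots worth patching:

```python
import cProfile
import io
import pstats

def expensive_lookup(n):
    # Deliberately slow: membership tests against a list are O(n),
    # so this loop is O(n^2) - a typical profiling find
    data = list(range(n))
    return sum(1 for x in data if x in data)

def handler():
    return expensive_lookup(2000)

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

# Rank functions by cumulative time; the worst offenders float to the top
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The same workflow - record, sort by cumulative time, inspect the top few entries - applies regardless of the profiler used.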

Stabilize Phase

The Stabilize phase includes identifying performance bottlenecks and bugs, the behavior of the system under load, network utilization, etc. Performance testing includes load/stress testing to identify code bottlenecks and issues, capacity testing to determine the web application's capacity and scalability, extended stress testing to assess the stability of the web application, etc. For this phase, the application needs to be hosted on a production-equivalent test environment in order to simulate production-like user loads. The phase can be broadly classified into three stages:

1. Application Walkthrough

2. Application Network Analysis

3. Load/Stress testing

Application Walkthrough:

The main objective of the walkthrough/review is to verify that the application is actually in a state where it can sustain load, so that it can proceed to all levels of performance testing. The process involves a single user executing the application's functionality and verifying that execution durations and response times fall within the business rules.

An application walkthrough is helpful in revealing:

1. Redundant page calls

2. Server errors (401, 404, etc.)

3. Database counts, which reveal what gets updated or changed - vital information for predicting how the application will react/perform under load

4. Processing delays

5. Long duration page rendering

6. Content objects on each page
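At its core, the walkthrough amounts to timing a single-user pass over each page and flagging the ones that exceed their response-time budget. The sketch below illustrates that idea (the pages, budgets, and the stand-in request function are all hypothetical; a real walkthrough would issue actual HTTP requests):

```python
import time

# Hypothetical per-page response-time budgets (seconds); in practice these
# come from the business rules for execution and response times
SLA_BUDGETS = {"/login": 0.05, "/search": 0.05, "/checkout": 0.05}

def timed_request(page, simulated_seconds):
    """Stand-in for a real single-user HTTP request: just waits for the
    given time, then reports how long the 'request' took."""
    start = time.perf_counter()
    time.sleep(simulated_seconds)
    return time.perf_counter() - start

def walkthrough(observed):
    """Execute each page once and return the pages that blow their budget."""
    failures = []
    for page, seconds in observed.items():
        elapsed = timed_request(page, seconds)
        if elapsed > SLA_BUDGETS[page]:
            failures.append(page)
    return failures

# /checkout deliberately exceeds its 50 ms budget in this simulation
print(walkthrough({"/login": 0.0, "/search": 0.0, "/checkout": 0.1}))
```

Only after every page passes this gate does it make sense to spend effort on full load/stress runs.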

Recommended tools for Application Walkthrough:

1. neXpert

2. SQL Profiler

Application Network Analysis:

The goal of application network analysis is to measure, analyze, and fix end-user delays by reducing the number and size of objects traveling from the web server to the browser over the network. Its purpose can be summarized in the following action points:

1. To pinpoint problematic areas

2. To detect application delays

3. To predict end user response times

4. To analyze application objects

5. To help in creating load test scripts
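Point 3 above, predicting end-user response times, can be approximated from the page's composition alone. The back-of-the-envelope model below (the formula and all numbers are illustrative assumptions, not PDL-IT guidance) charges one round trip per object, spread over the browser's parallel connections, plus the transfer time of the total payload:

```python
def predict_response_time(num_objects, total_bytes, rtt_s, bandwidth_bps,
                          concurrent_connections=2):
    """Rough end-user response-time estimate from page composition."""
    # Each object costs at least one round trip; browsers fetch several
    # objects in parallel, so divide the round-trip cost accordingly
    round_trip_cost = (num_objects / concurrent_connections) * rtt_s
    # Transfer cost: total payload in bits over the available bandwidth
    transfer_cost = (total_bytes * 8) / bandwidth_bps
    return round_trip_cost + transfer_cost

# A page with 40 objects totalling 900 KB over a 100 ms RTT, 5 Mbps link
t = predict_response_time(40, 900_000, 0.100, 5_000_000)
print(f"predicted: {t:.2f} s")
```

The model makes the trade-off visible: cutting the object count attacks the round-trip term, while compression attacks the transfer term.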

Recommended tools for Network Analysis:

1. Network Monitor

2. Fiddler (with the neXpert add-on)

Load/Stress testing

Load/stress testing is a very common type of testing conducted as part of performance testing of web applications. It involves understanding the behavior of the system when subjected to high concurrent load. This is done by simulating artificial user load (often referred to as virtual users) that mimics production user activity in the test environment. Such testing can reveal issues like potential deadlocks, response times at different user loads, system capacity, hardware bottlenecks, expensive operations, etc.
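The mechanics of virtual users can be sketched as follows - concurrent workers each timing their own "request", with the results aggregated into the usual load-test statistics. This is an illustrative sketch using simulated delays rather than real HTTP calls; a tool such as VSTS does the same thing at far larger scale:

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id):
    """Stand-in for one virtual user's request; swap in a real HTTP call.
    Sleeps for a randomized 'server time' and returns the measured duration."""
    start = time.perf_counter()
    time.sleep(0.05 + random.uniform(0.0, 0.05))
    return time.perf_counter() - start

def load_test(virtual_users):
    # All virtual users run concurrently, as in a load-test rig
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        times = sorted(pool.map(simulated_request, range(virtual_users)))
    # 95th-percentile response time - a common load-test metric
    p95 = times[int(0.95 * (len(times) - 1))]
    return statistics.mean(times), p95

mean_t, p95_t = load_test(50)
print(f"mean={mean_t:.3f}s  p95={p95_t:.3f}s")
```

Ramping `virtual_users` up in steps while watching the mean and p95 is essentially what a capacity test automates.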

Recommended tools for Load testing:

1. Visual Studio Team System (VSTS)

Deploy Phase

Before deploying the application to the production environment, a careful study needs to be made of the type of environment in which the application is about to be hosted. Depending on the environment allocated for the application, ensuring that best practices are followed is critical for performance. The major performance-related activities during the Deploy phase are:

1. Understanding the production environment, e.g., Hyper-V or shared hosting environments.

2. Following IIS/SQL Server best practices suitable for your application, e.g., app pool configuration/sharing, compression, content expiration, etc.
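As a concrete example of point 2 (a hypothetical fragment to be adapted to your application, not a recommended default), compression and content expiration can be switched on in IIS 7+ through web.config:

```xml
<!-- Illustrative web.config fragment: enables compression and sets a
     one-week client-cache expiration for static content -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```

Both settings directly attack the object-count and object-size costs identified during network analysis.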

Generally, end-to-end load testing during the Stabilize phase is done in a dedicated environment. In IT organizations, however, multiple applications are often hosted on a single server. This leads to a situation where the availability of resources for a given application also depends on the other applications hosted on the same server. Each application, depending on its functionality, has specific resource requirements. If two or more CPU-intensive applications are hosted on the same server, they must compete for the CPU, affecting the overall performance of the system. Care should be taken that such a situation does not arise. In such cases, the stakeholders of the applications need to set up a dialog with each other in order to understand the needs of the applications to be hosted and to make trade-off decisions.

Production Phase

Monitoring real-time application performance in the production environment is essential for proactively identifying performance problems. The key deliverables of production monitoring are:

1. Identify pages whose response times fall outside service-level agreements

2. Validate end-user experiences

3. Generate performance trend data

Monitoring also needs to cover system resources and hardware failures. Some of the action steps for monitoring are:

1. MOM (MS System Center) for System Resource Monitoring

2. Web logs and SQL Stats for code execution monitoring

3. Weekly generation of a report on the current state of performance

Regular analysis of IIS logs can reveal useful information about how the application behaves in production:

1. Response-time values for each web page

2. Peak hour concurrent load on the system

3. Server errors

4. Amount of data transferred per page
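Extracting these numbers is mostly a matter of aggregating log fields per page. The sketch below uses made-up log lines and a simplified field layout (real IIS W3C logs carry named fields such as cs-uri-stem, sc-status, sc-bytes, and time-taken, plus header lines), but the aggregation logic is the same:

```python
from collections import defaultdict

# Simplified log lines: uri, status code, bytes sent, time taken (ms)
SAMPLE_LOG = """\
/default.aspx 200 15230 120
/search.aspx 200 48211 640
/default.aspx 500 0 30
/search.aspx 200 47990 710
"""

def analyze(log_text):
    """Aggregate per-page response times, server errors, and bytes sent."""
    times = defaultdict(list)
    bytes_per_page = defaultdict(int)
    errors = 0
    for line in log_text.splitlines():
        uri, status, sc_bytes, taken_ms = line.split()
        if int(status) >= 500:
            errors += 1
        times[uri].append(int(taken_ms))
        bytes_per_page[uri] += int(sc_bytes)
    avg_ms = {uri: sum(v) / len(v) for uri, v in times.items()}
    return avg_ms, errors, bytes_per_page

avg_ms, errors, bytes_per_page = analyze(SAMPLE_LOG)
print(avg_ms, errors)
```

Running such a script on a schedule is one way to feed the weekly performance status report mentioned above.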