Considerations for Large Load Tests
This topic provides tips for performing large load tests in Visual Studio Team System Test Edition. The following subjects are discussed:
- Choosing the Appropriate Load Pattern
- Choosing the Appropriate Connection Model
- Sample Rate and Data Collection
- Think Time
- Setting Response Time Goals for Web Test Requests
- Including Timing Details to Collect Percentile Data
- Setting the Percentage of New Users Property
- Enabling SQL Tracing
- Maintaining an Appropriate Number of Agent Computers
Choosing the Appropriate Load Pattern
There are three types of load patterns: constant, step, and goal-based. To choose the load pattern that is appropriate for your load test, you must understand the advantages of each type. For more information, see About Load Pattern.
| Load pattern | Description |
|---|---|
| Constant | A constant load pattern is useful when you want to run your load test with the same user load for a long period of time. If you specify a high user load with a constant load pattern, it is recommended that you also specify a warm-up time for the load test. When you specify a warm-up time, you avoid over-stressing your site by having hundreds of new user sessions hitting the site at the same time. |
| Step | A step load pattern is one of the most common and useful load patterns, because it allows you to monitor the performance of your system as the user load increases. Monitoring your system as the user load increases allows you to determine the number of users who can be supported with acceptable response times, or conversely, the number of users at which performance becomes unacceptable. |
| Goal-based | A goal-based load pattern is similar to a step load pattern in that the user load typically increases over time, but it allows you to specify that the load should stop increasing when some performance counter reaches a certain level. For example, you can use a goal-based load pattern to continue increasing the load until one of your target servers is 75% busy, and then keep the load steady. |
If no predefined load pattern meets your needs, it is also possible to implement a custom load test plug-in that controls the user load as the load test runs. For more information, see Advanced Load Test Tasks.
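As a rough illustration, a custom load test plug-in along these lines can adjust the user load while the test runs. This is only a minimal sketch: it assumes the ILoadTestPlugin interface, the LoadTest.Heartbeat event, and the HeartbeatEventArgs.ElapsedSeconds and LoadTestScenario.CurrentLoad members of the Microsoft.VisualStudio.TestTools.LoadTesting namespace; verify the member names against the API reference for your version before using it.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Sketch of a load test plug-in that adds 25 virtual users to the first
// scenario every 60 seconds, up to a ceiling of 500 users.
public class RampUserLoadPlugin : ILoadTestPlugin
{
    private LoadTest loadTest;

    public void Initialize(LoadTest loadTest)
    {
        this.loadTest = loadTest;

        // Heartbeat fires periodically (roughly once per second) while the
        // load test is running.
        this.loadTest.Heartbeat += OnHeartbeat;
    }

    private void OnHeartbeat(object sender, HeartbeatEventArgs e)
    {
        // ElapsedSeconds and CurrentLoad are assumed member names; check the
        // LoadTesting API reference for your Visual Studio version.
        if (e.ElapsedSeconds > 0 && e.ElapsedSeconds % 60 == 0)
        {
            LoadTestScenario scenario = loadTest.Scenarios[0];
            scenario.CurrentLoad = Math.Min(scenario.CurrentLoad + 25, 500);
        }
    }
}
```

The compiled plug-in assembly is attached to the load test from the Load Test Editor; see Advanced Load Test Tasks for the supported plug-in mechanism in your version.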
Choosing the Appropriate Connection Model
There are two types of connection models: connection per user and connection pool. To choose the connection model that is appropriate for your load test, you must understand the advantages of each type.
| Connection model | Description |
|---|---|
| Connection Per User | The connection per user model most closely simulates the behavior of a real browser. Each virtual user who is running a Web test uses one or two connections to the Web server that are dedicated to that virtual user. The first connection is established when the first request in the Web test is issued. A second connection may be used when a page contains more than one dependent request; these requests may be issued in parallel using the two connections. These same connections are re-used for subsequent requests within the Web test, and are closed when the Web test has finished running. The disadvantage of the connection per user model is that the number of connections held open on the agent computer may be as high as 2 times the user load, and the resources required to support this high connection count may limit the user load that can be driven from a single load test agent. |
| Connection Pool | The connection pool model conserves the resources on the load test agent by sharing connections to the Web server among multiple virtual Web test users. In the connection pool model, the connection pool size specifies the maximum number of connections to make between the load test agent and the Web server. If the user load is larger than the connection pool size, then Web tests that are running on behalf of different virtual users will share a connection. Sharing a connection means that one Web test may have to wait before issuing a request when another Web test is using the connection. The average time that a Web test waits before submitting a request is tracked by the load test performance counter Avg. Connection Wait Time. This number should be less than the average response time for a page. If it is not, the connection pool size is probably too small. |
Sample Rate and Data Collection
Choose an appropriate sample rate based on the length of your load test. A small sample rate, for example five seconds, collects more data for each performance counter than a large sample rate. Collecting a large amount of data over a long period of time can cause you to run out of disk space. For long load tests, you can increase the sample rate to reduce the amount of data collected. The number of performance counters also affects how much data is collected; for computers under test, reducing the number of counters will reduce the amount of data collected.
You must experiment to determine what sample rate will work best for your particular load test. However, the following table provides recommended sample rates that you can use to get started.
| Load Test Duration | Recommended Sample Rate |
|---|---|
| < 1 Hour | 5 seconds |
| 1 - 8 Hours | 15 seconds |
| 8 - 24 Hours | 30 seconds |
| > 24 Hours | 60 seconds |
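As a rough illustration of why the sample rate matters, the sketch below estimates how many counter samples a run collects: one sample per counter, per monitored computer, per sampling interval. The counter and computer counts are made-up values for illustration, not product defaults.

```csharp
using System;

class SampleRateEstimate
{
    static void Main()
    {
        int countersPerComputer = 40;   // illustrative number of counters collected per computer
        int monitoredComputers = 5;     // agents plus servers under test (illustrative)
        int durationSeconds = 8 * 3600; // an 8-hour load test

        // One sample is collected per counter, per computer, per sampling interval.
        foreach (int sampleRateSeconds in new[] { 5, 15, 30 })
        {
            long samples = (long)countersPerComputer * monitoredComputers
                           * (durationSeconds / sampleRateSeconds);
            Console.WriteLine("{0}-second rate: {1:N0} samples", sampleRateSeconds, samples);
        }
    }
}
```

With these example numbers, an 8-hour run produces roughly 1,152,000 samples at a 5-second rate but only 384,000 at a 15-second rate, which is why longer tests use larger sample rates.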
Think Time
The think time for Web test requests has a significant effect on the number of users who can be supported with reasonable response times. Increasing think times from 2 seconds to 10 seconds will usually enable you to simulate about five times as many users. However, if your goal is to simulate real users, you should set think times based on how you expect users to behave on your Web site. Increasing the think time and the number of users will not necessarily put additional stress on your Web server. If the Web site requires authentication, the authentication scheme that is used will also affect performance.
If you disable think times for a Web test, you might be able to generate a load test that has higher throughput in terms of requests per second. If you disable think times, you should also reduce the number of users to a much smaller number than when think times are enabled. For example, if you disable think times and try to run 1000 users, you are likely to overwhelm either the target server or the load test agent.
For more information, see About Think Times.
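As a rough illustration of the relationship described above, each virtual user issues a new request about once every (average response time + think time) seconds, so when think time dominates, five times the think time supports roughly five times the users at about the same request rate. The numbers in this sketch are illustrative only.

```csharp
using System;

class ThinkTimeEstimate
{
    // Approximate steady-state request rate for a group of virtual users.
    static double RequestsPerSecond(int users, double avgResponseSeconds, double thinkSeconds)
    {
        return users / (avgResponseSeconds + thinkSeconds);
    }

    static void Main()
    {
        // 200 users with 2-second think times...
        Console.WriteLine(RequestsPerSecond(200, 0.5, 2.0));    // ~80 requests per second
        // ...generate about the same load as 1000 users with 10-second think times.
        Console.WriteLine(RequestsPerSecond(1000, 0.5, 10.0));  // ~95 requests per second
    }
}
```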
Setting Response Time Goals for Web Test Requests
One of the properties of a Web test request is the response time goal. If you define response time goals for your Web test requests, then when the Web test is run in a load test, the load test analyzer reports the percentage of Web tests for which the response time did not meet the goal. By default, there are no response time goals defined for Web requests.
For more information, see How to: Set Page Response Time Goals in a Web Test.
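In a declarative Web test you set the goal in the request's properties; in a coded Web test the equivalent is a property on WebTestRequest. The sketch below assumes that property is named ResponseTimeGoal and is expressed in seconds, and it uses a placeholder URL; check the WebTesting API reference for your version.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Minimal coded Web test that sets a response time goal on one request.
public class HomePageWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // http://contoso.example/ is a placeholder for the site under test.
        WebTestRequest home = new WebTestRequest("http://contoso.example/");

        // Report this request as missing its goal if it takes longer than
        // 3 seconds (ResponseTimeGoal is an assumed property name).
        home.ResponseTimeGoal = 3.0;

        yield return home;
    }
}
```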
Including Timing Details to Collect Percentile Data
The run settings include a property named Timing Details Storage. If this property is enabled, the time it takes to execute each individual test, transaction, and page during the load test will be stored in the load test results repository. This allows 90th and 95th percentile data to be shown in the load test analyzer in the Tests, Transactions, and Pages tables.
By default, Timing Details Storage is disabled, for two reasons. First, the timing details data can require a large amount of space in the load test results repository, especially for long load tests. Second, storing this data in the repository at the end of the load test takes additional time, because the data is held on the load test agents until the load test has finished executing.
If sufficient disk space is available in the load test results repository, you can enable Timing Details Storage to obtain the percentile data. You have two choices for enabling Timing Details Storage: StatisticsOnly and AllIndividualDetails. With either option, all the individual tests, pages, and transactions are timed, and percentile data is calculated from the individual timing data. If you choose StatisticsOnly, the individual timing data is deleted from the repository after the percentile data has been calculated. Deleting the data reduces the amount of space that is required in the repository. However, if you want to process the timing detail data directly, using SQL tools, choose AllIndividualDetails so that the timing detail data is saved in the repository.
Setting the Percentage of New Users Property
Each scenario in a load test has a property named Percentage of New Users. This property affects the way the load test runtime engine simulates the caching that would be performed by a Web browser. The default value for Percentage of New Users is 100. This means that each Web test iteration that is run in a load test is treated as a first-time user of the Web site, who does not have any content from the Web site in the browser cache from previous visits. Therefore, all requests in the Web test, including all dependent requests such as images, are downloaded.
Note
An exception is the case where the same cacheable resource is requested more than once in a Web test.
If you are load testing a Web site that has a significant number of return users who are likely to have images and other cacheable content stored locally, then the default value of 100 for Percentage of New Users will generate more download requests than would occur in real-world usage. In that case, estimate the percentage of visits to your Web site that come from first-time users, and set Percentage of New Users accordingly.
Enabling SQL Tracing
The run settings include a property named SQL Tracing Enabled. This property allows you to enable the tracing feature of Microsoft SQL Server for the duration of a load test, as an alternative to starting a separate SQL Profiler session while the load test is running to diagnose SQL performance problems. If the property is enabled, SQL trace data is displayed in the load test analyzer; you can view it on the Tables page in the SQL Trace table.
To enable this feature, the user running the load test must have the SQL privileges required to perform SQL tracing. When a load test is running on a rig, the controller user must have the SQL privileges. You must also specify a directory, usually a network share, where the trace data file will be written. At the completion of the load test, the trace data file is imported into the load test repository and associated with the load test so that it can be viewed later using the load test analyzer.
For more information, see About Run Settings, and How to: Integrate SQL Trace Data.
Maintaining an Appropriate Number of Agent Computers
An agent computer that has more than 75% CPU utilization, or that has less than 10% of physical memory available, is overloaded. Add more agents to your rig to ensure that agent computers do not become the bottleneck in your load test.
For more information, see Controllers, Agents, and Rigs.