Chapter 6: A Testing Toolbox


It used to be that when you wanted to track down a bug, you'd insert print statements to see where things went wrong. At Contoso, that's still common practice. But Fabrikam's project teams know that there is a wide variety of powerful tools they can bring to bear, not only to discover and track down bugs, but also to keep them to a minimum.

We've seen how Fabrikam's team places strong emphasis on unit tests to keep the code stable, and how they record manual tests to help track down bugs. We've seen how they create virtual environments to make lab configuration faster and more reliable. They like to automate system tests where they can, so as to reserve manual testing effort for exploring and validating new features. They generate charts of tests against each requirement, so that progress (or delay) is constantly evident.

And there are more tools in that toolbox. Let's open it up and see what's in there.

Where do my team members find their bugs?

Using Visual Studio, Microsoft Test Manager, and a few key reports from your build-deploy-test automation, you can easily identify issues in your application under test. Using these Microsoft tools helps locate and resolve bugs in a systematic fashion, thus shortening the cycle time.

Bugs are found by running tests. There are four principal ways in which bugs are found and managed:

  • Local unit testing on the development machine. Before checking in updated application code, the developer will typically write and run unit tests. In Chapter 3, "Lab Environments," we explained how you can set up check-in policies to insist that specified unit tests are run.
  • Build verification tests. As the team members check in code to the source repository, the build server runs unit tests on the integrated code. Even if local tests pass, it is still possible for the combined changes from different developers to cause a failure.
  • Manual testing. If you discover a bug while running a scripted test case or during exploratory testing, you typically generate a bug work item. Alternatively, there might be an error in the test case script.
  • Automated system tests. After a failed build-deploy-test run, you can inspect the test data, and then either create a bug work item or fix the test.
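
The local unit tests and build verification tests in the first two bullets are typically ordinary MSTest unit tests. Here is a minimal sketch of one, assuming the MSTest framework; the ShoppingCart class is purely illustrative:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Illustrative class under test; in a real solution this lives in the
    // application's own project.
    public class ShoppingCart
    {
        public int ItemCount { get; private set; }

        public void AddItem(int productId, int quantity)
        {
            // A real cart would track the product and quantity; this stub
            // only counts the lines that have been added.
            ItemCount++;
        }
    }

    [TestClass]
    public class ShoppingCartTests
    {
        [TestMethod]
        public void AddItem_IncreasesItemCount()
        {
            var cart = new ShoppingCart();

            cart.AddItem(productId: 42, quantity: 1);

            Assert.AreEqual(1, cart.ItemCount);
        }
    }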

What other Microsoft tools can I use to find bugs?

After you have your test plan and infrastructure established, you might ask how else Visual Studio can help you find bugs and issues in your applications. That was the question that occurred to the Contoso team after they learned from their new Fabrikam teammates just how many types of tests can be run with the help of Visual Studio, Microsoft Test Manager, and Team Foundation Server. Let's take a look at some other exciting options.

Performance and stress tests in Visual Studio

Visual Studio Ultimate includes performance and stress testing tools, including tools for web performance testing and load testing. Both of these test types can be added to the web performance and load test project in your solution. For more information, see the topic Testing Application Performance and Stress on MSDN.

Note

You must have Visual Studio Ultimate in order to use web performance or load testing.

Web performance tests: Web performance tests are used to simulate how end users interact with your web application. You can easily create web performance tests by recording the HTTP requests of a browser session with the Web Performance Test Recorder included in Visual Studio Ultimate. You can also create web performance tests manually by using the Web Performance Test Editor included in Visual Studio Ultimate.
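
A recorded web performance test can also be turned into code, or a coded web performance test can be written by hand. The following is a minimal sketch of what such a coded test might look like; the Fabrikam URLs are illustrative:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // A hand-written (coded) web performance test; each yielded request is
    // issued as part of the simulated browser session.
    public class BrowseCatalogWebTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            var home = new WebTestRequest("http://fabrikam-test/");
            home.ThinkTime = 2; // simulated pause, in seconds, before the next request
            yield return home;

            var catalog = new WebTestRequest("http://fabrikam-test/catalog");
            yield return catalog;
        }
    }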

Load tests: Load tests provide you with the ability to stress test your application by emulating large numbers of machines and users hammering away at your application. Load tests can include a test mix consisting of any of the web performance tests, unit tests, or coded UI tests that are in the test project of your solution.

In this chapter, we'll also talk about using load tests in conjunction with the automated tests to help discover additional bugs caused by stressing your application.

Adding diagnostic data adapters to test settings

As we discussed in Chapter 4, "Manual System Tests," Microsoft Test Manager provides the testers on your team with the ability to conveniently author, manage, and execute manual and automated tests using a test plan. Let's look at the test settings a bit more thoroughly and see how we can leverage additional functionality to help find and isolate bugs in our application.

Microsoft Test Manager provides your testers with the ability to add and configure additional diagnostic data adapters (DDA), which are used to collect various types of data in the background while tests are running. Diagnostic data adapters are configured in the test settings associated with either Microsoft Test Manager or Visual Studio. When Fabrikam testers saw that Contoso was using different hardware to test network connectivity for LAN, WAN, and dial-up connections, they explained their own process—using a diagnostic data adapter to emulate different kinds of network connectivity, which saves them a good deal of pain.


There are several valuable types of diagnostic data that the diagnostic data adapters can collect on the test machine. For example, a diagnostic data adapter might create an action recording, an action log, or a video recording, or collect system information. Additionally, diagnostic data adapters can be used to simulate potential bottlenecks on the test machine. For example, you can emulate a slow network to impose a bottleneck on the system. Another important reason to use additional diagnostic data adapters is that many of them can help isolate non-reproducible bugs. For more information, see the MSDN topic Setting up Machines and Collecting Diagnostic Information Using Test Settings.

Can I create my own diagnostic data adapters?

As Fabrikam happily shared with Contoso team members, you can create your own custom diagnostic data adapters to fulfill your team's particular testing requirements. You can create custom diagnostic data adapters to either collect data when you run a test, or to impact the machine with bottlenecks as part of your test.

For example, you might want to collect log files that are created by your application under test and attach them to your test results, or you might want to run your tests when there is limited disk space left on your computer. Using APIs provided within Visual Studio, you can write code to perform tasks at specific points in your test run. For example, you can perform tasks when a test run starts, before and after each individual test is run, and when the test run finishes. Once again, Contoso saved time and effort by adopting Fabrikam's practices and creating their own data adapters. For more information, see the MSDN topic, Creating a Diagnostic Data Adapter to Collect Custom Data or Affect a Test Machine.
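
As a rough sketch of the shape such a custom adapter can take, the following data collector attaches an application log file to the test results when the test run ends. The type URI, friendly name, and log file path are illustrative, and the exact API details may vary by Visual Studio version:

    using System;
    using System.Xml;
    using Microsoft.VisualStudio.TestTools.Execution;

    [DataCollectorTypeUri("datacollector://Fabrikam/AppLogCollector/1.0")]
    [DataCollectorFriendlyName("Fabrikam application log collector", false)]
    public class AppLogCollector : DataCollector
    {
        private DataCollectionSink sink;
        private DataCollectionEnvironmentContext context;

        public override void Initialize(
            XmlElement configurationElement,
            DataCollectionEvents events,
            DataCollectionSink dataSink,
            DataCollectionLogger logger,
            DataCollectionEnvironmentContext environmentContext)
        {
            sink = dataSink;
            context = environmentContext;

            // Attach the log when the whole test run (session) finishes.
            events.SessionEnd += OnSessionEnd;
        }

        private void OnSessionEnd(object sender, SessionEndEventArgs e)
        {
            // Send the application's log file to the test results; 'false'
            // means the file is not deleted after it has been copied.
            sink.SendFileAsync(context.SessionDataCollectionContext,
                @"C:\Logs\FabrikamApp.log", false);
        }
    }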

Test settings for Visual Studio solutions

Test settings for Visual Studio are stored in a .testsettings file that is part of your solution. The Visual Studio .testsettings file allows you to control test runs by defining the following:

  • The set of roles that are required for your application under test.
  • The role to use to run your tests.
  • The diagnostic data adapters to use for each role.

To specify the diagnostic data to collect for your unit tests and coded UI tests in the test projects of your Visual Studio solution, you can edit an existing test settings file or create a new one. Creating, editing, and setting the active test setting file are all done using the Test menu in Visual Studio. To view the steps required to create a new test setting in Visual Studio, see the MSDN topic Create Test Settings to Run Automated Tests from Visual Studio. To view the steps used to edit an existing test setting in Visual Studio, see the MSDN topic How to: Edit a Test Settings File from Microsoft Visual Studio.

In Visual Studio, when you create a new test project, the Local.testsettings file is selected for your project by default. The Local.testsettings file does not have any diagnostic data adapters configured. You can edit this file, create a new test settings file, or select the TraceAndTestImpact.testsettings file, which is configured with the ASP.NET Client Proxy for IntelliTrace and Test Impact, IntelliTrace, System Information, and Test Impact diagnostic data adapters.

Test settings for Microsoft Test Manager test plans

Test settings define the following parameters for your test plan in Microsoft Test Manager:

  • The type of tests that you will run (manual or automated).
  • The set of roles that is required for your application under test.
  • The role to use to run your tests.
  • The diagnostic data adapters to use for each role.

To view the steps used to create a test setting in Microsoft Test Manager, see the MSDN topic Create Test Settings for Automated Tests as Part of a Test Plan.

In Microsoft Test Manager, you can edit existing test settings, or create new ones. The test setting is associated with your test plan, and is configurable in the Properties pane. There are separate test settings for your manual test runs and your automated test runs. You can select the Local Test Run setting, which by default includes the Actions, ASP.NET Client Proxy for IntelliTrace and Test Impact, System Information, and Test Impact diagnostic data adapters.

What about bugs that are hard to reproduce?

An issue that has plagued teams for ages is the non-reproducible bug. We're all too familiar with the scenario in which we find a bug and submit it to a developer only to have the developer come back and say, "I can't reproduce the issue." Sometimes the bug goes through numerous iterations between the tester or team member who found it and the developer attempting to reproduce it. Using either Visual Studio or Microsoft Test Manager, you can configure your test settings to use specific diagnostic data adapters. For example, the diagnostic data adapter for IntelliTrace collects specific diagnostic trace information to help isolate bugs that are difficult to reproduce. This adapter creates an IntelliTrace file, with an .iTrace extension, that contains this information. When a test fails, you can create a bug, and the IntelliTrace file that is saved with the test results is automatically linked to it. The data collected in the IntelliTrace file increases debugging productivity by reducing the time required to reproduce and diagnose an error in the code. From the IntelliTrace file, the local session can be simulated on another computer, which reduces the possibility of a bug being non-reproducible. In this chapter, we'll learn more about using IntelliTrace to isolate non-reproducible bugs.

In addition to the IntelliTrace data and diagnostic adapter, you can leverage other adapters, which can help your team find bugs that might otherwise be more difficult to reproduce. For example, you could add the video recorder adapter to help clarify some elaborate steps that are required in order to make an issue occur. You can edit your test settings file, or create new ones to address your specific testing goals.

What diagnostic data adapters are available?

The following list describes the various data and diagnostic adapters that are available for you to configure in your test settings:

  • Actions: You can create a test setting that collects a text description of each action that is performed during a test. When you configure this adapter, the selections are also used if you create an action recording when you run a manual test. The action logs and action recordings are saved together with the test results for the test. You can play back the action recording later to fast-forward through your test, or you can view the action log to see what actions were taken.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: No

    Note

    When you collect data on a remote environment, the recording will work only on the local machine.

  • ASP.NET Client Proxy for IntelliTrace and Test Impact: This proxy allows you to collect information about the HTTP calls from a client to a web server for the IntelliTrace and Test Impact diagnostic data adapters.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

  • ASP.NET profiler: You can create a test setting that includes ASP.NET profiling, which collects performance data on ASP.NET web applications.

    Manual tests (local machine): No
    Manual tests (collecting data using a set of roles and an environment): No
    Automated tests: Yes

    Note

    This diagnostic data adapter is supported only when you run load tests from Visual Studio.

  • Code coverage: You can create a test setting that includes code coverage information that is used to investigate how much of your code is covered by tests.

    Manual tests (local machine): No
    Manual tests (collecting data using a set of roles and an environment): No
    Automated tests: Yes

    Note

    You can use code coverage only when you run an automated test from Visual Studio or mstest.exe, and only from the machine that runs the test. Remote collection is not supported.

    Note

    Collecting code coverage data does not work if you also have the test setting configured to collect IntelliTrace information.

  • IntelliTrace: You can configure the diagnostic data adapter for IntelliTrace to collect specific diagnostic trace information to help isolate bugs that are difficult to reproduce. This adapter creates an IntelliTrace file that has an extension of .iTrace that contains this information. When a test fails, you can create a bug. The IntelliTrace file that is saved with the test results is automatically linked to this bug. The data that is collected in the IntelliTrace file increases debugging productivity by reducing the time that is required to reproduce and diagnose an error in the code. From this IntelliTrace file, the local session can be simulated on another computer; this reduces the possibility of a bug being non-reproducible.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

    For more details and the steps used to add and configure the IntelliTrace diagnostic data adapter, see the MSDN topic How to: Collect IntelliTrace Data to Help Debug Difficult Issues.

  • Event log: You can configure a test setting to include event log collecting, which will be included in the test results.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

    To see the procedure used to add and configure the event log diagnostic data adapter, see the MSDN topic How to: Configure Event Log Collection Using Test Settings.

  • Network emulation: You can specify that you want to place an artificial network load on your test using a test setting. Network emulation affects the communication to and from the machine by emulating a particular network connection speed, such as dial-up.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

    For more information about the network emulation diagnostic data adapter, see the MSDN topic How to: Configure Network Emulation Using Test Settings.

  • System information: A test setting can be set up to include system information about the machine that the test is run on. The system information is included with the test results.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

  • Test impact: You can collect information about which methods of your application's code were used when a test case was running. This information can be used together with changes to the application code made by developers to determine which tests were impacted by those development changes.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

    Note

    If you are collecting test impact data for a web client role, you must also select the ASP.NET Client Proxy for IntelliTrace and Test Impact diagnostic data adapter.

    Note

    Only the following versions of Internet Information Services (IIS) are supported: IIS 6.0, IIS 7.0 and IIS 7.5.

    For further details, see the MSDN topic How to: Collect Data to Check Which Tests Should be Run After Code Changes.

  • Video recorder: You can create a video recording of your desktop session when you run an automated test. This video recording can be useful for viewing the user actions for a coded UI test. The video recording can help other team members isolate application issues that are difficult to reproduce.

    Manual tests (local machine): Yes
    Manual tests (collecting data using a set of roles and an environment): Yes
    Automated tests: Yes

    Note

    If you enable the test agent software to run as a process instead of a service, you can create a video recording when you run automated tests.

    For more information, see the MSDN topic How to: Record a Video of Your Desktop as You Run Tests Using Test Settings.

Tip

The data that some of the diagnostic data adapters capture can take up a lot of database space over time. By default, the administrator of the Team Foundation Server databases cannot control what data gets attached as part of test runs. For example, there are no policy settings that can limit the size of the data captured, and there is no retention policy to determine how long to hold this data before initiating a cleanup. To help with this issue, you can download the Test Attachment Cleaner for Visual Studio Ultimate 2010 & Test Professional 2010. The Test Attachment Cleaner tool allows you to determine how much database space each set of diagnostic data captures is using and to reclaim the space for runs that are no longer relevant to your team.

Load testing

To help determine how well your application responds to different levels of usage, your team can create load tests. These load tests contain a specified set of your web performance tests, unit tests, or coded UI tests. Load tests can be modeled to test the expected usage of a software program by simulating multiple users who access the program at the same time. Load tests can also be used to test the unexpected!

Load tests can be used in several different types of testing:

  • Smoke: To test how your application performs under light loads for short durations.
  • Stress: To determine if the application will run successfully for a sustained duration under heavy load.
  • Performance: To determine how responsive your application is.
  • Capacity planning: To test how your application performs at various capacities.

Visual Studio Ultimate lets you simulate an unlimited number of virtual users on a local load test run. In a load test, the load pattern properties specify how the simulated user load is adjusted during a load test. Visual Studio Ultimate provides three built-in load patterns: constant, step, and goal-based. You choose the load pattern and adjust the properties to appropriate levels for your load test goals. For more about load patterns, see the MSDN topic Editing Load Patterns to Model Virtual User Activities.

If your application is expected to have heavy usage—for example, thousands of users at the same time—you will need multiple computers to generate enough load. To achieve this, you can set up a group of computers that would consist of one or more test controllers and one or more test agents. A test agent runs tests and can generate simulated load. The test controller coordinates the test agents and collects the test results. For more information about how to set up test controllers and test agents for load testing, see the MSDN topic Distributing Load Tests Runs across Multiple Test Machines Using Test Controllers and Test Agents.

Web performance tests in load tests

When you add web performance tests to a load test, you simulate multiple users opening simultaneous connections to a server and making multiple HTTP requests. You can set properties on load tests that will be applied to all of the individual web performance tests.

Unit tests in load tests

Use unit tests in a load test to exercise a server through an API. Typically, this is for servers that are accessed through thick clients or other server services rather than a browser. One example is a Windows application with a Windows Forms or Windows Presentation Foundation (WPF) front end that uses Windows Communication Foundation (WCF) to communicate with the server. In this case, you develop unit tests that call the WCF services. Another example is a server that is called by another server through web services. Additionally, a two-tier client might make calls directly to SQL Server; in that case, you can develop unit tests that call SQL Server directly, as in the sketch below.
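
As a sketch of the two-tier case, the following unit test queries SQL Server directly; the connection string, table, and query are illustrative and would be adjusted for your environment:

    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A unit test that exercises the data tier directly; when added to a load
    // test mix it places realistic query load on SQL Server.
    [TestClass]
    public class OrderQueryTests
    {
        [TestMethod]
        public void PendingOrders_QuerySucceeds()
        {
            using (var connection = new SqlConnection(
                "Data Source=FABRIKAM-SQL;Initial Catalog=Orders;Integrated Security=True"))
            {
                connection.Open();

                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Orders WHERE Status = 'Pending'", connection))
                {
                    // ExecuteScalar returns the single COUNT(*) value.
                    int pendingOrders = (int)command.ExecuteScalar();
                    Assert.IsTrue(pendingOrders >= 0);
                }
            }
        }
    }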

Coded UI tests in load tests

Load tests can also include automated coded UI tests. The inclusion of coded UI tests should only be done under specific circumstances. All the scenarios that use coded UI tests in load tests involve using the coded UI tests as performance tests. This can be useful because coded UI tests let you capture performance at the UI layer. For example, if you have an application that takes one second to return data to the client but eight seconds to render the data in the browser, you cannot capture this type of performance problem by using a web performance test.

You would also benefit from using coded UI tests in a load test if you have an application that is difficult to script at the protocol layer. In this case, you might consider temporarily driving load using coded UI until you can correctly script the protocol layer.
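
As a rough sketch of capturing performance at the UI layer, the following coded UI test times how long a page takes to open and render in the browser; the URL and the eight-second budget are illustrative:

    using System;
    using System.Diagnostics;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class CatalogPageUiPerfTest
    {
        [TestMethod]
        public void CatalogPage_RendersWithinBudget()
        {
            var stopwatch = Stopwatch.StartNew();

            // Launch the browser and navigate to the page under test.
            BrowserWindow browser = BrowserWindow.Launch(new Uri("http://fabrikam-test/catalog"));
            browser.WaitForControlReady();

            stopwatch.Stop();

            // Fail if end-to-end rendering exceeds the (illustrative) budget,
            // so the load test surfaces UI-layer slowdowns.
            Assert.IsTrue(stopwatch.ElapsedMilliseconds < 8000,
                "Catalog page took too long to render in the browser.");

            browser.Close();
        }
    }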

For more information, see the MSDN topic Using Coded UI Tests in Load Tests.

Creating load tests

A load test is created by using the New Load Test Wizard in Visual Studio Ultimate. When you use the New Load Test Wizard, you specify the following three settings for the load test:

  • The initial scenario for the load test: Load tests contain scenarios, which contain web performance tests, unit tests, and coded UI tests. A scenario is a container within a load test where you specify the load pattern, test mix model, test mix, network mix, and web browser mix. Scenarios are important because they give you flexibility in configuring test characteristics that allow for simulation of complex, realistic workloads. You can also specify various other load test scenario properties to meet your specific load testing requirements; for example, delays and think times.

    Tip

    For a list of the load test scenario properties you can modify using the Load Test Editor, see the MSDN topic Load Test Scenario Properties.

    You can think of a scenario as representing a particular group of users. The tests in the scenario represent the activity of those users, the load pattern is the number of users in the group, and the network and browser settings control the networks and browsers you expect those users to use.

  • Computers and counter sets in the load test: Counter sets are a set of system performance counters that are useful to monitor during a load test. Counter sets are organized by technology; for example, ASP.NET or SQL counter sets. When you create the load test, you specify which computers and their counter sets to include in the load test.

    Figure: Load test counter sets

Note

If your load tests are distributed across remote machines, controller and agent counters are mapped to the controller and agent counter sets. For more information about how to use remote machines in your load test, see Distributing Load Test Runs Across Multiple Test Machines Using Test Controllers and Test Agents.

  • The initial run setting for the load test: Run settings are a set of properties that influence the way a load test runs.

    You can have more than one run setting in a load test. Only one of the run settings may be active for a load test run. The other run settings provide a quick way to select an alternative setting to use for subsequent test runs.

    Tip

    For a list of the run setting properties you can modify using the Load Test Editor, see Load Test Run Setting Properties.

To see the detailed steps that are used in the Load Test Wizard, see the MSDN topic Creating Load Tests Using the New Load Test Wizard. The initial settings that you configure for a load test using the New Load Test Wizard can be edited later using the Load Test Editor. For more information, see Editing Load Test Scenarios Using the Load Test Editor.

Running and analyzing load tests

You view both running load tests and completed load tests in the Load Test Analyzer.

Tip

Before you run a load test, make sure that all the web performance tests, unit tests, and coded UI tests that are contained in the load test will pass when they are run by themselves.

While a test is running, a condensed set of the performance counter data that can be monitored in the Load Test Analyzer is maintained in memory. To prevent the resulting memory requirements from growing unbounded, a maximum of 200 samples for each performance counter is maintained. This includes 100 evenly spaced samples that span the run's current elapsed time, and the most recent 100 samples. The result that is accumulated during a run is called an in-progress load test result.

Figure: Analyzing a running load test

In addition to the condensed set of performance counter data, the Load Test Analyzer provides the following functionality, available only while a load test is running, for analyzing the in-progress load test result data:

  • A progress indicator specifies the time that remains.
  • A button on the Load Test Analyzer toolbar is available to stop the load test.
  • You can specify either collapsing or scrolling graphing modes on the Load Test Analyzer toolbar:
    • Collapsing is the default graph mode in the Load Test Analyzer during a running load test. A collapsing graph is used for a load test while it is running to reduce the amount of data that must be maintained in memory, while still showing the trend for a performance counter over the full run duration.
    • Scrolling graph mode is available when you are viewing the result of a load test while it is running. A scrolling graph is an optional view that shows the most recent data points. Use a scrolling graph to view only the most recent 100 data intervals in the test.
  • An Overview pane displays the configuration, requests, and test case information for the running load test.

Running load tests

  1. In Visual Studio Ultimate, in your solution, locate your test project and open your load test.

  2. In the Load Test Editor, click the Run button on the toolbar.

    Figure: Run load tests

For more information, see the two MSDN topics: How to: Run a Load Test and Analyzing Load Test Runs.

IntelliTrace

Undoubtedly, non-reproducible bugs have long been a problem for the developers on your team, as they have been for Contoso. Contoso's developers sometimes saw their applications crash during a test on a test computer, but run without any issues on the developer's own machine.

Diagnosing application issues under such circumstances has been very difficult, expensive, and time-consuming for Contoso. The bugs that their developers received likely did not include the steps to reproduce the problem. Even if bugs included the steps, the problem might stem from the specific environment in which the application is being tested.

Fabrikam dealt with this sort of issue by collecting IntelliTrace data in their tests to assist in solving a lot of their non-reproducible errors. Tests configured with a test setting that uses the IntelliTrace diagnostic data adapter can automatically collect IntelliTrace data. The collected data is saved as an IntelliTrace recording that can later be opened by developers using Visual Studio. Team Foundation Server work items provide a convenient means for testers to share IntelliTrace recordings with developers. The developer can debug the problem in a manner similar to postmortem debugging of a dump file, but with more information.

Configuring the IntelliTrace diagnostic data adapter

You can configure the test settings for either Microsoft Test Manager or Visual Studio to use the diagnostic data adapter for IntelliTrace to collect specific diagnostic trace information. When tests use this adapter, they collect significant diagnostic events for the application, which a developer can later use to trace through the code and find the cause of a bug. The diagnostic data adapter for IntelliTrace can be used for either manual or automated tests.

Note

IntelliTrace works only on an application that is written in managed code. If you are testing a web application that uses a browser as a client, you should not enable IntelliTrace for the client in your test settings because no managed code is available to trace. In this case, you may want to set up an environment and collect IntelliTrace data remotely on your web server.

When you configure the IntelliTrace adapter, you can configure it to collect IntelliTrace events only. When the adapter is configured to collect IntelliTrace events only, important diagnostic events are captured with minimal impact on the performance of your tests. The types of events that can be collected by IntelliTrace include the following:

  • Debugger events. These are events that occur within the Visual Studio Debugger while you debug your application. The startup of your application is one debugger event. Other debugger events are stopping events, which are events that cause your application to enter a break state. Hitting a breakpoint, hitting a tracepoint, or executing a Step command are examples of stopping events. For performance reasons, IntelliTrace does not collect all possible values for every debugger event. Instead, IntelliTrace collects values that are visible to the user. If the Autos window is open, for example, IntelliTrace collects values that are visible in the Autos window. If the Autos window is closed, those values are not collected. If you point to a variable in a source window, the value that appears in the DataTip is collected. Values in a pinned DataTip are not collected, however.
  • Exception events. These occur for handled exceptions, at the points where the exception is thrown and caught, and for unhandled exceptions. IntelliTrace collects the type of exception and the exception message.
  • Framework events. These occur within the Microsoft .NET Framework library. You can view a complete list of .NET events that can be collected on the IntelliTrace Events page of the Options dialog box. The data collected by IntelliTrace varies by event. For a File Access event, IntelliTrace collects the name of the file; for a Check Checkbox, it collects the checkbox state and text; and so on.

Alternatively, you can configure the IntelliTrace adapter to record both the IntelliTrace events, and method level tracing; however, doing so might impact the performance of your tests. Some additional configuration options for the IntelliTrace diagnostic data adapter include:

  • Collection of data from ASP.NET applications that are running on IIS.
  • Turning collection of IntelliTrace information on or off for specific modules. This ability is useful because certain modules might not be interesting for debugging purposes. For example, you might be debugging a solution that includes legacy DLL projects that are well tested and thoroughly debugged. Excluding modules that do not interest you reduces clutter in the IntelliTrace window and makes it easier to concentrate on interesting code. It can also improve performance and reduce the disk space that is used by the log file. The difference can be significant if you have chosen to collect calls and parameters.
  • The amount of disk space to use for the recording.

To view the detailed steps that you use to add the IntelliTrace Diagnostic Data adapter to your test settings, see the MSDN topic How to: Collect IntelliTrace Data to Help Debug Difficult Issues.

Figure: Configure the IntelliTrace diagnostic data adapter

Fixing non-reproducible bugs with IntelliTrace

An IntelliTrace recording provides a timeline of events that occurred during the execution of an application. Using an IntelliTrace recording, you can view events that occurred early in the application run, in addition to the final state. In this way, debugging an IntelliTrace recording resembles debugging a live application more than it resembles debugging a dump file.

IntelliTrace lets the developers on your team debug errors and crashes that would otherwise be non-reproducible. The developers can debug log files that were created by configuring the IntelliTrace data and diagnostic adapter locally, or from Test Manager. Members of your team can link a log file from Test Manager directly to a Team Foundation Server work item or bug, which can be assigned to a developer. In this way, IntelliTrace and Test Manager integrate into your team workflow.

When you debug an IntelliTrace file, the process is similar to debugging a dump file. However, IntelliTrace files provide much more information than traditional dump files. A dump file provides a snapshot of an application's state at one moment in time, usually just when it crashed. With IntelliTrace, you can rewind the history to see the state of the application and events that occurred earlier in the application run. This makes debugging from a log file faster and easier than debugging from a dump file.

IntelliTrace can shorten your cycle time significantly by reducing the time spent on non-reproducible bugs.

To see the steps used to debug an IntelliTrace recording attached to a bug, see the MSDN topic Debugging Non-Reproducible Errors With IntelliTrace.

Tools that support operational monitoring and feedback

Feedback tool

Getting the right feedback at the right time from the right individuals can determine the success or failure of a project or application. Frequent and continuous feedback from stakeholders supports teams in building the experiences that will delight customers. As stakeholders work with a solution, they understand the problem better and are able to envision improved ways of solving it. As a member of the team developing the application, you can make course corrections throughout the cycle. These course corrections can come from both negative and positive feedback your team receives from its stakeholders.

The Team Foundation Server tools for managing stakeholder feedback enable teams to engage stakeholders to provide frequent and continuous feedback. The feedback request form provides a flexible interface to specify the focus and items that you want to get feedback about. Use this tool to request feedback about a web or downloadable application that you're working on for a future release. With Microsoft Feedback Client, stakeholders can directly interact with working software while recording rich and usable data for the team in the background through action scripts, annotations, screenshots, and video or audio recordings.

For more information, see the MSDN topic Engaging Stakeholders through Continuous Feedback.

Create a feedback request

Creating a feedback request for the stakeholders, customers, or team members involved in the current application cycle is relatively simple, and can be done in Team Web Access using the following steps:

  1. Connect to Team Web Access by opening a web browser and entering the URL. For example: http://Fabrikam:8080/tfs/.
  2. On the Home page, expand the team project collection and choose your team project.
  3. On the Home page for the team project, choose the Request feedback link. The REQUEST FEEDBACK dialog box opens.
  4. Follow the instructions provided and fill out the form.

Provide feedback

Your stakeholders respond to your team's request for feedback by using the Microsoft Feedback Client. This tool allows your stakeholders to launch the application under development and capture their interaction with it as video, along with verbal or typewritten comments. The feedback is stored in Visual Studio 2012 Team Foundation Server to support traceability. Stakeholders can record their interactions with the application, record verbal comments, enter text, clip and annotate a screen, or attach a file.

When your stakeholders receive an email request for feedback, it contains a link to start the feedback session.

Note

The email also includes a link to install the feedback tool if it is not already installed.

Figure: Email requesting feedback

Clicking the link opens the feedback client on the Start page. From the Start page, choose the Application link to open, start, or install the application for which you have been requested to provide feedback.

Figure: Starting the application

On the Provide page, one or more items appear for which you can provide feedback. For each item, you can see the context of what's being asked, and then give free-form feedback through video or audio recordings, text, screenshots, or file attachments. When finished with one item, choose the Next button to move to the next item.

Figure: Providing feedback for each item

When providing feedback, your stakeholders can add rich text comments, screenshots, and related file attachments. They can also optionally record the feedback session using Screen & Voice, Screen only, or Voice only.

Figure: Feedback options

After entering the feedback, stakeholders can easily submit their feedback. The feedback is uploaded to your team project as a work item.

Figure: Submit the feedback

Remote debugging

At times, it can be helpful to isolate issues on other machines or devices on your network. For example, you might need to debug an issue that your team is having trouble isolating or replicating, and that only appears on a staging server or test machine. Using the remote debugger, you can debug remotely on a machine that is in your network. When you are doing remote debugging, the host computer can be any platform that supports Visual Studio.

Note

The remote device and the Visual Studio computer must be connected over a network or connected directly through an Ethernet cable. Debugging over the Internet is not supported.

The remote machine or device must have the remote debugging components installed and running. Additionally, you must be an administrator to install the remote debugger on the remote device. Also, in order to communicate with the remote debugger, you must have user access to the remote device. For information on installing the remote debugger, see the MSDN topic How to: Set Up Remote Debugging.

Summary

The print statement is no longer the tester's only device. In Visual Studio 2012, there are powerful, well-integrated tools. In this chapter we've looked at:

  • Performance and stress testing
  • Load testing
  • IntelliTrace
  • Stakeholder feedback tool
  • Remote debugging

Differences between Visual Studio 2010 and Visual Studio 2012

  • Creating load and web performance tests: Load tests and web performance tests are created by adding them to a web performance and load test project in Visual Studio 2012. In Visual Studio 2010, load and web performance tests are created by adding them to a test project. The test project in Visual Studio 2010 was also used for unit tests, coded UI tests, and generic and ordered tests. For more information, see the MSDN topic Upgrading Web Performance and Load Tests from Visual Studio 2010.

  • Running load and web performance tests: In Visual Studio 2012, load tests must be run from the Load Test Editor. Similarly, web performance tests must be run from the Web Performance Test Editor. In Visual Studio 2010, web performance tests and load tests can be run from either their respective editors, or from the Test View window or Test List Editor window.

    In Visual Studio 2012, the Test menu that was in Visual Studio 2010 has also been deprecated. To run or debug your coded web performance tests, you must do so from the shortcut menu in the editor. For more information, see the MSDN topic How to: Run a Coded Web Performance Test.
    In Visual Studio 2012, the Test View window has been replaced by the Unit Test Explorer, which provides a more agile testing experience for developing unit tests and coded UI tests. The Unit Test Explorer does not include support for web performance and load tests.

  • Virtual user limitations for load testing: Visual Studio Ultimate 2012 RC includes unlimited virtual users that you can use with your load tests. With Visual Studio Ultimate 2010, you are restricted to 250 virtual users on a local load test run. If your load testing requires more virtual users, or you want to use remote machines, you must install a Visual Studio Load Test Virtual User Pack 2010 or Visual Studio 2010 Load Test Feature Pack.
    You can purchase Visual Studio Load Test Virtual User Pack 2010 where you purchased Visual Studio Ultimate. Each user pack adds an additional 1,000 virtual users that are configured on your test controller, allowing you to run your load tests on virtual machines in your environment.

    The Visual Studio 2010 Load Test Feature Pack is available if you are an MSDN subscriber. The feature pack provides unlimited virtual users! Another benefit from installing either the unlimited virtual user license in this feature pack or Visual Studio Load Test Virtual User Pack 2010 is that they enable multiprocessor architecture. The multiprocessor architecture allows the machine that the licenses are installed on to use more than one processor. Otherwise, the machine is restricted to using only one core.

  • Upgrading test controllers used with load tests or web performance tests: If you are using test controllers from Visual Studio for web performance or load testing—these test controllers are not configured with Team Foundation Server—then the version of test controller must match the version of Visual Studio. For more information, see the MSDN topics Upgrading Test Controllers from Visual Studio 2010 and Installing and Configuring Test Agents and Test Controllers.

  • Feedback client: The Feedback Client tool is new for Visual Studio 2012 and did not exist for Visual Studio 2010.

  • Remote debugger: The Visual Studio remote debugging process has been simplified. Installing and running the remote debugger no longer requires manual firewall configuration on either the remote computer or the computer running Visual Studio. You can easily discover and connect to computers that are running the remote debugger by using the Select Remote Debugger Connection dialog box.

Where to go for more information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away.
