
Performing Exploratory Testing Using Microsoft Test Manager

Exploratory testing means testing an application without a set of tests defined in advance, and without a script of predetermined steps.

Microsoft Test Manager (MTM) helps you by recording the actions you perform as you work with your application. You can also record screenshots, comments, file attachments, audio narration and screen video. The recording makes it easier to trace any fault that you might discover. You can also store your actions as a test case, so that it is easy for you or a colleague to replicate the test after the application is updated.

MTM records actions to make repro easier


  • Visual Studio Ultimate, Visual Studio Premium, Visual Studio Test Professional

See Video: Easily reproducing issues through manual testing.

Starting an exploratory test session

In Microsoft Test Manager, open Testing Center, Test, Do Exploratory Testing.

Starting an exploratory testing session

Choose Explore.

– or –

Select a requirement work item, and then choose Explore work item. This associates the recording of your test with the work item.

  • Why would I associate the test session with a work item?
    If you create bugs or test cases from your exploratory session, they will be automatically linked to that work item.

    You can associate the session with any work item in the requirement category. In the standard team project templates, this includes Requirement (CMMI), Product Backlog Item (Scrum), and User Story (Agile).

    The associated work item and any test cases you create from your exploratory session will automatically be added to the test plan.

    When the coding of each requirement is checked in, it is good practice to perform testing focused on that requirement. Any bugs that are created should be linked to the requirement to show that it is not complete.

  • Why might I not want to associate the session with a requirement?
    Sometimes you want to explore the application without focusing on any particular requirement.

The Exploratory Testing window opens, and waits until you are ready to start.

Exploratory test window ready to start recording

Exploring the application

Prepare to run your application. For example, if your application is a website, start the web server.

In the Exploratory Testing window, choose Start.

Run the application and explore its features. For example, open a web browser and log in to your website.

The Exploratory Testing window records the actions you perform elsewhere on the screen. You can add comments, screenshots, and files as you work. They will be added to any bug or test case that you create.

[Visual Studio 2012 Update 1] The action log automatically includes a snapshot of the screen, focused on the area around each click or gesture. These screenshots are included when you create a bug in the exploratory session.

Exploratory test window alongside application.

If you are exploring a particular requirement, verify that the requirement is satisfied under a variety of different conditions. For more information, see What exploratory tests should I perform?

  • Is everything I do recorded in detail?
By default, actions in MTM and in standard applications such as Word, Paint, and Outlook are not recorded. To change this list, configure the action log in the test settings of the test plan properties. For more information, see Configuring the Test Plan.

    Also, some detailed actions such as drawing are not recorded in full. For example, if you draw a face in a drawing application, the action will be captured only as moving the cursor. You should add a comment to describe exactly what you did.

    The action record is more readable if the user interface controls have readable names. The development team should set the accessibility properties of each control in the user interface, or the ID of each element in an HTML application.

  • My application is a website or client-server system. Can MTM record events that take place in the servers?
    Yes. You have to run the server in a lab environment, and you have to configure your test plan to capture events from the lab machines. When you create a bug, MTM will retrieve data from the lab machines and attach it to the bug report. For more information, see Using a Lab Environment for Your Application Lifecycle.
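The earlier point about readable control names can be illustrated with a small, hypothetical HTML fragment. The `id` and `aria-label` values below are examples only, but controls named this way produce action-log entries such as "Click 'Add to cart' button" rather than bare screen coordinates:

```html
<!-- Hypothetical page fragment: each interactive element has a stable id
     and an accessible name, so recorded actions are human-readable. -->
<form id="order-form">
  <label for="flavor">Flavor</label>
  <select id="flavor" name="flavor">
    <option>Vanilla</option>
    <option>Chocolate</option>
  </select>
  <button id="add-to-cart" aria-label="Add to cart">Add to cart</button>
</form>
```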

Report bugs

When you find flaws in the application, choose Create bug.

To help reproduce the error, the steps you performed will automatically be saved in the bug, in the Steps to Reproduce pane. Comments that you wrote during the test also appear, along with the attachments and screenshots that you added, and additional system information.

When the bug is created, you can change which steps are included.

Creating a bug from an exploratory session.

You can generate a test case at the same time as the bug, containing the same steps. This helps to ensure that the bug does not recur in the future. The bug and the test case are linked.

[Visual Studio 2012 Update 1] The description of each action is automatically accompanied by a screenshot of the area near the action.

Actions with images in bug report

To see how the whole screen appeared during the test, choose Action Log.

Actions log with image context

Make re-testing easy

When the application is updated or developed further, or when a bug is fixed, you will want to re-run your tests to make sure everything is still working, or to see whether it works better.

But there's a substantial amount of expertise, creative thinking, and experimentation in an exploratory test. To save time on future occasions, you can save your actions as a script of steps in a test case. When it is time to perform these tests again, you or a colleague need only follow the steps instead of re-inventing them.

You can create a test case either directly from your exploratory session, or immediately after you create a bug.

Creating a test case from a bug.

You can adjust the number of recent steps that are included in the test case.

If you create a test case directly from an exploratory session, you will typically spend some time practicing with a feature before performing a sequence of steps that you want to record. Edit the test case to start where your sequence begins.

You should also edit the work item to state what result should be seen after each step.
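For example, an edited step might pair a recorded action with the result you expect to see (the step text here is hypothetical):

```
Action:          Choose 'Place order' on the checkout page.
Expected result: The confirmation page opens and shows an order number.
```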

When you save and close the work item, you can return to exploration.


Create separate test cases for each separate aspect of the requirement.

  • I ran the same sequence with different data values. Should I record each as a separate test case?
    No. Create one test case, then edit it to substitute a parameter name for a specific value in the sequence. Parameter names begin with "@". For example, "Click '@flavor' link." In the Parameter Values table at the bottom of the test case script, provide a set of values that should be used in successive repetitions of the test. For more information, see Creating Manual Test Cases Using Microsoft Test Manager.
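As a sketch of the technique described above (the steps and values are hypothetical, not from the original article), a parameterized test case might look like this:

```
Step  Action                       Expected result
1     Open the shop home page.     The flavor list is displayed.
2     Click the '@flavor' link.    The '@flavor' page opens.
3     Choose Add to cart.          '@flavor' is added to the cart.

Parameter Values:
  @flavor
  Vanilla
  Chocolate
  Strawberry
```

When the test is run, Test Runner repeats the steps once per row of the Parameter Values table, substituting each value for @flavor.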

Completing the test

Pausing and ending the testing session.


Give your test run a title that expresses the result, such as "Failed to open account" or "Successfully created an order." This makes it easier to interpret the list of recent exploratory tests.

How well are we doing?

Use View exploratory test sessions to review the tests that have been performed in this test plan. You can sort and filter the tests by requirement.

View exploratory test sessions

Using Exploratory Testing

  • What exploratory tests should I perform?
    The most important categories of test are:

    • Exercise the story. Can you perform the actions promised in the user story or product backlog item?

    • Exercise key values. Can you perform the user story with differing sets of input – for example, an empty shopping cart, a single item, one of everything, two of some things, and so forth?

    • Break the application. Can you make the application fail, for example by providing unexpected inputs or too much input?

It’s useful to think in terms of different tours. A tour is an exploration in which you perform a particular flavor of test. For more details, see James A. Whittaker’s book, Exploratory Software Testing.

  • How should we use exploratory tests together with planned test cases?
    Different teams use different mixtures of exploratory testing and planned testing using test cases. Here are some alternative strategies to consider:

    • Just exploratory. Rely entirely on exploratory testing, and never create test cases. Create bug work items when any fault is found. When the bug is fixed, explore again to verify the fix. The list of exploratory tests is the best record of what has been tested: by the end of the sprint, there should be at least one test for each product backlog item or user story. This strategy is suitable for small projects.

    • Exploration for new features, test cases for regression. When the code for a requirement is checked in, perform exploratory tests and create test cases from them. Create bugs for the errors you find. When the bugs are fixed, run all the test cases. The best measure of completeness is the chart of passing test cases. In each sprint, also run test cases for previous sprints, to make sure nothing has changed.

    • Plan test cases in advance, and explore to break the code. Write test case scripts in advance, using them to help clarify the requirements. As code is checked in, run the applicable test cases. Also run exploratory tests, both to generate additional test cases, and with the intention of making the application fail.

Verifying the fix

When a fix for the bug has been checked in, open MTM and choose Testing Center, Test, Verify Bugs. This page lists the bugs that were created in this test plan and that are linked to test cases. Select the bug that has been fixed and choose Verify. Test Runner opens and shows the steps that you performed to find the error. Follow the steps and verify that the error no longer occurs. Mark the test as passed and close the bug.

Collecting data from servers

If your application is a website or a client-server application, you can collect information from the server machines, as well as from your own client machine.

To do this, you have to set up a lab environment and install your servers on the machines in that environment. For more information, see Running Tests on a Lab Environment.

You must also configure your tests to collect data from the environment. You can either do this in the test plan properties, or you can choose this option in individual tests.

To configure an individual test session, start the test by using Explore with Options.

Explore with options drop-down menu.

To configure all tests to collect server data: In MTM, choose Testing Center, Plan, Properties. At Test Environment, choose the environment on which you have installed your server.

Setting the default environment for the test plan.

Collecting additional data

You can set the properties of the test plan so that additional data is recorded in your test session and in any bugs that you create. For example, you can add or remove the programs from which user actions are collected.

You can also capture screen video as you work, and audio commentary.

Configuring data collection for the test plan.

For more information, see How to: Choose Test Settings and Environments for a Test Plan.


Be aware that the actions you perform during a testing session are automatically recorded. Potentially, this recording could capture sensitive data, including user names and passwords.

External resources


Testing for Continuous Delivery with Visual Studio 2012 – Chapter 4: Manual System Tests


Easily reproducing issues through manual testing

See Also


How to: Create a Work Item using Microsoft Test Manager



Running Tests in Microsoft Test Manager

Creating Tests for Product Backlog Items, User Stories, or Requirements

Product Backlog Item (Scrum)

User Story (Agile)

Requirement (CMMI)

Other Resources

How to: Add Product Backlog Items, User Story, or Requirements Work Items to Your Test Plan

Creating, Copying, and Updating Work Items