4: Manual System Tests


Manual testing is as old as computer programming. After all, most systems are designed to be used by someone.

Anyone can try out a system, just by using it. But testing it fully is an engineering discipline. An experienced tester can explore an application systematically and find bugs efficiently, and conversely can provide good confidence that the system is ready to be released.

In this chapter, we'll discuss tools and techniques for verifying that your software does what its users and other stakeholders expect of it. We'll show how to relate your tests to requirements (whether you call them user stories, use cases, or product backlog items). You will be able to chart the project's progress in terms of how many tests have passed for each requirement. And when requirements change, you can quickly find and update the relevant tests.

When a bug is found, you need to be able to reproduce it. This can be one of the biggest sources of pain in bug management. We'll look at tools and techniques that help you trace how the fault occurred. And when it's fixed, you can retrace your steps accurately to make sure it's really gone.

When members of the traditional development teams at Contoso speak of testing, manual system testing is usually what they mean. Unlike unit tests, they can't be run overnight at the touch of a button. Running a full set of system tests takes as long as it takes someone to exercise all the system's features. This is why traditional companies like Contoso don't release product updates lightly. No matter how small an update, they have to run all the tests again, just in case the update had unintended consequences for another feature. And as the project goes on and the system gets bigger, a full test run gets longer.

In later chapters, this will be our motivation for automating system tests. However, the testing tools in Visual Studio include a number of ways to speed up manual retesting. For example, you can use test impact analysis to focus just on the tests that have been affected by recent changes in the code. Another way is to record your actions the first time around, and replay them the next time: all you have to do is watch the playback happening and verify the results.

We will never drop manual testing completely. Automated tests are excellent for regression testing—that is, verifying that no faults have developed since the last test—but are not so good for finding new bugs. Furthermore, there's a tradeoff between the effort required to automate a test and the costs of rerunning it manually. Therefore, we'll always do manual testing when new features are developed, and most projects will continue to perform at least some of their regression tests manually.

Microsoft Test Manager supports manual system tests

Microsoft Test Manager (MTM) is the client application that supports the testing features of Visual Studio Team Foundation Server. Get it by installing Visual Studio Ultimate or Visual Studio Test Professional.

In MTM, you can run tests in two modes: exploratory testing and scripted test cases. In exploratory testing, you run the system and see what you can find. With test cases, you work through a script of steps that you either planned in advance or worked out while you were exploring.

Exploratory testing is a lightweight and open approach to testing: nothing is prescribed, except that you might want to focus on a particular user story. Scripted test cases are more repeatable, even by people who aren't familiar with the application.

Exploratory testing

Art, one of the more senior people at Contoso, might very well ask, "I've been doing exploratory testing for thirty years. I never needed any special tools. You just run the system and try it out. Why do I need Microsoft Test Manager?"

Well, that's true, you don't need it. But we think it can make several aspects of system testing less troublesome and faster, such as reporting bugs. Let's try it and see.

We'll start with a scenario that's very easy to set up and experiment with.

Let's assume that you want to test a website that already exists. You are at the start of a project to improve its existing features as well as to add some new ones. Your objective in this testing session is to find any actual bugs, such as broken links, and also to look for any places where things could be made better. Also, you just want to familiarize yourself with the existing website.

On your desktop machine you have installed Microsoft Test Manager, and you have a web browser such as Internet Explorer.

Art would just start up the web browser and point it at the website. Instead, you begin by putting Microsoft Test Manager into exploratory mode. This allows it to record what you do and makes it easy for you to log bugs.

  1. Open Microsoft Test Manager. You might have to choose a team project to log into, and you might have to select or create a test plan.
  2. Choose Testing Center, Test, Do Exploratory Testing, and finally Explore. (If you have specific requirements that you are testing, you can select the requirement and choose Explore Work Item. Otherwise, just choose Explore.)

Figure: MTM Testing Center

The Testing Center window minimizes and Microsoft Test Runner (the exploratory testing window) opens at the side of the screen. In the new window, click Start to start recording your actions.

Figure: Exploratory testing in Microsoft Test Runner

Now open your web browser and point it at the website you want to test.

As you work, you can write notes in the testing window, insert attachments, and take screenshots. After you take a screenshot, you can double click to edit it so as to highlight items of interest.

Figure: Making notes in Test Runner

If you like to talk your audience through an issue, switch on audio recording, along with a real-time recording of what happens on the screen.

In addition, Test Runner records your keystrokes, button clicks, and other actions as you work. If you create a bug report or a test case, your recent actions are automatically included in the report in readable form, and can also be played back later.

If you want to do something else partway through a test, suspend recording by using the Pause button.

Tip

If you receive a distracting email or instant message during your test session, click Pause before you type your reply. The same applies if you get a sudden urge to visit an off-topic website. You don't want your extramural interests to be circulated among your team mates as part of a bug report.

Creating bugs

If you find an error, just click Create Bug. The bug work item that opens has your recent actions already listed, along with your notes and other attachments. Choose Change Steps if you don't want to include all the actions you took from the start of your session.

Edit the bug, for example, to make sure that it has the correct iteration and area for your project, or to add more introductory text, or to assign it to someone. When you save the bug, it will go into the team project database and appear in queries and charts of currently open bugs.

Notice that the bug has a Save and Create Test button. This creates a test case that is specifically intended to verify that this bug has been fixed. The test case will contain a script of the same steps as the bug report, and the two work items will be linked. After the bug has been fixed, this test case can be performed at intervals to make sure that the bug does not recur.

Creating test cases

At any point while you are exploring, you can create a test case work item that shows other testers how to follow your pioneering steps. Click Create Test Case. The actions that you performed will appear as a script of steps in the test case. Choose Change Steps to adjust the point in the recording at which the test case starts, and to omit or change any steps. Then save the test case.

Figure: New test case

If you started your exploration in the context of a particular requirement work item, the test case will be attached to it. On future occasions when this requirement is to be tested, testers can follow the script in the test case.

Tip

Consider creating test cases or bug reports in each exploratory session. If you find a bug, you want to report it. If you don't find a bug, you might want to create test cases. These make it easy to verify that future builds of the system are still working. The test case contains your steps so that anyone can easily follow them.

For more about how to do exploratory testing, see the topic on MSDN: Performing Exploratory Testing Using Microsoft Test Manager.

No more "no repro"

So what do we get by performing exploratory testing with Microsoft Test Manager rather than just running the system?

One of the biggest costs in bug management is working out exactly how to make the bug appear. Bug reports often have inaccurate or missing information. Sometimes, in a long exploration, it's difficult to recall exactly how you got there. Consequently, whoever tries to fix the bug—whether it's you or someone else—can have a hard time trying to reproduce it. There's always the suspicion that you just pressed the wrong button, and a lot of time can be wasted passing bugs back and forth. With action recording, there's much less scope for misunderstanding.

It gets even better. When we look at using lab environments, we'll see how execution traces and other diagnostic data can be included in the bug report, and how developers can log into a snapshot of the virtual environment at the point at which the bug was found.

But first let's look at test cases.

Testing with test cases

Test cases are the alternative to exploratory testing. In exploratory testing, you go where your instincts take you; but a test case represents a particular procedure such as "Buy one tub of pink ice cream." You can, if you want, plan a detailed script that you follow when you run the test case.

A test case is a specific instance of a requirement (or user story or product backlog item or whatever your project calls them). For example if the requirement is "As a customer, I can add any number of ice creams to my shopping cart before checking out" then one test case might be "Add an oatmeal ice cream to the cart, then check out" and another might be "Add five different ice creams."

Test cases and requirements are both represented by work items in your team project, and they can (and should) be linked.

Figure: One requirement typically has several test cases

Test cases are typically created in two situations:

  • Before coding. When your team reviews a requirement in preparation for implementing it. Typically this is at the start of a sprint or iteration. Writing test cases is a great way of nailing down exactly what a requirement means. It forms a clear target for the developers. When the code has been written, the requirement isn't complete until its test cases pass.
  • After coding. When you do exploratory testing after the requirement has been implemented, you can generate a test case to record what you did. Other testers can quickly repeat the same test on future builds.

Creating test cases before coding the requirements

Test cases are a good vehicle for discussing the exact meaning of a requirement with the project stakeholders. For example, another test case for that "I can add any number of ice creams" requirement might be "Don't add any ice creams; just go to check out." This makes it obvious that there might be something wrong with this requirement. Maybe it should have said "one or more."

Tip

Inventing test cases is a good way of taking the ambiguities out of requirements. Therefore you should create test cases for all requirements before they are implemented. Discussing test cases should be a whole-team activity, including all the stakeholders who have a say in the requirements. It isn't something the test lead does privately.

To add test cases in Microsoft Test Manager, choose Testing Center, Plan, Contents.

Figure: The test plan can contain test suites

Select the root test plan, and choose Add Requirements. This button opens a Team Foundation Server query that will find all the requirements in the project. Before running the query, you might want to add a clause to narrow it down to the current iteration.

Select the items you want (CTRL+A selects all) and choose Add requirements to Plan.

A test suite is created for each requirement, with a single test case in each suite. A suite is a collection of test cases that are usually run in the same session.

You'll probably want to add more test cases for each requirement. A suite of this kind remembers that it was created from a requirement; when you create new test cases in it, they are automatically linked to the requirement.

Figure: Test cases automatically linked to requirement

Discussions about the requirements often result in new ones. You can create new requirements by using the New menu near the top right of the window.

Test cases have steps

Test cases usually contain a series of steps for the tester to follow. They can be very specific—enter this text, click that button—or much more general—Order some ice cream. With specific instructions, a tester who does not know the application can reliably perform the test. With more general instructions, there is more room for the tester to explore and use her own ingenuity to break the system.

Figure: Test case steps

For each sequence of steps that you want to write, create a new test case in the test suite that is derived from the requirement.

Later in this chapter, we'll discuss the process of inventing test cases and test case steps.
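Test cases can also be created programmatically through the Team Foundation Server client object model, which is handy if you have a batch of routine test cases to add. The following is a minimal sketch rather than anything prescribed by the product: it assumes the Visual Studio ALM client assemblies are installed, and the collection URL, project name, and step titles are placeholders for your own values.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreateTestCaseSketch
{
    static void Main()
    {
        // Connect to the team project collection (URL and project name are placeholders).
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://contoso-tfs:8080/tfs/DefaultCollection"));
        var service = collection.GetService<ITestManagementService>();
        ITestManagementTeamProject project = service.GetTeamProject("IceCream");

        // Create a test case and give it a couple of steps.
        ITestCase testCase = project.TestCases.Create();
        testCase.Title = "Add an oatmeal ice cream to the cart, then check out";

        ITestStep step1 = testCase.CreateTestStep();
        step1.Title = "Browse the catalog and add Oatmeal to the cart";
        testCase.Actions.Add(step1);

        ITestStep step2 = testCase.CreateTestStep();
        step2.Title = "Check out";
        step2.ExpectedResult = "The order confirmation page shows one item";
        testCase.Actions.Add(step2);

        // Saving creates a real test case work item in the team project.
        testCase.Save();
    }
}

Try a sketch like this against a test project first; every call to Save creates a genuine work item that other people will see.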

Creating a test case after the code is written

We've already seen how you can perform an exploratory test and create a test case from the action recording. You can add the test case to the relevant requirements after creating it. Alternatively, when you start an exploratory test, you can select the requirement that you intend to investigate; by default, any bug or test case you create will be linked to that requirement.

Running a test case

In Microsoft Test Manager, choose Testing Center, Test, Run Tests, and then select a suite or one or more individual tests and choose Run.

Figure: Run tests

Figure: Microsoft Test Runner start screen

Microsoft Test Runner opens at the side of the screen, as seen above.

Check Create action recording to record your steps so that they can be played back rapidly the next time you run this test.

Make sure your web browser is ready to run. Click Start Test.

Work through the steps of the test case. Mark each one Pass or Fail as you go.

Figure: Starting a test

When you complete the test case, mark the whole test as Pass or Fail. You can also mark it Blocked if you were unable to conclude the test. Save and close the test case to store the result.

The result you record will contribute to the charts that are visible in the project web portal. You can also run queries to find all the test cases that have failed.

Recording your results and logging a bug

In addition to your pass/fail verdict, you can attach to the bug report your comments, files, snapshots of the screen, and (if you are using a virtual lab environment) snapshots of the state of the environment. You might have to pull down the menu bar continuation tab to see some of these items.

Figure: Validating a step

If you find a fault, create a bug work item:

Figure: Creating a bug work item

The new bug will automatically contain a record of the steps you took, as well as your comments and attachments. You can edit them before submitting the bug.

Replaying actions on later runs

The first time you run a test case, you can record your actions—button clicks, typing into fields, and so on. When you rerun the test case, you can replay the actions automatically, either one step at a time, or the whole run.

This is very useful for two purposes:

  • Bug replay. Anyone who investigates a bug you reported can rerun your test and see exactly what you did and where it went wrong. Any bugs you logged are linked to the test case, so it's easy to navigate from the bug to the test case and run it.
  • Regression tests. You (or any other team member) can rerun the test on any future build, to verify that the test still passes.

To replay a test for either purpose, open the test case in Microsoft Test Manager, and choose the most recent run. Choose Run, and then in the test runner choose Start Test. (Do not check Overwrite existing action recording.) Choose Play. The actions you recorded will be replayed.

This is a great facility, because it can get the developer to the point where you found the bug. But be aware that the recording isn't perfect. Some actions, such as drawing on a canvas, aren't recorded. However, button clicks and keystrokes are recorded correctly.

Tip

Action recording relies on each input element having a unique ID. Make sure the developers know this when designing both HTML and Windows desktop applications.

Benefits of test cases in Microsoft Test Manager

What have we gained by using test cases?

  • No more "no repro." Just as with exploratory testing, bugs automatically include all the steps you took to get to the bug, and can include your notes and screenshots as well. You don't have to transcribe your actions and notes to a separate bug log, and you don't have to recall your actions accurately.
  • Test cases make requirements runnable. Requirements are typically just statements on a sticky note or in a document. But when you create test cases out of them, especially if there are specific steps, the requirement is much less ambiguous. And when the coding is done and checked in, you can run the test cases and decide whether the requirement has been met or not.
  • Traceability from requirements to tests. Requirements and test cases are linked in Team Foundation Server. When the requirements change, you can easily see which tests to update.
  • Rapid and reliable regression testing. As the code of the system develops, it can happen that a feature that worked when it was first developed is interfered with by a later update. To guard against this, you want to rerun all your tests at intervals. By using the action replay feature, these reruns are much less time consuming than they would otherwise be, and much less boring for the testers. Furthermore, the tests still produce reliable results, even if the testers are not familiar with the system.
  • Requirements test status chart. When you open the Reports site from Team Explorer, or the Project Portal site, you can see a report that shows which requirements have passed all their associated tests. As a measure of the project's progress, this is arguably more meaningful than the burndown chart of remaining development tasks. Powering through the work means nothing unless there's also a steady increase in passing system tests. Burndown might help you feel good, but you can't use it to guarantee good results at the end of the project.

Figure: Requirements test status chart

How test cases are organized

Test plans

When you first start MTM, you are asked to choose your team project, and then to choose or create a test plan. You always run tests in the context of a test plan. Typically you have a separate test plan for each area and iteration of your project. If, later, you want to switch to a different project or plan, click the Home button.

A test plan binds together a set of test cases, a particular build of your product, test data collection rules, and the specification of lab environments on which the plan can be executed. Test cases are referenced by a tree of test suites. A suite is a group of tests that are usually run in the same session. A test case can be referenced from more than one test suite and more than one test plan.

Faults in your system can be reported using Bug work items, which can be linked to particular requirements. A test case can be set up to verify that a particular bug has been fixed.

Figure: Test plans contain suites, which refer to test cases

Test suites

A test plan contains a tree of test suites, and each suite contains a list of test cases. A test case can be included in any number of test suites.

You can make a tree structure of nested suites by using the New, Suite command. You can drag suites from one part of the tree to another.

There are two special types of suites:

  • Requirements-linked suites, which we have already encountered.
  • Query-based suites. You sometimes want to run all the test cases that fulfill a particular criterion—for example, all the Priority 1 tests. To make this easy, choose New, Query-based suite, and then define your query. When you run this suite, all the test cases retrieved by the query will be run.

Figure: Choosing new query-based suite
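If you prefer, a query-based suite can also be created through the client object model. Again, this is an illustrative sketch only: the plan ID, collection URL, and project name are placeholders, and you would adjust the work item query to match your own fields and values.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class QuerySuiteSketch
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://contoso-tfs:8080/tfs/DefaultCollection"));
        var project = collection.GetService<ITestManagementService>()
                                .GetTeamProject("IceCream");

        // Find an existing test plan by ID (placeholder value).
        ITestPlan plan = project.TestPlans.Find(42);

        // Create a suite whose contents come from a work item query:
        // here, all Priority 1 test cases in the project.
        IDynamicTestSuite prioritySuite = project.TestSuites.CreateDynamic();
        prioritySuite.Title = "Priority 1 tests";
        prioritySuite.Query = project.CreateTestQuery(
            "SELECT * FROM WorkItems " +
            "WHERE [System.WorkItemType] = 'Test Case' " +
            "AND [Microsoft.VSTS.Common.Priority] = 1");

        // Add the suite under the plan's root suite and save the plan.
        plan.RootSuite.Entries.Add(prioritySuite);
        plan.Save();
    }
}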

Shared steps

Certain sub-procedures are common to many tests. For example, opening an application on a particular file or logging in. Shared step sets are like subroutines, although they can't be nested.

To create a set of shared steps, select a contiguous subset of steps in a test case, and choose Create Shared Steps. You have to give the set a name. In the test case, the set is replaced by a link.

To reference a set of shared steps that already exists, choose Insert shared steps.

When the test case is run, the shared steps will appear in line in the test runner.

Parameters

You can set parameters to make generic test cases. For example, you could write the instruction in a step as "Open file @file1." The test case thereby acquires a parameter @file1, for which you can provide values.

When you define parameters, a matrix appears in which you can set combinations of parameter values:

Figure: Set parameter values during test planning

When you run the test case, the steps appear with parameter values:

Figure: Expected values displayed when you run the test

When you complete one run, you are presented with another, in which the parameters are set to the next row in the table of values.

Configurations

Most applications are expected to run on a variety of different versions and implementations of operating systems, browsers, databases, and other platforms.

When you design a test case—either in code or by defining manual steps—you design it to work with a particular configuration. For example, you might write a test case under the assumption that it will be running on Windows Server 2008, where the steps would be different than under, say, Linux. Or your test might include starting up a particular utility; in other words, you are assuming that it will be available when the test is run.

When you design a test case, you can record these assumptions as Configuration properties of the test case.

When you want to run a test case, you can filter the list of tests by configuration, to match the platform and applications that you have available.

To set the configuration properties of a test case, choose Plan, select the test case, and choose Configurations:

Figure: Setting configurations

To define your own configurations in addition to the built-in sets, choose Testing Center, Organize, Test Configuration Manager.

To filter the list of tests by configuration to match the applications that you have on your test machine, choose Testing Center, Test, and then set the required Filter. You can either set the filter for the test plan as a whole, or for the test suite.

Figure: Setting test configurations

For more information about test configurations in Visual Studio 2010, see Defining Your Test Matrix Using Test Configurations.

Build

You can also define which build you are using for the test plan. This is just a matter of documenting your intentions: it doesn't automatically deploy that build, but the build id will appear in reports. It also appears prominently in the plan, to remind testers to obtain that particular installer.

To set a build, choose Plan, Properties, and under Builds, Modify.

The drop-down menu shows all the server builds that have been successfully performed recently, as described in the previous chapter.

Testing in a lab environment

In the previous sections of this chapter, we made the assumption that the system we're testing is a live website, which meant we could focus on features like action recording and playback, and avoid issues like installation and data logging. Let's now move to a more realistic scenario in which you are developing a website that is not yet live. To test it, you first have to install it on some computers. A lab environment, which we learned all about in the previous chapter, is ideal.

Why not just install your system on any spare computer?

  • As discussed earlier, lab environments, particularly virtual environments, can be set up very quickly. Also the machines are newly created, so that you can be certain they have not been tainted by previous installations.
  • If we link the test plan to the lab environment, Microsoft Test Manager can collect test data from the machines, such as execution traces and event counts. These data make it much easier to diagnose the fault.
  • When you find a bug, you can snapshot the state of the virtual environment. Whoever will try to fix the bug can log into the snapshot and see its state at the point the bug was found.

Client outside the lab: testing web servers

Let's assume you are testing a web server such as a sales system. We'll put the web server in the lab environment. But just as when we were testing the live website, your desktop computer can be the client machine, because it only has to run a web browser—there is no danger to its health from a faulty installation.

Create the lab environment

Switch to the Lab Manager section of MTM, and set up a virtual environment containing the server machines that your system requires, as we described in the previous chapter.

Tip

If one of the machines is a web server, don't forget to install the Web Deployment Tool on it.
Also, open Internet Information Services Manager. Under Application Pools, make sure that your default pool is using the latest .NET Framework version.

The status of the environment must be Ready in order for MTM to be able to work with it. (If it isn't, try the Repair command on the shortcut menu of the environment.)

If you perform tests of this system frequently, it's useful to store a template for this environment.

Figure: Environment template

Tell the test plan about the environment

After creating a lab environment, configure the test settings of your test plan.

In MTM, select Testing Center, Plan, Properties. Under Manual Runs, select your Test Environment. Make sure that Test Settings is set to <Default>. (You can create a new test setting if you want a specific data collection configuration.)

This enables Microsoft Test Manager to collect test data from the machines in the lab. (More about these facilities in Chapter 6, "A Testing Toolbox.")

Get and install the latest build

To find the latest build of your system, open the Builds status report in your web browser. The URL is similar to http://contoso-tfs:8080/tfs. Choose your project, and then Builds. Alternatively, open Team Explorer on your project and choose Builds. The quality of each recent build is listed, together with the location of its output. Use the installers that should be available there: you should find .msi files.

If no build is available, you might need to configure one: see Chapter 2, "Unit Testing: Testing the Inside." More fundamentally, you might need to set up the build service: see Appendix, "Setting up the Infrastructure."

Connect to each machine in the virtual environment and install the relevant component of your system using the installers in the same way that your users will. Don't forget to log a bug if the installers are difficult to use.

Tip

After you have installed a build, take a snapshot of the environment so that it can be restored to a clean state for each test.

In the next chapter we'll look at automating deployment, which you can do whether or not you automate the tests themselves.

Installing a web service

Web services are a frequent special case of installation. Provided you have installed the Web Deploy tool (MSDeploy) on the machine running Internet Information Services (IIS), you can run the installation from any computer. Run the deploy command from the installation package, providing it with the domain name of the target machine (not its name in the environment) and the credentials of an administrator:

IceCream.Service.deploy.cmd /y /m:vm12345.lab.contoso.com /u:ctsodev1 /p:Pwd

Tip

Run the command first with the /T option and not /Y. This verifies whether the command will work, but does not actually deploy the website.
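For example, a trial run of the deployment command shown above (with the same placeholder machine name and credentials) would look like this:

IceCream.Service.deploy.cmd /T /m:vm12345.lab.contoso.com /u:ctsodev1 /p:Pwd

If the trial run reports no errors, repeat the command with /Y to perform the actual deployment.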

Start testing

In Microsoft Test Manager, switch back to Testing Center and run the tests. Don't forget that when you start your web browser, you will have to direct it to the virtual machine where the web server is running.

On finding a bug, save the virtual environment

If the developer's machine isn't a perfect clone of your environment, what didn't work in your test might work on hers. So, can we reproduce your configuration exactly?

Yes we can. Use the environment snapshot button to save your virtual environment when you log the bug. The developer will be able to connect to the snapshot later.

This allows the bug to be reproduced substantially faster and more easily.

Be aware that anyone who opens the snapshot will be logged in as you, so don't make it readable by anyone who might think it's a great joke to write indelicate messages to your boss on your behalf. Many teams use different accounts for running tests.

Client in the lab: testing desktop and thick-client apps

The previous lab configuration relied on the idea that the only interesting state was on the server machines, so only they were in the lab environment. But if you are testing a desktop application, or a web application with a significant amount of logic on the client side, then:

  • If you save the state of the environment for bug diagnosis, you want this to include the client computer.
  • The application or its installer could corrupt the machine. It is therefore preferable to install and run it on a lab machine.

For these reasons, an application of this type should run entirely on the lab environment, including the thick client or stand-alone application.

However, you must also install MTM on the client computer. To be able to record and play back your actions, MTM has to be running on the same computer as the user interface of the system under test.

This leads to a slightly weird configuration in which one copy of MTM is installed inside the lab environment to run your tests, and another copy is on your desktop machine to manage the environment and plan the tests.

Figure: Using MTM to test client software

If you often create environments with this setup, it is a good idea to store in your VM Library a machine on which MTM is installed.

Test impact analysis

Test impact analysis recommends which tests should be run once more, based on which parts of the code have been updated or added.

As the project progresses, you'll typically want to focus your testing efforts on the most recently implemented requirements. However, as we've noted, the development of any feature often involves updates in code that has already been written. The safest—perhaps we should say the most pessimistic—way to make sure nothing has been broken is therefore to run all the tests, not only for new features, but for everything that has been developed so far.

However, we can do better than that. An important feature of the testing toolbox in Visual Studio is test impact analysis (TIA).

To use TIA, you have to set various options, both in the build process and in the properties of the test plan, as we'll explain in a moment. As you run tests, the TIA subsystem makes a note of which lines of code are exercised by each test case. On a later occasion, when you use a new build, TIA can work out which lines of code have changed, and can therefore recommend which tests you should run again.

Notice that TIA will recommend only tests that you have run at least once before with TIA switched on. So when you create new test cases, you have to run them at least once without prompting from TIA; and if you switch on TIA partway through the project, TIA will only know about tests that you run subsequently.

TIA ignores a test case if it fails, or if a bug is generated while you run it. This ensures that, if it recommends a test case that fails, it will recommend it again. However, it also means that TIA will not recommend a test until it has passed at least once.

Enabling test impact analysis

To make TIA work, you have to enable it in both the build definition and your test plan, and then create a baseline.

  1. Enable TIA in the build that you use for manual testing; that is, the periodic build service that generates the installers that you use to set up your system. We discussed setting up a build in Chapter 2, "Unit Testing: Testing the Inside."
    In Visual Studio Premium or Ultimate, connect to the team project, then open the build definition. On the Process tab, set Analyze Test Impact to True.
    Queue a build for that definition.
  2. Enable TIA in your test plan. In MTM, connect to your team project and test plan, and then open the Plan, Properties tab. Under Manual runs, next to Test settings, choose Open. In the test plan settings, on the Data and Diagnostics page, enable the Test Impact data collector.
    If you use a lab environment, enable it on each machine. For IIS applications, you also need to enable the ASP.NET Client Proxy data collector.
  3. Create a baseline. TIA works by comparing new and previous builds.
    1. Deploy your application by using installers from a build for which you enabled TIA.
    2. Specify which build you have deployed and run your tests:
      In Microsoft Test Manager, choose Testing Center, Test, Run Tests. Choose Run with Options and make sure that Build in Use is set to the build that you have installed.

Using test impact analysis

Typically, you would run test impact analysis when a new major build has been created.

In your test plan's properties, update Build in use to reflect the build that you want to test.

When you want to run test cases, in MTM, choose Testing Center, Track, Recommended Tests. Verify that Build to use is the build you will deploy to run your tests. Select Previous build to compare, then choose Recommended tests.

In the list of results, select all the tests and choose Reset to active from the shortcut menu. This will put them back into the queue of tests waiting to be run for this test plan.

Devising exploratory tests and scripted test cases

Read James A. Whittaker's book Exploratory Software Testing (Addison-Wesley Professional, 2009). In it, he describes a variety of tours. A tour is a piece of generalized advice about procedures and techniques for probing a system to find its bugs. Different tours find different kinds of bugs, and are applicable to different kinds of systems. Like design patterns, tours provide you with a vocabulary and a way of thinking about how to perform exploratory testing. As he says, "Exploration does not have to be random or ad hoc."

Testing tours and tactics are sometimes considered under the headings of positive and negative testing, although in practice you'll often mix the two approaches.

Positive testing means verifying that the product does the things that it is supposed to under ordinary circumstances. You're trying to show that it does what's on the tin. On the whole, detailed test case scripts tend to be for positive tactics.

Negative testing means trying to break the software by giving it inputs or putting it into states that the developers didn't anticipate. Negative testing tends to find more bugs. Negative testing is more usually done in exploratory mode; there are also automated approaches to negative testing.

Mostly positive tactics

Storyboards

A storyboard is a cartoon strip—usually in the form of a PowerPoint slide show—that demonstrates an example of one or more requirements being fulfilled from the stakeholders' point of view. It doesn't show what's going on inside the software system; but it might show more than one user.

Each slide typically shows a mock-up of a screen, or it shows users doing something.

The purpose is to refine the details of the system's behavior, in a form that is easy to discuss, particularly with clients who are expert in the subject matter but not in software development.

Here is a storyboard for part of the ice-cream website:

Figure: A storyboard

There would be other storyboards for the production manager to add new flavors and change prices, for dispatchers to log in, and so on.

Stories can be told at different levels. Some storyboards are at the epic level, telling an end-to-end story in which users' overall goals are satisfied. The customer gets the ice cream, the vendor gets the money. Others are at the level of the typical user story, describing features that make up an overall epic. For example: As a customer, I can add any flavor in the catalog to my shopping cart. The storyboard might show a mock-up of the catalog display.

It's traditional in storyboards to use a font that makes it look as though you drew the slides casually on a whiteboard, which of course you can also do. But if you have Visual Studio 2012, you get a PowerPoint add-in that provides mock-ups of user interface elements, so that you can make the whole thing look as though it has already been built. (Take care when showing it to prospective users; they will think it already exists.)

Storyboards and test cases

The storyboard can be the basis of a test case in which the slides represent the steps. Link the storyboard file to the test case work item. If someone changes the storyboard, they can see that the test case should be updated.

Like storyboards, test cases can be written at different levels. Remember to create test cases at the epic end-to-end level, as well as at the detailed level.

Storyboards typically illustrate the "happy path," leaving aside the exception cases. Don't forget to test for the exceptions! To make sure the exceptions get tested, add test cases for them. It isn't always necessary to write down the steps of each test case, and unless an exceptional case is complex, just the title will do. Alternatively, you might just leave exceptions to exploratory testing.

Create, read, update, and delete (CRUD)

It's important to make sure that you've covered all the application's features in your tests. Here's a systematic technique for doing that, in which you sketch an entity-relational diagram, and then check that you have tests for updating all the elements on the diagram.

Look at the entities or objects that you see in the user interface as you use the system. Ignoring all the decorative material, what important objects are represented there, and what relationships are there between them? For example, maybe you can browse through a catalog of ice cream flavors. OK, so let's draw boxes for the Catalog, and some example Flavors and relationships between them. By choosing items from the catalog, you can fill a shopping cart with different flavors. So, now we have a Cart, but let's call it an Order, since that's what it becomes if we choose to complete the purchase. And when you want to complete your purchase, you enter your name and address, so we have a Customer connected to the Order:

Figure: Instance diagram of an example order

The diagram doesn't have to correspond to anything in the system code. These entities just represent the concepts you see in the system's behavior. They are written in the language of the users, not the code. The diagram helps clear away the inessential aspects of the user interface, so that you can concentrate on the important things and relationships.

Look at each attribute and consider what a user would need to do to update it. For example, the price in each Flavor might be updated by an administrator. Do you have a test that covers that operation?

Look at each relationship and consider how you would create or destroy it. Do you have a test that covers that? For example, what can remove the relationship between an Order and a Flavor? What adds a Flavor to a Catalog?

When you do change those elements, how should the result be reflected in the user interface? For example, removing an item from the Catalog should make it disappear from subsequent catalog browsing. But should it disappear from existing Orders?

Perform or plan tests to exercise all of those changes and to verify the results.

Finally, go through the user actions described in the user stories, storyboards, or requirements document, and think about how the actions affect the relationships that you have drawn. If you find an action that has no effect on the entities in your diagram, add some. For example, if a customer can choose a favorite Flavor, we would have to add a relationship to show which flavor the customer has chosen. Then we can ask, how is that relationship deleted? And do we have tests for creating and deleting it?
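If it helps, you can turn the diagram into a simple coverage checklist. The sketch below is purely illustrative: the entities and relationships are the ones from the ice-cream example, and it just prints one candidate test title per create, read, update, and delete operation so that you can tick off which ones already have a test case.

using System;
using System.Collections.Generic;

class CrudChecklistSketch
{
    static void Main()
    {
        // Entities and relationships taken from the example diagram;
        // replace them with whatever you sketched for your own system.
        var elements = new List<string>
        {
            "Flavor", "Catalog", "Order", "Customer",
            "Order contains Flavor", "Catalog contains Flavor", "Customer places Order"
        };
        var operations = new[] { "Create", "Read", "Update", "Delete" };

        // One candidate test title per element/operation pair.
        foreach (var element in elements)
        {
            foreach (var op in operations)
            {
                Console.WriteLine("{0} {1} - covered by a test case?", op, element);
            }
        }
    }
}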

States

Think about the states of the types of items that you have identified. What interesting states can you infer from the requirements (not from the code, of course)? States are interesting when they make a substantial difference to the actions that users can perform, or to the outcomes of those actions.

For example, on most sales websites, an order changes from being fairly flexible—you can add or delete things—to being fixed. On some sites, that change happens when payment is authorized. On other sites, the change of state happens when the order is dispatched.

Draw a diagram of states for each type of object that you identified in the entity-relational diagram. Draw as arrows the transitions that are allowed between one state and another, and write next to each transition the actions that cause it to happen.

Figure: States and transitions

Work out what indicates to the user which state the object is in. For instance, in the system in this example, you can add an item to the order even after you've checked out. There must be some way for the user to be able to tell that they have to check out again.

Devise tests to verify that in each state, invalid operations are disallowed, and that the valid operations have the correct effects. For example, you should not be able to add or delete items from a dispatched order. Deleting an item from an order in the Payment Authorized state should not require you to check out again.
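One way to be systematic about this is to tabulate the allowed transitions and treat every missing (state, action) pair as a candidate negative test. The sketch below is illustrative only; the state and action names are taken from the example above, not from any real system.

using System;
using System.Collections.Generic;

class OrderStateSketch
{
    enum State { Open, CheckedOut, PaymentAuthorized, Dispatched }

    static void Main()
    {
        // Allowed (state, action) pairs, read off the state diagram.
        var allowed = new HashSet<Tuple<State, string>>
        {
            Tuple.Create(State.Open, "add item"),
            Tuple.Create(State.Open, "delete item"),
            Tuple.Create(State.Open, "check out"),
            Tuple.Create(State.CheckedOut, "add item"),          // allowed, but forces another check-out
            Tuple.Create(State.CheckedOut, "authorize payment"),
            Tuple.Create(State.PaymentAuthorized, "delete item"),
            Tuple.Create(State.PaymentAuthorized, "dispatch"),
        };

        var actions = new[] { "add item", "delete item", "check out", "authorize payment", "dispatch" };

        // Every pair that is not allowed is a candidate negative test:
        // the system should refuse the action, or at least do something sensible.
        foreach (State state in Enum.GetValues(typeof(State)))
        {
            foreach (var action in actions)
            {
                if (!allowed.Contains(Tuple.Create(state, action)))
                {
                    Console.WriteLine("Negative test: try to {0} when the order is {1}", action, state);
                }
            }
        }
    }
}

Every line it prints is a question to put to the stakeholders: should the action be refused, ignored, or allowed with some extra behavior?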

Using models in testing

CRUD and state testing are basic forms of model-based testing.

You can create the diagrams in two ways. One way is to draw them while doing exploratory testing, to help you work out what you are seeing. When you've worked out a model this way, verify that the behavior it represents is really what the users need.

The other way is to draw them in advance to help clarify the ideas behind the user stories when you are discussing them with stakeholders. But don't spend too much time on it! Models have a poor reputation for holding up development, and working code is the most effective vehicle for requirements discussions. Nevertheless, if you can sketch a workflow in a few minutes and have the client say "That's not what I meant," you'll have saved a substantial chunk of work.

Mostly negative tactics

Most programs are vulnerable to unusual sequences of actions or unexpected values. Checking for error conditions is one of the more tedious aspects of computer programming, and so it tends to be skimped on. Such vulnerabilities provide the ideal entry points for hackers. Therefore, it's good practice to push the envelope by producing illogical and unexpected user actions.

Script variation

Work through the steps defined in the test case, but vary the script to see what happens. There are variations that are often productive; for instance, you might:

  • Insert additional steps; omit steps. Repeat steps. Re-order major groups of steps—both in ways that ought to work, and ways that should not.
  • If there's a Cancel button, use it, and repeat.
  • Use alternative ways of achieving the same result of a step or group of steps. For example, use keyboard shortcuts instead of menu items.
  • Combine and intermix different scenarios. For example, use one browser to add to an order on a sales website, while paying for the same order with another browser.
  • Cut off access to resources such as a server. Do so halfway through an operation.
  • Corrupt a configuration file.

The operational envelope

Thinking some more about the state diagrams, consider what the preconditions of a transition between states must be. For example, an order should not be fulfilled if it has no items. Devise tests that attempt to perform actions outside the proper preconditions.

The combinations of states and values that are correct are called the operational envelope of the system. The operational envelope can be defined by a (typically quite large) Boolean expression that is called an invariant, which should always be true.

You can guess at clauses of the invariant by looking at the entity-relational diagram. Wherever there are properties, there is a valid range of values. Wherever there are relationships, there are valid relationships between properties of the related items. For example:

Every dispatched Order should always contain at least one item

AND the total price of every Order must always be the sum of the prices of its items

AND an item on an Order must also appear in the Catalog

AND ….
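To make the envelope precise, it can help to write the invariant as a predicate over the entities in your diagram. The sketch below is illustrative only: Order, OrderItem, and the catalog set are hypothetical stand-ins for whatever your system actually exposes, and the clauses simply mirror the ones listed above.

using System;
using System.Collections.Generic;
using System.Linq;

class InvariantSketch
{
    class OrderItem { public string Flavor; public decimal Price; }

    class Order
    {
        public bool Dispatched;
        public decimal TotalPrice;
        public List<OrderItem> Items = new List<OrderItem>();
    }

    // The operational envelope, written as a Boolean expression that
    // should be true of every order in the system.
    static bool Invariant(Order order, HashSet<string> catalogFlavors)
    {
        bool dispatchedOrdersHaveItems = !order.Dispatched || order.Items.Count >= 1;
        bool totalIsSumOfItemPrices = order.TotalPrice == order.Items.Sum(i => i.Price);
        bool itemsAppearInCatalog = order.Items.All(i => catalogFlavors.Contains(i.Flavor));

        return dispatchedOrdersHaveItems && totalIsSumOfItemPrices && itemsAppearInCatalog;
    }

    static void Main()
    {
        var catalog = new HashSet<string> { "Oatmeal", "Pistachio" };

        var order = new Order { Dispatched = true, TotalPrice = 3.50m };
        order.Items.Add(new OrderItem { Flavor = "Oatmeal", Price = 3.50m });

        // Prints True; remove the item or change the price and it prints False.
        Console.WriteLine("Invariant holds: {0}", Invariant(order, catalog));
    }
}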

The developers sometimes have to work quite hard to make sure the system stays within its operational envelope when actions occur. For example, if an open order can contain no items, how does the system prevent it from becoming dispatched? For another example, what should happen to outstanding orders when an item is deleted from the catalog? Or is that clause in the invariant wrong?

Devise tests to see if something reasonable happens in each of these cases.

Notice that by trying to express the boundaries of the operational envelope in a fairly precise invariant, we have come across interesting situations that can be verified.

Exploratory and scripted testing in the lifecycle

At the beginning of each iteration, the team and stakeholders choose the requirements that will be developed in that iteration. As a tester, you will work with the client (or client proxy, such as a business analyst) and developers to refine the requirements descriptions. You will create specific test cases from the requirement work items, and work out test steps that should pass when the code has been written. Discuss them with the development team and other stakeholders.

You might need to develop test data or mock external systems that you will use as part of the test rig.

As the development work delivers the first implemented requirements of the iteration, you typically start work on them in exploratory mode. Even if you follow a test case in which you have written steps, you can perform a lot of probing at each step, or go over a sequence of steps several times. (To go back to the beginning of the steps, use the Reset button.) Make sure to check the Create an action recording option when you start exploration, so that you can log what you have done if you find a bug.

After some exploration, you can be more specific about test steps, and decide on specific values for test inputs. Review your script, adjusting and being more specific about the steps. Add parameters and test values. Record your actions as you go through the script so that it can be played back rapidly in future.

Smoke tests

You don't have to stop the playback at the end of each step. If you want, you can get the test runner to play back the whole test in a single sequence without stopping. A drawback of this is that you don't get a chance to verify the outcome of each step; the whole thing goes by in a whirl. Still, it gets the test done quickly, and you can verify the results at the end, provided that your last step wasn't to close and delete everything.

A test run played back like this can still fail partway through. If an action doesn't find the button or field that it expects, the run stops at that point. This means that if the system responds in an obviously incorrect way—by popping up a message or by going to a different screen than it should, for instance—then your test will expose that failure. You can log a bug, taking a snapshot of the system state at that point.

This gives us an easy way to create a set of smoke tests; that is, the kind of test that you run immediately after every build just to make sure there are no gross functional defects. It won't uncover subtle errors such as prices adding up to the wrong total, but it will generally fail if anything takes the wrong branch. A good smoke test is an end-to-end scenario that exercises all the principal functions.

You don't have to be a developer to create these kinds of tests. Contoso's Bob Kelly is a tester who has never written a line of code in his life, but he records a suite of smoke tests that he can run very quickly. There's one test case for an end user who orders an ice cream, another for tracking an order, and other test cases for the fulfillment user interface.

Using his smoke tests, Bob can verify the latest build in much less time than it would take him to work through all the basic features manually. When he gets a new build, he usually runs the smoke tests first, and then goes on to perform exploratory tests on the latest features. As he refines those explorations, he gradually turns them into more recorded sequences, which he adds to what he calls his smoke stack.

Monitoring test progress

In this section we will see how we can monitor the progress of our test plan and test suites using Microsoft Test Manager and using reporting in Team Foundation Server.

Tracking test plan progress

While using Microsoft Test Manager, you and your team can do what Fabrikam does: easily monitor the progress of your current test plan and the test suites within it. Your team can also leverage the testing reports generated in Team Foundation Server to track your testing progress. These reports are accessible through Team Explorer, the Test Dashboard, or Team Web Access, and because they are in Excel format they are also easy to share with other interested people, such as business decision makers.

Tracking test suite progress in Microsoft Test Manager

In Microsoft Test Manager, you can track your progress for the test suites in your current test plan immediately after you run your tests. You can view the tests that have passed and failed. You can mark tests as blocked or reset tests to active when you are ready to run them again.

Figure: Tracking test suite progress

If you want to view the results for all the suites in the test plan rolled up for your overall status, you can do so in the Properties view of your test plan in Test Plan Status.

Tracking test plan results in Microsoft Test Manager

You can also monitor the progress of your test plan by using the test plan results feature in Microsoft Test Manager. The test plan results include charts and numerical statistics on the tests in your test plan. The statistics include the tests that are currently passed, failed, blocked, inconclusive, warning, and active. Additionally, the test plan results include detailed charts that show the failure types and resolution data.

The test plan results can be filtered to specify the test suites and test configurations that you want to include. For example, you might want to view only the results for a specific test suite that your team is currently working on, or filter the test configurations to view only the results for Windows 7. By default, all of the test suites and test configurations in your test plan are included in the test plan results.

After you apply the filtering, you can view your test plan progress in either of the two following ways:

  • By Test Suite displays the test result statistics for all of the tests in the specified test suites and test configurations in your test plan. This is the default view, and a quick way to see the progress being made on your test plan. If your test suites are organized by specific iterations, or by particular features or functionality, this view can help the team identify areas that are troublesome.
  • By Tester displays the test result statistics for the same tests grouped by the testers who performed them. This can be useful for load-balancing your tests among your team members.

Figure: Test plan results

For more information, see How to: View Test Plan Results in Microsoft Test Manager.

Leveraging test reports to track testing progress

In addition to the tracking information presented in Microsoft Test Manager, you can also track your team's testing progress using reports. Several predefined test reports are included in Team Foundation Server, and you can create custom reports to meet a specific reporting need for your team. The predefined reports are available only when your team uses Microsoft Test Manager to create test plans and run tests. The data for the reports is automatically collected as you run tests and save test results. For more information, see the MSDN topic Reporting on Testing Progress for Test Plans.

The predefined reports are created for use with Excel. Additionally, a subset of the reports is also available for use with Report Designer. If you create your own reports, you can use either Excel or Report Designer.

Your team can leverage the following reports to aid in tracking testing progress in your current cycle.

  • Tracking how many test cases are ready to run: The Test Case Readiness report shows how many test cases are ready to run and how many still have to be finished for a given timeframe. For more information, see the MSDN topic Test Case Readiness Excel Report.
  • Tracking your test plan progress: You can use the Test Plan Progress report to determine, for a given timeframe, how many test cases were never run, blocked, failed, or passed. For more information, see the MSDN topic Test Plan Progress Excel Report.
  • Tracking progress on testing user stories: The User Story Test Status report shows how many tests have never run, are blocked, failed, or passed for each user story. For more information, see the MSDN topic User Story Test Status Excel Report (Agile).
  • Tracking regression: The Failure Analysis report shows the number of distinct configurations for each test case that previously passed and are now failing, for the past four weeks. For more information, see the MSDN topic Failure Analysis Excel Report.
  • Tracking how all runs for all plans are doing: You can use the Test Activity report to see how many test runs for all test cases never ran, were blocked, failed, or passed. For more information, see the MSDN topic Test Activity Excel Report.

For more information, see the MSDN topic Creating, Customizing, and Managing Reports for Visual Studio ALM.
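
If you want a quick, ad-hoc readiness check from code without waiting for the warehouse, a WIQL query over Test Case work items gives a rough approximation of the Test Case Readiness report. In this C# sketch the collection URL and project name are placeholders, and the state values Design and Ready follow the default test case definition, so adjust them to match your process template.

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class TestCaseReadinessCheck
    {
        static void Main()
        {
            var tpc = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection")); // placeholder URL
            var store = tpc.GetService<WorkItemStore>();

            // Flat WIQL query over Test Case work items in the (placeholder) project.
            WorkItemCollection testCases = store.Query(
                @"SELECT [System.Id], [System.Title], [System.State]
                  FROM WorkItems
                  WHERE [System.TeamProject] = 'Fabrikam'
                    AND [System.WorkItemType] = 'Test Case'");

            // 'Ready' and 'Design' are the default test case states; adjust if your
            // process template uses different state names.
            int ready = testCases.Cast<WorkItem>().Count(wi => wi.State == "Ready");
            Console.WriteLine("{0} of {1} test cases are Ready to run",
                              ready, testCases.Count);
        }
    }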

Note

There is a delay between the time the test results are saved and the time the data is available in the warehouse database or the Analysis Services database in Team Foundation Server to generate your reports.

You can access test reports in one of the following three ways:

  • Test Dashboard. If your team uses a project portal, you can view the predefined reports on the Test Dashboard. You can access the project portal from the Track view in Microsoft Test Manager. For more information about the Test Dashboard, see the MSDN topic Test Dashboard (Agile).
  • Team Explorer. You can access Report Designer reports from the Reports folder for your team project, and you can access Excel reports from the Documents folder.
  • Team Web Access. If you have access to Team Web Access, just as with Team Explorer, you can access Report Designer reports from the Reports folder for your team project, and you can access Excel reports from the Documents folder.

We're not done till the tests all pass

Don't call a requirement implemented until you've written all the tests that it needs, and they have all been run, and they all pass. Don't call the sprint or iteration done until all the requirements scheduled for delivery have all their tests passing. The chart of requirements against test cases should show all green.

Expect the requirements to change. Update the test cases when they do.
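
One way to check that coverage from code, rather than from the report, is a WIQL link query that walks the Tested By links from each requirement to its test cases. The sketch below is an approximation: the User Story work item type, the Microsoft.VSTS.Common.TestedBy-Forward link type reference name, and the project name follow the MSF for Agile template and are assumptions to verify against your own process template.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class RequirementTestCoverage
    {
        static void Main()
        {
            var tpc = new TfsTeamProjectCollection(
                new Uri("http://tfsserver:8080/tfs/DefaultCollection")); // placeholder URL
            var store = tpc.GetService<WorkItemStore>();

            // Link query: user stories (source) to the test cases (target) that test them.
            const string wiql = @"
                SELECT [System.Id]
                FROM WorkItemLinks
                WHERE [Source].[System.TeamProject] = 'Fabrikam'
                  AND [Source].[System.WorkItemType] = 'User Story'
                  AND [System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Forward'
                  AND [Target].[System.WorkItemType] = 'Test Case'
                MODE (MustContain)";

            foreach (WorkItemLinkInfo link in new Query(store, wiql).RunLinkQuery())
            {
                // Some result rows represent root items rather than links; skip those.
                if (link.SourceId == 0) continue;
                Console.WriteLine("Story {0} is tested by test case {1}",
                                  link.SourceId, link.TargetId);
            }
        }
    }

Stories that return no rows here are the ones with no linked test cases yet, which is exactly the gap the requirements test chart makes visible.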

Benefits of system testing with Visual Studio

The old way

In a traditional shop like Contoso, system testing simply means installing the product on some suitable boxes, and running it in conditions that are as close as possible to real operation. The test and development teams are separate.

Testers play the role of users, working through a variety of scenarios that cover the required functionality and performance goals.

When a bug is found, they try to find a series of steps to reproduce it reliably. The tester creates a bug report, typing in the steps. The developers try to follow these steps while running under the debugger, and may insert some logging code so they can get more information about what's going on.

Reproducing the bug is an imperfect and expensive process. Sometimes the repro steps are ambiguous or inaccurate. Bugs that appear in operation don't always appear on the development machines.

If the tests are planned in advance, they might be set out in spreadsheets or just in rough notes. If the requirements change, it can take a while to work out which tests are affected and how.

When there's a change to any area of the code, people get nervous. It needs thorough re-testing, and that takes time, and might mean training more people in the product. There's always the worry that a Pass result might be explained by less rigorous testing this time. The biggest expense of making a change—either to fix a bug or to satisfy an updated requirement—is often in the retesting. It's a very labor-intensive endeavor.

The new way

Fabrikam uses Visual Studio to improve on this process in several ways:

  • Tests are linked to requirements. Test cases and requirements (such as user stories, use cases, performance targets, and other product backlog items) are represented by work items in Team Foundation Server, and they are linked. You can easily find out which test cases to review when a requirement changes.

  • Repeatable test steps. A good way to clarify exactly what is meant by a particular user story is to write it down as a series of steps. You can enter these steps in the test case.

    When you run the test, you see the steps displayed at the side of the screen. This makes sure the test exercises exactly what you agreed as the requirement. Different people will get the same results from running the test, even if they aren't very familiar with the product.

  • Record/Playback. The first time you run through a test, you can record what you do. When a new build comes out, you can run the test again and replay your actions at a single click. As well as speeding up repeat tests, this is another feature that makes the outcome less dependent on who's doing the test. (This feature works with most actions in most applications, but some actions such as drawing aren't always recorded completely.)

  • Bug with One Click. Instead of trying to remember what you did and writing it in a separate bug reporting app, you just click the Bug button. The actions you performed, screenshots, and your comments can all be included automatically in the bug report. If you're working on a virtual environment, a snapshot of that can be stored and referenced in the report. This all makes it much easier for developers to find out what went wrong. No more "no repro."

  • Diagnostic data collection. System tests can be configured to collect various types of data while the system is running in operational conditions. If you log a bug, the data will be included in the bug report to help developers work out what happened. For example, an IntelliTrace trace can be collected, showing which methods were executed. We'll look at these diagnostic data adapters in more detail in Chapter 6, "A Testing Toolbox."

  • Requirements test chart. When you open the Reports site from Team Explorer, or the Project Portal site, you can see a report that shows which requirements have passed all their associated tests. As a measure of the project's progress, this is arguably more meaningful than the burndown chart of remaining development tasks. Powering through the work means nothing unless there's also a steady increase in passing system tests. Burndown might help you feel good, but you can't use it to guarantee good results at the end of the project.

  • Lab environments. Tests are run on lab environments—usually virtual environments. These are quick to set up and reset to a known state, so there is never any doubt about the starting state of the system. It's also easy to automate the creation of a virtual environment and deploy the system on it. No more hard wiring.

  • Automated system tests. The most important system tests are automated. The automation can create the environment, build the system, deploy it, and run the tests. A suite of system tests can be repeated at the touch of a button, and the automated part of the test plan can be performed every night. The results appear logged against the requirements in the report charts.

Summary

Tests are directly related to requirements, so stakeholders can see project progress in terms of tests passing for each requirement. When requirements change, you can quickly trace to the affected tests.

You can perform tests in exploratory or scripted mode.

Exploratory tests, and the design of scripted tests, can be based on the idea of different types of tours, and on business models.

Scripted tests help you clarify the requirements, and also make the tests more reliably repeatable.

Scripted tests can be recorded and rapidly replayed. Also, you can use test impact analysis to focus just on those tests that have been affected by recent changes in the code.

The time taken to reproduce bugs can be significantly reduced by tracing the detailed steps taken by the tester, and by placing a snapshot of the environment in the hands of the developer who will fix the bug.

Differences between Visual Studio 2010 and Visual Studio 2012

  • Exploratory Tester. In Visual Studio 2010, you can record exploratory tests by creating a test case that has just one step, and you can link that to a requirement. During exploratory testing, you can record actions, log bugs, and take screenshots and environment snapshots.

    In Visual Studio 2012, you can associate an exploratory test session directly with a requirement (or user story or product backlog item). In Testing Center, choose Do Exploratory Testing, select a requirement, and then choose Explore Work Item. In addition to the 2010 facilities, you can create test cases during your session.

  • Multi-line test steps. In Visual Studio 2012, you can write sub-steps in a manual test case step.

  • Windows apps. Visual Studio 2012 includes specialized features for testing Windows apps.

  • Performance. Many aspects of test execution and data collection work faster in Visual Studio 2012. For example, the compression of test data has been improved.

  • Compatibility. Most combinations of 2010 and later versions of the products work together. For example, you can run tests on Team Foundation Server 2010 using Microsoft Test Manager 2012 RC.

Where to go for more information

There are a number of resources listed in the text throughout the book. These resources provide additional background and bring you up to speed on various technologies. For your convenience, there is a bibliography online that contains all the links, so these resources are just a click away.
