prevention v. cure (part 4)

Manual testing is human-present testing: a human tester using their brain, their fingers and their wit to create the scenarios that will cause software either to fail or to fulfill its mission. Manual testing often occurs after all the other developer and automated techniques have had their shot at removing bugs. In that sense, manual testers are playing on an uneven field. The easy bugs are gone; the pond has already been fished.

However, manual testing regularly finds bugs and, worse, users (who by definition perform manual testing) find them too. Clearly there is some power in manual testing that cannot be overlooked. We have an obligation to study this discipline in much more detail … there’s gold in them-thar fingers.

One reason human-present testing succeeds is that it offers the best chance to create realistic user scenarios, using real user data in real user environments, while still allowing a human to recognize both obvious and subtle bugs. It’s the power of having an intelligent human in the testing loop.

Perhaps developer-oriented techniques will evolve to the point that a tester is unnecessary. Indeed, this would be a desirable future for software producers and software users alike, but for the foreseeable future, tester-based detection is our best hope for finding the bugs that matter. There is simply too much variation, too many scenarios and too many possible failures for automation to track it all. It requires a brain in the loop. This is the case for this decade, the next decade and perhaps a few more after that. We may look to a future in which software just works, but if we achieve that vision, it will be the hard work of the manual testers of this planet that made it all possible.

There are two main types of manual testing.

Scripted manual testing

Many manual testers are guided by scripts, written in advance, that guide input selection and dictate how the software’s results are to be checked for correctness. Sometimes scripts are specific: enter this value, press this button, check for that result and so forth. Such scripts are often documented in Microsoft Excel tables and require maintenance as features get updated through either new development or bug fixes. The scripts serve a secondary purpose of documenting the actual testing that was performed.
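
To make the script format concrete, here is a minimal sketch of what one such script might look like if captured in code rather than a spreadsheet. Everything in it is hypothetical: the steps, the field names and the expected messages are illustrative only.

    # Hypothetical scripted manual test case: exact inputs and expected results,
    # walked through by a human tester who records an outcome for each step.
    LOGIN_SCRIPT = [
        {"action": "Enter 'testuser01' in the Username field",
         "expect": "Field accepts the value"},
        {"action": "Enter an expired password and press Sign In",
         "expect": "Error message: 'Your password has expired'"},
        {"action": "Follow the reset link and choose a new password",
         "expect": "Confirmation page is displayed"},
    ]

    def run_script(script):
        """Prompt the tester through each step and record the outcome."""
        results = []
        for number, step in enumerate(script, start=1):
            print(f"Step {number}: {step['action']}")
            print(f"  Expected: {step['expect']}")
            results.append((number, input("  Outcome (pass/fail/blocked)? ").strip().lower()))
        return results

A real script would also carry build numbers, tester names and dates; the point here is only the shape: ordered steps, exact inputs, exact expected results.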

For some applications or test processes, scripted manual testing is too rigid, so testers take a less formal approach. Instead of documenting every input, a script may be written as a general scenario that gives the tester some flexibility while running the test. At Microsoft, the folks who manually test Xbox games often do this, so an input might be “interact with the mirror” without specifying exactly what type of interaction must be performed.
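
A scenario-style script, by contrast, states goals rather than keystrokes. The example below is again hypothetical and simply shows how little it constrains the tester.

    # Hypothetical scenario-style script: goals rather than exact inputs,
    # leaving the choice of specific interactions to the tester's judgment.
    MIRROR_SCENARIO = [
        "Reach the room containing the mirror",
        "Interact with the mirror",          # any interaction the game allows
        "Confirm the game state is still consistent afterward",
    ]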

Exploratory testing

When the scripts are removed entirely, the process is called exploratory testing. A tester may interact with the application in whatever way they want and use the information the application provides to react, change course, and generally explore the application’s functionality without restraint. It may seem ad hoc to some, but in the hands of a skilled and experienced exploratory tester, this technique can be powerful. Advocates would argue that exploratory testing allows the full power of the human brain to be brought to bear on finding bugs and verifying functionality without preconceived restrictions.

Testers using exploratory methods are not without a documentation trail, either. Test results, test cases and test documentation are simply generated as tests are being performed instead of beforehand. Screen capture and keystroke recording tools are ideal for this purpose.
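
As a lightweight illustration of generating documentation while testing rather than before it, the sketch below timestamps a tester’s free-form notes into a session log. It is a stand-in for the screen capture and keystroke recording tools mentioned above, not a description of any particular tool, and the file name is assumed.

    # Minimal exploratory-session logger: timestamps whatever the tester types
    # and appends it to a plain-text log, producing a test trail as a by-product.
    import datetime

    LOG_FILE = "exploratory_session.log"   # hypothetical file name

    def log_session():
        print("Type observations as you test; a blank line ends the session.")
        with open(LOG_FILE, "a", encoding="utf-8") as log:
            while True:
                note = input("> ").strip()
                if not note:
                    break
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                log.write(f"{stamp}  {note}\n")

    if __name__ == "__main__":
        log_session()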

Exploratory testing is especially well suited to modern web application development using agile methods. Development cycles are short, leaving little time for formal script writing and maintenance. Features often evolve quickly, so minimizing dependent artifacts (like test cases) is desirable. The number of proponents of exploratory testing is large enough that its case no longer needs to be argued, so I’ll leave it at that.

At Microsoft, we define several types of exploratory testing. That’s the topic I’ll explore in part five.