Exploratory Testing Examined
Exploratory testing is a topic often embroiled in unreasonable controversy. I never quite understood what all the hullabaloo was about, because the simple fact is that all testers engage in exploratory testing and have done so for decades (although the early pioneers of software testing simply called it ‘testing’). Testers used exploratory testing approaches quite successfully long before Cem Kaner coined the phrase “exploratory testing.” And testers certainly engaged in exploratory testing long before a few consultants started exploiting the phrase by promoting it as some newfangled testing strategy while disregarding the need for greater technical knowledge and skills in professional testers.
Some readers may comment that I have expressed disdain for exploratory testing, which is simply not correct. I use exploratory testing approaches all the time (and did so before it was even called exploratory testing), and I recognize its value in the correct context. My contention with the subject of exploratory testing has nothing to do with the approach per se. My derision is reserved for the ‘experts’ who divide our community into an ‘us’ versus ‘them’ political debate, separate testers into different ‘schools’ of thought, and proselytize exploratory testing as the holy grail of software testing with the zeal of religious extremists.
But, after responding to some comments from Michael Bolton this week, I sat down and thought about the topic of exploratory testing for a while. (I have never met Michael, but I think we agree on the basic tenets of software testing, and I hope our paths cross sometime in the future so we can share our thoughts.) Michael is an articulate guy, and he said something that clicked and made all the dots connect for me. So, read on and see if the dots between exploratory testing and formal techniques connect for you.
The first dot: Checking assumptions
Michael wrote, “…testing is not merely a rote, mechanical task; that it's not merely the application of engineering principles; that it's not simply comparing the product with the spec; or other such simplifications.” Upon reading that, I realized that exploratory testing zealots consistently either choose to ignore the value of testing techniques such as boundary value analysis and equivalence class partitioning, or have never worked on a software testing project that required the testers to actually use their intellectual knowledge or technical skills.
I guess I have been privileged: I have never had to work with or manage software testers who are simple-minded droids incapable of thinking for themselves, who blindly use tools or techniques without understanding their purpose or capabilities, and who have a very narrow interpretation of the role of the software tester. On the other hand, many of the ‘successful’ software testers whom I have worked with and managed possessed the characteristics and traits we commonly look for during the interview process, such as problem-solving and analytical thinking, precision questioning, and the capacity to learn new concepts and apply them. Early in the interview process we often dig around until we find an engineering principle or concept the interviewee is unfamiliar with, and take a few minutes to explain it to them. Later in the interview process a different person on the interview loop will ask the interviewee about that principle or concept to see whether they were able not only to remember it, but also to process the knowledge in a manner in which they could think of innovative ways to apply it.
I assume that many good testers already have the knowledge and capacity to learn, to think critically, to analyze, to ask relevant questions, and to “understand, diagnose, and solve problems”. Therefore, as a mentor, a teacher, and a software testing professional, my goal is to help novice testers unlock their intellectual horsepower and their ability to successfully exploit new ideas such as testing techniques, in order to be more effective in their role as professional software testers and more efficient with their time.
I don’t assume that a majority of testers lack the capacity to think for themselves or are incapable of logical reasoning. And, contrary to Michael’s assertion, I don’t agree that “…one doesn't have to be an expert in anything to be successful (i.e. employed) as a professional tester; one doesn't even need to be expert in testing itself.” First, I don’t equate success with employment. If Michael’s use of the term ‘professional’ implies “a person who earns a living in an occupation frequently engaged in by amateurs,” then I concur that our industry has seen its share of self-proclaimed testers who are simply keyboard monkeys or checklist drones. Fortunately, the maturation of the testing discipline is weeding out the amateurs and faux testers. So, when I write or speak of professional testers I am referring to knowledgeable, highly skilled individuals who are “expert in their work”, who “show great skill”, and who “engage in a pursuit or activity professionally.”
The second dot: Techniques mature from exploration
Exploratory testing activists state that heuristics are a key component of exploratory testing. In fact, Kaner and Tinkham state, “…exploratory testing is a highly heuristic process that uses heuristics.” So, let’s talk about heuristics. To put it simply, heuristics:
- serve to indicate or point out; stimulate interest as a means of furthering investigation
- encourage a person to learn, discover, understand, or solve problems on his or her own, as by experimenting, evaluating possible answers or solutions, or by trial and error
- of, pertaining to, or based on experimentation, evaluation, or trial-and-error methods
- Computers, Mathematics. pertaining to a trial-and-error method of problem solving used when an algorithmic approach is impractical
- Of or relating to a usually speculative formulation serving as a guide in the investigation or solution of a problem
- A rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood
Experimentation and investigation often lead to patterns or repeatable steps that replicate a desired outcome in a given situation, or to systematic procedures by which a complex or scientific task is accomplished (which happens to be the definition of a technique). So, can’t we say the progenitors of software testing hypothesized systematic procedures such as boundary value analysis through experimentation and evaluation? And, don’t we agree that when boundary value testing is applied correctly it accomplishes the complex task of verifying the extreme ranges of data more effectively and more efficiently than continuing to use trial and error or educated guesses? So, when a pattern of systematic procedures emerges from experimentation that proves or disproves a hypothesis within a specific context, our experimentation matures into a technique that can save the tester time and that has been proven effective through previous investigation and trial and error.
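To make that concrete, here is a minimal sketch of boundary value analysis in Python. The function under test (validate_age) and its valid range of 18–65 are hypothetical stand-ins, not anything from a real project; the point is simply that the technique replaces open-ended trial and error with a small, systematic set of test values at and around each boundary.

```python
# A minimal boundary value analysis sketch. The function under test
# (validate_age) and its valid range (18-65 inclusive) are hypothetical;
# substitute the real input domain from your specification.

def validate_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages 18 through 65 inclusive."""
    return 18 <= age <= 65

def boundary_values(low: int, high: int) -> list[int]:
    """Derive the classic boundary test points: just below, at, and just
    above each edge of the valid partition."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

if __name__ == "__main__":
    expected = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
    for value in boundary_values(18, 65):
        actual = validate_age(value)
        status = "PASS" if actual == expected[value] else "FAIL"
        print(f"age={value:>2}  expected={expected[value]!s:<5} actual={actual!s:<5} {status}")
```

The skill, of course, is in identifying the real boundaries from the specification or the implementation; the technique only tells you where to aim once you have.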
Functional and structural software testing techniques impart consistent guidelines that help highly skilled, knowledgeable testers be more effective and improve their efficiency. Techniques are not step-by-step, rote procedures. In fact, the application of techniques requires a great deal of knowledge and skill.
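Equivalence class partitioning illustrates the same point. Here is another hedged sketch in Python, with a hypothetical classify_discount function and made-up partitions: the technique only says “test one representative per class,” while choosing meaningful classes is where the tester’s knowledge and judgment come in.

```python
# A minimal equivalence class partitioning sketch. The function under test
# (classify_discount) and the chosen partitions are hypothetical examples;
# identifying meaningful classes for a real product requires domain knowledge.

def classify_discount(order_total: float) -> str:
    """Hypothetical system under test: discount tier by order total."""
    if order_total < 0:
        return "invalid"
    if order_total < 100:
        return "none"
    if order_total < 500:
        return "standard"
    return "premium"

# One representative value per equivalence class, chosen by the tester.
partitions = {
    -10.0: "invalid",   # invalid input class
    50.0: "none",       # 0 <= total < 100
    250.0: "standard",  # 100 <= total < 500
    1000.0: "premium",  # total >= 500
}

if __name__ == "__main__":
    for representative, expected in partitions.items():
        actual = classify_discount(representative)
        status = "PASS" if actual == expected else "FAIL"
        print(f"total={representative:>7}  expected={expected:<8} actual={actual:<8} {status}")
```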
Educated guessing, trial and error, experimentation, and speculation are tribal knowledge. Attrition of the testers who hold that knowledge will cause a skill or performance gap in the organization. So, as long as companies continue to hire unqualified, sub-standard keyboard monkeys, the consultants who advocate exploratory testing as the Tao can continue to run their exploratory testing medicine show. But, I agree with Michael that software testing is a “deeply intellectual and challenging activity.” And unfortunately for the exploratory snake oil salesmen, many companies also understand that, and the caliber of testers hired today is much higher than in years past. The era of hiring the butcher, the baker, and the candle-stick maker to find bugs is over. Companies such as Microsoft, Google, Freddie Mac, and others are targeting knowledgeable, technically skilled individuals as professional testers.
Drawing the line between the dots: Stop fighting and play nicely!
Michael described the testing role quite eloquently stating, “… testing is the process of investigating the product to reveal quality-related information about it, for people to whom that information might matter.” At the end of the day, there are multiple approaches and techniques required to examine or inspect any complex software project to accurately and adequately assess quality attributes and exposure to risk. Simply put, no single approach to software testing is sufficient. Exploratory testing and functional and structural software testing techniques are only some of the tools in the repertoire of the professional tester. They are all valuable when employed in the appropriate context. (BTW...everything has context because nothing occurs in a vacuum.) There is no us versus them, there are not 4 schools of testing, and since everyone does exploratory testing to some degree doesn't that make everyone an "exploratory tester?" (Of course, if exploratory testing is the only approach to assessing the capabilities and attributes of a software project, then the testing strategy is probably not very mature, and numerous studies prove that approach leaves a lot of holes (untested areas of the product = 100% exposure to risk)).
I think Michael, other notable professionals in the field, and I are all exposing the need for highly qualified, smart, and innovative testers in the industry because testing is extremely challenging. We are also commonly aligned in our desire to teach bright and creative individuals additional testing skills so they can be more effective as professional software testers. In essence, aren't we just building different walls of the same sand castle? Can’t we just get along and play nicely in the same sandbox? Or, do we have to choose sides?