How We Test Software At Microsoft (Sometimes, Somewhens)
Some time back, Alan Page sent me a review copy of How We Test Software At Microsoft (HWTSAM), which he, Bj Rollison, and Ken Johnston published late last year. I was curious to read this and discover whether their view of testing matches mine. Now that I have finished HWTSAM I can say it more or less does.
Overall I liked this book, enough so that I believe every tester should read it. HWTSAM is chockablock full of details regarding fundamental testing techniques, strategies, and processes with which I believe every tester should be familiar (even if you disagree with the utility of some of them). HWTSAM is also chockablock full of guidance regarding when each of these tends to be useful and when not so useful. And HWTSAM is chockablock full of insight into how many teams at Microsoft test many of their products, insight gleaned from a plethora of successes and failures in testing said products and likely to have at least some relevance to whatever you happen to be testing or developing. If you come away from this book without at least one new idea to integrate into your testing, I will be astonished.
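One of the fundamental techniques the book covers is equivalence class partitioning: splitting an input domain into classes the code should treat identically, then testing one representative value from each class instead of every possible input. A minimal sketch of the idea in Python (the validate_age function is a hypothetical example of mine, not something from the book):

```python
def validate_age(age):
    """Hypothetical function under test: accepts integer ages 0-120 inclusive."""
    return isinstance(age, int) and 0 <= age <= 120

# Each entry names an equivalence class and supplies one representative
# value plus the expected result. One value stands in for the whole class,
# since the implementation should treat every member of a class the same way.
representatives = {
    "valid interior":  (35,   True),
    "lower boundary":  (0,    True),
    "upper boundary":  (120,  True),
    "below range":     (-1,   False),
    "above range":     (121,  False),
    "wrong type":      ("35", False),
}

for name, (value, expected) in representatives.items():
    assert validate_age(value) == expected, f"class failed: {name}"
```

Six checks here cover the same ground as millions of exhaustive inputs, which is the whole appeal of the technique.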
I also disliked HWTSAM. For one, I continually stumbled over the editing (or lack thereof), to the point that I eventually started wondering whether it had been edited at all. Typos, seems-incorrect-to-me grammar, and "Why is that a note?" and "Why is that note here?" questions all took my attention away from the information Alan, Bj, and Ken were presenting. Similarly, as I read the "Software Engineering at Microsoft" and "Software Test Engineers at Microsoft" chapters I wondered what they had to do with testing; some time later I realized that HWTSAM is as much about process as it is about techniques, and that the way Microsoft's Test orgs tend to be structured and mesh with the other discipline orgs is an important factor in how we test and in our successes and failures in doing so effectively.
I also found a wide disparity between the levels of coverage of various topics, with some discussions going into reference-book levels of detail while other topics were given barely a paragraph. The sidebars often confused me as well, not so much via their content as by prompting questions regarding whether they were real or made up; while many of the stories were signed by their submitter, many others weren't, and I felt some of the unsigned stories made their point too well to be true. Finally, while the software-plus-services chapter presents lots of information regarding the testing environments Microsoft uses, my hopes for a similar level of detail regarding testing software-plus-services applications were not fulfilled.
In a few cases my dislike flared into violent disagreement. One cause of these flareups was the many absolute statements I noticed throughout this book, both about the utility of particular techniques, strategies, and processes and about how we do things here at Microsoft. I tend to bristle at the idea that anything always works or is always best, especially when it comes to testing, and especially when I know that statement to be false. Interestingly, other parts of the book clearly state that no technique or strategy or process always works or is always best, and that no technique or strategy or process is used everywhere in Microsoft; perhaps another instance of bad editing.
Despite all of these dislikes, I enjoyed reading HWTSAM. I loved the stories about testing Microsoft software, the techniques used to do so, and the bugs those techniques found. Thanks to the "About Microsoft" chapters I finally understand our organizational structure here. I laughed with delight as I read how we now add so many servers to our datacenters every year that we have our suppliers deliver them in shipping containers which we simply drop in, plug in, and turn on. And I was happy to see at least one statement that manual testing is both useful and used here, giving the lie to the common belief that Microsoft hates manual testing.
While other books go into more detail regarding many of the topics this book covers, and other books are more even in their coverage of said topics, and other books provide bibliographies and other pointers to further reading on said topics, I do not know of any other book that weaves all this information together into a story that describes how testing is done at a real software company. That story - the overall view - is the reason I believe every tester should read this book. Once you have, let me know your thoughts about it: michael dot j dot hunter at microsoft dot com.
Comments
Anonymous
April 22, 2009
The comment has been removed

Anonymous
April 23, 2009
For the graduate SWE class in Formal Methods that I'm taking, the Prof is letting me present the section on Equivalence Class Partitioning. After the prezo, I'll be able to use this at work. She and I are both pretty excited about using formal methods "in the real world." That's an opportunity I would have missed if I hadn't been reading through the book. Very interested in the comments about what you didn't like as it gives some perspective to a book I read for answers.

Anonymous
May 04, 2009
i know about the testing part of microsoft

Anonymous
June 19, 2009
This BLOG post is a thank you to all the individuals who have read “How We Test Software at Microsoft,”