Ooh, Camano… planning testing in the November CTP

In several of the organizations that we've visited, we've found that the more experienced ones tend to put a lot of thought into planning their testing efforts. One organization we visited followed an agile testing process, with sprints of four weeks in duration, as follows:


Before the sprint, the test manager created a test plan for the release with her team. The test plan served to identify the areas to be tested in the sprint. For the next two weeks, the testers wrote test cases in Excel to specify the steps needed to verify the new functionality. Each row in the Excel file represented a test case, with columns used to specify its steps and record the results of its execution. In the closing weeks of the sprint, the testers executed all the test cases that they had authored in the prior weeks twice: once in week 3 and once in week 4.
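
To make that spreadsheet layout concrete, here is a rough sketch in Python of what one row captured. The field names are made up for illustration; they aren't taken from the team's actual workbook.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the team's Excel sheet (hypothetical field names)."""
    case_id: int                  # row number in the sheet
    title: str                    # what the test verifies
    steps: list[str]              # one column per manual step
    results: dict[str, str] = field(default_factory=dict)  # run label -> outcome

# The same case is executed twice near the end of the sprint.
case = TestCase(
    case_id=1,
    title="New user can sign in",
    steps=["Create an account", "Sign out", "Sign in with the new credentials"],
)
case.results["week 3"] = "Pass"
case.results["week 4"] = "Fail"
```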


While the very organized effort above is bound to locate most of the holes in the product, the tools that the organization used to plan and manage the testing effort were not helping them with the process. The programs gave them no easy way to organize their test cases, track the requirements associated with them, quickly view test case results, plan a testing effort, or view the overall progress of their testing.


In the November CTP release of Rosario we introduce Codename 'Camano' - a tool designed to help testers and test managers plan, organize, and analyze a testing effort. In Camano, each test case is a database object that can be executed by Microsoft Test Pilot. Testers can organize their test cases into "Suites", which can be either "Dynamic" (query defined) or "Static" (list defined). A lead can schedule a suite of tests for execution by placing the suite in a new test plan. When creating a test plan, the lead can also divide up the testing effort by assigning different members of their org to execute specific test cases. Once a plan is created, the testers execute test cases out of it.
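
If you like thinking in code, here is a rough sketch of how those concepts hang together. This is not Camano's actual object model or API - every name below is a placeholder I made up to illustrate the static/dynamic distinction and plan assignments.

```python
from dataclasses import dataclass, field
from typing import Callable, Union

@dataclass
class TestCase:
    case_id: int
    title: str
    area: str
    priority: int

@dataclass
class StaticSuite:
    """A suite defined by an explicit list of test cases."""
    name: str
    cases: list[TestCase]

    def resolve(self, all_cases: list[TestCase]) -> list[TestCase]:
        return self.cases

@dataclass
class DynamicSuite:
    """A suite defined by a query (modeled here as a predicate);
    it is re-evaluated each time, so newly authored matching cases
    join the suite automatically."""
    name: str
    query: Callable[[TestCase], bool]

    def resolve(self, all_cases: list[TestCase]) -> list[TestCase]:
        return [c for c in all_cases if self.query(c)]

@dataclass
class TestPlan:
    """Schedules suites for execution and assigns cases to testers."""
    name: str
    suites: list[Union[StaticSuite, DynamicSuite]] = field(default_factory=list)
    assignments: dict[int, str] = field(default_factory=dict)  # case_id -> tester

all_cases = [
    TestCase(1, "New user can sign in", "Auth", 1),
    TestCase(2, "Report renders", "Reporting", 2),
]
p1 = DynamicSuite("All priority 1", query=lambda c: c.priority == 1)
plan = TestPlan("Sprint 12", suites=[p1], assignments={1: "asha"})
print([c.title for c in p1.resolve(all_cases)])  # ['New user can sign in']
```

The dynamic flavor is what keeps a plan current as new test cases are authored - more on that below.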


Once a test plan is created, managers will be able to track its progress, and to add test cases to the plan and remove them from it as new test cases are authored and plans change. Management will also be able to generate reports against the execution of the test cases contained in the plan. We are hoping that users will find it very useful to be able to quickly generate reports on the number of test cases currently failing, the number of blocked test cases, the testers who are finding the most bugs, the speed at which bugs are being fixed, and the number of testers petitioning to put Arrested Development back on the air :). Test Managers at Microsoft love getting such statistics, and we are betting that those outside of Microsoft will find them quite handy as well (we have 3.5 testers on our team currently petitioning).
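
Just to illustrate the kinds of rollups we have in mind (the real reports will come out of the product, not hand-rolled code like this), here's a back-of-the-envelope sketch over a handful of made-up execution records:

```python
from collections import Counter

# Hypothetical execution records: one per test case run.
results = [
    {"case_id": 1, "outcome": "Failed",  "tester": "asha"},
    {"case_id": 2, "outcome": "Blocked", "tester": "ben"},
    {"case_id": 3, "outcome": "Passed",  "tester": "asha"},
]
bugs = [
    {"bug_id": 101, "found_by": "asha"},
    {"bug_id": 102, "found_by": "asha"},
    {"bug_id": 103, "found_by": "ben"},
]

failing = sum(1 for r in results if r["outcome"] == "Failed")
blocked = sum(1 for r in results if r["outcome"] == "Blocked")
top_bug_finders = Counter(b["found_by"] for b in bugs).most_common(3)

print(f"{failing} failing, {blocked} blocked")
print("Top bug finders:", top_bug_finders)
```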


Have any thoughts on our thinking around planning a testing effort? Are we completely off whack? A little off whack? I would be very interested in your feedback, as we really want to get the planning experience right.


Cheers,

- Naysawn

[Image: Testing Hierarchy Diagram.jpg]