Overview of the Test strategy review workshop


The goal of Success by Design is to ensure a successful customer outcome while implementing Microsoft Dynamics 365. The purpose of the Test strategy review is to:

  • Drive communication and understanding – The Test strategy review workshop is designed to drive a conversation about the test strategy that promotes general understanding across the implementation team regarding the testing objectives, test types, test coverage and planning, and approach to validating the solution.
  • Identify risks and issues – By taking a broad but high-level look at the test strategy, you can identify issues and risks with the approach that would negatively impact the outcome.
  • Provide recommendations – Based on the identified risks, this workshop provides recommendations to help you best manage and mitigate the risks.

The Test strategy review workshop is most often conducted remotely. For complex projects, it can be conducted in person, in which case it is typically run as a single workshop that covers all required topics.

The following sections cover the top-level aspects of the Test strategy review workshop and provide a sampling of the types of questions that are covered in each section.

Overall test strategy

The test strategy covers the high-level approach and plan to validate that the solution will be fit for purpose in production use. This topic focuses on answering questions such as:

  • Is a documented test strategy in place?
  • Does the test strategy reflect this project’s needs and circumstances?
  • Is the test strategy expressed in language that is relatable to the project and understandable by the relevant project and business stakeholders?

Project scope mapped to test scope

The scope of testing is clearly dependent on the scope of the project. This section examines how well the project scope is covered by the test scope.

This topic considers the following question: How, and as part of which test phase or test type, will the project's functional scope areas be tested?

For example, consider the following scope areas:

  • Business processes
  • Business requirements
  • Design requirements
  • Data (for functional use, migration, interfaces, reporting/BI, and so on)
  • Geography
  • Customized areas
  • Process changes
  • Security
  • Regulatory requirements
  • Project goals

Another question that this topic considers is: How, and as part of which test phase or test type, will the project's non-functional scope areas be tested?

For example, consider the following scope areas:

  • Performance
  • Usability
  • Operability
  • Maintainability
  • Disaster recovery
  • Business continuity
  • Other areas, as relevant for this project

High-level test plan

Testing will be conducted throughout the project, and the high-level test plan provides the structure to show how the various testing types and phases build on each other to provide incremental and comprehensive validation of the solution. This topic focuses on answering questions such as:

  • How does the high-level test plan integrate within the project plan?
  • Does the test planning reflect the test strategy?
  • Are all test types and phases reflected accurately in the test plan?
  • Does the test plan provide sufficient time and effort to conduct testing in proportion to the size and complexity of the project?
  • Does the test plan show that the time and effort that are allocated to the various areas of testing are in proportion to the risk that is represented to the business?

Test phases and types

Testing in a business application like Dynamics 365 is multi-faceted, and the test phases and types represent the validation of different layers and dimensions of the solution. This section examines the completeness of some important test definition and test management attributes of the key test phases and types.

The following lists show the areas that each test phase and test type should have defined.

Testing - Key definitions

  • Test phase / Test type – Enter the title of the test phase (such as Integration Testing or User Acceptance Testing) or enter the key test types (such as Performance Testing or Mock Cutover).
  • Key objectives – Enter key goals that are expected to be achieved by each test phase.
  • Source documents – List the document type/requirement area that is used to define the content of test cases and the acceptance criteria (in other words, what is being used as the definition against which the test result is validated).
  • Test coverage – Determine which areas of project scope are expected to be validated by this phase, for example: end-to-end business processes and related configuration, specific functions, or migrated data.
  • Entry criteria – Define the entry criteria that must be met for this test phase to be considered ready to start formal implementation.
  • Exit criteria – Define the exit criteria that the test results must meet for this test phase to be considered as meeting its objective and able to formally exit from this phase.

Testing management

  • Test phase / Test type – Enter the title of the test phase (such as Integration Testing or User Acceptance Testing) or enter the key test types (such as Performance Testing or Mock Cutover).
  • Test preparation – Provide a brief description of the test preparation that is expected to meet the test entry criteria (for example, test script writing, data requirements, or environments).
  • Test implementation – Provide a brief description of how this test will be implemented (which roles will perform the tests, or what the life cycle of a defect will be).
  • Test reporting – Define how the test progress will be reported and how the results/quality will be analyzed and reported, such as reporting for internal project use and reporting to senior business stakeholders.
  • Test administration tool(s) – Determine the tools that will be used to store, review, and manage the test framework, test cases, and test results. Also consider which tools will be used to map test cases to requirements/scope, such as Azure DevOps, Kira, or Microsoft Excel.
  • Test ownership – Define who is accountable for the outcome of this test.

Information from the preceding lists applies to all test types. Key test phases and types that could be covered include:

  • Unit testing
  • Functional/Process testing
  • System integration testing
  • End-to-end testing
  • User acceptance testing (UAT)
  • Regression testing

Key non-functional test types that could be covered include:

  • Performance testing
  • Data validation
  • Security testing

This list is not comprehensive, and depending on the nature of the project, other types of tests might be relevant, such as point of sale (POS) testing for retail stores or scanning device testing for warehouse applications.

Additional questions that are especially relevant for a given test type or phase, and that might not be adequately covered by the previous categories, include:

  • Have functional test cases been defined from requirements and/or business scenarios?
  • Does functional testing cover all functional modules?
  • Are the functional test scripts validated with business users?
  • Does the system integration testing strategy explain how to create a production-like test environment for conducting system integration testing?
  • Has a method been defined to synchronize/resynchronize all participating systems in system integration testing?
  • Have end-to-end test cases been validated with the owner of each functional module?
  • Does the end-to-end testing strategy consider usability testing?
  • Have the key stakeholders for UAT been identified?
  • Does the UAT plan clearly document the role of each stakeholder in the UAT phase?
  • Have you set up a clear plan for communicating with all required stakeholders during UAT?
  • Has each main process been decomposed to include subprocesses?
  • Have the test scenarios been prioritized in UAT?
  • Does the UAT test plan include appropriate UAT test environment provisioning?
  • Is user training for the testers planned before UAT begins?
  • Has an adequate definition been established for the core set of tests that would constitute a regression test suite?
  • Is a process in place to identify recent changes (at a high level) for regression testing?
  • Does the test plan include automating regression testing?
  • Has a process been defined for data validation testing?
  • Have the correct business stakeholders been identified for conducting data validation testing?
  • Is a plan in place for conducting end-to-end testing on migrated data?
  • Does data validation testing include data reconciliation reports and plans?
  • Have key areas for security testing been identified?
  • Does the test plan require all necessary security roles and privileges to be defined and populated before UAT or system integration testing?
  • Does the security testing strategy include your organization's security requirements?
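As one of the questions above notes, data validation testing should include data reconciliation reports. The following is a minimal sketch of one such check: comparing record counts per entity between a legacy source and the migrated target. The entity names and counts are hypothetical examples, and a real reconciliation would typically also compare key field values and totals, not only counts.

```python
# Minimal data reconciliation sketch for data validation testing.
# Entity names and counts below are hypothetical examples.

def reconcile(source_counts, target_counts):
    """Compare record counts per entity between the legacy source and
    the migrated target; return entities whose counts differ."""
    mismatches = {}
    for entity, expected in source_counts.items():
        actual = target_counts.get(entity, 0)
        if actual != expected:
            mismatches[entity] = (expected, actual)
    return mismatches

# Counts extracted from the source system and the migrated environment.
source = {"Customers": 12500, "Vendors": 830, "OpenInvoices": 4210}
target = {"Customers": 12500, "Vendors": 829, "OpenInvoices": 4210}

for entity, (expected, actual) in reconcile(source, target).items():
    print(f"{entity}: expected {expected}, migrated {actual}")
```

A report like this gives the business stakeholders who own data validation a concrete, repeatable artifact to sign off against, rather than relying on spot checks.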

Test administration

Planning, preparing, conducting, and reporting on testing require significant management. For the simplest projects, this process might be managed through spreadsheets, but spreadsheets can become unwieldy and difficult to track for anything more complex. Most projects use some form of task management software, and many organizations also use automation tools to plan and create tests, run tests, and report the test results. This topic focuses on answering questions such as:

  • What test administration tools are being used, and in what manner, to map testing to source requirements (traceability matrix)?
  • What test administration tools are being used to manage the identification and storage of test cases?
  • What test administration tools are being used to manage the allocation of resources and to track the full life cycle of a test, from creating the test through running it, collating the results, logging defects, and then resolving and retesting?
  • What tooling is being used to automate the running of test types and collecting the results?
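The traceability matrix mentioned above can be illustrated with a small sketch: a mapping from test cases to the requirements they validate, inverted to report which requirements have no test coverage. The requirement and test-case IDs are hypothetical, and in practice this data would come from a tool such as Azure DevOps or a spreadsheet export rather than being hand-coded.

```python
# Minimal traceability-matrix sketch: which test cases cover which
# requirements, and which requirements are left uncovered.
# All IDs below are hypothetical examples.

requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]

# Each test case lists the requirements it validates.
test_cases = {
    "TC-101": ["REQ-001"],
    "TC-102": ["REQ-001", "REQ-002"],
    "TC-103": ["REQ-004"],
}

# Invert the mapping: for each requirement, which test cases cover it?
coverage = {req: [] for req in requirements}
for tc, reqs in test_cases.items():
    for req in reqs:
        coverage.setdefault(req, []).append(tc)

uncovered = [req for req, tcs in coverage.items() if not tcs]
print("Uncovered requirements:", uncovered)  # REQ-003 has no test case
```

Reviewing the uncovered list during the workshop is one simple way to check that the test scope genuinely maps to the project scope.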