Food and Drug Administration (FDA) Compliance with Visual Studio 2010

Northwest Cadence

June 2011

This white paper focuses on the initial chapter of Title 21 of the Code of Federal Regulations (referred to simply as CFR 21), which governs food and drugs that are manufactured in the United States. The other chapters of CFR 21, which cover the Drug Enforcement Agency (DEA) and the Office of National Drug Control Policy (ONDCP), are outside the scope of this white paper.

Applies To

Visual Studio 2010

Visual Studio Team Foundation Server 2010

Introduction to CFR 21 Software Compliance

Validation is the Goal

Documentation is the Means

Achieving Compliance By Using Visual Studio 2010 and Team Foundation Server 2010

Quality Planning

Requirements

Software Design

Construction or Coding

Testing

Maintenance and Support

Electronic Signature Requirements

At Northwest Cadence, the assistance we are asked for most often with FDA compliance involves either CFR 21 Part 11 (Electronic Signatures and Electronic Records) or CFR 21 Part 820 (Quality Systems).

Because we predominantly work in the application lifecycle management (ALM) arena, the focus of our efforts is generally around providing compliance throughout the creation of software, either stand-alone or embedded within hardware. The actual CFR 21 Part 11 and CFR 21 Part 820 regulations focus on specific requirements that may seem to be only somewhat related to software development. For instance, CFR 21 Part 11 defines the requirements for a digital signature and electronic record to be considered valid, whereas CFR 21 Part 820 governs the quality of medical devices. However, upon examination, it becomes clear that there is a strong focus on software development and acquisition. Only 3 pages of CFR 21 Part 11 are directly concerned with electronic signatures and documents. All the remaining pages cover what is required to verify and validate a software system that will maintain electronic signatures and records. CFR 21 Part 820 only briefly discusses software verification and validation, but the FDA has provided a document, General Principles of Software Validation, that covers the recommended practices and artifacts to prove compliance. Thus, the software development process is a critical component of compliance. In fact, as the FDA writes in General Principles of Software Validation, “…software engineering needs an even greater level of managerial scrutiny and control than does hardware engineering.”

Given the applicability of the General Principles of Software Validation to both CFR 21 Part 11 and CFR 21 Part 820, this white paper will focus on the general principles, which will enable compliance for Part 11 and Part 820.

Achieving compliance with federally mandated regulations can be difficult, but this difficulty can be significantly reduced through the use of automated systems and tools that create and enable a defined process, provide an audit trail, and report on full traceability between components of the system.

Introduction to CFR 21 Software Compliance

The good news is that CFR 21 Part 11, CFR 21 Part 820, and the General Principles of Software Validation do not specify a particular methodology. In fact, the FDA believes that it “should consider the least burdensome approach”. This recommendation allows us to consider the use of agile methodologies to deliver validated software that will pass compliance audits. In fact, our clients have used both agile and formal development methodologies to deliver compliant software, with the agile teams delivering value incrementally.

The bad news is that the FDA states that the guidelines set forth in the General Principles of Software Validation are “what we believe is the least burdensome way for you to comply with those requirements.” In many cases, these guidelines appear to be incompatible with agile software delivery. For instance, emphasis is placed on complete requirement specifications and thorough documentation throughout the process. One of the most significant barriers to agile development is the traceability requirement. Without a sufficiently powerful ALM tool, the documentation and reporting burden can overwhelm the flow that is the foundation of agile software delivery. None of this prevents agile delivery; it just places some constraints on an agile process.

The reality is that most compliant software lifecycles will fall somewhere between a stage-gate development process and a purely iterative one. Because agile software development has become widely accepted, this white paper will provide tips for how to achieve compliance while still maintaining agility, where appropriate. In addition, this white paper will highlight how using Visual Studio 2010 and Team Foundation Server can help you achieve compliance, while still remaining agile.

Validation is the Goal

The FDA considers validation to be "confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled." That is, to achieve compliance with FDA regulations, we must show that we have met the requirements of the application. To do this, we need objective evidence. Although a variety of evidence can be considered sufficient, the General Principles of Software Validation are fairly specific as to the level of documentation that is required.

One method for achieving validation is to verify the application as it is being developed. The General Principles of Software Validation distinguish between validation and verification as independent activities. The General Principles of Software Validation state that “Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent conclusion that software is validated.” Thus, we can use the artifacts of the development process as supporting data for validation.

Documentation is the Means

CFR 21 Part 11 puts it clearly: “We consider thorough documentation to be extremely important to the success of your validation efforts.” To successfully pass your compliance audits, you will need documentation. It is likely that you will need a lot of documentation, but do not confuse documentation with paperwork. Although paper may be an important part of your validation plan, it does not have to include the exhausting piles of busywork created by many organizations that seek compliance. Instead, the documentation requirement can be met with relatively simple paperwork, coupled with deep, thorough traceability through the complete development lifecycle. This traceability can be captured in Team Foundation Server 2010 and reported on in a wide variety of formats, including, if need be, paper.

Achieving Compliance By Using Visual Studio 2010 and Team Foundation Server 2010

If you enumerated the requirements of the General Principles of Software Validation for compliance and the features of Visual Studio 2010 and Team Foundation Server 2010, you might think that these Microsoft tools were built solely to help in achieving compliance. There are few recommendations in the General Principles of Software Validation that are not matched by a feature in the Visual Studio 2010 suite of tools.

Features in Visual Studio 2010 correlate with FDA compliance requirements because Visual Studio was designed with fundamental good practices, end-to-end traceability, and auditability in mind. In fact, one of the fundamental requirements of any ALM tool that supports regulatory compliance is full traceability and auditability, and Team Foundation Server was designed as a tool to meet those needs.

To see how Visual Studio 2010 and Team Foundation Server 2010 can help you achieve compliance with FDA requirements, including CFR 21 Part 11 and CFR 21 Part 820, this white paper will follow Section 5 of the General Principles of Software Validation. That section discusses the typical software lifecycle activities and tasks that support validation. It also provides many of the concrete recommendations about the types of data that should be captured during the development process. We’ll explore the seven software lifecycle activities that are outlined in the General Principles of Software Validation: Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes.

Quality Planning

“A software life cycle model and associated activities should be identified, as well as those tasks necessary for each software life cycle activity… A primary goal of software validation is to then demonstrate that all completed software products comply with all documented software and system requirements.” - General Principles of Software Validation

The validation plan is one of the critical steps in establishing the correct foundation for software validation. It should be created at the start of the development effort, because it provides the framework for the data that will be generated to prove validation. CFR 21 Part 11 states that “Validation documentation should include a validation plan, validation procedures, and a validation report, and should identify who in management is responsible for approval of the plan, the procedures and the report.” In general, the validation plan provides guidance to the team about how compliance will be achieved. This document does not have to be long, but it should provide sufficient guidance for the team as to the process, tools, and techniques that must be used.

In addition to the validation plan, management “must identify and provide the appropriate software development environment and resources.” Basically, the overall structure of the team, development environment, configuration management structure, and software development process must be identified.

Visual Studio 2010 Process Templates

Out of the box, Visual Studio 2010 provides much of the infrastructure that is required for validation. Each product development effort in Team Foundation Server is enabled by a process template. A process template specifies the way requirements, test cases, bugs, tasks, risks, change requests, and other artifacts will be created and tracked. This includes several out-of-the-box reports, template documents, and enabling documentation. Through the tight integration with Visual Studio 2010, a process template can also enforce a wide range of behaviors, including requiring formal sign-off during various workflows, tracking whether required tests have been run, and enforcing linkages between code and the requirements that dictate that code be written. Finally, each process template provides a configurable set of guidance that specifies how the development process should be conducted.

These process templates are easily customized, but many organizations find that one of the out-of-the-box templates will fulfill their needs with little or no customization. A frequently used process template for many compliance needs is the MSF for CMMI Process Improvement – v5.0 template. More information is available at MSF for CMMI Process Improvement v5.0.

Additional Development Tools in Visual Studio 2010

In addition to the process templates, Visual Studio 2010 provides many integrated tools that will be used during the development effort. These include architectural, development and testing tools; automated build infrastructures; integrated version control; and reporting engines. To effectively use these tools for compliance, you should set them up for use before the development effort starts.

Tip

Consider a Sprint 0 where the environment is set up, initial solutions and projects are set up in Visual Studio, empty test lists are created, automated builds are created that both compile and test solutions in Visual Studio, branching patterns are defined, and the Definition of Done is created by the team.

A few key practices should be in place early. The version-control structure should be in place, including empty solutions and projects. This will enable automated builds to be created for each solution. Test lists, which will contain the automated tests, should be created and added to the automated build. Branching patterns should be defined, and code promotion paths should be identified. Alerts should be set up to notify people about deviations from the process, and reports should be identified that will be used to track overall quality, progress, and risk.

Requirements

“The software requirements specification document should contain a written definition of the software functions. It is not possible to validate software without predetermined and documented software requirements.”- General Principles of Software Validation

Requirement specification is very important for achieving compliance. Without accurate, well-specified, and documented requirements, the FDA believes that it is impossible to validate software and, therefore, achieve compliance. Thus you must elicit, document, and maintain accurate requirements throughout the development effort. There is no specified format for gathering or documenting requirements. However, the General Principles of Software Validation offer specific guidance: “Requirements development includes the identification, analysis, and documentation of information about the device and its intended use. Areas of special importance include allocation of system functions to hardware/software, operating conditions, user characteristics, potential hazards, and anticipated tasks. In addition, the requirements should state clearly the intended use of the software.” In addition, you should make sure that you maintain a full history of all changes to the requirements as they evolve.

For agile teams, the focus on documented requirements can be daunting. In fact, it feels as if you may have to resign yourself to a waterfall development approach, at least as far as requirements gathering is concerned. The FDA realizes that the requirements specification detail required up front may run counter to an agile approach. They state “Requirements can be approved and released incrementally, but care should be taken that interactions and interfaces among software (and hardware) requirements are properly reviewed, analyzed, and controlled.” In our experience, formal requirement specification is required; a traditional user story does not contain nearly enough detail. A typical user story is meant to lead to a conversation to more fully specify the requirements. The documentation requirements for FDA compliance require a far more formal requirement format, and they come close to mandating a formal requirement review. This does not mean that you cannot delay requirement elaboration until the last responsible moment. It just means that the elaboration must meet much stricter documentation requirements than you may be used to. In many cases, the delayed elaboration of requirements may provide an advantage to agile teams, because a requirement that does not make it into the product never needs to be elaborated, and does not impact the other requirements. Waterfall teams, on the other hand, will generally have conducted significant analysis of how requirements interact with each other. Removing a requirement late in the development process forces the team to reanalyze the interactions between requirements and fully understand the impact of removing the requirement. Agile teams must conduct the same impact analysis, but they do it incrementally and only when requirements are added.

Tip

Use more formal specifications than you would with a traditional agile user story. However, to maintain agility, do not finish all of the requirements up front. Instead, fully specify only the requirements that can be built in the next iteration or two.

Traceability between requirements and other artifacts, such as risks, test runs, and bugs, is another critical element for achieving compliance. In fact, this traceability is one of the recurring themes throughout both CFR 21 Part 11 and the General Principles of Software Validation. Traceability between test cases and test runs is explicitly encouraged, as are the relationships between risks and requirements. Other forms of traceability are not only encouraged but also, in our experience, directly and positively affect the assessment process.

Requirements Auditability

Not only must requirements be fully documented, but changes to those requirements must be tracked for audit purposes. Team Foundation Server 2010 tracks all changes to all work items, including requirements. This means that, at any time, you can understand the history of a requirement, and you can do so conveniently. In addition to a change log, you can perform an ‘as of’ query that will return the exact value of a requirement at any time in the past.

Figure 1: Work item history is built into Team Foundation Server 2010

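For teams that want to pull this history programmatically, the same "as of" capability is exposed through WIQL in the Team Foundation Server 2010 client object model. The following C# sketch is an illustration of the approach rather than a prescribed implementation; the collection URL, team project name, and date are placeholders, and it assumes the TFS 2010 client assemblies are referenced.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class RequirementsAsOf
    {
        static void Main()
        {
            // Placeholder collection URL and team project name.
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // The WIQL ASOF clause returns each requirement as it existed on the given date.
            WorkItemCollection requirements = store.Query(
                "SELECT [System.Id], [System.Title], [System.State] " +
                "FROM WorkItems " +
                "WHERE [System.TeamProject] = 'MedicalDeviceApp' " +
                "AND [System.WorkItemType] = 'Requirement' " +
                "ASOF '6/1/2011'");

            foreach (WorkItem requirement in requirements)
            {
                Console.WriteLine("{0} (rev {1}): {2} [{3}]",
                    requirement.Id, requirement.Rev, requirement.Title, requirement.State);
            }
        }
    }

A report built on a query like this can show an auditor exactly which requirements existed, and in what state, on any date of interest.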

Requirements Traceability

Team Foundation Server 2010 was designed to provide end-to-end visibility into the entire application lifecycle. Not only does it capture the traceability data effectively, but it also provides a number of visualizations and reports. This allows the requirements and all associated artifacts to be conveniently audited.

Figure 2: Work items have high-fidelity relationships to other work items


Figure 3: Links between work items allow for full traceability, all the way to code


Software Design

“Use error caused by designs that are either overly complex or contrary to users' intuitive expectations for operation is one of the most persistent and critical problems encountered by FDA.”- General Principles of Software Validation

The design of any system is one of the most difficult tasks in any software development effort. When that effort is targeted at a medical device or will maintain highly personal medical information, the task becomes even more demanding.

The General Principles of Software Validation encourage the consideration of control flow, data flow, complexity, timing, sizing, memory allocation, and criticality analysis in addition to links between software modules, hardware interfaces, and user interactions. In many cases, the most understood format for documentation is the Unified Modeling Language (UML). However, care must be taken to keep the models up to date and to make sure that the models are linked to the appropriate requirement.

One reason for a detailed design specification is to constrain the programmer to stay within the intent of the requirements and design and to reduce the need for ad-hoc design decisions. As the design evolves during the development process, this information must be brought to the developer’s attention. In addition, any changes brought about by coding need to be revalidated against the design.

Support for the Unified Modeling Language

UML diagrams can be created directly in Visual Studio 2010. Out-of-the-box support is provided for Class, Sequence, Use Case, Activity, and Component diagrams. These diagrams, together with other architectural artifacts, support the creation of a complete design specification. More importantly for validation, however, these artifacts can be linked directly to the requirements that they support.

Figure 4: UML diagrams can be easily associated with work items


Layer diagrams

The General Principles of Software Validation state that “Source code should also be evaluated to verify its compliance with the corresponding detailed design specification.” In the past, this has been so difficult to achieve that few assessments could provide solid evidence linking code to architectural entities. With the introduction of the Layer Diagram in Visual Studio 2010, this is no longer the case.

Tip

You do not have to specify your entire architecture up front. However, provide sufficient detail in a Layer Diagram for the next two iterations, and include architectural validation during the automated nightly build. This supports a very fast feedback loop when code changes necessitate architectural changes.

The Layer Diagram is used to visualize the high-level architecture of a system and to make sure that the code, as it evolves, stays consistent with the design. It organizes artifacts from a Visual Studio solution into logical groups, called layers. These layers describe the major components of the system and the interactions between them. These interactions, generally dependencies, are represented by arrows that connect any two layers. By linking code entities to layers and specifying the interactions between them, you can use a Layer Diagram to enforce architectural constraints on the code. These constraints can be validated on demand, at the time of code check-in, or even during the nightly build.

Layer Diagrams help make code easier to understand, update, reuse, and maintain, and they make sure that architectural designs are not violated as time passes and changes are made to the code base. This is critical for providing proof that system changes did not reach across architected boundaries to cause unintended consequences.

Figure 5: Layer diagrams help you not only visualize but also enforce the architecture of the system


Architecture Explorer

When requirements change, architectures evolve, or changes are made that may affect the system, it is critical to understand the impact on the overall system. Historically, this has been difficult. Visual Studio 2010 introduced Architecture Explorer, a tool that allows developers, testers, architects, and others to quickly explore the existing architecture of an application. Architecture Explorer lets users quickly identify dependencies between any two blocks of code, identify circular references that may cause instability, visualize all incoming calls to a class or method, and much more. By providing this level of detail and the ability to drill down from high-level architectural understanding to individual blocks of code, Architecture Explorer enables much of the traceability from design to code that is so important for FDA compliance. As stated in the General Principles of Software Validation, “…care should be taken that interactions and communication links among various elements are properly reviewed, analyzed, and controlled.”

Figure 6: Architecture Explorer can be used to quickly explore dependencies in the code


Construction or Coding

“Source code should be evaluated to verify its compliance with specified coding guidelines. Such guidelines should include coding conventions regarding clarity, style, complexity management, and commenting… Source code should also be evaluated to verify its compliance with the corresponding detailed design specification.”- General Principles of Software Validation

Much of the coding effort remains outside the requirements for validation. For instance, the General Principles of Software Validation say very little about the choice of computer language, unit-test framework, or what framework libraries are used. Instead, the focus is on compliance against several measures that are closely correlated with code quality, such as standard coding conventions, measuring code complexity, and adherence to the design specification.

Another development task generally done during coding is unit testing. The General Principles of Software Validation state that “Unit (module or component) level testing focuses on the early examination of sub-program functionality and ensures that functionality not visible at the system level is examined by testing. Unit testing ensures that quality software units are furnished for integration into the finished software product.” Although they do not specify that these unit tests should be automated, modern development practices generally consider all unit tests to be automated. This automation is especially important because it allows for “code coverage” calculations – the amount of code actually exercised by the automated unit tests.

Finally, release management is a critical piece of constructing any kind of application. Not only must you be able to show that the compilation succeeded and passed any unit tests, but it is important to be able to demonstrate exactly what functionality, at what version, is available in each version of the application. This traceability makes sure that when a version is put into production, the exact contents of that deployment can be audited.

Version Control

Enterprise-level version control is included with Team Foundation Server 2010. This makes sure that all source code changes are tracked and auditable. Unlike some version-control systems, Team Foundation Server 2010 is built from the ground up to make sure that developers cannot manipulate their check-ins after they have been committed and built. This gives auditors confidence in the traceability. In addition, several advanced features ensure not only that traceability is auditable but also that it provides deep information. For instance, version control in Team Foundation Server 2010 groups related code changes into “changesets,” with each changeset generally representing a measurable change to the system, such as a bug fix. These changesets are then linked to the work items for which the code change was done. In the following illustration, the check-in was done to resolve a specific bug. The bug, its detail, and a direct link to the bug are available from the Work Items tab.

Tip

Ensure that every check-in, regardless of how small, is associated with a work item. This traceability is fundamental to achieving compliance with FDA regulations.

Figure 7: Code changes are tracked and auditable

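As an illustration of how this linkage can be audited programmatically, the following C# sketch uses the TFS 2010 client object model to retrieve a single changeset and list the work items it was checked in against. The collection URL and changeset number are placeholders, and the snippet assumes the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.VersionControl.Client assemblies are referenced.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    class ChangesetAudit
    {
        static void Main()
        {
            // Placeholder collection URL and changeset number.
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var versionControl = collection.GetService<VersionControlServer>();

            Changeset changeset = versionControl.GetChangeset(4711);
            Console.WriteLine("Changeset {0} by {1} on {2:u}: {3}",
                changeset.ChangesetId, changeset.Committer,
                changeset.CreationDate, changeset.Comment);

            // Each associated work item records why the code change was made.
            foreach (var workItem in changeset.AssociatedWorkItems)
            {
                Console.WriteLine("  {0} {1}: {2}",
                    workItem.WorkItemType, workItem.Id, workItem.Title);
            }
        }
    }

A small utility of this kind can feed an audit report that answers, for any changeset, which bug, change request, or requirement motivated it.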

Code Metrics

Several standard code metrics are included with Visual Studio 2010. These metrics are generally used to understand the complexity and maintainability of the underlying code. This means that developers, testers, architects, and even auditors can understand which parts of the code should be refactored or more thoroughly tested. This helps them identify areas of highest risk, because complexity is very often inversely correlated with quality and maintainability.

There are five measures that are calculated automatically by Visual Studio.

  • Maintainability index – an aggregate measure that highlights the overall maintainability of the code

  • Cyclomatic Complexity – measure of the structural complexity of the code

  • Depth of Inheritance – the number of class-level inheritances in an object-oriented design

  • Class Coupling – the number of dependencies between this class and all other classes in the system

  • Lines of code – the number of non-comment lines of code in a class or method

Figure 8: Code metrics highlight potential areas of concern

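To make the cyclomatic complexity measure concrete, consider the small, purely illustrative routine below; the class name and limit values are hypothetical. Visual Studio counts one path through the method plus one for each decision point, so the two if statements and the || operator give this method a cyclomatic complexity of 4.

    using System;

    public static class DoseRules
    {
        // Illustrative only: two 'if' statements and one '||' operator are the
        // decision points, so Visual Studio reports a cyclomatic complexity of 4.
        public static bool IsDoseWithinLimits(double doseMg, double weightKg)
        {
            if (weightKg <= 0)
            {
                throw new ArgumentOutOfRangeException("weightKg");
            }

            double mgPerKg = doseMg / weightKg;

            if (mgPerKg > 5.0 || doseMg > 400.0)
            {
                return false;
            }

            return true;
        }
    }

As the number of decision points in a method grows, the number of test cases needed to exercise every path grows with it, which is why reviewers watch this metric closely.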

Static Code Analysis

The static code analysis in Visual Studio 2010 has several hundred rules that check code for potential errors in several areas, including design, naming, reliability, and security. These rules can be combined into rule sets that allow only a specific subset of the rules to be run, highlighting potential problems. These range from the “Minimum Recommended Rules” set, which focuses on the most critical problems in your code, including potential security holes, application crashes, and other important logic and design errors, to the “All Rules” set, which contains every available rule. It is also easy to configure a custom rule set to focus your code analysis on your specific needs.

Figure 9: Preconfigured rule sets make the analysis rules easy to use


Figure 10: Code analysis rule sets are easy to customize


Automated Unit Testing

Visual Studio 2010 provides an out-of-the-box framework that allows the quick creation and execution of automated unit tests. In addition, it provides an advanced code coverage tool that not only provides numeric insights into the amount of code covered by the automated unit tests but also graphically highlights code that was not touched by any unit test. This lets a developer quickly identify any uncovered code and create a unit test that will effectively test its functionality. This is especially useful for discovering error handling code that may not be executed during normal operation.

Figure 11: Code coverage metrics are available for all unit test runs

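A typical automated unit test written against this framework looks like the following C# sketch. The DoseCalculator class is a hypothetical example included only so the sample is self-contained; the MSTest attributes (TestClass, TestMethod, ExpectedException) are the ones supplied by Visual Studio 2010.

    using System;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical class under test, included only so the sample compiles.
    public class DoseCalculator
    {
        public double ComputeDose(double weightKg, double mgPerKg)
        {
            if (weightKg < 0)
            {
                throw new ArgumentOutOfRangeException("weightKg");
            }
            return weightKg * mgPerKg;
        }
    }

    [TestClass]
    public class DoseCalculatorTests
    {
        [TestMethod]
        public void ComputeDose_ReturnsZero_WhenWeightIsZero()
        {
            double dose = new DoseCalculator().ComputeDose(weightKg: 0, mgPerKg: 1.5);
            Assert.AreEqual(0.0, dose, 0.0001);
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentOutOfRangeException))]
        public void ComputeDose_Throws_WhenWeightIsNegative()
        {
            new DoseCalculator().ComputeDose(weightKg: -1, mgPerKg: 1.5);
        }
    }

Tests written this way run in the Visual Studio test framework, can be included in the nightly build, and contribute to the code coverage figures shown above.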

Automated Build System

Team Foundation Server 2010 includes a comprehensive automated build system that is named Team Foundation Build. This system allows tracking of all builds, including those that are released for testing and production. Because of the tight integration with work items and version control, each build has a substantial amount of traceability information embedded in it. For instance, each build report provides data on which changesets were compiled in the build and with which work items they are associated. By using tools in Visual Studio 2010, you can see exactly what code changes have occurred between any two builds, in addition to what work items were worked on during that time. This provides a great deal of information that is useful to anyone who is assessing the development process or investigating the differences between releases.

Figure 12: Build reports highlight associated code changes, work items, and affected manual and automated tests

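The same build data is also accessible programmatically. The following C# sketch, in which the collection URL and team project name are placeholders, uses the TFS 2010 build client API to list every build for a project with its outcome and timestamps, as a simple starting point for a release audit trail.

    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    class BuildAudit
    {
        static void Main()
        {
            // Placeholder collection URL and team project name.
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();

            // List each build for the project with its outcome and timestamps.
            foreach (IBuildDetail build in buildServer.QueryBuilds("MedicalDeviceApp"))
            {
                Console.WriteLine("{0}  {1}  started {2:u}  finished {3:u}",
                    build.BuildNumber, build.Status, build.StartTime, build.FinishTime);
            }
        }
    }

Because every release candidate corresponds to a tracked build, a listing like this lets an assessor tie each deployed version back to its changesets and work items.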

Testing

“Test procedures, test data, and test results should be documented in a manner permitting objective pass/fail decisions to be reached. They should also be suitable for review and objective decision making subsequent to running the test, and they should be suitable for use in any subsequent regression testing. Errors detected during testing should be logged, classified, reviewed, and resolved prior to release of the software. Software error data that is collected and analyzed during a development life cycle may be used to determine the suitability of the software product for release for commercial distribution. Test reports should comply with the requirements of the corresponding test plans.”- General Principles of Software Validation

Testing is critical to compliance. Tests must be documented, complete, and traceable to requirements, and a history of test case results must be maintained. Both CFR 21 Part 11 and CFR 21 Part 820 compliance rely on the ability to create and audit test plans, test cases, and test results in many environments, with full traceability between software requirements and the test runs that validate the behavior that they describe.

Several levels of testing will support compliance. The first is unit testing, which was discussed in the section about Coding. The next is integration testing, which deals with “the transfer of data and control across a program's internal and external interfaces.” Integration testing is critical for understanding how components and subsystems interact with each other. Finally, system testing “demonstrates that all specified functionality exists and that the software product is trustworthy. This testing verifies the as-built program's functionality and performance with respect to the requirements for the software product as exhibited on the specified operating platform(s).” System testing is performed to make sure that the product behaves as expected, especially with regard to the system requirements.

In addition, testing must be conducted in different environments. It is expected that software testing will be done throughout the development process, not just near the end. This testing will take place predominantly in the development and test areas when the software is in development. Prior to release, however, the software is expected to be tested in a representative user site with end-users of the application. This is critical because user expectations for how a system should behave often differ from those of the technical development staff. As noted earlier in this white paper, the General Principles of Software Validation state that “Use error caused by designs that are either overly complex or contrary to users' intuitive expectations for operation is one of the most persistent and critical problems encountered by FDA.” Thus, effective testing must involve representative users of the application.

Tip

Involve end users of the application early in the development lifecycle. Early feedback is very important to constructing a user-friendly application that meets both the stated and unstated expectations of the end user.

Testing has one primary goal – to make sure that the application meets all specified requirements. Testing auditability means verifying not only that the tests have been run but also that the correct tests, linked to each requirement, were run. We must also know their results, whether the requirements have been effectively “covered” with tests, and whether the tests are completed. No tool can guarantee that a method, component, requirement, or a full application has been fully tested. However, tools can provide visibility into the test cases that verify a requirement. They can also track all bugs that have been logged against the requirement, the code that was implemented to resolve those bugs, and the identified risks that are associated with any untested functionality. Thus, a good tool can provide the information that you need to make important decisions around how much testing is enough.

End-to-End Traceability between Requirements, Test Cases, and Bugs

Visual Studio 2010 provides complete traceability across the software development lifecycle. Of particular importance for testing is the traceability between requirements, test cases, test results, and bugs. By using the testing tools that are available out of the box, developers and testers can effectively collaborate on linking test cases to requirements, finding and fixing bugs, and improving the quality of the code. These day-to-day activities result in data that is tracked by Team Foundation Server 2010 and that can be reported on through both ad-hoc and out-of-the-box reports.

The following illustration highlights a report that displays the quality of each requirement with regard to the number of active test cases, the most recent status of test runs, and any bugs that are currently logged against the requirement. In addition, the report shows the amount of work that remains to complete the coding, testing, and deployment of this requirement. This built-in report powerfully highlights the traceability that is built in to Visual Studio 2010.

Figure 13: Overview reports highlight test run status and bugs


Test Case Management

The testing tools in Visual Studio 2010 help in creating and maintaining effective manual and automated test cases. Test plans in Microsoft Test Manager are auditable entities that contain test cases that are grouped by area, linked to a requirement, or both. This allows for complete traceability through all of the testing artifacts, including every test run. In many cases, this automated traceability replaces hundreds of pages of documentation and provides a much more reliable audit trail.

One of the most powerful features of test cases in Visual Studio 2010 is test impact analysis, which ties test run coverage information to source code. Thus, whenever the underlying source code changes, testers are alerted to which manual and automated tests must be re-run to keep the test runs current and valid. Test impact analysis is discussed more fully later in this white paper.

Automatic Tracking of Manual Tests

Using Test Runner in Microsoft Test Manager makes sure that even manual tests benefit from automated data collection. Test Runner tracks data that is involved in manual test runs. This includes action data that enables manual tests to be “rerun” in an automated manner and simplifies the repeated running of manual tests. In addition, manual test runs gather data about the system under test, up to and including the code paths that are being executed by the tests. All this data is maintained inside Team Foundation Server 2010 and provides a solid foundation for compliance.

One of the unique features of the Test Runner is the ability to track end-user testing. As the General Principles of Software Validation suggest, “Documented evidence of all testing procedures, test input data, and test results should be retained” and “During user site testing, records should be maintained of both proper system performance and any system failures that are encountered.” By using the Test Runner, a test team that is deployed to an end-user site can make sure not only that the appropriate data is tracked but also that, if any bugs are discovered, sufficient data is sent back to developers for remediation.

In conclusion, with the introduction of Test Runner in Microsoft Test Manager and the other testing tools in Visual Studio 2010, Microsoft has provided a solid testing foundation that can meet even the most demanding auditability and traceability needs.

Actionable Bugs

Bugs that are created by using Visual Studio 2010 provide a great deal of information that is usable by both developers and auditors. For instance, a bug that is created during a manual test run automatically has lots of information attached, including a video of the test run, information about the systems under test (such as memory usage and screen resolution), event log data that was collected from targeted computers, and even a log that lets developers very quickly step through the historical execution of the code.

Figure 14: Bugs capture detailed data automatically, which enables traceability and quick, accurate bug fixes


Automated Test Cases

In addition to supporting full traceability from requirement to test to bug to code, Visual Studio 2010 provides the capability to automate test cases. This enables them to be run every night during the nightly build process, ensuring that regressions are minimized and that the test result data is valid against the current version of the code. This capability assists compliance by making sure that any changes in code that fail tests are caught quickly, when remediation can be accomplished quickly and at minimum cost. It also guarantees that a record of successful test runs is kept, highlighting the stability of code under change.

Load and Stress Testing

Visual Studio 2010 also provides a powerful load test capability to make sure that the application can be tested under both realistic and unexpected loads. The load test tool allows data to be gathered for any failures or inconsistencies that appear when the application is put under load. It can also be used to determine realistic capacity limits for an application. These limits can then be incorporated into the runtime management of the system, providing early warnings of possible failure when the load exceeds those limits.

The FDA also places a high priority on running these load tests against applications that are deployed to representative end-user sites. “Some of the evaluations that have been performed earlier by the software developer at the developer's site should be repeated at the site of actual use. These may include tests for a high volume of data, heavy loads or stresses, security, fault testing (avoidance, detection, tolerance, and recovery), error messages, and implementation of safety requirements. The developer may be able to furnish the user with some of the test data sets to be used for this purpose.” By using the testing tools in Visual Studio 2010, appropriate load test plans can be easily re-run against applications that have been installed at an end-user site.

Maintenance and Support

“When changes are made to a software system, either during initial development or during post release maintenance, sufficient regression analysis and testing should be conducted to demonstrate that portions of the software not involved in the change were not adversely impacted. This is in addition to testing that evaluates the correctness of the implemented change(s).”- General Principles of Software Validation

The initial software development process often gets the most attention from business and software development teams alike. However, the maintenance and support phase of an application’s life can be the most critical. As the FDA notes in the General Principles of Software Validation, of 242 software-related recalls of medical devices, “192 (or 79%) were caused by software defects that were introduced when changes were made to the software after its initial production and distribution.” That is a substantial number of defects introduced after release.

To comply with FDA regulations, organizations must be able to identify problems that were encountered in production and how those problems were solved in maintenance.

Additionally, it is not enough to note which bugs were fixed in each release. Instead, “proposed modifications, enhancements, or additions should be assessed to determine the effect each change would have on the system.” (General Principles of Software Validation) Establishing end-to-end traceability between requirements, change requests, and reported bugs is absolutely critical, as is tracking the impact of the change on the whole system, especially on testing. In fact, the General Principles of Software Validation suggest that the reason for analyzing the impact of change requests and defects is to “determine the extent to which verification and/or validation tasks need to be iterated.” Thus, regulatory compliance suggests an understanding not only of the system structure and architecture but also of how requirements, change requests, and defects affect that system and each other. Traditionally, this has been very difficult to achieve reliably. Understanding how requirements, change requests, and defects interact is hard enough. Understanding the impact of code changes on tests has been almost impossible. Thus, many organizations that seek to comply with FDA requirements end up acknowledging the risk in this area and using manual means to show a “best effort” at traceability.

Test Impact Analysis

Through the deep traceability that Team Foundation Server 2010 provides, the test impact of code changes can be easily discovered. During both manual and automated test execution, Visual Studio 2010 tracks the code that is being executed during the test run. This information is correlated with the version of the code that is undergoing the tests. When code changes are applied to fix bugs, implement change requests, or for other reasons, Visual Studio 2010 can analyze the changes and, based on the prior test run data, determine which test cases were affected by the code changes. This helps testers quickly identify the minimal set of regression tests that must be run to guarantee that functionality remains intact after an upgrade.

Tip

After a requirement has stabilized, consider automating any manual test cases and configuring them to run during your nightly automated build. Regressions are found very quickly and can be handled very early.

Figure 15: Test impact analysis automatically identifies the minimal set of regression tests needed


Work Item Activity Reporting

In addition to test impact analysis, Visual Studio 2010 enables reporting on the work item activity between any two builds. This enables auditors to determine what features were added and what bugs were fixed in each maintenance release.

Figure 16: Development activity is automatically traced between any two builds or releases


Work Item Traceability

Because Visual Studio 2010 tracks relationships between work items, the changes to a requirement or the impact of a bug can be easily visualized. Figure 17 highlights the various links that work items can have with each other. This ensures that all work items that affect a requirement, such as change requests, test cases, child tasks, or related bugs, are easily tracked and identified.

Figure 17: Work item relationships are easily discovered

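These relationships can also be walked programmatically. The C# sketch below, in which the collection URL and work item ID are placeholders, loads a requirement work item through the TFS 2010 client object model and prints every typed link (Tested By, Child, Related, and so on) that connects it to other work items.

    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class TraceRequirement
    {
        static void Main()
        {
            // Placeholder collection URL and work item ID.
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            WorkItem requirement = store.GetWorkItem(1234);

            // Each typed link records how another work item relates to this requirement.
            foreach (WorkItemLink link in requirement.WorkItemLinks)
            {
                Console.WriteLine("{0} -> work item {1}",
                    link.LinkTypeEnd.Name, link.TargetId);
            }
        }
    }

Run for a single requirement, this kind of traversal produces the same web of relationships that Figure 17 shows visually, in a form that can be archived as audit evidence.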

Electronic Signature Requirements

“We understand that there is some confusion about the scope of part 11. Some have understood the scope of part 11 to be very broad. We believe that some of those broad interpretations could lead to unnecessary controls and costs and could discourage innovation and technological advances without providing added benefit to the public health. As a result, we want to clarify that the Agency intends to interpret the scope of part 11 narrowly.”- FDA Guidance for Industry Part 11, Electronic Records: Electronic Signatures – Scope and Application

Not every software development effort that is regulated by the FDA requires electronic signature support. However, some do. For instance, software that will ship with a medical device will generally require signatures (physical or electronic) on requirements, requirement changes, and design documents, particularly when hardware requirements and design are tracked alongside the software requirements. (See CFR 21 Part 820.30(c), 820.30(d), and 820.40(a) for more information.) Whether your particular software development effort requires electronic signature support must be determined on an organizational and project basis. Electronic signature compliance requires significant effort, both organizationally and from the development team in particular, and should not be done unless required.

One common misconception is that CFR 21 Part 11, a regulation specifically geared to Electronic Records and Electronic Signatures, requires that the software development process that is used to create compliant applications must also use electronic signatures. This requirement does not apply to all situations. In other words, you can create an application that complies with CFR 21 Part 11 without the development process itself complying with the regulation. However, it must be stressed that the determination as to whether your software development process requires electronic signature support is made by your organization, and such questions cannot be answered in a general white paper.

If you determine that you must use electronic signatures in your development process, you must do several things to comply with the FDA regulations. First, you must conduct an internal assessment to determine the level of signatures that are required. In many follow-up Guidance for Industry papers, notes, and public statements, the FDA has made it clear that it understands that requiring signatures (electronic or otherwise) can increase the burden on organizations and adversely affect their ability to quickly deal with change. Thus, although each organization must follow the FDA rules, these rules must be interpreted by the organizations that seek compliance. For instance, the main purpose of electronic signatures is “to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records, and to make sure that the signer cannot easily repudiate the signed record as not genuine.” (CFR 21 Part 11, Section 11.10) In addition, the regulation states that “Electronic signatures and handwritten signatures executed to electronic records shall be linked to their respective electronic records to ensure that the signatures cannot be excised, copied, or otherwise transferred to falsify an electronic record by ordinary means.” Thus, the level of strictness depends on the risk that is associated with the ability of an individual to “readily repudiate” a signature or to falsify a record by “ordinary means.” In the software development arena, this strictness rarely requires cryptographically secure digital signatures of the type that Verisign and others provide. Unlike a signature on a prescription authorizing the use of a dangerous drug, signatures on requirements and design documents only start the process. The software will ultimately go through massive amounts of additional design verification, functional testing, and validation before it is approved for market. This understanding does not decrease the importance of software requirements or design; it simply recognizes that the risks that are inherent in the requirements and design of software systems rarely reach the immediate severity of medical decisions made by health professionals who are working with an FDA-compliant system.

After you assess the level of risk, the next step is to identify the level of strictness that should apply to electronic signatures. The highest level requires cryptographically secure digital signatures, such as those that companies like Verisign provide, which maintain a hash of a record’s contents together with the digital signature so that any change to the underlying data irrevocably breaks the signature and shows tampering. For Team Foundation Server to support full digital signatures, the data should be exported to a secondary electronic record (possibly Microsoft Word or Excel) that is then signed, either digitally or manually. Or, if a completely integrated solution is desired, the appropriate work item forms must contain a custom field that allows full digital signatures to be used. This field is not provided out-of-the-box.
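To illustrate the idea of binding a signature to a record’s contents, the following C# sketch computes a SHA-256 hash over an exported record file. This is a conceptual example only, not a complete or compliant electronic-signature implementation: the point is that if the stored hash no longer matches the file, the record has been altered since it was signed.

    using System;
    using System.IO;
    using System.Security.Cryptography;

    public static class RecordIntegrity
    {
        // Computes a SHA-256 hash over the exported record so that any later
        // change to the file invalidates the stored hash/signature pair.
        public static string ComputeRecordHash(string exportedRecordPath)
        {
            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(exportedRecordPath))
            {
                return Convert.ToBase64String(sha.ComputeHash(stream));
            }
        }
    }

In a full implementation, the hash would be stored alongside the digital signature so that both the signer’s identity and the record’s integrity can be verified later.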

That said, for software development, it is far more likely that an organization will interpret the FDA regulations more literally and rely on more common methods to provide signatures. In this case, Team Foundation Server 2010 can sometimes be used directly out-of-the-box for most of the FDA requirements. However, the organization must meet additional security requirements. For instance, organizational Active Directory and account policies must comply with the regulations, items such as password expiration and logon timeouts must have documented procedures, and documentation that specifies how the organization plans to implement electronic signatures must be provided directly to the FDA.

Let’s examine some of the specific FDA requirements and how they can be met by Team Foundation Server 2010 and the appropriate security measures. (You can find the requirements predominantly in CFR 21 Part 11 Subpart C.) First, we cover the specific tasks that the organization requires and those that involve Active Directory policies:

  • Electronic signatures must be unique to a single individual and never shared or reused.

    • This requirement is satisfied directly by using Active Directory and Security Identifiers (SIDs).
  • The organization must verify the person's identity before granting an electronic signature.

    • This requirement is generally satisfied during the employment process, where appropriate identification is presented to the human resources department before onboarding.
  • The organization must certify to the FDA that it intends to treat items that are ‘signed’ by an Active Directory user as a legally binding equivalent to the user’s handwritten signature.

    • To meet this requirement, the organization must take specific actions regardless of the type of electronic signature used.
  • If biometrics are not used, the signature must contain at least two components (for example, a user identifier and a password).

    • This requirement is satisfied directly by Active Directory accounts.
  • The signature can be used only by its owner and not by anyone else.

    • This requirement is generally an organizational policy to prevent sharing logon credentials.
  • In addition to each electronic signature being unique, no two individuals can share an identifier and password.

    • By default, Active Directory satisfies this requirement, because the domain logon must be unique across all users.
  • Identifiers and passwords must be periodically checked, and the password must be changed at appropriate intervals.

    • Organizations generally have policies that require periodic password changes, in addition to reviews of current users of the system.
  • Policies must be in place to deal with the loss or revocation of electronic signatures.

    • This requirement must be handled at the organizational level with policies about when Active Directory accounts are blocked.
  • Safeguards must be in place to prevent the unauthorized use of identifiers or passwords.

    • This requirement is handled by general Active Directory security.
  • Devices that generate passwords or codes must be tested periodically to make sure that they operate correctly and have not been changed.

    • This requirement applies to any external devices such as “smart cards” or other tools. In addition, requiring sufficiently complex passwords generally falls into this category.

In addition to these requirements, a couple of additional safeguards are appropriate for Team Foundation Server 2010. First, direct database write-access should be restricted to a few individuals and monitored closely. Theoretically, a user’s identification could be falsified through careful modification of the database tables directly. In most cases, this strategy falls outside the “ordinary means” of falsification. However, depending on the organization’s interpretation, tighter limits may be appropriate. Second, destructive commands, such as those that destroy work items, work item types, version-controlled files, and whole team projects, should be restricted. Although this policy does not directly affect electronic signatures, it significantly affects traceability and the creation of a full audit trail.

After all Active Directory configurations have been made, organizational policies have been created, and the letter has been submitted to the FDA, you are ready to use Team Foundation Server 2010 to track your development process. As discussed earlier in this section, just using Team Foundation Server provides a complete, auditable tracking of all activities that affect either work items or artifacts that are stored in version control. This information satisfies the FDA regulations that require the “Use of secure, computer-generated, time-stamped audit trails to independently record the date and time of operator entries and actions that create, modify, or delete electronic records. Record changes shall not obscure previously recorded information.” (CFR 21 Part 11.10 (e))

In summary, remember that not all software development efforts require that you use electronic signatures, even when you are creating applications that comply with CFR 21 Part 11. If you determine that signatures are required (electronic or physical), action must be taken to make sure that your systems comply. Team Foundation Server 2010, together with good Active Directory practices, is a solution that satisfies many needs for developing compliant software. Stricter interpretations of the FDA requirements may require customizations or the introduction of cryptographically secure digital signatures.

In the vast majority of regulations, compliance is achieved first through auditability. The ability of a regulator or assessor to audit the end-to-end development process, following a trail that identifies “who, what, when, where, and why” for each change to the system, establishes the foundation from which an audit can be passed. This auditability is provided by a tool and a system that can establish traceability across both artifacts and time.

In our work helping clients comply with various regulations, the basic challenge usually comes down to traceability.

Compliance with any regulation requires auditability of the development lifecycle – traceability that identifies the “who, what, when, where, and why” for each change to the system. Although that sounds easy enough, traceability is generally difficult and labor-intensive to create. In many cases, this leads to the heavyweight, waterfall practices that we all want to avoid. Without good tools, most teams are forced to rely on manual documentation and significant overhead.

Visual Studio 2010 automates the capture of much of the required information and of the linkages between artifacts. All of this is then exposed in reports, queries, and forms that highlight the relationships, through time, of the important artifacts that are generated throughout the development lifecycle. The beauty of Visual Studio 2010 is that it can generate most of this information from the normal day-to-day activity of each team member. Rather than require additional documentation, Visual Studio 2010 tracks regular activities and correlates them in such a way as to provide deep traceability. For instance, a developer check-in is associated with a work item. The effort is no different from any other system, yet the benefits are staggering. We can now automatically identify which code was changed in a new release, which bugs were fixed in a build, what tests were affected by change requests, and so much more – all from that trivially simple action. That is the power of Visual Studio 2010.

In conclusion, the General Principles of Software Validation read like an advertisement for Visual Studio 2010 and Team Foundation Server 2010. Each recommendation and each requirement seem to match perfectly to a feature or set of features in the Microsoft tools. If your organization is mandated to achieve compliance with FDA regulations such as CFR 21 Part 11 or CFR 21 Part 820, we cannot recommend Visual Studio 2010 highly enough.

See Also

Concepts

Technical Articles for Visual Studio Application Lifecycle Management

Northwest Cadence is a national leader in Microsoft ALM and software lifecycle solutions. Recognized by Microsoft as a Gold ALM Partner, Northwest Cadence focuses exclusively on application lifecycle management with clients across the globe. With experience providing consulting services on Microsoft ALM tools that dates back to product inception (Visual Studio 2005 Team System), Northwest Cadence has actively worked with clients and Microsoft on product development, implementation, and process incorporation. Northwest Cadence has a solid commitment to honesty and excellence. This commitment, coupled with vast experience, means that Northwest Cadence clients know to expect an exceptional experience every time.