

4 - Automating the Release Pipeline


In the last chapter, the Trey Research team took the first steps towards improving their release pipeline and their processes. The most important step is that they've begun to talk to each other. Each team member has a viewpoint and a set of priorities. Sometimes these viewpoints clash and compromises have to be made.

To formalize their understanding of how they develop software, the team has begun to use a Kanban board, which reflects everyone's participation in the business. They've also begun to document what their product should do by using Microsoft Test Manager (MTM) to capture user stories and to develop tests.

In terms of the pipeline, they've finished the orchestration. What they have now is a framework. Only the commit stage contains steps that are completely functional. This means that they still have a very long list of problems and a lot of work left to do.

To really make progress, they need to implement some automation. Right now, all the deployments occur manually, and the environments are created manually as well. As we'll see later, manual deployments are at the root of many of Trey Research's problems. They also need to start automating some of their tests. Although the team has made progress by adopting MTM, they still run all their tests manually.

To introduce you to the topic of automation, we'll first explain the benefits it brings (as well as some possible issues) and some general principles to follow. Again, while much of this guidance is true for any release pipeline, it is tailored towards the creation of a continuous delivery pipeline. Later in the chapter, we'll show you how Trey Research automates their pipeline.

Understanding the Benefits of Automation

Implementing automated processes can sometimes seem threatening. People may feel that their skills are undervalued or considered unnecessary, that they will lose control over the way they work, or forfeit precious resources (computers, servers, networks, software, or even people) that they have struggled to acquire. It's impossible to say that these fears are always unjustified, but automation does bring many benefits. Here are some of them.

  • Automation frees people from performing repetitive, monotonous work. Not only is this work dull, but when people are bored they tend to make mistakes.
  • Automation gives people more time to work on creative ways to provide value to the company.
  • Automated processes are faster than their manual counterparts.
  • Automation improves the quality of the entire release process because it standardizes important steps such as deployments. Well understood processes and environments result in predictable outcomes.
  • Automation reduces costs, if you take a medium or long term viewpoint. Once they're in place, automated processes cost less than manual processes.

Of course, automation isn't always easy. Here are some problems people frequently encounter.

  • The initial cost can be high, and it requires the dedication of the team and an investment in tools.
  • The learning curve can be steep. People may not have the necessary skills or be familiar with the necessary tools.
  • You may need the cooperation of people in different silos or departments.
  • Automation can be very complex, depending on the scenario you want to automate. It often requires simplification. Taking a complex, fragile manual process and automating it creates a complex, fragile automated process. When you automate, always look for ways to simplify and standardize.
  • If automation is done incorrectly, security can be compromised. Powerful automation tools in the wrong hands, or not used knowledgeably, can cause real havoc. You should always have someone who monitors automated processes to make sure that they're functioning as they should and that they're being used correctly.

Overall, we feel that automation is worth it. Of course, you always have to evaluate if the return on investment and resulting software quality outweigh the costs.

What Can Be Automated?

You might be surprised at the number of tasks that are candidates for automation. Here are some good examples.

Activities You May Already Be Automating

There are many activities that you've probably already automated. You may not even realize that you've done it. A prime example is building your code. Do you know anyone who, as a standard release practice, opens a command prompt, runs the compiler manually, then the assembly linker, and then copies the binaries to an output directory? (If you've just said "yes," you may want to have a serious conversation with them.)

Another example is continuous integration. More and more software development teams have automatically triggered builds that run on a build server. These automatic builds generally do more than compile the code. For example, they run unit tests, perform code analysis, and version the code. If you aren't performing continuous integration, you need to put this guidance down, start learning about continuous integration, and incorporate it into your process. You cannot have continuous delivery without continuous integration.

Activities You Should Automate

There are some activities that, if you're not automating them now, it's time to begin. These activities include:

  • Deployments
  • Functional tests
  • Build verification tests

Deployments

Deploying an application is usually a complex process. For manual deployments, the instructions can be in multiple documents that are spread across several groups of people. These documents can easily become out of date or be incomplete. The manual deployment process must be repeated for each environment. Errors are common and costly. In short, automated deployments are a great way to improve your release process.

There are now some tools that can help you to create automated deployments. One is the ALM Rangers tool named DevOps Deployment Workbench Express Edition. You can learn more about it by reading Appendix 1. Another tool is InRelease. See Chapter 1 for a short description, or go to the InRelease website.

Functional Tests

Functional tests, for our purposes, are tests that ensure that the components that make up an application behave as expected. The Trey Research application has three components: the WCF service, a WPF application, and a Windows Phone 8 app. There should be automated tests in place that test all of them.

Build Verification Tests (BVT)

Build verification tests are also known as smoke tests. They ensure that the application, at some predefined level, performs as it should. If an application successfully passes the BVTs, you have a reasonable level of confidence that it will work as it should.

Activities You Can Automate

There are many activities that, although they are suitable for automation, often remain as manual tasks. Some of these often overlooked activities include performance and load testing, and what is informally called "ility testing," which covers many areas but includes, for example, scalability, extensibility, and security. Another example is automating the management of the artifact repository.

Activities That Must Remain Manual

Certain activities must remain manual. These are the activities that require a human being to do some type of validation. User acceptance tests (UAT) are a good example. These tests require that someone use the software to see if it meets some mutually-agreed upon set of requirements.

Patterns and Practices for Automated Deployments and Tests

This section discusses some patterns and practices that you can use to inform your approach to automating deployments and tests. Note that we won't be talking about testing as a subject in and of itself. There are many resources available to learn about testing. A good place to start is Testing for Continuous Delivery with Visual Studio 2012.

Strive for Continuous Improvement

If you don't already have automated deployments and run automated tests, then you must prepare for changes in how you work as you implement them. This guidance advocates an incremental approach to change. Examine one of your most difficult problems, find something you can do to improve it, implement it, and review your results. Apply this process to each of your most pressing issues and you will see gradual, but continuous, improvement. In this guidance, we've used Trey Research to illustrate an incremental approach to improving the pipeline, and we've also tried to demonstrate it in the hands-on labs (HOL).

Automate as Much as Is Feasible

If anything can be automated at a reasonable cost, do it. Keep manual steps to a minimum (manual steps include using an administrative graphical user interface). Ideally, get rid of them all and write scripts instead. In this guidance, we focus on automated deployments and automated tests.

Automate Deployments

Automated deployments include not only the deployment of your application, but also the automated creation of the prerequisite environments. A typical example is a web server that hosts a web application. Create a script that sets up the web server before you deploy the application to it. Automated set ups and deployments mean that you can deploy your application quickly, and as many times as you want. As a result, you can test your deployments just as you do your application.
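
To make this concrete, here is a minimal PowerShell sketch of a deployment script that first sets up the web server and then deploys the application files to it. It is an illustration only, not part of any particular product: it assumes IIS and the WebAdministration module are available on the target machine, and the site name, port, and paths are placeholders.

param(
    [string]$SiteName    = "SampleWeb",
    [string]$SitePath    = "C:\Sites\SampleWeb",
    [string]$PackagePath = "\\buildserver\drop\SampleWeb"
)

# Set up the environment first: make sure the folder and the IIS site exist.
Import-Module WebAdministration

if (-not (Test-Path $SitePath)) {
    New-Item -ItemType Directory -Path $SitePath | Out-Null
}
if (-not (Test-Path "IIS:\Sites\$SiteName")) {
    New-Website -Name $SiteName -PhysicalPath $SitePath -Port 8080 | Out-Null
}

# Then deploy: copy the packaged output into the site folder.
Copy-Item -Path "$PackagePath\*" -Destination $SitePath -Recurse -Force

Because the script creates whatever it needs before it deploys, it can be run repeatedly against a freshly provisioned machine, which is what makes the deployment itself testable.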

Automate Tests

The same principles apply to tests as to deployments. Any tests that are currently done manually but that can be automated, should be. Take a look at how often you perform a particular manual test. Even if you've done the test just a few times, you know that you're wasting resources and keeping an error-prone activity in your release process. Automated testing does more than provide faster and more robust tests. It also protects you from introducing regressions and, if artifacts such as configuration files are kept in version control, lets you know if any changes to the system have broken something.

Deploy the Same Way to Every Environment

Deploy to every environment using the same process. If you use automated deployments, this means that the scripts and tools should behave the same way for all environments. Differences between environments should be externalized in configuration files.

This doesn't mean that you have a single deployment script. In fact, it's quite possible to have multiple scripts. Although it's outside the scope of this guidance, different stages of the pipeline can have different deployment requirements. For example, if you deploy to a capacity testing stage, the deployment script may prepare the agents that generate the load. This step is unnecessary for other stages. What does remain the same is that you deploy these agents to one environment the same way as you do to another environment. For example, you would deploy the agents to a test environment the same way that you would deploy them to a staging environment.

Another example of multiple deployment scripts is having one script for each component that makes up the application. As you'll see, this is true for Trey Research.

Tokenize Configurations

To tailor a deployment to a target environment, use tokens or parameters that can be provided externally to the deployment script. Candidates for tokenization include information that is dependent upon a particular deployment, the version being deployed, or the target environment. There should also be a base configuration file that contains the information that is constant for all the environments. The tokens are applied to the base configuration to create a particular deployment script.

You should also tokenize any configuration information that you need for your automated tests. For example, if your tests run against remote target URLs, extract them so that they can be provided as run-time parameters to the tests. By using tokenization, you can run the same tests across different computers or environments.

Avoid hardcoding environmental and version-specific configuration information, or any other variable data. Avoid changing this data manually to conform to another environment.
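
As a simple illustration, the following PowerShell sketch applies externally supplied values to a base configuration file. The token names, file names, and default values are hypothetical; in a real pipeline, the values would be passed in by the stage that runs the script rather than defaulted.

param(
    [string]$BaseConfig   = ".\App.Base.config",
    [string]$OutputConfig = ".\App.config",
    [hashtable]$Tokens    = @{ "__ServiceUrl__" = "http://testserver:8080/SensorReadingService.svc" }
)

# Start from the base configuration, which holds everything that is constant.
$content = Get-Content -Path $BaseConfig -Raw

# Replace each token with the value supplied for the target environment.
foreach ($token in $Tokens.Keys) {
    $content = $content.Replace($token, $Tokens[$token])
}

Set-Content -Path $OutputConfig -Value $content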

Automate the BVTs

Automated deployments should be validated by automated BVTs. These tests should give you at least a minimum level of confidence that the deployment is correct. Here are examples of what BVTs should cover.

  • If you deploy a web service, test to see if the service is available.
  • If you deploy changes to a database, test to see that the database is available.
  • Test to see if the configuration for the target environment is correct.
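
For example, the first bullet might be covered by a BVT along the lines of the following PowerShell sketch, which simply fails if the service endpoint does not respond. The URL is a placeholder for whatever endpoint the deployment just created.

param([string]$ServiceUrl = "http://testserver:8080/SensorReadingService.svc")

try {
    # A simple availability check: the service must answer with HTTP 200.
    $response = Invoke-WebRequest -Uri $ServiceUrl -UseBasicParsing -TimeoutSec 30
    if ($response.StatusCode -ne 200) {
        throw "Service returned status code $($response.StatusCode)."
    }
    Write-Output "BVT passed: the service is available."
}
catch {
    Write-Error "BVT failed: $_"
    exit 1
}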

Keep Everything Under Version Control

Many people keep their source code versioned, but it isn't as common to keep configuration information under version control. The configuration of the different components of an application changes during the product's lifecycle, and it's important to be able to recreate any of these versions, if necessary. You should also save deployment scripts and any artifacts that are a part of an automated deployment in your version control system. The same is true of configuration files that you use with your automated tests.

Any tools that you use for deployments are also good candidates for version control. By following this practice, the right versions of the tools will be available if you ever need to recreate a deployment for any specific version of the software. Keeping tools under version control is particularly relevant for applications that are only sporadically maintained after they're released.

You may not want to store all the tools in TFS version control, especially if they take up several gigabytes. The goal is that the information is stored somewhere other than in people's heads or in a document. For example, you can prepare a virtual machine (VM) that is configured with all the necessary tools, and store it in the library of a virtualization system such as System Center Virtual Machine Manager (SCVMM). You may want to version the VM as the tools are updated.

Use One-Click Deployments

You should be able to run fully automated deployments with little effort. If you need to run a series of scripts manually, then your deployments aren't fully automated. The ideal situation is that the only manual action required is when someone starts the deployment process.
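
One way to get there is to have a single entry-point script that chains the individual steps together, along the lines of this sketch. The three scripts it calls are hypothetical names used only for illustration.

param(
    [string]$BuildLocation,
    [string]$Environment
)

# Treat errors as terminating so a failed step stops the run.
$ErrorActionPreference = "Stop"

& .\Setup-Environment.ps1  -Environment $Environment
& .\Deploy-Application.ps1 -BuildLocation $BuildLocation -Environment $Environment
& .\Run-Bvts.ps1           -Environment $Environment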

Build Once

Although the best practice of building your code once was already discussed in the last chapter, we're bringing it up again in the context of the deployment scripts. None of them should rebuild the application from the source code. Building once ensures binary integrity. This means that the code that is released into production is guaranteed to be the code that was tested and verified throughout all the pipeline stages.

Although building once seems straightforward to implement, it's easy to mistakenly build multiple times with TFS Build. Typically, people use the TFS default build template to create a build definition that triggers the deployment. This template is designed to rebuild the code just before the deployment occurs. The same is true for configuration file transformations. Generally, if you want to transform a configuration file by using the standard procedure in Visual Studio or TFS Build, you have to rebuild the application by using a different build configuration. The issue is that you don’t want a different build configuration for each environment. You want to use the same build configuration (typically, the Release configuration) that was created in the commit stage, with different application configuration files.

Another way to ensure a single build is for the commit stage to store the binaries it builds in a repository. All the deployment scripts should retrieve the files they need from that repository, and never from the previous environment in the pipeline.

Choose a Suitable Deployment Orchestration Model

As you form a plan for automating your deployments, you need to choose a deployment orchestration model, which defines the way the deployment scripts execute. Deployment orchestration is an example of orchestration at the step level of the pipeline. There are three options to choose from.

Use a Tool that is Native to Your Platform

The first option is to use a deployment management and orchestration tool that works natively with your platform. For the Windows platform, several tools are available. One possibility is Active Directory Group Policy (GPO). Another possibility is to use a combination of System Center Service Manager and System Center Orchestrator. Both of these options rely on Windows Installer, which is the native packaging tool for the platform.

Because it is the native deployment technology, Windows Installer is a useful deployment tool even if GPO and System Center aren't available. You can write deployment scripts that use the command line tool, msiexec.exe, to automatically deploy the .msi file that contains the Windows Installer package.
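
For example, a deployment script might install the package silently and capture a log, roughly as in the following PowerShell sketch. The package and log paths are placeholders.

$msi = "\\buildserver\drop\TreyResearch.Setup.msi"
$log = "C:\Logs\TreyResearch.Setup.log"

# /i installs the package, /qn suppresses the UI, /l*v writes a verbose log.
$process = Start-Process -FilePath "msiexec.exe" `
    -ArgumentList "/i `"$msi`" /qn /l*v `"$log`"" `
    -Wait -PassThru

if ($process.ExitCode -ne 0) {
    throw "msiexec failed with exit code $($process.ExitCode)."
}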

Using tools that are native to your platform is the most direct approach for orchestrating deployments. These tools already have the capabilities to perform many important functions. They can install dependencies, deal with versioning and upgrades, schedule deployments, determine the states of the target machines, and validate the security of scripts. Using native tools means that you use the same tools to manage deployments as you use to manage the infrastructure, which simplifies both tasks. Also, operations people are usually familiar with these tools.

Again, as an example, Windows Installer supports all these features and many others. It's also not limited to desktop applications. You can, for instance, prepare an MSI installer that deploys a website simply by configuring the necessary steps with a utility such as the WiX toolset. For more information, go to the WiX toolset website.

Use Deployment Agents that Are Provided by the Release Management System

The second option is to use deployment agents that are provided by the release management system. A deployment agent is a lightweight service that's automatically installed on the target machine and that runs the steps needed to deploy locally to that computer. These steps are typically contained in a script that is written in a technology that is natively supported by the target operating system (possibilities for Windows include batch scripts and PowerShell).

With TFS, the deployment agent is provided by Lab Management. The agent is automatically installed on the target computers when they are added to a Lab Management environment. This approach is not as powerful as the first option, but still has many advantages. The scripts are much simpler than those required by the third option, remote scripting, because they run locally. Also, the release management system (which, in this guidance, is TFS Build and Lab Management) performs many tasks, such as orchestrating the deployment, logging, and managing the target environments.

This approach is also secure. The environments are locked down and only the service account that runs the scripts has permission to change them.

Use Scripts that Connect Remotely to the Target Machines

The third option is to perform the deployment by using scripts that connect remotely to the target machines. For the Windows platforms, there are several options such as PsExec and PowerShell. This is the least powerful option. You will have to do all the orchestration work yourself, and security can be harder to enforce.
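
The following sketch shows the general shape of this model using PowerShell remoting. The machine name, drop path, and arguments are placeholders; the script block executes on the target machine.

$target = "webserver01"

Invoke-Command -ComputerName $target -ScriptBlock {
    param($buildLocation, $environmentName)
    # Everything in this block runs on the remote machine.
    & "$buildLocation\Release\Deployment\WcfService\DeployWcfService.cmd" $buildLocation $environmentName "C:\TreyResearchDeployment"
} -ArgumentList "\\buildserver\drop\01 Commit Stage 0.0.0605.781", "Testing"

Note that reaching a network share from inside the remote session may need additional credential configuration (the so-called double-hop problem), which is one reason this model requires more orchestration and security work than the others.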

For more information about these deployment orchestration models, you can refer to the "Deployment Scripting" section in Chapter 6 of Humble and Farley’s book, Continuous Delivery.

Choose a Suitable Testing Orchestration Model

You will also need to choose an orchestration model for your automated tests. There are two options.

Run the Tests Locally Using a Testing Agent

The first option is to run the tests locally, on the target machine, by using a testing agent that is provided by your release management system. An example is the Lab Management test agent. The advantages of this orchestration model are similar to those for deployment. The test code is simpler and needs less configuration, a great deal of the work is done by the release management system, and the security is good because the tests don’t run across the network or outside the context of the target computer.

Run the Tests Remotely

The second option is to run the tests remotely by having them connect to the target environments. You can use this approach if you run tests (for example, integration tests or UI tests) from a build agent, in the context of a build definition that is based on the default TFS template. You’ll have more configuration, orchestration, and management work to do in order to make the tests suitable for different environments, and security will be weaker than if you use a test agent. However, there may be some cases where this is the only available option.

Follow a Consistent Deployment Process

Whenever possible, follow the same deployment process for all of the components in your system. It's much easier to set up and maintain a single, standardized approach than multiple approaches that have each been tweaked to work with a particular environment. By stressing a uniform process, you may also find that you can reuse some of the scripts and artifacts for several of your system components. Using the commit stage to prepare the artifacts and then running a script in the subsequent stages is one way to follow a consistent deployment process.

Use the Commit Stage to Prepare the Artifacts

The commit stage should prepare any artifacts you'll need to deploy to the other stages of the pipeline. Typically, these artifacts include the deployment packages and the configuration files that are tailored to the target environments. If you want your commit stage to complete quickly in order to get fast feedback about the build, you might want to split it into two stages. The first stage can run the tests that let you know if the build succeeded. The second stage can prepare the configuration files and the packages. The second stage should be automatically triggered when the first stage completes successfully.

Run a Script in the Subsequent Stages

After the commit stage, the subsequent stages should run deployment scripts that they retrieve from version control. These scripts should retrieve the required packages and configuration files from the binaries repository.
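
The following PowerShell sketch shows the general shape of such a stage-level script. The package and configuration file names are illustrative; the important point is that everything comes from the binaries repository for the current pipeline instance, never from another environment.

param(
    [string]$BuildLocation,                              # drop folder for this pipeline instance
    [string]$EnvironmentName,                            # for example, Testing or Production
    [string]$WorkingFolder = "C:\TreyResearchDeployment"
)

New-Item -ItemType Directory -Path $WorkingFolder -Force | Out-Null

# Retrieve the package and the environment-specific configuration from the drop.
Copy-Item "$BuildLocation\Release\WcfService.zip" $WorkingFolder -Force
Copy-Item "$BuildLocation\Release\ConfigFiles\$EnvironmentName\*" $WorkingFolder -Force

# Deploy the package from the working folder using the packaging tool for the component.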

Leave the Environment and Data in a Known State

Automated deployments should leave target environments in a known, consistent state, so that they can be used immediately without having to perform further steps to clean them up. This is also true of any databases or data repositories that the application uses.

In terms of the target environments, the easiest approach is to always perform complete deployments of every component. Incremental deployments, where you deploy only the artifacts that have changed, are much more complex because you have to keep track of what changes and what doesn’t, and test how the new components work with the old ones.

For databases, leave the target database in a state that is compatible with existing and new data, and that has a schema that is compatible with the version of each component. Although the subject of managing changes to databases during deployments is outside the scope of this guidance, there are several approaches that you can investigate. Here are three of them.

  • SQL Server Data Tools
  • DBDeploy.NET
  • Entity Framework Migrations

In terms of automated testing, maintaining consistent environments and test data suites is especially relevant because the tests depend on having a known initial state. Again, the preparation and management of useful test data suites is outside the scope of this guidance. To get started, read Chapter 12 of Continuous Delivery by Jez Humble and David Farley.

Have a Rollback Mechanism Available

While useful for any environment, rollback mechanisms are particularly important for production environments. A rollback mechanism allows you to return the target machines to an operational state if anything goes wrong during a deployment. With a rollback mechanism in place, you can ensure that the application is still available while the problem that caused the current deployment to fail is investigated. Of course, an automated rollback mechanism should be your goal.

The easiest way to perform a rollback is to redeploy the version that was running just before the failed deployment. If the automated deployment scripts are written to work with any version of the application (See "Tokenize Configurations" earlier in this chapter), you can use these scripts as the rollback mechanism. You only need to provide the scripts with the version you want to deploy.
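
In other words, a rollback can be as simple as calling the existing deployment script with the previous version's build location, as in this sketch. The version number and path shown are placeholders.

# Redeploy the last known-good build by pointing the same script at its drop folder.
$lastGoodBuild = "\\buildserver\drop\01 Commit Stage\01 Commit Stage 0.0.0600.742"

& "$lastGoodBuild\Release\Deployment\WcfService\DeployWcfService.cmd" $lastGoodBuild "Production" "C:\TreyResearchDeployment"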

Lock Down the Environments

Once you have automated the deployments, there is no reason to allow users to change environments manually. You should only modify the target machines by running the deployment scripts. As a rule of thumb, only a few user accounts should have enough privileges to run these scripts. This restriction ensures that everyone uses the automated procedures and it stops people from manually trying to change the environments, or using procedures other than the correct scripts.

If you use Lab Management build definitions or remote deployment, the user accounts that can run deployments should be limited to:

  • The service accounts that run the deployment agents.
  • Users with administrative privileges.

This way, deployments or any kind of environmental change are only possible if done by the pipeline, or if invoked purposely by users with the correct privileges. An example of when this second case might occur is if you need to perform a rollback.

Make Deployment Scripts Granular

Although we've stressed the importance of uniform procedures, this doesn't mean that a single script that deploys an entire system is the best way to go. Instead, make your deployment scripts granular. By this we mean that each deployment script should focus on a particular component.

In terms of deployments, a component is a set of artifacts (binaries, configuration files, and other supporting files), that can be deployed together, without interrupting other areas of the application. For .NET Framework projects, the organization of these components ultimately depends on the Visual Studio (MSBuild) projects that make up your solution.

Having different deployment scripts for different components lets you deploy changes to individual components without having to discontinue service or affect other components in the system. The deployment process itself will be shorter, and there will be fewer chances for errors. Furthermore, if any problems occur, they will be easier to solve.

It may seem that writing deployment scripts that act on the component level contradicts the advice about avoiding incremental deployments given earlier in "Leave the Environment and Data in a Known State." To distinguish between incremental and non-incremental deployments, evaluate a deployment at the component level.

For example, you can incrementally deploy a single component by only deploying the binaries that have changed. This is often more complex than the alternative, which is to deploy the entire component, including all its binaries and supporting files. At the application level, this second approach may seem like an incremental deployment, but at the component level, it isn't.

Of course, as with any guidance, you must take your own situation into account and make a decision that balances the complexities of the proposed deployment against other factors, such as how much the deployment will cost in terms of time and resources.

Adopt a DevOps Mindset

One of the key principles of DevOps is that there must be collaboration between development and operations teams. Cooperation between these two groups can definitely make it easier to automate deployments. If your goal is continuous delivery, you need the people who write the deployment scripts to collaborate with the people who manage the environments and run the scripts. Poor communication can cause many problems that can make the automation process frustrating and error prone.

When you begin to plan how to automate deployments, make sure that there are people from operations working with people from development. The same principle is true of testing. When you begin to move toward automated testing, make sure that people from test work together with people from development to create the process.

Begin the Process Early and Evolve It

The longer you wait to automate deployments and tests, the harder it will be, because scenarios get more complex as time goes on. If you can start early, preferably from the beginning of the project, you can begin simply and evolve the processes over time, incrementally, just as you do your software. Furthermore, you won't be able to tell if your software is doing what it should until it's been deployed and tested. The best way to discover if your software is working correctly is to automate your deployments and tests so that you get feedback as quickly as possible. Finally, it can take time to learn the tools and techniques for automation. Trying to cram it all in at the end of a project is guaranteed to cause trouble.

Choose the Right Tests for Each Stage

Not all the automated tests you write should run in every stage. The purpose of certain stages is to run specific types of tests, and those tests should run only after other, more lightweight validations have passed.

For example, it doesn't make sense to run load tests in the commit stage. They would slow the stage down and they would be premature because load tests should run only after the software is functionally validated. Another obvious example is that unit tests should only run in the commit stage rather than, for instance, in the acceptance test stage.

Generally, a specific type of test runs only once in the pipeline, as a step in a particular stage. The exception is BVTs. Because they validate deployments, they should run in each stage where automated deployments occur.

Trey Research

Now let's take a look at how Trey Research is implementing these patterns and practices. When we left them, they'd finished orchestrating their pipeline, but what they had was a framework that still didn't provide a lot of functionality. Consequently, the long list of problems they had at the beginning doesn't look that much shorter to them. Here are some of them.

Issue: They never know if they have the correct version. What they think is a current version isn't, and they find bugs they've already fixed, or missing features they've already implemented.
Cause: The deployed versions are out of date.
Solution: Associate deployments with specific changes. Automate the deployments and trigger them either as a step in the pipeline, or by a manual command.

Issue: They have to perform acceptance tests again and again to prevent regression errors. This is a large, ever increasing amount of work, and is both slow and error prone. Consequently, they don't test as thoroughly as they should.
Cause: All the test cases are performed manually.
Solution: Automate the acceptance tests and run them in the corresponding stage of the pipeline.

Issue: The deployment process is slow and error prone.
Cause: There is no standard deployment process. One deployment is different from another deployment. All deployments are manual.
Solution: Automate deployments to all environments.

Issue: They don't know how to deploy to different environments.
Cause: They change the application configuration manually for each environment. This occurs every time there is a deployment.
Solution: Modify the pipeline so that it changes the configuration files to suit particular environments.

Issue: Existing environments are vulnerable to uncontrolled changes.
Cause: The environments aren't locked down. Too many people have permissions to access them and make changes.
Solution: Lock down the environments so that changes occur only in the context of automated scripts that run under special user accounts, or by authorized team members.

There are also other pressures coming into play. The board wants the Trey Research application to have some new features. They're impatient because the work to orchestrate the pipeline took a fair amount of time. Another issue is that everyone on the team has their own priorities.


Raymond says:

I'm still not sold on all of this. I don't think we're paying enough attention to security, and there is no way I'm letting some test agent run in the release environment.


Paulus says:

I need to build new features. I can't spend all my time with Jin, working on the pipeline. No one's paying me for that.


Iselda says:

I need to be working on tests. I like MTM and I think we do need to stop testing manually, but it's a lot of work.


Zachary says:

Everyone wants something different. Our resources are limited, and if we don't add new features, the competition will win. On the other hand, I know that our release process is a mess but fixing it isn't trivial. How do I keep everything in balance?

In fact, Zachary is feeling overwhelmed.


After much debate and drawing on whiteboards, the team decides to focus their pipeline efforts on automating deployments and tests. Although it will be a big push, once they're done they'll have a continuous delivery pipeline that should solve their major problems. Here's the final result of all their hard work.

[Figure: The completed Trey Research continuous delivery pipeline, with automated deployments and automated tests.]

The rest of this chapter explains what the Trey Research team did. At a high level, we can summarize their efforts by saying that, for automated deployments, they wrote scripts that use Lab Management agents to run locally across all the environments. They also configured the pipeline to run these scripts and to provide the correct parameters.

Here's what Jin thinks as he looks at the beginning of a new iteration.

Jin says:

Monday, August 19, 2013


It seems like Zachary is feeling more confident. He asked us to include two new features in the backlog. He even seems to understand that we can’t start working on them if we're already busy with other work items. We've also added a bug that a user found in the new UAT environment to the backlog. Of course, we also have all the work to automate the pipeline. We've increased the WIP limit of the Ready for Coding column. Let's hope we can handle it.

Here's the product backlog for this iteration. You can see that there are many tasks associated with automating the pipeline, as well as work on the Trey Research application.

[Screenshot: the product backlog for the iteration.]

Here's what else Jin has to say.

Jin says:

Monday, August 19, 2013


In the first part of this iteration, Paulus (who's the only person who knows how the components of the application work) will team up with Raymond to automate their deployment. I'll help Iselda set up the automated tests. After that, we'll all concentrate on building some new features so the stakeholders are happy, and so that the pipeline automation is tested with a realistic scenario.

Here's the backlog for this iteration. You can see who's responsible for each task.

[Screenshot: the backlog for the iteration, showing who's responsible for each task.]

How Is Trey Research Automating Deployments

Adding automation to the pipeline is the major goal for this iteration. This section discusses the key points for accomplishing that goal. For a step-by-step description of how to automate the pipeline, see the group of labs included under the title Lab03 – Automation. If you look through these labs, you'll see that the team wrote three different deployment scripts, one for each component of the application: one for the WCF web service, one for the Windows Phone 8 client, and one for the WPF application. Currently, Trey Research doesn't need different scripts for the different stages because, in terms of deployments, all the stages in the pipeline have the same requirements.

They still need to write deployment scripts that set up the web server, the Windows Phone 8 emulator, and the Windows computer for the Windows Presentation Foundation (WPF) client. This is something that they plan to do in the future.

How Is Trey Research Deploying the Same Way to Every Environment

Trey Research has made sure that the scripts deploy the same way to all three environments. The deployment agent retrieves a script directly from the drop location, where it is placed by the specific instance of the pipeline. They use the $(BuildLocation) built-in variable to compose the path. The script's first parameter is the drop location, the second is the name of the target environment, and the third is a local folder on the target machine that the script uses during the deployment.

The following code shows how to deploy the WCF web service to the test environment.

"$(BuildLocation)\Release\Deployment\WcfService\DeployWcfService.cmd" "$(BuildLocation)" Testing C:\TreyResearchDeployment

The following code shows how to deploy the WCF web service to the production environment.

"$(BuildLocation)\Release\Deployment\WcfService\DeployWcfService.cmd" "$(BuildLocation)" Production C:\TreyResearchDeployment

How Does Trey Research Use Automated BVTs

Trey Research has created a set of automated BVTs that ensure that the deployments are correct. All the stages that perform automated deployments are configured to run them.

In the future, as an improvement, the team plans to have different BVT test suites that cover specific deployments. If they deploy a specific component, they'll be able to run the BVTs that verify the deployment of that component.

How Does Trey Research Tokenize Configuration

The team identified all the configuration information that was specific to a particular environment and extracted it to specific tokenization (or parameter) files. These files only contain the information that changes from one deployment to another. Trey Research uses configuration transformations to prepare the final configuration files that are used during the deployment.

There is a base configuration file for each group of environment-specific parameter files. The transformation is applied to that base file, so only the portions of the file that must change are modified. For example, this code defines the way the value of the endpoint setting is transformed.

<endpoint address="http://webServerAddress:portNumber/SensorReadingService.svc"
          name="BasicHttpBinding_ISensorReadingService"
          xdt:Locator="Match(name)"
          xdt:Transform="SetAttributes(address)">
</endpoint>


For now, Trey Research tests are run locally by the Lab Management test agent. This means that the team doesn't need to extract any test configuration information and put it into separate files. Tokenizing test configuration information may be something they'll do in the future.

How Does Trey Research Keep Everything Under Version Control

Configuration files are part of Visual Studio projects, so they are under version control. Deployment scripts are also part of their respective Visual Studio projects, so they too are version controlled. TFS keeps build scripts (workflow templates) under version control by default. TFS does not provide a direct way to version build definitions. As a future improvement, the team plans to make a copy of each build definition before it changes.

Right now, the team doesn't keep tools under version control. When they must update a tool, they'll decide on the best way to keep the old version available.

How Does Trey Research Provide One-Click Deployments

The entry point for all the deployments is a single deployment script. Once a script runs, the component is deployed without requiring any more manual actions.

If necessary, the script can be run from the command line on the target machine. However, Trey Research usually runs it as a step inside a stage of the pipeline. If the stage is automatically triggered, the script begins with no manual intervention. If it is one of the manually triggered stages, such as the UAT stage, then the script is triggered when the stage is triggered.

How Does Trey Research Ensure That They Build Only Once

Trey Research deactivated the build step in all the pipeline stages except the commit stage. They also added a step to the commit stage that transforms the configuration files explicitly, instead of changing the build configuration and rebuilding the application each time the application configuration files need to be transformed. The following code is an example of how to explicitly perform the transformation inside the MSBuild project file of a component.

<TransformXml Source="App.config" Transform="@(TransformationFiles)" Destination="$(OutDir)\ConfigFiles\WpfClient\%(TransformationFiles.Identity)" />

Finally, all the deployment scripts obtain the artifacts from the binaries repository (the drop folder for the pipeline instance), instead of getting them from the previous environment in the chain.

What Is the Trey Research Deployment Orchestration Model

Trey Research uses a hybrid approach towards deployment orchestration. It's based on the deployment agent model but relies on the platform’s packaging tools. They use Lab Management, so all the deployments are run locally on the target computers by the Lab Management deployment agent. The agent is a Windows service that runs the deployment script. The script itself is obtained from the binaries repository. The packaging technology depends on the platform. For the WCF web service, they use MSDeploy packages that are supported by IIS. For the Windows Phone 8 app, they use XAP packages. For WPF, they use native MSI installers.
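
As an illustration of the packaging side, deploying an MSDeploy (Web Deploy) package for the WCF web service might look roughly like the following PowerShell sketch. The file names and the msdeploy.exe location are placeholders, not the exact commands in the Trey Research scripts.

$msdeploy = "${env:ProgramFiles}\IIS\Microsoft Web Deploy V3\msdeploy.exe"
$package  = "C:\TreyResearchDeployment\WcfService.zip"
$params   = "C:\TreyResearchDeployment\WcfService.SetParameters.xml"

# Sync the package to the local IIS instance, applying the environment-specific parameters.
& $msdeploy "-verb:sync" "-source:package=$package" "-dest:auto" "-setParamFile:$params"
if ($LASTEXITCODE -ne 0) { throw "msdeploy failed with exit code $LASTEXITCODE." }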

What Is the Trey Research Testing Orchestration Model

Trey Research's testing orchestration model is to use the build agent for the commit stage's unit tests. The subsequent stages use the Lab Management-based build definitions. The test agent that TFS provides runs the tests locally.

Lab Management only allows one machine role to run tests inside an environment. For Trey Research, this is the Client role. If Trey Research wants to run tests directly against the web service, they'll have to do it remotely.

How Does Trey Research Follow a Consistent Deployment Process

Trey Research follows the same deployment process for the WCF web service, the WPF application and the Windows Phone 8 app. Here's a summary of the process.

  1. The commit stage packages the files to be deployed and prepares the configuration files.
    1. The configuration files are prepared by using a base template for each component. The base template is transformed by using the MSBuild TransformXml task to include the environment-specific parameters and data, which results in a different configuration file for each environment.
    2. The files to be deployed are packaged using the standard tool for each technology: MSDeploy zip packages for the WCF web service, XAP packages for Windows Phone 8, and MSI Windows Installers for WPF.
    3. The packages and configuration files are copied to the binaries repository. The remaining pipeline stages retrieve them from that location.
  2. The subsequent stages run a deployment script that uses the prepared packages and configuration files to deploy to the target environment. The script is run locally on the target machine by the Lab Management deployment agent.

How Does Trey Research Leave the Environment and Data in a Known State

The WCF web service, the Windows Phone 8 app, and the WPF application are versioned and packaged in their entirety. Each of them is deployed as a complete package. There are no deployments that use subsets of a component or specific files.

Currently, the Trey Research application doesn't use a database, so the team doesn't yet know which approach they'll adopt to leave a database in a known state. They also don't currently run tests that require specific suites of test data.

What Rollback Mechanism Does Trey Research Use

All the deployment scripts are tokenized and they receive the version to be deployed as a parameter. The following code shows how the deployment script for the WCF web service is called in order to deploy the service to the test environment. The $(BuildLocation) parameter contains the version to be deployed.

"$(BuildLocation)\Release\Deployment\WcfService\DeployWcfService.cmd" "$(BuildLocation)" Testing C:\TreyResearchDeployment

An example value of $(BuildLocation) for this invocation would be "\\<Path to the binaries repository>\01 Commit Stage\01 Commit Stage 0.0.0605.781".

If the team needs to perform a rollback, they can run the deployment script and point it at the version that was running before the failed attempt.

How Does Trey Research Lock Down Its Environments

Trey Research locked down its environments by ensuring that only the account used to set up the Lab Management environment can change them. In addition, the operations manager, Raymond, has administrative permissions in case a manual change or a rollback is required.

How Does Trey Research Make Its Deployment Scripts Granular

Trey Research uses the Visual Studio (MSBuild) project that's available for each of the components that make up the Trey Research Visual Studio solution. Each of these components has a different deployment script, which is included in the appropriate MSBuild project. The following screenshot shows the location of the deployment script for the Trey Research application's WCF service.

[Screenshot: the deployment script for the WCF service inside its Visual Studio project.]

For now, the pipeline always deploys the entire application, including the WCF web service, the WPF application, and the Windows Phone 8 app. This is a reasonable approach, at least for now, because the team is adding features that involve all three of them. The following screenshot shows that, during a deployment, all three deployment scripts are used.

[Screenshot: a deployment in which all three deployment scripts are invoked.]

However, it would be straightforward to modify the pipeline so that it invokes only the script for the component that has changed. The team could divide each stage into three stages, one for each component. This effectively creates three different pipelines that are triggered only when there are changes to the stage's associated component.

Does Trey Research Have a DevOps Mindset

As we've seen, Trey Research has started to have planning meetings that involve people from management, development, operations, and test. Also, there's now much more cooperation between team members. Because Trey Research is a small company, this is easier for them than it might be for people who work in large companies.

By the middle of the iteration, Jin is feeling hopeful.

Jin says:

Monday, August 26, 2013


We're starting to feel like a real DevOps team! Working together, we managed to automate the deployments and testing across all the pipeline stages. Now we can spend the rest of the iteration on building new features.

Here's the product backlog at the middle of the iteration.

[Screenshot: the product backlog at the middle of the iteration.]

How Did Trey Research Create Automation Processes Early and Evolve Them

Trey Research began to automate their deployments and tests as soon as they finished orchestrating the pipeline. Fortunately, their application is a simple one, and they've been careful not to add lots of new features before the automation processes were in place.

What Tests Did Trey Research Choose To Run in Each Stage

For each stage, Trey Research runs the tests that are appropriate to it, but for now they run only a few types of tests.

  • In the commit stage, they only run unit tests.
  • For the acceptance test stage, they run automated BVTs that validate the automated deployment. They also run automated acceptance tests that verify that the application still functions as it should after the deployment.
  • For the release and UAT stages, they only run automated BVTs. Other types of tests are performed manually.

In the future, they plan to add some new stages that are devoted to other types of testing, such as load and capacity tests.

Here are Jin's thoughts at the close of the iteration.

Jin says:

Friday, August 30, 2013


For the first time in the project, we managed to deliver all the forecasted work. It wasn't easy and we had a minor crisis near the end of the iteration. We realized that nobody was working on the bug that our user reported. The way the MSF Agile template handles bugs in TFS meant it didn't appear in the backlog or on the Kanban board, and those are our main tools for managing work. We had to put in a couple of days of overtime, and the work wasn't reflected in our backlog. We really need to find all these hidden work queues and make them visible.

Here's what the product backlog looks like at the end of the iteration.

[Screenshot: the product backlog at the end of the iteration.]

Finally, Jin says this.

Jin says:

Friday, August 30, 2013


Now that the automation is in place, I feel like we can really say we use continuous delivery. We know that we can release the correct code whenever we want, and we're working faster, and with fewer problems. It all sounds great, but I expect our stakeholders will start making more demands. They know how quickly we can release new features, and they're going to want to start testing new ideas on users to see how they react. It's the right thing to do, but it means we can't just relax.

Summary

In this chapter we talked about automating deployments and tests. Although it may be difficult to implement, automation has many advantages, such as freeing people from repetitive, monotonous, and error prone work. Automation also significantly improves the release process. By standardizing and automating your deployments, you remove many sources of problems, such as incorrectly configured environments. By moving from manual tests to automated ones, you increase the speed and reliability of your tests.

What's Next

In the next chapter, the team is celebrating because they now have a fully functional continuous delivery pipeline. They know that their release process has improved, but the problem is that they don't have any actual data that proves it. They need to learn how to monitor their pipeline so that they can collect all the data it generates and present it in a meaningful way. They also need to track some metrics that are particularly relevant to a continuous delivery release process.

More Information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away. You can find the bibliography at: https://msdn.microsoft.com/library/dn449954.aspx.

For guidance about automatic deployments, see the ALM Rangers DevOps Tooling and Guidance website at https://vsardevops.codeplex.com/.

To learn about InRelease, which allows you to automate your deployments from TFS, see their website at http://www.incyclesoftware.com/inrelease/.

For guidance about creating builds, see the ALM Rangers Team Foundation Build Customization Guide at http://vsarbuildguide.codeplex.com/.

For guidance about using Microsoft Test Manager, see the ALM Rangers Test Release Management Guidance at http://vsartestreleaseguide.codeplex.com/.

For guidance about Visual Studio test features, such as CodedUI, see the ALM Rangers Visual Studio Test Tooling Guides at http://vsartesttoolingguide.codeplex.com/.

Another good testing reference is Testing for Continuous Delivery with Visual Studio 2012, which is available at https://msdn.microsoft.com/en-us/library/jj159345.aspx.

For information about Active Directory Group Policy go to https://support.microsoft.com/kb/816102.

For information about System Center Service Manager and System Center Orchestrator, go to https://www.microsoft.com/en-us/server-cloud/system-center/default.aspx.

For information about Windows Installer, go to https://msdn.microsoft.com/library/windows/desktop/cc185688(v=vs.85).aspx.

For information about the WiX toolset, go to http://wixtoolset.org/.

For information about PsExec, go to https://technet.microsoft.com/sysinternals/bb897553.aspx.

For information about deployment orchestration models, see the "Deployment Scripting" section in Chapter 6 of Jez Humble and David Farley’s book, Continuous Delivery. To learn more about the preparation and management of useful test data suites, read Chapter 12. Learn more about the book at http://continuousdelivery.com/.

For information about SQL Server Data Tools, go to https://msdn.microsoft.com/data/tools.aspx.

For information about DBDeploy.NET, go to http://www.build-doctor.com/2010/01/17/dbdeploy-net/.

For information about Entity Framework Migrations, go to https://msdn.microsoft.com/data/jj591621.aspx.

The hands-on labs that accompany this guidance are available on the Microsoft Download Center at https://go.microsoft.com/fwlink/p/?LinkID=317536.
