1 - You Want It When?

Does this sound familiar? You're expected to produce releases at an ever-increasing rate. You're under pressure to add new features and deploy to customers sometime between your first cup of coffee in the morning and lunch, if you have time to eat it. In the meantime, you have the same release process you've always had, and it has problems. Maybe there's some automation, but there's room for lots of improvement. Manual steps are everywhere, everyone has a different environment, and working all weekend to get a release into production is normal.

One of the biggest problems is that changing how your software is released won't happen by waving a magic wand or writing a memo. It takes effort, time, and money, and it requires commitment from every group involved in the software process: test, development, IT (operations), and management. Finally, change is scary. Your current release process bears no similarity to the well-oiled machines you've seen in a dozen PowerPoint presentations, but it's yours, you know its quirks, and you are shipping.

This guidance is here to help you with some of these challenges. It explains how to progressively evolve the process you use to release software. There are many ways to improve the release process. We largely focus on how to improve its implementation, the release pipeline, by using and customizing the default build templates provided by Team Foundation Server (TFS) and Lab Management. We move forward in small iterations so that no single change you make is too drastic or disruptive.

The guidance also shows you how to improve your release process by using some of the tools that TFS offers. For example, it shows you how to keep track of your product backlog and how to use Kanban boards.

The goal of this guidance is to put you on the road toward continuous delivery. By continuous delivery, we mean that through techniques such as versioning, continuous integration, automation, and environment management, you will be able to decrease the time between when you first have an idea and when that idea is realized as software that's in production. Any software that has successfully gone through your release process will be software that is production ready, and you can give it to customers whenever your business demands dictate. We also hope to show that there are practical business reasons that justify every improvement you want to make. A better release process makes economic sense.

The Release Pipeline

In the abstract, a release pipeline is a process that dictates how you deliver software to your end users. In practice, a release pipeline is an implementation of that process. The pipeline begins with code that's in version control (we hope) and ends with code that's deployed to the production environment. In between, a lot can happen. Code is compiled, environments are configured, many types of tests run, and finally, the code is considered "done." By done, we mean that the code is in production. Anything you successfully put through the release pipeline should be something you'd give to your customers. Here's a diagram based on the one you'll see on Jez Humble's Continuous Delivery website. It's an example of what can occur as code moves through a release pipeline.

[Figure: an example release pipeline]

(You should, of course, tailor this pipeline to your own situation, perhaps by adding a variety of other tests.) Notice that every check-in to version control sets the pipeline in motion. If at any point in the pipeline there's a failure, the build goes no further. In general, people shouldn’t check in anything else so long as the build and unit tests fail. Some people enforce this by rejecting commits from anyone but the person fixing the build.
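
To make the fail-fast behavior concrete, here is a minimal sketch of pipeline orchestration in Python. It is purely illustrative; the stage names and functions are placeholders for your own build, test, and deployment steps, not part of any TFS template.

    # A minimal sketch of fail-fast pipeline orchestration. Each stage is a
    # function that returns True on success; in a real pipeline it would invoke
    # your build system, test runners, and deployment scripts instead.

    def commit_stage():            # compile the code and run the unit tests
        return True

    def acceptance_test_stage():   # run the automated acceptance tests
        return True

    def capacity_test_stage():     # run the capacity tests
        return True

    def deploy_stage():            # deploy to the production environment
        return True

    STAGES = [
        ("Commit (build + unit tests)", commit_stage),
        ("Automated acceptance tests", acceptance_test_stage),
        ("Capacity tests", capacity_test_stage),
        ("Deploy to production", deploy_stage),
    ]

    def run_pipeline():
        """Run the stages in order; the first failure stops the pipeline."""
        for name, stage in STAGES:
            print("Running stage:", name)
            if not stage():
                print("Stage failed:", name, "- the build goes no further.")
                return False
        print("All stages passed; this build is a release candidate.")
        return True

    if __name__ == "__main__":
        run_pipeline()

Every check-in would queue a run of this kind, and a real orchestrator (such as the TFS build templates discussed later) would also record the results of each stage so the team can see where a build stopped.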

The goal is to release your software as soon as possible. There are practices you can follow that will help you do this.

Version Everything

Version all the information that affects your production system. Use a version control system for your source code, certainly, but it can also hold your tests, your scripts, your configuration files, and anything else you can think of that affects your project. You may want to use virtualization libraries such as System Center Virtual Machine Manager (SCVMM) or the Windows Azure management tools for virtual environments, and imaging and deployment tools such as the Windows Automated Installation Kit (Windows AIK) for physical environments. NuGet might be a good choice as an artifact repository for binaries and dependencies; for more information, go to https://www.nuget.org/. Many teams use SharePoint for their documentation; for more information, go to the SharePoint product site. In fact, any versioning tool you're comfortable with is fine, as long as it supports a release pipeline with some automation and is well understood by your team.
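
As a purely hypothetical illustration of what "version everything" can mean, a project's repository might be organized along these lines (the folder names are ours, not a prescribed layout):

    /src            application source code
    /tests          unit, acceptance, and capacity tests
    /scripts        build, deployment, and provisioning scripts
    /environments   environment definitions and configuration files
    /database       schema and migration scripts
    /docs           specifications and release notes

The point is not this particular layout but that everything the release depends on lives under version control, so any historical build can be reproduced.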

Use Continuous Integration

Continuous integration is defined in various ways by various groups. In this book, we use the definition given by Martin Fowler: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly."

In this guidance, we mean that you should frequently integrate your work with the main branch. Ideally, explicit integration phases are, at some point, no longer necessary because your code is always integrated.

Use Automation

Wherever you can, automate the release pipeline. Automation makes the release process a repeatable, predictable experience. Think about automating not just the pipeline itself, but how you do provisioning, how you create environments, and even how you maintain your infrastructure. Manual steps are repetitious and error prone while automation makes a process repeatable and reliable.

There are sound business reasons for using automation. It maximizes the talents of the people you've hired and frees them to do what they do best: tasks that are creative and innovative. Leave the drudgery to your computers; they never get bored. Automation also helps remove dependencies on particular people, such as the only person who knows how to deploy to the production environment or run some group of tests. With automation, anyone with the correct permissions can set the process in motion.

Manage Environments

Are your developers and testers handcrafting their own environments, manually installing each piece of software and tweaking configuration files? How long does it take for them to do this? Managing your environments by using automation can solve many problems that plague teams as they try to release their software.

Automation can help to create environments that conform to some known baseline. Automation also makes your environments as versionable, repeatable, and testable as any other piece of software. Finally, it's much easier to create environments with automation, which means that by making environments (and the tools that create them) available early, every team member can run and test the code in consistent and stable environments from the outset of the project.

If you can, keep the differences between each environment as small as possible. The closer the environments are to each other, the easier it will be to achieve continuous delivery because you can identify interoperability conflicts between the code and the environment long before you reach production. If you do have differing environments (this can be particularly true for development environments), have your key testing environments mirror the production environment as closely as possible.

For some people, the amount of time it takes for a developer to set up a machine is a litmus test that indicates how difficult it's going to be to start automating other aspects of the release pipeline. For example, if a developer can set up a system in a few hours or less, then there are probably some processes and tools already in place that will help with the rest of the automation. If it takes more than a day, that could indicate that automation is going to be difficult.
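
If you want to start chipping away at that setup time, a small script that checks a machine against a known baseline is one possible first step. The following Python sketch is only an illustration, and the tool names are placeholders for whatever your project actually requires.

    import shutil

    # Hypothetical baseline: command-line tools a developer machine needs before
    # it can build and test the product. Replace with your real requirements.
    REQUIRED_TOOLS = ["msbuild", "nuget", "mstest"]

    def check_baseline(tools):
        """Report which required tools are missing from this machine."""
        missing = [tool for tool in tools if shutil.which(tool) is None]
        if missing:
            print("Machine does not match the baseline. Missing:", ", ".join(missing))
        else:
            print("Machine matches the baseline.")
        return missing

    if __name__ == "__main__":
        check_baseline(REQUIRED_TOOLS)

A check like this can grow into a script that installs what's missing, which is the path toward environments that are versionable and repeatable.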

Fail Fast, Fail Often

Failure shouldn't be feared. You can't innovate or learn without it. Expect it, and resolve the issues when they arise.

To address problems quickly, you need to know that a problem is there as soon as possible. Every validation stage should send feedback to the team immediately if the software fails. Additionally, the tests themselves should run quickly. This is particularly true for the unit tests. These initial tests should complete in a few minutes. If your software passes, you have a reasonable level of confidence that it works. If it fails, you know the software has a critical problem.

The other test stages may run slowly. If they take a very long time, you might want to run them in parallel, across multiple machines rather than on a single machine. Another possibility is to make the pipeline wider rather than longer, breaking the dependencies that are inherent in a strictly sequential system. Here’s an example that shows a sequential pipeline.

[Figure: a sequential release pipeline]

In this pipeline, one stage follows another. If the acceptance tests, for example, take a long time to run, then capacity testing is delayed until they finish. You may be able to rearrange some of the stages so that all builds that pass some designated stage are available. Here’s an example that shows the same pipeline, but now shorter and wider.

[Figure: the same pipeline, made shorter and wider]

Any build that passes the acceptance tests can go to production, undergo automated capacity tests, or be proven to meet all contractual requirements with manual user acceptance tests (UAT). Breaking dependencies by stacking your pipeline gives you more flexibility than a sequential pipeline does. You can react more quickly to circumstances, such as the need to release a hotfix or bypass an unnecessary stage.
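
Here's a small sketch of what "wider rather than longer" looks like in code. It's illustrative only: the two stage functions are stand-ins for real test runs, and the build identifier is made up.

    import concurrent.futures
    import time

    # Placeholder stages; each would normally kick off a real test run against
    # the build that has already passed the acceptance tests.
    def capacity_tests(build):
        time.sleep(1)  # stands in for a long-running stage
        return build + ": capacity tests passed"

    def user_acceptance_tests(build):
        time.sleep(1)
        return build + ": UAT passed"

    def run_after_acceptance(build):
        """Run the later stages in parallel instead of one after the other."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(capacity_tests, build),
                       pool.submit(user_acceptance_tests, build)]
            for future in concurrent.futures.as_completed(futures):
                print(future.result())

    if __name__ == "__main__":
        run_after_acceptance("Build 1.0.0.42")

Because neither stage waits for the other, a slow capacity run no longer delays UAT, and either one can still reject the build.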

Provide Visibility

Visibility means that everyone on the team has access to all the information required to know what's happening to the software as it goes through the pipeline. Examples of what you might want to know include the build's version number, the build configuration, and the tests that failed. How you expose the information is up to you. You may have a dashboard, you may use a whiteboard, but whatever method you choose, all team members should have easy access to the information.

Some people refer to the display that makes the information visible as an information radiator, a term first coined by Alistair Cockburn. According to Cockburn, "an information radiator is a display posted in a place where people can see it as they work or walk by. It shows readers information they care about without having to ask anyone a question. This means more communication with fewer interruptions." Qualities of a good radiator are:

  • It’s large and easily visible to the casual, interested observer.
  • It’s understood at a glance.
  • It changes periodically, so it's worth visiting and revisiting.
  • It’s easily kept current.

People get very creative when they design their radiators. They use computer screens, wall boards with sticky notes, and even lava lamps. One popular approach is to use a traffic light, with four possible combinations.

[Figure: a traffic light used as an information radiator]

If the light is green then the build and all the tests have passed. If the light is yellow, then the build and tests are in progress. If both the yellow and green lights are on, then the build is unlikely to fail. If the light is red, some part of the build or the tests has failed.
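
The logic behind a radiator like that is simple enough to express in a few lines. This Python sketch is an assumption about how you might wire it up: it supposes your build system can report whether a build is in progress and whether the last completed build passed.

    def traffic_light(build_in_progress, last_build_passed):
        """Map pipeline status to the four light combinations described above."""
        if build_in_progress:
            # Yellow while a build runs; yellow plus green if the previous build
            # passed, which suggests this one is unlikely to fail.
            return ["yellow", "green"] if last_build_passed else ["yellow"]
        return ["green"] if last_build_passed else ["red"]

    if __name__ == "__main__":
        print(traffic_light(build_in_progress=False, last_build_passed=True))   # ['green']
        print(traffic_light(build_in_progress=True, last_build_passed=True))    # ['yellow', 'green']
        print(traffic_light(build_in_progress=True, last_build_passed=False))   # ['yellow']
        print(traffic_light(build_in_progress=False, last_build_passed=False))  # ['red']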

Bring the Pain Forward

If there's something particularly painful in your release process, do it more frequently and do it sooner. Front load your pipeline so the hardest steps happen early. For example, if you do most of your testing at the end of the project and this isn't working well for you, consider doing many of the tests early, as soon as a commit happens.

If you've begun to increase the number of releases or the pace at which you're creating releasable software, you may find that quality assurance (QA) and information security (Infosec) groups are lagging behind. Perhaps it takes several months for Infosec to perform a review. If this is the case, the answer is still the same. Start incorporating security tests into the integration process instead of waiting until the end of the project. If static code analysis tools are taking too long, run only the most important set of rules on every check-in, and run the rest of the validations as early and as often as possible. You may even want to have a dedicated code analysis stage that performs exhaustive checks; because static code analysis is performed on the assemblies, you won't have to build again to run it.
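
One way to picture that split is shown below. The rule names and the two-stage arrangement are hypothetical; the idea is simply that every check-in runs the critical subset while a dedicated stage runs everything.

    # Hypothetical split of static code analysis rules between pipeline stages.
    CRITICAL_RULES = ["security", "reliability"]                        # every check-in
    EXHAUSTIVE_RULES = CRITICAL_RULES + ["maintainability", "globalization", "naming"]

    def rules_for_stage(stage):
        """Return the analysis rule sets to run in a given pipeline stage."""
        return EXHAUSTIVE_RULES if stage == "code analysis" else CRITICAL_RULES

    if __name__ == "__main__":
        print("Commit stage runs:", rules_for_stage("commit"))
        print("Dedicated code analysis stage runs:", rules_for_stage("code analysis"))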

Take Small Steps

Even one of the guidelines we've discussed might sound difficult to implement, let alone all of them. Try to identify a single aspect of your release process that you'd like to improve. Perhaps take a look at the one that's giving you the biggest problems. Talk it over with your team and think about a feasible solution that would improve matters even a little. Implement it. Did it work? Is life better? If not, why not? If it did work, do the same thing again for another problem.

[Figure: the plan-do-check-adjust cycle]

This cycle of iterative process management is often called the Deming cycle or the PDCA (plan-do-check-adjust) cycle. W. Edwards Deming is considered by many to have initiated the modern quality control movement. The article in Wikipedia on PDCA gives an introduction to the subject.

Think About DevOps

The goals and practices we've discussed are often spoken of in terms of a software development mindset called DevOps. DevOps, an outgrowth of the Agile movement, stresses cooperation and communication between everyone involved in the development and release of good software. The name itself is a combination of development and operations (IT professionals), probably because these two groups often find themselves at odds with each other. Developers are rewarded according to how many new features they can create and release. Ops people are rewarded according to how stable and secure they can make the company's infrastructure. Developers may feel that Ops is slow and stodgy. Ops may feel that developers don't appreciate what it takes to actually release new software, let alone maintain what's already there.

However, it isn't only operations teams and software developers who are involved in the process. Testers, database managers, product and program managers, anyone involved in your project, should be a part of the release process. DevOps stresses close collaboration between traditionally distinct disciplines or silos.

This book touches on some of the principles espoused by DevOps proponents. It uses a fictional company, Trey Research, as the setting and, as you'll see, the employees of Trey Research find that building good software is about more than the tools. There's a very human component as well.

Is It Worth It?

Improving the release pipeline isn't easy, and a good question that you or your managers might ask is "Is it worth it?" The most direct answer is in the Agile Manifesto, published in February, 2001. Its first principle is "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." This statement is the justification for improving your release process. To point out the obvious, businesses thrive when their customers are happy. If they're not, they'll start looking elsewhere for answers. In support of that goal, improving your release process can result in:

  • Faster time to market
  • Better quality software
  • More productive employees

Faster Time to Market

Leaders in the world of online businesses have shrunk the timeline for software delivery from months to days or even hours. No matter what size business you have, customers now expect features such as real-time customer service and frequent releases of services. In his talk "Velocity Culture" given at Velocity 2011, Jon Jenkins, at that time a director at Amazon.com, announced that Amazon was deploying every 11.7 seconds. You may not need to be this fast, but if your organization is only releasing twice a year while a competitor is releasing once a month, there's a problem.

Better Quality Software

The more your pipeline can produce predictable, repeatable results, the better your software. Any improvement to your pipeline affects the quality of your software. If you make incremental changes, you'll be able to find bugs more easily. If you can deploy those changes early, you'll know right away if you're working on the right features. Find out if your customers like what you're doing before you've invested enormous amounts of time and money.

More Productive Employees

If you can reduce the number of repetitive, frustrating tasks your employees have to do, they'll have more time to exercise the talents that were the reasons you originally hired them. If your devs aren't overwhelmed trying to fix bugs from changes they made a month ago, they'll have more time to implement better products and services. If your testers aren't tied up with tests that could be done faster and better by a computer, they'll have time to come up with really creative ways to give the new app a workout. The same is true for everyone involved in releasing the software. People get to do what they're really good at and computers get to deal with all the drudgery.

The Tools You Need

The patterns we discuss in this book hold true everywhere, no matter how you implement them. We also present a particular solution that requires a specific set of tools. Here's what we use to create our pipeline.

Visual Studio 2012 Virtual Machine

The Visual Studio 2012 Application Lifecycle Management Virtual Machine (VM) is the environment you use for all the hands-on labs (HOLs) that accompany this guidance, except for those labs marked as advanced. This VM is familiarly known as the Brian Keller VM, and that's how we'll refer to it. For a complete description of the VM and instructions on how to download it, see Brian Keller's blog.

Note

All the HOLs that accompany this guidance run on the Visual Studio 2012 VM except for the Windows Phone 8 labs. These labs are considered to be optional and advanced, and they are not supported by the VM. They require Windows 8, and they require you to set up a Windows Communication Foundation service on Windows Azure. For more information, see the Introduction document that accompanies the HOLs.

Visual Studio 2012

You’re probably already familiar with Microsoft Visual Studio and its integrated development environment (IDE). Visual Studio comes with many tools that can help with, for example, code analysis, testing, and application lifecycle management (ALM). If you want to implement the pipeline we show in this book, you’ll need Visual Studio 2012 Ultimate or Visual Studio Premium because we use Visual Studio Lab Management templates and coded UI tests.

Microsoft Visual Studio Team Foundation Server 2012

TFS provides software development teams with the ability to collaborate on their projects. Anyone who is involved with creating software will find tools and capabilities that will help them perform their jobs. By anyone, we mean not just programmers, but testers, architects, program managers, business managers, and others who contribute to the development and release of software. This book stresses the following capabilities:

  • Version control. TFS provides a place to store and version source code as well as any other artifacts that impact your software project. Examples of these artifacts include scripts, configuration files, and documentation.
  • Test case management. Microsoft Test Manager (MTM) stores all the testing artifacts it uses, such as test plans, test cases, bugs, and the results of test runs, in TFS.
  • Build automation. TFS lets you automate your builds, which means you assemble your application into a product without human intervention. An automated build can include many activities such as compiling source code, packaging binaries, and running tests. In this guidance we use the TFS build automation system as the basis for the release pipeline's orchestration, stages, and steps.
  • Reporting. TFS provides many types of reports and metrics that give you insight into all aspects of your project. In this book we concentrate on metrics that help you validate the success of your release pipeline.
  • Environment management. TFS, in conjunction with Lab Management, helps you manage and provision your environments. In this book we concentrate on using Lab Management’s standard environments as a way of providing consistent environments for everyone involved in the software project.

Note

The HOLs that deal with monitoring and metrics have procedures that use TFS reports. TFS reports are only available if you use the full version of Team Foundation Server 2012 and it is installed on Windows Server 2008 or later. To duplicate those procedures and create the reports, you have two options. One is to install the full version of TFS on Windows Server 2008 or later. The other is to use the Brian Keller VM, which already runs on Windows Server.

Microsoft Test Manager

Microsoft Test Manager (MTM) is the dedicated interface for testers who work with Team Foundation Server. With it, you can create test plans, add and update test cases, and perform manual and automated tests.

Visual Studio Lab Management

Visual Studio Lab Management works with TFS and allows you to orchestrate physical and virtual test labs, provision environments, and automate build-deploy-test workflows. In this book, we use a new feature of Lab Management—standard environments. Standard environments, as opposed to System Center Virtual Machine Manager (SCVMM) environments, allow you to use any machine, whether physical or virtual, as an environment in Visual Studio, Team Foundation Server, and Microsoft Test Manager. Creating standard environments from your current environments is an easy way to get started with Lab Management. You only need to set up a test controller. For a quick tutorial on creating a standard environment, see Creating a Standard Environment.

Community TFS Build Extensions

The Community TFS Build Extensions are on CodePlex, where you can find workflow activities, build process templates, and tools for Team Foundation Build. The pipeline implementation in this guidance uses several of the workflow activities, such as TFSVersion and QueueBuild.

Web Deploy

Web Deploy is the standard packaging and deployment tool for IIS servers. It includes MS Deploy, which is also used in the HOLs. For more information about Web Deploy, go to the IIS website.

Windows Installer XML

The Windows Installer XML (WiX) toolset builds Windows installation packages from XML source code. For more information, go to the WiX website.

Microsoft Excel

Portions of the HOLs include data in Excel spreadsheets.

Additional Tools

Two tools have recently become available that are designed to help you deploy a single build to multiple environments.

DevOps Deployment Workbench Express Edition

The ALM Rangers DevOps Deployment Workbench Express Edition is a new tool that can help you to build once and deploy to multiple environments. For more information, see the ALM Rangers DevOps Tooling and Guidance website. You can also read Appendix 1 in this guidance to get an overview of what the tool does.

InRelease

InRelease is a continuous delivery solution that automates the release process from TFS to your production environment. By using predefined release paths, InRelease automatically deploys your application to multiple environments. Based on a business-approval workflow, InRelease improves coordination and communication between development, operations, and quality assurance to make release cycles repeatable, visible, and more efficient. It gives you a single view of the release process that can help you to identify failures and bottlenecks between stages. It can also perform rollbacks. For more information, see the InRelease website.

Trey Research's Big Day

Trey Research is a small startup that makes mobile apps for ecological field work. Its competitors are larger, well-established companies who sell dedicated hardware. Trey Research hopes to succeed by keeping the costs of its products down and by being nimbler than its competitors. Because it produces software, the company wants to be able to quickly add new features in response to customer feedback and shifts in the market.

Trey Research's newest product sends GPS coordinates back to a Windows Communication Foundation service on a Windows Azure Virtual Machine and displays a Bing map on a Windows Phone 8. The app uses Windows Presentation Foundation for its user interface. Today there's a meeting to discuss how the CEO's first demo of the product went at an important conference. Here are the meeting attendees.

Zachary is the CEO of Trey Research. He started as a developer, but found out he was more interested in the big picture. He likes thinking about what software should look like a few years down the road and how his company can be ahead of the pack.

Paulus is a developer who's been working with computers since he was a kid. He has a real passion for code. His hobby is working on open source projects with other programmers from all over the world.

Iselda is the test lead. She's calm, which helps with some temperamental developers. She's more interested in analyzing software and writing test programs than in writing the applications themselves. She's good at organizing and setting priorities and lives to find edge cases.

Raymond is in operations. He likes practical solutions and he's very cautious (although some people might use the word "paranoid"), which makes sense because he's the person who gets the 03:00 call when something goes wrong.

Jin is the new guy. In Chapter 2, he joins Trey Research as a developer. He's worked on all sorts of systems. He likes the idea of being in a small startup where there's lots of opportunity for innovation. He's also a big advocate of continuous delivery and DevOps. He keeps a journal, just for himself, where he records his impressions about what's happening on the new job.

Right now, Raymond, Iselda and Paulus are waiting for Zachary to show up.

Things are not going well for the folks at Trey Research. They have multiple problems, no clear idea why those problems exist, and they're not looking at their situation as a team. The rest of this book is about solving those problems by adopting some new tools and some new ways of working together.

What's Next?

Here's what the rest of this book covers:

Chapter 2: The Beginning

To solve a problem, you first need to analyze what's going wrong. This chapter explains how to develop a value stream map, a flow diagram that shows all the steps required to take a product or service from its initial state to the customer. The map includes all the people, processes, times, information, and materials that are included in the end-to-end process. The Trey Research team also starts using TFS to manage its projects, beginning with tools such as a product backlog and a Kanban board.

The chapter's main focus is on the Trey Research release pipeline, as it currently exists. The chapter explains what each stage of the pipeline does, the environments, how code and artifacts are stored, and the tools the Trey Research team uses. Finally, you learn about some of the problems that exist because of how the pipeline is implemented.

Chapter 3: Orchestrating the Release Pipeline

This chapter shows the first steps to take to improve the release pipeline, with continuous delivery as the final goal. It focuses on orchestration, which is the arrangement, coordination and management of the pipeline. You orchestrate the pipeline as a whole and you also orchestrate each stage of the pipeline. A number of best practices are included for guidance. Next, the chapter focuses on the Trey Research team. They decide how to prioritize all the problems they have, and begin to implement changes to their pipeline to address those issues. They use the TFS and Lab Management default build templates to create a skeleton framework that will be the basis for future improvements. They also start to learn about some of the tools TFS offers to manage projects.

Chapter 4: Automating the Release Pipeline

To really make progress, the Trey Research team needs to move away from the largely manual pipeline they have now to one that's largely automated. In this chapter, they automate their deployments, the creation of environments, and at least some of their tests. At the conclusion of this chapter, the team has a fully functional continuous delivery pipeline.

Chapter 5: Getting Good Feedback

The team is celebrating because they now have a fully functional continuous delivery pipeline. They know that their release process is improved, but the problem is that they don't have any actual data that proves it. In this chapter, the team starts to monitor their pipeline so that they can collect all the data it generates and present it in a meaningful way. They also start to track some metrics that are particularly relevant to a continuous delivery release process.

Chapter 6: Improving the Pipeline

The team has gotten a taste for continually improving their pipeline and processes. They know that there is always some area that needs attention. In this chapter, they look at some problems they still have, and consider ways that they can be solved. This chapter deals with Trey Research's future, and what the team can do, over multiple iterations, to make it better.

Conventions

The guidance contains diagrams of the Trey Research pipeline that show how it changes from iteration to iteration. In the diagrams, we use the color blue to highlight changes in the pipeline. We use a gray, bold italic font to highlight the tools that are used. Here's an example.

[Figure: a pipeline diagram showing the conventions used in this guidance]

  • The commit stage is outlined in blue and its name is in blue because the stage is new.
  • The text "Customized TFS default template" is in gray, bold italics because this is a tool that's used for this stage.
  • The text "Merge from Dev branch" is in blue because this is a new step.
  • The text "Perform code analysis" is in black because it's the same as in the previous iteration.

More Information

There are a number of resources listed in the text throughout the book. These resources provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away. You can find the bibliography at: https://msdn.microsoft.com/library/dn449954.aspx.

The book that brought continuous delivery to everyone's attention is Continuous Delivery by Jez Humble and David Farley. For more information, see Jez Humble's Continuous Delivery website at http://continuousdelivery.com/.

Martin Fowler is another well-known advocate of continuous delivery. His blog is at http://martinfowler.com/.

Alistair Cockburn's blog is at http://alistair.cockburn.us/.

For guidance that helps you assess where your organization stands in terms of application lifecycle management (ALM) best practices, see the ALM Rangers ALM Assessment Guide at http://vsaralmassessment.codeplex.com/.

The ALM Rangers DevOps Deployment Workbench Express Edition can help you to build once and deploy to multiple environments. For more information, see the ALM Rangers DevOps Tooling and Guidance website at https://vsardevops.codeplex.com/.

For a complete list of guidance that's available from the ALM Rangers, see the Visual Studio ALM Ranger Solutions Catalogue at https://aka.ms/vsarsolutions.

If you're interested in W. Edwards Deming and the Deming cycle, the article in Wikipedia at http://en.wikipedia.org/wiki/PDCA gives an introduction to the subject.

Jon Jenkins's talk "Velocity Culture" is at http://www.youtube.com/watch?v=dxk8b9rSKOo.

For more information about the Windows Automated Installation Kit go to https://www.microsoft.com/en-us/download/details.aspx?id=5753.

For more information about NuGet, go to https://www.nuget.org/.

For more information about SharePoint, go to https://office.microsoft.com/en-us/microsoft-sharepoint-collaboration-software-FX103479517.aspx.

The Community TFS Build Extensions are at http://tfsbuildextensions.codeplex.com/.

For more information about Web Deploy, go to the IIS website at https://www.iis.net/downloads/microsoft/web-deploy.

For more information about WiX, go to the website at http://wixtoolset.org/.

To learn about Lab Management standard environments, see Creating a Standard Environment at https://aka.ms/CreatingStandardEnvironments.

Information about the Brian Keller VM is at https://aka.ms/VS11ALMVM.

For more information about InRelease, see the website at http://www.incyclesoftware.com/inrelease/.

The hands-on labs that accompany this guidance are available on the Microsoft Download Center at https://go.microsoft.com/fwlink/p/?LinkID=317536.
