April 2017

Volume 32 Number 4

[Containers]

Modernizing Traditional .NET Apps with Docker

By Elton Stoneman

The Microsoft .NET Framework has been a successful application platform for 15 years, with countless business-critical apps running on older versions of the Framework and older versions of Windows Server. These traditional apps still offer great business value, but they’re likely to be difficult to maintain, upgrade, extend and manage. Equally, they may not justify the investment needed for a full rewrite. With Docker, a platform for running applications in lightweight containers, and Windows Server 2016, you can give traditional apps a new lease on life—adding features, increasing security and performance, and moving toward continuous deployment—without a lengthy and expensive rebuild project.

In this article I’ll take a monolithic ASP.NET WebForms app that connects to a SQL Server database, and modernize it by taking advantage of the Docker platform. I’ll start by moving the whole app as is to Docker, without any code changes, and run the Web site and database in lightweight containers. Then I’ll show a feature-driven approach to extending the app, improving performance and giving users self-service analytics. With the Docker platform you’ll see how to iterate with new versions of the app, upgrade the components quickly and safely, and deploy the complete solution to Microsoft Azure.

Where Docker Fits in .NET Solutions

Docker is for server applications—Web sites, APIs, messaging solutions and other components that run in the background. You can’t run desktop apps in Docker because there’s no UI integration between the Docker platform and the Windows host. That rules out running Windows Forms or Windows Presentation Foundation (WPF) apps in containers (although you could use Docker to package and distribute those desktop apps), but Windows Communication Foundation (WCF), .NET console apps and all flavors of ASP.NET are great candidates.

To package an application to run in Docker, you write a small script called a Dockerfile that automates all the steps for deploying the app. Typically this includes Windows PowerShell commands for configuration and instructions to copy application content and set up any dependencies. You can unzip compressed archives or install MSIs, too, but the packaging process is all automated, so you can’t run an install process that has a Windows UI and needs user input.

When you’re looking at a solution architecture to figure out which parts can run in Docker containers, keep in mind that any component that can be installed and run without the Windows UI is a good candidate. This article focuses on .NET Framework apps, but you can run anything in a Windows container that runs on Windows Server, including .NET Core, Java, Node.js and Go apps.

Migrating .NET Apps to Containers

How you migrate to Docker depends on how you’re currently running your app. If you have a fully configured app running in a Hyper-V VM, the open source Image2Docker tool can automatically generate a Dockerfile from the VM’s disk. If you have a build process that publishes an MSI or a WebDeploy package, it’s easy to write your own Dockerfile by using one of Microsoft’s base images on Docker Hub.

Here’s a complete Dockerfile that scripts the packaging of an ASP.NET WebForms app into a Docker image:

FROM microsoft/aspnet:windowsservercore-10.0.14393.693
SHELL ["powershell"]
RUN Remove-Website -Name 'Default Web Site'; \
    New-Item -Path 'C:\web-app' -Type Directory; \
    New-Website -Name 'web-app' -PhysicalPath 'C:\web-app' -Port 80 -Force
EXPOSE 80
RUN Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters' \
    -Name ServerPriorityTimeLimit -Value 0 -Type DWord
COPY ProductLaunch.Web /web-app

Nine lines of script are all I need, and there are no application changes. This could be an ASP.NET 2.0 app, currently running on Windows Server 2003—with this Dockerfile I can build it into an image that immediately upgrades the app to Windows Server 2016 and the .NET Framework 4.6.2. I’ll walk through each of those instructions:

  • FROM microsoft/aspnet tells Docker which image to use as the starting point. In this case, it’s a Microsoft image with IIS and ASP.NET installed on top of a specific version of Windows Server Core.
  • SHELL ["powershell"] changes to a different shell for the rest of the Dockerfile, so I can run PowerShell cmdlets.
  • RUN Remove-Website uses PowerShell to set up IIS, removing the default Web site and creating a new one with a known location for the application.
  • EXPOSE 80 opens port 80 explicitly to allow network traffic into the container, as Docker containers are locked down by default.
  • RUN Set-ItemProperty turns off the Windows DNS cache inside the image, so any DNS requests get served by Docker.
  • COPY ProductLaunch.Web copies the published Web site project from the ProductLaunch.Web directory on the host into the image.

The Dockerfile is like a deployment guide for the Web application, but instead of being a vague human document, it’s a precise and actionable script. To produce the packaged app I run the docker build command from the directory that contains the Dockerfile and the published Web site:

docker build --tag sixeyed/msdn-web-app:v1 .

This command builds a Docker image with the name sixeyed/msdn-web-app and the tag v1. The name contains my user account for the Hub (sixeyed), so I can share this image by signing in with my credentials and pushing it to the Hub. Tags are useful for versioning images, so when I package a new version of the application, the image name will stay the same, but the tag will be v2.
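Sharing the image is then just a matter of signing in to Docker Hub and pushing it:

docker login
docker push sixeyed/msdn-web-app:v1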

Now I can run a container from the image, and that will start the application. But the sample app has a dependency on SQL Server, so I need SQL Server running before I can start the Web site.

Pulling Dependencies from Docker Hub

Docker has a networking stack that lets containers reach each other over a virtual network, and also lets containers reach external hosts running on the physical network. If I had a SQL Server instance running on a machine in the network, the ASP.NET app in the container could use it—I’d just need to specify the server name in the connection string. Or I can run SQL Server in a container, and the Web app will be able to reach it by using the container name in the connection string.
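For example, assuming the database container is named sql-server (as it will be in the next section), the connection string in Web.config needs nothing more than the container name as the server; the database name here is a placeholder and the credentials match the sa password I set when the SQL Server container starts:

<connectionStrings>
  <add name="ProductLaunchDb"
       connectionString="Server=sql-server;Database=ProductLaunch;User Id=sa;Password=MSDNm4g4z!n3"
       providerName="System.Data.SqlClient" />
</connectionStrings>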

SQL Server Express is available on Docker Hub in an image maintained by Microsoft. To start a database container from that image, I run:

docker run --detach `
 --publish 1433:1433 `
 --env sa_password=MSDNm4g4z!n3 `
 --env ACCEPT_EULA=Y `
 --name sql-server `
 microsoft/mssql-server-windows-express

This starts a container in the background with the detach flag and publishes port 1433, so I can connect to the SQL instance in the container from outside, perhaps using SQL Server Management Studio on the host. The env options are key-value pairs, which Docker surfaces inside the container as system environment variables. The SQL Server image uses these values to confirm that the license agreement has been accepted, and to set the password for the sa user.

To run a container, Docker needs to have a copy of the image locally. Distribution is built into the Docker platform, so if you don’t have the SQL Server Express image locally when you run this command, Docker will download it from the Hub. There are more than half a million images on Docker Hub, which have been downloaded more than 9 billion times. Docker started in the Linux world and the majority of those images are Linux apps, but there are a growing number of high-quality Windows apps you can download and drop straight into your solution.
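You can also download the image ahead of time by pulling it explicitly:

docker pull microsoft/mssql-server-windows-express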

SQL Server is running in a Docker container now, and my Web app uses sql-server as the hostname in the connection string so it will connect to the database running in Docker. I can start the WebForms application in the background and publish port 80 to make the Web site accessible:

docker run --detach `
 --publish 80:80 `
 sixeyed/msdn-web-app:v1

If an external machine sends a request on port 80 to my host, Docker receives the request and transparently forwards it to the ASP.NET app running in the container. If I’m working on the host, I need to use “docker inspect” to get the container’s IP address and browse to the container to see the site, which is a simple product launch microsite. You can see the data capture page from the site running in Docker in Figure 1.
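For example, a command like this returns the IP address Docker has assigned to the Web container on the default nat network (substitute the container ID shown by docker ps):

docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' <container-id>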

Figure 1 A Signup Page for a Site Running in Docker

Run “docker ps” and you’ll see a list of all running containers. One is a database and one is a Web application, but you manage them both in the same way—“docker top” shows you the processes running in the container; “docker logs” shows you the log output from the app; and “docker inspect” shows you which ports are open and a host of other information about the container. Consistency is a major benefit of the Docker platform. Apps are packaged, distributed and managed in the same way, no matter what technology they use.
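For example, with the database container started earlier:

docker ps
docker top sql-server
docker logs sql-server
docker inspect sql-server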

Splitting Features from Monolithic Apps

Now that the application is running on a modern platform, I can start to modernize the application itself. Breaking a monolithic application down into smaller services can be a significant project, but you can take a more targeted approach by working on key features, such as those that change regularly, so you can deploy updates to a changed feature without regression testing the whole application. Features with non-functional requirements that would benefit from a different design are also good candidates, because you can address them without re-architecting the whole app.

I’m going to start here by fixing a performance issue. In the existing code, the application makes a synchronous connection to the database to save the user’s data. That approach doesn’t scale well; lots of concurrent users would turn SQL Server into a bottleneck. Asynchronous communication with a message queue is a much more scalable design. For this feature, I can publish an event from the Web app to a message queue and move the data-persistence code into a new component that handles that event message.

This design does scale well. If I have a spike of traffic to the Web site I can run more containers on more hosts to cope with the incoming requests. Event messages will be held in the queue until the message handler consumes them. For features that don’t have a specific SLA, you can have one message handler running in a single container and rely on the guarantees of the message queue that all the events will get handled eventually. For SLA-driven features you can scale out the persistence layer by running more message-handler containers.

The source code that accompanies this article has folders for version 1, version 2 and version 3 of the application. In version 2, the SignUp.aspx page publishes an event when the user submits the details form:

var eventMessage = new ProspectSignedUpEvent
{
  Prospect = prospect,
  SignedUpAt = DateTime.UtcNow
};
MessageQueue.Publish(eventMessage);

Also in version 2 there’s a shared messaging project that abstracts the details of the message queue, and a console application that listens for the event published by the Web app and saves the user’s data to the database. The persistence code in the console app is directly lifted from the version 1 code in the Web app, so the implementation is the same but the design of the feature has been modernized.
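The exact class and method names are in the sample code, but the shape of the message handler is roughly this: a subscription on the shared messaging component, with the original persistence code invoked in the callback (Subscribe and SaveProspect stand in for the real method names):

static void Main(string[] args)
{
  // listen for the event the Web app publishes to the queue
  MessageQueue.Subscribe<ProspectSignedUpEvent>(message =>
  {
    // persistence code lifted from the version 1 Web app
    SaveProspect(message.Prospect, message.SignedUpAt);
  });

  // keep the console app alive so it carries on handling messages
  Console.WriteLine("Listening for prospect sign-up events");
  Thread.Sleep(Timeout.Infinite);
}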

The new version of the application is a distributed solution with many working parts, as shown in Figure 2.

Figure 2 The Modernized Application Has Many Working Parts

There are dependencies between the components, and they need to be started in the correct order for the solution to work properly. This is one of the problems of orchestrating an application running across many containers, but the Docker platform deals with that by treating distributed applications as first-class citizens.

Orchestrating Applications with Docker Compose

Docker Compose is the part of the Docker platform that focuses on distributed applications. You define all the parts of your application as services in a simple text file, including the dependencies between them and any configuration values they need. This is part of the Docker Compose file for version 2, showing just the configuration for the Web app:

product-launch-web:
  image: sixeyed/msdn-web-app:v2
  ports:
    - "80:80"
  depends_on:
    - sql-server
    - message-queue
  networks:
    - app-net

Here, I’m specifying the version of the image to use for my Web application. I publish port 80 and then I explicitly state that the Web app depends on the SQL Server and message queue containers. To reach these containers, the Web container needs to be in the same virtual Docker network, so all the containers in the Docker Compose file are joined to the same virtual network, called app-net.
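The app-net network itself is declared at the end of the Compose file. On Windows Server 2016, the simplest option is to map it onto Docker’s default nat network; the sample code may declare it differently, but the idea is roughly this:

networks:
  app-net:
    external:
      name: nat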

Elsewhere in the Docker Compose file I define a service for SQL Server, using the Microsoft image on Docker Hub, and for my message queue I use NATS, a high-performance open source messaging system that’s available as an official image on Docker Hub. The final service is the message handler, a .NET console application packaged as a Docker image using a simple Dockerfile.
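That Dockerfile isn’t shown here, but it doesn’t need much: a Windows Server Core base image (which already includes the full .NET Framework), the published console app and a startup command. A minimal sketch, with illustrative directory and executable names, looks like this:

# Windows Server Core already includes the full .NET Framework
FROM microsoft/windowsservercore:10.0.14393.693
# copy the published console app from the host into the image
COPY SaveProspectHandler /save-handler
# run the handler when a container starts
CMD ["C:\\save-handler\\SaveProspectHandler.exe"]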

Now I can run the application using the Docker Compose command line:

docker-compose up -d

Then Docker Compose will start containers for each of the components in the right order, giving me a working solution from a single command. Anyone with access to the Docker images and the Docker Compose file can run the application and it will behave in the same way—on a Windows 10 laptop, or on a Windows Server 2016 machine running in the datacenter or on Azure.

For version 2, I made a small change to the application code to move a feature implementation from one component to another. The end-user behavior is the same, but now the solution is easily scalable, because the Web tier is decoupled from the data tier, and the message queue takes care of any spikes in traffic. The new design is easy to extend, as well, as I’ve introduced an event-driven architecture, so I can trigger new behavior by plugging in to the existing event messages.
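Scaling the persistence layer is a single command with Docker Compose. For example, if the version 2 message handler is defined as a service named save-prospect-handler (the name here is illustrative), this runs three handler containers in total, starting two more from the same image alongside the one already running:

docker-compose scale save-prospect-handler=3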

Adding Self-Service Analytics

For my sample app, I’m going to make one more change to show how much you can do with the Docker platform, with very little effort. The app currently uses SQL Server as a transactional database, and I’m going to add a second data store as a reporting database. This will let me keep reporting concerns separate from transactional concerns, and also gives me free choice of the technology stack.

In version 3 of the sample code, I’ve added a new .NET console app that listens for the same event messages published by the Web application. When both console apps are running, the NATS message queue will ensure they both get a copy of all events. The new console app receives the events and saves the user data in Elasticsearch, an open source document store you can run in a Windows Docker container. Elasticsearch is a good choice here because it scales well, so I can cluster it across multiple containers for redundancy, and because it has an excellent user-facing front end available called Kibana.

I haven’t made any changes to the Web application or the SQL Server message handler from version 2, so in my Docker Compose file I just add new services for Elasticsearch and Kibana, and for the new message handler that writes documents to the Elasticsearch index:

index-prospect-handler:
  image: sixeyed/msdn-index-handler:v3
  depends_on:
    - elasticsearch
    - message-queue
  networks:
    - app-net

Docker Compose can make incremental upgrades to an application; it won’t replace running containers if their definition matches the service in the Docker Compose file. In version 3 of the sample application, there are new services but no changes to the existing services, so when I run docker-compose up -d, Docker will run new containers for Elasticsearch, Kibana and the index message handler, but leave the others running as is. That makes for a very safe upgrade process, where you can add features without taking the application offline.

This application prefers convention over configuration, so the host names for dependencies like Elasticsearch are set as defaults in the app, and I just need to make sure the container names match in the Docker Compose setup.
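As an example, the Elasticsearch service just has to be named elasticsearch so the default host name in the index handler resolves to the right container. The image name here is illustrative; any Windows-based Elasticsearch image will work:

elasticsearch:
  image: sixeyed/elasticsearch:nanoserver
  networks:
    - app-net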

When the new containers have started, I can use “docker inspect” to get the IP address of the Kibana container, and browse to port 5601 on that address. Kibana has a very simple interface and in a few minutes I can build a dashboard that shows the key metrics for people signing up with their details, as shown in Figure 3.

Figure 3 A Kibana Dashboard

Power users will quickly find their way around Kibana, and they’ll be able to make their own visualizations and dashboards without needing to involve IT. Without any downtime I’ve added self-service analytics to the application. The core of that feature comes from enterprise-grade open source software I’ve pulled from Docker Hub into my solution. The custom component to feed data into the document store is a simple .NET console application, with around 100 lines of code. The Docker platform takes care of plugging the components together.

Running Dockerized Solutions on Azure

Another great benefit of Docker is portability. Applications packaged into Docker images will run the exact same way on any host. The final application for this article uses the Windows Server and SQL Server images owned by Microsoft; the NATS image curated by Docker; and my own custom images. All those images are published on the Docker Hub, so any Windows 10 or Windows Server 2016 machine can pull the images and run containers from them.

Now my app is ready for testing, and deploying it to a shared environment on Azure is simple. I’ve created a virtual machine (VM) in Azure using the Windows Server 2016 Datacenter with Containers option. That VM image comes with Docker installed and configured, and the base Docker images for Windows Server Core and Nano Server already downloaded. One item not included in the VM is Docker Compose, which I downloaded from the GitHub release page.
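Installing Docker Compose on the VM is a one-line PowerShell download; the version number here is only an example, so check the release page for the current release, and the exact file name may differ:

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-Windows-x86_64.exe" `
  -OutFile "$env:ProgramFiles\docker\docker-compose.exe" -UseBasicParsing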

The images used in my Docker Compose file are all in public repositories on Docker Hub. For a private software stack, you won’t want all your images publicly available. You can still use Docker Hub and keep images in private repositories, or you could use an alternative hosted registry like Azure Container Registry. Inside your own datacenter you can use an on-premises option, such as Docker Trusted Registry.

Because all my images are public, I just need to copy the Docker Compose file onto the Azure VM and run docker-compose up -d. Docker will pull all the images from the Hub, and run containers from them in the correct order. Each component uses conventions to access the other components, and those conventions are baked into the Docker Compose file, so even on a completely fresh environment, the solution will just start and run as expected.

If you’ve worked on enterprise software releases, where setting up a new environment is a manual, risky and slow process, you’ll see how much benefit is to be had from Windows Server 2016 and the Docker platform. The key artifacts in a Docker solution—the Dockerfile and the Docker Compose file—are simple, unambiguous replacements for manual deployment documents. They encourage automation and they make it straightforward to build, ship and run a solution in a consistent way on any machine.

Next Steps

If you’re keen to try Docker for yourself, the Image2Docker PowerShell module is a great place to start; it can build a Dockerfile for you and jump-start the learning process. There are some great, free, self-paced courses on training.docker.com, which provisions an environment for you. Then, when you’re ready to move on, check out the Docker Labs on GitHub, which has plenty of Windows container walk-throughs.

There are also Docker MeetUps all over the world where you can hear practitioners and experts talk about all aspects of Docker. The big Docker conference is DockerCon, which is always a sell-out; this year it’s running in Texas in April and in Copenhagen in October. Last, check out the Docker Captains—they’re the Docker equivalent of Microsoft MVPs. They’re constantly blogging, tweeting and speaking about all the cool things they’re doing with Docker, and following them is a great way to keep a pulse on the technology.


Elton Stoneman is a seven-time Microsoft MVP and a Pluralsight author who works as a developer advocate at Docker. He has been architecting and delivering successful solutions with Microsoft technologies since 2000, most recently API and Big Data projects in Azure, and distributed applications with Docker.

Thanks to the following technical expert who reviewed this article: Mark Heath
Mark Heath is a .NET developer specializing in Azure, creator of NAudio, and an author for Pluralsight. He blogs at markheath.net and you can follow him on Twitter: @mark_heath

