June 2019

Volume 34 Number 6

[Data Points]

EF Core in a Docker Containerized App, Part 3

By Julie Lerman

In the past two articles, I’ve walked you through building a small ASP.NET Core API using Entity Framework encapsulated in a Docker container. In the first article, I included a SQLite database within the container. In the second, I targeted an Azure SQL Database that can be reached from anywhere. The bigger challenge in this case was how to keep secrets (such as database passwords) secret and configurations (such as connection strings to development or production databases) fluid. This involved learning how to leverage the ability of Docker Compose to read environment variables from the host.

There’s another very interesting path for persisting data, which is to have both the database server and the data in containers. I’ve written about using SQL Server for Linux in Docker containers in earlier columns, such as the one at msdn.com/magazine/mt784660. I really like the way Microsoft’s Steve Lasker makes the case for using SQL Server in containers for dev and test:

Spinning up SQL in a container gives developers the ability to isolate/simulate an environment. Including a different version of SQL Server. Being able to start a test from the exact same state, each time, is goodness. Especially when testing a DB upgrade, instead of figuring out how to roll back the DB update, just restart the container.

In the article on SQL Server for Linux, I interacted with the database using only sqlcmd at the command line. In a more recent article (msdn.com/magazine/mt832858), I used Azure Data Studio to interact with a containerized SQL Server and database. Now I want to integrate a dedicated, containerized SQL Server into my API development. I’ll show you how to accomplish this in Visual Studio, continuing with the sample I used in the previous column.

Quick Review of the API

The API is simple with a single two-property Magazine class (Id and Name), a Magazines controller and an EF Core DbContext that uses migrations to seed new databases with four magazines.
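For orientation, here’s a minimal sketch of that model and context, using the names that appear later in this article (the four seed values shown are placeholders of my own, not necessarily the ones from the sample):

using Microsoft.EntityFrameworkCore;

public class Magazine
{
  public int Id { get; set; }
  public string Name { get; set; }
}

public class MagContext : DbContext
{
  public MagContext(DbContextOptions<MagContext> options) : base(options) { }

  public DbSet<Magazine> Magazines { get; set; }

  protected override void OnModelCreating(ModelBuilder modelBuilder)
  {
    // HasData seeding is captured in migrations, so every newly
    // created database starts out with these four rows
    modelBuilder.Entity<Magazine>().HasData(
      new Magazine { Id = 1, Name = "MSDN Magazine" },
      new Magazine { Id = 2, Name = "Docker Magazine" },
      new Magazine { Id = 3, Name = "EF Core Magazine" },
      new Magazine { Id = 4, Name = "Data Points Digest" });
  }
}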

The next iteration of the app, in Part 2 of the series (msdn.com/magazine/mt833438), introduced container orchestration via a docker-compose file. While that was a single-container solution that didn’t truly need orchestration, docker-compose provided the ability to pass environment variables into the container without storing the values directly in the image. But even with single-container solutions, a compose file can be used to run the app on a Docker Swarm or Kubernetes cluster, giving you scale and self-healing.

The MagContext class used the environment variables to inject a password into the database connection string.

Moving to a Containerized Database

The goal of this article is to target a SQL Server for Linux server and a database in containers on my development machine. This will mean updating the existing docker-compose file and updating connection strings within the app. In all, not a lot of effort. But if you’ve never done it before, it’s certainly nice to have someone show you the way!

The biggest changes will be to the docker-compose.yml file. And what’s super cool is that Docker will take care of the hardest pieces for you, simply by reading the instructions relayed in docker-compose. Because all of this starts in Visual Studio 2017, the included Tools for Docker will also participate when you run or debug from Visual Studio. If you’re using Visual Studio 2019 (which, as I’m writing this, was officially released only yesterday, so I’ll continue this series in 2017), the tooling is built in and the experience should be the same.

Here’s the previous version of the docker-compose.yml file:

version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW

This docker-compose file was only managing one image, defined in the dataapidocker service. The ${DOCKER_REGISTRY-} variable is resolved at run time; on my development machine, where no registry is set, it defaults to an empty string, so the image name resolves against the local Docker engine. The build mapping then points to the Dockerfile (walked through in both Part 1 and Part 2 of this series) that contains the instructions for building that image when needed. Because I didn’t provide a value for the DB_PW environment variable directly in the docker-compose file, docker-compose allows me to pass a value in from the shell where I’m running the container or from another source, such as a Docker .env file. I used an .env file in Part 2 to store the key-value pair of DB_PW and my password, eiluj.
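For example, either of these approaches gets a DB_PW value to docker-compose (a sketch; the .env file is the route I took in Part 2):

# Option 1: export the variable in the shell before composing;
# docker-compose passes it through to the container
export DB_PW=eiluj
docker-compose up

# Option 2: store it in a .env file next to docker-compose.yml,
# which docker-compose reads automatically
echo "DB_PW=eiluj" > .env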

So now I’m going to tell docker-compose that I also want it to spin up a SQL Server container. The SQL Server for Linux image is an official image in the Microsoft Container Registry (MCR), though it’s also listed on Docker Hub for discoverability. When Docker encounters the reference, it will first look in the local registry (on the dev machine where I’m working) and, if the image isn’t found there, pull it from the MCR. See Figure 1 for an example of these changes.

Figure 1 The Docker-Compose File with Changes to Include an mssql/server Container

version: '3.4'
services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"

But the new service I added, which I named db, does more than just point to the mssql/server image. Let’s, as they say, unpack the changes. Keep in mind that there’s another modification coming after I work through this step.

The first change is within the dataapidocker service—the original one that describes the container for the API. I’ve added a mapping called depends_on, which contains what YAML refers to as a sequence item, named db. That means that before running the dataapidocker container, Docker will check the docker-compose file for another service named db and will need to use its details to instantiate the container defined by that service.

The db service begins by pointing to the SQL Server for Linux image using its official Docker name. This image requires that you pass in two environment variables when running a container—SA_PASSWORD and ACCEPT_EULA—so the service description also contains that information. And, finally, you need to specify the port that the server will be available on: 1433:1433. The first value refers to the host’s port and the second to the port inside the container. Exposing the server through the host’s default port 1433 makes it easy to access the database from the host computer. I’ll show you how that works after I’ve gotten this project up and running.
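Thanks to that 1433:1433 mapping, once the containers are running you can reach the server from the host with any SQL Server client. For instance, with sqlcmd installed on the host (a sketch, using the sa password from my .env file):

# From the host, the containerized SQL Server answers on localhost,1433
sqlcmd -S localhost,1433 -U sa -P eiluj -Q "SELECT name FROM sys.databases"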

When it’s time to run this docker-compose outside Visual Studio, I’ll also need to expose ports from the dataapidocker service. The Visual Studio tooling created a second docker-compose file, docker-compose.override.yml, that Visual Studio uses during development. In that file are a few additional mappings, including the ports mapping (ports: - 80) for the dataapidocker service. So for now I’ll let the tooling take care of allowing me to browse to the Web site when debugging in Visual Studio.
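I won’t reproduce the override file in full, but the portion relevant here looks something like this (a sketch; the exact contents depend on your tooling version):

version: '3.4'
services:
  dataapidocker:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"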

Defining a Separate Volume for Persistent Data

There’s still more work to be done in docker-compose, however. With the existing description, any databases and data will be created inside the same container that’s running the SQL Server. This is probably not a problem for testing if a clean database is desired for each test. It’s better practice, however, to persist the data separately. There are a few ways to do this, but I’ll be using what’s called a data volume. Take a look at my blog post at bit.ly/2pZ7dDb, where I go into detail about data volumes and demonstrate how they persist even if you stop or remove the container running the SQL Server.

You can leave instructions in the docker-compose file to specify a volume for persisting the data separately from the SQL Server container, as I’ve done in Figure 2. A volume isn’t the same as a container, so it isn’t another service. Instead, you create a new key called volumes that’s a sibling to services. Within the key, you provide your own name for the volume (I’ve called mine mssql-server-julie-data). There’s no value associated with this key. Naming a volume this way allows you to reuse it with other containers if you need to. You can read more about volumes in the Docker reference for docker-compose at docs.docker.com/compose/compose-file or, for more detail, check out Elton Stoneman’s Pluralsight course on stateful data with Docker (pluralsight.pxf.io/yoLYv).

Figure 2 Defining a Data Volume in Docker-Compose

services:
  dataapidocker: [etc]
  db:
    image: mcr.microsoft.com/mssql/server
    volumes:
      - mssql-server-julie-data:/var/opt/mssql/data
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
volumes:
  mssql-server-julie-data: {}

Notice that the db mapping also has a new volumes mapping. It contains a sequence item in which I’ve mapped the named volume to the path inside the container where SQL Server stores its data and log files.

Setting Up the Connection String for the Containerized SQL Server

As a reminder, in the previous version of the app, I left a connection string to SQL Server LocalDB in appsettings.Development.json for times when I want to test the API running on Kestrel or the local IIS server. The connection string to the Azure SQL database, named ConnectionStrings:MagsConnectionMssql, is stored in an environment variable in the API’s Dockerfile, just above the section that defines the build image. Here are the first few lines of that Dockerfile. The connection string has placeholders for the user id and password. The user id is still in the Dockerfile only because I was focusing on hiding the password and haven’t yet removed it:

FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
ENV ConnectionStrings:MagsConnectionMssql="Server=[Azure SQL endpoint];Initial Catalog=DP0419Mags;Persist Security Info=False;User ID=ENVID;Password=ENVPW;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
ENV DB_UserId="lerman"
FROM microsoft/dotnet:2.2-sdk AS build

This connection string can easily be moved out to docker-compose.yml and controlled by external means, as well. The API’s Startup.ConfigureServices method overwrites the user and password placeholders with the values of provided environment variables, including the password DB_PW that I stored in the .env file whose contents look like this:

DB_PW=eiluj

The .env file and its DB_PW value will be read by docker-compose and passed into the container for the Dockerfile to read and make available to the app. In the end, I’m removing both the connection string and DB_UserId variables from Dockerfile.

But that connection string was for Azure SQL. I now need to change its value to point to the SQL Server container:

ENV ConnectionStrings:MagsConnectionMssql "Server=db;Database=DP0419Mags;User=sa;Password=ENVPW;"

I’m using the same variable name because that’s what Startup.ConfigureServices is expecting. What’s interesting to me is that the container runtime will understand the name of the db service defined in docker-compose as the server name; docker-compose places the services on a shared network where each service name resolves as a DNS hostname. So, in the connection string, I can specify the server as db. The default user that the mssql/server image uses to set up the server is sa, so I’ve coded that directly into the connection string, which is why I no longer need an ENV variable for the user id. However, I’m still using a placeholder, ENVPW, for the password value.
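If I do move the connection string out of the Dockerfile and into docker-compose.yml, as suggested earlier, the environment mapping might look something like this (a sketch, keeping the same variable name and ENVPW placeholder):

services:
  dataapidocker:
    environment:
      - DB_PW
      - ConnectionStrings:MagsConnectionMssql=Server=db;Database=DP0419Mags;User=sa;Password=ENVPW;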

As with the sample in the previous column, the placeholder will ultimately be replaced by the value of the DB_PW environment variable. Here’s what the code now looks like that reads the connection string and the DB_PW variable, then updates the connection string before passing it to the SQL Server provider:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
  // Read the connection string, which still contains the ENVPW placeholder
  var config = new StringBuilder(
    Configuration["ConnectionStrings:MagsConnectionMssql"]);
  // Swap in the real password from the DB_PW environment variable
  string conn = config.Replace("ENVPW", Configuration["DB_PW"])
                      .ToString();
  services.AddDbContext<MagContext>(options => options.UseSqlServer(conn));
}

Those are all the needed changes! Now let’s debug and see what happens.

Debugging with the Database Container

Be sure that the docker-compose project is the startup project in the solution and that the debug button is set to Docker Compose, then hit F5 to debug the API.

If this is the first time you’re running the app this way and you’ve never pulled the mssql/server image, that will be the first thing to happen. The SQL Server image isn’t small—it’s 1.45GB. So, the first time you pull it, be patient. Note that it requires 2GB of RAM, so be sure to assign enough RAM to your Docker engine (in the Docker settings) to run it. Even so, this is still faster than installing SQL Server on your machine, and it only uses resources already assigned to Docker, not additional resources on your host machine. And once the image is there, it’s breezy to spin up lots of instances. Another benefit is that any updates to the base image get pulled as small layers, not the entire thing again. Note that the microsoft/dotnet SDK and ASP.NET Core runtime images are already on my system, thanks to the work I did in the previous columns.

If you watch the build output, you’ll see that the app creates the data volume prior to handling the mssql/server container—in my case, “handling” means first pulling the image from the Microsoft Container Registry because it’s new to my system. This order of operations occurs because the server is dependent on the volume, so the volume gets taken care of first. Once that’s done, it builds the API’s image if needed and then spins up the containers. The steps are clearly relayed in the build output, thanks to the tooling.

As with the previous versions of this app, the results of the GET method of the Magazines controller are displayed in a browser window.

Exploring the Containers Created by docker-compose

Now let’s see what’s going on with Docker. I’ll do that at the command line, although the Docker extension for Visual Studio Code is another cool option for achieving this.
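Here, in one place, are the three commands I’ll use, with comments noting their equivalents:

docker images      # long form: docker image ls
docker ps          # long form: docker container ls
docker volume ls   # lists data volumes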

First, I’ll list the images with the command docker images. If you’re new to using the Docker CLI, you may prefer the more explicit command docker image ls. As Figure 3 shows, the mssql image is now on my machine, and a new version of my dataapidocker image was created, as well.

Figure 3 Inspecting the Images

Next, I’ll check out the containers with docker ps, the shortened equivalent of docker container ls. This output is wide so I’m showing the left half of the screen above the right half in Figure 4.

Figure 4 The Running Containers

And, finally, to see the volume that was created, which is now tied to the db container, I can use the docker volume ls command, as shown in Figure 5.

Figure 5 The Docker Volume Command

Notice that the names of both containers and the data volume are prefixed with the same text, generated by the Visual Studio debug process in combination with the tooling.

Remember that I’ve exposed the database from the container on the host’s port 1433 so I can use tools on my development machine to look at the server that’s inside the db container, along with the databases in the attached volume. I’ll use Azure Data Studio to do this; Figure 6 shows how I defined the connection.

Figure 6 Setting Up a Connection to My Containerized Database
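Based on the port mapping and credentials established earlier, the connection amounts to these values:

Server:   localhost,1433
User:     sa
Password: eiluj  (the DB_PW value)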

With the connection made, I can then interact with the server and my new database. Figure 7 shows the database explorer on the left, along with a query and results on the right.

Figure 7 Interacting with the Containerized Database in Azure Data Studio

Your App’s Containers Are Ready for Action

Having your apps encapsulated in Docker images makes it so easy to deploy the apps and all of their dependencies as containers. But for container newbies and veterans alike, I think that having the tools to develop apps so that they’re already associated with images and orchestration makes leveraging containers so much more straightforward. And in this series I’ve focused on development and local debugging only. Following Part 2, I published a blog post about deploying the single-image solution to Azure for testing at bit.ly/2CR40x3. That solution stores its data in Azure SQL Database and I run the test container using Azure Container Instances. I also wrote a blog post about publishing the dataapidocker image from this article to Docker Hub and then hosting the full solution in a Linux virtual machine on Azure, which is another great way to test your orchestrated containers. You can find that blog post at bit.ly/2VBaN8M.


Julie Lerman is a Microsoft Regional Director, Microsoft MVP, software team coach and consultant who lives in the hills of Vermont. You can find her presenting on data access and other topics at user groups and conferences around the world. She blogs at thedatafarm.com/blog and is the author of “Programming Entity Framework,” as well as a Code First and a DbContext edition, all from O’Reilly Media. Follow her on Twitter: @julielerman and see her Pluralsight courses at bit.ly/PS-Julie.

Thanks to the following technical experts for reviewing this article: Elton Stoneman (Docker), Travis Wright (Microsoft)

