Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019
This article explains container jobs in Azure Pipelines.
By default, Azure Pipelines jobs run directly on the host machines where the agent is installed. Hosted agent jobs are convenient, require little initial setup and infrastructure to maintain, and are well-suited for basic projects.
If you want more control over task context, you can define and run jobs in containers. Containers are a lightweight abstraction over the host operating system that provides isolation from the host. When you run jobs in containers, you can select the exact versions of operating systems, tools, and dependencies that your build requires.
Linux and Windows agents can run pipeline jobs directly on the host or in containers. Container jobs aren't available on macOS.
For a container job, the agent first fetches and starts the container. Then each step of the job runs inside the container.
If you need fine-grained control at the individual build step level, step targets let you choose a container or host for each step.
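For example, a step can opt out of the job's container by specifying a target. The following sketch is illustrative (the container alias and job name are placeholders): the first script runs inside the container, which is the job default, and the second runs directly on the host agent.

```yaml
resources:
  containers:
  - container: builder        # illustrative alias
    image: ubuntu:18.04

jobs:
- job: MixedTargets
  pool:
    vmImage: 'ubuntu-latest'
  container: builder
  steps:
  - script: echo "runs inside the builder container"   # job default target
  - script: echo "runs directly on the host agent"
    target: host
```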
The windows-* and ubuntu-* agents support running containers. The macos-* agents don't support running containers.

Linux-based containers have the following requirements. For workarounds, see Nonglibc-based containers.

- Bash installed
- glibc-based
- Able to run Node.js, which the agent provides
- Doesn't define an ENTRYPOINT
- USER with access to groupadd and other privileged commands without using sudo
Note
Node.js must be pre-installed for Linux containers on Windows hosts.
Some stripped-down containers available on Docker Hub, especially containers based on Alpine Linux, don't satisfy these requirements. Containers with an ENTRYPOINT might not work, because Azure Pipelines docker create and docker exec expect that the container is always up and running.
The following examples define a Windows or Linux container for a single job.
The following simple example defines a Linux container:
```yaml
pool:
  vmImage: 'ubuntu-latest'

container: ubuntu:18.04

steps:
- script: printenv
```
The preceding example tells the system to fetch the ubuntu image tagged 18.04 from Docker Hub and then start the container. The printenv command runs inside the ubuntu:18.04 container.
You can use containers to run the same step in multiple jobs. The following example runs the same step in multiple versions of Ubuntu Linux. You don't have to mention the jobs keyword because only a single job is defined.
```yaml
pool:
  vmImage: 'ubuntu-latest'

strategy:
  matrix:
    ubuntu16:
      containerImage: ubuntu:16.04
    ubuntu18:
      containerImage: ubuntu:18.04
    ubuntu20:
      containerImage: ubuntu:20.04

container: $[ variables['containerImage'] ]

steps:
- script: printenv
```
A container job uses the underlying host agent's Docker configuration file for image registry authorization, and that file is signed out at the end of Docker registry container initialization. Because another job running in parallel might have already signed out the shared Docker configuration file, registry image pulls for subsequent container jobs can be denied with an authorization error.
To avoid this issue, set a DOCKER_CONFIG environment variable that's specific to each agent pool running on the hosted agent. Export DOCKER_CONFIG in each agent pool's runsvc.sh script as follows:

```shell
export DOCKER_CONFIG=./.docker
```
You can specify options to control container startup, as in the following example:
```yaml
container:
  image: ubuntu:18.04
  options: --hostname container-test --ip 192.168.0.1

steps:
- script: echo hello
```
Running docker create --help gives you the list of options that you can pass to the Docker invocation. Not all of these options are guaranteed to work with Azure DevOps. Check first to see whether you can use a container property to accomplish the same goal.
For more information, see the docker create command reference and the resources.containers.container definition in the Azure DevOps YAML schema reference.
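For instance, several common startup concerns have container property equivalents in the YAML schema, so you might not need options at all. The following sketch sets an environment variable, maps a port, and mounts a volume through container properties; the specific variable name and paths are illustrative, not part of the original example.

```yaml
container:
  image: ubuntu:18.04
  env:
    MY_SETTING: example-value     # illustrative variable name
  ports:
  - 8080:80                       # host:container port mapping
  volumes:
  - /example/src:/example/dst     # host:container path mapping (illustrative paths)

steps:
- script: echo hello
```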
The following example defines the containers in the resources section, and then references them by their assigned aliases. The jobs keyword is explicitly listed for clarity.
```yaml
resources:
  containers:
  - container: u16
    image: ubuntu:16.04
  - container: u18
    image: ubuntu:18.04
  - container: u20
    image: ubuntu:20.04

jobs:
- job: RunInContainer
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    matrix:
      ubuntu16:
        containerResource: u16
      ubuntu18:
        containerResource: u18
      ubuntu20:
        containerResource: u20
  container: $[ variables['containerResource'] ]
  steps:
  - script: printenv
```
You can host container images on registries other than the public Docker Hub. To host an image on Azure Container Registry or another private container registry, including a private Docker Hub registry, add a service connection to access the registry. Then you can reference the endpoint in the container definition.
Private Docker Hub connection:
```yaml
container:
  image: registry:ubuntu1804
  endpoint: private_dockerhub_connection
```
Azure Container Registry connection:
```yaml
container:
  image: myprivate.azurecr.io/windowsservercore:1803
  endpoint: my_acr_connection
```
Note
Azure Pipelines can't set up a service connection for Amazon Elastic Container Registry (ECR), because Amazon ECR requires other client tools to convert AWS credentials into something Docker can use to authenticate.
The Azure Pipelines agent supplies a copy of Node.js, which is required to run tasks and scripts. To find out the version of Node.js for a hosted agent, see Microsoft-hosted agents.
This version of Node.js is compiled against the C runtime used in the hosted cloud, typically glibc. Some Linux variants use other C runtimes. For instance, Alpine Linux uses musl.
If you want to use a nonglibc-based container, you need to:

- Supply your own copy of Node.js.
- Add a label to your image telling the agent where to find the Node.js binary.
- Provide stand-ins for other commands Azure Pipelines relies on, such as bash, sudo, which, and groupadd.

If you use a nonglibc-based container, you're responsible for adding a Node binary to your container. Node.js 18 is a safe choice. Start from the node:18-alpine image.
The agent reads the container label "com.azure.dev.pipelines.agent.handler.node.path". If this label exists, it must be the path to the Node.js binary.
For example, in an image based on node:18-alpine, add the following line to your Dockerfile:
```dockerfile
LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"
```
Azure Pipelines assumes a Bash-based system with common administrative packages installed. Alpine Linux in particular doesn't come with several of the packages needed. Install bash, sudo, and shadow to cover the basic needs.
```dockerfile
RUN apk add bash sudo shadow
```
If you depend on any in-box or Marketplace tasks, also supply the binaries they require.
```dockerfile
FROM node:18-alpine

RUN apk add --no-cache --virtual .pipeline-deps readline linux-pam \
  && apk add bash sudo shadow \
  && apk del .pipeline-deps

LABEL "com.azure.dev.pipelines.agent.handler.node.path"="/usr/local/bin/node"

CMD [ "node" ]
```
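Once you build and push the resulting image, you can use it like any other container image in a pipeline. In this sketch, the registry, image name, tag, and service connection name are hypothetical placeholders for your own values:

```yaml
container:
  image: myregistry.azurecr.io/pipelines-alpine:latest   # hypothetical image
  endpoint: my_acr_connection                            # hypothetical service connection

steps:
- script: echo "running inside the custom Alpine-based container"
```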