Docker security baseline

Caution

This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Consider your use and plan accordingly. For more information, see the CentOS End Of Life guidance.

This article details the configuration settings for Docker hosts as applicable in the following implementations:

  • [Preview]: Linux machines should meet requirements for the Azure security baseline for Docker hosts
  • Vulnerabilities in security configuration on your machines should be remediated in Azure Security Center

For more information, see Understand the guest configuration feature of Azure Policy and Overview of the Azure Security Benchmark (V2).

General security controls

Each control is listed with its name and CCE ID, followed by its details and remediation check.

Docker inventory Information
(0.0)
Description: None. Remediation check: None.
Ensure a separate partition for containers has been created
(1.01)
Description: Docker depends on /var/lib/docker as the default directory where all Docker related files, including the images, are stored. This directory might fill up fast and soon Docker and the host could become unusable. So, it's advisable to create a separate partition (logical volume) for storing Docker files. For new installations, create a separate partition for /var/lib/docker mount point. For systems that were previously installed, use the Logical Volume Manager (LVM) to create partitions.
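As a rough illustration, the following sketch creates a dedicated LVM logical volume for /var/lib/docker; the device /dev/sdb, the volume group name vg_docker, and the 50 GB size are placeholders for your own environment.

# Sketch only: device, volume group name, and size are placeholders.
pvcreate /dev/sdb
vgcreate vg_docker /dev/sdb
lvcreate --name lv_docker --size 50G vg_docker
mkfs.xfs /dev/vg_docker/lv_docker
systemctl stop docker                       # stop Docker before moving its data directory
echo '/dev/vg_docker/lv_docker /var/lib/docker xfs defaults 0 0' >> /etc/fstab
mkdir -p /var/lib/docker
mount /var/lib/docker
systemctl start docker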
Ensure docker version is up-to-date
(1.03)
Description: Using an up-to-date Docker version helps keep your host secure. Follow the Docker documentation to upgrade your version.
Ensure auditing is configured for the docker daemon
(1.05)
Description: Apart from auditing your regular Linux file system and system calls, audit Docker daemon as well. Docker daemon runs with root privileges. It's thus necessary to audit its activities and usage. Add the line -w /usr/bin/docker -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /var/lib/docker
(1.06)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /var/lib/docker is one such directory. It holds all the information about containers. It must be audited. Add the line -w /var/lib/docker -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /etc/docker
(1.07)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker is one such directory. It holds various certificates and keys used for TLS communication between Docker daemon and Docker client. It must be audited. Add the line -w /etc/docker -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - docker.service
(1.08)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.service is one such file. The docker.service file might be present if the daemon parameters have been changed by an administrator. It holds various parameters for Docker daemon. It must be audited, if applicable. Find out the 'docker.service' file location by running: systemctl show -p FragmentPath docker.service and add the line -w {docker.service file location} -k docker into the /etc/audit/audit.rules file where {docker.service file location} is the file path you have found earlier. Restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - docker.socket
(1.09)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.socket is one such file. It holds various parameters for Docker daemon socket. It must be audited, if applicable. Find out the 'docker.socket' file location by running: systemctl show -p FragmentPath docker.socket and add the line -w {docker.socket file location} -k docker into the /etc/audit/audit.rules file where {docker.socket file location} is the file path you have found earlier. Restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /etc/default/docker
(1.10)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/default/docker is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. Add the line -w /etc/default/docker -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /etc/docker/daemon.json
(1.11)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker/daemon.json is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. Add the line -w /etc/docker/daemon.json -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /usr/bin/docker-containerd
(1.12)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-containerd is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. Add the line -w /usr/bin/docker-containerd -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
Ensure auditing is configured for Docker files and directories - /usr/bin/docker-runc
(1.13)
Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-runc is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. Add the line -w /usr/bin/docker-runc -k docker into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: service auditd restart
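As a consolidated sketch of the audit rules from entries 1.05 through 1.13, the lines below can be appended to /etc/audit/audit.rules. The docker.service and docker.socket paths shown are typical defaults and should be replaced with the paths reported by systemctl show -p FragmentPath; any path that doesn't exist on your host can be omitted.

# Sketch only: unit file paths are typical defaults; confirm them with
# "systemctl show -p FragmentPath docker.service" and "... docker.socket".
cat <<'EOF' >> /etc/audit/audit.rules
-w /usr/bin/docker -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /usr/lib/systemd/system/docker.service -k docker
-w /usr/lib/systemd/system/docker.socket -k docker
-w /etc/default/docker -k docker
-w /etc/docker/daemon.json -k docker
-w /usr/bin/docker-containerd -k docker
-w /usr/bin/docker-runc -k docker
EOF
service auditd restart      # reload the audit daemon so the new rules take effect
auditctl -l | grep docker   # verify the docker watch rules are loaded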
Ensure network traffic is restricted between containers on the default bridge
(2.01)
Description: Inter-container communication should be disabled on the default network bridge. If any communication between containers on the same host is desired, it needs to be explicitly defined using container linking or, alternatively, custom networks have to be defined. Run the Docker daemon with --icc=false as an argument, or set the 'icc' setting to false in the daemon.json file. Alternatively, you can follow the Docker documentation and create a custom network and only join containers that need to communicate to that custom network. The --icc parameter only applies to the default docker bridge; if custom networks are used, the approach of segmenting networks should be adopted instead.
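A minimal sketch of the daemon.json approach; merge the setting into any existing /etc/docker/daemon.json rather than overwriting the file, then restart the daemon.

cat <<'EOF' > /etc/docker/daemon.json
{
  "icc": false
}
EOF
systemctl restart docker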
Ensure the logging level is set to 'info'.
(2.02)
Description: Setting an appropriate log level configures the Docker daemon to log events that you would want to review later. A base log level of info and above captures all logs except debug logs. Unless required, you shouldn't run the Docker daemon at the debug log level. Run the Docker daemon as below: dockerd --log-level info
Ensure Docker is allowed to make changes to iptables
(2.03)
Description: If you choose so, Docker will never make changes to your system iptables rules. If it's allowed to do so, the Docker server automatically makes the needed changes to iptables based on the networking options you choose for your containers. It's recommended to let the Docker server make changes to iptables automatically to avoid networking misconfiguration that might hamper the communication between containers and to the outside world. Additionally, it saves you the hassle of updating iptables every time you run containers or modify networking options. Don't run the Docker daemon with the --iptables=false parameter. For example, don't start the Docker daemon as below: dockerd --iptables=false
Ensure insecure registries aren't used
(2.04)
Description: You shouldn't be using any insecure registries in the production environment. Insecure registries can be tampered with, leading to possible compromise of your production system. Remove the --insecure-registry flag from the dockerd start command.
The 'aufs' storage driver shouldn't be used by the docker daemon
(2.05)
Description: The 'aufs' storage driver is the oldest storage driver. It's based on a Linux kernel patch set that is unlikely to be merged into the main Linux kernel. The aufs driver is also known to cause some serious kernel crashes. aufs has only legacy support from Docker. Most importantly, aufs isn't a supported driver in many Linux distributions using the latest Linux kernels. Replace the 'aufs' storage driver with a different storage driver; 'overlay2' is recommended.
Ensure TLS authentication for Docker daemon is configured
(2.06)
Description: By default, Docker daemon binds to a non-networked Unix socket and runs with root privileges. If you change the default docker daemon binding to a TCP port or any other Unix socket, anyone with access to that port or socket can have full access to Docker daemon and in turn to the host system. Hence, you shouldn't bind the Docker daemon to another IP/port or a Unix socket. If you must expose the Docker daemon via a network socket, configure TLS authentication for the daemon and Docker Swarm APIs (if using). This would restrict the connections to your Docker daemon over the network to a limited number of clients who could successfully authenticate over TLS. Follow the steps mentioned in the Docker documentation or other references.
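A minimal daemon.json sketch, assuming the CA certificate, server certificate, and server key have already been created; the /etc/docker/certs paths are placeholders. On distributions whose systemd unit already passes a -H/--host option to dockerd, configure the listening sockets in one place only to avoid a conflict.

cat <<'EOF' > /etc/docker/daemon.json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
EOF
systemctl restart docker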
Ensure the default ulimit is configured appropriately
(2.07)
Description: If the ulimits aren't set properly, the desired resource control might not be achieved and the system might even become unusable. Run the Docker daemon and pass --default-ulimit as an argument with the respective ulimits as appropriate in your environment. Alternatively, you can also set a specific resource limitation on each container separately by using the --ulimit argument with the respective ulimits as appropriate in your environment.
Enable user namespace support
(2.08)
Description: The Linux kernel user namespace support in the Docker daemon provides additional security for the Docker host system. It allows a container to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. For example, the root user has the expected administrative privilege inside the container but can effectively be mapped to an unprivileged UID on the host system. Consult the Docker documentation for the various ways this can be configured depending upon your requirements. Your steps might also vary based on platform - for example, on Red Hat, sub-UID and sub-GID mapping creation does not work automatically, and you might have to create your own mapping. However, the high-level steps are as below: Step 1: Ensure that the files /etc/subuid and /etc/subgid exist: touch /etc/subuid /etc/subgid Step 2: Start the docker daemon with the --userns-remap flag: dockerd --userns-remap=default
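A minimal sketch of the two steps above using daemon.json instead of a command-line flag; on Red Hat based systems you may also need to populate /etc/subuid and /etc/subgid with explicit ranges before the remapping takes effect.

touch /etc/subuid /etc/subgid
cat <<'EOF' > /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
systemctl restart docker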
Ensure base device size isn't changed until needed
(2.10)
Description: Increasing the base device size allows all future images and containers to be of the new base device size; this may cause a denial of service if the file system ends up over-allocated or full. Remove the --storage-opt dm.basesize flag from the dockerd start command until you need it.
Ensure that authorization for Docker client commands is enabled
(2.11)
Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to the Docker daemon. Third-party integrations of Docker (for example, Kubernetes, Cloud Foundry, OpenShift) may implement their own authorization models to require authorization with the Docker daemon outside of Docker's native authorization plugin. Step 1: Install/create an authorization plugin. Step 2: Configure the authorization policy as desired. Step 3: Start the docker daemon as below: dockerd --authorization-plugin=
Ensure centralized and remote logging is configured
(2.12)
Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker supports various such logging drivers. Use the one that suits your environment best. Step 1: Set up the desired log driver by following its documentation. Step 2: Start the docker daemon with that logging driver. For example, dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx
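A minimal sketch of the same configuration expressed in daemon.json; the syslog server address 192.0.2.10:514 is a placeholder for your own log collector.

cat <<'EOF' > /etc/docker/daemon.json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://192.0.2.10:514"
  }
}
EOF
systemctl restart docker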
Ensure live restore is Enabled
(2.14)
Description: One of the important security triads is availability. Setting the --live-restore flag in the docker daemon ensures that container execution isn't interrupted when the docker daemon isn't available. This also means that it's now easier to update and patch the docker daemon without execution downtime. Run the Docker daemon and pass --live-restore as an argument. For example: dockerd --live-restore
Ensure Userland Proxy is Disabled
(2.15)
Description: Docker engine provides two mechanisms for forwarding ports from the host to containers, hairpin NAT, and a userland proxy. In most circumstances, the hairpin NAT mode is preferred as it improves performance and makes use of native Linux iptables functionality instead of an additional component. Where hairpin NAT is available, the userland proxy should be disabled on startup to reduce the attack surface of the installation. Run the Docker daemon as below: dockerd --userland-proxy=false
Ensure experimental features are avoided in production
(2.17)
Description: Experimental is now a runtime docker daemon flag instead of a separate build. Passing --experimental as a runtime flag to the docker daemon activates experimental features. Experimental is now considered a stable release, but it includes a couple of features that might not have been tested and don't have guaranteed API stability. Don't pass --experimental as a runtime parameter to the docker daemon.
Ensure containers are restricted from acquiring new privileges.
(2.18)
Description: A process can set the no_new_priv bit in the kernel. It persists across fork, clone and execve. The no_new_priv bit ensures that the process or its children processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. Setting this at the daemon level ensures that by default all new containers are restricted from acquiring new privileges. Run the Docker daemon as below: dockerd --no-new-privileges
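The daemon-level settings from entries 2.14, 2.15, and 2.18 can also be expressed in daemon.json; a minimal sketch, to be merged into any existing configuration:

cat <<'EOF' > /etc/docker/daemon.json
{
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true
}
EOF
systemctl restart docker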
Ensure that docker.service file ownership is set to root:root.
(3.01)
Description: docker.service file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by root to maintain the integrity of the file. Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to root. For example, chown root:root /usr/lib/systemd/system/docker.service
Ensure that docker.service file permissions are set to 644 or more restrictive
(3.02)
Description: docker.service file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than root to maintain the integrity of the file. Step 1: Find out the file location: systemctl show -p FragmentPath docker.service Step 2: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to 644. For example, chmod 644 /usr/lib/systemd/system/docker.service
Ensure that docker.socket file ownership is set to root:root.
(3.03)
Description: docker.socket file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by root to maintain the integrity of the file. Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to root. For example, chown root:root /usr/lib/systemd/system/docker.socket
Ensure that docker.socket file permissions are set to 644 or more restrictive
(3.04)
Description: docker.socket file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than root to maintain the integrity of the file. Step 1: Find out the file location: systemctl show -p FragmentPath docker.socket Step 2: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to 644. For example, chmod 644 /usr/lib/systemd/system/docker.socket
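A minimal sketch covering entries 3.01 through 3.04: it locates each unit file, prints the current ownership and mode, and then applies root:root and 644.

for unit in docker.service docker.socket; do
  path=$(systemctl show -p FragmentPath "$unit" | cut -d= -f2)
  [ -z "$path" ] && continue           # unit file not present: recommendation not applicable
  stat -c '%n %U:%G %a' "$path"        # show current owner, group, and permissions
  chown root:root "$path"
  chmod 644 "$path"
done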
Ensure that /etc/docker directory ownership is set to root:root.
(3.05)
Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should be owned and group-owned by root to maintain the integrity of the directory. chown root:root /etc/docker This would set the ownership and group-ownership for the directory to root.
Ensure that /etc/docker directory permissions are set to 755 or more restrictive
(3.06)
Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should only be writable by root to maintain the integrity of the directory. chmod 755 /etc/docker This would set the permissions for the directory to 755.
Ensure that registry certificate file ownership is set to root:root
(3.07)
Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must be owned and group-owned by root to maintain the integrity of the certificates. chown root:root /etc/docker/certs.d//* This would set the ownership and group-ownership for the registry certificate files to root.
Ensure that registry certificate file permissions are set to 444 or more restrictive
(3.08)
Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must have permissions of 444 to maintain the integrity of the certificates. chmod 444 /etc/docker/certs.d//* This would set the permissions for registry certificate files to 444.
Ensure that TLS CA certificate file ownership is set to root:root
(3.09)
Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate Docker server based on given CA certificate. Hence, it must be owned and group-owned by root to maintain the integrity of the CA certificate. chown root:root This would set the ownership and group-ownership for the TLS CA certificate file to root.
Ensure that TLS CA certificate file permissions are set to 444 or more restrictive
(3.10)
Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate Docker server based on given CA certificate. Hence, it must have permissions of 444 to maintain the integrity of the CA certificate. chmod 444 This would set the file permissions of the TLS CA file to 444.
Ensure that Docker server certificate file ownership is set to root:root
(3.11)
Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate Docker server based on the given server certificate. Hence, it must be owned and group-owned by root to maintain the integrity of the certificate. chown root:root This would set the ownership and group-ownership for the Docker server certificate file to root.
Ensure that Docker server certificate file permissions are set to 444 or more restrictive
(3.12)
Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate Docker server based on the given server certificate. Hence, it must have permissions of 444 to maintain the integrity of the certificate. chmod 444 This would set the file permissions of the Docker server file to 444.
Ensure that Docker server certificate key file ownership is set to root:root
(3.13)
Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must be owned and group-owned by root to maintain the integrity of the Docker server certificate. chown root:root This would set the ownership and group-ownership for the Docker server certificate key file to root.
Ensure that Docker server certificate key file permissions are set to 400
(3.14)
Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must have permissions of 400 to maintain the integrity of the Docker server certificate. chmod 400 This would set the Docker server certificate key file permissions to 400.
Ensure that Docker socket file ownership is set to root:docker
(3.15)
Description: Docker daemon runs as root. The default Unix socket hence must be owned by root. If any other user or process owns this socket, then it might be possible for that non-privileged user or process to interact with Docker daemon. Also, such a non-privileged user or process might interact with containers. This is neither secure nor desired behavior. Additionally, the Docker installer creates a Unix group called docker. You can add users to this group, and then those users would be able to read and write to default Docker Unix socket. The membership to the docker group is tightly controlled by the system administrator. If any other group owns this socket, then it might be possible for members of that group to interact with Docker daemon. Also, such a group might not be as tightly controlled as the docker group. This is neither secure nor desired behavior. Hence, the default Docker Unix socket file must be owned by root and group-owned by docker to maintain the integrity of the socket file. chown root:docker /var/run/docker.sock This would set the ownership to root and group-ownership to docker for default Docker socket file.
Ensure that Docker socket file permissions are set to 660 or more restrictive
(3.16)
Description: Only root and members of the docker group should be allowed to read and write to the default Docker Unix socket. Hence, the Docker socket file must have permissions of 660 or more restrictive. chmod 660 /var/run/docker.sock This would set the file permissions of the Docker socket file to 660.
Ensure that daemon.json file ownership is set to root:root
(3.17)
Description: daemon.json file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by root to maintain the integrity of the file. chown root:root /etc/docker/daemon.json This would set the ownership and group-ownership for the file to root.
Ensure that daemon.json file permissions are set to 644 or more restrictive
(3.18)
Description: daemon.json file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by root to maintain the integrity of the file. chmod 644 /etc/docker/daemon.json This would set the file permissions for this file to 644.
Ensure that /etc/default/docker file ownership is set to root:root
(3.19)
Description: /etc/default/docker file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by root to maintain the integrity of the file. chown root:root /etc/default/docker This would set the ownership and group-ownership for the file to root.
Ensure that /etc/default/docker file permissions are set to 644 or more restrictive
(3.20)
Description: /etc/default/docker file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by root to maintain the integrity of the file. chmod 644 /etc/default/docker This would set the file permissions for this file to 644.
Ensure a user for the container has been created
(4.01)
Description: It's a good practice to run the container as a non-root user, if possible. Though user namespace mapping is now available, if a user is already defined in the container image, the container is run as that user by default and specific user namespace remapping isn't required. Ensure that the Dockerfile for the container image contains: USER {username or ID} where the username or ID refers to a user that can be found in the container base image. If there's no specific user created in the container base image, then add a useradd command to add the specific user before the USER instruction.
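A minimal Dockerfile sketch, assuming the base image has no suitable non-root user; the user name appuser and the image tag are illustrative only.

cat <<'EOF' > Dockerfile
FROM centos:latest
RUN useradd --system --create-home appuser
USER appuser
CMD ["/bin/bash"]
EOF
docker build --tag myapp:nonroot .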
Ensure HEALTHCHECK instructions have been added to the container image
(4.06)
Description: One of the important security triads is availability. Adding HEALTHCHECK instruction to your container image ensures that the docker engine periodically checks the running container instances against that instruction to ensure that the instances are still working. Based on the reported health status, the docker engine could then exit non-working containers and instantiate new ones. Follow Docker documentation and rebuild your container image with HEALTHCHECK instruction.
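A minimal sketch for an nginx-based image; the probe command, interval, and image tag are illustrative, and curl is installed explicitly in case the base image doesn't ship it.

cat <<'EOF' > Dockerfile
FROM nginx:latest
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl --fail http://localhost/ || exit 1
EOF
docker build --tag mynginx:healthcheck .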
Ensure either SELinux or AppArmor is enabled as appropriate
(5.01-2)
Description: AppArmor protects the Linux OS and applications from various threats by enforcing a security policy, also known as an AppArmor profile. You can create your own AppArmor profile for containers or use Docker's default AppArmor profile. This enforces security policies on the containers as defined in the profile. SELinux provides a Mandatory Access Control (MAC) system that greatly augments the default Discretionary Access Control (DAC) model. You can thus add an extra layer of safety by enabling SELinux on your Linux host, if applicable. After enabling the relevant Mandatory Access Control plugin for your distribution, run containers with the appropriate security option: docker run --interactive --tty --security-opt="apparmor:PROFILENAME" centos /bin/bash for AppArmor, or docker run --interactive --tty --security-opt label=level:TopSecret centos /bin/bash for SELinux.
Ensure Linux Kernel Capabilities are restricted within containers
(5.03)
Description: Docker supports the addition and removal of capabilities, allowing the use of a non-default profile. This may make Docker more secure through capability removal, or less secure through the addition of capabilities. It's thus recommended to remove all capabilities except those explicitly required for your container process. For example, capabilities such as the below are usually not needed for a container process: NET_ADMIN SYS_ADMIN SYS_MODULE Execute the below command to add needed capabilities: $> docker run --cap-add={"Capability 1","Capability 2"} For example, docker run --interactive --tty --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash Execute the below command to drop unneeded capabilities: $> docker run --cap-drop={"Capability 1","Capability 2"} For example, docker run --interactive --tty --cap-drop={"SETUID","SETGID"} centos:latest /bin/bash Alternatively, you may choose to drop all capabilities and add only the needed ones: $> docker run --cap-drop=all --cap-add={"Capability 1","Capability 2"} For example, docker run --interactive --tty --cap-drop=all --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash
Ensure privileged containers aren't used
(5.04)
Description: The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker. Don't run container with the --privileged flag. For example, don't start a container as below: docker run --interactive --tty --privileged centos /bin/bash
Ensure sensitive host system directories aren't mounted on containers
(5.05)
Description: If sensitive directories are mounted in read-write mode, it would be possible to make changes to files within those sensitive directories. The changes might have security implications or introduce unwarranted changes that could put the Docker host in a compromised state. Don't mount sensitive host directories on containers, especially in read-write mode.
Ensure the host's network namespace isn't shared
(5.09)
Description: Sharing the host's network namespace with a container is potentially dangerous. It allows the container process to open low-numbered ports like any other root process. It also allows the container to access network services like D-bus on the Docker host. Thus, a container process can potentially do unexpected things such as shutting down the Docker host. You shouldn't use this option. Don't pass the --net=host option when starting the container.
Ensure memory usage for container is limited
(5.10)
Description: By default, a container can use all of the memory on the host. You can use the memory limit mechanism to prevent a denial of service arising from one container consuming all of the host's resources such that other containers on the same host can't perform their intended functions. Having no limit on memory can lead to issues where one container can easily make the whole system unstable and as a result unusable. Run the container with only as much memory as required. Always run the container using the --memory argument. For example, you could run a container as below: docker run --interactive --tty --memory 256m centos /bin/bash In the above example, the container is started with a memory limit of 256 MB. Note: The output of the below command returns values in scientific notation if memory limits are in place. docker inspect --format='{{.Config.Memory}}' 7c5a2d4c7fe0 For example, if the memory limit is set to 256 MB for the above container instance, the output of the above command would be 2.68435456e+08 and NOT 256m. You should convert this value using a scientific calculator or programmatic methods.
Ensure the container's root filesystem is mounted as read only
(5.12)
Description: Enabling this option forces containers at runtime to explicitly define their data writing strategy to persist or not persist their data. This also reduces security attack vectors since the container instance's filesystem can't be tampered with or written to unless it has explicit read-write permissions on its filesystem folders and directories. Add the --read-only flag at a container's runtime to enforce the container's root filesystem being mounted as read only: docker run --read-only Enabling the --read-only option at a container's runtime should be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime. Examples of explicit storage locations during a container's runtime include, but aren't limited to: 1. Use the --tmpfs option to mount a temporary file system for non-persistent data writes. docker run --interactive --tty --read-only --tmpfs "/run" --tmpfs "/tmp" centos /bin/bash 2. Enabling Docker rw mounts at a container's runtime to persist container data directly on the Docker host filesystem. docker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw centos /bin/bash 3. Utilizing Docker shared-storage volume plugins for Docker data volumes to persist container data. docker volume create -d convoy --opt o=size=20GB my-named-volume followed by docker run --interactive --tty --read-only -v my-named-volume:/run/app/data centos /bin/bash 4. Transmitting container data outside of Docker during the container's runtime so that the data persists elsewhere. Examples include hosted databases, network file shares, and APIs.
Ensure incoming container traffic is bound to a specific host interface
(5.13)
Description: If you have multiple network interfaces on your host machine, the container can accept connections on the exposed ports on any network interface. This might not be desired and may not be secure. Many times a particular interface is exposed externally and services such as intrusion detection, intrusion prevention, firewall, load balancing, etc. are run on those interfaces to screen incoming public traffic. Hence, you shouldn't accept incoming connections on any interface. You should only allow incoming connections from a particular external interface. Bind the container port to a specific host interface on the desired host port. For example, docker run --detach --publish 10.2.3.4:49153:80 nginx In the example above, container port 80 is bound to host port 49153 and accepts incoming connections only on the 10.2.3.4 external interface.
Ensure 'on-failure' container restart policy is set to '5' or lower
(5.14)
Description: If you indefinitely keep trying to start the container, it could possibly lead to a denial of service on the host. It could be an easy way to do a distributed denial of service attack, especially if you have many containers on the same host. Additionally, ignoring the exit status of the container and always attempting to restart the container leads to non-investigation of the root cause behind containers getting terminated. If a container gets terminated, you should investigate the reason behind it instead of just attempting to restart it indefinitely. Thus, it's recommended to use the on-failure restart policy and limit it to a maximum of 5 restart attempts. If a container should be restarted on its own, then, for example, you could start the container as below: docker run --detach --restart=on-failure:5 nginx
Ensure the host's process namespace isn't shared
(5.15)
Description: PID namespace provides separation of processes. The PID Namespace removes the view of the system processes, and allows process ID's to be reused including PID 1. If the host's PID namespace is shared with the container, it would basically allow processes within the container to see all of the processes on the host system. This breaks the benefit of process level isolation between the host and the containers. Someone having access to the container can eventually know all the processes running on the host system and can even kill the host system processes from within the container. This can be catastrophic. Hence, don't share the host's process namespace with the containers. Don't start a container with --pid=host argument. For example, don't start a container as below: docker run --interactive --tty --pid=host centos /bin/bash
Ensure the host's IPC namespace isn't shared
(5.16)
Description: IPC namespace provides separation of IPC between the host and containers. If the host's IPC namespace is shared with the container, it would basically allow processes within the container to see all of the IPC on the host system. This breaks the benefit of IPC level isolation between the host and the containers. Someone having access to the container can eventually manipulate the host IPC. This can be catastrophic. Hence, don't share the host's IPC namespace with the containers. Don't start a container with the --ipc=host argument. For example, don't start a container as below: docker run --interactive --tty --ipc=host centos /bin/bash
Ensure host devices aren't directly exposed to containers
(5.17)
Description: The --device option exposes host devices to the containers and consequently, the containers can directly access such host devices. You would not require the container to run in privileged mode to access and manipulate the host devices. By default, the container is able to read, write, and mknod these devices. Additionally, it's possible for containers to remove block devices from the host. Hence, don't expose host devices to containers directly. If you must expose a host device to a container, use the correct set of sharing permissions: r (read only), w (writable), m (mknod allowed). For example, don't start a container as below: docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rwm --device=/dev/temp_sda:/dev/temp_sda:rwm centos bash Instead, share the host device with correct permissions: docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rw --device=/dev/temp_sda:/dev/temp_sda:r centos bash
Ensure mount propagation mode isn't set to shared
(5.19)
Description: A shared mount is replicated at all mounts and the changes made at any mount point are propagated to all mounts. Mounting a volume in shared mode does not restrict any other container to mount and make changes to that volume. This might be catastrophic if the mounted volume is sensitive to changes. Don't set mount propagation mode to shared until needed. Don't mount volumes in shared mode propagation. For example, don't start container as below: docker run --volume=/hostPath:/containerPath:shared
Ensure the host's UTS namespace isn't shared
(5.20)
Description: Sharing the UTS namespace with the host provides full permission to the container to change the hostname of the host. This is insecure and shouldn't be allowed. Don't start a container with --uts=host argument. For example, don't start a container as below: docker run --rm --interactive --tty --uts=host rhel7.2
Ensure cgroup usage is confirmed
(5.24)
Description: System administrators typically define cgroups under which containers are supposed to run. Even if cgroups aren't explicitly defined by the system administrators, containers run under docker cgroup by default. At run-time, it's possible to attach to a different cgroup other than the one that was expected to be used. This usage should be monitored and confirmed. By attaching to a different cgroup than the one that is expected, excess permissions and resources might be granted to the container and thus, can prove to be unsafe. Don't use --cgroup-parent option in docker run command unless needed.
Ensure the container is restricted from acquiring additional privileges
(5.25)
Description: A process can set the no_new_priv bit in the kernel. It persists across fork, clone and execve. The no_new_priv bit ensures that the process or its children processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. For example, you should start your container as below: docker run --rm -it --security-opt=no-new-privileges ubuntu bash
Ensure container health is checked at runtime
(5.26)
Description: One of the important security triads is availability. If the container image you're using does not have a pre-defined HEALTHCHECK instruction, use the --health-cmd parameter to check container health at runtime. Based on the reported health status, you could take necessary actions. Run the container using --health-cmd and the other parameters. For example, docker run -d --health-cmd='stat /etc/passwd || exit 1' nginx
Ensure the PIDs cgroup limit is used
(5.28)
Description: Attackers could launch a fork bomb with a single command inside the container. This fork bomb can crash the entire system and would require a restart of the host to make the system functional again. The PIDs cgroup --pids-limit prevents this kind of attack by restricting the number of forks that can happen inside a container at a given time. Use the --pids-limit flag while launching the container with an appropriate value. For example, docker run -it --pids-limit 100 In the above example, the number of processes allowed to run at any given time is set to 100. After the limit of 100 concurrently running processes is reached, docker restricts any new process creation.
Ensure Docker's default bridge docker0 isn't used
(5.29)
Description: Docker connects virtual interfaces created in bridge mode to a common bridge called docker0. This default networking model is vulnerable to ARP spoofing and MAC flooding attacks since there's no filtering applied. Follow the Docker documentation and set up a user-defined network. Run all the containers in the defined network.
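A minimal sketch: create a user-defined bridge network and attach containers to it instead of the default docker0 bridge; the network and container names are illustrative.

docker network create --driver bridge app-net
docker run --detach --name web --network app-net nginx
docker run --detach --name cache --network app-net redis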
Ensure the host's user namespace isn't shared
(5.30)
Description: User namespaces ensure that a root process inside the container will be mapped to a non-root process outside the container. Sharing the user namespaces of the host with the container thus does not isolate users on the host with users on the containers. Don't share user namespaces between host and containers. For example, don't run a container as below: docker run --rm -it --userns=host ubuntu bash
Ensure the Docker socket isn't mounted inside any containers
(5.31)
Description: If the docker socket is mounted inside a container it would allow processes running within the container to execute docker commands which effectively allows for full control of the host. Ensure that no containers mount docker.sock as a volume.
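A minimal audit sketch that lists running containers whose bind mounts include the Docker socket; any output warrants investigation.

docker ps --quiet | xargs --no-run-if-empty docker inspect \
  --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep docker.sock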
Ensure swarm services are bound to a specific host interface
(7.03)
Description: When a swarm is initialized the default value for the --listen-addr flag is 0.0.0.0:2377 which means that the swarm services will listen on all interfaces on the host. If a host has multiple network interfaces this may be undesirable as it may expose the docker swarm services to networks which aren't involved in the operation of the swarm. By passing a specific IP address to the --listen-addr, a specific network interface can be specified limiting this exposure. Remediation of this requires re-initialization of the swarm specifying a specific interface for the --listen-addr parameter.
Ensure data exchanged between containers on different nodes on the overlay network is encrypted
(7.04)
Description: By default, data exchanged between containers on different nodes on the overlay network isn't encrypted. This could potentially expose traffic between the container nodes. Create overlay network with --opt encrypted flag.
Ensure swarm manager is run in auto-lock mode
(7.06)
Description: When Docker restarts, both the TLS key used to encrypt communication among swarm nodes and the key used to encrypt and decrypt Raft logs on disk are loaded into each manager node's memory. You should protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest. This protection can be enabled by initializing the swarm with the --autolock flag. With --autolock enabled, when Docker restarts, you must first unlock the swarm using a key encryption key generated by Docker when the swarm was initialized. If you're initializing a swarm, use the below command: docker swarm init --autolock If you want to set --autolock on an existing swarm manager node, use the below command: docker swarm update --autolock
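A minimal sketch combining entries 7.03, 7.04, and 7.06; the management address 192.0.2.10 is a placeholder for the interface dedicated to swarm traffic, and the overlay network name is illustrative.

docker swarm init --advertise-addr 192.0.2.10 --listen-addr 192.0.2.10:2377 --autolock
docker network create --driver overlay --opt encrypted app-overlay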

Note

Availability of specific Azure Policy guest configuration settings may vary in Azure Government and other national clouds.

Next steps

Additional articles about Azure Policy and guest configuration: