Docker
Docker is a utility to pack, ship and run any application as a lightweight container.
Installation
To pull Docker images and run Docker containers, you need the Docker Engine. The Docker Engine includes a daemon to manage the containers, as well as the docker CLI frontend. Install the docker package or, for the development version, the docker-gitAUR package. Next, start and enable docker.service and verify operation:
# docker info
Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks (see solutions [1] or [2]).
Next, verify that you can run containers. The following command downloads the latest Arch Linux image and uses it to run a Hello World program within a container:
# docker run -it --rm archlinux bash -c "echo hello world"
If you want to be able to run the docker CLI command as a non-root user, add your user to the docker user group, re-login, and restart docker.service.
Note that members of the docker group are effectively root equivalent, because they can use the docker run --privileged command to start containers with root privileges. For more information see [3] and [4].
Docker Compose
Docker Compose is an alternate CLI frontend for the Docker Engine, which specifies properties of containers using a docker-compose.yml YAML file rather than, for example, a script with docker run options. This is useful for setting up recurring services that are used often and/or have complex configurations. To use it, install docker-compose.
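As an illustrative sketch (the service name, image and port mapping are hypothetical), a docker-compose.yml might look like this:

```yaml
# Hypothetical example: a single web service based on the nginx image
services:
  web:
    image: nginx:alpine        # the image tag is an example
    ports:
      - "8080:80"              # host:container port mapping
    restart: unless-stopped
```

The containers described in the file can then be started with docker-compose up -d and stopped again with docker-compose down.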
Docker Desktop
Docker Desktop is a proprietary desktop application that runs the Docker Engine inside a Linux virtual machine. Additional features such as a Kubernetes cluster and a vulnerability scanner are included. This application is useful for software development teams who develop Docker containers using macOS and Windows. The Linux port of the application is relatively new, and complements Docker's CLI frontends [5]. Packages for Arch are provided directly by Docker; see the manual for more information.
Usage
Docker consists of multiple parts:
- The Docker daemon (sometimes also called the Docker Engine), which is a process which runs as docker.service. It serves the Docker API and manages Docker containers.
- The docker CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
- Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.
Typically, users use Docker by running docker CLI commands, which in turn request the Docker daemon to perform actions which in turn result in management of Docker containers. Understanding the relationship between the client (docker), server (docker.service) and containers is important for successfully administering Docker.
Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.
Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the docker CLI command. See the Docker API developer documentation for more information.
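For example, assuming the daemon is listening on its default Unix socket, the API can be queried directly with curl (the version endpoint is part of the documented Docker Engine API):

```shell
# Query the Docker Engine API directly over the default Unix socket
# (requires a running docker daemon and read access to the socket)
curl --unix-socket /var/run/docker.sock http://localhost/version
```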
See the Docker Getting Started guide for more usage documentation.
Configuration
The Docker daemon can be configured either through a configuration file at /etc/docker/daemon.json or by adding command line flags to the docker.service systemd unit. According to the Docker official documentation, the configuration file approach is preferred. If you wish to use the command line flags instead, use systemd drop-in files to override the ExecStart directive in docker.service.
For more information about options in daemon.json, see the dockerd documentation.
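For instance, a daemon.json combining a couple of common options from the dockerd documentation might look like the following (the values are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This caps the size of each container's JSON log file at 10 MiB and keeps at most three rotated files per container.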
Storage driver
The storage driver controls how images and containers are stored and managed on your Docker host. The default overlay2 driver has good performance and is a good choice for all modern Linux kernels and filesystems. There are a few legacy drivers such as devicemapper and aufs which were intended for compatibility with older Linux kernels, but these have no advantages over overlay2 on Arch Linux.
Users of btrfs or ZFS may use the btrfs or zfs drivers, each of which takes advantage of the unique features of these filesystems. See the btrfs driver and zfs driver documentation for more information and step-by-step instructions.
Daemon socket
By default, the Docker daemon serves the Docker API using a Unix socket at /var/run/docker.sock. This is an appropriate option for most use cases.
It is possible to configure the daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers. This can be useful for allowing docker commands on a host machine to access the Docker daemon on a Linux virtual machine, such as an Arch virtual machine on a Windows or macOS system.
Note that the default docker.service file sets the -H flag by default, and Docker will not start if an option is present both in the flags and in the /etc/docker/daemon.json file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following, which adds a TCP socket on port 4243:
/etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
Reload the systemd daemon and restart docker.service to apply changes.
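With the drop-in above in place, you can check that the TCP socket responds by querying the /_ping endpoint of the Engine API (this assumes docker.service has been restarted with the new flags):

```shell
# Check the Docker API over the newly added TCP socket
curl http://localhost:4243/_ping
```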
HTTP Proxies
There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.
Docker daemon proxy configuration
See Docker documentation on configuring a systemd drop-in unit to configure HTTP proxies.
Docker container proxy configuration
See Docker documentation on configuring proxies for information on how to automatically configure proxies for all containers created using the docker CLI.
Configuring DNS
See Docker's DNS documentation for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.
Most DNS resolvers hosted on 127.0.0.0/8 are not supported due to conflicts between the container and host network namespaces. Such resolvers are removed from the container's /etc/resolv.conf. If this would result in an empty /etc/resolv.conf, Google DNS is used instead.
Additionally, a special case is handled if 127.0.0.53 is the only configured nameserver. In this case, Docker assumes the resolver is systemd-resolved and uses the upstream DNS resolvers from /run/systemd/resolve/resolv.conf.
If you are using a service such as dnsmasq to provide a local resolver, consider adding a virtual interface with a link-local IP address in the 169.254.0.0/16 block for dnsmasq to bind to instead of 127.0.0.1 to avoid the network namespace conflict.
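As a sketch (the 169.254.53.53 address is an arbitrary choice for illustration), once such an address has been added to a virtual interface, dnsmasq can be bound to it and the Docker daemon pointed at it via the documented dns option:

```
/etc/dnsmasq.conf
listen-address=169.254.53.53

/etc/docker/daemon.json
{
  "dns": ["169.254.53.53"]
}
```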
Images location
By default, Docker images are located at /var/lib/docker. They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to /mnt/docker.
First, stop docker.service, which will also stop all currently running containers and unmount any running images. You may then move the images from /var/lib/docker to the target destination, e.g. cp -a /var/lib/docker /mnt/docker (the -a flag preserves ownership and permissions).
Configure data-root in /etc/docker/daemon.json:
/etc/docker/daemon.json
{
  "data-root": "/mnt/docker"
}
Restart docker.service to apply changes.
Insecure registries
If you decide to use a self-signed certificate for your private registry, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at myregistry.example.com:8443, configure insecure-registries in the /etc/docker/daemon.json file:
/etc/docker/daemon.json
{
  "insecure-registries": [
    "myregistry.example.com:8443"
  ]
}
Restart docker.service to apply changes.
IPv6
In order to enable IPv6 support in Docker, you will need to do a few things. See [6] and [7] for details.
Firstly, enable the ipv6 setting in /etc/docker/daemon.json and set a specific IPv6 subnet. In this case, we will use the private fd00::/80 subnet. Make sure to use a subnet of at least 80 bits, as this allows a container's IPv6 address to end with the container's MAC address, which mitigates NDP neighbor cache invalidation issues.
/etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
Restart docker.service to apply changes.
Finally, to let containers access the host network, you need to resolve routing issues arising from the usage of a private IPv6 subnet. Enable NAT for the IPv6 subnet so that containers can actually reach external networks:
# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE
Now Docker should be properly IPv6 enabled. To test it, you can run:
# docker run curlimages/curl curl -v -6 archlinux.org
If you use firewalld, you can add the rule like this:
# firewall-cmd --zone=public --add-rich-rule='rule family="ipv6" destination not address="fd00::1/80" source address="fd00::/80" masquerade'
If you use ufw, you need to first enable IPv6 forwarding following Uncomplicated Firewall#Forward policy. Next, edit /etc/ufw/sysctl.conf and uncomment the following lines:
/etc/ufw/sysctl.conf
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
Then you can add the iptables rule:
# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE
It should be noted that, for Docker containers created with docker-compose, you may need to set enable_ipv6: true in the networks section for the corresponding network. Additionally, you may need to configure the IPv6 subnet. See [8] for details.
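For example (the network name and subnet are illustrative), the relevant section of a compose file might look like:

```yaml
# Illustrative docker-compose network with IPv6 enabled
networks:
  app_net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "fd00:0:0:1::/64"
```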
User namespace isolation
By default, processes in Docker containers run within the same user namespace as the main dockerd daemon, i.e. containers are not isolated by the user_namespaces(7) feature. This allows the process within the container to access configured resources on the host according to Users and groups#Permissions and ownership. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was published and patched in February 2019.)
The impact of such a vulnerability can be reduced by enabling user namespace isolation. This runs each container in a separate user namespace and maps the UIDs and GIDs inside that user namespace to a different (typically unprivileged) UID/GID range on the host.
- The main dockerd daemon still runs as root on the host. Running Docker in rootless mode is a different feature.
- Processes in the container are started as the user defined in the USER directive in the Dockerfile used to build the image of the container.
- All containers are mapped into the same UID/GID range. This preserves the ability to share volumes between containers.
- Enabling user namespace isolation has several limitations. Also, Kubernetes currently does not work with this feature.
- Enabling user namespace isolation effectively masks existing image and container layers, as well as other Docker objects in /var/lib/docker/, because Docker needs to adjust the ownership of these resources. The upstream documentation recommends enabling this feature on a new Docker installation rather than an existing one.
Configure userns-remap in /etc/docker/daemon.json. default is a special value that will automatically create a user and group named dockremap for use with remapping.
/etc/docker/daemon.json
{
  "userns-remap": "default"
}
Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 65536 UIDs and GIDs, starting at 165536, to the dockremap user and group.
/etc/subuid
dockremap:165536:65536
/etc/subgid
dockremap:165536:65536
Restart docker.service to apply changes.
After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled on specific containers by passing the --userns=host flag to the docker command. See [9] for details.
Docker rootless
Docker rootless relies on the unprivileged user namespaces feature (CONFIG_USER_NS_UNPRIVILEGED), which has some serious security implications; see Security#Sandboxing applications for details.
Install the docker-rootless-extrasAUR or docker-rootless-extras-binAUR package to run docker in rootless mode (that is, as a regular user instead of as root).
Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group.
/etc/subuid
your_username:165536:65536
/etc/subgid
your_username:165536:65536
Enable the docker.socket user unit: this will result in docker being started using systemd's socket activation.
Finally set docker socket environment variable:
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
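The last two steps can be sketched as follows (the use of ~/.bash_profile is an assumption; use your shell's startup file):

```shell
# Enable the user unit (socket activation starts the daemon on demand)
systemctl --user enable --now docker.socket

# Persist the DOCKER_HOST variable for future sessions
echo 'export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock' >> ~/.bash_profile
```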
Enable native overlay diff engine
By default, Docker cannot use the native overlay diff engine on Arch Linux, which makes building Docker images slow. If you frequently build images, configure the native diff engine as described in [10]:
/etc/modprobe.d/disable-overlay-redirect-dir.conf
options overlay metacopy=off redirect_dir=off
Then stop docker.service and reload the overlay module as follows:
# modprobe -r overlay
# modprobe overlay
You can then start docker.service again.
To verify, run docker info and check that Native Overlay Diff is true.
Images
Arch Linux
The following command pulls the archlinux x86_64 image. This is a stripped down version of Arch core without network, etc.
# docker pull archlinux
See also README.md.
For a full Arch base, clone the repository from above and build your own image.
$ git clone https://gitlab.archlinux.org/archlinux/archlinux-docker.git
Make sure that the devtools, fakechroot and fakeroot packages are installed.
To build the base image:
$ make image-base
Alpine Linux
Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:
# docker pull alpine
Alpine Linux uses the musl libc implementation instead of the glibc libc implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented here.
Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [11], [12] and [13] for examples.
CentOS
The following command pulls the latest centos image:
# docker pull centos
See the Docker Hub page for a full list of available tags for each CentOS release.
Debian
The following command pulls the latest debian image:
# docker pull debian
See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.
Distroless
Google maintains distroless images which are minimal images without OS components such as package managers or shells, resulting in very small images for packaging software.
See the GitHub README for a list of images and instructions on their use with various programming languages.
Run GPU accelerated Docker containers with NVIDIA GPUs
With NVIDIA Container Toolkit (recommended)
Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.
Install the nvidia-container-toolkitAUR package. Next, restart docker. You can now run containers that make use of NVIDIA GPUs using the --gpus option:
# docker run --gpus all nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi
Specify how many GPUs are enabled inside a container:
# docker run --gpus 2 nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi
Specify which GPUs to use:
# docker run --gpus '"device=1,2"' nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi
or
# docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi
Specify a capability (graphics, compute, ...) for the container (though this is rarely if ever used this way):
# docker run --gpus all,capabilities=utility nvidia/cuda:11.3.0-runtime-ubuntu20.04 nvidia-smi
For more information see README.md and Wiki.
With NVIDIA Container Runtime
Install the nvidia-container-runtimeAUR package. Next, register the NVIDIA runtime by editing /etc/docker/daemon.json:
/etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
and then restart docker.
The runtime can also be registered via a command line option to dockerd:
# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
Afterwards GPU accelerated containers can be started with
# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
or (requires Docker version 19.03 or higher)
# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
See also README.md.
With nvidia-docker (deprecated)
nvidia-docker is a wrapper around NVIDIA Container Runtime which registers the NVIDIA runtime by default and provides the nvidia-docker command.
To use nvidia-docker, install the nvidia-dockerAUR package and then restart docker. Containers with NVIDIA GPU support can then be run using any of the following methods:
# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
# nvidia-docker run nvidia/cuda:9.0-base nvidia-smi
or (requires Docker version 19.03 or higher)
# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
Arch Linux image with CUDA
You can use the following Dockerfile to build a custom Arch Linux image with CUDA. It uses the Dockerfile frontend syntax 1.2 to cache pacman packages on the host. The DOCKER_BUILDKIT=1 environment variable must be set on the client before building the Docker image.
Dockerfile
# syntax = docker/dockerfile:1.2
FROM archlinux

# install packages
RUN --mount=type=cache,sharing=locked,target=/var/cache/pacman \
    pacman -Syu --noconfirm --needed base base-devel cuda

# configure nvidia container runtime
# https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
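Assuming the Dockerfile above is in the current directory, the image can then be built with BuildKit enabled on the client (the image tag archlinux-cuda is arbitrary):

```shell
# Build the CUDA-enabled Arch image; DOCKER_BUILDKIT=1 enables BuildKit
DOCKER_BUILDKIT=1 docker build -t archlinux-cuda .
```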
Useful tips
To grab the IP address of a running container:
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id>
172.17.0.37
For each running container, the name and corresponding IP address can be listed for use in /etc/hosts:
#!/usr/bin/env sh
for ID in $(docker ps -q | awk '{print $1}'); do
    IP=$(docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" "$ID")
    NAME=$(docker ps | grep "$ID" | awk '{print $NF}')
    printf "%s %s\n" "$IP" "$NAME"
done
Using buildx for cross-compiling
Starting from Docker version 19.03 [14], the Docker package includes the buildx CLI plugin, which makes use of the new BuildKit building toolkit. The buildx interface supports building multi-platform images, including architectures other than that of the host.
QEMU is required to cross-compile images. To setup the static build of QEMU within Docker, see the usage information for the multiarch/qemu-user-static image. Otherwise, to setup QEMU on the host system for use with Docker, see QEMU#Chrooting into arm/arm64 environment from x86_64. In either case, your system will be configured for user-mode emulation of the guest architecture.
$ docker buildx ls
NAME/NODE  DRIVER/ENDPOINT  STATUS   PLATFORMS
default *  docker
  default  default          running  linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/s390x, linux/arm/v7, linux/arm/v6
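For example, to build an image for a foreign architecture (the image name and target platform are illustrative, and QEMU must be set up as described above):

```shell
# Cross-build an arm64 image on an x86_64 host
docker buildx build --platform linux/arm64 -t example/image:arm64 .
```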
Remove Docker and images
In case you want to remove Docker entirely you can do this by following the steps below:
Check for running containers:
# docker ps
List all containers on the host (including stopped ones) for deletion:
# docker ps -a
Stop a running container:
# docker stop <CONTAINER ID>
Kill still-running containers:
# docker kill <CONTAINER ID>
Delete containers listed by ID:
# docker rm <CONTAINER ID>
List all Docker images:
# docker images
Delete images by ID:
# docker rmi <IMAGE ID>
Delete all images, containers, volumes, and networks that are not associated with a container (dangling):
# docker system prune
To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:
# docker system prune -a
Delete all Docker data (purge directory):
# rm -R /var/lib/docker
Troubleshooting
docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
Docker attempts to enable IP forwarding globally, but by default systemd-networkd overrides the global sysctl setting for each defined network profile. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
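For example, a network profile with forwarding enabled might look like this (the file name and interface name are illustrative):

```
/etc/systemd/network/20-wired.network
[Match]
Name=enp1s0

[Network]
DHCP=yes
IPForward=yes
```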
When systemd-networkd tries to manage the network interfaces created by Docker, e.g. when you configured Name=* in the Match section, this can lead to connectivity issues. Try disabling management of those interfaces, i.e. networkctl list should report unmanaged in the SETUP column for all networks created by Docker.
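One way to achieve this is a .network file that matches Docker's interfaces and marks them unmanaged (the file name and interface patterns are illustrative):

```
/etc/systemd/network/docker-unmanaged.network
[Match]
Name=docker0 veth*

[Link]
Unmanaged=yes
```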
- You may need to restart docker.service each time you restart systemd-networkd.service or iptables.service.
- Also be aware that nftables may block docker connections by default. Use nft list ruleset to check for blocking rules. nft flush chain inet filter forward removes all forwarding rules temporarily. Edit /etc/nftables.conf to make changes permanent. Remember to restart nftables.service to reload rules from the configuration file. See [15] for details about nftables support in Docker.
Default number of allowed processes/threads too low
If you run into error messages like
# e.g. Java
java.lang.OutOfMemoryError: unable to create new native thread
# e.g. C, bash, ...
fork failed: Resource temporarily unavailable
then you might need to adjust the number of processes allowed by systemd. The default is 500 (see system.conf), which is pretty small for running several docker containers. Edit docker.service (e.g. via a drop-in file) to include the following snippet:
[Service]
TasksMax=infinity
Error initializing graphdriver: devmapper
If systemctl fails to start docker and provides an error:
Error starting daemon: error initializing graphdriver: devmapper: Device docker-8:2-915035-pool is not a thin pool
Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.
Failed to create some/path/to/file: No space left on device
If you are getting an error message like this:
ERROR: Failed to create some/path/to/file: No space left on device
when building or running a Docker image, even though you do have enough disk space available, make sure:
- Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
- If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing the overlay2 Docker storage driver.
- XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quota for the root file system, the mount option must be passed to the initramfs as the kernel parameter rootflags=. Subsequently, it should not be listed among mount options in /etc/fstab for the root (/) filesystem.
Docker-machine fails to create virtual machines using the virtualbox driver
In case docker-machine fails to create the VMs using the virtualbox driver, with the following error:
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
Simply reload the VirtualBox kernel modules via the CLI with vboxreload.
.
Starting Docker breaks KVM bridged networking
The issue is that Docker's scripts add some iptables rules to block forwarding on interfaces other than its own. This is a known issue.
Adjust the solutions below to replace br0 with your own bridge name.
Quickest fix (but turns off all Docker's iptables self-added adjustments, which you may not want):
/etc/docker/daemon.json
{ "iptables": false }
If there is already a network bridge configured for KVM, this may be fixable by telling docker about it. See [17], where the docker configuration is modified as follows:
/etc/docker/daemon.json
{ "bridge": "br0" }
If the above does not work, or you prefer to solve the issue through iptables directly, or through a manager like UFW, add this:
# iptables -I FORWARD -i br0 -o br0 -j ACCEPT
Even more detailed solutions are here.
Image pulls from Docker Hub are rate limited
Beginning on November 1st 2020, rate limiting is enabled for downloads from Docker Hub by anonymous and free accounts. See the rate limit documentation for more information.
Unauthenticated rate limits are tracked by source IP. Authenticated rate limits are tracked by account.
If you need to exceed the rate limits, you can either sign up for a paid plan or mirror the images you need to a different image registry. You can host your own registry or use a cloud hosted registry such as Amazon ECR, Google Container Registry, Azure Container Registry or Quay Container Registry.
To mirror an image, use the pull, tag and push subcommands of the Docker CLI. For example, to mirror the 1.19.3 tag of the Nginx image to a registry hosted at cr.example.com:
$ docker pull nginx:1.19.3
$ docker tag nginx:1.19.3 cr.example.com/nginx:1.19.3
$ docker push cr.example.com/nginx:1.19.3
You can then pull or run the image from the mirror:
$ docker pull cr.example.com/nginx:1.19.3
$ docker run cr.example.com/nginx:1.19.3
iptables (legacy): unknown option "--dport"
If you see this error when running a container, install iptables-nft instead of iptables (legacy) and reboot [18].