CRI-O
CRI-O is an OCI-based implementation of the Kubernetes Container Runtime Interface.
As such it is one of the container runtimes that can be used with a node of a Kubernetes cluster.
Installation
The package will set the system up to load the overlay and br_netfilter modules and set the following sysctl options:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
To use CRI-O without a reboot make sure to load the modules and configure the sysctl values accordingly.
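One way to do this manually, as root, is a sketch like the following (using the module and sysctl names listed above):

```shell
# Load the required kernel modules now, rather than waiting for a reboot
modprobe overlay
modprobe br_netfilter

# Apply the sysctl values to the running system
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
sysctl net.ipv4.ip_forward=1
```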
Configuration
CRI-O is configured via /etc/crio/crio.conf or via drop-in configuration files in /etc/crio/crio.conf.d/.
Network
Plugin Installation
CRI-O can make use of container networking as provided by cni-plugins, or plugins installed with in-cluster deployments such as weave, flannel, calico, etc.
cni-plugins installs its plugins to both /usr/lib/cni and /opt/cni/bin, but most other plugins (e.g. in-cluster deployments, kubelet-managed plugins) by default only install to /opt/cni/bin.
CRI-O is only configured to look for plugins in /usr/lib/cni, so any plugins in /opt/cni/bin are unavailable without some configuration changes.
This may manifest as a non-working network, with an error like the following in the CRI-O logs:
Error validating CNI config file /etc/cni/net.d/<plugin-config-file>.conf: [failed to find plugin "<plugin>" in path [/usr/lib/cni/]]
There are two ways to resolve this: either change the other systems to install their plugins to /usr/lib/cni, or configure CRI-O to also look in /opt/cni/bin.
The second solution can be achieved with a drop-in configuration file in /etc/crio/crio.conf.d/:
00-plugin-dir.conf
[crio.network]
plugin_dirs = [
        "/opt/cni/bin/",
]
As this is an array, you can also set both or any other directories here as possible plugin locations.
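For example, a drop-in that keeps both directories as possible plugin locations might look like this (a sketch; adjust the paths to your setup):

00-plugin-dir.conf
```
[crio.network]
plugin_dirs = [
        "/usr/lib/cni/",
        "/opt/cni/bin/",
]
```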
Plugin Configuration
Copy one of the examples from /usr/share/doc/cri-o/examples/cni/ to /etc/cni/net.d and modify it as needed.
The package installs the 10-crio-bridge.conf and 99-loopback.conf examples to /etc/cni/net.d by default (as 100-crio-bridge.conf and 199-crio-loopback.conf respectively). This may conflict with Kubernetes cluster network fabrics (weave, flannel, calico, etc.) and require manually deleting these files to resolve (e.g. #2411, #2885).
Storage
By default, CRI-O uses the overlay driver as its storage_driver for the container storage in /var/lib/containers/storage/. However, it can also be configured to use btrfs or ZFS natively by changing the driver in /etc/containers/storage.conf:
sed -i 's/driver = ""/driver = "btrfs"/' /etc/containers/storage.conf
Running
Start and enable the crio.service
systemd unit.
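For example, using systemctl as root:

```shell
# Start CRI-O immediately and enable it at boot
systemctl enable --now crio.service
```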
Testing
Use crio-status
like this:
# crio-status info
cgroup driver: systemd
storage driver: vfs
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
  0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
  0:0:4294967295
and:
# crio-status config ...
Now install the crictl package, and see e.g. https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ or https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md, or simply:
source <(crictl completion bash)
crictl pull index.docker.io/busybox
crictl pull quay.io/prometheus/busybox
crictl images
curl -O https://raw.githubusercontent.com/kubernetes-sigs/cri-tools/master/docs/examples/podsandbox-config.yaml
curl -O https://raw.githubusercontent.com/kubernetes-sigs/cri-tools/master/docs/examples/container-config.yaml
crictl run container-config.yaml podsandbox-config.yaml
crictl logs $(crictl ps --last 1 --output yaml | yq -r .containers[0].id)
crictl exec -it $(crictl ps --last 1 --output yaml | yq -r .containers[0].id) /bin/sh
crictl rm -af
crictl rmp -af
Note that Docker Hub is not hard-coded as the default registry, so specify the container registry explicitly. (See also https://github.com/kubernetes-sigs/cri-tools/pull/718.)
See also
- CRI-O on GitHub - The CRI-O repository on GitHub
- CRI-O Website - The official CRI-O website