This is the first in a series of posts on the orchestration, deployment and scaling of containerized applications in VM sandboxes using Kubernetes, Kata Containers and AWS Firecracker microVMs. We gathered these notes while installing and configuring the necessary components, and we thought they might be useful to the community, especially with regard to the major pain points in trying out recent open-source projects and technologies.
About Orchestration, the Edge, and Kata Containers
To manage and orchestrate containers in a cluster, the community uses Kubernetes (k8s), a powerful, open-source system for automating the deployment, scaling and management of containerized applications. To accommodate the vast majority of options and use-cases, k8s supports a number of container runtimes: the piece of software that sets up all the necessary components in the system to run the containerized application. For more information on container runtime support in k8s, visit the official documentation.
k8s at the edge
A stripped-down version of k8s is available through K3s, a Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. K3s is packaged as a single <40MB binary that reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster. K3s supports various architectures (amd64, ARMv7, ARMv8), with binaries and multiarch images available for all of them. K3s works great on anything from a Raspberry Pi to an AWS a1.4xlarge 32GiB server. K3s is also great for trying out k8s on a local machine without messing up the OS.
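For a quick single-node test cluster, the upstream install script is usually all you need (a minimal sketch; we use the same script later in this post, and its defaults may change over time):

$ curl -sfL https://get.k3s.io | sh -
$ sudo k3s kubectl get nodes

This installs and starts a k3s server (with the bundled kubectl) as a systemd service on the local machine.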
container sandboxing
Kata Containers enable containers to be seamlessly executed in Virtual Machines. Kata Containers are as light and fast as containers and integrate with the container management layers, while also delivering the security advantages of VMs. Kata Containers is the result of merging two existing open source projects: Intel Clear Containers and Hyper runV.
Kata Containers integrate with k8s easily; however, there are some (minor) pain points when adding more options to the mix. For instance, choosing AWS Firecracker as the VMM for the sandbox environment brings in a storage backend dependency: device mapper. In this post, we will go through the steps needed to set up Kata Containers with Firecracker, focusing on the device mapper setup for k8s and k3s.
Kata containers and containerd
First, let's start off by installing Kata Containers and adding the relevant handler to containerd. Following the docs is pretty straightforward, especially this one. To make sure everything is working as expected, you can try out a couple of the examples found here.
In short, the needed steps are:
install kata binaries
Download a release from: https://github.com/kata-containers/kata-containers/releases
(using v2.1.1, released in June 2021):
$ wget https://github.com/kata-containers/kata-containers/releases/download/2.1.1/kata-static-2.1.1-x86_64.tar.xz
Unpack the binary:
$ xzcat kata-static-2.1.1-x86_64.tar.xz | sudo tar -xvf - -C /
By default, kata is installed in /opt/kata. Check the installed version by running:
$ /opt/kata/bin/kata-runtime --version
It should output something like the following:
$ /opt/kata/bin/kata-runtime --version
kata-runtime : 2.1.1
   commit    : 0e2be438bdd6d213ac4a3d7d300a5757c4137799
   OCI specs : 1.0.1-dev
It is recommended to add symbolic links to /opt/kata/bin/kata-runtime and /opt/kata/bin/containerd-shim-kata-v2, so that containerd can reach these binaries from the default system PATH:
$ sudo ln -s /opt/kata/bin/kata-runtime /usr/local/bin
$ sudo ln -s /opt/kata/bin/containerd-shim-kata-v2 /usr/local/bin
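Before moving on, you can verify that the host is able to run Kata sandboxes at all (hardware virtualization enabled, access to /dev/kvm). A hedged sketch: in Kata 2.x the subcommand is check (it was kata-check in 1.x), the exact output depends on your host, and KATA_CONF_FILE can point it at a specific configuration such as the Firecracker one used later in this post:

$ /opt/kata/bin/kata-runtime check
$ KATA_CONF_FILE=/opt/kata/share/defaults/kata-containers/configuration-fc.toml /opt/kata/bin/kata-runtime check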
install containerd
To install containerd, you can grab a release from https://github.com/containerd/containerd/releases or use the package manager of your distro:
$ sudo apt-get install containerd
A fairly recent containerd version is recommended (e.g. we’ve only tested with containerd versions v1.3.9 and above).
Add the kata configuration to containerd's config.toml (/etc/containerd/config.toml):
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "kata"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
          runtime_type = "io.containerd.kata.v2"
If /etc/containerd/config.toml is not present, create it using the following command:
$ sudo containerd config default > /etc/containerd/config.toml
and add the above snippet to the relevant section.
test kata with containerd
Now, after we restart containerd:
sudo systemctl restart containerd
we should be able to launch an Ubuntu test container using kata containers and containerd:
$ sudo ctr image pull docker.io/library/ubuntu:latest
docker.io/library/ubuntu:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:1e48201ccc2ab83afc435394b3bf70af0fa0055215c1e26a5da9b50a1ae367c9: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:16ec32c2132b43494832a05f2b02f7a822479f8250c173d0ab27b3de78b2f058: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:1318b700e415001198d1bf66d260b07f67ca8a552b61b0da02b3832c778f221b: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 7.2 s total: 27.2 M (3.8 MiB/s)
unpacking linux/amd64 sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3...
done
$ ctr run --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/ubuntu:latest ubuntu-kata-test /bin/bash
root@clr-d25fa567d2f440df9eb4316de1699b51:/# uname -a
Linux clr-d25fa567d2f440df9eb4316de1699b51 5.10.25 #1 SMP Fri Apr 9 18:18:14 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Kata containers and AWS Firecracker
The above example uses the QEMU/KVM hypervisor. To use a different hypervisor (e.g. AWS Firecracker), we need to make sure we honor all the needed prerequisites, so that the VM sandbox is spawned correctly and is able to host the containerized application we want to run. Let's take a step back and see how kata containers fetch the container image and run the application in the sandboxed environment.
kata containers execution flow
As mentioned earlier, Kata Containers provide a way for containerized applications to run inside a VM sandbox. Additionally, kata containers manage the container execution via a runtime system on the host and an agent running in the sandbox. The containerized application is packaged in a container image, which is pulled outside the sandbox environment, along with its metadata description (the json file). These two components comprise the container bundle. In order to run the container inside the sandbox environment (that is, the VM), the container rootfs (the layer stack) must somehow be exposed from the host to the VM. Then, the kata agent, which runs inside the VM, creates a container from the exposed rootfs.
In the case of QEMU/KVM, the container rootfs is exposed using a virtiofs shared directory between the host and the guest. In the case of AWS Firecracker, however, virtiofs is not supported. So, the only option is to use a virtio block device.
devmapper
In order to expose the container inside an AWS Firecracker sandbox environment, the kata runtime system expects containerd to use the devmapper snapshotter. Essentially, the container rootfs is a device mapper snapshot, hot-plugged to AWS Firecracker as a virtio block device. The kata agent running in the VM finds the mount point inside the guest and issues the relevant commands to its container runtime library to create and spawn the container.
So, in order to glue all the above together, we need containerd configured with the devmapper snapshotter. The first step is to set up a device mapper thin-pool. On a local dev environment you can use loopback devices.
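The script below relies on a few userspace tools (dmsetup, losetup, blockdev, bc). If they are missing, install them first; a hedged example for Debian/Ubuntu (package names may differ on other distros, and losetup/blockdev ship with util-linux, which is usually preinstalled):

$ sudo apt-get install dmsetup bc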
A simple script from the devmapper snapshotter documentation is shown below:
#!/bin/bash
set -ex

DATA_DIR=/var/lib/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

mkdir -p ${DATA_DIR}

# Create data file
sudo touch "${DATA_DIR}/data"
sudo truncate -s 100G "${DATA_DIR}/data"

# Create metadata file
sudo touch "${DATA_DIR}/meta"
sudo truncate -s 40G "${DATA_DIR}/meta"

# Allocate loop devices
DATA_DEV=$(sudo losetup --find --show "${DATA_DIR}/data")
META_DEV=$(sudo losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(sudo blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<< "${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
sudo dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"

cat << EOF
#
# Add this to your config.toml configuration file and restart containerd daemon
#
[plugins]
  [plugins.devmapper]
    pool_name = "${POOL_NAME}"
    root_path = "${DATA_DIR}"
    base_image_size = "40GB"
EOF
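Note that the snippet printed at the end of the script uses the old (version 1) config layout. If your /etc/containerd/config.toml follows the version 2 format used in the CRI snippets above, the devmapper section is usually addressed by its full plugin ID instead; an equivalent sketch (adjust pool_name/root_path if you changed them):

[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "/var/lib/containerd/io.containerd.snapshotter.v1.devmapper"
  base_image_size = "40GB"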
This script needs to be run only once, while setting up the devmapper snapshotter for containerd. Afterwards, make sure that on each reboot the thin-pool is re-created from the same data directory; otherwise, all the fetched container images (and the containers you've created) will have to be re-initialized. A simple script that re-creates the thin-pool from the same data directory is shown below, followed by a sketch of a systemd unit that can run it at boot:
#!/bin/bash
set -ex

DATA_DIR=/var/lib/containerd/io.containerd.snapshotter.v1.devmapper
POOL_NAME=containerd-pool

# Allocate loop devices
DATA_DEV=$(sudo losetup --find --show "${DATA_DIR}/data")
META_DEV=$(sudo losetup --find --show "${DATA_DIR}/meta")

# Define thin-pool parameters.
# See https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt for details.
SECTOR_SIZE=512
DATA_SIZE="$(sudo blockdev --getsize64 -q ${DATA_DEV})"
LENGTH_IN_SECTORS=$(bc <<< "${DATA_SIZE}/${SECTOR_SIZE}")
DATA_BLOCK_SIZE=128
LOW_WATER_MARK=32768

# Create a thin-pool device
sudo dmsetup create "${POOL_NAME}" \
    --table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
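One way to automate this is a systemd oneshot unit ordered before containerd. This is a minimal sketch, assuming you saved the script above as /usr/local/bin/containerd-devmapper-reload.sh and made it executable (the unit and script names are ours, not from any upstream project):

# /etc/systemd/system/containerd-devmapper-reload.service
[Unit]
Description=Recreate containerd devmapper thin-pool from existing data dir
Before=containerd.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/containerd-devmapper-reload.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable containerd-devmapper-reload.service.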
After a containerd restart (systemctl restart containerd), we should be able to see the plugin registered and working correctly:
$ ctr plugins ls |grep devmapper
io.containerd.snapshotter.v1    devmapper    linux/amd64    ok
containerd handler setup
The last step to set up AWS Firecracker with kata and containerd is to add the relevant handler to containerd's config.toml. To use a separate handler and keep both QEMU/KVM and AWS Firecracker available as hypervisors for kata containers, we create a simple script that calls the kata containers runtime with a different config file. Add a file in your path (e.g. /usr/local/bin/containerd-shim-kata-fc-v2) with the following contents:
#!/bin/bash
KATA_CONF_FILE=/opt/kata/share/defaults/kata-containers/configuration-fc.toml /opt/kata/bin/containerd-shim-kata-v2 "$@"
make it executable:
$ chmod +x /usr/local/bin/containerd-shim-kata-fc-v2
and add the relevant section in containerd's config.toml file (/etc/containerd/config.toml):
1[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
2 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-fc]
3 runtime_type = "io.containerd.kata-fc.v2"
After a containerd restart (systemctl restart containerd), we should be able to launch an Ubuntu test container using kata containers and AWS Firecracker:
root@nuc8:~# ctr images pull --snapshotter devmapper docker.io/library/ubuntu:latest
docker.io/library/ubuntu:latest: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:1e48201ccc2ab83afc435394b3bf70af0fa0055215c1e26a5da9b50a1ae367c9: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:1318b700e415001198d1bf66d260b07f67ca8a552b61b0da02b3832c778f221b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:16ec32c2132b43494832a05f2b02f7a822479f8250c173d0ab27b3de78b2f058: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 7.9 s total: 27.2 M (3.4 MiB/s)
unpacking linux/amd64 sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3...
done
root@nuc8:~# ctr run --snapshotter devmapper --runtime io.containerd.run.kata-fc.v2 -t --rm docker.io/library/ubuntu:latest ubuntu-kata-fc-test uname -a
Linux clr-f083d4640978470da04f091181bb9e95 5.10.25 #1 SMP Fri Apr 9 18:18:14 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Configure k8s to use AWS Firecracker with kata containers
Following the above steps, integrating kata containers with k8s is a piece of cake ;-)
Two things are needed to create a pod with kata and AWS Firecracker:
- make devmapper the default containerd snapshotter
- add the kata containers runtime class to k8s
To configure devmapper as the default snapshotter in containerd, add the following lines to /etc/containerd/config.toml (under the same CRI plugin section used earlier, assuming the version 2 config format):

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "devmapper"
To add the kata containers runtime class to k8s, we need to create a simple YAML file (kata-fc-rc.yaml) containing the name of the kata-fc handler we configured earlier:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc
handler: kata-fc
and apply it to our k8s cluster:
$ kubectl apply -f kata-fc-rc.yaml
After a containerd restart (systemctl restart containerd), we should be able to create a pod that uses the kata-fc runtime class. Take for instance the following YAML file (nginx-kata-fc.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata-fc
spec:
  runtimeClassName: kata-fc
  containers:
  - name: nginx
    image: nginx
and apply it:
$ kubectl apply -f nginx-kata-fc.yaml
the output should be something like the following:
$ kubectl apply -f nginx-kata-fc.yaml
pod/nginx-kata-fc created
Inspecting the pod on the node where it was created should give us the following:
$ kubectl describe pod nginx-kata-fc
Name: nginx-kata-fc
[snipped]
Containers:
  nginx:
    Container ID: containerd://bbb6dc73c3a0c727dae81b4be0b93d853c9e2a7843e3f037b934bcb5aea89ece
    Image: nginx
    Image ID: docker.io/library/nginx@sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90
    Port: <none>
    Host Port: <none>
    State: Running
    Ready: True
    Restart Count: 0
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-psxw6 (ro)
Conditions:
  Type Status
  Initialized True
  Ready True
  ContainersReady True
  PodScheduled True
Volumes:
  kube-api-access-psxw6:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional: <nil>
    DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: katacontainers.io/kata-runtime=true
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age   From     Message
  ----    ------   ----  ----     -------
  Normal  Pulling  107s  kubelet  Pulling image "nginx"
  Normal  Pulled   87s   kubelet  Successfully pulled image "nginx" in 20.37180329s
  Normal  Created  87s   kubelet  Created container nginx
  Normal  Started  87s   kubelet  Started container nginx
Digging in a bit deeper:
$ ps -ef |grep firecracker
root 2705170 2705157 1 [snipped] ? 00:00:00 /opt/kata/bin/firecracker --api-sock /run/vc/firecracker/00fedc54a0bb0e55bd0513e0cf3c551f/root/run/firecracker.socket --config-file /run/vc/firecracker/00fedc54a0bb0e55bd0513e0cf3c551f/root/fcConfig.json
$ kubectl exec -it nginx-kata-fc -- /bin/bash
root@nginx-kata-fc:/# uname -a
Linux nginx-kata-fc 5.10.25 #1 SMP Fri Apr 9 18:18:14 UTC 2021 x86_64 GNU/Linux
root@nginx-kata-fc:/# cat /proc/cpuinfo |head -n 10
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 142
model name      : Intel(R) Xeon(R) Processor @ 3.00GHz
stepping        : 10
microcode       : 0x1
cpu MHz         : 3000.054
cache size      : 4096 KB
physical id     : 0
root@nginx-kata-fc:/# cat /proc/meminfo |head -n 10
MemTotal:        2043696 kB
MemFree:         1988208 kB
MemAvailable:    1997152 kB
Buffers:            2172 kB
Cached:            31224 kB
SwapCached:            0 kB
Active:            16336 kB
Inactive:          21264 kB
Active(anon):         40 kB
Inactive(anon):     4204 kB
Looking into the container ID via ctr, we can see that it is using the kata-fc handler:
$ ctr -n k8s.io c ls |grep bbb6dc73c3a0c727dae81b4be0b93d853c9e2a7843e3f037b934bcb5aea89ece
bbb6dc73c3a0c727dae81b4be0b93d853c9e2a7843e3f037b934bcb5aea89ece    docker.io/library/nginx:latest    io.containerd.kata-fc.v2
As a side note, containerd matches the handler notation with an executable like this:
io.containerd.kata-fc.v2 -> containerd-shim-kata-fc-v2
so the name of the script we created above (/usr/local/bin/containerd-shim-kata-fc-v2) must match the name of the handler in containerd's config.toml using the above notation.
Configure k3s to use AWS Firecracker with kata containers
k3s ships with a minimal version of containerd in which the devmapper snapshotter plugin is not included. As a result, in order to add AWS Firecracker support to k3s, we either need to configure k3s to use a different containerd binary, or build k3s-containerd from source, patch it and inject it into /var/lib/rancher/k3s/data/current/bin.
Use external containerd
Make sure you have configured containerd with the devmapper snapshotter and the CRI plugin correctly (see above). Then just install k3s, pointing it at the system's containerd:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--container-runtime-endpoint unix:///run/containerd/containerd.sock" sh -
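To double-check that k3s is indeed talking to the external containerd (rather than its bundled one), the runtime reported by the node is a quick hint; a hedged example:

$ k3s kubectl get nodes -o wide

The CONTAINER-RUNTIME column should show the version of the system containerd installed above.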
Build k3s’ containerd
To build k3s’ containerd we need to patch the source in order to enable the devmapper snapshotter. So:
(i) get the source:
$ git clone https://github.com/k3s-io/k3s
(ii) patch with the following file (devmapper_patch.txt):
diff --git a/pkg/containerd/builtins_linux.go b/pkg/containerd/builtins_linux.go
index 280b09ce..89c700eb 100644
--- a/pkg/containerd/builtins_linux.go
+++ b/pkg/containerd/builtins_linux.go
@@ -25,5 +25,6 @@ import (
 	_ "github.com/containerd/containerd/runtime/v2/runc/options"
 	_ "github.com/containerd/containerd/snapshots/native"
 	_ "github.com/containerd/containerd/snapshots/overlay"
+	_ "github.com/containerd/containerd/snapshots/devmapper"
 	_ "github.com/containerd/fuse-overlayfs-snapshotter/plugin"
 )
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 91554f14..054d04bf 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -307,6 +307,8 @@ github.com/containerd/containerd/services/snapshots
 github.com/containerd/containerd/services/tasks
 github.com/containerd/containerd/services/version
 github.com/containerd/containerd/snapshots
+github.com/containerd/containerd/snapshots/devmapper
+github.com/containerd/containerd/snapshots/devmapper/dmsetup
 github.com/containerd/containerd/snapshots/native
 github.com/containerd/containerd/snapshots/overlay
 github.com/containerd/containerd/snapshots/proxy
$ cd k3s
$ patch -p1 < ../devmapper_patch.txt
patching file pkg/containerd/builtins_linux.go
patching file vendor/modules.txt
and (iii) build:
$ sudo apt-get install btrfs-progs libbtrfs-dev
$ mkdir -p build/data && ./scripts/download && go generate
$ go get -d github.com/containerd/containerd/snapshots/devmapper
$ SKIP_VALIDATE=true make # because we have local changes
Hopefully, you’ll be presented with a k3s binary ;-)
install kata-containers on k3s
After a successful build & run of k3s, you should be able to list the available nodes for your local cluster:
$ k3s kubectl get nodes --show-labels
NAME                    STATUS   ROLES                  AGE   VERSION        LABELS
mycluster.localdomain   Ready    control-plane,master   11m   v1.21.3+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=mycluster.localdomain,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
Apart from the manual install we saw above for k8s, we can just use the well-documented process in the kata-deploy section of the kata-containers repo.
So, just run the following:
$ k3s kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
serviceaccount/kata-label-node created
clusterrole.rbac.authorization.k8s.io/node-labeler created
clusterrolebinding.rbac.authorization.k8s.io/kata-label-node-rb created
$ k3s kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
daemonset.apps/kata-deploy created
This will fetch the necessary binaries and install them in /opt/kata (via the daemonset). It will also add the relevant handlers to /etc/containerd/config.toml. However, in order to use these handlers we need to create the respective runtime classes. To add them, run the following:
$ k3s kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
runtimeclass.node.k8s.io/kata-qemu-virtiofs created
runtimeclass.node.k8s.io/kata-qemu created
runtimeclass.node.k8s.io/kata-clh created
runtimeclass.node.k8s.io/kata-fc created
Query the system to see if kata-deploy was successful:
$ k3s kubectl get nodes --show-labels
NAME                    STATUS   ROLES                  AGE   VERSION        LABELS
mycluster.localdomain   Ready    control-plane,master   62m   v1.21.3+k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=mycluster.localdomain,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s,katacontainers.io/kata-runtime=true
See the last label? It's there because the kata containers installation was successful! So we're ready to spawn a sandboxed container on k3s:
$ k3s kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
deployment.apps/php-apache-kata-fc created
service/php-apache-kata-fc created
Let's see what has been created:
$ k3s kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
php-apache-kata-fc-5ccb8df89-v8hqj   1/1     Running   0          31s
$ ps -ef |grep firecracker
root 128201 128187 3 15:21 ? 00:00:02 /opt/kata/bin/firecracker --api-sock /run/vc/firecracker/2050be93ed33df24537f6f8274e3c66c/root/run/firecracker.socket --config-file /run/vc/firecracker/2050be93ed33df24537f6f8274e3c66c/root/fcConfig.json
$ k3s kubectl describe pod php-apache-kata-fc-5ccb8df89-v8hqj
Name: php-apache-kata-fc-5ccb8df89-v8hqj
Namespace: default
Priority: 0
Node: mycluster.localdomain
Labels: pod-template-hash=5ccb8df89
        run=php-apache-kata-fc
Annotations: <none>
Status: Running
IP: 10.42.0.192
IPs:
  IP: 10.42.0.192
Controlled By: ReplicaSet/php-apache-kata-fc-5ccb8df89
Containers:
  php-apache:
    Container ID: containerd://13718fd3820efb1c0260ec9d88c039c37325a1e76a3edca943c62e6dab02c549
    Image: k8s.gcr.io/hpa-example
    Image ID: k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544
    Port: 80/TCP
    Host Port: 0/TCP
    State: Running
    Ready: True
    Restart Count: 0
    Requests:
      cpu: 200m
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k62m5 (ro)
Conditions:
  Type Status
  Initialized True
  Ready True
  ContainersReady True
  PodScheduled True
Volumes:
  kube-api-access-k62m5:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional: <nil>
    DownwardAPI: true
QoS Class: Burstable
Node-Selectors: katacontainers.io/kata-runtime=true
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  82s   default-scheduler  Successfully assigned default/php-apache-kata-fc-5ccb8df89-v8hqj to mycluster.localdomain
  Normal  Pulling    80s   kubelet            Pulling image "k8s.gcr.io/hpa-example"
  Normal  Pulled     80s   kubelet            Successfully pulled image "k8s.gcr.io/hpa-example" in 745.344896ms
  Normal  Created    80s   kubelet            Created container php-apache
  Normal  Started    79s   kubelet            Started container php-apache
To tear things down on k3s run the following:
$ k3s kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
deployment.apps "php-apache-kata-fc" deleted
service "php-apache-kata-fc" deleted
$ k3s kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
runtimeclass.node.k8s.io "kata-qemu-virtiofs" deleted
runtimeclass.node.k8s.io "kata-qemu" deleted
runtimeclass.node.k8s.io "kata-clh" deleted
runtimeclass.node.k8s.io "kata-fc" deleted
$ k3s kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
daemonset.apps "kata-deploy" deleted
$ k3s kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
serviceaccount "kata-label-node" deleted
clusterrole.rbac.authorization.k8s.io "node-labeler" deleted
clusterrolebinding.rbac.authorization.k8s.io "kata-label-node-rb" deleted
This concludes a first take on running containers as microVMs in k8s using Kata Containers and AWS Firecracker. In the next posts we will explore hardware acceleration options, serverless frameworks and unikernel execution! Stay tuned for more!
NOTE I: If you choose the first option for k3s (external containerd), you'll need to manually append the handlers to /etc/containerd/config.toml, because k3s keeps its own containerd config file at /var/lib/rancher/k3s/agent/etc/containerd/config.toml, and that is where they end up. The easiest way to work around this is:
cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml >> /etc/containerd/config.toml
systemctl restart containerd
NOTE II: If you are already running k8s/k3s with a different containerd snapshotter, then after configuring devmapper and restarting containerd you will probably need to restart the kubelet/k3s services and make sure that all the containers running in the k8s.io containerd namespace are now using devmapper.
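A hedged way to verify this (exact flags may differ slightly across containerd versions):

$ sudo systemctl restart kubelet    # or: sudo systemctl restart k3s
$ sudo ctr -n k8s.io snapshots --snapshotter devmapper ls
$ sudo ctr -n k8s.io containers info <container-id> | grep -i snapshotter

New snapshots should show up under the devmapper snapshotter, and the container info should report devmapper as the snapshotter in use.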