More and more people use machines with energy-efficient ARM chips. No wonder: the Raspberry Pi’s processing power keeps growing, making it suitable for an ever wider range of use cases. For power users there are the new 80-core servers with Ampere processors, and nowadays Oracle even offers free 4-core ARM VMs with 24(!) GB RAM. Reason enough for us to offer not only x86_64 Docker images of Icinga 2, but ARM ones as well – both 32- and 64-bit.
How to do this?
Fortunately we don’t have to use separate Docker tags (or even repos) for the extra architectures. In a registry, a single tag – e.g. icinga/icinga2:latest – can contain multiple architectures, and docker pull icinga/icinga2:latest chooses the one able to run on the host. Such multi-arch images can be built relatively easily with docker buildx build. E.g. consider this simple Dockerfile:
FROM alpine
RUN ["apk", "add", "bash"]
Yes, the RUN layer misses rm -vf /var/cache/apk/*, but with that the Dockerfile wouldn’t be simple anymore. Anyway: if this is built via docker build ., it – surprise, surprise – pulls alpine:latest for the host architecture and installs bash:
#1 [internal] load build definition from Dockerfile
#2 [internal] load .dockerignore
#3 [internal] load metadata for docker.io/library/alpine:latest
#4 [1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#5 [2/2] RUN ["apk", "add", "bash"]
The actual command I used was docker build --progress=plain . 2>&1 | grep -Fe '[' to reduce the output for this post, but both do the same. Apropos the same: docker buildx build does pretty much the same as docker build. I.e. docker buildx build --progress=plain . 2>&1 | grep -Fe '[' (or just docker buildx build .) gives us:
#1 [internal] booting buildkit
#2 [internal] load .dockerignore
#3 [internal] load build definition from Dockerfile
#4 [internal] load metadata for docker.io/library/alpine:latest
#5 [1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#6 [2/2] RUN ["apk", "add", "bash"]
With the additional parameter --platform linux/amd64,linux/arm64/v8,linux/arm/v7, Docker repeats the Dockerfile for all given platforms and combines the results into a single multi-arch image:
#1 [internal] load .dockerignore
#2 [internal] load build definition from Dockerfile
#3 [linux/arm64 internal] load metadata for docker.io/library/alpine:latest
#4 [linux/amd64 internal] load metadata for docker.io/library/alpine:latest
#5 [linux/arm/v7 internal] load metadata for docker.io/library/alpine:latest
#6 [linux/arm64 1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#7 [linux/arm64 2/2] RUN ["apk", "add", "bash"]
#8 [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#9 [linux/amd64 2/2] RUN ["apk", "add", "bash"]
#10 [linux/arm/v7 1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#11 [linux/arm/v7 2/2] RUN ["apk", "add", "bash"]
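By the way, you don’t have to build anything yourself to see what such a manifest list looks like – buildx can inspect any image in a registry, and alpine:latest itself is multi-arch:

```shell
# Print the manifest list of a multi-arch image, one entry per
# platform it was built for. Needs a running Docker daemon.
docker buildx imagetools inspect alpine:latest
```

The output shows one manifest per platform, each with its own digest – docker pull then simply picks the one matching the host.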
Linux hosts may complain that docker buildx create --use has to be run first – in this case, just do it. Also make sure the packages qemu-user-static and binfmt-support are installed, otherwise strange errors may occur, e.g. “non-existent” executables of foreign architectures which actually exist. Besides that, if everything architecture-dependent is done in the Dockerfile, there’s nothing between you and your new multi-arch image. Just give it a tag via -t as usual and let buildx push it directly to Docker Hub via --push.
TL;DR
The next Icinga 2 release will come with ARM-compatible images as described above, and our other products will follow. By the way, we cleaned up the Icinga 2 images, so you will save 70% – storage and bandwidth, I mean, not money. The images are, and have always been, available for free. Apropos money: the managed K8s of our NWS colleagues – where you can use our Docker images – costs € 2.47 per day (minimal setup). But it’s worth it.