When working with systemd services on Linux, you might encounter situations where multiple instances of a service need to be managed dynamically. When I had to develop a solution to monitor multiple Kubernetes clusters with Icinga for Kubernetes, I ran into exactly this challenge. I asked myself: ‘How do you efficiently manage multiple instances of a service without manually enabling or disabling each one?’ It quickly became clear that a traditional systemd approach with individual unit files wouldn’t scale: it was simply too much manual effort.
The solution was a systemd generator that dynamically manages service instances. In this post, I’ll walk you through how I tackled this problem and how you can implement a similar system in your environment.
Understanding the Use Case
Imagine a scenario where multiple Kubernetes cluster monitoring services need to be managed under a single control unit. Each instance has its own environment configuration file (see the example layout after this list), and we want to:
- Automatically start only the desired instances.
- Ensure instances are stopped or restarted when the main service is controlled.
- Configure each instance independently.
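To make this concrete, here is the kind of file layout we are working towards; the cluster names are just the examples used throughout this post:

```
/etc/icinga-kubernetes/
├── config.yml          # shared configuration for all instances
├── prod-cluster.env    # one environment file per cluster instance
└── test-cluster.env
/etc/default/icinga-kubernetes    # controls which instances autostart
```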
The Systemd Components
I found out that we need three key systemd components to achieve this:
- A generator: Dynamically creates symlinks for service instances.
- A main service: Acts as the control unit for all instances.
- A template service: Defines how each instance runs.
Let’s dive into the implementation.
1. Creating the Systemd Generator
Systemd generators are scripts that run at boot time or on daemon-reload to dynamically create dependencies. Our generator will read the /etc/default/icinga-kubernetes configuration file to determine which instances should be started. Personally, I like using Bash, but you can use any other language as well.
```bash
#!/bin/bash

set -eu

WANTDIR="$1/icinga-kubernetes.service.wants"
SERVICEFILE="/lib/systemd/system/icinga-kubernetes@.service"
AUTOSTART="all"
CONFIG_DIR=/etc/icinga-kubernetes

if [[ ! -d "$WANTDIR" ]]; then
    mkdir -p "$WANTDIR"
fi

if [[ -e /etc/default/icinga-kubernetes ]]; then
    source /etc/default/icinga-kubernetes
fi

if [[ "$AUTOSTART" == "none" ]]; then
    exit 0
fi

if [[ "$AUTOSTART" == "all" || -z "$AUTOSTART" ]]; then
    for CONFIG in $(cd $CONFIG_DIR; ls *.env 2> /dev/null); do
        NAME=${CONFIG%%.env}
        ln -s "$SERVICEFILE" "$WANTDIR/icinga-kubernetes@$NAME.service"
    done
else
    for NAME in $AUTOSTART; do
        if [[ -e "${CONFIG_DIR}/${NAME}.env" ]]; then
            ln -s "$SERVICEFILE" "$WANTDIR/icinga-kubernetes@$NAME.service"
        fi
    done
fi

exit 0
```
This script ensures that instances listed in AUTOSTART are linked into the systemd service directory, allowing systemd to manage them collectively.
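Systemd looks for generators in dedicated directories rather than the usual unit paths. A minimal install sketch, assuming the script was saved as icinga-kubernetes-generator (the file name is my choice; any name works):

```bash
# Generator search paths are documented in systemd.generator(7);
# /etc/systemd/system-generators/ is the place for local additions.
install -m 0755 icinga-kubernetes-generator \
  /etc/systemd/system-generators/icinga-kubernetes-generator

# Generators are re-run on boot and on every daemon-reload:
systemctl daemon-reload

# The symlinks land in the normal generator directory ($1 of the script):
ls /run/systemd/generator/icinga-kubernetes.service.wants/
```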
To manually adjust the AUTOSTART value, I created the configuration file /etc/default/icinga-kubernetes. Allowed values for the AUTOSTART variable are all, none, or a space-separated list of service names. If the variable is empty, all is assumed. Changing this file requires running systemctl daemon-reload followed by a restart of the control unit.
```bash
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="test-cluster prod-cluster"
```
2. Defining the Main Service
The main service doesn’t run a process itself. Instead, it serves as the control unit that the generated instance symlinks attach to via its .wants directory, so all instances can be managed through it.
```ini
[Unit]
Description=Icinga for Kubernetes
After=syslog.target network-online.target mariadb.service postgresql.service

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
WorkingDirectory=/etc/icinga-kubernetes

[Install]
WantedBy=multi-user.target
```
This service guarantees that the instance symlinks created during boot or on systemctl daemon-reload are grouped under a single control unit.
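To verify the wiring, you can list the dependencies of the control unit; the instances generated from your environment files should appear underneath it. Illustrative output with the two example clusters:

```bash
systemctl list-dependencies icinga-kubernetes.service
# icinga-kubernetes.service
# ● ├─icinga-kubernetes@prod-cluster.service
# ● └─icinga-kubernetes@test-cluster.service
```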
3. Creating the Template Service
Each instance is controlled using a template service. The %i placeholder represents the instance name (derived from the environment file’s name).
```ini
[Unit]
Description=Icinga for Kubernetes (%i)
PartOf=icinga-kubernetes.service
After=syslog.target network-online.target mariadb.service postgresql.service

[Service]
Type=simple
WorkingDirectory=/etc/icinga-kubernetes
Environment="ICINGA_FOR_KUBERNETES_CLUSTER_NAME=%i"
EnvironmentFile=/etc/icinga-kubernetes/%i.env
ExecStart=/usr/sbin/icinga-kubernetes --config /etc/icinga-kubernetes/config.yml --cluster-name ${ICINGA_FOR_KUBERNETES_CLUSTER_NAME}
User=icinga-kubernetes
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
This configuration:
- Reads the instance name from the %i placeholder.
- Overrides the instance name with the value from the environment file if set (EnvironmentFile= is read after Environment=, so the file wins; see the check after this list).
- Starts the icinga-kubernetes process with the correct cluster configuration.
- Ensures each instance restarts if it fails.
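If you want to double-check what a concrete instance resolves to, systemctl can show both the template and the effective environment settings (prod-cluster is an example instance name):

```bash
systemctl cat icinga-kubernetes@prod-cluster.service    # the template, with %i still visible
systemctl show icinga-kubernetes@prod-cluster.service \
  -p Environment -p EnvironmentFiles                    # the effective env settings
```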
How It Works in Practice
1. Control the main service
Controlling the main service affects all instances.
```bash
systemctl enable --now icinga-kubernetes.service
systemctl stop icinga-kubernetes.service
systemctl start icinga-kubernetes.service
```
2. Manage individual instances
You can still control the instances individually.
```bash
systemctl start icinga-kubernetes@prod-cluster.service
systemctl stop icinga-kubernetes@test-cluster.service
```
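Each instance also gets its own status and journal, which makes debugging a single cluster straightforward:

```bash
systemctl status icinga-kubernetes@prod-cluster.service
journalctl -u icinga-kubernetes@prod-cluster.service -f    # follow the instance log
```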
3. Add a new instance
Add a new environment file /etc/icinga-kubernetes/test-cluster.env.

```bash
ICINGA_FOR_KUBERNETES_CLUSTER_NAME="Test Cluster"
```
Then, reload systemd and restart the service.
```bash
systemctl daemon-reload
systemctl restart icinga-kubernetes.service
```
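To confirm which instances are actually running after the restart:

```bash
systemctl list-units 'icinga-kubernetes@*'
```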
4. Remove an instance
Stop the instance and remove its environment file from /etc/icinga-kubernetes, or alternatively remove the instance from the AUTOSTART configuration as in step 5.

```bash
systemctl stop icinga-kubernetes@test-cluster.service
rm /etc/icinga-kubernetes/test-cluster.env
```
Then, reload systemd and restart the service.
```bash
systemctl daemon-reload
systemctl restart icinga-kubernetes.service
```
5. Modify AUTOSTART configuration
Modify /etc/default/icinga-kubernetes and change the AUTOSTART value. Then, reload systemd and restart the service.

```bash
systemctl daemon-reload
systemctl restart icinga-kubernetes.service
```
Conclusion
Through the use of a systemd generator and template services, I was able to efficiently manage multiple service instances under a single control unit. The approach proved scalable and flexible: all instances can be started, stopped, or reloaded as needed, without any manual handling of individual unit files. Developing this was a big step towards multi-cluster support for Icinga for Kubernetes, and beyond this specific use case, the same pattern can be applied to many other scenarios. It was an interesting excursion into systemd for me, but the research also brought some really frustrating moments, so I decided to share what I learned and make things easier for you.