Monitoring Automation with Icinga – Certificate Monitoring


In our ongoing efforts to make it easier to automate monitoring environments we recently introduced a new module for Icinga Web 2.

Icinga Certificate Monitoring is available on GitHub.

This module is first and foremost a platform that gives you an overview of all the certificates you use in your environment to prove the identity of your devices. You can take a quick glance or a very detailed look at them. It shows you exactly how your certificates are distributed by signing certificate authority, algorithm and key strength, and which certificates expire next.

 

Certificates Dashboard

 

It helps with automation

You don’t need to register each device or certificate by hand. The module scans the networks you provide and harvests any certificates it encounters. Whether it does this regularly or on demand is entirely up to you.

Networks are provided by setting up jobs. Each job defines one or more IP ranges in CIDR notation and the ports to scan. A schedule in CRON format may also be set so that the module’s daemon runs the job regularly.
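To make this more tangible, here is a minimal, hypothetical Python sketch of what such a scan job conceptually does; it is not the module’s actual code, and the job values (CIDR range and port) are made up. It simply expands the range and collects the certificate presented on the configured port.

from typing import Optional
import ipaddress
import socket
import ssl

# Hypothetical job definition mirroring what the module lets you configure.
JOB = {"cidr": "192.0.2.0/30", "port": 443}


def fetch_certificate(ip: str, port: int, timeout: float = 3.0) -> Optional[str]:
    """Perform a TLS handshake with ip:port and return the peer certificate as PEM."""
    context = ssl.create_default_context()
    context.check_hostname = False       # we only harvest certificates here,
    context.verify_mode = ssl.CERT_NONE  # we do not validate them
    try:
        with socket.create_connection((ip, port), timeout=timeout) as sock:
            with context.wrap_socket(sock) as tls:
                der = tls.getpeercert(binary_form=True)
                return ssl.DER_cert_to_PEM_cert(der) if der else None
    except OSError:
        return None                      # host down, port closed or no TLS


if __name__ == "__main__":
    for host in ipaddress.ip_network(JOB["cidr"]).hosts():
        pem = fetch_certificate(str(host), JOB["port"])
        if pem is not None:
            print("harvested a certificate from {}:{}".format(host, JOB["port"]))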

 

Integrates well with your environment

Cloud hosting and virtual machines have been on the rise for a long time now, and with SNI (Server Name Indication) a single host may easily present different certificates on the same endpoint. To accommodate this, the module can be told to scan an endpoint multiple times by setting up SNI maps.
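Sticking with the Python sketch from above, an SNI map boils down to scanning the same endpoint once per configured server name; the endpoint and hostnames below are invented for illustration.

import socket
import ssl

# Hypothetical SNI map: one endpoint, several server names to probe.
SNI_MAP = {("203.0.113.10", 443): ["www.example.com", "shop.example.com"]}

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

for (ip, port), server_names in SNI_MAP.items():
    for name in server_names:
        # server_hostname ends up in the SNI extension of the TLS handshake,
        # so the endpoint may present a different certificate for every name.
        with socket.create_connection((ip, port), timeout=3.0) as sock:
            with context.wrap_socket(sock, server_hostname=name) as tls:
                der = tls.getpeercert(binary_form=True)
                print("{}:{} presented {} bytes of DER for {}".format(ip, port, len(der), name))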

Installed alongside the monitoring module, Icinga Certificate Monitoring even accesses its database backend to fetch SNI information.¹ This helps match the results found during a scan to hostnames already known in your monitoring environment.

 

Don’t forget to roll out new certificates

Let’s be honest: everyone has at some point forgotten to renew or replace an expiring certificate. The module provides detailed views showing you exactly which certificates require your attention.

Certificate Overview

Certificate Chain Health

 

Take advantage of your favorite monitoring tool

If you’re not proactively looking at the user interface, the check command shipped with this module helps you set up notifications in Icinga.
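The exact invocation of the bundled check command is described in the module’s documentation. Purely to illustrate what such a check evaluates, here is a hypothetical standalone plugin in the classic Icinga plugin style that alerts on the number of days left until a certificate expires; the endpoint and thresholds are assumptions.

import socket
import ssl
import sys
from datetime import datetime, timezone

HOST, PORT = "www.example.com", 443   # assumed endpoint
WARNING_DAYS, CRITICAL_DAYS = 30, 7   # assumed thresholds

# A default context verifies the chain, so getpeercert() returns the parsed certificate.
context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5.0) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        not_after = tls.getpeercert()["notAfter"]

expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days

if days_left < CRITICAL_DAYS:
    print("CRITICAL - certificate expires in {} days ({})".format(days_left, not_after))
    sys.exit(2)
if days_left < WARNING_DAYS:
    print("WARNING - certificate expires in {} days ({})".format(days_left, not_after))
    sys.exit(1)
print("OK - certificate is valid for another {} days".format(days_left))
sys.exit(0)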

Certificate Usage

Monitoring Service List

 

Bridging the gap with the Director

With all this talk about automation one has to wonder how to establish a link between this module’s knowledge about certificates and Icinga’s configuration. You’re right if you think of the Director’s import and synchronization functionality now.

The module ships its own import sources, which let you easily import known hosts or certificates. Once this is set up, you only have to define jobs for it and the rest is handled automatically.

 

¹Available with Icinga Web v2.7 (Scheduled for release mid 2019)

 

 

Monitoring Automation with Icinga – The Director


I’m not going to list all benefits of automating your monitoring system. If you’re here and reading this, you are most likely very aware that maintaining a large infrastructure is a big challenge.

Automating the monitoring process for a huge number of servers, virtual machines, applications, services, private and public clouds was a main driver for us when we decided to build Icinga 2. In fact, monitoring large environments is not a new demand for us at all. We have faced this challenge together with many corporations for many years. Finally, it led us to build features like our rule-based configuration, Icinga’s REST API and various modules, cookbooks, roles and playbooks for different configuration management tools.

 

“It depends”

All these methods make it easier to automate monitoring in their own particular way. We have built multiple ways to automate monitoring because there is not just one way to do it right. As usual in the IT field, “it depends”. Depending on your infrastructure, one approach or another may be the right way for you to automate your monitoring.

 

Beyond the Static

With all this in mind, we created the Icinga Director three years ago. The Director is a module for Icinga that enables users to create Icinga configuration within a web interface. It utilizes Icinga’s API to create, modify and delete monitoring objects. Beyond the plain configuration functionality, the Director has a strong focus on automating these tasks. It provides an approach that goes far beyond well-known static configuration management, and with thoughtful features it empowers operators to manage massive amounts of monitoring objects. In this article I focus on the Director’s automation functionality, even though creating single hosts and services manually is of course also possible.
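The Director hides that plumbing from you, but underneath it talks to the same REST API you could use directly. Just to illustrate the foundation it builds on (the host name, credentials and attributes below are made up, and the Director’s internal calls differ), creating a single host object via the Icinga 2 API looks roughly like this:

import requests

# Hypothetical API endpoint, API user and host object.
URL = "https://icinga-master.example.com:5665/v1/objects/hosts/app01.example.com"
AUTH = ("director", "secret")

payload = {
    "templates": ["generic-host"],  # assumes this template exists
    "attrs": {
        "address": "192.0.2.10",
        "vars.location": "Nuremberg",
        "vars.environment": "production",
    },
}

response = requests.put(
    URL,
    json=payload,
    auth=AUTH,
    headers={"Accept": "application/json"},
    verify=False,  # demo only; verify against the Icinga CA in production
)
print(response.status_code, response.json())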

 

Step 1: Import

In almost every company a database exists, or something similar, that holds information about the currently running servers and applications. Some use dedicated software to manage this information. Others use tools they have built themselves or rely on their config management tools.

While most people refer to this database as a CMDB there are also other common terms. In theory only one database exists within an organization, allowing everyone involved to manage the information in a single place. In practice, most organizations spread the data across a number of different tools. While there are some professional approaches, there are also some not-so-professional ways of maintaining this data. Have you ever seen someone keep their IP addresses and locations of servers in an Excel file? We’ve seen it all.

 

Creating the source

The Director contains a feature called Import Source. It allows you to import all kinds of data from many different data graves. The data does not have to be in the Icinga configuration format; the Director will take care of that later. For a start, you only need some kind of data.

In my very simple example, I’m using a MySQL database, which is a common storage for this type of information. My database cmdb contains only one table, hosts, which holds everything I know about my servers. For demo purposes, this is perfect.

Creating the import source requires access to the database server. The credentials are stored in an Icinga Web 2 resource and are therefore reusable. After triggering a Check for changes you can preview the data set in the Preview tab. If everything you need is included, you can trigger the import run, which actually imports the data.

Starting from here, your data is generally available and you can create Icinga configuration out of it. The properties may also be modified during the import, but I leave them as they are for now. You can learn more about the available modifiers in the Director documentation.
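Conceptually, the SQL import source does little more than run a query and hand every row to the Director. A rough Python equivalent could look like the sketch below; the connection details and table layout are assumptions based on my demo cmdb, and in the Director the credentials live in an Icinga Web 2 resource instead of being hard-coded.

import pymysql  # pip install pymysql

# Hypothetical read-only credentials for the demo cmdb database.
connection = pymysql.connect(
    host="db.example.com",
    user="director_ro",
    password="secret",
    database="cmdb",
    cursorclass=pymysql.cursors.DictCursor,
)

with connection:
    with connection.cursor() as cursor:
        # Assumed table layout: hostname, ip, location, environment
        cursor.execute("SELECT hostname, ip, location, environment FROM hosts")
        for row in cursor.fetchall():
            # Each row becomes one entry in the import preview.
            print(row["hostname"], row["ip"], row["location"], row["environment"])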

 

Using SQL, LDAP or else

For my import source I used the source type SQL, which is built-in and available by default. You can use other source types as well, for example LDAP. That allows you to import not only the objects that have to be monitored, but also users from your LDAP or Active Directory, and to use their contact information for alerts.
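As a rough sketch of the LDAP case (the server, bind user, base DN and filter below are made up), pulling users and their mail addresses with Python’s ldap3 library might conceptually look like this; the Director’s built-in LDAP import source does the equivalent for you through the web interface.

from ldap3 import ALL, Connection, Server  # pip install ldap3

# Hypothetical Active Directory connection details.
server = Server("ldaps://ad.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_icinga", password="secret", auto_bind=True)

conn.search(
    search_base="OU=People,DC=example,DC=com",
    search_filter="(&(objectClass=user)(mail=*))",
    attributes=["sAMAccountName", "mail", "displayName"],
)

for entry in conn.entries:
    # Each entry could become an Icinga user object with contact details for alerts.
    print(entry.sAMAccountName, entry.mail, entry.displayName)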

Of course, you can also use other import sources, such as plain text files, PuppetDB, vSphere or AWS. New import sources are added to the Director as Modules, which you could also write yourself. Our lovely community is continuously extending the Director with new import sources as well, for example with import sources for Microsoft’s Azure or Proxmox VE.

 

Step 2: Synchronise

After a successful import I am able to continue with basic config synchronisation. Syncing configuration means that you use the imported data to generate Icinga configuration from it. Generally speaking, you map your data fields onto Icinga objects and properties.

Some of the data I imported is easy to map, such as hostname and IP address; Icinga has pre-defined config parameters for those. Others, like the location and environment of the servers, are mapped to custom variables. Custom variables are something like tags, but on steroids. They accept plain strings as values, but also booleans, arrays and even dictionaries (hashes). I know, this sounds crazy. Custom variables are usually used to store meta information about your objects. You might later want to use this information to create rules which in turn define what should be monitored or who should receive alerts.
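To give a feel for what “tags on steroids” means, here is an illustrative set of custom variables for a single host. The names and values are invented, but they show the value types that are accepted.

# Illustrative custom variables for one host, mixing all accepted value types.
host_vars = {
    "location": "Nuremberg",              # plain string
    "environment": "production",          # plain string
    "notifications_enabled": True,        # boolean
    "interfaces": ["eth0", "eth1"],       # array
    "disks": {                            # dictionary (hash)
        "/": {"warning": "20%", "critical": "10%"},
        "/data": {"warning": "10%", "critical": "5%"},
    },
}

# Rules can later filter on this metadata, e.g. "notify the on-call team for
# every production host located in Nuremberg".
matches_rule = host_vars["environment"] == "production" and host_vars["location"] == "Nuremberg"
print(matches_rule)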

 

But first things first.

Creating a new Sync requires some input and some decisions. You define what kind of monitoring objects you’re going to create out of your imported data. This can be Hosts, Endpoints, Services, Users, Groups and so on. In my example I simply choose to create some host objects.

Then you decide what should happen with existing monitoring configuration: shall it be replaced with the new one, merged or ignored? Once the Sync is created you can finally start to tell the Director how to map your data to Icinga configuration attributes. This is done in the Properties tab by creating sync properties; for every column in my table I create one. Have a look at the images for details.
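Roughly speaking, every sync property is a mapping from a source column to an Icinga attribute. For the hypothetical table from my import example, the whole set of sync properties boils down to something like this (column and attribute names are assumptions):

# One entry per sync property: source column -> Icinga host attribute.
SYNC_PROPERTIES = {
    "hostname": "object_name",          # becomes the name of the host object
    "ip": "address",                    # pre-defined Icinga attribute
    "location": "vars.location",        # custom variable
    "environment": "vars.environment",  # custom variable
}


def row_to_host_attrs(row):
    """Turn one imported row into the attribute set of a host object."""
    return {attribute: row[column] for column, attribute in SYNC_PROPERTIES.items()}


example_row = {"hostname": "app01", "ip": "192.0.2.10",
               "location": "Nuremberg", "environment": "production"}
print(row_to_host_attrs(example_row))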

Checking for changes only performs a dry run that tells you whether any changes are available. Triggering the Sync actually synchronizes the new configuration with the existing one and automatically creates new entries in the Director’s activity log. At this point, everything is ready to be deployed to Icinga.

 

Step 3: Deploy

The Director’s activity log shows precisely what changes are waiting to be deployed to Icinga. Clicking on each element displays the exact diff between old and new configuration. This diff format may be familiar to you from Git for example. Another point that may be familiar is the whole history of deployments which you can see within the activity log.

 

Travelling in time

The activity log contains every configuration change you have ever made with the Director. It lets you travel back in time and deploy old configs if necessary. The Director’s history of deployments basically works like your Git history: you can diff between certain deployments to track the changes and see which user deployed which configuration at what time.

The simplest way of deploying your new configuration is by just clicking Deploy pending changes. Your config will be pushed to Icinga and validated. If everything is fine it will be in production within seconds. If there’s anything wrong, you will receive a log with the details and your running configuration remains untouched.

 

Automation

 

As I mentioned in the beginning, automation is a key aspect of the Director. The steps described so far (Import, Synchronise and Deploy) can all be automated once they are fully configured. The automation is done by creating Jobs, with each Job running one specific task at a specified interval.

The frequency of a Job is freely configurable. It may run every minute or only once a day. You create a Job for each task or only for certain tasks.

 

 

Hint: You can use a simple cronjob to run the Jobs. To fully leverage the Director’s Job functionality in the web interface, you have to start a separate daemon as described in the documentation.

 

Conclusion

The workflow described above is only a sneak peek into the capabilities of the Director. This scenario nevertheless demonstrates the basic functionality and what you can do with it very well. Many more features were left out of this article, like modifying imported data, merging data from multiple data sources, filtering, or creating and using custom data sets.

Managing and maintaining large server infrastructures is a very complex thing to do, and there’s a good reason why whole teams are required to do so. I would be lying if I told you that monitoring such large setups is easy. But with the right toolset it is definitely doable! Check out the full Director Documentation to get started with your monitoring automation project.

We will discuss the challenges of monitoring large infrastructure at the upcoming Icinga Camps as well. Join us for a full day filled with talks and discussions about and around Icinga. Meet new people and get to know others from the same field.

 

 

Icinga joins HashiCorp Technology Partner Program


One of the biggest advantages of Icinga is its capability to interact with other tools. Integrations of Icinga cover notification mechanisms, visualisations or automatic deployment. Our REST API in particular allows users to easily connect with Icinga, be it for read access or the creation of new objects. The possibility to automate allows Icinga users to monitor highly dynamic infrastructure in private and public clouds.

Exceptions prove the rule, Director to the rescue


Two and a half months have gone by since version 1.2.0. Yesterday we tagged the latest release: version 1.3.0. As we all know, the Director is a fantastic tool for automation, being the glue between your various data sources and your monitoring. But in the rough real world not all things are always automated. Therefore, this release puts its focus on managing all those “little exceptions” many of you face in your daily work in semi-automated environments.

Docker, Docker, Docker!

We are already using Docker and container-based implementations during development, package builds and tests. This helps speed up development quite a lot, next to the fancy Vagrant boxes. Since we’ve seen community members creating Docker images for everything, we thought we’d give it a try for our own official Docker container – our notebooks used in live demos at Icinga Camps certainly say thanks 😉
One problem arises: Docker containers are not made for running multiple applications. You would normally run each application inside its own container and only export volumes and ports for communication links. A demo environment for Icinga 2 requires at least:

  • Web server (httpd, apache2, etc)
  • Database server (MariaDB, etc)
  • Icinga 2 daemon

Furthermore, we want to serve Icinga Web 2 as the primary frontend and need to export port 80 for browser access, and even SSH access for whatever comes to mind.
There’s already a Debian-based Docker container that uses supervisord to start multiple applications in the foreground. Taking this example and adding our own requirements into a CentOS 7 based container (similar to the Vagrant boxes) leads us to our very own icinga2 Docker container.
It requires Docker v1.6.0 or later installed; then fire away and bind the container’s port 80 to your host’s port 3080:

$ sudo docker run -ti -p 3080:80 icinga/icinga2

Navigate to http://localhost:3080/icingaweb2/ and log in using icingaadmin/icinga as credentials.
Keep in mind that the Docker container was made for test, development and demo purposes without any further production support.
If you’re planning to try Icinga Web 2 for example and want to test your own local patches, just mount the exported volumes like this:

$ sudo docker run -ti -p 3081:80 -v /usr/share/icingaweb2 -v /etc/icingaweb2 icinga/icinga2

There are additional volumes for /etc/icinga2 and /var/lib/icinga2 available. If you’re planning to modify the container image, you’ll find all required instructions inside the git repository.
In the long run, one might think of an Icinga 2 application cluster based on Docker containers. Who knows – happy testing & sending patches! 🙂