Updates on Icinga Camps in Stockholm, Milan and Zürich


The Icinga Camps we are co-organizing with our partners are truly fascinating. We manage to offer a platform for experienced monitoring experts and beginners at the same time. An Icinga Camp is an event where you can listen to amazing stories from Icinga users and developers. But it’s also a place where you can discuss your own challenges and learn from the experience of others. You will get the news about the latest technologies in the Icinga universe and discover the many ways Icinga can help you manage your daily monitoring requirements.

We have opened the Call for Presentations and Registration for the upcoming Icinga Camps in Stockholm, Milan and Zürich!

We would love to hear about your successes, learnings and pains in production. Share your experience with others and tell us your story. These are just some examples of what we are looking for:

  • Best Practices – There isn’t just one way to do it
  • From your Experience – Every monitoring environment has different challenges, tell us about yours
  • Monitoring Culture – Building a healthy monitoring environment
  • Integrations – How do you connect and use Icinga with your other tools


Submit your Proposal

Monitoring Automation with Icinga – The Director


I’m not going to list all benefits of automating your monitoring system. If you’re here and reading this, you are most likely very aware that maintaining a large infrastructure is a big challenge.

Automating the monitoring process for a huge number of servers, virtual machines, applications, services, and private and public clouds was a main driver for us when we decided to build Icinga 2. In fact, monitoring large environments is not a new demand for us at all. We have faced this challenge together with many organizations over the years. Ultimately, it led us to build features like our rule-based configuration, Icinga’s REST API and various modules, cookbooks, roles and playbooks for different configuration management tools.
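
To make the API part a bit more tangible, here is a minimal sketch of creating a host object through Icinga 2’s REST API with Python. The endpoint path and JSON structure follow the Icinga 2 API documentation, while the host name, address and credentials are made up for this example; in a real setup you would use your own API user and proper certificate validation.

```python
import requests

# Create a host via the Icinga 2 REST API.
# Host name, address and credentials below are placeholders for illustration.
API_URL = "https://icinga-master.example.com:5665/v1/objects/hosts/web-01.example.com"

response = requests.put(
    API_URL,
    auth=("director", "secret"),            # API user from api-users.conf
    headers={"Accept": "application/json"},
    json={
        "templates": ["generic-host"],      # import an existing host template
        "attrs": {
            "address": "192.0.2.10",
            "vars.location": "Berlin",      # custom variable
        },
    },
    verify=False,  # sketch only; verify the CA certificate in production
)
print(response.status_code, response.json())
```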

 

“It depends”

All these methods make it easier to automate monitoring in their own particular way. We have built multiple ways to automate monitoring because there is not only one way to do it right. As usual in the IT field, “it depends”. Depending on your infrastructure, one approach or another may be the right way for you to automate your monitoring.

 

Beyond the Static

With all this in mind we created the Icinga Director three years ago. Director is a module for Icinga that enables users to create Icinga configuration within a web interface. The Director utilizes Icinga’s API to create, modify and delete monitoring objects. Besides the plain configuration functionality, the Director has a strong focus on automating these tasks. It provides a different approach that goes far beyond well-known static configuration management. With thoughtful features the Director empowers operators to manage massive numbers of monitoring objects. In this article I focus on the Director’s automation functionality, even though creating single hosts and services manually is of course also possible.

 

Step 1: Import

Almost every company has a database, or something similar, that holds information about the currently running servers and applications. Some use dedicated software to manage this information. Others use tools they have built themselves or rely on their config management tools.

While most people refer to this database as a CMDB, there are also other common terms. In theory only one such database exists within an organization, allowing everyone involved to manage the information in a single place. In practice, most organizations spread the data across a number of different tools. While there are some professional approaches, there are also some not-so-professional ways of maintaining this data. Have you ever seen someone keep the IP addresses and locations of their servers in an Excel file? We’ve seen it all.

 

Creating the source

The Director contains a feature called Import Source. It allows you to import all kinds of data from many different data graves. The data does not have to be in the Icinga configuration format, the Director will take care of that later. For a start, you only need some kind of data.

In my very simple example, I’m using a MySQL database, which is a common storage for this type of information. My database cmdb contains only one table, hosts, that holds everything I know about my servers. For demo purposes, this is perfect.
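
For illustration, the rows of that fictional hosts table might look roughly like the sketch below; the column names (hostname, ip, location, environment) are my own assumptions for this demo, not a schema the Director requires.

```python
# Example rows of the fictional cmdb.hosts table, written out as Python dicts.
# The Director only needs a key column to identify each row; everything else
# is up to you.
cmdb_hosts = [
    {"hostname": "web-01.example.com", "ip": "192.0.2.10", "location": "Berlin",    "environment": "production"},
    {"hostname": "web-02.example.com", "ip": "192.0.2.11", "location": "Berlin",    "environment": "production"},
    {"hostname": "db-01.example.com",  "ip": "192.0.2.20", "location": "Nuremberg", "environment": "staging"},
]
```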

Creating the import source requires access to the database server. The credentials are stored in an Icinga Web 2 resource, so they are reusable. After triggering a Check for changes you can preview the data set in the Preview tab. If everything that you need is included, you can trigger the import run, which actually imports the data.

Starting from here, your data is generally available and you can create Icinga configuration out of it. The properties may also be modified during the import, but I leave them as they are for now. Learn more about available modifiers in the Director documentation.
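
To give a rough idea of what such a modifier does, the following Python snippet mimics a simple lowercase transformation applied to a hostname before syncing. The real modifiers are configured in the Director’s web interface, so this is only a conceptual sketch with made-up data.

```python
def lowercase_hostname(row: dict) -> dict:
    """Conceptual sketch of a 'lowercase' property modifier applied to one imported row."""
    row = dict(row)                           # keep the imported row untouched
    row["hostname"] = row["hostname"].lower()
    return row

print(lowercase_hostname({"hostname": "WEB-01.EXAMPLE.COM", "ip": "192.0.2.10"}))
# {'hostname': 'web-01.example.com', 'ip': '192.0.2.10'}
```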

 

Using SQL, LDAP or else

For my import source I used the source type SQL which is built-in and available by default. You can use other source types as well, for example LDAP. That allows you to import not only objects that have to be monitored, but also users from your LDAP or Active Directory and use the contact information for alerts.

Of course, you can also use other import sources, such as plain text files, PuppetDB, vSphere or AWS. New import sources are added to the Director as Modules, which you could also write yourself. Our lovely community is continuously extending the Director with new import sources as well, for example with import sources for Microsoft’s Azure or Proxmox VE.

 

Step 2: Synchronise

After a successful import I am able to continue with basic config synchronisation. Syncing configuration means using the imported data to generate Icinga configuration from it. Generally speaking, you map your data fields onto Icinga objects and properties.

Some of the data I imported is easy to map, such as hostname and IP address; Icinga has pre-defined config parameters for those. Others, like the location and environment of the servers, are mapped to custom variables. Custom variables are something like tags, but on steroids. They accept plain strings as values, but also booleans, arrays and even dictionaries (hashes). I know, this sounds crazy. Custom variables are usually used to store meta information about your objects. You might later want to use this information to create rules, which in turn define what should be monitored or who should receive alerts.
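
To make that concrete, here is a sketch of the kind of custom variables a host might end up with. The names and values are purely illustrative; they only demonstrate that strings, booleans, arrays and dictionaries are all valid.

```python
# Illustrative custom variables for a single host. Names and values are made up;
# the point is that strings, booleans, arrays and dictionaries are all accepted.
host_vars = {
    "location": "Berlin",                  # plain string
    "environment": "production",           # plain string
    "is_virtual": True,                    # boolean
    "interfaces": ["eth0", "eth1"],        # array
    "disks": {                             # dictionary (hash)
        "/":    {"warning": "20%", "critical": "10%"},
        "/var": {"warning": "15%", "critical": "5%"},
    },
}
```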

 

But first things first.

Creating a new Sync requires some input and some decisions. You define what kind of monitoring objects you’re going to create out of your imported data. This can be Hosts, Endpoints, Services, Users, Groups and so on. In my example I simply choose to create some host objects.

Then you decide what should happen with existing monitoring configuration. Shall it be replaced with the new one, merged or ignored? Once the Sync is created you can finally start to tell the Director how to map your data to Icinga configuration attributes. This is done in the Properties tab by creating a new sync property. For every column in my table I create a sync property. Have a look at the images for details.
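
Expressed as a simple mapping, the sync properties for my demo table could look like the sketch below. The destination names (object_name, address and the vars.* custom variables) follow the Director’s conventions as I understand them; the source column names come from my made-up schema.

```python
# Conceptual view of the sync properties: each imported column is mapped onto
# an Icinga attribute or custom variable.
sync_properties = {
    "hostname":    "object_name",       # becomes the Icinga host object name
    "ip":          "address",           # pre-defined host attribute
    "location":    "vars.location",     # custom variable
    "environment": "vars.environment",  # custom variable
}
```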

Checking for changes will only do a dry run to tell you whether any changes are available. Triggering the Sync actually synchronizes the new configuration with the existing one and automatically creates new entries in the Director’s activity log. At this point, everything is ready to be deployed to Icinga.

 

Step 3: Deploy

The Director’s activity log shows precisely what changes are waiting to be deployed to Icinga. Clicking on each element displays the exact diff between old and new configuration. This diff format may be familiar to you from Git for example. Another point that may be familiar is the whole history of deployments which you can see within the activity log.

 

Travelling in time

The activity log contains every configuration change you ever made with the Director. It lets you travel back in time and deploy old configs if necessary. The Director’s history of deployments basically works like your Git history: you can diff between certain deployments to track the changes and see which user deployed which configuration at what time.

The simplest way of deploying your new configuration is by just clicking Deploy pending changes. Your config will be pushed to Icinga and validated. If everything is fine it will be in production within seconds. If there’s anything wrong, you will receive a log with the details and your running configuration remains untouched.

 

Automation

 

As I mentioned in the beginning, automation is a key aspect of the Director. The steps described so far (Import, Synchronise and Deploy) can all be automated once they are fully configured. The automation is done by creating Jobs, with each Job running one specific task at a specified interval.

The frequency of a Job is freely configurable. It may run every minute or only once a day. You can create a Job for each task or only for certain tasks.


Hint: You can use a simple cronjob to run the Jobs. To fully leverage the Director’s Job functionality in the web interface, you have to start a separate daemon as described in the documentation.

 

Conclusion

The workflow described above was only a sneak peek into the capabilities of the Director. However, this scenario demonstrates the basic functionality and what you can do with it quite well. There are many more features left out of this article, like modifying imported data, merging data from multiple data sources, filtering, or creating and using custom data sets.

Managing and maintaining large server infrastructures is a very complex thing to do, and there’s a good reason why whole teams are required to do so. I would be lying if I told you that monitoring such large setups is easy. But with the right toolset it is definitely doable! Check out the full Director Documentation to get started with your monitoring automation project.

We will discuss the challenges of monitoring large infrastructure at the upcoming Icinga Camps as well. Join us for a full day filled with talks and discussions about and around Icinga. Meet new people and get to know others from the same field.


Releasing Icinga Reporting for Early Adopters


We’re happy to announce that we released an early version of Icinga Reporting today! With this release we lay the foundation for an overall reporting functionality for Icinga by introducing a new way to work with collected data. At the same time we are also publishing the first use case of Icinga Reporting, which enables you to calculate, display and export SLA reports for your hosts and services.

 

Icinga Reporting

Icinga Reporting is the framework and foundation we created to handle data collected by Icinga 2 and other data providers. By design, Icinga Reporting does not collect or calculate any data itself. The framework processes data from data providers such as Icinga’s IDO or Icinga Web 2 modules and makes it available in different formats. The first version can display the data directly within the Icinga web interface or export it to PDF, JSON or CSV format. With scheduled reports you can receive the prepared data periodically via email.

 

IDO Reports

The IDO is the database where Icinga 2 stores all the status data it collects. It is also the first data provider for Icinga Reporting. We calculate the availability of your hosts and services over a certain period of time and return a percentage value. This allows you to evaluate and compare the accessibility of your applications and network devices. You can use the data to check whether you’re meeting your SLA (Service Level Agreement) and share it with your team and managers.
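
As a simplified illustration (not the exact formula Icinga Reporting uses), availability over a period can be expressed as the share of time a host or service was up:

```python
def availability_percent(total_seconds: int, down_seconds: int) -> float:
    """Simplified SLA example: percentage of the period the object was reachable.

    Illustration only; the real calculation may also take things like scheduled
    downtimes into account.
    """
    return 100.0 * (total_seconds - down_seconds) / total_seconds

# A 30-day month with 43 minutes of outage is roughly 99.9 % availability.
print(round(availability_percent(30 * 24 * 3600, 43 * 60), 3))
```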

 

Open Source Projects

Icinga Reporting consists of multiple projects. We’re continuously working on updating and extending our existing Modules to provide data for Icinga Reporting. This release is at a very early stage and your feedback is very welcome and appreciated.

Join our community on community.icinga.com and have a chat with us about your reporting use cases and challenges! We will discuss Icinga Reporting at our upcoming Icinga Camps as well. The CfP for Stockholm, Milan and Zürich is still open for those who are interested in speaking at these events.

Announcing the Agenda for Icinga Camp Berlin 2019


It’s coming closer! This year’s Icinga Camp Berlin will take place on 14th March and we’re excited to publish the first confirmed speakers. The agenda includes a great bunch of talks from experienced Icinga users. The topics range from integrations with Elastic and the HashiCorp stack to reports by professional Icinga users about their challenges and best practices from the field. Not to forget the latest news directly from the Icinga Team about current developments and upcoming projects.
Icinga Camps are a great platform to expand your knowledge and learn new methods that enhance your monitoring experience. Besides the planned program, Icinga Camps function as a meeting point for monitoring enthusiasts from all areas.

Announcing Icinga Tour 2019


We’re happy to announce our Icinga Tour 2019! The Icinga Tour is a series of Icinga Camps on different dates spread all over the globe throughout the year. We’re organising this series of events in cooperation with our local partners. Over the past years we have hosted various Icinga Camps; for this year we decided to start with defined dates to give everyone the chance to plan ahead.