We are proud to announce that icinga.com now also hosts hardware integrations for monitoring environmental sensors and alerting via text message. The devices from the manufacturers listed on Icinga Integrations can easily be added to your Icinga monitoring by using the plugins provided on Icinga Exchange.
Alerts via Text Message
Basically, there are two types of devices that are most suitable for integration with Icinga. The first group are gateways for sending SMS alert messages. These gateways work with standard SIM cards and can run on 4G, 3G and 2G networks. They provide more functionality than just sending messages: they can also convert SMS to e-mail (and vice versa) or use filters and routing to send messages only to a certain person or group of persons (phonebook listing). Furthermore, the gateways have their own monitoring server, which means they can automatically send notifications, for example in case of losing connection to the network.
In order to smoothly add these gateways to your production processes, there are, for example, plugins for connecting directly to your Microsoft Exchange to make full use of e-mail to SMS conversion. Integrated with Icinga 2, you can not only monitor the device itself but also use it to forward notifications generated by Icinga. The two main manufacturers of these gateways are Braintower, located in Germany, and SMSEagle, headquartered in Poland.
The second type of integrable devices is mainly meant for measuring and surveillance. These usually consist of a base unit with several ports for connecting sensors. The base units can communicate either directly via LAN or via GSM in the case of remote locations. For full environmental monitoring, there are plenty of sensor types that can be attached:
- combination of temperature and humidity
- air pressure
- air flow
- water and leakage
- smoke and gas
- levels of liquids like petrol
- movement, vibration and opening of doors
The main manufacturers of measuring devices are currently AKCP, GUDE, HW group and Tinkerforge. For all of these you can find monitoring plugins on Icinga Exchange. The manufacturer GUDE also offers power supplies that not only monitor voltage, but are also capable of remotely controlling plugs and devices.
Integrating Tinkerforge with Icinga
To take a closer look at how these devices can be integrated with Icinga 2, Tinkerforge is one of the best examples. The whole plugin project can be found at https://exchange.icinga.com/netways/check_tinkerforge and not only shows the integration but also offers an insight into the functionality of the device.
Using the check_tinkerforge plugin requires Python 2.7+ and the tinkerforge Python library from PyPI. Installation is then done by running pip install tinkerforge and moving the plugin into the Icinga plugin directory.
Now let’s have a look at the options of the plugin:
$ ./check_tinkerforge.py --help
usage: check_tinkerforge.py [-h] [-V] [-v] -H HOST [-P PORT] [-S SECRET]
[-u UID] -T TYPE [-w WARNING] [-c CRITICAL]
-h, --help show this help message and exit
-V, --version show program's version number and exit
-H HOST, --host HOST The host address of the Tinkerforge device
-P PORT, --port PORT Port (default=4223)
-S SECRET, --secret SECRET Authentication secret
-u UID, --uid UID UID from Bricklet
-T TYPE, --type TYPE Bricklet type. Supported: 'temperature', 'humidity', 'ambient_light', 'ptc'
-w WARNING, --warning WARNING Warning threshold. Single value or range, e.g. '20:50'.
-c CRITICAL, --critical CRITICAL Critical threshold. Single value or range, e.g. '25:45'.
-t TIMEOUT, --timeout TIMEOUT Timeout in seconds
As the Tinkerforge device supports several sensor types, such as temperature or humidity, we need to indicate which sensor should be checked by using the -T option:
check_tinkerforge.py -H 10.0.10.163 -T temperature -w 23
WARNING - Tinkerforge: Temperature is 24.75 degrees celsius|'temperature'=24.75
In case you have multiple sensors of one type, you are required to indicate the UID with the --uid option in order to correctly identify the sensor in question. Furthermore, thresholds can be given either as single values or as value ranges, using the usual -w and -c options for warning and critical.
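To illustrate how such thresholds behave, here is a small, simplified sketch of Nagios-style threshold evaluation (this is an illustration of the convention, not the plugin's actual code, which may differ in detail):

```python
def in_alert(value, threshold):
    """Return True if value violates the threshold.

    Simplified Nagios-style convention:
    - a single value 'X' alerts when value > X
    - a range 'LOW:HIGH' alerts when value is outside [LOW, HIGH]
    """
    if ":" in threshold:
        low, high = (float(part) for part in threshold.split(":"))
        return value < low or value > high
    return value > float(threshold)

# Example: -w 23 with a measured temperature of 24.75
print(in_alert(24.75, "23"))     # True  -> WARNING
print(in_alert(24.75, "20:50"))  # False -> OK
```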
For integrating the sensors into your Icinga 2 configuration, you need to create a CheckCommand object as for other check plugins. The device itself is created as a Host object, and all sensors should be covered by an apply Service rule. For your convenience you can find an example config for Tinkerforge here. This example can be used directly; you only need to change the parameters and, in case you have multiple sensors of one type, add the UIDs.
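As a rough sketch of what such a configuration could look like (host address, variable names and thresholds are placeholders; the linked example config is the authoritative version):

```
object CheckCommand "tinkerforge" {
  command = [ PluginDir + "/check_tinkerforge.py" ]
  arguments = {
    "-H" = "$tinkerforge_host$"
    "-T" = "$tinkerforge_type$"
    "-u" = "$tinkerforge_uid$"
    "-w" = "$tinkerforge_warning$"
    "-c" = "$tinkerforge_critical$"
  }
}

object Host "tinkerforge-device" {
  check_command = "hostalive"
  address = "10.0.10.163"
  vars.sensors["temperature"] = {
    tinkerforge_type = "temperature"
    tinkerforge_warning = "23"
  }
}

apply Service for (sensor => config in host.vars.sensors) {
  check_command = "tinkerforge"
  vars += config
  vars.tinkerforge_host = host.address
}
```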
All manufacturers mentioned in this blog post can be found on the Icinga Integrations website, including information on the companies, the devices and links to the corresponding monitoring plugins.
When we hunt down problems in Icinga setups, we ask for logs most of the time. While you get used to sifting through logs and collect some bash magic in the process, there's always the wish for this routine to be easier and especially faster. If you get logfiles from several days, where each of the nodes produces millions of log lines per day, starting your greps over and over gets you madder and madder every time. So I started searching for a solution.
Having experience with Elastic Stack setups, I always wanted an easier way of parsing Icinga logs with Logstash. I had built some setups with basic rules before, but they were just starting points. The icinga module in Filebeat helps with that as well, but it only parses the metadata of the logfiles, not the log messages themselves.
Like every decent IT person ( 😉 ) I run my own installation of Icinga and Elastic Stack for my personal systems, so one day I started building filters that would parse the Icinga logfiles as completely as possible. When this project matured, I decided to open source it so everyone running the same combination could benefit. I tried to make it as easy to use as possible, so I added every bit of configuration you would need to the repository on GitHub. It is all explained in detail in the Readme of the project, but basically you need to check out the repository into a directory, use this directory as the configuration for a Logstash pipeline, and use Redis (with predefined keys) to get the data into and out of this pipeline.
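As a rough sketch of the surrounding plumbing, the pipeline's input and output could look something like the following Logstash configuration (the Redis key names here are invented for illustration; the actual predefined keys are documented in the project's Readme):

```
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "icinga"    # placeholder for the predefined input key
  }
}

# The filter rules come from the checked-out repository directory.

output {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "icinga-parsed"  # placeholder for the predefined output key
  }
}
```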
Meanwhile the project got a lot of attention from Alexander Stoll, an employee at Netways, one of our Icinga partners. This means the project grows even faster, and having a second pair of eyes looking at your code is always a great help. Thank you very much!
What you get if you apply this pipeline:
- More and more of the possible log events from Icinga get parsed. You will get fields holding information about the host/service/object an event refers to, queue lengths, PIDs, timestamps and so on. You can build your own dashboards from this information or just use the example ones the project provides. They are great for filtering your events, too.
- The pipeline tries to follow the Elastic Common Schema as closely as possible, so you will be able to use your logs for upcoming features like SIEM.
- The pipeline adds a field called eventtype to every event, so you can filter more easily, and there's even the possibility of building a knowledge base for all the events if there's enough interest (and help) from the community.
- You get information like the host writing the log or the Icinga facility providing the information. This makes it easier to filter for the information you really need. Of course, the severity also has its own field and is filterable.
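To give an idea of how such parsing rules might look, here is a simplified grok filter for the typical Icinga 2 log line format (the pattern and field names are illustrative assumptions; the real, much more complete rules live in the repository):

```
filter {
  grok {
    # Icinga 2 main log lines look like:
    # [2020-02-10 13:48:02 +0100] information/ConfigItem: Instantiated 1 Host.
    match => {
      "message" => "\[%{DATA:timestamp}\] %{WORD:log.level}/%{WORD:icinga.facility}: %{GREEDYDATA:icinga.message}"
    }
  }
}
```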
What this ruleset brings is not only the possibility for users to parse their own logs, but also a big benefit for Icinga partners providing support. Netways has set up a centralized Elastic Stack running in Netways Web Services, where their support squad can access the logs their customers have sent. To get the logs into this centralized system, they use a dockerized Filebeat installation which reads some extra information (customer id, ticket id) from an .ini file and adds it to every event.
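A Filebeat setup that attaches such metadata to every event could be sketched like this (paths, field names and values are invented for illustration and are not the actual configuration used by Netways):

```
filebeat.inputs:
  - type: log
    paths:
      - /logs/*.log

processors:
  - add_fields:
      target: support
      fields:
        customer_id: "12345"   # placeholder, read from the .ini file in practice
        ticket_id: "67890"     # placeholder, read from the .ini file in practice

output.redis:
  hosts: ["redis:6379"]
  key: "icinga"                # placeholder key name
```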
The Elastic Stack uses this information not only as part of the events written into Elasticsearch but also to build the name of the index the information is written to. This way they stray from the default, where the index name carries a timestamp (or an increasing counter when using ILM). This leads to indices of very different sizes, which doesn't matter because they don't need a tuned high-performance stack in this case. But they get the benefit that they can cleanly delete all log events about one specific ticket when they are done.
As an example of what you can do with this setup: there was a ticket about notifications being triggered despite an object being in downtime. So the customer sent all logs from their Icinga masters and the agent involved. All of them were sent to the centralized setup (with the filename of the log representing the server name) and they started digging. Narrowing down the window of displayed events to the timeframe between when the downtime was created and when the notification was triggered helped in the first place. Searching for the name of the agent involved showed the downtime being created and the notification being sent.
They could easily switch between several levels of information and get more or less detail for debugging. Since the information came in at more than 40 GB split into several logfiles, it would have taken them quite a long time to refine their search every time they wanted to change something. Using the Elastic Stack, they could make changes within seconds.
As with every open source project, feedback is always very welcome, be it mails, issues on GitHub or pull requests. Things like this exist for the community and through the help of the community. Not all possible events are parsed yet, but the ruleset gets more and more complete over time. If you find events with the eventtype undefined, please consider writing rules and issuing a pull request. By the way: while working on these rules we discovered one or another way to improve logging in Icinga itself, and created issues or pull requests.
If you want to meet and have a chat about combining monitoring with log management, you can catch me at the upcoming Icinga Meetup in Linz, Austria, or listen to my talk about this very topic at the Icinga Camp in Stockholm, Sweden.
Icinga Camp Berlin is happening right now, and as promised in my talk about “Dev and ops stories: Integrations++” we are proud to bring you Dashing for Icinga 2 in its v2.0.0 release ❤️
Some highlights from the Changelog:
You can test-drive the release inside the “standalone” Icinga Vagrant box, and download it here.
Christmas brought you Dashing for Icinga 2 v1.1.0 and many things happened ever since. Community members joined the forums and issue tracker with questions and enhancement ideas. We’ve also had a hidden v1.2.0 release which added changes to the backend. Now it is time to focus on the frontend again with v1.3.0.
A while back we’ve written about changes inside our Vagrant box demo environments – and many things happened ever since.
There are a couple of new Icinga Web 2 modules directly integrated into the Vagrant boxes (Director, Grafana, Cube, Globe). In terms of metrics and event collecting we’ve integrated Elastic Stack with Icingabeat and also InfluxDB with Grafana. We are happy to release v1.3.0 today.
Want to visualize part of your IT infrastructure in a hierarchical way? Do you know the Business Impact of single services? What would happen in case you power down a specific server? Would it have any influence on your most important services? If yes, which applications would have been affected? This is what the Icinga Business Process module has been built for. (more…)