Icinga 2.10.4 bugfix release

This release provides fixes for the InfluxDB and Elasticsearch metric writers: if you’re using TLS, connections were not closed correctly. In addition to these fixes, we’ve also backported fixes for delayed and one-time notifications. Special thanks to mdetrano for being patient and testing this one.

Additionally, the Windows wizard has been updated, and check_perfmon now supports non-localized performance counters. One of our customers sponsored improvements to the mass creation of downtimes via the REST API in HA-enabled clusters; thank you!

Official packages are available on packages.icinga.com and have been pushed to Chocolatey; Raspbian packages will follow soon. Meanwhile, check the changelog for v2.10.4.

Icinga 2.10.3 bugfix release

2.10.3 fixes TLS connections with masters and agents on reload, as well as “Connection: close” headers with Ruby clients. We’ve also tackled long-standing problems with (scheduled) downtimes in HA-enabled cluster environments. 2.10.3 also fixes a problem with time offsets and check results from the future.

Alongside the long list of bugfixes, we’ve also improved the documentation with technical concepts on the check scheduler and a complete overhaul of our development docs for contributors and packagers. We’re currently working on an improved network stack and cluster synchronisation for 2.11, so stay tuned.

Thanks to our contributors Edgar, Sven, Leon, Michael, Alex and Max! 2.10.3 is available on packages.icinga.com; Raspbian packages will follow soon. Meanwhile, check the changelog for 2.10.3.

Icinga 2.10.2 bugfix release

The TLS connection improvements unveiled another bug with hanging TLS connections. It turns out this has been sitting there since 2.8.2 and affects not only JSON-RPC cluster connections but also HTTP request sessions, as used inside the Director kickstart wizard, for example. Tom is working on a fix for Director 1.6 in order to support older Icinga 2 versions too.
2.10.2 also fixes a programming mistake with the minimum version parameter of the “icinga” check. Thanks for the patch, Max! The path constant changes in 2.10 introduced a regression where the cache file for “icinga2 object list” was overwritten with the legacy 1.x objects cache content; you’re safe if you had disabled the statusdata feature before 2.10.2. SELinux threw an error with package-related changes, which has been fixed as well, and the documentation has been updated for removed/updated packages.
Check the full changelog prior to upgrading packages from the official repositories.

Icinga 2.10.1 bugfix release

The namespace support in 2.10 caused a regression where the registered global scope was evaluated for API permissions with filters. This release fixes that problem, along with Windows packages not fully starting up. There’s also a fix for an oversight where no default environment constant was set; this affects setups checking the SNI header in external load balancers.
v2.10.1 also fixes a problem with application reload and missing event states in large-scale environments.

Icinga 2.10 released: Namespaces, Notifications, TLS Performance

Our friends from the Max-Planck-Institut for Marine Mikrobiologie kindly sponsored a change so that acknowledgement notifications are now sent only to users who have been notified about a problem before. Thanks a lot! Another sponsor asked for more child options for ScheduledDowntime objects, which are now released in 2.10.
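To illustrate, here’s a minimal sketch of what the new child options could look like in an apply rule; the rule name, ranges, custom variable and assign expression are invented for this example, so treat it as a sketch rather than a drop-in configuration.
```
/* Hypothetical apply rule: schedule a weekly downtime for matching hosts
 * and also create triggered downtimes for their child hosts (new in 2.10). */
apply ScheduledDowntime "weekly-backup-window" to Host {
  author  = "icingaadmin"
  comment = "Backup window for this host and its children"

  ranges = {
    saturday = "02:00-04:00"
  }

  /* New child option: also schedule (triggered) downtimes for child objects. */
  child_options = "DowntimeTriggeredChildren"

  assign where host.vars.backup_window == true
}
```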
2.10 also brings support for namespaces and allows us to keep the “globals” namespace clean. In addition, user-defined namespaces are possible and can be imported into the global namespace too. Read more about this feature here. An additional DSL feature is the support for references. You’ll also find new fine-grained path constants in this release, e.g. ConfigDir instead of SysconfDir + “/icinga2”. The old constants are still intact, but deprecated.
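For a rough idea of how this reads in the DSL, here is a sketch: the Dev namespace, the greeting function and the constant names are made up for illustration, and the exact keyword usage is described in the 2.10 language reference.
```
/* New fine-grained path constant vs. the old, now deprecated combination. */
const OldConfD = SysconfDir + "/icinga2/conf.d"
const NewConfD = ConfigDir + "/conf.d"

/* A user-defined namespace keeps helper functions out of "globals". */
namespace Dev {
  function greeting(name) {
    return "Hello, " + name
  }
}

/* Members are addressed via their namespace prefix. */
log(Dev.greeting("Icinga"))
```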
As promised in the 2.9.2 release post, we’ve been debugging TLS connection handling with many threads and TLS timeouts in large-scale environments. This release adds a dynamic thread connection pool for both cluster messages and HTTP requests. Thanks to the resulting performance boost, we’ve also lowered the cluster reconnect interval from 60 to 10 seconds. This ensures that configuration deployments which trigger a reload don’t leave clients behind.