Personal

Can you describe your role in the Icinga project?

In Icinga I am a developer and the maintainer of Icinga Web 2.
I keep an eye on bugs, needed features, documentation and the tasks in Icinga Web and its modules.

Our team consists of Eric, Ravi, Florian and our trainees Niko, Sukhwinder and Yonas.

 

What did you do in Icinga DB?

I was the “person for everything Icinga Web related” – so mostly responsible for everything that belonged to the Icinga Web parts of the project and the interface between frontend and backend.

 

Which tasks were you responsible for in Icinga DB?

I would categorise it in three parts:
1. Database integration and schema – so fetching and analysing the data in Icinga Web
2. The ORM (Object-Relational Mapping) integration, which is part of the IPL (Icinga PHP Library)
3. Authorisation and identification, so the migration of our authorisation, permission and restriction model from the monitoring module

 

What do you need to get going?

COFFEE! Apart from that, not much else. Just coffee really, yeah.

General

Which technologies did you use?

The main piece of technology would be Docker, I think. Well, Docker Compose.
It provided the entire development environment for us.
It’s super nice to customise, especially when it comes to things like testing the code for different PHP versions and the like.
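As a rough illustration of that multi-version setup (the service names, image tags and test command here are made up, not the project's actual configuration), a Compose file might define one service per PHP version:

```yaml
# Hypothetical docker-compose.yml: run the same test suite
# against two PHP versions side by side.
services:
  php74:
    image: php:7.4-cli
    volumes:
      - .:/app
    working_dir: /app
    command: vendor/bin/phpunit
  php80:
    image: php:8.0-cli
    volumes:
      - .:/app
    working_dir: /app
    command: vendor/bin/phpunit
```

With something like this, `docker-compose run php74` and `docker-compose run php80` exercise the same checkout under different interpreters.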

We used GitHub Actions for testing in this project, as a successor to Travis CI.
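A matrix build is the usual way to cover several PHP versions in GitHub Actions; this sketch is illustrative (the workflow name, version list and steps are assumptions, not the project's real workflow):

```yaml
# Hypothetical GitHub Actions workflow with a PHP version matrix.
name: tests
on: [push, pull_request]
jobs:
  phpunit:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        php: ['7.2', '7.4', '8.0']
    steps:
      - uses: actions/checkout@v2
      - uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php }}
      - run: composer install
      - run: vendor/bin/phpunit
```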

And for me, PhpStorm (the IDE from JetBrains) was also a big help.
It provides integrations for the aforementioned technologies and also for Git and GitHub, which work very smoothly and make life a lot easier.

When it comes to databases, we built the software around MySQL for now.
There are plans to also add support for PostgreSQL at a later date, if the demand is there.

 

How would you describe the communication and management over the course of the project?

[laughs] It was all rather spontaneous I’d say.
Everyone involved sat together in one room, so we had really quick exchanges whenever necessary.
I think the best word to describe the teamwork is … ‘easy’, because there wasn’t really much of a hierarchical structure that would have blocked the flow.

 

How did you manage who had to do which tasks?

Well, since we are a small team, it’s mostly that everyone chooses what they want to do by themselves.
[laughs] I wouldn’t say that it was unmanaged – but I would also not call it managed, you know?
For me personally, as I already mentioned, I’d call myself a person for everything. But everyone had their assigned roles, so we fit together like a puzzle.

We had a list of issues that needed to be tackled, so people mostly decided for themselves what they could do best and wanted to do.

 

Task 1: Database integration

What’s the general use for it in the project?

Well, without it, we wouldn’t have an Icinga DB. It’s the interface and the source of all of the information.
We need to fetch all data that Icinga DB is writing into the database in order to display it.

From there we get the data on which hosts are available, which services are running on them, what the relationship between them is, which host- or user groups there are and much, much more.

 

What was the initial plan for the task?

We wanted to avoid repeating the mistakes of previous attempts to get an Icinga DB running.
We planned to make the schema flatter and less nested. As a result, it is less normalised and now contains some intentional redundancy.
This improves performance by quite a bit – both for reading data and for writing it.

 

How does the final version differ from the original plan?

There were a lot of iterations, which mostly focused on consistency and predictability. But for the most part the plan we had in mind worked out great.

 

Which challenges did you have to overcome?

The main challenge was performance.
The monitoring module had a lot of issues with performance, especially when a lot of restrictions had to be applied.
These restrictions affected multiple tables, which slowed down every single query. For a restricted user, the logic we used would join every available table – host to hostgroup to service group to custom variables and so on.
Redundancy removes some of the joins, which helped a lot, but improving the performance was still a big challenge.
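To illustrate the idea (this toy schema is mine, not the actual Icinga DB schema), intentionally duplicating a column onto the row that needs it lets a common query skip a join entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# Normalised: reading a service's host name requires a join.
c.execute("CREATE TABLE host (id INTEGER PRIMARY KEY, name TEXT)")
c.execute("CREATE TABLE service (id INTEGER PRIMARY KEY, host_id INTEGER, name TEXT)")
c.execute("INSERT INTO host VALUES (1, 'web01')")
c.execute("INSERT INTO service VALUES (1, 1, 'http')")
joined = c.execute(
    "SELECT s.name, h.name FROM service s JOIN host h ON h.id = s.host_id"
).fetchall()

# Denormalised: the host name is intentionally duplicated onto the
# service row, so the same question needs no join at all.
c.execute("CREATE TABLE service_flat (id INTEGER PRIMARY KEY, host_name TEXT, name TEXT)")
c.execute("INSERT INTO service_flat VALUES (1, 'web01', 'http')")
flat = c.execute("SELECT name, host_name FROM service_flat").fetchall()

print(joined)  # [('http', 'web01')]
print(flat)    # [('http', 'web01')]
```

The price is that the duplicated value has to be kept in sync on writes – the trade-off the redundant schema deliberately accepts.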

Another obstacle here was version compatibility. MySQL 8 and MySQL 5.x, for example, are very different in their inner workings, so we had to build around that too.

 

If Icinga DB were a house, which part would the database integration be?

Hmmm.
[Scratches chin]

If we’re talking about just the house, with no people in it, I think it would be the library. A library holds the knowledge and the information in a house. And this is where the data is at!

 

Task 2: ORM integration

Which features does it add?

The ORM (Object-Relational Mapper) is the layer between the database and our code.
It remodels the database structure into code to make it easier to work with for a human.

It is what changes the very abstract structure of the database into objects that we can work with.
In the code we then have an object for what used to be a set of data – a table row, so to speak.
This makes it easier to envision the data and the relationships and improves how well we, as developers, can access and communicate with the database.
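The row-to-object idea can be sketched in a few lines – a toy mapper for illustration only, nothing like the real IPL ORM:

```python
import sqlite3

class Host:
    """One instance represents one row of the host table."""
    def __init__(self, id, name, address):
        self.id, self.name, self.address = id, name, address

    @classmethod
    def from_row(cls, row):
        # Map one result tuple onto one object.
        return cls(*row)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE host (id INTEGER, name TEXT, address TEXT)")
conn.execute("INSERT INTO host VALUES (1, 'web01', '192.0.2.10')")

# Instead of juggling bare tuples, the rest of the code works with objects.
hosts = [Host.from_row(r) for r in conn.execute("SELECT id, name, address FROM host")]
print(hosts[0].name)  # web01
```

A real ORM adds query building, relations and lazy loading on top, but the core translation is exactly this: one table row becomes one object.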

 

What was the initial plan for the task?

I’d say that it was envisioned as it is now – we had an idea in mind, that just evolved over time.
The plan was not set in stone, but evolved with the needs that arose during development.

 

Which challenges did you have to overcome?

Redundancy and a lack of consistency were issues here – all of the relationships had to be defined manually in the code.

We could not rely on the relations in the database, because it doesn't define any.
There are no constraints – those are usually what prevents inconsistencies and faulty entries.
Such constraints would normally prevent inserting a service whose host is not in the database yet, for example.

In order to improve the performance, we needed to make it so that a service writer wouldn’t need to wait for the host.
We had to ensure that we can handle faulty entries – like the service without a host.
This may seem like it would not happen very often – but that scenario might occur during the writing process of the database or when the daemon restarts.

So this issue had to be solved in the ORM – it needed to be both flexible and forgiving.
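Loosely sketched, "forgiving" here means a dangling reference resolves to nothing rather than an error (toy code of my own, not the IPL implementation):

```python
# Rows as the ORM might see them, with no foreign-key constraints
# in the database: service 'ping' references a host not written yet.
hosts = {1: {"name": "web01"}}
services = [
    {"name": "http", "host_id": 1},
    {"name": "ping", "host_id": 2},  # host 2 is still missing
]

def host_of(service):
    # Return None instead of raising, so callers can cope with
    # entries that are temporarily inconsistent.
    return hosts.get(service["host_id"])

for s in services:
    h = host_of(s)
    print(s["name"], h["name"] if h else "(host not written yet)")
```

Once the writer catches up and host 2 appears, the same lookup starts returning the host – no special repair step needed.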

 

If Icinga DB were a house, which part would the ORM be?

Phew, it’s pretty abstract.
If the database itself is the library, the ORM would be a human's brain.
Or maybe the fingertips or eyes. Something that helps the knowledge from the books reach the brain!

 

Task 3: Authorisation, Restrictions and Permissions

What’s the general use for it in the project?

This task exists to prevent someone from seeing everything when they should not.
It was necessary to reimplement everything from the monitoring module.
There is no native permission system in Icinga DB right now – we want to add it at some point, but the priority right now is to get the final release finished.

The main focus was on backwards compatibility, so migration is easy for users and they don't have to rewrite their configuration.
It’s plug-and-play at the moment.

 

What was the initial plan for the task?

Doing it. And then we did it. [laughs]

Well, the main concept was pretty clear, but we also wanted to create an easy and intuitive way for users to migrate.

In the end we decided on a migration helper that not only helps with authentication and configuration, but also with custom views the users built for themselves.

Those customised views could be filters for dashboards, for example.
You can easily migrate them with a single click now:
There is a small widget that rewrites the entire URL and query string so it matches the new filters and rules in Icinga DB.
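In spirit (the real widget and its mapping are far larger, and the column names here are invented for illustration), such a rewrite boils down to a lookup table applied to the query string:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Hypothetical mapping of old monitoring-module filter columns
# to their Icinga DB counterparts.
COLUMN_MAP = {
    "host": "host.name",
    "service": "service.name",
    "hostgroup": "hostgroup.name",
}

def migrate_url(url):
    # Rewrite each filter column in the query string, keeping
    # path, values and unknown parameters untouched.
    parts = urlsplit(url)
    pairs = [(COLUMN_MAP.get(k, k), v) for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(pairs)))

print(migrate_url("/monitoring/list/services?host=web01&service=http"))
# /monitoring/list/services?host.name=web01&service.name=http
```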

 

Which challenges did you have to overcome?

That little widget was actually a lot of work – not difficult per se, but time-consuming.
We had to fully map the previous filter implementation to the new schema.

So, everything you see in the filter editor uses the old filter columns and names from the monitoring module.
Mapping all of that to the new names was a very time-consuming task.
[sighs]
So many tables and names which had to be looked at and checked for their counterpart…

 

If Icinga DB were a house, which part would this task be?

I’d say the door.
It can either be protected by a lock or a keypad – or not at all – depending on how well the restrictions and permissions were set.

 

Final

If you had the time and resources, what else would you add / improve?

Dropping the dependency on the monitoring module would be my top priority.
Currently, Icinga DB depends on the monitoring module being installed. It doesn't need to be configured or active, but it needs to be there.

There are also certain views, like the service grid, that need to be migrated as well.

 

Did you enjoy working on Icinga DB? Why?

Yes, it was something new.
It allowed us to start from scratch and do it right from the start.
Previously I mostly added and extended Icinga Web 2 and its modules.

The only preexisting part for Icinga DB was the IPL (Icinga PHP Library), so we had the chance to reimplement some features from the monitoring module that were not as effective as they should have been.

 

What did you learn that could be of use in future projects?

I wasn’t a part of the initial idea and implementation of the IPL, so I was a bit sceptical at first.
Now that I have some experience with it and also enhanced it a little myself, I think we can put it to good use!

I also learnt to reimplement long-standing designs quickly, and to help the community easily build their custom views for modules in an intuitive way.

 

What do you think the future of Icinga DB will look like?

Since it’s a full replacement for the IDO and the monitoring module – which was rather necessary – it is now a real solution to the reporting problem.
(AN: “The reporting problem” is that the IDO is really not a very good time series database.)
The entire schema, with all its redundancy, is much better suited to creating reports than the IDO.
It enables us to create detailed reports and generate various statistics.

There is also the extensibility: whenever we wanted to change or import something in the IDO, it just wasn’t possible – here we have much more flexibility in that regard.

 

Do you recall your first / last line of code?

Not really – and I haven’t written my last line yet!