MySQL queries – faster than light (almost)

Aug 25, 2021

(The title image from NASA is available under the CC BY 2.0 license.)

At the moment I’m working on a tool for migrating Icinga 2 IDO history to Icinga DB. Sure, one could also run IDO and Icinga DB in parallel for one year and then switch to Icinga DB, provided one only cares about the history of the past year. But the disadvantage is obvious: one would have to wait a year. Nowadays (in our quickly changing world) that’s quite a long time.

One of the requirements is the ability to interrupt and resume the migration. For this reason the tool first has to fetch all row IDs and check for every row whether it has already been migrated, so it knows where to resume. That is: fetch the first 10,000 IDs, translate them to Icinga DB history IDs, verify whether they’re already present, fetch the next 10,000 IDs, and so on. The first of these queries would be:

SELECT xh.statehistory_id
FROM icinga_statehistory xh 
INNER JOIN icinga_objects o ON o.object_id=xh.object_id
WHERE xh.statehistory_id > 0
ORDER BY xh.statehistory_id
LIMIT 10000

The INNER JOIN just excludes icinga_statehistory rows without corresponding icinga_objects rows, as those can’t be migrated later anyway. The WHERE clause starts at the first possible ID for now and will increase until either a not-yet-migrated row is found or all rows are considered migrated. Unfortunately, in my setup with 100,338,361 such rows, the above query takes 5 minutes and 17.292 seconds. In the worst case, i.e. if 99% are already migrated, all queries together would take 36 days.
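To make the resume loop concrete: after a batch has been processed, the tool issues the same query again, shifted past the last ID it saw. The value 10000 below is just a placeholder for whatever ID the previous batch returned last:

SELECT xh.statehistory_id
FROM icinga_statehistory xh
INNER JOIN icinga_objects o ON o.object_id=xh.object_id
WHERE xh.statehistory_id > 10000
ORDER BY xh.statehistory_id
LIMIT 10000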

Surprisingly, SELECTing icinga_statehistory.* takes only 74 milliseconds. So it seems MariaDB’s query optimizer isn’t omniscient despite the existing indexes. Fortunately Bernd gave me a hint: USE INDEX. Indeed, with “USE INDEX (PRIMARY)” after “FROM icinga_statehistory xh” the same query takes only 27 milliseconds. Now the future of the tool is intact again. 🙂
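Spelled out, the query with the index hint in place looks like this:

SELECT xh.statehistory_id
FROM icinga_statehistory xh USE INDEX (PRIMARY)
INNER JOIN icinga_objects o ON o.object_id=xh.object_id
WHERE xh.statehistory_id > 0
ORDER BY xh.statehistory_id
LIMIT 10000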

By the way: the USE INDEX trick also works while SELECTing the whole table. But even if the algorithm considers the rest not needed after checking the first 5%, the Go MySQL client (at least) reads the whole result set on cursor close. So I’ll stick to WHERE and LIMIT 10,000 here.

But we aren’t omniscient either. Do you have even better ideas? Do you use EXPLAIN to analyse your app’s queries? Write it down in the comments, subscribe and hit the notification bell! 🙂
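If you want to see for yourself which index MariaDB picks for a query, prefixing it with EXPLAIN shows the chosen plan (table access order, key used, estimated rows) without actually executing it:

EXPLAIN
SELECT xh.statehistory_id
FROM icinga_statehistory xh
INNER JOIN icinga_objects o ON o.object_id=xh.object_id
WHERE xh.statehistory_id > 0
ORDER BY xh.statehistory_id
LIMIT 10000

Comparing the “key” column of the output with and without the USE INDEX hint makes optimizer detours like the one above easy to spot.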
