Apply Database Partitions to a live Zabbix database – without downtime

Due to the growth of our database (> 1 TB), the Zabbix 'housekeeper' process could no longer keep up. The best solution to this problem is database partitioning; however, with a database of this size, applying it takes a lot of time if you want to keep the existing data. We tried this in several ways, and the approach below was the only one that let us implement partitioning without downtime.

The example below must be repeated for each table and takes several hours per table.

# Create an empty table with the same structure as the original
CREATE TABLE `history_log_tmp` LIKE `history_log`;
# Apply partitioning to the empty table. This assumes the community
# partition_maintenance stored procedure is already installed; its arguments
# are: schema, table, days of retention, partition size in hours, and the
# number of partitions to create ahead
CALL partition_maintenance('zabbix', 'history_log_tmp', 30, 24, 3);

# Rename the tables so Zabbix starts writing to the new, empty, partitioned table,
# leaving the old one in place as a backup
RENAME TABLE history_log TO history_backup_log;
RENAME TABLE history_log_tmp TO history_log;
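Note that MySQL can also perform both renames in a single atomic statement, which avoids the brief window between the two statements above in which no history_log table exists:

# Alternative: swap both tables atomically in one statement
RENAME TABLE history_log TO history_backup_log, history_log_tmp TO history_log;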

# Dump all data from the backup table to a file. Despite the .sql extension,
# INTO OUTFILE writes tab-separated values, which is what util.importTable
# expects below
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_backup_log.sql' FROM history_backup_log;
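SELECT ... INTO OUTFILE can only write to the directory permitted by the server's secure_file_priv setting (on many packages this defaults to /var/lib/mysql-files). If the export fails, check the setting first:

# Check which directory the server allows for INTO OUTFILE
SHOW VARIABLES LIKE 'secure_file_priv';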

# Open MySQL Shell (mysqlsh), connect to the server, and import the data
# back into the new partitioned table
util.importTable("/var/lib/mysql-files/history_backup_log.sql", {
    schema: "zabbix",
    table: "history_log",
    columns: ["itemid", "clock", "value", "ns"],
    dialect: "default",
    skipRows: 0,
    showProgress: true,
    fieldsOptionallyEnclosed: false,
    linesTerminatedBy: "\n",
    threads: 2,
    bytesPerChunk: "50M",
    maxRate: "10M"
})
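Once the import finishes, it is worth verifying the row counts before removing the backup table; drop it only when you are confident the data is complete:

# Verify that all rows were imported
SELECT COUNT(*) FROM history_backup_log;
SELECT COUNT(*) FROM history_log;
# When the counts match (history_log may be higher, since Zabbix has kept
# writing to it in the meantime), the backup can be dropped
DROP TABLE history_backup_log;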
