Upgrading from previous versions

This is archived documentation for InfluxData product versions that are no longer maintained. For newer documentation, see the latest InfluxData documentation.

If you’re thinking about upgrading, or if you’ve recently upgraded, from InfluxDB 0.9 to 0.10, the sections below describe several steps that ease the transition.

While none of these steps is required, we highly recommend carrying them out when you update the InfluxDB binary.

Generate a new configuration file

InfluxDB 0.10 has several new settings in the configuration file.

The influxd config command prints out a new TOML-formatted configuration with all the available configuration options set to their default values. On POSIX systems, a new configuration file can be generated by redirecting the output of the command to a file.

influxd config > /etc/influxdb/influxdb_010.conf.generated

Compare your InfluxDB 0.9 configuration file against the newly generated InfluxDB 0.10 file and manually update any defaults with your localized settings.
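One way to compare the two files is with diff. The snippet below is illustrative only: it creates two tiny stand-in files so the comparison has something to show; substitute your real 0.9 configuration file and the newly generated file.

```shell
# Stand-in files for illustration; substitute your real paths, e.g.
# /etc/influxdb/influxdb.conf and /etc/influxdb/influxdb_010.conf.generated
printf '[meta]\n  dir = "/var/lib/influxdb/meta"\n' > /tmp/influxdb_09.conf
printf '[meta]\n  dir = "/var/opt/influxdb/meta"\n' > /tmp/influxdb_010.conf.generated

# diff exits non-zero when the files differ, which is expected here
diff -u /tmp/influxdb_09.conf /tmp/influxdb_010.conf.generated || true
```

Lines prefixed with `-` come from your 0.9 configuration and lines prefixed with `+` from the 0.10 defaults; carry your localized values into the new file.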

Convert b1 and bz1 shards to tsm1

InfluxDB version 0.10 uses a new default storage engine, tsm1. Converting existing b1 and bz1 shards to tsm1 format results in a significant permanent reduction in disk usage and significantly improved write throughput to those shards.

influx_tsm is a tool for converting existing b1 and bz1 shards to tsm1 format. Before you get started, there are several things to note:

  • Comprehensive conversion is an offline process (the InfluxDB system must be stopped during the conversion). However, influx_tsm reads and writes shards directly on disk, so the conversion should be fast. If downtime is not acceptable, see How to avoid downtime when upgrading shards below for how to perform the conversion in stages.
  • The tool automatically ignores tsm1 shards.
  • Conversion can be controlled on a per-database basis and can be run idempotently on any database.
  • By default, the tool backs up databases so that you can undo a conversion (we cover how to undo a conversion below). Before you start, ensure that the host system has at least as much free disk space as the disk space consumed by the data directory of your InfluxDB system.
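The free-space requirement can be checked before you begin. The sketch below uses a mock data directory so it runs anywhere; substitute your real data directory (for example /var/lib/influxdb/data) and the filesystem that will hold the backup.

```shell
# Mock data directory for illustration; substitute /var/lib/influxdb/data
DATA_DIR=/tmp/mock_influx_data
mkdir -p "$DATA_DIR"
dd if=/dev/zero of="$DATA_DIR/shard" bs=1024 count=64 2>/dev/null

used_kb=$(du -sk "$DATA_DIR" | awk '{print $1}')      # space the data consumes
free_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')       # free space on the backup volume

if [ "$free_kb" -ge "$used_kb" ]; then
  echo "enough free space for a backup"
else
  echo "not enough free space: need ${used_kb} KB, have ${free_kb} KB"
fi
```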

Conversion steps

  1. Upgrade your system to InfluxDB 0.10 before proceeding.
  2. Stop all write traffic to your InfluxDB system.
  3. Stop the InfluxDB service.
  4. Start the InfluxDB service.
  5. Ensure that all data has been persisted to disk by waiting for the WAL to flush fully. The flush is complete when the system responds to queries following the restart.
  6. Stop the InfluxDB service. Do not restart the service until you’ve completed the conversion.
  7. Create a directory for the backup. Here, we call it influxdb_backup:

    mkdir /tmp/influxdb_backup
  8. Run the conversion tool.

    To convert all databases, run:

    influx_tsm -backup <path_to_backup_directory> <path_to_data_directory>

    For example:

    influx_tsm -backup /tmp/influxdb_backup /var/lib/influxdb/data

    When you run influx_tsm, the tool will first list the shards to be converted and will ask for confirmation. You can abort the conversion process at this step if you just wish to see what would be converted, or if the list of shards does not look correct.

    Note: By default, the tool converts shards serially. This minimizes the load on the host system, but it also takes the most time. To reduce the total conversion time, enable parallel mode with -parallel:

    influx_tsm -backup /tmp/influxdb_backup -parallel /var/lib/influxdb/data

    Conversion then performs as many operations as possible in parallel, but the process may place significant load on the host system (CPU, disk, and RAM usage will all increase).

    Enter influx_tsm -h for a complete list of the tool’s options.

  9. If you ran the conversion tool as a different user from the user who runs InfluxDB, check and, if necessary, set the correct read and write permissions on the new tsm1 directories.

  10. Restart InfluxDB and ensure that the data look correct.

  11. If everything looks correct, you may then wish to remove or archive the backed-up shards in your backup directory (/tmp/influxdb_backup):

    rm -r /tmp/influxdb_backup
  12. Restart write traffic and you’re done!
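The steps above can be sketched as a dry-run script: it only prints the commands it would run, so you can review them before doing anything. The service-manager invocations and paths are assumptions; adjust them for your system.

```shell
# Dry run: echo the conversion commands rather than executing them
BACKUP_DIR=/tmp/influxdb_backup
DATA_DIR=/var/lib/influxdb/data

echo "sudo service influxdb stop"                   # steps 2-6: quiesce writes and stop
echo "mkdir $BACKUP_DIR"                            # step 7: create the backup directory
echo "influx_tsm -backup $BACKUP_DIR $DATA_DIR"     # step 8: convert with a backup
echo "sudo service influxdb start"                  # step 10: restart and verify the data
```

Once the printed commands look right for your environment, run them one at a time, checking permissions (step 9) before the final restart.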

Rollback a conversion

After a successful backup, you have a duplicate of your database(s) in the backup directory that you provided on the command line. If, when checking your data after a conversion, you notice things missing or something just isn’t right, you can rollback the conversion.

  1. Shut down your node (this is very important).
  2. Remove the database’s directory from the influxdb data directory. Where /var/lib/influxdb/data is your data directory and stats is the name of the database you’d like to remove:

    rm -r /var/lib/influxdb/data/stats
  3. Copy the database’s directory from the backup directory you created (/tmp/influxdb_backup/stats) into the data directory (/var/lib/influxdb/data/):

     cp -r /tmp/influxdb_backup/stats /var/lib/influxdb/data/
  4. Restart InfluxDB.
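The remove-then-restore sequence can be rehearsed on a throwaway directory tree before touching production paths. Everything below is a stand-in: the paths mimic the examples above and a dummy file plays the part of a backed-up shard.

```shell
# Throwaway stand-ins for /var/lib/influxdb/data and /tmp/influxdb_backup
DATA_DIR=/tmp/demo/var/lib/influxdb/data
BACKUP_DIR=/tmp/demo/influxdb_backup

mkdir -p "$DATA_DIR/stats" "$BACKUP_DIR/stats"
touch "$BACKUP_DIR/stats/0001.bz1"          # dummy file standing in for a backed-up shard

rm -r "$DATA_DIR/stats"                     # step 2: remove the converted database directory
cp -r "$BACKUP_DIR/stats" "$DATA_DIR/"      # step 3: restore the backup copy
ls "$DATA_DIR/stats"                        # the original shard files are back in place
```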

How to avoid downtime when upgrading shards

Identify non-tsm1 shards

Non-tsm1 (b1/bz1) shards are single files of the form data/<database>/<retention_policy>/<shard_id>.

tsm1 shards are directories of the form data/<database>/<retention_policy>/<shard_id>/ that contain one or more <file>.tsm files.
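This layout difference makes the old shards easy to list with find: b1/bz1 shards are plain files at the shard level. The snippet builds a small mock tree to demonstrate; point the final command at your real data directory instead.

```shell
# Mock data tree; substitute your real data directory for /tmp/mockdata
DATA=/tmp/mockdata
mkdir -p "$DATA/mydb/default/2"             # tsm1 shards are directories...
touch "$DATA/mydb/default/2/000001.tsm"     # ...containing .tsm files
touch "$DATA/mydb/default/1"                # b1/bz1 shards are plain files

# Plain files at the <shard_id> level are the non-tsm1 shards
find "$DATA" -mindepth 3 -maxdepth 3 -type f
```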

Determine which b1/bz1 shards are cold for writes

Run the SHOW SHARDS query to see the start and end dates for each shard. If a shard’s date range does not span the current time, the shard is cold for writes: no new points are expected to be added to it. A shard whose date range spans the current time is hot for writes. Only cold shards can be safely converted without stopping the InfluxDB process.
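The cold-versus-hot decision is just a comparison of the shard’s end time against the current time. The sketch below shows that check in shell; the end time is a stand-in value, where you would paste the value reported by SHOW SHARDS.

```shell
# Stand-in end time; substitute the end time reported by SHOW SHARDS
SHARD_END="2016-01-01T00:00:00Z"

# GNU date first; BSD date fallback for the same ISO 8601 format
end_epoch=$(date -u -d "$SHARD_END" +%s 2>/dev/null \
  || date -u -j -f "%Y-%m-%dT%H:%M:%SZ" "$SHARD_END" +%s)
now_epoch=$(date -u +%s)

if [ "$end_epoch" -lt "$now_epoch" ]; then
  state="cold for writes"    # safe to convert without stopping InfluxDB
else
  state="hot for writes"     # do not convert while InfluxDB is running
fi
echo "$state"
```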

Convert cold shards

  1. Copy each of the cold shards you’d like to convert to a new directory with the structure /tmp/data/<database>/<retention_policy>/<shard_id>.
  2. Run the influx_tsm tool on the copied files:

    influx_tsm -parallel /tmp/data/
  3. Remove the existing cold b1/bz1 shards from the production data directory.

  4. Move the new tsm1 shards into the original directory, overwriting the existing b1/bz1 shards of the same name. Perform steps 3 and 4 back to back, as quickly as possible, to minimize any query errors.

  5. Wait an hour, a day, or a week (depending on your retention period) for any hot b1/bz1 shards to become cold and repeat steps 1 through 4 on the newly cold shards.

Note: Any points written to a cold shard after you copy it will be lost when the tsm1 shard overwrites the existing cold shard. Nothing in InfluxDB prevents writes to cold shards; such writes are merely unexpected, not impossible. It is your responsibility to block writes to cold shards during the conversion to prevent data loss.
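The copy-convert-swap sequence can be walked through on a mock tree. influx_tsm is not actually run here (its invocation is only echoed), and a stand-in directory plays the part of its tsm1 output; the database, retention policy, and shard id are placeholders.

```shell
# Stand-ins for /var/lib/influxdb/data and the staging copy
PROD=/tmp/prod_data
STAGE=/tmp/data

mkdir -p "$PROD/mydb/default" "$STAGE/mydb/default"
touch "$PROD/mydb/default/1"                  # a cold b1/bz1 shard (a plain file)

cp "$PROD/mydb/default/1" "$STAGE/mydb/default/"        # step 1: copy the cold shard out
echo "influx_tsm -parallel $STAGE"                      # step 2: would convert the copy
mkdir -p "$STAGE/mydb/default/1.tsm1"                   # stand-in for influx_tsm's output

rm "$PROD/mydb/default/1"                               # step 3: remove the old shard
mv "$STAGE/mydb/default/1.tsm1" "$PROD/mydb/default/1"  # step 4: move the tsm1 shard in
```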

Revisit any existing continuous queries

CQs defined in InfluxDB 0.9 will still run in InfluxDB 0.10, but at a reduced frequency and with no resampling of data.

In InfluxDB 0.9 the CREATE CONTINUOUS QUERY statement defines the Continuous Query’s (CQ) database and the query to be performed. Settings in the [continuous_queries] section of the configuration file control the frequency at which the CQ runs and how much historical data the CQ covers. All CQs share the same configuration settings; this makes it difficult to optimize CQ execution for both short and long sampling intervals.

In InfluxDB 0.10 the CREATE CONTINUOUS QUERY statement also controls CQ execution. By default, CQs run at the same interval as the CQ’s GROUP BY time() interval, and the system calculates the query for the most recent GROUP BY time() interval. A new and optional RESAMPLE clause allows users to specify how often the CQ runs and the time range over which InfluxDB runs the CQ. Finally, the 0.9 CQ configuration settings are no longer in the configuration file and have been replaced by a single setting which controls how often InfluxDB checks to see if a CQ needs to run.
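As a sketch of the new syntax, the statement below uses the optional RESAMPLE clause; the database, measurement, and CQ names are placeholders. EVERY controls how often the CQ runs and FOR controls how far back each run recomputes, here every 10 minutes over the most recent 30 minutes.

```sql
-- Hypothetical CQ in 0.10 syntax; all names are placeholders
CREATE CONTINUOUS QUERY "cq_mean_value" ON "mydb"
RESAMPLE EVERY 10m FOR 30m
BEGIN
  SELECT mean("value") INTO "mean_value" FROM "cpu" GROUP BY time(10m)
END
```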

This new behavior can result in incomplete downsampling of your data. We strongly recommend that you:

  1. Redefine your existing CQs using the new 0.10 syntax.
  2. Delete the CQs that were defined with the 0.9 syntax.