This is archived documentation for InfluxData product versions that are no longer maintained. For newer documentation, see the latest InfluxData documentation.
The InfluxDB configuration file contains configuration settings specific to a local node.
Using Configuration Files
The `influxd config` command will print out a new TOML-formatted configuration with all available configuration options set to their default values.
On POSIX systems, a new configuration file can be generated by redirecting the output of the command to a file.
influxd config > /etc/influxdb/influxdb.conf.generated
Custom settings from an older configuration file can be preserved when generating a new configuration file using the `-config` option.
influxd config -config /etc/influxdb/influxdb.conf.old > /etc/influxdb/influxdb.conf.new
Use the `-config` option to start InfluxDB with a configuration file.
influxd -config /etc/influxdb/influxdb.conf.new
InfluxDB will use the default configuration settings when no configuration file is provided. A new configuration file should be generated every time InfluxDB is upgraded.
See the installation documentation for more detail on generating and using configuration files.
Configuration Sections
- [reporting]
- [meta]
- [data]
- [cluster]
- [retention]
- [shard-precreation]
- [admin]
- [monitor]
- [subscriber]
- [http]
- [graphite]
- [collectd]
- [opentsdb]
- [udp]
- [continuous_queries]
- [hinted-handoff]
Configuration Options
Every configuration section has configuration options. Every configuration option is optional. If a configuration option is not provided, its default value will be used. All configuration options listed below are set to their default value.
Note: This page documents configuration options for the latest official release - the sample configuration file on GitHub will always be slightly ahead of what is documented here.
[reporting]
InfluxData, the company, relies on reported data from running nodes primarily to track the adoption rates of different InfluxDB versions. This data helps InfluxData support the continuing development of InfluxDB.
reporting-disabled = false
The `reporting-disabled` option toggles the reporting of data every 24 hours to `m.influxdb.com`. Each report includes a randomly generated identifier, the OS, the architecture, the InfluxDB version, and the number of databases, measurements, and unique series. Setting this option to `true` will disable reporting.
Note: No data from user databases is ever transmitted.
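For example, a node can opt out of reporting with a one-line change to this section; a minimal fragment:

```toml
[reporting]
# Disable the 24-hour usage report to m.influxdb.com.
reporting-disabled = true
```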
[meta]
This section controls some of the parameters for the InfluxDB cluster. Specifically, it handles the parameters for the Raft consensus group which coordinates metadata about the cluster. For step-by-step instructions on setting up an InfluxDB cluster, see Cluster Setup.
dir = "/var/opt/influxdb/meta"
The `meta` directory contains the metastore, which stores information on nodes, users, databases, retention policies, shards, and continuous queries.
Files in the `meta` directory include `id`, `peers.json`, `raft.db`, and the `snapshots` directory.
- `id` stores the identification number of the Raft peer: `1` for the first node to join the cluster, `2` for the second node, and `3` for the third.
- `peers.json` stores the hostnames and ports of any Raft peers. There will always be at least one hostname present.
- `snapshots` contains the server's snapshots taken for the purpose of log compaction.
- `raft.db` is the BoltDB database that contains the Raft log and snapshots.
hostname = "localhost"
The hostname of the node, which is advertised to its Raft peers.
bind-address = ":8088"
The bind address is the port the node uses to communicate with its Raft peers.
retention-autocreate = true
Retention policy auto-creation automatically creates a `default` retention policy when a database is created. The retention policy is named `default`, has an infinite duration, and is set as the database's default retention policy, which is used when a write or query does not specify one. Disable this setting to prevent the creation of a `default` retention policy when creating databases.
election-timeout = "1s"
The election timeout is the duration a Raft candidate spends in the candidate state without a leader before it starts an election. The election timeout is slightly randomized on each Raft node to a value between one and two times the election timeout duration. The default setting should work for most systems.
heartbeat-timeout = "1s"
The heartbeat timeout is the amount of time a Raft follower remains in the follower state without a leader before it starts an election. Clusters with high latency between nodes may want to increase this parameter.
leader-lease-timeout = "500ms"
The leader lease timeout is the amount of time a Raft leader will remain leader if it does not hear from a majority of nodes. After the timeout the leader steps down to the follower state. The default setting should work for most systems.
commit-timeout = "50ms"
The commit timeout is the amount of time a Raft node will tolerate between commands before issuing a heartbeat to tell the leader it is alive. The default setting should work for most systems.
cluster-tracing = false
Cluster tracing toggles the logging of Raft logs on Raft nodes. Enable this setting when debugging Raft consensus issues.
raft-promotion-enabled = true
Raft promotion automatically promotes a node to a Raft node when needed. Disabling Raft promotion is desirable only when specific nodes should be participating in Raft consensus.
logging-enabled = true
Meta logging toggles the logging of messages from the meta service.
[data]
This section controls how the actual time series data is stored and flushed from the write ahead log (WAL). The default WAL settings should work for most systems.
dir = "/var/opt/influxdb/data"
The directory where InfluxDB stores the data. This directory may be changed.
engine = "bz1"
The engine is the backing time series storage engine. There are three storage engines in the InfluxDB 0.9.x series:
- `b1`: a BoltDB-backed time series storage engine. It is the default engine for versions 0.9.0 to 0.9.2.1.
- `bz1`: a BoltDB-backed time series storage engine with compressed shards. It is the default engine for versions 0.9.3 to 0.9.6.1.
- `tsm1`: a purpose-built time series storage engine developed by the InfluxData team. It is an experimental engine in versions 0.9.5.1 to 0.9.6.1.
max-wal-size = 104857600
Only applies to the `b1` engine. The maximum WAL size is the maximum amount of data (in bytes) that triggers a WAL flush. This defaults to 100MiB.
wal-flush-interval = "10m0s"
Only applies to the `b1` engine. The WAL flush interval sets the maximum time data can stay in the WAL before a flush.
wal-partition-flush-delay = "2s"
Only applies to the `b1` engine. The WAL partition flush delay is the time the engine waits between each WAL partition flush.
wal-dir = "/var/opt/influxdb/wal"
Only applies to the `bz1` engine. The WAL directory is the location of the write ahead log.
For best throughput, the WAL directory and the data directory should be on different physical devices.
wal-logging-enabled = true
Only applies to the `bz1` engine. Toggles the logging of WAL operations, such as WAL flushes to disk.
wal-ready-series-size = 30720
Only applies to the `bz1` engine. The WAL ready series size is the size (in bytes) at which a series in the WAL in-memory cache is marked as ready to flush to the index.
wal-compaction-threshold = 0.5
The WAL compaction threshold is the ratio of series over the `wal-ready-series-size` that triggers a partition flush and compaction.
wal-max-series-size = 1048576
The WAL maximum series size is the maximum size (in bytes) of a series in a partition. Any series in a partition above this size is forced to flush and compact.
wal-flush-cold-interval = "5s"
The WAL flush cold interval sets the duration of the interval when all series are flushed and full compaction takes place. This option ensures shards with infrequent writes are flushed to disk instead of remaining cached in memory as part of the WAL.
wal-partition-size-threshold = 52428800
The WAL partition size threshold sets the maximum size of a partition (in bytes). When the threshold is hit, the partition is forced to flush its largest series. There are five partitions so you’ll need at least five times this amount of memory. The more memory you have, the bigger this setting can be.
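As a worked check of the memory guidance above (the default threshold and the five-partition count both come from this page):

```python
# Worst-case WAL memory implied by the default partition threshold.
wal_partition_size_threshold = 52428800  # 50 MiB per partition (default above)
partitions = 5                           # the WAL always uses five partitions

worst_case_bytes = wal_partition_size_threshold * partitions
print(worst_case_bytes)  # 262144000 bytes, i.e. roughly 250 MiB
```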
query-log-enabled = true
The query log enabled setting toggles the logging of parsed queries before execution.
cache-max-memory-size = 524288000
Only applies to the `tsm1` engine. The cache maximum memory size is the maximum size (in bytes) a shard's cache can reach before it starts rejecting writes.
cache-snapshot-memory-size = 26214400
Only applies to the `tsm1` engine. The cache snapshot memory size is the size at which the engine snapshots the cache and writes it to a TSM file, freeing up memory.
cache-snapshot-write-cold-duration = "1h0m0s"
Only applies to the `tsm1` engine. The cache snapshot write cold duration is the length of time after which the engine snapshots the cache and writes it to a new TSM file if the shard hasn't received writes or deletes.
compact-min-file-count = 3
Only applies to the `tsm1` engine. The compact minimum file count is the minimum number of TSM files that must exist before a compaction cycle runs.
compact-full-write-cold-duration = "24h0m0s"
Only applies to the `tsm1` engine. The compact full write cold duration is the duration after which the engine compacts all TSM files in a shard if it hasn't received a write or delete.
max-points-per-block = 0
Only applies to the `tsm1` engine. The maximum points per block is the maximum number of points in an encoded block in a TSM file. Larger numbers may yield better compression but could incur a performance penalty when querying.
[cluster]
This section controls non-Raft cluster behavior, which generally includes how data is shared across shards.
force-remote-mapping = true
write-timeout = "5s"
The time during which the coordinating node must receive a successful response for writing to all remote shard owners before it considers the write a failure. If the write times out, it may still succeed, but InfluxDB stops waiting and queues those writes in hinted handoff. Depending on the requested consistency level and the number of successful responses received, the return value will be either `write failure` or `partial write`.
shard-writer-timeout = "5s"
The time that a write from one node to another must complete before the write times out. If the write times out, it may still succeed on the remote node but the client node stops waiting and queues it in hinted handoff. This timeout should always be less than or equal to the write-timeout.
shard-mapper-timeout = "5s"
[retention]
This section controls the enforcement of retention policies for evicting old data.
enabled = true
Set to `false` to prevent InfluxDB from enforcing retention policies.
check-interval = "30m"
The rate at which InfluxDB checks to enforce a retention policy.
[shard-precreation]
Controls the precreation of shards so that shards are available before data arrives. Only shards that, after creation, will have both a start and end time in the future are ever created. Shards that would be wholly or partially in the past are never precreated.
enabled = true
check-interval = "10m"
advance-period = "30m"
The maximum period in the future for which InfluxDB precreates shards. The `30m` default should work for most systems. Increasing this setting too far into the future can cause inefficiencies.
[admin]
Controls the availability of the built-in, web-based admin interface.
enabled = true
Set to `false` to disable the admin interface.
bind-address = ":8083"
The port used by the admin interface.
https-enabled = false
Set to `true` to enable HTTPS for the admin interface.
Note: HTTPS must be enabled for the [http] service for the admin UI to function properly over HTTPS.
https-certificate = "/etc/ssl/influxdb.pem"
The path of the certificate file.
[monitor]
This section controls InfluxDB’s system self-monitoring.
By default, InfluxDB writes the data to the `_internal` database. If that database does not exist, InfluxDB creates it automatically. The `DEFAULT` retention policy on the `_internal` database is seven days. If you want to use a retention policy other than the seven-day default, you must create it.
store-enabled = true
Set to `false` to disable recording statistics internally. Setting this to `false` makes it substantially more difficult to diagnose issues with your installation.
store-database = "_internal"
The destination database for recorded statistics.
store-interval = "10s"
The interval at which InfluxDB records statistics.
[subscriber]
This section toggles the subscriber feature used by Kapacitor. When a service like Kapacitor has subscribed to InfluxDB, all incoming writes are sent to the subscribed endpoint via UDP.
enabled = true
[http]
This section controls how InfluxDB configures the HTTP endpoints. These are the primary mechanisms for getting data into and out of InfluxDB. Edit the options in this section to enable HTTPS and authentication. See Authentication and Authorization.
enabled = true
Set to `false` to disable HTTP.
Note that the InfluxDB command line interface (CLI) connects to the database using the HTTP API.
bind-address = ":8086"
The port used by the HTTP API.
auth-enabled = false
Set to `true` to require authentication.
log-enabled = true
Set to `false` to disable logging.
write-tracing = false
Set to `true` to enable logging of the write payload. If set to `true`, this will duplicate every write statement in the logs and is thus not recommended for general use.
pprof-enabled = false
Set to `true` to enable pprof on InfluxDB so that it gathers detailed performance information.
https-enabled = false
Set to `true` to enable HTTPS.
https-certificate = "/etc/ssl/influxdb.pem"
The path of the certificate file.
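Putting the options in this section together, a sketch of an [http] section with authentication and HTTPS both turned on (all values except the two `true` flags are the defaults documented above; substitute your own certificate path):

```toml
[http]
enabled = true
bind-address = ":8086"
auth-enabled = true   # require user credentials on the HTTP API
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = true
https-certificate = "/etc/ssl/influxdb.pem"
```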
[[graphite]]
This section controls one or many listeners for Graphite data. See the README on GitHub for more information.
enabled = false
Set to `true` to enable Graphite input.
database = "graphite"
The name of the database that you want to write to.
bind-address = ":2003"
The default port.
protocol = "tcp"
Set to `tcp` or `udp`.
consistency-level = "one"
The number of nodes that must confirm the write.
If the requirement is not met, the return value will be either `partial write` if some points in the batch fail or `write failure` if all points in the batch fail. For more information, see the Query String Parameters for Writes section in the Line Protocol Syntax Reference.
name-separator = "."
The next three options control how batching works. Keep batching enabled; otherwise you could drop metrics or see poor performance. Batching buffers points in memory when many are arriving.
batch-size = 1000
The input will flush if this many points get buffered.
batch-pending = 5
The number of batches that may be pending in memory.
batch-timeout = "1s"
The input will flush at least this often even if it hasn’t reached the configured batch-size.
udp-read-buffer = 0
UDP read buffer size; 0 means use the OS default. The UDP listener will fail if this value is set above the OS maximum.
name-schema = "type.host.measurement.device"
This option configures the tag keys used when parsing the metric name from the Graphite protocol; keys are separated by `name-separator`. The "measurement" tag is special: the corresponding field becomes the name of the metric. For example, "type.host.measurement.device" will parse "server.localhost.cpu.cpu0" as:
{
    measurement: "cpu",
    tags: {
        "type": "server",
        "host": "localhost",
        "device": "cpu0"
    }
}
ignore-unnamed = true
Set to `true` so that when the input metric name has more fields than `name-schema` specifies, the extra fields are ignored. Otherwise an error is logged and the metric is rejected.
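The parsing rules above can be sketched in a few lines (an illustrative sketch, not InfluxDB's actual parser; the function name is hypothetical):

```python
def parse_graphite_name(name, schema="type.host.measurement.device", sep="."):
    """Split a Graphite metric name into a measurement and tags
    according to a name-schema (sketch of the behavior described above)."""
    keys = schema.split(sep)
    fields = name.split(sep)
    # With ignore-unnamed = true, extra trailing fields are dropped.
    fields = fields[:len(keys)]
    measurement = None
    tags = {}
    for key, value in zip(keys, fields):
        if key == "measurement":
            measurement = value  # the special tag that names the metric
        else:
            tags[key] = value
    return measurement, tags

print(parse_graphite_name("server.localhost.cpu.cpu0"))
# ('cpu', {'type': 'server', 'host': 'localhost', 'device': 'cpu0'})
```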
[collectd]
This section controls the listener for collectd data.
enabled = false
Set to `true` to enable collectd writes.
bind-address = ""
An empty string is equivalent to `0.0.0.0`.
database = ""
The name of the database that you want to write to. This defaults to `collectd`.
typesdb = ""
Defaults to `/usr/share/collectd/types.db`.
The next three options control how batching works. Keep batching enabled; otherwise you could drop metrics or see poor performance. Batching buffers points in memory when many are arriving.
batch-size = 1000
The input will flush if this many points get buffered.
batch-pending = 5
The number of batches that may be pending in memory.
batch-timeout = "1s"
The input will flush at least this often even if it hasn’t reached the configured batch-size.
read-buffer = 0
UDP read buffer size; 0 means use the OS default. The UDP listener will fail if this value is set above the OS maximum.
[opentsdb]
Controls the listener for OpenTSDB data. See the README on GitHub for more information.
enabled = false
Set to `true` to enable OpenTSDB writes.
bind-address = ":4242"
The default port.
database = "opentsdb"
The name of the database that you want to write to. If the database does not exist, it will be created automatically when the input is initialized.
retention-policy = ""
The relevant retention policy. An empty string is equivalent to the database's `DEFAULT` retention policy.
consistency-level = "one"
tls-enabled = false
certificate = ""
log-point-errors = true
Log an error for every malformed point.
The next three options control how batching works. Keep batching enabled; otherwise you could drop metrics or see poor performance. Only points received over the telnet protocol undergo batching.
batch-size = 1000
The input will flush if this many points get buffered.
batch-pending = 5
The number of batches that may be pending in memory.
batch-timeout = "1s"
The input will flush at least this often even if it hasn’t reached the configured batch-size.
[[udp]]
This section controls the listeners for InfluxDB line protocol data via UDP. See the UDP page for more information.
enabled = false
Set to `true` to enable writes over UDP.
bind-address = ""
An empty string is equivalent to `0.0.0.0`.
database = "udp"
The name of the database that you want to write to.
retention-policy = ""
The relevant retention policy for your data. An empty string is equivalent to the database's `DEFAULT` retention policy.
The next three options control how batching works. Keep batching enabled; otherwise you could drop metrics or see poor performance. Batching buffers points in memory when many are arriving.
batch-size = 1000
The input will flush if this many points get buffered.
batch-pending = 5
The number of batches that may be pending in memory.
batch-timeout = "1s"
The input will flush at least this often even if it hasn’t reached the configured batch-size.
read-buffer = 0
UDP read buffer size; 0 means use the OS default. The UDP listener will fail if this value is set above the OS maximum.
udp-payload-size = 65536
Sets the expected UDP payload size. Lower values tend to yield better performance; the default is the maximum UDP size, 65536.
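A minimal client-side sketch of writing a line protocol point over UDP (assumes a [[udp]] listener; the port `8089` is a hypothetical choice for `bind-address`, and UDP writes are fire-and-forget, so no error comes back if a point is dropped):

```python
import socket

# One point in line protocol; measurement, tag, and field names are examples.
payload = b"cpu_load,host=server01 value=0.64"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(payload, ("127.0.0.1", 8089))  # hypothetical listener port
sock.close()
```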
[continuous_queries]
This section controls how continuous queries (CQs) run within InfluxDB.
CQs are automated batches of queries that execute over recent time intervals.
InfluxDB executes one auto-generated query per `GROUP BY time()` interval.
log-enabled = true
Set to `false` to disable logging for CQ events.
enabled = true
Set to `false` to disable CQs.
recompute-previous-n = 2
The upper bound on the number of previous interval queries that InfluxDB executes per CQ batch.
recompute-no-older-than = "10m0s"
InfluxDB will not generate queries with an upper time boundary older than `now() - recompute-no-older-than`, regardless of the value of `recompute-previous-n`.
compute-runs-per-interval = 10
The upper bound on the number of incremental queries generated within each `GROUP BY time()` interval. The actual number of generated queries can be lower, depending on the `GROUP BY time()` interval in the CQ and the `compute-no-more-than` setting.
compute-no-more-than = "2m0s"
Batches of CQs run at intervals determined by the `GROUP BY time()` interval divided by `compute-runs-per-interval`. However, CQ batches never run more often than the `compute-no-more-than` value.
Note: `GROUP BY time()` * (`recompute-previous-n` + 1) must be greater than `compute-no-more-than`, or some time intervals will never be sampled.
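As a worked check of the note above, with the defaults on this page and a hypothetical CQ using `GROUP BY time(1m)`:

```python
# All values in seconds; the 1-minute interval is a hypothetical example.
group_by_interval = 60       # GROUP BY time(1m)
recompute_previous_n = 2     # default recompute-previous-n
compute_no_more_than = 120   # default compute-no-more-than ("2m0s")

# The note's condition: the span covered by recomputation must exceed
# compute-no-more-than, or some intervals are never sampled.
covered = group_by_interval * (recompute_previous_n + 1)
print(covered > compute_no_more_than)  # True: 180 > 120, so no gaps
```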
[hinted-handoff]
This section controls the hinted handoff feature, which allows nodes to temporarily queue writes destined for a cluster node that is down for a short period of time. Note that hinted handoff has no function in a single-node cluster.
enabled = true
Set to `false` to disable hinted handoff.
dir = "/var/opt/influxdb/hh"
The hinted handoff directory. For best throughput, the HH directory and the WAL directory should be on different physical devices. If you have performance concerns, you will also want to make this setting different from the dir in the [data] section.
max-size = 1073741824
The maximum size of the hinted handoff queue for a node. If the queue is full, new writes are rejected and an error is returned to the client. The queue is drained when either the writes are retried successfully or the writes expire.
max-age = "168h"
The time writes sit in the queue before they are purged. The time is determined by how long the batch has been in the queue, not by the timestamps in the data.
retry-rate-limit = 0
The rate (in bytes per second) per node at which the hinted handoff retries writes.
Set to `0` to disable the rate limit.
Hinted handoff begins retrying writes to down nodes at the interval defined by `retry-interval`. If errors occur, the retry interval backs off exponentially until it reaches `retry-max-interval`. Hinted handoff then retries writes at that interval until it succeeds. The interval resets to `retry-interval` once hinted handoff successfully completes writes to all nodes.
retry-interval = "1s"
The initial interval at which the hinted handoff retries a write after it fails.
retry-max-interval = "1m"
The maximum interval at which the hinted handoff retries a write after it fails. It retries at this interval until it succeeds.
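The retry schedule described above can be sketched as doubling capped at the maximum. This is an illustrative sketch: the exact backoff factor is not specified on this page, so doubling is an assumption; the defaults `retry-interval = "1s"` and `retry-max-interval = "1m"` come from this section.

```python
from itertools import islice

def retry_intervals(initial=1.0, maximum=60.0):
    """Yield hinted handoff retry intervals in seconds: start at
    retry-interval, back off (doubling assumed) after each failed
    attempt, and cap at retry-max-interval."""
    interval = initial
    while True:
        yield interval
        interval = min(interval * 2, maximum)

print(list(islice(retry_intervals(), 8)))
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```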
purge-interval = "1h"
The interval at which InfluxDB checks for and purges queued data older than `max-age`.