Writes the data back into the Kapacitor stream. To write data to a remote Kapacitor instance, use the InfluxDBOut node.
Example:
|kapacitorLoopback()
    .database('mydb')
    .retentionPolicy('myrp')
    .measurement('errors')
    .tag('kapacitor', 'true')
    .tag('version', '0.2')
NOTE: It is possible to create infinite loops using this node. Take care to ensure you do not chain tasks together in a way that creates a loop.
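For example, a sketch of the kind of loop to avoid (the database, retention policy, and measurement names are illustrative): a task that reads a measurement and writes that same measurement back into the same database will re-trigger itself on every point it emits.
// Anti-pattern (illustrative names): this task reads 'errors' from 'mydb'
// and writes 'errors' straight back into 'mydb', so every point it emits
// re-enters the task and loops forever.
stream
    |from()
        .database('mydb')
        .retentionPolicy('myrp')
        .measurement('errors')
    |kapacitorLoopback()
        .database('mydb')
        .retentionPolicy('myrp')
        .measurement('errors')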
Available Statistics:
- points_written – number of points written back to Kapacitor
Properties
Property methods modify state on the calling node.
They do not add another node to the pipeline, and always return a reference to the calling node.
Property methods are marked using the . operator.
Database
The name of the database.
node.database(value string)
Measurement
The name of the measurement.
node.measurement(value string)
RetentionPolicy
The name of the retention policy.
node.retentionPolicy(value string)
Tag
Add a static tag to all data points. Tag can be called more than once.
node.tag(key string, value string)
Chaining Methods
Chaining methods create a new node in the pipeline as a child of the calling node.
They do not modify the calling node.
Chaining methods are marked using the | operator.
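As a brief illustration (reusing names from the loopback example above), property methods attach with the . operator and configure the node they are called on, while chaining methods attach with the | operator and create a new child node:
stream
    |from()                         // chaining method: creates a FromNode
        .measurement('errors')      // property method: configures that FromNode
    |kapacitorLoopback()            // chaining method: creates this node
        .database('mydb')           // property method: configures it
        .measurement('errors_copy') // another property method on the same node
    |stats(10s)                     // chaining method: creates a StatsNode child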
Deadman
Helper function for creating an alert on low throughput, a.k.a. deadman's switch.
- Threshold – trigger alert if throughput drops below threshold in points/interval.
- Interval – how often to check the throughput.
- Expressions – optional list of expressions to also evaluate. Useful for time of day alerting.
Example:
var data = stream
    |from()...

// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
    |deadman(100.0, 10s)

// Do normal processing of data
data...
The above is equivalent to this example:
var data = stream
    |from()...

// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
    |stats(10s)
        .align()
    |derivative('emitted')
        .unit(10s)
        .nonNegative()
    |alert()
        .id('node \'stream0\' in task \'{{ .TaskName }}\'')
        .message('{{ .ID }} is {{ if eq .Level "OK" }}alive{{ else }}dead{{ end }}: {{ index .Fields "emitted" | printf "%0.3f" }} points/10s.')
        .crit(lambda: "emitted" <= 100.0)

// Do normal processing of data
data...
The id and message alert properties can be configured globally via the 'deadman' configuration section.
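As a rough sketch of that part of the Kapacitor configuration file (the id and message keys mirror the alert properties above; the surrounding layout and the example values are assumptions, so check your kapacitor.conf for the authoritative format):
[deadman]
  # Assumed layout: global defaults applied to deadman alerts.
  id = "node 'NODE_NAME' in task '{{ .TaskName }}'"
  message = "{{ .ID }} is {{ if eq .Level \"OK\" }}alive{{ else }}dead{{ end }}"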
Since the AlertNode is the last piece, it can be further modified as usual. Example:
var data = stream
    |from()...

// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
data
    |deadman(100.0, 10s)
        .slack()
        .channel('#dead_tasks')

// Do normal processing of data
data...
You can specify additional lambda expressions to further constrain when the deadman's switch is triggered. Example:
var data = stream
    |from()...

// Trigger a critical alert if the throughput drops below 100 points per 10s, checked every 10s.
// Only trigger the alert if the time of day is between 8am and 5pm.
data
    |deadman(100.0, 10s, lambda: hour("time") >= 8 AND hour("time") <= 17)

// Do normal processing of data
data...
node|deadman(threshold float64, interval time.Duration, expr ...ast.LambdaNode)
Returns: AlertNode
Stats
Create a new stream of data that contains the internal statistics of the node. The interval represents how often to emit the statistics based on real time. This means the interval time is independent of the times of the data points the source node is receiving.
node|stats(interval time.Duration)
Returns: StatsNode
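For example, a minimal sketch (database, retention policy, and measurement names are illustrative) that emits the loopback node's statistics every 10s and warns if it stops emitting points:
stream
    |from()
        .measurement('errors')
    |kapacitorLoopback()
        .database('mydb')
        .retentionPolicy('myrp')
        // Write to a different measurement than the one consumed to avoid a loop.
        .measurement('errors_copy')
    |stats(10s)
    |alert()
        .warn(lambda: "emitted" == 0)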