Prometheus agent mode

Introduction

Prometheus 2.32.0 introduced an agent mode. In this mode, Prometheus is only responsible for scraping data and then forwarding it via remote write to somewhere else.

As a result, the Prometheus dashboard cannot be used in this mode, it cannot be connected to Alertmanager, and no data is stored locally, so of course local querying is not possible either.

Usage

Passing --enable-feature=agent enables agent mode.
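
If you run the Prometheus binary directly instead of in a container, the same flag applies, for example: prometheus --config.file=prometheus.yml --enable-feature=agent (assuming the binary and config file are in your current directory). The compose file below passes exactly the same arguments.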

Here is a docker-compose.yaml:

version: "3"
services:
  prometheus:
    image: "prom/prometheus:v2.32.0"
    volumes:
      - "./prometheus-etc/prometheus.yml:/etc/prometheus/prometheus.yml"
      - "/etc/localtime:/etc/localtime"
      - "./prometheus-etc/file_sd/:/etc/prometheus/file_sd/"
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--enable-feature=agent"
    restart: "always"
    container_name: "prometheus"
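
With both files in place, docker-compose up -d brings the agent up. The volume mounts above assume that prometheus.yml and the file_sd/ directory sit under ./prometheus-etc/ next to the compose file; adjust the paths if your layout differs.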

And here is the corresponding prometheus.yml:

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
#alerting:
#  alertmanagers:
#    - static_configs:
#        - targets:
#            - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'node-exporter'
    file_sd_configs:
      - files:
          - "./file_sd/node-exporter.yaml"
        refresh_interval: 5s

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]

remote_write:
  - url: 'https://www.baidu.com/api/v1/write'
    metadata_config:
      send: true

Note that the Alertmanager-related sections must stay commented out, since agent mode does not support alerting. The remote_write url above is just a placeholder; replace it with the address of your actual remote-write receiver.
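
Also, because the node-exporter job uses file_sd_configs, the mounted file_sd directory needs a target file. Below is a minimal sketch of what ./prometheus-etc/file_sd/node-exporter.yaml could look like; the addresses are placeholders, so substitute your real node-exporter instances:

# ./prometheus-etc/file_sd/node-exporter.yaml
# A list of target groups in Prometheus file_sd format.
- targets:
    - "192.168.1.10:9100"   # placeholder node-exporter address
    - "192.168.1.11:9100"   # placeholder node-exporter address
  labels:
    env: "demo"             # optional extra labels attached to these targets

Prometheus re-reads this file on the refresh_interval configured above (5s here), so targets can be added or removed without restarting the agent.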

Feel free to follow my blog at www.bboy.app

Have Fun
