TencentCloud Managed Service for Prometheus

Scrape Configuration Description

Last updated: 2026-03-30 09:32:22

Overview

Prometheus primarily uses the pull model to scrape monitoring endpoints exposed by target services, so scrape jobs must be configured to request monitoring data and write it into the storage provided by Prometheus. Currently, the Prometheus service offers the following job configurations:
Native Job Configuration: provides the configuration for Prometheus' native scrape jobs.
CVM Service Discovery Configuration: provides service discovery configuration for Tencent Cloud CVM instances.
Pod Monitor: in the K8S ecosystem, uses the Prometheus Operator to scrape monitoring data from Pods.
Service Monitor: in the K8S ecosystem, uses the Prometheus Operator to scrape monitoring data from the Endpoints of a Service.
Probe: in the K8S ecosystem, uses the Prometheus Operator to perform health checks or availability probes on targets and convert the probe results into Prometheus metrics.
PrometheusRule: in the K8S ecosystem, uses the Prometheus Operator to define alerting rules and recording rules, enabling declarative management of alarm configurations.
Note:
The CRD resources based on Prometheus Operator listed above take effect under the following rules:
PodMonitor/ServiceMonitor/Probe
CRD resources under the kube-system namespace take effect automatically, without requiring additional labels.
CRD resources under any other namespace must carry the `prom_id: <instance ID>` label in metadata.labels to take effect; this label associates the CRD resource with the specified Prometheus instance.
PrometheusRule
A PrometheusRule must be placed in the namespace named after the Prometheus <instance ID>.
Alerting/recording rules created through CRDs cannot be modified in the console.
In the configurations below, items enclosed in [ ] are optional.
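As a minimal sketch of the effectiveness rule above, a ServiceMonitor outside kube-system needs metadata like the following (the name, namespace, and instance ID are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor      # illustrative name
  namespace: monitoring     # any namespace except kube-system
  labels:
    prom_id: prom-abcd1234  # associates the CRD with your Prometheus instance
```

Without the prom_id label, a CRD in a non-kube-system namespace is ignored by the instance.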

Native Job Configuration

The corresponding configuration items are described as follows:

# Scrape job name, which also adds a label (job=<job_name>) to the scraped metrics
job_name: <job_name>

# Scrape interval
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Scrape request timeout
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# URI path of the scrape request
[ metrics_path: <path> | default = /metrics ]

# How to resolve conflicts between scraped labels and labels added by the backend Prometheus.
# true: keep the scraped labels and ignore the conflicting labels added by the backend Prometheus;
# false: rename the conflicting scraped label to exported_<original-label> and keep the label added by the backend Prometheus.
[ honor_labels: <boolean> | default = false ]

# Whether to use the timestamps generated on the target during scraping.
# true: if the target exposes timestamps, use them;
# false: ignore the timestamps on the target.
[ honor_timestamps: <boolean> | default = true ]

# Scrape protocol: http or https
[ scheme: <scheme> | default = http ]

# URL parameters of the scrape request
params:
  [ <string>: [ <string>, ... ] ]

# Set the `Authorization` header of the scrape request via basic auth; password and password_file are mutually exclusive, and the value in password_file takes precedence.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Set the `Authorization` header of the scrape request via bearer token; bearer_token and bearer_token_file are mutually exclusive, and the value in bearer_token takes precedence.
[ bearer_token: <secret> ]
[ bearer_token_file: <filename> ]

# Whether the scrape connection uses TLS; if so, configure the corresponding TLS parameters.
tls_config:
  [ <tls_config> ]

# Scrape metrics from the target through a proxy service; specify the proxy service address.
[ proxy_url: <string> ]

# Specify targets via static configuration; see the static_config description below.
static_configs:
  [ - <static_config> ... ]

# HTTP service discovery configuration. For native service discovery configuration, refer to the official Prometheus documentation.
http_sd_configs:
  [ - <http_sd_config> ... ]

# Before the scrape, modify target labels through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
relabel_configs:
  [ - <relabel_config> ... ]

# After the scrape and before the samples are written, modify label values through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Limit on the number of samples per scrape; 0 means no limit. Default: 0.
[ sample_limit: <int> | default = 0 ]

# Limit on the number of targets per scrape; 0 means no limit. Default: 0.
[ target_limit: <int> | default = 0 ]


static_config configuration

The corresponding configuration items are described as follows:
# Specify the corresponding target hosts, such as ip:port.
targets:
  [ - '<host>' ]

# Add the given labels to all targets, similar to global labels.
labels:
  [ <labelname>: <labelvalue> ... ]
Example:
job_name: prometheus
scrape_interval: 30s
static_configs:
  - targets:
      - 127.0.0.1:9090
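Combining several of the native job options above with a static target list, a fuller job definition might look like this sketch (the target address, credentials, and the dropped metric prefix are all illustrative):

```yaml
job_name: demo-secure-app
scrape_interval: 30s
metrics_path: /metrics
scheme: https
basic_auth:
  username: prometheus            # illustrative credentials
  password: my-secret-password
tls_config:
  insecure_skip_verify: true      # skip certificate verification for self-signed certs
static_configs:
  - targets:
      - '10.0.0.1:8443'           # illustrative target address
    labels:
      env: staging
metric_relabel_configs:
  - source_labels: [__name__]
    regex: 'go_gc_.*'             # drop Go GC internals to reduce the sample count
    action: drop
```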

CVM Service Discovery Configuration

Note:
The cvm_sd_configs block for CVM service discovery is not a native Prometheus configuration; it is not supported in the scrape jobs of the Integration Center or in TKE integrations, and is currently only configurable in the CVM integration of the Integration Center. It is hereafter referred to as CVM service discovery.
CVM service discovery uses the TencentCloud API to automatically retrieve the CVM instance list, and uses the private IP address of each CVM by default. The discovery generates the following metadata labels, which can be used in relabel configurations.
Label | Description
__meta_cvm_instance_id | Instance ID
__meta_cvm_instance_name | Instance name
__meta_cvm_instance_state | Instance state
__meta_cvm_instance_type | Instance type
__meta_cvm_OS | Instance operating system name
__meta_cvm_private_ip | Private IP address
__meta_cvm_public_ip | Public IP address
__meta_cvm_vpc_id | VPC ID
__meta_cvm_subnet_id | Subnet ID
__meta_cvm_tag_<tagkey> | Instance tag value
__meta_cvm_region | Instance region
__meta_cvm_zone | Instance availability zone
cvm_sd_configs Configuration Instructions
# Tencent Cloud region. For the region list, refer to the documentation at https://cloud.tencent.com/document/api/213/15692#Region-List.
region: <string>

# Custom endpoint.
[ endpoint: <string> ]

# Credentials for accessing the TencentCloud API. If not set, the environment variables TENCENT_CLOUD_SECRET_ID and TENCENT_CLOUD_SECRET_KEY are used.
# When configured through the CVM scrape job of the Integration Center, these fields can be left empty.
[ secret_id: <string> ]
[ secret_key: <secret> ]

# Refresh interval of the CVM list.
[ refresh_interval: <duration> | default = 60s ]

# Ports for scraping metrics.
ports:
  - [ <int> | default = 80 ]

# Filter rules for the CVM list. For supported filter conditions, refer to the documentation at https://cloud.tencent.com/document/api/213/15728#2.-.E8.BE.93.E5.85.A5.E5.8F.82.E6.95.B0.
filters:
  [ - name: <string>
      values: [ <string>, ... ] ]

Note:
When cvm_sd_configs is configured through the CVM integration of the Integration Center, the integration automatically uses a service-preset role for authorization to ensure security, so the following parameters do not need to be entered manually: secret_id, secret_key, endpoint.
CVM Integration Configuration Example
job_name: demo-monitor
cvm_sd_configs:
  - region: ap-guangzhou
    ports:
      - 8080
    filters:
      - name: tag:service
        values:
          - demo
relabel_configs:
  - source_labels: [__meta_cvm_instance_state]
    regex: RUNNING
    action: keep
  - regex: __meta_cvm_tag_(.*)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_cvm_region]
    target_label: region
    action: replace


Pod Monitor

The corresponding configuration items are described as follows:
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type; here, PodMonitor
kind: PodMonitor
# Corresponding K8S metadata; only the name needs attention. If no jobLabel is specified, the job label of the scraped metrics is set to <namespace>/<name>.
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx
# Describes the selection of target Pods and the scrape job configuration
spec:
  # Enter the label key of the corresponding Pod; the PodMonitor uses the value of this label as the job label value.
  # If viewing the Pod YAML, take the value from pod.metadata.labels.
  # If viewing a Deployment/DaemonSet/StatefulSet, take it from spec.template.metadata.labels.
  [ jobLabel: string ]
  # Add the corresponding Pod's labels to the target's labels
  [ podTargetLabels: []string ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints to be scraped; multiple endpoints can be configured
  podMetricsEndpoints:
    [ - <endpoint_config> ... ] # See the endpoint_config description below
  # Select the namespaces of the Pods to be monitored. If left empty, all namespaces are selected.
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to select
    [ matchNames: []string ]
  # Enter the labels of the Pods to be monitored to locate the target Pods; see [K8S metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]

Example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system is acceptable
  labels:
    prom_id: prom-xxx # Configure your instance ID
spec:
  podMetricsEndpoints:
    - interval: 30s
      port: metric-port # Enter the name of the Prometheus exporter port in the pod YAML
      path: /metrics # Enter the path of the Prometheus exporter. Default: /metrics
      relabelings:
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: instance
          replacement: 'crs-xxxxxx' # Replace with the corresponding Redis instance ID
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: ip
          replacement: '1.x.x.x' # Replace with the corresponding Redis instance IP
  namespaceSelector: # Select the namespaces of the Pods to be monitored
    matchNames:
      - redis-test
  selector: # Enter the labels of the Pods to be monitored to locate the target Pods
    matchLabels:
      k8s-app: redis-exporter


Service Monitor

The corresponding configuration items are described as follows:

# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type; here, ServiceMonitor
kind: ServiceMonitor
# Corresponding K8S metadata; only the name needs attention. If no jobLabel is specified, the job label of the scraped metrics is set to the Service name.
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system
  labels:
    prom_id: prom-xxx # Configure your instance ID
# Describes the selection of target Services and the scrape job configuration
spec:
  # Enter the label key (metadata/labels) of the corresponding Service; the ServiceMonitor uses the value of this label as the job label value.
  [ jobLabel: string ]
  # Add the corresponding Service's labels to the target's labels
  [ targetLabels: []string ]
  # Add the corresponding Pod's labels to the target's labels
  [ podTargetLabels: []string ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints to be scraped; multiple endpoints can be configured
  endpoints:
    [ - <endpoint_config> ... ] # See the endpoint_config description below
  # Select the namespaces of the Services to be monitored. If left empty, all namespaces are selected.
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to select
    [ matchNames: []string ]
  # Enter the labels of the Services to be monitored to locate the target Services; see [K8S metav1.LabelSelector](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]

Example:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: go-demo # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system
  labels:
    prom_id: prom-xxx # Configure your instance ID
spec:
  endpoints:
    - interval: 30s
      # Enter the name of the Prometheus exporter port in the service YAML
      port: 8080-8080-tcp
      # Enter the path of the Prometheus exporter. Default: /metrics
      path: /metrics
      relabelings:
        # ** There must be a label named application. Here it is assumed that k8s has a label named app,
        # whose value is copied to application via the replace action
        - action: replace
          sourceLabels: [__meta_kubernetes_pod_label_app]
          targetLabel: application
  # Select the namespace where the Service to be monitored is located
  namespaceSelector:
    matchNames:
      - golang-demo
  # Enter the labels of the Service to be monitored to locate the target Service
  selector:
    matchLabels:
      app: golang-app-demo


endpoint_config Configuration

The corresponding configuration items are described as follows:
# The name of the corresponding port. Note that this is the port name, not the port number. Default: 80. Take the value as follows:
# ServiceMonitor: from Service > spec/ports/name;
# PodMonitor:
#   If viewing the Pod YAML, take the value from pod.spec.containers.ports.name.
#   If viewing a Deployment/DaemonSet/StatefulSet, take it from spec.template.spec.containers.ports.name.
[ port: string | default = 80 ]
# URI path of the scrape request
[ path: string | default = /metrics ]
# Scrape protocol: http or https
[ scheme: string | default = http ]
# URL parameters of the scrape request
[ params: map[string][]string ]
# Scrape interval
[ interval: string | default = 30s ]
# Scrape timeout
[ scrapeTimeout: string | default = 30s ]
# Whether the scrape connection uses TLS; if so, configure the corresponding TLS parameters
[ tlsConfig: TLSConfig ]
# Read the bearer token value from the given file and place it in the scrape request header
[ bearerTokenFile: string ]
# Read the bearer token via the given K8S secret key; note that the secret namespace must match that of the PodMonitor/ServiceMonitor
[ bearerTokenSecret: string ]
# How to resolve conflicts between scraped labels and labels added by the backend Prometheus.
# true: keep the scraped labels and ignore the conflicting labels added by the backend Prometheus;
# false: rename the conflicting scraped label to exported_<original-label> and keep the label added by the backend Prometheus
[ honorLabels: bool | default = false ]
# Whether to use the timestamps generated on the target during scraping.
# true: if the target exposes timestamps, use them;
# false: ignore the timestamps on the target
[ honorTimestamps: bool | default = true ]
# For basic auth credentials, fill in the corresponding K8S secret key values for username/password; note that the secret namespace must match that of the PodMonitor/ServiceMonitor
[ basicAuth: BasicAuth ]
# Scrape metrics from the target through a proxy service; specify the proxy service address
[ proxyUrl: string ]
# Before the scrape, modify target labels through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
relabelings:
  [ - <relabel_config> ... ]
# After the scrape and before the samples are written, modify label values through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
metricRelabelings:
  [ - <relabel_config> ... ]
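As a sketch, a PodMonitor endpoint entry might combine several of these fields as follows. The port name and secret are illustrative; note that in recent Prometheus Operator versions bearerTokenSecret is a secret key selector (name plus key) rather than a plain string:

```yaml
podMetricsEndpoints:
  - port: metric-port            # illustrative port name
    path: /metrics
    scheme: https
    interval: 30s
    scrapeTimeout: 10s
    tlsConfig:
      insecureSkipVerify: true   # accept self-signed certificates
    bearerTokenSecret:
      name: exporter-token       # illustrative secret in the same namespace
      key: token
    honorLabels: false
```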


relabel_config/relabelings configuration

The corresponding configuration items are described as follows:

# Extract values from source labels for relabeling; concatenate the extracted values using the specified separator.
# For PodMonitor/ServiceMonitor/Probe, their configuration item is sourceLabels.
[ source_labels: '[' <labelname> [, ...] ']' ]
# Defines the concatenation character for relabeled label values, defaults to ';'.
[ separator: <string> | default = ; ]

# When the action is replace/hashmod, specify the corresponding label name via target_label.
# For PodMonitor/ServiceMonitor/Probe, their configuration item is targetLabel.
[ target_label: <labelname> ]

# Regular expression for matching the values corresponding to source labels.
[ regex: <regex> | default = (.*) ]

# Used when the action is hashmod; the md5 hash of the source label values is taken modulo this value.
[ modulus: <int> ]

# When the action is replace, replacement defines the value written after the regex match; capture groups from the regex can be referenced (e.g. $1).
[ replacement: <string> | default = $1 ]

# Perform an operation based on the regex match; available actions are as follows (default: replace):
# replace: if the regex matches, replace the corresponding value with the one defined in replacement, and set/add the label named by target_label;
# keep: drop the target if the regex does not match;
# drop: drop the target if the regex matches;
# hashmod: take the md5 hash of the source label values modulo the value of modulus, and add a new label named by target_label holding the result;
# labelmap: if the regex matches, rename the matched label according to replacement;
# labeldrop: delete the matching labels if the regex matches;
# labelkeep: delete the non-matching labels.
[ action: <relabel_action> | default = replace ]
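For instance, the rules below illustrate three of these actions together (the label names and the modulus value are illustrative):

```yaml
relabel_configs:
  # Keep only targets whose "env" label is "prod"
  - source_labels: [env]
    regex: prod
    action: keep
  # Copy every "app_*" label to a label without the prefix, e.g. app_team -> team
  - regex: app_(.*)
    replacement: $1
    action: labelmap
  # Shard targets into 4 buckets based on the md5 hash of the address
  - source_labels: [__address__]
    modulus: 4
    target_label: __tmp_shard
    action: hashmod
```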


Probe

The corresponding configuration items are described as follows:
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type; here, Probe
kind: Probe
# Corresponding K8S metadata
metadata:
  name: test-blackbox-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system
  labels:
    prom_id: prom-xxx # Configure your instance ID
# Describes the selection of probe targets and the probe request configuration
spec:
  # The probe uses this value as the job label value. If no jobName is specified, the job label of the scraped metrics is set to probe/<namespace>/<name>.
  [ jobName: string ]
  # Scrape interval
  [ interval: string ]
  # Scrape timeout
  [ scrapeTimeout: string ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0
  [ targetLimit: uint64 ]
  # Module used to probe targets
  [ module: string ]
  # Statically configured or dynamically discovered probe targets
  targets:
    # Static target set of probe targets
    [ staticConfig: ]
      # Static address set of probe targets
      [ static: []string ]
      # Apply the given labels to all targets, similar to global labels
      [ labels: map[string][]string ]
      # Before the scrape, modify target labels through the relabel mechanism; multiple relabel rules are executed in order
      [ relabelingConfigs: ]
        # For relabel_config, see the description above
        [ - <relabel_config> ... ]
    # Ingress object set of probe targets; if staticConfig is also configured, staticConfig takes precedence
    [ ingress: ]
      # Enter the labels of the probe targets to locate them; see [K8S metav1.LabelSelector](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta)
      [ selector: ]
        [ matchExpressions: array ]
          [ example: - {key: tier, operator: In, values: [cache]} ]
        [ matchLabels: object ]
          [ example: k8s-app: redis-exporter ]
      # Select the namespaces of the probe targets. If left empty, all namespaces are selected.
      [ namespaceSelector: ]
        # Whether to select all namespaces
        [ any: bool ]
        # List of namespaces to select
        [ matchNames: []string ]
      # Before the scrape, modify target labels through the relabel mechanism; multiple relabel rules are executed in order
      [ relabelingConfigs: ]
        # For relabel_config, see the description above
        [ - <relabel_config> ... ]
  # Probe request rules
  prober:
    # Probe service address
    url: string
    # Probe service metrics path; default: /probe
    [ path: string ]
    # Probe service request protocol; default: http
    [ scheme: string ]
    # Proxy address
    [ proxyUrl: string ]
  # After the scrape, modify label values through the relabel mechanism; multiple relabel rules are executed in order
  [ metricRelabelings: ]
    # For relabel_config, see the description above
    [ - <relabel_config> ... ]
Example:
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: test
  namespace: test
spec:
  # job name corresponding to the monitoring configuration
  jobName: probe-job
  interval: 15s
  scrapeTimeout: 10s
  # Preprocess metric samples before they are written to the monitoring system
  metricRelabelings:
    - sourceLabels:
        - pod_name
      separator: ;
      regex: (.+)
      targetLabel: pod
      replacement: $1
      action: replace
  # Targets to probe
  targets:
    # Static addresses of the probe targets
    staticConfig:
      static:
        - 192.168.1.100:9100
  # Probe request rules
  prober:
    # Probe service address (blackbox-exporter service address)
    url: test-blackbox-exporter.default.svc.cluster.local:8180
    # Probe service metrics path
    path: /metrics
    # Probe service request protocol
    scheme: http


PrometheusRule

PrometheusRule is a CRD resource provided by the Prometheus Operator for managing alerting rules and recording rules declaratively. Through PrometheusRule, users can version-control, automatically load, and dynamically update alerting rules as Kubernetes resources without manually modifying Prometheus configuration files.
Note:
A PrometheusRule must be placed under the <instance ID> namespace.
Alerting/recording rules created through CRDs cannot be modified in the console.
The corresponding configuration items are described as follows:
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type; here, PrometheusRule
kind: PrometheusRule
# Corresponding K8S metadata
metadata:
  annotations:
    prometheus.tke.tencent.cloud.com/notice-id: <notice-id>
    prometheus.tke.tencent.cloud.com/notice-repeat-interval: <convergence-interval>
  name: example-alert-rules # Enter a unique name
  namespace: prom-xxxxx # Must be placed under the <instance ID> namespace for the alerting/recording rules to take effect
# Definition of alerting rules and recording rules
spec:
  # List of rule groups; each group contains a set of related rules
  groups:
    # Rule group name; must be unique within the same PrometheusRule
    - name: <string>
      # Evaluation interval of the rule group, i.e. how often all rules in the group are evaluated
      [ interval: <duration> | default = global evaluation interval ]
      # List of rules, including alerting rules and/or recording rules
      rules:
        # ---- Alerting rules ----
        # Alert name; must be unique within the same group
        - alert: <string>
          # PromQL expression; an alert fires when the query result is not empty
          expr: <string>
          # How long the expression must stay true before the alert actually fires, to avoid false alerts caused by transient fluctuations
          [ for: <duration> | default = 0s ]
          # Additional labels for the alert, merged into the labels of the alert instance; commonly used to mark severity levels
          # Note: labels._interval_ corresponds to the convergence interval on the page
          [ labels: ]
            [ <labelname>: <labelvalue> ... ]
          # Alert annotations, used to describe alert details; template variables {{ $labels.<labelname> }} and {{ $value }} are supported
          # Note: annotations.description corresponds to the alarm content (Description) on the page; annotations.summary corresponds to the alarm object (Summary)
          [ annotations: ]
            [ <labelname>: <labelvalue> ... ]
        # ---- Recording rules ----
        # New metric name; the result of expr is saved as this metric
        - record: <string>
          # PromQL expression
          expr: <string>
          # Additional labels for the recording rule
          [ labels: ]
            [ <labelname>: <labelvalue> ... ]
Example:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    prometheus.tke.tencent.cloud.com/notice-id: notice-abcd
    prometheus.tke.tencent.cloud.com/notice-repeat-interval: 30m
  name: node-load-alert
  namespace: prom-8xpa3dzm
spec:
  groups:
    - name: node-load
      rules:
        - alert: NodeLoadLow
          annotations:
            description: node_load1 is {{ $value }} (below 10) for more than 5 minutes.
            summary: Low load on {{ $labels.instance }}
          expr: node_load1 < 10
          for: 5m
          labels:
            severity: warning

Special Notes on Annotations in PrometheusRule

prometheus.tke.tencent.cloud.com/notice-id: notification ID(s) bound to the alerting rule. Separate multiple notification IDs with commas; at most 3 can be configured. You can view notification IDs in the Tencent Cloud console.
prometheus.tke.tencent.cloud.com/notice-repeat-interval: convergence interval for alarm notifications. Available values are 5m, 10m, 15m, 30m, 60m, 1h, 2h, 3h, 6h, 12h, and 24h. If not configured, the default value is 1h.

Description of common fields for alarm rules

annotations.summary: Corresponds to the alarm object (Summary) displayed on the page.
annotations.description: Corresponds to the alarm content (Description) displayed on the page, supports using template variables such as {{ $labels.instance }} and {{ $value }} to dynamically populate specific information.
