Configuration items in `[]` are optional.

```yaml
# Scrape job name, which also adds a label (job=<job_name>) to the scraped metrics
job_name: <job_name>

# Scrape interval
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Scrape request timeout
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# URI path of the scrape request
[ metrics_path: <path> | default = /metrics ]

# Resolution of label conflicts when scraped labels clash with labels added by the backend Prometheus.
# true: keep the scraped labels and ignore the conflicting labels from the backend Prometheus;
# false: rename each conflicting scraped label to exported_<original-label> and keep the labels added by the backend Prometheus.
[ honor_labels: <boolean> | default = false ]

# Whether to use timestamps generated on the target during scraping.
# true: if the target exposes timestamps, use them;
# false: ignore the timestamps exposed by the target.
[ honor_timestamps: <boolean> | default = true ]

# Scrape protocol: http or https
[ scheme: <scheme> | default = http ]

# URL parameters of the scrape request
params:
  [ <string>: [<string>, ...] ]

# Set the `Authorization` header of the scrape request via basic auth. password and password_file are mutually exclusive; the value in password_file takes precedence.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Set the `Authorization` header of the scrape request via a bearer token. bearer_token and bearer_token_file are mutually exclusive; the value in bearer_token takes precedence.
[ bearer_token: <secret> ]

# Set the `Authorization` header of the scrape request via a bearer token read from a file. bearer_token and bearer_token_file are mutually exclusive; the value in bearer_token takes precedence.
[ bearer_token_file: <filename> ]

# Whether the scrape connection uses TLS; configure the corresponding TLS parameters.
tls_config:
  [ <tls_config> ]

# Scrape metrics from the target through a proxy service; specify the proxy service address.
[ proxy_url: <string> ]

# Specify targets via static configuration; see the description below.
static_configs:

# HTTP service discovery configuration. For native service discovery configurations, refer to the official Prometheus documentation.
http_sd_configs:
  [ - <http_sd_config> ... ]

# Before scraping, modify the labels on the target through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
relabel_configs:

# After scraping, modify the label values through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
metric_relabel_configs:

# Limit on the number of samples per scrape; 0 means no limit. Default: 0.
[ sample_limit: <int> | default = 0 ]

# Limit on the number of targets per scrape; 0 means no limit. Default: 0.
[ target_limit: <int> | default = 0 ]
```
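As a sketch of how these fields combine, the following hypothetical job scrapes an HTTPS endpoint protected by basic auth and drops a high-cardinality metric after scraping. The job name, target address, credential paths, and metric name are all illustrative:

```yaml
job_name: secured-app                # hypothetical job name
scrape_interval: 30s
metrics_path: /metrics
scheme: https
basic_auth:
  username: scrape-user              # illustrative credentials
  password_file: /etc/prom/secret/password
static_configs:
- targets:
  - 10.0.0.12:8443                   # illustrative target address
metric_relabel_configs:
# Drop a hypothetical high-cardinality metric after scraping, before storage
- source_labels: [__name__]
  regex: http_request_duration_seconds_bucket
  action: drop
```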
```yaml
# Specify the hosts of the corresponding targets, such as ip:port.
targets:
  [ - '<host>' ]

# Add labels to all targets, similar to the concept of global labels.
labels:
  [ <labelname>: <labelvalue> ... ]
```
```yaml
job_name: prometheus
scrape_interval: 30s
static_configs:
- targets:
  - 127.0.0.1:9090
```
| Tag | Description |
| --- | --- |
| `__meta_cvm_instance_id` | Instance ID |
| `__meta_cvm_instance_name` | Instance name |
| `__meta_cvm_instance_state` | Instance state |
| `__meta_cvm_instance_type` | Instance type |
| `__meta_cvm_OS` | Instance operating system name |
| `__meta_cvm_private_ip` | Private IP address |
| `__meta_cvm_public_ip` | Public IP address |
| `__meta_cvm_vpc_id` | VPC ID |
| `__meta_cvm_subnet_id` | Subnet ID |
| `__meta_cvm_tag_<tagkey>` | Instance tag value |
| `__meta_cvm_region` | Instance region |
| `__meta_cvm_zone` | Instance availability zone |
```yaml
# Tencent Cloud region. For the region list, see https://cloud.tencent.com/document/api/213/15692#Region-List.
region: <string>

# Custom endpoint.
[ endpoint: <string> ]

# Credentials for accessing the Tencent Cloud API. If not set, the values of the environment variables TENCENT_CLOUD_SECRET_ID and TENCENT_CLOUD_SECRET_KEY are used.
# Can be left empty when configuring a CVM scrape job from the Integration Center.
[ secret_id: <string> ]
[ secret_key: <secret> ]

# Refresh interval of the CVM list.
[ refresh_interval: <duration> | default = 60s ]

# Ports to scrape metrics from.
ports:
  - [ <int> | default = 80 ]

# Filter rules for the CVM list. For supported filters, see https://cloud.tencent.com/document/api/213/15728#2.-.E8.BE.93.E5.85.A5.E5.8F.82.E6.95.B0.
filters:
  [ - name: <string>
      values: [<string>, ...] ]
```
```yaml
job_name: demo-monitor
cvm_sd_configs:
- region: ap-guangzhou
  ports:
  - 8080
  filters:
  - name: tag:service
    values:
    - demo
relabel_configs:
- source_labels: [__meta_cvm_instance_state]
  regex: RUNNING
  action: keep
- regex: __meta_cvm_tag_(.*)
  replacement: $1
  action: labelmap
- source_labels: [__meta_cvm_region]
  target_label: region
  action: replace
```
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type, here PodMonitor
kind: PodMonitor
# Corresponding K8S metadata. Only the name needs attention here. If no jobLabel is specified, the job label value of the scraped metrics will be <namespace>/<name>.
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx
# Describes the selection of target Pods and the configuration of scrape jobs
spec:
  # Enter the label of the corresponding Pod; the PodMonitor uses its value as the job label value.
  # If viewing the Pod YAML, take the value from pod.metadata.labels.
  # If viewing a Deployment/DaemonSet/StatefulSet, take the value from spec.template.metadata.labels.
  [ jobLabel: string ]
  # Add the corresponding Pod's labels to the target's labels
  [ podTargetLabels: []string ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0.
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0.
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints to scrape; multiple endpoints can be configured
  podMetricsEndpoints:
    [ - <endpoint_config> ... ] # See the endpoint description below
  # Select the namespaces of the Pods to be monitored. If left empty, all namespaces are selected.
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to select
    [ matchNames: []string ]
  # Enter the label values of the Pods to be monitored to locate the target Pods ([K8S metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta))
  selector:
    [ matchExpressions: array ]
    [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
    [ example: k8s-app: redis-exporter ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx # Configure your instance ID
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: metric-port # Enter the name of the Prometheus exporter port in the Pod YAML
    path: /metrics # Enter the path of the Prometheus exporter. Default: /metrics.
    relabelings:
    - action: replace
      sourceLabels:
      - instance
      regex: (.*)
      targetLabel: instance
      replacement: 'crs-xxxxxx' # Replace with the corresponding Redis instance ID
    - action: replace
      sourceLabels:
      - instance
      regex: (.*)
      targetLabel: ip
      replacement: '1.x.x.x' # Replace with the corresponding Redis instance IP
  namespaceSelector: # Select the namespaces of the Pods to be monitored
    matchNames:
    - redis-test
  selector: # Enter the label values of the Pods to be monitored to locate the target Pods
    matchLabels:
      k8s-app: redis-exporter
```
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type, here ServiceMonitor
kind: ServiceMonitor
# Corresponding K8S metadata. Only the name needs attention here. If no jobLabel is specified, the job label value of the scraped metrics will be the name of the Service.
metadata:
  name: redis-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx # Configure your instance ID
# Describes the selection of target Services and the configuration of scrape jobs
spec:
  # Enter the label (metadata/labels) of the corresponding Service; the ServiceMonitor uses its value as the job label value.
  [ jobLabel: string ]
  # Add the corresponding Service's labels to the target's labels
  [ targetLabels: []string ]
  # Add the corresponding Pod's labels to the target's labels
  [ podTargetLabels: []string ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0.
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0.
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints to scrape; multiple endpoints can be configured
  endpoints:
    [ - <endpoint_config> ... ] # See the endpoint description below
  # Select the namespaces of the Services to be monitored. If left empty, all namespaces are selected.
  [ namespaceSelector: ]
    # Whether to select all namespaces
    [ any: bool ]
    # List of namespaces to select
    [ matchNames: []string ]
  # Enter the label values of the Services to be monitored to locate the target Services ([K8S metav1.LabelSelector](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta))
  selector:
    [ matchExpressions: array ]
    [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
    [ example: k8s-app: redis-exporter ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: go-demo # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx # Configure your instance ID
spec:
  endpoints:
  - interval: 30s
    # Enter the name of the Prometheus exporter port in the Service YAML
    port: 8080-8080-tcp
    # Enter the path of the Prometheus exporter. Default: /metrics.
    path: /metrics
    relabelings:
    # ** There must be a label named application. Here we assume k8s has a label named app;
    # its value is copied to application via a replace action in relabel
    - action: replace
      sourceLabels: [__meta_kubernetes_pod_label_app]
      targetLabel: application
  # Select the namespace of the Service to be monitored
  namespaceSelector:
    matchNames:
    - golang-demo
  # Enter the label values of the Service to be monitored to locate the target Service
  selector:
    matchLabels:
      app: golang-app-demo
```
```yaml
# Name of the corresponding port. Note that this is the port name, not the port number. Default: 80. Take the value as follows:
# ServiceMonitor: from Service > spec/ports/name;
# PodMonitor:
#   If viewing the Pod YAML, take the value from pod.spec.containers.ports.name.
#   If viewing a Deployment/DaemonSet/StatefulSet, take the value from spec.template.spec.containers.ports.name.
[ port: string | default = 80 ]
# URI path of the scrape request
[ path: string | default = /metrics ]
# Scrape protocol: http or https
[ scheme: string | default = http ]
# URL parameters of the scrape request
[ params: map[string][]string ]
# Scrape interval
[ interval: string | default = 30s ]
# Scrape timeout
[ scrapeTimeout: string | default = 30s ]
# Whether the scrape connection uses TLS; configure the corresponding TLS parameters.
[ tlsConfig: TLSConfig ]
# Read the bearer token value from the file and place it in the header of the scrape request.
[ bearerTokenFile: string ]
# Read the bearer token via the specified K8S secret key; note that the secret must be in the same namespace as the PodMonitor/ServiceMonitor.
[ bearerTokenSecret: string ]
# Resolution of label conflicts when scraped labels clash with labels added by the backend Prometheus.
# true: keep the scraped labels and ignore the conflicting labels from the backend Prometheus;
# false: rename each conflicting scraped label to exported_<original-label> and keep the labels added by the backend Prometheus.
[ honorLabels: bool | default = false ]
# Whether to use timestamps generated on the target during scraping.
# true: if the target exposes timestamps, use them;
# false: ignore the timestamps exposed by the target.
[ honorTimestamps: bool | default = true ]
# For basic auth, fill in the corresponding K8S secret key values for username/password; note that the secret must be in the same namespace as the PodMonitor/ServiceMonitor.
[ basicAuth: BasicAuth ]
# Scrape metrics from the target through a proxy service; specify the proxy service address.
[ proxyUrl: string ]
# Before scraping, modify the labels on the target through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
relabelings:
# After scraping but before writing, modify the label values through the relabel mechanism; multiple relabel rules are executed in order.
# For relabel_config, see the description below.
metricRelabelings:
```
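As an illustrative sketch of an endpoint entry using these fields (the port name and URL parameter are hypothetical, not part of any real manifest):

```yaml
- port: metric-port        # hypothetical port name from the Service/Pod YAML
  path: /metrics
  scheme: http
  interval: 15s
  scrapeTimeout: 10s
  honorLabels: false
  params:
    module: [http_2xx]     # illustrative URL parameter passed to the target
```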
```yaml
# Source labels whose values are extracted for relabeling; the extracted values are concatenated with the specified separator.
# For PodMonitor/ServiceMonitor/Probe, the configuration item is sourceLabels.
[ source_labels: '[' <labelname> [, ...] ']' ]
# Separator used to concatenate the source label values. Default: ';'.
[ separator: <string> | default = ; ]
# When the action is replace/hashmod, target_label specifies the name of the resulting label.
# For PodMonitor/ServiceMonitor/Probe, the configuration item is targetLabel.
[ target_label: <labelname> ]
# Regular expression matched against the concatenated source label values.
[ regex: <regex> | default = (.*) ]
# Used when the action is hashmod: modulus applied to the md5 hash of the source label values.
[ modulus: <int> ]
# When the action is replace, replacement defines the value written after the regex matches; regex capture groups can be referenced.
[ replacement: <string> | default = $1 ]
# Action performed based on the regex match. Default: replace. Valid actions:
# replace: if the regex matches, replace the value with the one defined in replacement, and set/add the label named by target_label.
# keep: drop the target if the regex does not match.
# drop: drop the target if the regex matches.
# hashmod: take the md5 hash of the source label values modulo the value of modulus, and add a new label named by target_label.
# labelmap: if the regex matches, rename the matched labels using replacement.
# labeldrop: delete the labels that the regex matches.
# labelkeep: delete the labels that the regex does not match.
[ action: <relabel_action> | default = replace ]
```
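To make the actions concrete, here is an illustrative sketch that combines several of them; the label names (env, host, port) and the shard count are hypothetical:

```yaml
relabel_configs:
# keep: scrape only targets whose env label is prod
- source_labels: [env]
  regex: prod
  action: keep
# replace: concatenate host and port into a new addr label
- source_labels: [host, port]
  separator: ':'
  target_label: addr
  action: replace
# hashmod: shard targets into 4 buckets based on the md5 hash of the address
- source_labels: [__address__]
  modulus: 4
  target_label: __tmp_shard
  action: hashmod
# labeldrop: remove the temporary labels afterwards
- regex: __tmp_.*
  action: labeldrop
```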
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type, here Probe
kind: Probe
# Corresponding K8S metadata
metadata:
  name: test-blackbox-exporter # Enter a unique name
  namespace: cm-prometheus # The namespace is flexible; any namespace except kube-system can be used
  labels:
    prom_id: prom-xxx # Configure your instance ID
# Describes the selection of probe targets and the configuration of probe requests
spec:
  # The Probe uses this value as the job label value. If jobName is not specified, the job label value of the scraped metrics will be probe/<namespace>/<name>.
  [ jobName: string ]
  # Scrape interval
  [ interval: uint64 ]
  # Scrape timeout
  [ scrapeTimeout: uint64 ]
  # Limit on the number of samples per scrape; 0 means no limit. Default: 0.
  [ sampleLimit: uint64 ]
  # Limit on the number of targets per scrape; 0 means no limit. Default: 0.
  [ targetLimit: uint64 ]
  # Module used to probe the targets
  [ module: string ]
  # Static targets to probe, or dynamically discovered targets
  targets:
    # Static target set of the probe
    [ staticConfig: ]
      # Static addresses of the probe targets
      [ static: []string ]
      # Apply labels to all targets, similar to the concept of global labels
      [ labels: map[string][]string ]
      # Before scraping, modify the labels on the target through the relabel mechanism; multiple relabel rules are executed in order.
      # For relabel_config, see the description above.
      [ relabelingConfigs: ]
    # Set of Ingress objects used as probe targets; if staticConfig is also configured, staticConfig takes precedence.
    [ ingress: ]
      # Enter the label values of the probe targets to locate the target Ingresses ([K8S metav1.LabelSelector](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta))
      [ selector: ]
        [ matchExpressions: array ]
        [ example: - {key: tier, operator: In, values: [cache]} ]
        [ matchLabels: object ]
        [ example: k8s-app: redis-exporter ]
      # Select the namespaces of the probe targets. If left empty, all namespaces are selected.
      [ namespaceSelector: ]
        # Whether to select all namespaces
        [ any: bool ]
        # List of namespaces to select
        [ matchNames: []string ]
      # Before scraping, modify the labels on the target through the relabel mechanism; multiple relabel rules are executed in order.
      # For relabel_config, see the description above.
      [ relabelingConfigs: ]
  # Probe request rules
  prober:
    # Probe service address
    url: string
    # Probe service metrics path. Default: /probe.
    [ path: string ]
    # Probe service request protocol. Default: http.
    [ scheme: string ]
    # Proxy address
    [ proxyUrl: string ]
  # After scraping, modify the label values through the relabel mechanism; multiple relabel rules are executed in order.
  # For relabel_config, see the description above.
  [ metricRelabelings: ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: test
  namespace: test
spec:
  # job name of the monitoring configuration
  jobName: probe-job
  interval: 15s
  scrapeTimeout: 10s
  # Preprocess metric samples before they are written into the monitoring system
  metricRelabelings:
  - sourceLabels:
    - pod_name
    separator: ;
    regex: (.+)
    targetLabel: pod
    replacement: $1
    action: replace
  # Targets to probe
  targets:
    # Static addresses of the probe targets
    staticConfig:
      static:
      - 192.168.1.100:9100
  # Probe request rules
  prober:
    # Probe service address (blackbox-exporter service address)
    url: test-blackbox-exporter.default.svc.cluster.local:8180
    # Probe service metrics path
    path: /metrics
    # Probe service request protocol
    scheme: http
```
```yaml
# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding K8S resource type, here PrometheusRule
kind: PrometheusRule
# Corresponding K8S metadata
metadata:
  annotations:
    prometheus.tke.tencent.cloud.com/notice-id: <notice-id>
    prometheus.tke.tencent.cloud.com/notice-repeat-interval: <convergence-interval>
  name: example-alert-rules # Enter a unique name
  namespace: prom-xxxxx # Must be placed under the <instance ID> namespace for the alerting/pre-aggregation rules to take effect
# Definition of alerting rules and recording rules
spec:
  # List of rule groups; each group contains a set of related rules
  groups:
  - # Rule group name; must be unique within the same PrometheusRule
    name: <string>
    # Evaluation interval of the rule group, i.e. how often all rules in the group are evaluated
    [ interval: <duration> | default = global evaluation interval ]
    # List of rules, including alerting rules and/or recording rules
    rules:
    # ---- Alerting rules ----
    - # Alert name; must be unique within the same group
      alert: <string>
      # PromQL expression; an alert is triggered when the query result is not empty
      expr: <string>
      # Duration for which the expression must continuously hold before the alert actually fires, to avoid false alerts caused by transient fluctuations
      [ for: <duration> | default = 0s ]
      # Additional labels for the alert, merged into the labels of the alert instance; commonly used to mark severity levels.
      # Note: labels._interval_ corresponds to the convergence interval on the page
      [ labels: ]
        [ <labelname>: <labelvalue> ... ]
      # Alert annotations describing the alert details; the template variables {{ $labels.<labelname> }} and {{ $value }} are supported.
      # Note: annotations.description corresponds to the alert content (Description) shown on the page; annotations.summary corresponds to the alert object (Summary)
      [ annotations: ]
        [ <labelname>: <labelvalue> ... ]
    # ---- Recording rules ----
    - # New metric name; the result of expr is saved as this metric
      record: <string>
      # PromQL expression
      expr: <string>
      # Additional labels for the recording rule
      [ labels: ]
        [ <labelname>: <labelvalue> ... ]
```
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    prometheus.tke.tencent.cloud.com/notice-id: notice-abcd
    prometheus.tke.tencent.cloud.com/notice-repeat-interval: 30m
  name: node-load-alert
  namespace: prom-8xpa3dzm
spec:
  groups:
  - name: node-load
    rules:
    - alert: NodeLoadLow
      annotations:
        description: node_load1 is {{ $value }} (below 10) for more than 5 minutes.
        summary: Low load on {{ $labels.instance }}
      expr: node_load1 < 10
      for: 5m
      labels:
        severity: warning
```
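The example above shows an alerting rule; a recording rule follows the same PrometheusRule format. The following sketch pre-aggregates a per-instance 5-minute average of node_load1 (the resource name, namespace, and metric name are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-load-record         # hypothetical name
  namespace: prom-8xpa3dzm       # replace with your <instance ID> namespace
spec:
  groups:
  - name: node-load-record
    rules:
    # Save the per-instance 5-minute average of node_load1 as a new metric
    - record: instance:node_load1:avg5m
      expr: avg_over_time(node_load1[5m])
```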