
MySQL Exporter Integration
Last updated: 2025-11-19 11:55:11

Scenarios

The MySQL exporter is specially designed and developed by the Prometheus community to collect MySQL/MariaDB database monitoring metrics. The exporter reports core database metrics, which can be used for exception alerting and displayed on the monitoring dashboard. Tencent Cloud Observability Platform (TCOP) Prometheus provides integration with MySQL Exporter and pre-configured Grafana monitoring dashboards.
Currently, the exporter supports MySQL 5.6 or later and MariaDB 10.1 or later. On earlier versions, some monitoring metrics may fail to be collected.
Note:
If the MySQL to be monitored is Tencent Cloud TencentDB for MySQL, it is recommended to use Cloud Monitor Integration in the Integration Center, which supports one-click collection of cloud product metrics.

Access Method

Method 1: One-Click Installation (Recommended)

Operation Steps

1. Log in to the TMP console.
2. Select and enter the corresponding Prometheus instance from the instance list.
3. On the instance details page, select Data Collection > Integration Center.
4. In the Integration Center, locate and click MySQL to open the installation window. On the installation page, enter details such as the metric collection name and address, then click Save.


Configuration Description

name: Exporter name. It should be unique and conform to the regular expression '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'.
username: MySQL username.
password: MySQL password.
address: MySQL connection address.
tag: Custom labels for metrics.
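The name pattern above is the standard DNS-1123 subdomain rule (lowercase alphanumeric labels separated by dots, with hyphens allowed inside a label). A quick way to check a candidate name before saving is a sketch like the following; the helper name is illustrative, not part of the console:

```python
import re

# The name pattern from the configuration description above (DNS-1123 subdomain).
NAME_RE = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')

def is_valid_name(name: str) -> bool:
    # A name is valid only if the whole string matches the pattern.
    return NAME_RE.match(name) is not None

print(is_valid_name("mysql-exporter-1"))  # True
print(is_valid_name("MySQL_Exporter"))    # False: uppercase and underscore
```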
Exporter Configuration

Parameter descriptions may vary by exporter version. For details, see the official documentation.
auto_increment.columns: Collect auto_increment columns and maximum value from information_schema.
binlog_size: Collect the current size of all registered binlog files.
engine_innodb_status: Collect from SHOW ENGINE INNODB STATUS.
engine_tokudb_status: Collect from SHOW ENGINE TOKUDB STATUS.
global_status: Collect from SHOW GLOBAL STATUS. The default is true.
global_variables: Collect from SHOW GLOBAL VARIABLES. The default is true.
info_schema.clientstats: Set to true to collect client statistical information if running with userstat=1.
info_schema.innodb_metrics: Collect metrics from information_schema.innodb_metrics.
info_schema.innodb_tablespaces: Collect metrics from information_schema.innodb_sys_tablespaces.
info_schema.innodb_cmp: Collect InnoDB compressed table metrics from information_schema.innodb_cmp. The default is true.
info_schema.innodb_cmpmem: Collect InnoDB buffer pool compression metrics from information_schema.innodb_cmpmem. The default is true.
info_schema.processlist: Collect thread state counts from information_schema.processlist.
info_schema.query_response_time: Collect query response time distribution if query_response_time_stats is ON. The default is true.
info_schema.tables: Collect metrics from information_schema.tables.
info_schema.tables.databases: Comma-separated list of databases to collect table statistics for. The default is '*', which means all databases.
info_schema.tablestats: Set to true to collect table statistics if running with userstat=1.
info_schema.schemastats: Set to true to collect schema statistics if running with userstat=1.
info_schema.userstats: Set to true to collect user statistical information if running with userstat=1.
perf_schema.eventsstatements: Collect metrics from performance_schema.events_statements_summary_by_digest.
perf_schema.eventsstatements.digest_text_limit: Maximum length of standardized statement text, default value is 120.
perf_schema.eventsstatements.limit: Limit the number of event statement summaries by response time, default value is 250.
perf_schema.eventsstatements.timelimit: Limit how old the 'last_seen' events of statements can be, in seconds; the default value is 86400.
perf_schema.eventsstatementssum: Aggregate metrics from performance_schema.events_statements_summary_by_digest.
perf_schema.eventswaits: Collect metrics from performance_schema.events_waits_summary_global_by_event_name.
perf_schema.file_events: Collect metrics from performance_schema.file_summary_by_event_name.
perf_schema.file_instances: Collect metrics from performance_schema.file_summary_by_instance.
perf_schema.indexiowaits: Collect metrics from performance_schema.table_io_waits_summary_by_index_usage.
perf_schema.tableiowaits: Collect metrics from performance_schema.table_io_waits_summary_by_table.
perf_schema.tablelocks: Collect metrics from performance_schema.table_lock_waits_summary_by_table.
perf_schema.replication_group_member_stats: Collect metrics from performance_schema.replication_group_member_stats.
perf_schema.replication_applier_status_by_worker: Collect metrics from performance_schema.replication_applier_status_by_worker.
slave_status: Collect from SHOW SLAVE STATUS. The default is true.
slave_hosts: Collect from SHOW SLAVE HOSTS.
heartbeat: Collect from heartbeat.
heartbeat.database: Database for heartbeat data collection. Default value is heartbeat.
heartbeat.table: Table for heartbeat data collection. Default value is heartbeat.

Method 2: Custom Installation

Note:
TKE is recommended for convenient installation and management of the Exporter.

Prerequisites

You have created a TKE cluster in the region and VPC of your TMP instance and created a namespace for the cluster.
In the TMP console, select and enter the corresponding Prometheus instance, then choose Data Collection > Integrate with TKE to locate the corresponding container cluster and complete the cluster association. See the guide Associating Clusters.

Operation Steps

Step 1: Database Authorization
As the MySQL exporter monitors a database by querying its status data, you need to grant the exporter access to the corresponding database instance. The account and password should be set based on the actual conditions. The authorization steps are as follows:
1. Log in to the TencentDB for MySQL console.
2. On the instance list page, click the name of the database for which to authorize the exporter to enter the database details page.
3. Select Database Management > Account Management to enter the account management page and create an account for monitoring based on the actual business needs.
4. Click Modify Permissions in the Operation column on the right of the account to grant the permissions required for monitoring.


Alternatively, you can grant permissions by executing the following commands on your Cloud Virtual Machine (CVM):
CREATE USER 'exporter'@'ip' IDENTIFIED BY 'password' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'ip';
Note:
We recommend setting a maximum number of connections for the account to avoid any impact on the database from monitoring data collection. However, not all database versions support this configuration; MariaDB 10.1, for example, does not. For more information, see the MariaDB documentation.
Step 2: Deploy Exporter
1. Log in to the TKE console.
2. Select Cluster in the left menu bar.
3. Click the ID/name of the target cluster to enter the cluster management page.
4. Manage MySQL connection strings using Secret.
4.1 On the left sidebar, select Workload > Deployment to enter the Deployment page.
4.2 In the top-right corner of the page, click Create via YAML to create a YAML configuration as detailed below.
You can use Kubernetes Secrets to manage and encrypt connection strings. When starting the MySQL exporter, you can reference the Secret key directly; adjust the connection string as needed. Below is a sample YAML configuration:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret-test
  namespace: mysql-demo
type: Opaque
stringData:
  datasource: "user:password@tcp(ip:port)/" # Corresponds to the MySQL connection string.
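The datasource value follows the DSN format of the Go MySQL driver that the exporter uses: user:password@tcp(ip:port)/ (note the trailing slash and no database name). A small sketch for assembling it; the helper name is illustrative:

```python
# Illustrative helper: builds the Go-style MySQL DSN expected in the
# DATA_SOURCE_NAME environment variable / Secret value above.
def build_dsn(user: str, password: str, host: str, port: int = 3306) -> str:
    # Format: user:password@tcp(host:port)/  (trailing slash, no database name)
    return f"{user}:{password}@tcp({host}:{port})/"

print(build_dsn("exporter", "password", "10.0.0.5"))
# exporter:password@tcp(10.0.0.5:3306)/
```

If the password contains characters that are special in the DSN syntax, consult the driver documentation for the required escaping.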

5. Deploy MySQL Exporter.
On the Deployment management page, select the target namespace to deploy the service. You can create the Deployment in the console; here, YAML is used to deploy the exporter. Below is a sample configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: mysql-exporter # Adjust the name based on business requirements; it is recommended to include information about the MySQL instance.
  name: mysql-exporter # Adjust the name based on business requirements; it is recommended to include information about the MySQL instance.
  namespace: mysql-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: mysql-exporter # Adjust the name based on business requirements; it is recommended to include information about the MySQL instance.
  template:
    metadata:
      labels:
        k8s-app: mysql-exporter # Adjust the name based on business requirements; it is recommended to include information about the MySQL instance.
    spec:
      containers:
        - env:
            - name: DATA_SOURCE_NAME
              valueFrom:
                secretKeyRef:
                  name: mysql-secret-test # Name of the Secret created in the previous step.
                  key: datasource # Key of the Secret created in the previous step.
          image: ccr.ccs.tencentyun.com/rig-agent/mysqld-exporter:v0.12.1
          imagePullPolicy: IfNotPresent
          name: mysql-exporter
          ports:
            - containerPort: 9104
              name: metric-port
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

6. Verification.
6.1 Click the newly created Deployment on the Deployment page to enter the Deployment management page.
6.2 Click the Log tab to confirm that the exporter has started successfully and has exposed its metrics address.

6.3 Click the Pod Management tab to enter the Pod page.
6.4 Click Log in in the Operation column on the right to log in to the Pod. In the command-line window, run the following wget command to retrieve the MySQL metrics. If no data is returned, check whether the connection string is correct.
wget -O- localhost:9104/metrics
If the command succeeds, MySQL metrics are returned in the Prometheus text format.
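The wget output is in the Prometheus text exposition format. As a rough health check, you can parse it and look at the mysql_up gauge, which the exporter sets to 1 when it can reach the database. This sketch ignores labels and timestamps, and its function name is illustrative:

```python
# Minimal sketch: parse Prometheus text-format output (as returned by
# `wget -O- localhost:9104/metrics`) into a {metric_name: value} dict.
def parse_metrics(text: str) -> dict:
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank, HELP, and TYPE lines
            continue
        name, _, value = line.rpartition(" ")  # "metric value" -> (metric, value)
        if name:
            metrics[name] = float(value)
    return metrics

sample = """# HELP mysql_up Whether the MySQL server is up.
# TYPE mysql_up gauge
mysql_up 1
mysql_global_status_threads_running 2
"""
print(parse_metrics(sample)["mysql_up"])  # 1.0
```

This is only a sanity check; metrics with labels (curly braces) or timestamps would need a real parser such as a Prometheus client library.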



Step 3: Add Collection Task
1. Log in to the TMP console and select the corresponding Prometheus instance to go to the management page.
2. Select Data Collection > Integrate with TKE, choose the associated cluster, and add a collection task via Data Collection Configuration > Customize Monitoring Configuration > Via YAML.
3. Add a PodMonitor via service discovery to define the collection task. A YAML example is as follows:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mysql-exporter # Enter a unique name.
  namespace: cm-prometheus # Pay-as-you-go instance: the cluster's namespace. Monthly subscription instance (no longer available): the namespace is fixed; do not change it.
spec:
  podMetricsEndpoints:
    - interval: 30s
      port: metric-port # Enter the port name of the Prometheus exporter in the Pod YAML file.
      path: /metrics # Enter the path of the Prometheus exporter. Default value: /metrics.
      relabelings:
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: instance
          replacement: 'cdb-xxxxxx' # Replace with the corresponding MySQL instance ID.
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: ip
          replacement: '1.x.x.x' # Replace with the corresponding MySQL instance IP.
  namespaceSelector: # Select the namespace of the Pod to be monitored.
    matchNames:
      - mysql-demo
  selector: # Enter the labels of the Pod to be monitored to locate the target Pod.
    matchLabels:
      k8s-app: mysql-exporter
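The relabelings above use the replace action: the values of sourceLabels are joined, matched against regex, and the replacement is written to targetLabel. A simplified sketch of that behavior for the literal replacements used here (real Prometheus relabeling additionally supports $1-style capture references in the replacement):

```python
import re

# Simplified model of the Prometheus 'replace' relabel action, covering only
# the literal-replacement case used in the PodMonitor above.
def relabel_replace(labels: dict, source_labels, regex: str,
                    target_label: str, replacement: str) -> dict:
    value = ";".join(labels.get(l, "") for l in source_labels)  # join source values
    if re.fullmatch(regex, value):           # regex must match the whole value
        labels = dict(labels)                # copy, then write the target label
        labels[target_label] = replacement
    return labels

labels = {"instance": "10.0.0.5:9104"}
labels = relabel_replace(labels, ["instance"], "(.*)", "instance", "cdb-xxxxxx")
labels = relabel_replace(labels, ["instance"], "(.*)", "ip", "1.x.x.x")
print(labels)  # {'instance': 'cdb-xxxxxx', 'ip': '1.x.x.x'}
```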


Viewing Monitoring Data

Prerequisites

The Prometheus instance has been bound to a Grafana instance.

Operation Steps

1. Log in to the TMP console and select the corresponding Prometheus instance to go to the management page.
2. Select Data Collection > Integration Center to enter the Integration Center page. Locate and click MySQL, then under Dashboard > Dashboard Operation, click Install/Upgrade to install the corresponding Grafana dashboard.
3. Select Integrated. In the integrated list, click the Grafana icon to open the MySQL monitoring dashboard and view the monitoring data of the instance.





Configuring Alarms

TMP has some built-in MySQL alerting rule templates. You can adjust the corresponding thresholds to add alerting rules based on your actual business conditions. For more information, please see Create Alarm Policy.

Appendix: Data Collection Parameters of MySQL Exporter

The MySQL exporter uses collectors to control data collection. The specific parameters, with the minimum MySQL version each requires, are as follows:
collect.auto_increment.columns (MySQL 5.1+): Collects auto_increment columns and max values from information_schema.
collect.binlog_size (MySQL 5.1+): Collects the current size of all registered binlog files.
collect.engine_innodb_status (MySQL 5.1+): Collects the status data from SHOW ENGINE INNODB STATUS.
collect.engine_tokudb_status (MySQL 5.6+): Collects the status data from SHOW ENGINE TOKUDB STATUS.
collect.global_status (MySQL 5.1+): Collects the status data from SHOW GLOBAL STATUS (enabled by default).
collect.global_variables (MySQL 5.1+): Collects the status data from SHOW GLOBAL VARIABLES (enabled by default).
collect.info_schema.clientstats (MySQL 5.5+): If userstat=1 is set, set this parameter to true to enable client data collection.
collect.info_schema.innodb_metrics (MySQL 5.6+): Collects the monitoring data from information_schema.innodb_metrics.
collect.info_schema.innodb_tablespaces (MySQL 5.7+): Collects the monitoring data from information_schema.innodb_sys_tablespaces.
collect.info_schema.innodb_cmp (MySQL 5.5+): Collects the monitoring data of compressed InnoDB tables from information_schema.innodb_cmp.
collect.info_schema.innodb_cmpmem (MySQL 5.5+): Collects the monitoring data of InnoDB buffer pool compression from information_schema.innodb_cmpmem.
collect.info_schema.processlist (MySQL 5.1+): Collects the monitoring data of thread status counts from information_schema.processlist.
collect.info_schema.processlist.min_time (MySQL 5.1+): Minimum time a thread must be in each status to be counted. Default value: 0.
collect.info_schema.query_response_time (MySQL 5.5+): Collects the query response time distribution if query_response_time_stats is set to ON.
collect.info_schema.replica_host (MySQL 5.6+): Collects the status data from information_schema.replica_host_status.
collect.info_schema.tables (MySQL 5.1+): Collects the status data from information_schema.tables.
collect.info_schema.tables.databases (MySQL 5.1+): Sets the list of databases to collect table statistics for, or '*' for all.
collect.info_schema.tablestats (MySQL 5.1+): If userstat=1 is set, set this parameter to true to collect table statistics.
collect.info_schema.schemastats (MySQL 5.1+): If userstat=1 is set, set this parameter to true to collect schema statistics.
collect.info_schema.userstats (MySQL 5.1+): If userstat=1 is set, set this parameter to true to collect user statistics.
collect.perf_schema.eventsstatements (MySQL 5.6+): Collects the monitoring data from performance_schema.events_statements_summary_by_digest.
collect.perf_schema.eventsstatements.digest_text_limit (MySQL 5.6+): Sets the maximum length of the normalized statement text. Default value: 120.
collect.perf_schema.eventsstatements.limit (MySQL 5.6+): Limits the number of event statements. Default value: 250.
collect.perf_schema.eventsstatements.timelimit (MySQL 5.6+): Limits how old the 'last_seen' events of statements can be, in seconds. Default value: 86400.
collect.perf_schema.eventsstatementssum (MySQL 5.7+): Collects the summed monitoring data from performance_schema.events_statements_summary_by_digest.
collect.perf_schema.eventswaits (MySQL 5.5+): Collects the monitoring data from performance_schema.events_waits_summary_global_by_event_name.
collect.perf_schema.file_events (MySQL 5.6+): Collects the monitoring data from performance_schema.file_summary_by_event_name.
collect.perf_schema.file_instances (MySQL 5.5+): Collects the monitoring data from performance_schema.file_summary_by_instance.
collect.perf_schema.indexiowaits (MySQL 5.6+): Collects the monitoring data from performance_schema.table_io_waits_summary_by_index_usage.
collect.perf_schema.tableiowaits (MySQL 5.6+): Collects the monitoring data from performance_schema.table_io_waits_summary_by_table.
collect.perf_schema.tablelocks (MySQL 5.6+): Collects the monitoring data from performance_schema.table_lock_waits_summary_by_table.
collect.perf_schema.replication_group_members (MySQL 5.7+): Collects the monitoring data from performance_schema.replication_group_members.
collect.perf_schema.replication_group_member_stats (MySQL 5.7+): Collects the monitoring data from performance_schema.replication_group_member_stats.
collect.perf_schema.replication_applier_status_by_worker (MySQL 5.7+): Collects the monitoring data from performance_schema.replication_applier_status_by_worker.
collect.slave_status (MySQL 5.1+): Collects the monitoring data from SHOW SLAVE STATUS (enabled by default).
collect.slave_hosts (MySQL 5.1+): Collects the monitoring data from SHOW SLAVE HOSTS.
collect.heartbeat (MySQL 5.1+): Collects the monitoring data from heartbeat.
collect.heartbeat.database (MySQL 5.1+): Database from which to collect heartbeat data. Default value: heartbeat.
collect.heartbeat.table (MySQL 5.1+): Table from which to collect heartbeat data. Default value: heartbeat.
collect.heartbeat.utc (MySQL 5.1+): Uses UTC for timestamps of the current server (pt-heartbeat is called with --utc). Default value: false.
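On the command line, these collector parameters typically appear as boolean flags of the exporter binary, enabled as --collect.x and disabled as --no-collect.x (the exact flag set depends on the exporter version; check --help for your build). A sketch of that mapping:

```python
# Sketch: map a dict of collector settings to kingpin-style boolean flags,
# as commonly accepted by mysqld_exporter. Flag spelling is version-dependent.
def collector_flags(settings: dict) -> list:
    flags = []
    for name, enabled in sorted(settings.items()):
        flags.append(f"--{name}" if enabled else f"--no-{name}")
    return flags

print(collector_flags({"collect.global_status": True, "collect.slave_hosts": False}))
# ['--collect.global_status', '--no-collect.slave_hosts']
```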

Global Configuration Parameters

config.my-cnf: Path of the .my.cnf file to read MySQL credentials from. Default value: ~/.my.cnf.
log.level: Log level. Default value: info.
exporter.lock_wait_timeout: Sets a lock_wait_timeout (in seconds) on the connection to avoid long metadata locking. Default value: 2.
exporter.log_slow_filter: Adds a log_slow_filter to avoid slow query logging of scrapes. Note: not supported by Oracle MySQL.
web.listen-address: Web port listening address.
web.telemetry-path: Metric API path.
version: Prints the version information.

Heartbeat Detection

If collect.heartbeat is enabled, mysqld_exporter will scrape replication delay measured by heartbeat mechanisms.
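The heartbeat mechanism works by periodically writing the current timestamp to a table on the source (for example, with pt-heartbeat) and, on the replica, comparing the last replicated timestamp with the replica's clock. A minimal sketch of the delay computation; the function name is illustrative:

```python
from datetime import datetime, timezone

# Replication delay = replica's "now" minus the last heartbeat timestamp
# that replicated over from the source.
def replication_delay_seconds(last_heartbeat_ts: datetime, now: datetime) -> float:
    return (now - last_heartbeat_ts).total_seconds()

now = datetime(2025, 1, 1, 12, 0, 30, tzinfo=timezone.utc)
ts = datetime(2025, 1, 1, 12, 0, 27, tzinfo=timezone.utc)
print(replication_delay_seconds(ts, now))  # 3.0
```

This is why collect.heartbeat.utc matters: if the writer and the replica disagree on time zone, the computed delay is skewed by the offset.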