This document describes how to query and download Tencent Cloud Distributed Cache audit logs to help you perform log search and analysis.
Use Cases
When you face the following operations or security management requirements, you can use the audit log feature:
Multi-Dimensional Efficient Log Search: When facing massive log volumes, you can filter by combined conditions such as time range, client IP address, user account, and executed command, helping you quickly and accurately locate the target data.
Fault Troubleshooting and Security Tracing: When the database experiences abnormal access, accidental data deletion, or performance bottlenecks, you can trace the execution path of historical commands to reconstruct the event, quickly determine responsibility, and eliminate security risks.
Offline Analysis and Compliance Archiving: You can export retrieved audit details to a local file, which is suitable for periodic enterprise security compliance reviews, long-term data archiving, or handing data over to a specialized security team for in-depth offline analysis.
Querying Audit Logs
1. Log in to the console.
2. In the left sidebar, select Database Audit under the Distributed Cache (Redis OSS-Compatible) menu.
3. At the top of the Database Audit page, select the region to view, and select the Audit Log tab.
4. On the Audit Log tab, you can set the following search conditions to quickly locate target data. After configuration, click Search to return a list of audit logs that meet the conditions.
Audit Instance: In the Audit Instance dropdown list, select the target database instance whose logs you want to view.
Time Range: In the time selection area, select or customize the log time period to query.
Note:
A single query can retrieve up to 7 days of log data. If the time range exceeds this duration, the query must be performed in batches.
Execute Command: In the Execute Command dropdown list, set a matching rule (supported options: include, exclude, equal, not equal), and enter the specific command keyword in the input box.
Advanced Filter (Optional): Click Advanced Filter to expand more options, supporting multi-dimensional combined filtering by client IP, user account, command type, database ID, error info, and execution duration.
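Because a single query covers at most 7 days, a longer time range has to be queried in batches. The helper below is a minimal illustrative sketch of that batching logic (the function name and signature are our own, not part of the console or any API):

```python
from datetime import datetime, timedelta

def split_into_windows(start: datetime, end: datetime, max_days: int = 7):
    """Split [start, end) into consecutive windows of at most max_days each."""
    windows = []
    cursor = start
    step = timedelta(days=max_days)
    while cursor < end:
        # Each window is clipped so it never extends past the overall end.
        windows.append((cursor, min(cursor + step, end)))
        cursor += step
    return windows

# Example: a 20-day range becomes three query windows of 7 + 7 + 6 days.
wins = split_into_windows(datetime(2024, 1, 1), datetime(2024, 1, 21))
```

You would then run one console (or API) query per returned window and merge the results offline.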
5. View the details of audit logs in the audit log list.
| Field | Description |
| --- | --- |
| Execution Time | Records the exact timestamp when the command is submitted and starts executing. |
| Client IP | Source client IP address initiating the access request. Commonly used to trace abnormal access sources or troubleshoot network connectivity issues. |
| User Account | The database authentication account used for executing the command. |
| Database ID | The logical database index where the current operation is located. |
| Command Type | The specific operation command keyword triggered (such as GET/SET for string operations, or HSET/LPUSH for hash/list operations). |
| Node ID | The unique identifier of the underlying data node that actually receives and processes the command in a distributed cluster architecture. |
| Execute Command | The full content of the command request sent by the client. |
| Execution Duration (milliseconds) | The actual time consumed by the database kernel to process and complete the command. This metric is a critical indicator for troubleshooting slow queries and business lag. |
| Error Info | If the command execution fails or is rejected by security policies, the specific error code or exception description is recorded here; if the execution succeeds, this field is typically empty or displays a success status. |
Downloading Audit Logs
You can export audit logs as files for offline analysis and archiving:
1. On the Audit Log tab, enter query conditions to filter audit logs.
2. Above the audit log list, click the corresponding icon to customize the fields displayed in the list, and click the icon for creating a log file.
3. In the Generate Log File pop-up window, confirm the instance ID, log time range, and other information, and select the fields to be included in the log file.
All fields: Exports all detailed fields contained in the logs (such as execution time, client IP address, full command, and so on), regardless of the current display status in the page list.
Interaction with custom list fields: Only exports the fields currently configured and displayed in the log search list.
4. Click Generate File. On the Audit Log List page, you can view the file generation progress. After the file is generated, copy the Private Network Download Address and use this private network URL to download the log file for local viewing.
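Once the private network download address has been copied from the console, the final download step can be scripted. This is a minimal sketch; the URL below is a placeholder, and the script must run on a cloud server that can reach the private network address, as noted below.

```python
import urllib.request

def download_log_file(url: str, dest: str) -> None:
    """Stream the generated audit log file from the given URL to a local path."""
    with urllib.request.urlopen(url, timeout=60) as resp, open(dest, "wb") as f:
        # Read in 64 KiB chunks so large log files are not held in memory at once.
        while chunk := resp.read(1 << 16):
            f.write(chunk)

# Placeholder usage; substitute the address copied from the console:
# download_log_file("http://<private-download-address>/audit.log", "audit.log")
```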
Note:
When generating and downloading audit log files, please note the following limitations and recommendations:
Network Environment Restrictions: Currently, log files are only available for download via private network URLs. Please ensure you use a cloud server in the same region as the database instance for downloading (for example: log files for instances in the Beijing region must be downloaded via cloud servers in the Beijing region).
File Validity Period: Generated log files are retained for 24 hours. The system will automatically clean up the links after expiration. Please complete the download as soon as possible after the file is generated.
Export Quota Limitation: A single database instance can retain up to 30 log export files simultaneously. Once the upper limit is reached, new files cannot be generated. We recommend that you delete completed historical records in the console promptly after downloading them locally.
Failure Handling Recommendations: If the file generation status displays as "Failed", it is usually caused by timeout or memory overflow due to excessive log data volume in a single export. We recommend narrowing down the query time range and creating/downloading files in batches.
FAQs
Q1: Why are some commands not recorded in the audit logs?
1. Audit Type Mismatch: You have enabled "write command" auditing, but the command executed is a read command (such as GET), which will not be recorded.
2. Degradation Protection: When P99 latency exceeds the threshold, the system automatically discards audit data to prioritize ensuring business availability.
3. Traffic Exceeded: When audit traffic exceeds the limit, some audit data will be discarded.
It is recommended to check the audit type configuration and view the instance monitoring metrics to confirm whether degradation has occurred.
Q2: Why are the downloaded audit logs incomplete? What factors may affect this?
Incomplete exported log data is usually caused by the following factors. We recommend investigating based on your actual situation:
1. Export Field Selection Limitation:
When creating log files, if you select "Interaction with custom list fields", the exported files will be generated strictly based on the columns currently displayed in the list, and hidden columns not shown will be filtered out. To retrieve all details, be sure to select "All Fields" during export.
2. Single Export Data Volume Reaches Upper Limit:
To ensure system stability, a single generated file may have limitations on the maximum number of entries or file size. If the log volume within the selected time period is abnormally large, the excess portion may be truncated. Recommendation: Narrow the time range for each query (for example, changing from "querying one day" to "querying by hour") and generate/download files in batches.
3. Logs Have Triggered Expiration Cleanup:
The selected time range may include expired logs. The system strictly adheres to the configured "Audit Log Retention Period" (such as 7 or 30 days) to automatically purge expired data. Purged data cannot be exported.
4. Underlying Collection Filtered or Degraded (Even the Console Cannot Query Them):
Rule Filtering: If the configured audit type is only "write command" or "read command", command types not covered will not be recorded.
Degradation Discarding: During peak business hours, if the instance latency triggers the configured P99 degradation threshold or the instantaneous traffic volume exceeds the collection limit, the system will actively discard some audit requests to ensure high availability of core services.
Q3: After logs transition from "high-frequency storage" to "low-frequency storage", can they still be searched and viewed normally in the console?
Logs stored in low-frequency storage can still be normally searched and viewed. They support precise search in the console using various conditions (such as command, IP address, time, and so on). The main difference between the two storage types lies in the underlying media. Query response times in low-frequency storage may be slightly longer than in high-frequency storage, but they fully meet the requirements for long-term post-incident tracing and compliance reviews.