Tencent Cloud

TencentDB for MongoDB


Optimizing Indexes to Break Through Read/Write Performance Bottlenecks

Last updated: 2026-05-14 11:04:54
Indexes play a crucial role in MongoDB database query performance. By using the minimum number of indexes to meet user query requirements, you can significantly improve database performance and reduce storage costs. This document introduces a series of index optimization analysis processes to help you solve database read/write performance bottleneck issues.

Abnormal Phenomena

During routine operations, log in to the TencentDB for MongoDB console, click the instance ID to go to the Instance Details page, and view the following information.
Select the System Monitoring tab to check the instance's monitoring data.
It is observed that the CPU consumption of the cluster's Mongod nodes is excessively high, with CPU utilization frequently approaching or even reaching 90% to 100%.
The disk I/O operations per second (IOPS) remain consistently high, indicating excessive I/O consumption. A single node accounts for 60% of the entire server's I/O resource consumption.
Select the DMC tab, then select the Slow Log Query tab to view the slow log.
A large number of slow logs are generated on the instance. These logs contain a high volume of various find and update requests, which can reach thousands per second during peak periods.
Slow log types vary, query conditions are numerous, and all slow log queries have matching indexes. Their content is shown below.
Mon Aug 2 10:34:24.928 I COMMAND [conn10480929] command xxx.xxx command: find { find: "xxx", filter: { $and: [ { alxxxId: "xxx" }, { state: 0 }, { itemTagList: { $in: [ xx ] } }, { persxxal: 0 } ] }, limit: 3, maxTimeMS: 10000 } planSummary: IXSCAN { alxxxId: 1.0, itemTagList: 1.0 } keysExamined:1650 docsExamined:1650 hasSortStage:0 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:15 nreturned:3 reslen:8129 locks:{ Global: { acquireCount: { r: 32 } }, Database: { acquireCount: { r: 16 } }, Collection: { acquireCount: { r: 16 } } } protocol:op_command 227ms

Mon Aug 2 10:34:22.965 I COMMAND [conn10301893] command xx.txxx command: find { find: "txxitem", filter: { $and: [ { itxxxId: "xxxx" }, { state: 0 }, { itemTagList: { $in: [ xxx ] } }, { persxxal: 0 } ] }, limit: 3, maxTimeMS: 10000 } planSummary: IXSCAN { alxxxId: 1.0, itemTagList: 1.0 } keysExamined:1498 docsExamined:1498 hasSortStage:0 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:12 nreturned:3 reslen:8039 locks:{ Global: { acquireCount: { r: 26 } }, Database: { acquireCount: { r: 13 } }, Collection: { acquireCount: { r: 13 } } } protocol:op_command 158ms

Cause Analysis

Analysis of the slow logs reveals that these query requests all use the { alxxxId: 1.0, itemTagList: 1.0 } index. In the second log entry, for example, keysExamined is 1498 and docsExamined is also 1498, yet only 3 documents were returned (nreturned: 3). In other words, 1498 index entries and documents were scanned to find just 3 matching entries. The key factor degrading read/write performance is therefore a suboptimal index configuration.
Note:
The keysExamined metric indicates the number of index entries scanned. The docsExamined metric represents the number of document entries scanned. Higher values for keysExamined and docsExamined suggest that either no index exists or the index selectivity is low.
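To illustrate how these metrics relate, the following Python sketch (a hypothetical helper, not part of any MongoDB tooling) pulls the three counters out of a slow-log line and computes a scan-to-return ratio; a high ratio signals a poorly selective index:

```python
import re

def scan_stats(log_line):
    """Extract keysExamined, docsExamined and nreturned from a slow-log
    line, plus the scan-to-return ratio (higher means less selective)."""
    stats = {}
    for key in ("keysExamined", "docsExamined", "nreturned"):
        m = re.search(rf"{key}:(\d+)", log_line)
        if m:
            stats[key] = int(m.group(1))
    stats["scan_ratio"] = stats["keysExamined"] / max(stats["nreturned"], 1)
    return stats

# Fragment of the first slow-log entry above.
line = ("planSummary: IXSCAN { alxxxId: 1.0, itemTagList: 1.0 } "
        "keysExamined:1650 docsExamined:1650 nreturned:3")
print(scan_stats(line))  # scan_ratio: 550.0
```

For the first log entry, 1650 keys examined for 3 returned documents gives a ratio of 550, far above the ideal of 1.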

Index Optimization Process

Step 1: Collecting User Data Patterns

Commonly used business query and update SQL statements are as follows:
Based on AlxxxId (User ID) + itxxxId (single or multiple)
Query the count based on AlxxxId.
Pagination queries are performed based on AlxxxId and the time range (createTime). Some queries concatenate the state field and other fields.
Query based on a combination of AlxxxId, ParentAlxxxId, parentItxxxId, and state.
Query data based on ItxxxId (single or multiple).
Query based on a combination of AlxxxId, state, and updateTime.
Query based on a combination of AlxxxId, state, createTime, and totalStock (inventory quantity).
Based on a combination of AlxxxId (User ID) + itxxxId (single or multiple) + any other fields.
Query based on AlxxxId, digitalxxxrmarkId (watermark ID), and state.
Query based on AlxxxId, itemTagList (tag IDs), state, and other fields.
Query based on AlxxxId + itxxxId (single or multiple) + any other fields.
Other queries.
Commonly used business statistical count query SQL statements are as follows:
A combination of AlxxxId, state, and persxxal.
A combination of AlxxxId, state, and itemType.
A combination of AlxxxId (User ID) + itxxxId (single or multiple) + any other fields.

Step 2: Obtaining Existing Cluster Indexes

The index information for the table is obtained via `db.xxx.getIndexes()`. The queries are complex and the indexes numerous: 30 in total, as shown below.
{ "alxxxId" : 1, "state" : -1, "updateTime" : -1, "itxxxId" : -1, "persxxal" : 1, "srcItxxxId" : -1 }
{ "alxxxId" : 1, "image" : 1 }
{ "itexxxList.vidxxCheck" : 1, "itemType" : 1, "state" : 1 }
{ "alxxxId" : 1, "state" : -1, "newsendTime" : -1, "itxxxId" : 1, "persxxal" : 1 }
{ "_id" : 1 }
{ "alxxxId" : 1, "createTime" : -1, "checkStatus" : 1 }
{ "alxxxId" : 1, "parentItxxxId" : -1, "state" : -1, "updateTime" : -1, "persxxal" : 1, "srcItxxxId" : -1 }
{ "alxxxId" : 1, "state" : -1, "parentItxxxId" : 1, "updateTime" : -1, "persxxal" : -1 }
{ "srcItxxxId" : 1 }
{ "createTime" : 1 }
{ "itexxxList.boyunState" : -1, "itexxxList.wozhituUploadServerId": -1, "itexxxList.photoQiniuUrl" : 1, "itexxxList.sourceType" : 1 }
{ "alxxxId" : 1, "state" : 1, "digitalxxxrmarkId" : 1, "updateTime" : -1 }
{ "itxxxId" : -1 }
{ "alxxxId" : 1, "parentItxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 }
{ "alxxxId" : 1, "videoCover" : 1 }
{ "alxxxId" : 1, "itemType" : 1 }
{ "alxxxId" : 1, "state" : -1, "itemType" : 1, "persxxal" : 1, "updateTime" : 1 }
{ "alxxxId" : 1, "itxxxId" : 1 }
{ "itxxxId" : 1, "alxxxId" : 1 }
{ "alxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 }
{ "alxxxId" : 1, "itemTagList" : 1 }
{ "itexxxList.photoQiniuUrl" : 1, "itexxxList.boyunState" : -1, "itexxxList.sourceType" : 1, "itexxxList.wozhituUploadServerId" : -1 }
{ "alxxxId" : 1, "parentItxxxId" : 1, "state" : 1 }
{ "alxxxId" : 1, "parentItxxxId" : 1, "updateTime" : 1 }
{ "updateTime" : 1 }
{ "itemPhoxxIdList" : -1 }
{ "alxxxId" : 1, "state" : -1, "isTop" : 1 }
{ "alxxxId" : 1, "state" : 1, "itemResxxxIdList" : 1, "updateTime" : -1 }
{ "alxxxId" : 1, "state" : -1, "itexxxList.photoQiniuUrl" : 1 }
{ "itexxxList.qiniuStatus" : 1, "itexxxList.photoNetUrl" : 1, "itexxxList.photoQiniuUrl" : 1 }
{ "itemResxxxIdList" : 1 }

Step 3: Optimizing Indexes

Deleting Unused Indexes

MongoDB reports the number of hits of each index via the $indexStats aggregation stage, as follows:
> db.xxxxx.aggregate([{ "$indexStats": {} }])
{ "name" : "alxxxId_1_parentItxxxId_1_parentAlxxxId_1", "key" : { "alxxxId" : 1, "parentItxxxId" : 1, "parentAlxxxId" : 1 },"host" : "TENCENT64.site:7014", "accesses" : { "ops" : NumberLong(11236765),"since" : ISODate("2020-08-17T06:39:43.840Z") } }
The field meanings are explained as follows.
name: The index name. Statistics are performed on this index name.
ops: The number of index hits, which is the number of times this index is hit by query requests across all queries. If the number of hits is 0 or very low, it indicates that the index is rarely selected as the optimal index and can be considered a useless index.
Use the index statistics command to obtain the hit counts of all indexes, as shown below. Indexes whose hit count is 0 or very low can be deleted directly. Given that the business has been running for some time, indexes with ops below 10,000 are also deleted. In total, 11 useless indexes can be deleted, leaving 19 useful indexes (30 - 11 = 19).
db.xxx.aggregate([{ "$indexStats": {} }])
{ "alxxxId" : 1, "state" : -1, "updateTime" : -1, "itxxxId" : -1, "persxxal" : 1, "srcItxxxId" : -1 } "ops" : NumberLong(88518502)
{ "alxxxId" : 1, "image" : 1 } "ops" : NumberLong(293104)
{ "itexxxList.vidxxCheck" : 1, "itemType" : 1, "state" : 1 } "ops" : NumberLong(0)
{ "alxxxId" : 1, "state" : -1, "newsendTime" : -1, "itxxxId" : -1, "persxxal" : 1 } "ops" : NumberLong(33361216)
{ "_id" : 1 } "ops" : NumberLong(3987)
{ "alxxxId" : 1, "createTime" : 1, "checkStatus" : 1 } "ops" : NumberLong(20042796)
{ "alxxxId" : 1, "parentItxxxId" : -1, "state" : -1, "updateTime" : -1, "persxxal" : 1, "srcItxxxId" : -1 } "ops" : NumberLong(43042796)
{ "alxxxId" : 1, "state" : -1, "parentItxxxId" : 1, "updateTime" : -1, "persxxal" : -1 } "ops" : NumberLong(3042796)
{ "itxxxId" : -1 } "ops" : NumberLong(38854593)
{ "srcItxxxId" : -1 } "ops" : NumberLong(0)
{ "createTime" : 1 } "ops" : NumberLong(62)
{ "itexxxList.boyunState" : -1, "itexxxList.wozhituUploadServerId" : -1, "itexxxList.photoQiniuUrl" : 1, "itexxxList.sourceType" : 1 } "ops" : NumberLong(0)
{ "alxxxId" : 1, "state" : 1, "digitalxxxrmarkId" : 1, "updateTime" : -1 } "ops" : NumberLong(140238342)
{ "alxxxId" : 1, "parentItxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 } "ops" : NumberLong(132237254)
{ "alxxxId" : 1, "videoCover" : 1 } "ops" : NumberLong(2921857)
{ "alxxxId" : 1, "itemType" : 1 } "ops" : NumberLong(457)
{ "alxxxId" : 1, "state" : -1, "itemType" : 1, "persxxal" : 1, "itxxxId" : 1 } "ops" : NumberLong(68730734)
{ "alxxxId" : 1, "itxxxId" : 1 } "ops" : NumberLong(232360252)
{ "itxxxId" : 1, "alxxxId" : 1 } "ops" : NumberLong(145640252)
{ "alxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 } "ops" : NumberLong(689891)
{ "alxxxId" : 1, "itemTagList" : 1 } "ops" : NumberLong(2898693682)
{ "itexxxList.photoQiniuUrl" : 1, "itexxxList.boyunState" : 1, "itexxxList.sourceType" : 1, "itexxxList.wozhituUploadServerId" : 1 } "ops" : NumberLong(511303207)
{ "alxxxId" : 1, "parentItxxxId" : 1, "state" : 1 } "ops" : NumberLong(0)
{ "alxxxId" : 1, "parentItxxxId" : 1, "updateTime" : 1 } "ops" : NumberLong(0)
{ "updateTime" : 1 } "ops" : NumberLong(1397)
{ "itemPhoxxIdList" : -1 } "ops" : NumberLong(0)
{ "alxxxId" : 1, "state" : -1, "isTop" : 1 } "ops" : NumberLong(213305)
{ "alxxxId" : 1, "state" : 1, "itemResxxxIdList" : 1, "updateTime" : 1 } "ops" : NumberLong(2591780)
{ "alxxxId" : 1, "state" : 1, "itexxxList.photoQiniuUrl" : 1} "ops" : NumberLong(23505)
{ "itexxxList.qiniuStatus" : 1, "itexxxList.photoNetUrl" : 1, "itexxxList.photoQiniuUrl" : 1 } "ops" : NumberLong(0)
{ "itemResxxxIdList" : 1 } "ops" : NumberLong(7)
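The pruning rule above can be sketched as a small filter over $indexStats output. The helper and sample data below are hypothetical, abbreviated from the list above; note that the mandatory _id index is kept even though its hit count is below the cutoff:

```python
# The 10,000-hit cutoff used above; _id_ is mandatory and never dropped.
OPS_THRESHOLD = 10_000

def prune_candidates(index_stats):
    """Return the names of indexes whose hit count falls below the
    threshold, excluding the mandatory _id index."""
    return [s["name"] for s in index_stats
            if s["name"] != "_id_" and s["accesses"]["ops"] < OPS_THRESHOLD]

# Hypothetical $indexStats output, abbreviated from the list above.
stats = [
    {"name": "_id_", "accesses": {"ops": 3987}},
    {"name": "srcItxxxId_-1", "accesses": {"ops": 0}},
    {"name": "alxxxId_1_itemTagList_1", "accesses": {"ops": 2898693682}},
    {"name": "createTime_1", "accesses": {"ops": 62}},
]
print(prune_candidates(stats))  # ['srcItxxxId_-1', 'createTime_1']
```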

Deleting Duplicate Indexes

Index duplication caused by query field order. For this business, different developers created the two indexes behind the queries below. Analysis shows the two queries serve the same purpose, so creating either one of the indexes is sufficient.
db.xxxx.find({ "alxxxId" : xxx, "itxxxId" : xxx })
db.xxxx.find({ "itxxxId" : xxx, "alxxxId" : xxx })
Index duplication caused by leftmost prefix matching. Of the two indexes { itxxxId : 1, alxxxId : 1 } and { itxxxId : 1 }, the latter is the duplicate: it is a leftmost prefix of the former.
Index duplication is caused by inclusion relationships.
{ "alxxxId" : 1, "parentItxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 }
{ "alxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 }
{ "alxxxId" : 1, "state" : 1 }
These three indexes correspond to the following three queries:
db.xxx.find({ "alxxxId" : xxx, "parentItxxxId" : xx, "parentAlxxxId" : xxx, "state" : xxx })
db.xxx.find({ "alxxxId" : xxx, "parentAlxxxId" : xx, "state" : xxx })
db.xxx.find({ "alxxxId" : xxx, "state" : xxx })
Since these queries share common leading fields, the three indexes can be merged into a single index that satisfies all three queries:
{ "alxxxId" : 1, "state" : 1, "parentAlxxxId" : 1, "parentItxxxId" : 1 }
After duplicate indexes are merged and cleaned, the following two indexes can be retained.
{ "itxxxId" : 1, "alxxxId" : 1 }
{ "alxxxId" : 1, "parentItxxxId" : 1, "parentAlxxxId" : 1, "state" : 1 }
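The leftmost-prefix rule can be checked mechanically. The sketch below is a hypothetical helper that treats an index as a list of (field, direction) pairs:

```python
def is_prefix_duplicate(candidate, other):
    """True if `candidate` is a strict leftmost prefix of `other`:
    any query the shorter index can serve, the longer index can serve
    too, so the shorter index is redundant."""
    return len(candidate) < len(other) and other[:len(candidate)] == candidate

idx_short = [("itxxxId", 1)]
idx_long = [("itxxxId", 1), ("alxxxId", 1)]
print(is_prefix_duplicate(idx_short, idx_long))  # True
print(is_prefix_duplicate(idx_long, idx_short))  # False
```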

Analyzing Index Uniqueness and Removing Overlapping Indexes

By analyzing the field combinations in the table data, you can find that alxxxId and itxxxId are high-frequency fields. Analyzing the schema and randomly sampling a portion of the data shows that the combination of these two fields is unique: each (alxxxId, itxxxId) pair identifies exactly one document, so combining these two fields with any other field is also unique. Consequently, the following indexes, all of which contain both fields, can be merged into the single index { itxxxId : 1, alxxxId : 1 }.
{ "alxxxId" : 1, "state" : -1, "updateTime" : -1, "itxxxId" : 1, "persxxal" : 1, "srcItxxxId" : -1 }
{ "alxxxId" : 1, "state" : -1, "itemType" : 1, "persxxal" : 1, "itxxxId" : 1 }
{ "alxxxId" : 1, "state" : -1, "newsendTime" : -1, "itxxxId" : 1, "persxxal" : 1 }
{ "alxxxId" : 1, "state" : 1, "itxxxId" : 1, "updateTime" : -1 }
{ itxxxId:1, alxxxId:1 }
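Before collapsing indexes that rely on a field combination being unique, the uniqueness claim can be verified on sampled documents. A minimal sketch, using the masked field names from the examples above:

```python
def combo_is_unique(docs, fields):
    """True if no two sampled documents share the same value tuple for
    `fields` -- a sanity check before merging indexes that assume the
    combination is unique."""
    seen = set()
    for d in docs:
        key = tuple(d[f] for f in fields)
        if key in seen:
            return False
        seen.add(key)
    return True

# Hypothetical sampled documents.
sample = [
    {"alxxxId": "a1", "itxxxId": "i1"},
    {"alxxxId": "a1", "itxxxId": "i2"},
    {"alxxxId": "a2", "itxxxId": "i1"},
]
print(combo_is_unique(sample, ["alxxxId", "itxxxId"]))  # True
```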

Optimizing Useless Indexes Caused by Non-Equality Queries

From the previous 30 indexes, it can be seen that some of the indexes contain time-type fields, such as createTime and updateTime. These fields are confirmed to be used for various range queries. Range queries are a type of non-equality query. If a range query field appears before other index fields, the subsequent fields cannot utilize the index, as shown below.
db.collection.find({ "alxxxId" : xx, "parentItxxxId" : xx, "state" : xx, "updateTime" : { $gt: xxxxx }, "persxxal" : xxx, "srcItxxxId" : xxx })

db.collection.find({ "alxxxId" : xx, "state" : xx, "parentItxxxId" : xx, "updateTime" : { $lt: xxxxx }, "persxxal" : xxx })
Both queries apply a range predicate to updateTime; every other field is an equality predicate. Fields placed after updateTime in an index cannot use the index: in the first index below, persxxal and srcItxxxId cannot be matched, and in the second, persxxal cannot be matched.
The user sets the following two indexes for these two queries.
{ "alxxxId" : 1, "parentItxxxId" : -1, "state" : -1, "updateTime" : -1, "persxxal" : 1, "srcItxxxId" : -1 }
{ "alxxxId" : 1, "state" : -1, "parentItxxxId" : 1, "updateTime" : -1, "persxxal" : -1 }
Since the fields of these two indexes are essentially the same, you can optimize them into a single index as shown below to ensure that more fields can be matched by the index.
{ "alxxxId" : 1, "state" : -1, "parentItxxxId" : 1, "persxxal" : -1, "updateTime" : -1 }
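The guideline applied here, placing equality-predicate fields before range-predicate fields, can be expressed as a tiny ordering helper (a sketch; the field names follow the masked examples above):

```python
def order_index_fields(equality_fields, range_fields):
    """Place equality-predicate fields before range-predicate fields so
    a range field never blocks index use for the fields after it."""
    return list(equality_fields) + list(range_fields)

fields = order_index_fields(
    ["alxxxId", "state", "parentItxxxId", "persxxal"],
    ["updateTime"],
)
print(fields)  # updateTime comes last, as in the merged index above
```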

Removing Indexes for Infrequently Queried Fields

When deleting useless indexes, those with fewer than 10,000 hits were removed. However, another pair of indexes has hit counts that, while in the hundreds of thousands, are low relative to the high-frequency indexes (whose hit counts reach the billions). These lower-frequency indexes, shown below, contain the image and videoCover fields respectively.
{ "alxxxId" : 1, "image" : 1 } "ops" : NumberLong(293104)
{ "alxxxId" : 1, "videoCover" : 1 } "ops" : NumberLong(292857)
Log in to the MongoDB console. On the Slow Log Query tab, lower the slow log latency threshold and analyze the logs corresponding to these two queries, as shown below.
Mon Aug 2 10:56:46.533 I COMMAND [conn5491176] command xxxx.tbxxxxx command: count { count: "xxxxx", query: { alxxxId: "xxxxxx", itxxxId: "xxxxx", image: "http:/xxxxxxxxxxx/xxxxx.jpg" }, limit: 1 } planSummary: IXSCAN { itxxxId: 1.0,alxxxId:1.0 } keyUpdates:0 writeConflicts:0 numYields:1 reslen:62 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_query 4ms

Mon Aug 2 10:47:53.262 I COMMAND [conn10428265] command xxxx.tbxxxxx command: find { find: "xxxxx", filter: { $and: [ { alxxxId: "xxxxxxx" }, { state: 0 }, { itemTagList: { $size: 0 } } ] }, limit: 1, singleBatch: true } planSummary: IXSCAN { alxxxId: 1, videoCover: 1 } keysExamined:128 docsExamined:128 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:22 nreturned:0 reslen:108 locks:{ Global:{ acquireCount: { r: 46 } }, Database: { acquireCount: { r: 23 } }, Collection: { acquireCount: { r: 23 } } } protocol:op_command 148ms
Image field: Log analysis shows that the image field in user requests is always queried together with alxxxId and itxxxId. Since the combination of alxxxId and itxxxId is unique, such queries are already served by the { itxxxId : 1, alxxxId : 1 } index (as the first log above shows), so the { "alxxxId" : 1, "image" : 1 } index can be deleted.
videoCover field: By analyzing the logs, you can find that the query conditions do not include videoCover. Only some queries match the { alxxxId: 1, videoCover: 1 } index, and the values for keysExamined, docsExamined, and nreturned are different. This confirms that only the alxxxId index field is actually matched. Therefore, the { alxxxId: 1, videoCover: 1 } index can also be deleted.

Analyzing High-Frequency Queries in Logs and Adding Optimal Indexes

Log in to the MongoDB console. On the Slow Log Query tab, lower the slow log latency threshold. Use the mtools tool to analyze queries over a period of time to obtain the hotspot query information.

These high-frequency hotspot queries account for more than 99% of all queries. Analyze the logs corresponding to this type of query to obtain the following information.
Mon Aug 2 10:47:58.015 I COMMAND [conn4352017] command xxxx.xxx command: find { find: "xxxxx", filter: { $and: [ { alxxxId:"xxxxx" }, { state: 0 }, { itemTagList: { $in: [ xxxxx ] } }, { persxxal: 0 } ] }, projection: { $sortKey: { $meta: "sortKey" } }, sort: { updateTime: 1 }, limit: 3, maxTimeMS: 10000 } planSummary: IXSCAN { alxxxId: 1.0, itemTagList: 1.0 } keysExamined:1327 docsExamined:1327 hasSortStage:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:23 nreturned:3 reslen:12036 locks:{ Global: { acquireCount: { r: 48 } }, Database: { acquireCount: { r: 24 } }, Collection: { acquireCount: { r: 24 } } } protocol:op_command 151ms
Analysis of the logs shows that this high-frequency query matches the { alxxxId: 1.0, itemTagList: 1.0 } index. There is a significant gap between the number of rows scanned and the number returned: 1327 rows were scanned, but only 3 documents were obtained.
This index is not optimal: the high-frequency query applies equality predicates to four fields, but only two of them are covered by the index. You can optimize it into the following index: { alxxxId: 1.0, itemTagList: 1.0, persxxal: 1.0, state: 1.0 }.
Furthermore, the logs show that this high-frequency query also includes a sort operation and a limit restriction. The original SQL for the entire query is as follows:
db.xxx.find({ $and: [ { alxxxId: "xxxx" }, { state: 0 }, { itemTagList: { $in: [ xxxx ] } }, { persxxal: 0 } ] }).sort({ updateTime: 1 }).limit(3)
This query pattern is a common multi-field equality query + a sort operation + a limit restriction. The optimal index for this type of query is likely one of the following two indexes:
Index 1: an index for the common multi-field equality query. Its query conditions are:
{ $and: [ { alxxxId: "xxx" }, { state: 0 }, { itemTagList: { $in: [ xxxx ] } }, { persxxal: 0 } ] }
Given that all four fields in the SQL are equality queries, you can create an optimal index based on cardinality by placing the field with the highest cardinality leftmost. This yields the following optimal index:
{ alxxxId: 1.0, itemTagList: 1.0, persxxal: 1.0, state: 1.0 }
If this index is selected as the optimal one, the execution flow for the entire common multi-field equality query + sort operation + limit restriction query is as follows:
Use the { alxxxId: 1.0, itemTagList: 1.0, persxxal: 1.0, state: 1.0 } index to find all data that meets the condition { $and: [ { alxxxId: "xxxx" }, { state: 0 }, { itemTagList: { $in: [ xxxx ] } }, { persxxal: 0 } ] }.
Sort the data that meets the conditions in memory.
Retrieve the top three sorted data entries.
Index 2: the optimal index for equality queries plus a sort operation. For the high-frequency query that sorts and applies a limit:
db.xxx.find({ $and: [ { alxxxId: "xxxx" }, { state: 0 }, { itemTagList: { $in: [ xxxx ] } }, { persxxal: 0 } ] }).sort({ updateTime: 1 }).limit(10)
Since this query is high-frequency, it is recommended to add the following index for it:
{ alxxxId: 1.0, itemTagList: 1.0, persxxal: 1.0, state: 1.0, updateTime: 1 }
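The field ordering behind index 2 (equality predicates first, then the sort key, then any range predicates) can be sketched as follows; the helper is hypothetical, and 1 denotes ascending order:

```python
def esr_index(equality_fields, sort_spec, range_fields=()):
    """Build a compound-index spec in equality, sort, range order, so the
    index both narrows the match and returns rows presorted, letting a
    limit stop early without an in-memory sort."""
    spec = [(f, 1) for f in equality_fields]
    spec += list(sort_spec)
    spec += [(f, 1) for f in range_fields]
    return spec

spec = esr_index(["alxxxId", "itemTagList", "persxxal", "state"],
                 [("updateTime", 1)])
print(spec)
```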

Step 4: Reviewing the Final Retained Indexes

Through the optimization above, retain the following indexes.
{ "itxxxId" : 1, "alxxxId" : 1 }
{ "alxxxId" : 1, "state" : 1, "digitalxxxrmarkId" : 1, "updateTime" : 1 }
{ "alxxxId" : 1, "state" : -1, "parentItxxxId" : 1, "persxxal" : -1, "updateTime" : 1 }
{ "alxxxId" : 1, "itexxxList.photoQiniuUrl" : 1 }
{ "alxxxId" : 1, "parentAlxxxId" : 1, "state" : 1, "parentItxxxId" : 1 }
{ "alxxxId" : 1, "itemTagList" : 1, "persxxal" : 1, "state" : 1, "updateTime" : 1 }
{ "alxxxId" : 1,"createTime" : -1}

Benefits After Index Optimization

CPU resources are reduced by over 90%. The peak CPU consumption is reduced from over 90% to within 10% after optimization.
Disk I/O resources are reduced by over 85%. Disk I/O consumption is reduced from the previous 60% - 70% to within 10%.
Disk storage costs are reduced by over 20%. Each index corresponds to a disk index file. The number of indexes is reduced from 30 to 8, and the actual disk consumption for data + indexes is reduced by approximately 20%.
Slow logs are reduced by over 99%. Before index optimization, slow logs numbered in the thousands per second. After optimization, the number of slow log entries is reduced to tens.
