New Features in 7.0

Last updated: 2024-10-25 11:19:45
This document lists the main features introduced in TencentDB for MongoDB 7.0. For more information, see Release Notes for MongoDB 7.0 (Stable Release).

Performance Enhancement of Slot-based Query Execution Engine

The slot-based query execution engine in MongoDB 7.0 extends and optimizes the engine introduced in earlier versions. It breaks a query down into a series of slots, with each slot handling a specific part of the query, allowing for more fine-grained parallel processing. This design lets the database handle complex queries more efficiently, particularly aggregation pipelines that involve the $group or $lookup stage.
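As a hedged illustration (run in mongosh against a MongoDB 7.0 deployment; the collection name "orders" is hypothetical), you can check whether a given query used the slot-based engine by inspecting the explainVersion field in the explain output, where "2" indicates the slot-based engine and "1" the classic engine:

```javascript
// Explain an aggregation with a $group stage, which is eligible
// for the slot-based execution engine in 7.0.
const res = db.orders.explain().aggregate([
  { $group: { _id: "$status", total: { $sum: "$amount" } } }
]);
// "2" = slot-based engine; "1" = classic engine.
print(res.explainVersion);
```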

Shardkey Analysis

The analyzeShardKey command and the db.collection.analyzeShardKey() method introduced in MongoDB 7.0 are important tools for analyzing the performance of shardkeys in collections. These tools evaluate the effectiveness of shardkeys based on the results of sampled queries, helping to design better schemas and shardkeys. This ensures a more reasonable distribution of data in sharded clusters and improves query efficiency.
The basic syntax of the analyzeShardKey command is as follows:
db.adminCommand({
  analyzeShardKey: <string>,
  key: <shardKey>,
  keyCharacteristics: <bool>,
  readWriteDistribution: <bool>,
  sampleRate: <double>,
  sampleSize: <int>
})
analyzeShardKey field: Specifies the namespace of the collection to be analyzed.
key field: Defines the shardkey to be analyzed. It can be a candidate shardkey for an unsharded collection or a sharded collection, or the current shardkey of a sharded collection.
keyCharacteristics field: Determines whether to calculate characteristics metrics of the shardkey, such as cardinality, frequency, and monotonicity.
readWriteDistribution field: Determines whether to calculate metrics for read/write distribution.
sampleRate field: Defines the proportion of documents to be sampled in the collection during the calculation of shardkey characteristic metrics.
sampleSize field: Defines the number of documents to be sampled during the calculation of shardkey characteristic metrics.
Note:
Use the configureQueryAnalyzer command or the db.collection.configureQueryAnalyzer() method to configure the query analyzer to sample queries running on the collection.
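The following is a hedged sketch of the end-to-end flow (the namespace "social.posts" and the candidate key userId are hypothetical): first enable query sampling on the collection, then, once enough queries have been sampled, analyze the candidate shard key.

```javascript
// Enable query sampling on the collection (7.0 syntax).
db.social.posts.configureQueryAnalyzer({ mode: "full", samplesPerSecond: 5 });

// After a representative workload has been sampled, evaluate the key.
db.adminCommand({
  analyzeShardKey: "social.posts",
  key: { userId: 1 },
  keyCharacteristics: true,
  readWriteDistribution: true
});
```

The output reports cardinality, frequency, and monotonicity of the key, plus how reads and writes would distribute across shards.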

Queryable Encryption

Queryable Encryption introduced in MongoDB 7.0 is an important security feature that allows users to encrypt sensitive data fields on the client side and store these fields as fully randomized encrypted data on the database server side. It also supports running expressive queries on encrypted data. Here are some key points about Queryable Encryption:
Data encryption and query: Queryable Encryption allows clients to encrypt sensitive data fields and store them in an encrypted format on the database server side, while still supporting equality queries on the encrypted data.
Encryption and decryption process: Sensitive data remains encrypted throughout its entire lifecycle (in transit, at rest, in use, in logs, and in backups) and is only decrypted on the client side.
Automatic encryption and explicit encryption: Queryable Encryption can be achieved through automatic or explicit encryption. Automatic encryption allows encrypted read and write operations without explicit calls to encrypt and decrypt fields. Explicit encryption uses the MongoDB driver's encryption library for encrypted reads and writes, but requires the application itself to invoke that library's encryption logic.
Key management: When Queryable Encryption is used in a production environment, you should use a remote key management system (KMS) to store encryption keys.
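To make the configuration concrete, the following is a hedged sketch of the encrypted fields map used for automatic Queryable Encryption with the Node.js driver. The namespace "hr.employees" and the field "ssn" are hypothetical, and a real deployment also needs a configured KMS provider and key vault:

```javascript
// Declares which fields are encrypted and which query types they support.
const encryptedFieldsMap = {
  "hr.employees": {
    fields: [
      {
        path: "ssn",
        bsonType: "string",
        // Allow equality queries against the encrypted field.
        queries: { queryType: "equality" }
      }
    ]
  }
};

// Passed to the client via autoEncryption options, e.g.:
// new MongoClient(uri, {
//   autoEncryption: { keyVaultNamespace, kmsProviders, encryptedFieldsMap }
// });
```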

AutoMerger

AutoMerger introduced in MongoDB 7.0 is an important component of the automatic balancer, designed to optimize data distribution in sharded clusters. It runs automatically when data or index distribution is imbalanced, when there are too many chunks, or during data migration, merging chunks that meet specific merging requirements. Automatic merging operations are performed once every autoMergerIntervalSecs seconds. Administrators can enable or disable AutoMerger for a collection through the configureCollectionBalancing command, for example:
db.adminCommand({
  configureCollectionBalancing: "<db>.<collection>",
  chunkSize: <num>,
  defragmentCollection: <bool>,
  enableAutoMerger: <bool>
})
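As a hedged sketch, mongosh 7.0 also provides shell helpers for controlling AutoMerger cluster-wide and per collection (the namespace "db.collection" below is a placeholder):

```javascript
sh.startAutoMerger();                  // enable automatic merging cluster-wide
sh.stopAutoMerger();                   // disable it cluster-wide
sh.disableAutoMerge("db.collection");  // exclude one collection
sh.enableAutoMerge("db.collection");   // include it again
```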

New Aggregation Operators: $median and $percentile

$median: It is used to calculate the median of input values. The median is the value in the middle after the numbers are sorted by size. If the number of input values is odd, the median is the middle value. If it is even, the median is the average of the two middle values. In the following example, values are grouped by category, and the median of the value field is calculated for each group.
db.collection.aggregate([
  {
    $group: {
      _id: "$category",
      medianValue: { $median: { input: "$value", method: "approximate" } }
    }
  }
])
$percentile: It is used to calculate the value at the specified percentile in an input array. A percentile indicates the value at or below which a given percentage of data items fall. In the following example, values are grouped by category, and the 90th percentile of the value field is calculated for each group.
db.collection.aggregate([
  {
    $group: {
      _id: "$category",
      percentileValue: { $percentile: { input: "$value", p: [0.9], method: "approximate" } }
    }
  }
])
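Since these operators are new, a plain-JavaScript sketch of their semantics (no database required) may help. Note that MongoDB's "approximate" method is based on a t-digest, so on large data sets its results can differ slightly from this exact computation:

```javascript
// Median: middle value of the sorted data; for an even count,
// the average of the two middle values.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Percentile via the nearest-rank method: the smallest value at or
// below which at least p * 100% of the data falls.
function percentile(values, p) {
  const s = [...values].sort((a, b) => a - b);
  const rank = Math.max(0, Math.ceil(p * s.length) - 1);
  return s[rank];
}

console.log(median([3, 1, 2]));     // 2
console.log(median([4, 1, 3, 2]));  // 2.5
console.log(percentile([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.9)); // 9
```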

Compound Wildcard Indexes

Compound wildcard indexes introduced in MongoDB 7.0 are a new feature that allows for the creation of indexes on multiple fields, which can include one wildcard term and multiple non-wildcard terms. This type of index is particularly useful for documents with a flexible schema, where document field names may vary within a collection. In the following example, wildcardProjection is used to specify which subfields should be included in the index. The wildcard index term $** specifies every field in the collection, while wildcardProjection restricts the index to the specified fields customFields.addr and customFields.name. For more information, see Compound Wildcard Indexes.
db.runCommand({
  createIndexes: "salesData",
  indexes: [
    {
      key: {
        tenantId: 1,
        "$**": 1
      },
      name: "tenant_customFields_projection",
      wildcardProjection: {
        "customFields.addr": 1,
        "customFields.name": 1
      }
    }
  ]
})
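As a hedged usage sketch (the values below are hypothetical), queries that include the non-wildcard prefix field tenantId together with one of the projected subfields can be served by this index, while queries on subfields excluded from wildcardProjection cannot:

```javascript
// Can use the compound wildcard index: prefix field + projected subfield.
db.salesData.find({ tenantId: "t1001", "customFields.addr": "Shenzhen" });

// Cannot use it: customFields.phone is not in the wildcardProjection.
db.salesData.find({ tenantId: "t1001", "customFields.phone": "123456" });
```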

Other Features

ChangeStream supports ultra-large change events through the new $changeStreamSplitLargeEvent stage, which splits change events exceeding 16 MB into fragments. For detailed information, see Large Change Stream Events.
A new field catalogCacheIndexLookupDurationMillis is added to slow query logs to record the time spent obtaining index information from the index cache. This helps more accurately analyze and diagnose query performance issues, especially in operations involving index lookups. For detailed information, see Log Messages for Slow Queries.
The WiredTiger (WT) storage engine supports dynamic traffic throttling, automatically adjusting its transaction concurrency to optimize database performance under high load. For detailed information, see Concurrent Storage Engine Transactions.
Security enhancement with support for KMIP 1.0 and 1.1, as well as OpenSSL 3.0 and OpenSSL FIPS, which improves data security.
Statistical metrics for monitoring Chunk migrations are added. For detailed information, see New Sharding Statistics for Chunk Migrations.
Metadata consistency check: the checkMetadataConsistency command introduced in MongoDB 7.0 checks for metadata inconsistencies in sharded clusters.
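As a hedged sketch, the metadata consistency check can be run from mongosh via the corresponding shell helpers on a 7.0 mongos; both return a cursor over any inconsistency documents found:

```javascript
db.checkMetadataConsistency();  // check the current database
sh.checkMetadataConsistency();  // check the entire cluster
```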
