Tencent Cloud

Cloud Object Storage


Client Best Practices

Last updated: 2025-11-24 11:20:24

High-Throughput Practice

Each time the client reads data from the bucket, the amount of data read is controlled by the configuration item fs.ofs.block.memory.trunk.byte.
| Configuration Item | Default Value | Description |
| --- | --- | --- |
| fs.ofs.block.memory.trunk.byte | 1048576 | Object block size in bytes. The default value is 1048576 (1 MB). |

Note:
If you use the COSN SDK, prefix the configuration item name with fs.cosn.trsf, for example fs.cosn.trsf.fs.ofs.prev.read.block.count.
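In a Hadoop deployment, these items are typically set in core-site.xml. A minimal sketch, assuming a 4 MB block suits the workload (the value is illustrative, not a recommendation):

```xml
<!-- core-site.xml: raise the per-read block size from the 1 MB default to 4 MB.
     The value below is illustrative only. -->
<property>
  <name>fs.ofs.block.memory.trunk.byte</name>
  <value>4194304</value>
</property>

<!-- Equivalent setting when using the COSN SDK (note the fs.cosn.trsf prefix). -->
<property>
  <name>fs.cosn.trsf.fs.ofs.block.memory.trunk.byte</name>
  <value>4194304</value>
</property>
```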
In high-throughput scenarios, the client introduces pre-read logic: a file is split into multiple pre-read blocks that are cached in memory, reducing read latency and increasing throughput. You can tune the pre-read parameters to meet the throughput requirements of different scenarios.

Sequential Read Scenario

The client uses an internal cursor to detect whether the current workload is a sequential read. If it is, the pre-read logic is enabled; otherwise it is not used. With pre-read enabled, the client caches a fixed number of pre-read blocks at a time. The related configuration items are as follows; adjust them based on the machine configuration.
| Configuration Item | Default Value | Description |
| --- | --- | --- |
| fs.ofs.prev.read.block.count | 16 | Number of pre-read blocks. Default: 16. |
| fs.ofs.prev.read.block.release.enable | true | Whether to release read blocks from memory. Default: true. |
| fs.ofs.block.max.read.memory.cache.mb | 16 | Memory available to a single file, in MB. Default: 16. |
| fs.ofs.data.transfer.thread.count | 32 | Core thread count of the IO thread pool that prefetches blocks from the bucket. |
| fs.ofs.data.transfer.max.thread.count | Integer.MAX_VALUE | Maximum number of threads in the IO thread pool. |

Note:
To avoid OOM, refer to Client Memory Usage Practice below for memory usage guidance and control of the global cache model.
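For a bandwidth-bound sequential-read workload, the pre-read parameters above could be raised together in core-site.xml. The values in this sketch are illustrative; size them to the machine's memory and network bandwidth:

```xml
<!-- core-site.xml: example pre-read tuning for a sequential-read workload.
     All values are illustrative, not recommendations. -->
<property>
  <name>fs.ofs.prev.read.block.count</name>
  <value>32</value>
</property>
<property>
  <name>fs.ofs.block.max.read.memory.cache.mb</name>
  <value>32</value>
</property>
<property>
  <name>fs.ofs.data.transfer.thread.count</name>
  <value>64</value>
</property>
```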

Random Read Scenario

As mentioned above, the client detects the current scenario; for random reads, the prefetch logic is not triggered. In this scenario, it is also recommended to adjust fs.ofs.block.memory.trunk.byte to the actual business pattern, changing the amount of data read from the COS bucket per request to avoid read amplification.
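For a random-read workload, shrinking the per-read block limits read amplification. A sketch in core-site.xml, assuming 256 KB fits the access pattern (the value is illustrative only):

```xml
<!-- core-site.xml: shrink the per-read block for random reads,
     e.g. 256 KB instead of the 1 MB default. Illustrative value only. -->
<property>
  <name>fs.ofs.block.memory.trunk.byte</name>
  <value>262144</value>
</property>
```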


Client Memory Usage Practice

To enhance performance, OFS SDK caches data during upload and download. The cache includes memory cache and disk cache, with different applications in upload and download.
Upload: Use memory cache + disk cache
Download: Use memory cache only
Memory cache blocks are allocated on demand and are preferred; when memory cache blocks run out, disk cache blocks are allocated instead. During data writing, the disk cache uses off-heap memory to reduce copies between heap memory and kernel-space memory, thereby improving write performance.

To keep files from affecting each other (for example, a file that is not closed, or whose resources are not released, causing new read and write operations to fail), the SDK by default limits the number of cache blocks a single file may use. To prevent OOM issues, the client additionally provides global (file system granularity) cache configuration items. Adjust them to the client environment for the best performance.

Single File Cache Control Model

OFS SDK uses the following two configuration items to control the cache usage of a single file:
| Configuration Item | Default Value | Description |
| --- | --- | --- |
| fs.ofs.block.max.memory.cache.mb | 16 | Memory cache used by a single file, in MB. Default: 16. |
| fs.ofs.block.max.file.cache.mb | 256 | Disk cache used by a single file, in MB. Default: 256. |

Note:
If you use the COSN SDK, prefix the configuration item name with fs.cosn.trsf, for example fs.cosn.trsf.fs.ofs.block.max.memory.cache.mb.
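Set through the COSN SDK, the two per-file limits above would look like this in core-site.xml (the defaults are shown; treat them as a starting point, not a recommendation):

```xml
<!-- core-site.xml: per-file cache limits with the COSN fs.cosn.trsf prefix.
     Values shown are the documented defaults. -->
<property>
  <name>fs.cosn.trsf.fs.ofs.block.max.memory.cache.mb</name>
  <value>16</value>
</property>
<property>
  <name>fs.cosn.trsf.fs.ofs.block.max.file.cache.mb</name>
  <value>256</value>
</property>
```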

Global (File System Granularity) Cache Control Model

The SDK provides global (file system granularity) cache control for read and write requests to avoid OOM issues. Details are as follows:
Upload

OFS SDK provides three configuration items to control global upload memory.
| Configuration Item | Default Value | Description |
| --- | --- | --- |
| fs.ofs.block.total.memory.cache.mb | 0 | Maximum upload memory usage, in MB. Default: 0 (no limit). |
| fs.ofs.block.total.memory.cache.percent | 100 | Maximum upload memory usage ratio, as a percentage. Default: 100. |
| fs.ofs.block.total.memory.jvm.heap.percent | 0 | Maximum JVM heap usage ratio, as a percentage. Default: 0 (no limit). |
The SDK offers two global controls:
Rule 1: fs.ofs.block.total.memory.cache.mb and fs.ofs.block.total.memory.cache.percent together cap upload memory. Once configured, the maximum memory used is fs.ofs.block.total.memory.cache.mb * fs.ofs.block.total.memory.cache.percent / 100. The SDK then divides this global memory cache size by the per-file maximum fs.ofs.block.max.memory.cache.mb to obtain the number of files that can be written concurrently, and allocates that many semaphore permits. Opening a new file acquires one permit; if the acquisition fails, the file is forced to use disk cache. Closing the file returns the permit.
Rule 2: fs.ofs.block.total.memory.jvm.heap.percent caps memory relative to the JVM heap. Once configured, the limit is the maximum JVM heap size (obtained via ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax()) * fs.ofs.block.total.memory.jvm.heap.percent / 100.

By default, fs.ofs.block.total.memory.cache.mb and fs.ofs.block.total.memory.jvm.heap.percent are set to 0, meaning no memory control is performed. If both configuration items are non-zero, rule 1 (maximum memory usage) has a higher priority than rule 2 (maximum JVM memory usage).
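The two rules above reduce to simple arithmetic. The Python sketch below is illustrative only (the function names are hypothetical, not SDK API); it computes the effective upload limit under the stated priority and the resulting semaphore permit count:

```python
# Sketch of the global upload memory control described above.
# All names are illustrative; the real SDK logic is internal.

def upload_memory_limit_mb(total_cache_mb, cache_percent,
                           jvm_heap_mb, jvm_heap_percent):
    """Effective upload memory limit in MB, or None if uncontrolled."""
    # Rule 1 takes priority over Rule 2 when both are configured (non-zero).
    if total_cache_mb > 0:
        return total_cache_mb * cache_percent // 100
    if jvm_heap_percent > 0:
        return jvm_heap_mb * jvm_heap_percent // 100
    return None  # both defaults are 0: no memory control


def concurrent_file_permits(limit_mb, per_file_cache_mb=16):
    """Files that may buffer in memory at once (semaphore permit count)."""
    return limit_mb // per_file_cache_mb


# Example: 512 MB global cache at 100%, 16 MB per file -> 32 permits.
limit = upload_memory_limit_mb(512, 100, jvm_heap_mb=4096, jvm_heap_percent=0)
print(limit, concurrent_file_permits(limit))  # 512 32
```

When a new file is opened and no permit is available, the SDK forces that file onto disk cache rather than memory, as described in Rule 1.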
Download

CHDFS SDK provides two configuration items to control global download memory.
| Configuration Item | Default Value | Description |
| --- | --- | --- |
| fs.ofs.block.total.read.memory.cache.mb | 0 | Maximum download memory usage, in MB. Default: 0 (no limit). |
| fs.ofs.block.total.read.memory.cache.percent | 100 | Maximum download memory usage ratio, as a percentage. Default: 100. |
The SDK caps download memory via fs.ofs.block.total.read.memory.cache.mb and fs.ofs.block.total.read.memory.cache.percent. Once configured, the maximum memory used is fs.ofs.block.total.read.memory.cache.mb * fs.ofs.block.total.read.memory.cache.percent / 100. The SDK divides this global memory cache size by the per-file maximum fs.ofs.block.max.memory.cache.mb to obtain the number of files that can be read concurrently, and allocates semaphore permits accordingly. Opening a new file acquires one permit; if the acquisition fails, the file is forced to use disk cache. Closing a file returns its permit. Acquisition blocks through queuing: when permits are insufficient, a request waits for other files to close and release theirs.
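The queuing behavior can be illustrated with a plain threading.Semaphore. This is a standalone sketch, not the SDK's implementation: with two permits, a third concurrent reader waits until an earlier file closes and releases its permit.

```python
# Illustrative sketch of semaphore-based admission control: opening a file
# acquires a permit, closing releases it, excess readers queue and wait.
import threading
import time

sem = threading.Semaphore(2)  # e.g. 32 MB global cache / 16 MB per file
order = []

def reader(name):
    sem.acquire()                     # blocks (queues) until a permit is free
    order.append(f"{name}:open")
    time.sleep(0.05)                  # simulate reading via the memory cache
    order.append(f"{name}:close")
    sem.release()                     # closing the file returns the permit

threads = [threading.Thread(target=reader, args=(n,)) for n in "abc"]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(order))  # 6 events: each of a, b, c opened and closed
```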


