Tencent Cloud
Cloud File Storage
CFS Storage Performance Testing

Last updated: 2024-05-23 16:40:02
This document describes how to properly benchmark the performance of a CFS file system.

Key Performance Metrics

Latency: The time taken to process a read or write request, measured in milliseconds. Typically benchmarked with single-stream (single-thread) 4 KiB small I/O against a 1 MB file.
IOPS: The number of data blocks read or written per second, measured in operations per second. Commonly benchmarked in the industry with concurrent (multi-host, multi-thread) 4 KiB small I/O against a 100 MB file.
Throughput: The amount of data read or written per second, measured in GiB/s or MiB/s. Typically benchmarked with concurrent (multi-host, multi-thread) 1 MiB large I/O against a 100 MB file.
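The three metrics are linked: at a fixed block size, throughput equals IOPS times the block size. A quick sanity check with hypothetical numbers:

```shell
# throughput (KiB/s) = IOPS x block size (KiB); the numbers are illustrative
iops=20000          # measured 4 KiB random-read IOPS (hypothetical)
bs_kib=4            # block size in KiB
throughput_kib=$((iops * bs_kib))
echo "${throughput_kib} KiB/s"   # prints 80000 KiB/s
```

This is why IOPS tests use small blocks and throughput tests use large ones: at 1 MiB blocks the same operation rate would hit the throughput limit long before the IOPS limit.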

Notes

Except for the latency figures, the maximum values in the CFS Performance Specifications can only be reached by running concurrent stress tests from a sufficient number of clients with enough CPU cores. Typically, 16 cloud servers with 32 or more cores each are enough for most test scenarios; for other stress testing requirements, configure the clients as needed.
When stress testing the General Standard and General Performance storage classes, server-side cache acceleration may push read performance above the listed values on cache hits. This is expected.
During performance testing, especially latency tests, make sure the CVMs (clients) and the CFS file system are in the same availability zone. Cross-AZ tests produce results that differ significantly from the listed values and should be avoided.

Directions

Installing Stress Testing Software

CentOS/TLinux:
sudo yum install fio
Ubuntu/Debian:
sudo apt-get install fio
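After installation, a minimal check confirms fio is on the PATH before you start testing:

```shell
# Report the installed fio version, or a hint if it is missing
if command -v fio >/dev/null 2>&1; then
    status="$(fio --version)"
else
    status="fio not found; install it with yum or apt-get first"
fi
echo "$status"
```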

Read Latency Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randread -bs=4K -size=1M -numjobs=1 -runtime=60 -group_reporting -name=cfs_test

Write Latency Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randwrite -bs=4K -size=1M -numjobs=1 -runtime=60 -group_reporting -name=cfs_test
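fio reports latency statistics in its summary output. The average can be extracted with sed; the sample line below is an illustrative excerpt only, since the exact output format varies by fio version:

```shell
# Extract the avg= field from a fio latency summary line (sample_line is illustrative)
sample_line="     lat (usec): min=210, max=9800, avg=350.25, stdev=88.10"
avg_lat=$(printf '%s\n' "$sample_line" | sed -n 's/.*avg=\([0-9.]*\).*/\1/p')
echo "average latency: ${avg_lat} usec"
```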

Read IOPS Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randread -bs=4K -size=100M -numjobs=128 -runtime=60 -group_reporting -name=cfs_test

Write IOPS Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randwrite -bs=4K -size=100M -numjobs=128 -runtime=60 -group_reporting -name=cfs_test

Read Throughput Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randread -bs=1M -size=100M -numjobs=128 -runtime=60 -group_reporting -name=cfs_test

Write Throughput Testing

fio -directory=/path/to/cfs -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=randwrite -bs=1M -size=100M -numjobs=128 -runtime=60 -group_reporting -name=cfs_test
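The commands above differ only in rw, bs, size, and numjobs, so a small wrapper can generate each variant. This is a dry-run sketch that only prints the command lines; MOUNT and the selected cases are assumptions to adapt:

```shell
# Build the fio command line for each test case (dry run: prints, does not execute)
MOUNT=/path/to/cfs    # assumed mount point; change to your CFS mount target
build_cmd() {
    # args: name rw bs size numjobs
    printf 'fio -directory=%s -iodepth=1 -time_based=1 -thread -direct=1 -ioengine=libaio -rw=%s -bs=%s -size=%s -numjobs=%s -runtime=60 -group_reporting -name=%s\n' \
        "$MOUNT" "$2" "$3" "$4" "$5" "$1"
}
build_cmd read_latency  randread  4K 1M   1
build_cmd write_iops    randwrite 4K 100M 128
build_cmd read_tput     randread  1M 100M 128
```

Replacing printf with an eval of the assembled command turns the dry run into an actual test sweep.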

FIO Parameter Description

Parameter
Parameter Description
direct
Indicates whether to use direct I/O. The commands in this document set it to 1.
Value of 1: use direct I/O, bypassing the client's I/O cache so data is written or read directly.
Value of 0: do not use direct I/O (buffered I/O).
Note:
This parameter cannot bypass server-side caching.
iodepth
Indicates the I/O queue depth during testing. For example, -iodepth=1 means FIO keeps at most one in-flight I/O request per job.
rw
Indicates the read/write strategy during testing. You can set it to:
randwrite: random write
randread: random read
read: sequential read
write: sequential write
randrw: mixed random read and write
Note:
Random read/write is usually used for stress testing. To measure sequential read/write performance, adjust this parameter accordingly.
ioengine
Indicates which I/O engine FIO should use during testing. libaio is usually chosen so that I/O is issued asynchronously.
Note:
If libaio is not used, the stress test bottleneck may be the I/O engine itself rather than the storage.
bs
Indicates the block size of the I/O unit.
size
Indicates the testing file size.
FIO reads/writes the specified file size in its entirety before stopping the test, unless limited by other options (such as runtime).
If this parameter is not specified, FIO uses the full size of the given file or device. The size can also be given as a percentage between 1 and 100; for example, with size=20%, FIO uses 20% of the full size of the given file or device.
numjobs
Indicates the number of concurrent threads for the testing.
runtime
Indicates the testing duration, i.e., how long FIO runs.
group_reporting
Indicates the testing results display mode.
If this parameter is specified, the results are aggregated across all jobs in the group rather than reported per job.
directory
Indicates the file system mount point to be tested.
Note:
When this parameter is set, FIO by default creates test files under this path, one per job (numjobs in total). This parameter is required for storage stress testing; specifying filename instead targets a single file.
name
Indicates the name of the testing task. It can be set according to actual needs.
thread
Run the test with multiple threads rather than multiple processes.
time_based
Value of 1: after the specified file size has been fully read/written, the I/O pattern repeats until the time given by runtime expires.
Value of 0: the test stops as soon as the specified file size has been read/written.
Note:
The value is typically set to 1 during testing so that the test runs continuously for the specified duration.
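The parameters above can also be kept in a fio job file instead of a long command line. The file below mirrors the read IOPS test from this document; the mount path is an assumption to replace with your own:

```shell
# Write a fio job file equivalent to the read IOPS command above
cat > cfs_test.fio <<'EOF'
[global]
directory=/path/to/cfs
ioengine=libaio
direct=1
thread
time_based=1
runtime=60
iodepth=1
group_reporting

[cfs_test]
rw=randread
bs=4K
size=100M
numjobs=128
EOF
# Run it with: fio cfs_test.fio
grep -c '^\[' cfs_test.fio    # prints 2 (the [global] and [cfs_test] sections)
```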
Note:
For more FIO parameters, see the FIO documentation.
For testing across multiple machines, the commands can be executed concurrently with pshell, or see the FIO documentation for the client/server (cluster) mode parameters.
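As an alternative to pshell, fio's own client/server mode can coordinate a multi-host test: start a listener on every client CVM, then drive all of them from one machine with a host list and a shared job file. The IP addresses and file names below are placeholders:

```shell
# On each client CVM, start a fio listener first (run there, not here):
#   fio --server &
# On the coordinating machine, list the client hosts (placeholder IPs):
cat > hosts.list <<'EOF'
10.0.0.11
10.0.0.12
EOF
# Then submit one job file to every listed host:
#   fio --client=hosts.list cfs_test.fio
wc -l < hosts.list
```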


