
Product Selection Guide
Last updated: 2025-07-17 17:16:39

Compare Product Forms

| Dimension | GooseFS | GooseFSx | GooseFS-Lite |
|---|---|---|---|
| Core Positioning | Distributed cache acceleration service, oriented toward lake storage performance optimization. | High-performance parallel file storage service providing a fully managed, POSIX-compatible file system. | Lightweight local mount tool targeting high-throughput reads of large files, providing low-cost access to COS data. |
| Architecture Design | Distributed cache system that provides caching near the compute nodes. | File system built on a distributed architecture; performance scales linearly with capacity, with support for multi-client, multi-node parallel access. | Standalone lightweight tool that mounts COS buckets directly via FUSE, with no distributed component dependencies. |
| Deployment Method | Supports three deployment methods: fully managed, Master-managed, and control plane-managed. | Fully managed cloud service: one-click purchase with auto-scaling, no operations or maintenance required. | Requires manual installation of dependencies (such as the FUSE library) and manual mounting; no managed option available. |
| Protocol Support | Supports the HDFS, FUSE, and POSIX protocols. | Fully compatible with POSIX semantics; supports mounting on Windows/Linux systems. | Supports basic POSIX operations. |

Compare Core Features

GooseFS

Layered caching capability: Uses tiered storage across memory, SSD, and HDD to intelligently schedule hot data onto local compute nodes, increasing data throughput.
Unified namespace: Through a transparent naming mechanism, fuses the access semantics of multiple different underlying storage systems, giving users a unified data management capability (a usage sketch follows this list).
Page Store caching: Uses a memory paging cache mechanism to significantly improve cache space utilization and cold-read efficiency for scattered (random) I/O access patterns.
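
As an illustration of how the unified namespace is typically consumed, the PySpark sketch below reads a COS-backed dataset through a GooseFS path. The gfs:// scheme, master host, port, and dataset path are assumptions for illustration; the actual values come from your GooseFS deployment, and the GooseFS client JAR must already be on the Spark classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("goosefs-read-demo").getOrCreate()

# Hypothetical GooseFS URI: scheme, host, and port depend on your cluster.
# The same path transparently serves data cached from the underlying COS
# bucket, so the job needs no COS-specific code.
df = spark.read.parquet("gfs://goosefs-master:9200/warehouse/events/")

df.groupBy("event_type").count().show()
```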

GooseFSx

Data flow: Data flows on demand between the data accelerator GooseFSx and Cloud Object Storage (COS); a sketch of the resulting application-side workflow follows this list.
Preheat data: Preheats data from a COS bucket into a GooseFSx directory, automatically, completely, and incrementally loading your specified data (an entire directory, a subdirectory, or a file list) into GooseFSx.
Data settlement: Settles data from a GooseFSx directory into a COS bucket, automatically, completely, and incrementally writing your specified data (an entire directory, a subdirectory, or a file list) back to COS.
Mount a cloud disk to multiple nodes: A mounted cloud disk can tolerate simultaneous failures of any number of nodes, keeping business uninterrupted and data intact while significantly improving product availability (from 99.9% to 99.9999999%).
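
Because preheat and settlement run on the service side (triggered from the console or API), applications only ever see ordinary POSIX files under the mount. The sketch below assumes a hypothetical mount point /mnt/goosefsx and illustrative file paths; it is not a GooseFSx-specific API.

```python
import os

MOUNT = "/mnt/goosefsx"  # hypothetical GooseFSx mount point on a compute node

# 1) After a preheat task completes, the specified COS objects appear as
#    ordinary files under the mount and are read at GooseFSx speed.
sample = os.path.join(MOUNT, "datasets", "train", "part-00000")
with open(sample, "rb") as f:
    header = f.read(4096)  # plain POSIX read

# 2) Results written under the mount are ordinary files as well; a later
#    settlement task flows them back to the COS bucket incrementally.
result = os.path.join(MOUNT, "results", "model-output.bin")
os.makedirs(os.path.dirname(result), exist_ok=True)
with open(result, "wb") as f:
    f.write(b"\x00" * 1024)  # placeholder payload
```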

GooseFS-Lite

Lightweight mount: Supports mounting COS buckets to the local file system. Compatible with POSIX file operations (sequential read/write and directory operations); random writes, truncate operations, and soft/hard links are not supported (see the sketch below).
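
As a minimal sketch of the supported access pattern, assuming the bucket is already mounted at a hypothetical path such as /mnt/cos, a sequential large-file read is plain file I/O; the unsupported operations fail with OS-level errors.

```python
import shutil

MOUNT = "/mnt/cos"  # hypothetical GooseFS-Lite mount point for a COS bucket

# Sequential, buffered read of a large object: the supported access pattern.
with open(f"{MOUNT}/videos/raw/session-0001.mp4", "rb") as src, \
     open("/tmp/session-0001.mp4", "wb") as dst:
    shutil.copyfileobj(src, dst, length=8 * 1024 * 1024)  # 8 MiB chunks

# By contrast, random writes, truncate(), and symlink()/link() on files
# under MOUNT are not supported and will raise an OSError.
```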


Compare Advantages

GooseFS

High performance: Based on a distributed cache architecture, provides users with high-performance data access near the compute node, significantly reducing data access latency.
Cost-effective: Fully leverages idle local disk resources of compute nodes to provide data access acceleration and improve resource utilization.
Ecosystem affinity: Deeply adapts to the ecosystem of mainstream computing frameworks, supports seamless integration with big data and AI computing frameworks such as Spark and TensorFlow.
Ease of use: Provides three deployment methods: fully managed, Master-managed, and control plane-managed, so users can choose based on their actual needs. The fully managed mode eliminates the need for users to operate and maintain clusters.
Observability: Integrates the CLS log service with the cloud-native Prometheus monitoring system to build a multidimensional real-time health monitoring system, simplifying operations and maintenance and enhancing stability.

GooseFSx

Ultra-high performance: Can provide throughput of hundreds of GB per second, million-scale IOPS, and sub-millisecond latency.
Seamless integration with the computing ecosystem: Fully supports POSIX file semantics and requires no code modification to adapt to HPC, AI training, and other scenarios. Supports the automatic batch mount feature to map storage space to local directories.
Data flow: Supports quickly preheating training datasets from COS to GooseFSx and automatically settling the generated results back to COS.
Hot/cold tiering, elastic and efficient: GooseFSx and COS are decoupled, each scaling elastically while remaining deeply integrated.
Easy to use: Fully managed service with one-click deployment via the console and no cluster operations or maintenance required.

GooseFS-Lite

Lightweight deployment: Deploy in the form of a client tool, no need to independently deploy a cache cluster or distributed system, just install on computing nodes.
Low cost and resource reuse: GooseFS-Lite directly leverages local disk or memory resources on computing nodes for data caching, saving hardware investment while avoiding bandwidth consumption caused by cross-node data synchronization.

Compare Application Scenarios

GooseFS

AI training and inference: Accelerates data preprocessing (such as data cleansing and small-file loading), reducing GPU wait time.
Big data analysis: Improves the performance of Spark/Flink access to COS data, reducing job latency.
Autonomous driving: Accelerates data interaction between local IDCs and the cloud, optimizing preprocessing efficiency for road-collected data and autonomous driving training workloads.
AI content generation (AIGC): Caches hot data (such as model parameters and vector datasets) to improve multimodal model training efficiency.

GooseFSx

AI training and inference (C50/C60/C70): Supports high-speed checkpoint writes and settlement of model training output, with balanced read/write performance (a checkpoint-writing sketch follows this list).
Autonomous driving (C50/C60/C70): Provides an end-to-end solution covering the full process and cycle of collection (road data uploaded to the cloud), computation (training as soon as data arrives), and storage (long-term persistence).
High-performance computing (C50/C60/C70): Provides parallel file services with high performance, low latency, and high throughput, fully meeting the high-throughput and low-latency needs of HPC computing; integrates with the data lake foundation COS to deliver ultra-high-performance, ultra-large-scale, and ultra-low-cost storage services.
Gene analysis (C50/C60): Accelerates high-performance storage requirements for stages such as gene sequencing and partial alignment; enables free data flow with the data lake foundation COS, allowing immediate access to COS samples on the Omics platform, automatically archiving gene analysis results to COS, and delivering them to end users through COS's Internet distribution capability.
CAE/CAD (C60): Accelerates small object read/write for CAE/CAD, integrates with the data lake foundation COS to provide ultra-high-performance, ultra-large-scale, and ultra-low-cost storage services.
Video rendering (C60): Provides integrated storage services, archives rendering materials at low cost in COS, pulls up the data accelerator GooseFSx C60 as needed for rendering, sinks the final output to COS for long-term maintenance, and delivers it to end users through COS's Internet distribution capability.
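
As an illustration of the checkpoint-writing scenario, the sketch below saves a PyTorch training checkpoint onto a GooseFSx mount. The mount path and file layout are assumptions, and any framework that writes ordinary POSIX files behaves the same way.

```python
import os
import torch
import torch.nn as nn

CKPT_DIR = "/mnt/goosefsx/checkpoints"  # hypothetical GooseFSx mount path
os.makedirs(CKPT_DIR, exist_ok=True)

model = nn.Linear(1024, 1024)  # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A checkpoint is just a POSIX file write, so it proceeds at GooseFSx write
# bandwidth; a later settlement task can flow it back to COS for persistence.
torch.save(
    {"step": 1000,
     "model": model.state_dict(),
     "optimizer": optimizer.state_dict()},
    os.path.join(CKPT_DIR, "step-1000.pt"),
)
```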

GooseFS-Lite

Quickly mount COS buckets in standalone environments (for example, to simulate large-file reads) in scenarios with low demands on complex file operations.

Compare Specifications and Limitations

Note:
GooseFS-Lite is deployed locally as a client tool; its specifications and limitations depend on the local node, so it is not included in this comparison.
| Comparison Item | GooseFS | GooseFSx |
|---|---|---|
| Capacity Expansion | Fully managed: starts at 20 TiB, expandable in 10 TiB steps. Master-managed and control plane-managed: no fixed starting capacity; cache space depends on the compute nodes' local disks. | C50: starts at 9 TiB, 3 TiB steps. C60 T2: starts at 4.5 TiB, 1.5 TiB steps. C60 T12: starts at 36 TiB, 12 TiB steps. C70: starts at 14 TiB, 4.5 TiB steps. |
| Read Bandwidth | Fully managed: 200 MB/s per TiB. Master-managed and control plane-managed: throughput scales elastically with the number of Worker nodes, supporting Tbps-level bandwidth. | C50: 120 MB/s per TiB. C60: 200 MB/s per TiB. C70: 600 MB/s per TiB. |
| Write Bandwidth | Fully managed: consistent with COS. Master-managed and control plane-managed: scales elastically with the number of Worker nodes, supporting Tbps-level bandwidth. | C50: 120 MB/s per TiB. C60: 200 MB/s per TiB. C70: 200 MB/s per TiB. |
| Read IOPS | Fully managed: up to 200,000 ops. Master-managed: Medium 100,000 ops; Large 200,000 ops; XLarge 300,000 ops. Control plane-managed: determined by the customer's purchased CVM specifications. | C50: 10,000 per TiB. C60: 20,000 per TiB. C70: 30,000 per TiB. |
| Write IOPS | Fully managed: consistent with COS. Master-managed: Medium 100,000 ops; Large 200,000 ops; XLarge 300,000 ops. Control plane-managed: determined by the customer's purchased CVM specifications. | C50: 10,000 per TiB. C60: 20,000 per TiB. C70: 20,000 per TiB. |
| Number of Files | Fully managed: up to 1 billion. Master-managed: billion-scale for Medium, Large, and XLarge instance types. Control plane-managed: determined by the customer's purchased CVM specifications. | When the deployed capacity is less than 40,000 GiB, each GiB supports 40,000 files. |
| Latency | Sub-millisecond | Sub-millisecond |
| Supported Operating Systems | Linux | Linux/Windows |
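
Because GooseFSx performance scales linearly with capacity, the aggregate figures for a file system follow directly from the per-TiB numbers above. The calculation below is a sketch for a hypothetical 36 TiB deployment (illustrative only; real capacities must match each model's starting size and step length).

```python
# Per-TiB read specs for GooseFSx, taken from the table above.
SPECS = {
    "C50": {"read_mbps_per_tib": 120, "read_iops_per_tib": 10_000},
    "C60": {"read_mbps_per_tib": 200, "read_iops_per_tib": 20_000},
    "C70": {"read_mbps_per_tib": 600, "read_iops_per_tib": 30_000},
}

capacity_tib = 36  # hypothetical deployment size
for model, spec in SPECS.items():
    bw_gbs = capacity_tib * spec["read_mbps_per_tib"] / 1000  # MB/s -> GB/s
    iops = capacity_tib * spec["read_iops_per_tib"]
    print(f"{model}: ~{bw_gbs:.1f} GB/s read bandwidth, {iops:,} read IOPS")

# Expected output:
# C50: ~4.3 GB/s read bandwidth, 360,000 read IOPS
# C60: ~7.2 GB/s read bandwidth, 720,000 read IOPS
# C70: ~21.6 GB/s read bandwidth, 1,080,000 read IOPS
```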