Hyper Computing Cluster takes high-performance CVM instances as nodes and interconnects them through RDMA. It provides network services with high bandwidth and ultra-low latency, significantly improves network performance, and meets the parallel computing demands of large-scale high-performance computing, AI, big data recommendation, and other applications.
Instance Overview
Hyper Computing Cluster provides instances of the following specifications:
| Status | Instance Family | Type | GPU | Supported Operating Systems |
|---|---|---|---|---|
| Recommended | HCCPNV5 | GPU | Nvidia H800 | TencentOS Server 3.1 (TK4) UEFI Edition, Ubuntu Server 24.04 LTS UEFI Edition (Beta), Ubuntu Server 22.04 LTS (TK4) UEFI Edition, Ubuntu Server 20.04 LTS (TK4) UEFI Edition |
| Recommended | HCCPNV5v | GPU | Nvidia H800 | TencentOS Server 3.1 (TK4) UEFI Edition, Ubuntu Server 24.04 LTS UEFI Edition (Beta), Ubuntu Server 22.04 LTS (TK4) UEFI Edition, Ubuntu Server 20.04 LTS (TK4) UEFI Edition |
| Recommended | HCCPNV4sne | GPU | Nvidia A800 | TencentOS Server 2.4 (TK4), Ubuntu Server 24.04 LTS (Beta), Ubuntu Server 22.04 LTS (TK4) |
| Recommended | HCCPNV4sn | GPU | Nvidia A800 | TencentOS Server 2.4 (TK4), Ubuntu Server 24.04 LTS (Beta), Ubuntu Server 22.04 LTS (TK4) |
| Recommended | HCCPNV4h | GPU | Nvidia A100 | TencentOS Server 2.4 (TK4), Ubuntu Server 24.04 LTS (Beta), Ubuntu Server 22.04 LTS (TK4), Ubuntu Server 18.04 LTS, CentOS 7.6 |
| Beta Testing | HCCPNV6 | GPU | Nvidia GPU | TencentOS Server 3.1 (TK4) UEFI Edition, Ubuntu Server 24.04 LTS UEFI Edition (Beta), Ubuntu Server 22.04 LTS (TK4) UEFI Edition, Ubuntu Server 20.04 LTS (TK4) UEFI Edition |
| Beta Testing | HCCPNV6e | GPU | Nvidia GPU | TencentOS Server 3.1 (TK4) UEFI Edition, Ubuntu Server 22.04 LTS (TK4) UEFI Edition, Ubuntu Server 20.04 LTS (TK4) UEFI Edition |
| Beta Testing | HCCPNV5b | GPU | Nvidia GPU | TencentOS Server 3.1 (TK4) UEFI Edition, Ubuntu Server 22.04 LTS (TK4) UEFI Edition, Ubuntu Server 20.04 LTS (TK4) UEFI Edition |
| Available | HCCG5vm | GPU | Nvidia V100 | TencentOS Server 2.4 (TK4), Ubuntu Server 24.04 LTS (Beta), Ubuntu Server 18.04 LTS, CentOS 7.6 |
| Available | HCCG5v | GPU | Nvidia V100 | TencentOS Server 2.4 (TK4), Ubuntu Server 24.04 LTS (Beta), Ubuntu Server 18.04 LTS, CentOS 7.6 |
| Available | HCCS5 | Standard | - | TencentOS Server 2.4 (TK4), Ubuntu Server 18.04 LTS, CentOS 7.6 |
| Available | HCCIC5 | Compute | - | TencentOS Server 2.4 (TK4), Ubuntu Server 18.04 LTS, CentOS 7.6 |
Instance Specifications
Refer to the introduction below to choose the instance specifications that meet your business needs, especially the minimum requirements for CPU, memory, GPU, and other resources.
GPU HCCPNV5
The GPU HCCPNV5 instance is the latest instance equipped with NVIDIA® H800 Tensor Core GPU. GPUs support 400 GB/s NVLink interconnection, and instances support 3.2 Tbps RDMA interconnection, offering high performance.
Note:
The instance is temporarily on an allowlist basis. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV5 has strong floating-point computing capability and applies to large-scale AI and scientific computing scenarios.
Large-scale deep learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, and molecular modeling.
Hardware Specifications
CPU: 2.6 GHz Intel® Xeon® Sapphire Rapids processor with a turbo frequency of 3.1 GHz.
GPU: 8 × NVIDIA® H800 NVLink® 80GB (FP32 64 TFLOPS, TF32 494 TFLOPS, BF16 989 TFLOPS, 400GB/s NVLink®).
Memory: 8-channel DDR5.
Storage: 8 × 6,400 GB NVMe SSDs for high-performance local storage. CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 3.2 Tbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV5 | 192 | 2048 | 2.6/3.1 | Nvidia H800 × 8 | 80GB × 8 | 3.2 Tbps RoCEv2 | 100 | 45 million | 32 | 16 million | 8 × 6400 GB NVMe SSD |
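As a quick post-launch check, you can confirm that all eight H800 GPUs are visible and that the installed driver meets the version requirement in the note below. The following Python sketch is an illustration only (it assumes the Tesla driver is already installed so that nvidia-smi is on the PATH); it is not part of the instance image.

```python
# Hypothetical sanity check: list GPUs and verify the driver branch.
import subprocess

def gpu_inventory():
    # Each output line describes one GPU: "<name>, <driver_version>, <memory.total>".
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=name,driver_version,memory.total",
         "--format=csv,noheader"],
        text=True,
    )
    return [line.split(", ") for line in out.strip().splitlines()]

gpus = gpu_inventory()
assert len(gpus) == 8, f"expected 8 GPUs, found {len(gpus)}"
for name, driver, mem in gpus:
    # The recommended driver branch for the H800 series is 535 or later.
    assert int(driver.split(".")[0]) >= 535, f"driver {driver} is older than 535"
    print(name, driver, mem)
```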
Note:
GPU driver: NVIDIA Tesla driver version 535 or later needs to be installed for the NVIDIA H800 series. 535.54.03 (Linux) and 536.25 (Windows) are recommended. For driver version information, see the NVIDIA official documentation.
GPU HCCPNV5v
The GPU HCCPNV5v instance is the latest instance equipped with NVIDIA® H800 Tensor Core GPU. GPUs support 400 GB/s NVLink interconnection, and instances support 3.2 Tbps RDMA interconnection, offering high performance.
Note:
The instance is temporarily on an allowlist basis. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV5v has strong floating-point computing capability and applies to large-scale AI and scientific computing scenarios.
Large-scale deep learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, and molecular modeling.
Hardware Specifications
CPU: 2.6 GHz Intel® Xeon® Sapphire Rapids processor with a turbo frequency of 3.1 GHz.
GPU: 8 × NVIDIA® H800 NVLink® 80GB (FP32 64 TFLOPS, TF32 494 TFLOPS, BF16 989 TFLOPS, 400GB/s NVLink®).
Memory: 8-channel DDR5.
Storage: 8 × 6,400 GB NVMe SSDs for high-performance local storage. CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 3.2 Tbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV5v | 172 | 1939 | 2.6/3.1 | Nvidia H800 × 8 | 80GB × 8 | 3.2 Tbps RoCEv2 | 100 | 15 million | 48 | 16 million | 8 × 6400 GB NVMe SSD |
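To confirm that the 400 GB/s NVLink interconnect between the GPUs is active, one option is to inspect the standard nvidia-smi NVLink and topology reports. The sketch below simply shells out to nvidia-smi from Python and prints the results; it is an illustration under the assumption that the Tesla driver is installed.

```python
# Hypothetical NVLink check using the standard nvidia-smi subcommands.
import subprocess

# Per-lane NVLink state and speed for GPU 0.
print(subprocess.check_output(
    ["nvidia-smi", "nvlink", "--status", "-i", "0"], text=True))

# GPU-to-GPU connectivity matrix; NV* entries indicate NVLink paths.
print(subprocess.check_output(["nvidia-smi", "topo", "-m"], text=True))
```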
Note:
GPU driver: NVIDIA Tesla driver version 535 or later needs to be installed for the NVIDIA H800 series. 535.54.03 (Linux) and 536.25 (Windows) are recommended. For driver version information, see the NVIDIA official documentation.
GPU HCCPNV4sne
The GPU HCCPNV4sne instance is a new instance equipped with NVIDIA® A800 Tensor Core GPU. GPUs support 400 GB/s NVLink interconnection, and instances support 1.6 Tbps RDMA interconnection, offering high performance.
Note:
The instance is temporarily on an allowlist basis. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV4sne has strong floating-point computing capability and applies to large-scale AI and scientific computing scenarios.
Large-scale deep learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, molecular modeling, and gene sequencing.
Hardware Specifications
CPU: 2.7 GHz Intel® Xeon® Ice Lake processor with a turbo frequency of 3.3 GHz.
GPU: 8 × NVIDIA® A800 NVLink® 80GB (FP64 9.7 TFLOPS, TF32 156 TFLOPS, BF16 312 TFLOPS, 400GB/s NVLink®).
Memory: 8-channel DDR4.
Storage: 4 × 6,400 GB NVMe SSDs for high-performance local storage. CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 1.6 Tbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV4sne | 124 | 1929 | 2.7/3.3 | Nvidia A800 × 8 | 80GB × 8 | 1.6 Tbps RoCEv2 | 100 | 15 million | 48 | 16 million | 4 × 6400 GB NVMe SSD |
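To verify inside the guest that the RDMA NICs behind the 1.6 Tbps network are registered, you can read the kernel's InfiniBand class entries in sysfs. The sketch below assumes a Linux image with the NIC driver loaded; the device names it prints (for example mlx5_*) depend on the driver and are not specified by this document.

```python
# Hypothetical RDMA device check via the kernel sysfs interface.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.exists():
    print("no RDMA devices registered; check that the NIC driver is loaded")
else:
    for dev in sorted(ib_root.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
            rate = (port / "rate").read_text().strip()    # e.g. "200 Gb/sec (...)"
            print(f"{dev.name} port {port.name}: {state}, {rate}")
```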
Note:
GPU driver: NVIDIA Tesla driver version 450 or later needs to be installed for the NVIDIA A800 series. 460.32.03 (Linux) and 461.33 (Windows) are recommended. For driver version information, see the NVIDIA official documentation.
GPU HCCPNV4sn
The GPU HCCPNV4sn instance is a new instance equipped with NVIDIA® A800 Tensor Core GPU. GPUs support 400 GB/s NVLink interconnection, and instances support 800 Gbps RDMA interconnection, offering high performance.
Note:
The instance is temporarily on an allowlist basis. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV4sn has strong floating-point computing capability and applies to large-scale AI and scientific computing scenarios.
Large-scale deep learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, molecular modeling, and gene sequencing.
Hardware Specifications
CPU: 2.55GHz AMD EPYC™ Milan, with turbo boost up to 3.5GHz.
GPU: 8 × NVIDIA® A800 NVLink® 80GB (FP64 9.7 TFLOPS, TF32 156 TFLOPS, BF16 312 TFLOPS, 400GB/s NVLink®).
Memory: 8-channel DDR4.
Storage: 2 × 7,680 GB NVMe SSDs for high-performance local storage. CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and an 800 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV4sn | 232 | 1929 | 2.55/3.5 | Nvidia A800 × 8 | 80GB × 8 | 800 Gbps RoCEv2 | 100 | 19 million | 48 | 16 million | 2 × 7680 GB NVMe SSD |
Note:
GPU driver: NVIDIA Tesla driver version 450 or later needs to be installed for the NVIDIA A800 series. 460.32.03 (Linux) and 461.33 (Windows) are recommended. For driver version information, see the NVIDIA official documentation.
GPU HCCPNV4h
The GPU HCCPNV4h instance is a new instance equipped with NVIDIA® A100 Tensor Core GPU. It uses NVMe SSDs as instance storage with low latency, ultra-high IOPS, and high throughput, offering high performance.
Application Scenario
HCCPNV4h delivers exceptional double-precision floating-point performance and applies to large-scale AI and scientific computing scenarios.
Large-scale machine learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, molecular modeling, and gene sequencing.
Hardware Specifications
CPU: 2.6GHz AMD EPYC™ ROME, with turbo boost up to 3.3GHz.
GPU: 8 × NVIDIA® A100 NVLink® 40GB (FP64 9.7 TFLOPS, TF32 156 TFLOPS, BF16 312 TFLOPS, 600GB/s NVLink®).
Memory: 8-channel DDR4.
Storage: 1 × 480 GB SATA SSD as local system disk and 4 × 3,200 GB NVMe SSDs for high-performance local storage. CBS disks cannot be mounted.
Network: Supports 25 Gbps private network bandwidth and a 100 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, but ENIs cannot be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV4h | 192 | 1024 | 2.6/3.3 | Nvidia A100 × 8 | 40GB × 8 | 100 Gbps RoCEv2 | 25 | 10 million | 16 | 2 million | 1 × 480 GB SATA SSD and 4 × 3,200 GB NVMe SSDs |
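For the machine learning training workloads this instance targets, frameworks such as PyTorch can drive all eight A100 GPUs through the NCCL backend, which uses the NVLink interconnect for intra-node gradient exchange. The following is a minimal, hypothetical single-node sketch; PyTorch and torchrun are assumptions you install yourself and are not part of the instance image.

```python
# Hypothetical 8-GPU data-parallel step with PyTorch DDP over NCCL.
# Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # NCCL uses NVLink between local GPUs
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    loss = model(x).square().mean()
    loss.backward()                              # gradients all-reduced across GPUs
    opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```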
Note:
GPU driver: NVIDIA Tesla driver version 450 or later needs to be installed for the NVIDIA A100 series. 460.32.03 (Linux) and 461.33 (Windows) are recommended. For driver version information, see the NVIDIA official documentation.
GPU HCCPNV6 (Beta Testing)
The GPU HCCPNV6 is the latest generation GPU instance. It supports NVLink interconnect between GPU cards and 3.2Tbps RDMA interconnection between instances, delivering high performance.
Note:
The instance is temporarily in allowlist beta testing. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV6 is suitable for large-scale AI training and inference scenarios.
Large model, advertising recommendation, autonomous driving, and other AI training scenarios.
Large model distributed inference.
Hardware Specifications
CPU: 2.6 GHz AMD EPYC™ Genoa processor with a turbo frequency of 3.7 GHz.
Memory: 12-channel DDR5.
Storage: 4 × 6,400 GB NVMe SSDs for high-performance local storage. CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 3.2 Tbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|
| HCCPNV6 | 384 | 2304 | 2.6/3.7 | Nvidia GPU × 8 | 3.2 Tbps RoCEv2 | 100 | 45 million | 32 | 16 million | 4 × 6400 GB NVMe SSD |
GPU HCCPNV6e (Beta Testing)
The GPU HCCPNV6e is the latest generation GPU instance. It supports NVLink interconnect between GPU cards and 200Gbps vRDMA network interconnection between instances, offering a cost-effective product solution.
Note:
The instance is temporarily in allowlist beta testing. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV6e is suitable for small- to medium-sized AI training and inference scenarios.
Advertising recommendation, autonomous driving, and other AI training scenarios.
Large model distributed inference.
Hardware Specifications
CPU: 2.6 GHz AMD EPYC™ Genoa processor with a turbo frequency of 3.7 GHz.
Memory: 12-channel DDR5.
Storage: CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 200 Gbps low-latency, low-cost self-developed vRDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections |
|---|---|---|---|---|---|---|---|---|---|
| HCCPNV6e | 384 | 2304 | 2.6/3.7 | Nvidia GPU × 8 | 200 Gbps vRDMA | 100 | 35 million | 48 | 12 million |
GPU HCCPNV5b (Beta Testing)
The GPU HCCPNV5b is the latest generation GPU instance. It uses a new architecture GPU compute card with 48GB GDDR6 video memory capacity, supporting FP32, FP16, BF16, FP8, and INT8 compute formats, paired with AMD EPYC™ Genoa processors. It supports 200Gbps vRDMA network interconnection between instances, offering a cost-effective product solution.
Note:
The instance is temporarily in allowlist beta testing. Please contact your pre-sales manager to enable purchase permission.
Application Scenario
HCCPNV5b is suitable for small- to medium-sized AI training scenarios.
Computer vision processing.
Natural language processing.
Hardware Specifications
CPU: 2.6 GHz AMD EPYC™ Genoa processor with a turbo frequency of 3.7 GHz.
Memory: 12-channel DDR5.
Storage: CBS disks can be used as system and data disks, supporting on-demand expansion.
Network: Supports 100 Gbps private network bandwidth and a 200 Gbps low-latency, low-cost self-developed vRDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, and ENIs can be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections |
|---|---|---|---|---|---|---|---|---|---|
| HCCPNV5b | 384 | 1536 | 2.6/3.7 | Nvidia GPU × 8 | 200 Gbps vRDMA | 100 | 35 million | 32 | 12 million |
GPU HCCG5vm
The GPU HCCG5vm instance is equipped with NVIDIA® Tesla® V100 GPU and is based on NVMe SSD instance storage. It provides storage resources with low latency, ultra-high IOPS and high throughput, and has powerful performance.
Application Scenario
Large-scale machine learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, molecular modeling, and gene sequencing.
Hardware Specifications
CPU: 2.5 GHz Intel® Xeon® Cascade Lake processor with a turbo frequency of 3.1 GHz.
GPU: Equipped with 8 × NVIDIA® Tesla® V100 GPU (FP64 7.8 TFLOPS, FP32 15.7 TFLOPS, 300 GB/s NVLink®).
Memory: 6-channel DDR4.
Storage: 1 × 480 GB SATA SSD as local system disk and 4 × 3,200 GB NVMe SSDs for high-performance local storage. CBS disks cannot be mounted.
Network: Supports 25 Gbps private network bandwidth and a 100 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, but ENIs cannot be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCG5vm | 96 | 768 | 2.5/3.1 | Nvidia V100 × 8 | 32GB × 8 | 100 Gbps RoCEv2 | 25 | 10 million | 16 | 2 million | 1 × 480 GB SATA SSD and 4 × 3,200 GB NVMe SSDs |
GPU HCCG5v
The GPU HCCG5v instance is equipped with NVIDIA® Tesla® V100 GPUs and uses NVMe SSDs for instance storage with low latency, ultra-high IOPS, and high throughput, offering high performance.
Application Scenario
Large-scale machine learning training and big data recommendations.
HPC applications, such as computational finance, quantum simulation of materials, molecular modeling, and gene sequencing.
Hardware Specifications
CPU: 2.5GHz Intel® Xeon® Cascade Lake, with turbo boost up to 3.1GHz.
GPU: 8 × NVIDIA® Tesla® V100 GPU (FP64 7.8 TFLOPS, FP32 15.7 TFLOPS, 300 GB/s NVLink®).
Memory: 6-channel DDR4.
Storage: 1 × 480 GB SATA SSD as local system disk and 4 × 3,200 GB NVMe SSDs for high-performance local storage. CBS disks cannot be mounted.
Network: Supports 25 Gbps private network bandwidth and a 100 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, but ENIs cannot be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | GPU | GPU Memory | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| HCCG5v | 96 | 384 | 2.5/3.1 | Nvidia V100 × 8 | 32GB × 8 | 100 Gbps RoCEv2 | 25 | 10 million | 16 | 2 million | 1 × 480 GB SATA SSD and 4 × 3,200 GB NVMe SSDs |
Standard HCCS5
The standard type HCCS5 instance is equipped with a 2.5GHz base clock rate CPU, suitable for compute-intensive applications such as general multi-core batch processing and multi-core high-performance computing applications.
Application Scenario
Large-scale high-performance computing applications.
HPC applications, such as fluid dynamics analysis, industrial simulation, molecular modeling, gene sequencing, and meteorological analysis.
Hardware Specifications
CPU: 2.5 GHz Intel® Xeon® Cascade Lake processor with a turbo frequency of 3.1 GHz.
Memory: 6-channel DDR4.
Storage: 1 × 480 GB SATA SSD. CBS disks cannot be mounted.
Network: Supports 25 Gbps private network bandwidth and a 100 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, but ENIs cannot be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|
| HCCS5 | 96 | 384 | 2.5/3.1 | 100 Gbps RoCEv2 | 25 | 10 million | 16 | 2 million | 1 × 480 GB SATA SSD |
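Multi-node HPC applications of this kind typically communicate through MPI collectives, which run over the RDMA network when the MPI library is configured to use it. Below is a minimal, hypothetical mpi4py sketch; mpi4py, the MPI runtime, and the host file are assumptions, not part of the instance.

```python
# Hypothetical MPI allreduce across Hyper Computing Cluster nodes.
# Launch with something like: mpirun -np 4 --hostfile hosts python allreduce.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1_000_000, rank, dtype=np.float64)  # per-rank contribution
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)            # summed across all ranks

if rank == 0:
    print("ranks:", comm.Get_size(), "first element of sum:", total[0])
```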
Compute HCCIC5
The high-I/O compute HCCIC5 instance is equipped with a 3.2GHz base clock rate CPU, has high single-core computing performance, and uses NVMe SSD as instance storage, providing low latency and ultra-high IOPS storage resources. It is suitable for compute-intensive and I/O-intensive applications such as batch processing, fluid dynamics, and structural simulation.
Application Scenario
Large-scale high-performance computing applications.
HPC applications, such as fluid dynamics analysis, industrial simulation, molecular modeling, gene sequencing, and meteorological analysis.
Hardware Specifications
CPU: 3.2GHz Intel® Xeon® Cascade Lake, with turbo boost up to 3.7GHz.
Memory: 6-channel DDR4.
Storage: 2 × 480 GB SATA SSDs (RAID1) as local system disks and 2 × 3,840 GB NVMe SSDs for high-performance local storage. CBS disks cannot be mounted.
Network: Supports 25 Gbps private network bandwidth and a 100 Gbps low-latency RDMA network dedicated to internal communication between Hyper Computing Cluster instances, with strong packet sending and receiving capabilities. The public network can be configured as needed, but ENIs cannot be attached.
| Specification | vCPU | Memory (GB) | Clock/Turbo (GHz) | RDMA Network | Private Network Bandwidth (Gbps) | Packet Rate (pps) | Queues | Connections | Local Storage |
|---|---|---|---|---|---|---|---|---|---|
| HCCIC5 | 64 | 384 | 3.2/3.7 | 100 Gbps RoCEv2 | 25 | 10 million | 16 | 2 million | 2 × 480 GB SATA SSDs (RAID1) and 2 × 3,840 GB NVMe SSDs |
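Because the NVMe data disks of this instance are local and CBS disks cannot be mounted, you format and mount the data disks yourself after launch. The sketch below only lists the block devices so they can be identified first; the device names it prints depend on the image and are illustrative.

```python
# Hypothetical listing of local block devices (system SSDs and NVMe data disks).
import json
import subprocess

out = subprocess.check_output(
    ["lsblk", "--json", "-d", "-o", "NAME,SIZE,TYPE"], text=True)
for dev in json.loads(out)["blockdevices"]:
    print(dev["name"], dev["size"], dev["type"])
```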