Performance Testing Service

Last updated: 2025-03-26 18:01:27

Feature Introduction

This document describes how to use PTS in script mode to quickly run a performance test with JavaScript scripts, helping you understand the basic usage of PTS.
PTS supports the following performance testing scenarios:

Script mode: Use the provided JavaScript code examples as a starting point, or write a script from scratch. Supports protocols such as HTTP, WebSocket, and gRPC.
Simple mode: Use the interactive UI to combine different user requests.
JMeter: Run performance tests from native JMeter JMX files.
Test plan import: Automatically generate testing scenarios by importing API files such as HAR.
Traffic recording: Record browser traffic and automatically generate testing scenarios.
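As an illustration, a script-mode scenario is a JavaScript file executed by the PTS engine. The sketch below follows the k6-style API used in the console's default template; treat the module names, fields, and the URL as assumptions to verify against the built-in examples before use.

```javascript
// Minimal script-mode scenario (runs inside the PTS engine, not standalone).
// Module names and response fields follow the console's default template;
// verify against the built-in examples. The URL is a placeholder.
import http from 'pts/http';
import { check, sleep } from 'pts';

export default function () {
  // Each virtual user repeatedly executes this function for the test duration.
  const resp = http.get('http://your-service.example.com/get');
  check('status is 200', () => resp.statusCode === 200);
  sleep(1); // think time between iterations, in seconds
}
```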

Operation Steps

Step 1: Creating a PTS Project

1. Log in to the Tencent Cloud Observability Platform console.
2. In the left sidebar, click PTS > Project List.
3. On the Project List page, click Create Project.
4. Fill in the Project Name, Description, and Tag. After completion, click Save.


Step 2: Creating a Testing Scenario

1. Go to the testing scenario creation page and select Script Mode.



2. Orchestrate the testing scenario, complete the following settings, and then click Save and Run. A newly created testing scenario is ready to run by default.



The functional modules are described below:

Scenario name: Update the scenario name so its purpose is easy to identify later.

Load Generation Configuration: PTS currently supports two pressure models: concurrent mode (virtual user mode) and RPS mode.
- Concurrent mode (virtual user mode): Concurrency is the number of virtual users running simultaneously; from a business perspective, it can be read as the number of users online at the same time.
- RPS mode: Requests per second (RPS) measures server throughput. This mode removes the need to convert concurrent users into RPS, helping you identify performance bottlenecks more directly.
Concurrent Mode Configuration:
- Maximum Concurrency: The peak number of simultaneous virtual users applied to the service under test.
- Number of Incremental Steps: The number of stages in which concurrency ramps up.
- Incremental Duration: How long the ramp-up lasts.
- Total Performance Testing Duration: The overall duration of the performance test.
- Performance Testing Resource: Each performance testing resource provides 500 concurrent users by default, along with the corresponding underlying resources. If CPU, memory, or inbound/outbound bandwidth reaches its upper limit, allocate additional performance testing resources to raise those limits for the task. More resources mean higher billing: billed concurrency = number of performance testing resources x 500.
- Network Type: The general network supports public network access.
- Traffic Distribution: Select the ratio of pressure traffic sent from each region.
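To make the ramp and billing arithmetic concrete, here is a plain Node.js sketch. The only figure taken from this document is the 500 users per resource; the target numbers below are illustrative.

```javascript
// Sketch of the concurrent-mode arithmetic described above.
// 500 users per resource comes from this document; targets are illustrative.
const USERS_PER_RESOURCE = 500;

// Resources needed (and resulting billed concurrency) for a target peak.
function resourcePlan(maxConcurrency) {
  const resources = Math.ceil(maxConcurrency / USERS_PER_RESOURCE);
  return { resources, billedConcurrency: resources * USERS_PER_RESOURCE };
}

// Step ramp: concurrency climbs to maxConcurrency in `steps` equal stages
// over `rampSeconds`, then holds until `totalSeconds`.
function concurrencyAt(t, { maxConcurrency, steps, rampSeconds, totalSeconds }) {
  if (t >= totalSeconds) return 0;             // test finished
  if (t >= rampSeconds) return maxConcurrency; // hold phase
  const stage = Math.floor(t / (rampSeconds / steps)) + 1;
  return Math.round((maxConcurrency / steps) * stage);
}

console.log(resourcePlan(1200)); // { resources: 3, billedConcurrency: 1500 }
// 90 s into a 5-step, 300 s ramp to 1000 users → second stage, 400 users.
console.log(concurrencyAt(90, { maxConcurrency: 1000, steps: 5, rampSeconds: 300, totalSeconds: 600 })); // 400
```

For example, a target of 1200 concurrent users requires 3 resources and is billed as 1500 concurrency, since billing rounds up to whole resources.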
RPS Mode Configuration:
- Maximum RPS: The upper limit of RPS for the test, i.e. the target throughput of the business system. PTS allocates appropriate pressure resources for the task based on this value.
- Start RPS: The initial RPS of the test. You can adjust the RPS manually while the test runs and observe how the report metrics change.
- Total Performance Testing Duration: The total duration of a performance test.
- Performance Testing Resource: PTS allocates resources from the testing resource pool based on the maximum RPS you set. If your requests have slow response times, expand the resource pool to ensure the target throughput can be reached.
- Traffic Distribution: Split the total testing traffic across multiple regions by percentage to simulate real-world traffic generated by users in different regions.
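The conversion between concurrent users and RPS that RPS mode abstracts away is Little's law: throughput ≈ concurrency / average response time (including think time). A quick back-of-envelope sketch, with all numbers illustrative:

```javascript
// Little's law: RPS ≈ concurrency / avgResponseSeconds.
// Handy for sanity-checking an RPS target against a virtual-user budget.
function rpsFromConcurrency(concurrency, avgResponseSeconds) {
  return concurrency / avgResponseSeconds;
}

function concurrencyForRps(targetRps, avgResponseSeconds) {
  return Math.ceil(targetRps * avgResponseSeconds);
}

// 500 virtual users with a 0.25 s round trip sustain about 2000 RPS.
console.log(rpsFromConcurrency(500, 0.25)); // 2000
// Reaching 1000 RPS at 0.2 s per request needs about 200 virtual users.
console.log(concurrencyForRps(1000, 0.2)); // 200
```

This also explains the note above about slow responses: the longer each request takes, the more virtual users (and thus resources) are needed to hold the same RPS.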
Scenario Orchestration:
- Provides common script template examples.
- Supports syntax highlighting, intelligent prompts, function references, and code formatting.
- Right-click in the JS editing area for additional features. For more JS syntax and code examples, see Script Mode Performance Testing.

Step 3: Viewing Performance Testing Report

After you click Save and Run, PTS starts the performance testing engine and the console redirects to the performance testing report page.
Once the report is generated, you can click the test scenario name in the sidebar to view or download historical reports. In the error details list, you can also click Request Samples to inspect sampled information for failed requests.



