Tencent Cloud Observability Platform

Interpreting Reports

Last updated: 2025-03-10 22:14:23

Overview

PTS presents the results of each round of performance testing in a performance testing report. Reports come in two states: real-time and historical. A real-time report lets you view data as the test runs, while a historical report lets you review the data after the test is complete.
Note:
PTS historical reports are retained for 45 days, after which expired reports are automatically cleared. Before a report expires, you can download it in PDF format as a backup.

Real-Time Report

When you trigger a run of your performance testing scenario, PTS prepares the required resources and then creates a performance testing task. Once the task is created, the console displays the task's performance testing data and refreshes it in real time at a fixed interval.


Historical Report

When a performance testing task for your scenario is completed, you can find its historical report on the scenario's historical reports overview page. Click the report to review the historical data.


Report Data

Overview

The Overview page displays the most critical summary data: metadata of the performance testing task itself and the most commonly used result metrics with their charts (for example, VU, RPS, and average response time).
The top section of the overview page provides a summary of the performance testing task data, where:
Concurrency and total requests are instantaneous values sampled while the performance testing task is running.
RPS, average response time, failure rate, and network traffic are averages during the performance testing task.
The middle section of the overview page contains metadata about the performance testing task, such as the duration, the tester, and the status.
The bottom section of the overview page shows real-time curves for the performance testing task, displaying the instantaneous values of various metrics at different time points.
Note:
For an introduction to the concepts of concurrent users, RPS, and response time, as well as the relationships between them, see FAQs.
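As a rough illustration of how these metrics relate (a hedged sketch based on Little's law, not PTS's exact formula; see the FAQs for the authoritative explanation), sustained throughput can be approximated from concurrency and average response time:

```javascript
// Illustrative sketch only, not part of PTS: Little's law relates the
// overview metrics, assuming each virtual user issues requests back to
// back with no think time.
function approxRps(concurrency, avgResponseTimeSeconds) {
  // RPS ≈ concurrent virtual users / average response time
  return concurrency / avgResponseTimeSeconds;
}

// For example, 100 VUs with a 200 ms average response time
// sustain roughly 500 requests per second.
console.log(approxRps(100, 0.2)); // 500
```

If response time rises while concurrency stays fixed, RPS falls accordingly, which is why these three curves are usually read together on the overview page.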


Service Details

On the service details page, each URL is categorized as a separate "service," displaying detailed information about all requests sent during the performance testing.
You can click to expand the details of each service to view its data and charts. In the charts, you can click to switch between different Metrics or Aggregations to change what you are viewing.

Note:
In PTS, services are categorized by URL by default. To customize the service categorization, specify the service attribute within the http.Request in scenarios that use script mode. See JavaScript API List for more details.
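As a sketch of the script-mode customization described in the note above (the URL and the "cart" service name are placeholders; verify the exact request option placement in the JavaScript API List):

```javascript
// Sketch of a PTS script-mode scenario (runs inside the PTS engine, not Node.js).
import http from 'pts/http';

export default function () {
  // Setting the service attribute on the request groups this URL under a
  // custom service name instead of the default URL-based categorization.
  http.get('https://example.com/api/cart', {
    service: 'cart', // placeholder service name
  });
}
```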

Checkpoint Details

On the Checkpoint Details page, you can view the detailed results of the checkpoints that you have set up in your scenario.

Note:
For information on how to set checkpoints, see Setting Checkpoints.
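For context, a checkpoint in script mode typically wraps an assertion on a response; the following is a hedged sketch (the check helper and response field names should be verified against the JavaScript API List, and the URL is a placeholder):

```javascript
// Sketch of a script-mode checkpoint (runs inside the PTS engine, not Node.js).
import http from 'pts/http';
import { check } from 'pts';

export default function () {
  const resp = http.get('https://example.com/api/ping'); // placeholder URL
  // Each check result is aggregated on the Checkpoint Details page.
  check('status is 200', () => resp.statusCode === 200);
}
```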

Script Information

On the Script Information page, you can view a snapshot of the scenario script used when executing the performance testing task.


Multidimensional Analysis

On the Multidimensional Analysis page, you can switch in an interactive style between various combinations of charts that display performance testing result data.
You can click to switch between different Metrics or Aggregations to view different charts.
You can also click Add Metrics at the bottom of the page to create data charts according to your needs.


Load Generator

On the Load Generator page, you can view basic information about the load generator for the performance testing task, logs output during the performance testing, and the resource usage status of the load generator itself.
The performance testing logs can be filtered by log level (debug/info/error) and log source (user output/engine output) using the drop-down lists.
Logs printed by your own script are displayed on the User Output tab.
General logs printed by PTS are displayed on the Engine Output tab.
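For illustration, user output is whatever the scenario script prints itself; this sketch assumes console.log output from a script-mode scenario surfaces on the User Output tab (the URL is a placeholder):

```javascript
// Sketch of script-mode logging (runs inside the PTS engine, not Node.js).
import http from 'pts/http';

export default function () {
  const resp = http.get('https://example.com/api/ping'); // placeholder URL
  // Printed by the script itself, so it appears under "User Output";
  // PTS's own engine logs appear under "Engine Output".
  console.log('response status:', resp.statusCode);
}
```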


Request Sampling

By clicking Request Sampling, you can view detailed information about a sample of requests captured by the load generator.

You can enter the corresponding conditions to filter the requests as needed. In the request list, you can click Details to expand the details page for an individual request.

On the details page of an individual request, you can view detailed information about the request and its response, as well as a waterfall chart showing how the request's total time breaks down across its phases.
