
Introduction to Models
Last updated: 2025-12-15 16:52:03
Tencent Cloud Agent Development Platform (Tencent Cloud ADP) supports the following models. You can select according to your needs.
The platform divides models into two categories by purpose.
1. Reasoning Models: Used for intent recognition and primarily responsible for improving intent-understanding accuracy.
2. Generation Models: Primarily responsible for reading comprehension and generating answers.

Generation Model

Tencent Cloud ADP has built-in DeepSeek Full-Performance Edition models and also supports custom models. Model details and use cases are as follows:
DeepSeek-R1-0528 (context length: 64K)
The latest version of the DeepSeek-R1 model significantly improves intent understanding, copywriting generation, programming skills, and logical reasoning. It better understands constraints and inherent logic in complex commands, and its extended thinking capabilities allow it to handle more complex and time-consuming tasks.

DeepSeek-R1 (context length: 64K)
A reinforcement learning (RL)-driven reasoning model that performs on par with OpenAI-o1 in math, code, and inference tasks. It is the same model used by the DeepSeek assistant's deep thinking mode.

DeepSeek-V3 (context length: 64K)
The system has fully switched to DeepSeek-V3-0324, the latest version of the DeepSeek-V3 series. Based on the innovative Mixture of Experts (MoE) architecture and Multi-head Latent Attention (MLA) technology, it achieves comprehensive upgrades in three core areas: reasoning, code generation, and Chinese semantic understanding.

Custom Model (context length: varies by model)
Tencent Cloud ADP allows users to add LLM APIs that comply with the OpenAI protocol and use them as generation models. Supported options include ChatGPT, Claude, Gemini, Llama, Qwen, Doubao, and Kimi.
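As a rough illustration of what "complies with the OpenAI protocol" means for a custom model, the sketch below builds the request body an OpenAI-compatible /chat/completions endpoint expects. The endpoint URL and model name are hypothetical placeholders, not values prescribed by Tencent Cloud ADP.

```python
import json

# Hypothetical values -- substitute your own custom endpoint and model name.
API_BASE = "https://example.com/v1"
MODEL = "my-custom-llm"


def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI-protocol /chat/completions request payload."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }


# An ADP custom-model integration would POST this JSON body to
# f"{API_BASE}/chat/completions" with an Authorization header.
payload = build_chat_request("Hello")
print(json.dumps(payload, indent=2, ensure_ascii=False))
```

Any backend that accepts this request shape and returns the standard chat-completions response can, in principle, be registered as a custom generation model.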

Reasoning Model

The Standard Mode includes the following reasoning models:
Advanced Reasoning Model (maximum input: 8K; maximum output: 4K)
Suitable for scenarios that require configuring Q&A, documents, and workflows at the same time. It provides better intent recognition and supports custom role instructions within the reasoning process, though this may increase latency.

DeepSeek-V3 (context length: 64K)
Now fully upgraded to the DeepSeek-V3-0324 version. As the latest DeepSeek-V3 model, it is built on an innovative Mixture of Experts (MoE) architecture and Multi-head Latent Attention (MLA) technology, delivering comprehensive improvements in reasoning, code generation, and Chinese language understanding.
