Accessing CKafka via Logstash

Last updated: 2026-01-20 17:10:14
Logstash is an open-source log processing tool that collects data from multiple sources, filters it, and stores it for various downstream uses.
Logstash is highly flexible, offers powerful parsing features and a rich plugin ecosystem, and supports a wide range of input and output sources. As a horizontally scalable data pipeline, Logstash works with Elasticsearch and Kibana to provide powerful log collection and search capabilities.

How Logstash Works

Logstash data processing can be divided into three phases: inputs → filters → outputs.
1. Inputs: Data is input from different sources, such as files, syslog, Redis, and Beats.
2. Filters: Data is modified and filtered. This is an intermediate process in the Logstash data pipeline, where events can be changed based on actual conditions. Common filters include grok, mutate, drop, and clone.
3. Outputs: Data is transferred to other destinations. An event can be transferred to multiple outputs, and ends when the transfer is completed. Elasticsearch is the most common output.
Logstash also supports codecs, which let you specify the serialization format on both the input and output ends.
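As an illustration of the three phases, the following minimal pipeline sketch reads syslog-formatted lines from a file, parses them with grok, modifies events with mutate, and writes them to Elasticsearch. The file path and the Elasticsearch address are placeholders for illustration only, not values from this document.

```
input {
  file {
    path => "/var/log/messages"             # example source file (placeholder)
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }  # parse syslog-formatted lines
  }
  mutate {
    remove_field => ["host"]                # example in-place event modification
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]      # example destination (placeholder)
  }
}
```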



Strengths of Accessing Kafka via Logstash

Asynchronous data processing: Messages are buffered in Kafka and processed asynchronously, which smooths out traffic bursts.
Decoupling: If Elasticsearch fails, upstream workloads are not affected.
Note
Logstash filtering consumes resources. If deployed on a production server, it may affect that server's performance.




Operation Steps

Preparations

You have downloaded and installed Logstash. See Downloading Logstash.
You have downloaded and installed JDK 8. See Downloading JDK 8.

Step 1: Obtaining the CKafka Instance Access Address

1. Log in to the CKafka console.
2. In the left sidebar, select Instance List and click the ID of the target instance to go to the basic instance information page.
3. In the Access Mode section of the instance's basic information page, obtain the instance's access address.
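Before configuring Logstash, you can optionally confirm that the access address is reachable from the machine where Logstash runs. The address below is the same placeholder used in the configuration examples; substitute the value from the Access Mode section.

```shell
# Replace xx.xx.xx.xx and xxxx with the address and port from the Access Mode section
nc -vz xx.xx.xx.xx xxxx
```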



Step 2: Creating a Topic

1. On the basic instance information page, select the Topic List tab at the top.
2. On the Topic Management page, click Create to create a Topic named logstash_test.



Step 3: Accessing CKafka

Note
The following sections describe the steps for accessing CKafka as an input source and as an output destination.
As inputs
1. Run bin/logstash-plugin list to check whether the installed plugins include logstash-input-kafka.


2. Create the configuration file input.conf in the bin/ directory. Here, standard output is used as the data destination, and Kafka is used as the data source.
input {
  kafka {
    bootstrap_servers => "xx.xx.xx.xx:xxxx"  # CKafka instance access address
    group_id => "logstash_group"             # CKafka consumer group name
    topics => ["logstash_test"]              # CKafka topic name
    consumer_threads => 3                    # Number of consumer threads, generally equal to the number of partitions
    auto_offset_reset => "earliest"
  }
}
output {
  stdout { codec => rubydebug }
}
3. Run the following command to start Logstash to consume messages.
./logstash -f input.conf
The result is as follows:

You can see that the data in the Topic has been consumed.
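If the topic is empty and nothing appears, you can produce a few test messages for Logstash to consume, for example with the Apache Kafka console producer (using the same placeholder address):

```shell
# Type messages interactively; each line becomes one Kafka message
bin/kafka-console-producer.sh --bootstrap-server xx.xx.xx.xx:xxxx --topic logstash_test
```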
As outputs
1. Run bin/logstash-plugin list to check whether the installed plugins include logstash-output-kafka.


2. Create the configuration file output.conf in the bin/ directory. Here, standard input is used as the data source, with Kafka as the data destination.
input {
  stdin {}
}
output {
  kafka {
    bootstrap_servers => "xx.xx.xx.xx:xxxx"  # CKafka instance access address
    topic_id => "logstash_test"              # CKafka topic name
  }
}
3. Run the following command to start Logstash and send messages to the created topic.
./logstash -f output.conf



4. Start the CKafka consumer to verify the data produced in the previous step.
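One way to verify, assuming the Apache Kafka CLI tools and the placeholder address used throughout this document, is the console consumer:

```shell
# Read the topic from the beginning to see the messages produced above
bin/kafka-console-consumer.sh --bootstrap-server xx.xx.xx.xx:xxxx \
  --topic logstash_test --from-beginning
```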



