wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.4.tar.gz
tar xvf logstash-5.6.4.tar.gz
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y
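A quick way to confirm the installation is to check both the JDK and Logstash; a minimal sketch, assuming the archive was extracted into logstash-5.6.4 in the current directory:

java -version                # the JDK installed above should report version 1.8.0
cd logstash-5.6.4
./bin/logstash --version     # prints the Logstash version if the installation is usable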
Prepare a *.conf configuration file; for its contents, refer to the data source configuration file examples below. Then start Logstash in the background:

nohup ./bin/logstash -f ~/*.conf 2>&1 >/dev/null &
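Before backgrounding the process, it can help to validate the configuration syntax first; a sketch, where ~/nginx.conf is a hypothetical configuration file name:

./bin/logstash -f ~/nginx.conf --config.test_and_exit    # checks the config and exits without starting the pipeline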
docker pull docker.elastic.co/logstash/logstash:5.6.9
Prepare a *.conf configuration file and place it under the /usr/share/logstash/pipeline/ directory inside the container (the host directory can be customized), then run:

docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.9
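To keep the container running in the background and inspect its startup output, a hedged variant of the same command (the container name logstash-test is arbitrary):

docker run -d --name logstash-test -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.9
docker logs -f logstash-test    # confirm that the pipeline files in ~/pipeline/ were loaded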



A configuration file named logstash.conf is added under the /data/config directory on the host and mounted to the /data directory of the Docker container, so that the container can read logstash.conf when it starts.
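As a sketch only, the mount described above could look like the following; it assumes the image entrypoint forwards extra arguments to the logstash binary, so verify this against your image version:

docker run -d \
  -v /data/config:/data \
  docker.elastic.co/logstash/logstash:5.6.9 \
  -f /data/logstash.conf    # point Logstash at the mounted configuration file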



File data source (e.g. shipping an Nginx access log into a daily index):

input {
    file {
        path => "/var/log/nginx/access.log" # path of the log file to collect
    }
}
filter {
}
output {
    elasticsearch {
        hosts => ["http://172.16.0.89:9200"] # private VIP address and port of the Elasticsearch cluster
        index => "nginx_access-%{+YYYY.MM.dd}" # custom index name suffixed with the date, so a new index is created each day
    }
}
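Once Logstash has been running for a while, a quick check is to list the daily indices through the _cat API; a sketch using the cluster address from the example above:

curl "http://172.16.0.89:9200/_cat/indices/nginx_access-*?v"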
Kafka data source (consuming from one or more topics):

input {
    kafka {
        bootstrap_servers => ["172.16.16.22:9092"]
        client_id => "test"
        group_id => "test"
        auto_offset_reset => "latest" # start consuming from the latest offset
        consumer_threads => 5
        decorate_events => true # also attach the current topic, offset, group, partition, etc. to the message
        topics => ["test1","test2"] # array type; multiple topics can be configured
        type => "test" # field used to tag the data source
    }
}
output {
    elasticsearch {
        hosts => ["http://172.16.0.89:9200"] # private VIP address and port of the Elasticsearch cluster
        index => "test_kafka"
    }
}
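To verify the pipeline end to end, one option is to produce a test message and then search the target index; a sketch that assumes a Kafka installation with the standard console scripts and the broker and cluster addresses from the example above:

bin/kafka-console-producer.sh --broker-list 172.16.16.22:9092 --topic test1    # type a test line, then press Enter
curl "http://172.16.0.89:9200/test_kafka/_search?pretty"                       # the message should show up as a document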
MySQL (JDBC) data source (periodically pulling rows from a table):

input {
    jdbc {
        # MySQL database address
        jdbc_connection_string => "jdbc:mysql://172.16.32.14:3306/test"
        # username and password
        jdbc_user => "root"
        jdbc_password => "Elastic123"
        # JDBC driver jar; if you deploy Logstash yourself you need to download this jar, as Logstash does not ship with it
        jdbc_driver_library => "/usr/local/services/logstash-5.6.4/lib/mysql-connector-java-5.1.40.jar"
        # driver class name
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        # path and name of a SQL file to execute
        #statement_filepath => "test.sql"
        # SQL statement to execute
        statement => "select * from test_es"
        # polling schedule; the fields (from left to right) are minute, hour, day of month, month, and day of week; all asterisks means run every minute
        schedule => "* * * * *"
        type => "jdbc"
    }
}
output {
    elasticsearch {
        hosts => ["http://172.16.0.30:9200"]
        index => "test_mysql"
        document_id => "%{id}"
    }
}
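The driver jar referenced by jdbc_driver_library can be fetched from Maven Central; a sketch, assuming the same target path as in the configuration above:

wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.40/mysql-connector-java-5.1.40.jar \
  -O /usr/local/services/logstash-5.6.4/lib/mysql-connector-java-5.1.40.jar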
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.4-linux-x86_64.tar.gz
tar xvf filebeat-5.6.4-linux-x86_64.tar.gz
Prepare the filebeat.yml configuration file (an example is given below), then start Filebeat in the background:

nohup ./filebeat 2>&1 >/dev/null &
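Before backgrounding it, Filebeat can be run in the foreground with logging to stderr to confirm the configuration is picked up; a minimal sketch, assuming filebeat.yml sits in the current directory:

./filebeat -e -c filebeat.yml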
docker pull docker.elastic.co/beats/filebeat:5.6.9
Prepare the filebeat.yml configuration file and mount it into the container (in the official image the default location is /usr/share/filebeat/filebeat.yml), then run:

docker run docker.elastic.co/beats/filebeat:5.6.9
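A hedged variant that mounts a host-side configuration file into that standard location (the host path ~/filebeat.yml is an assumption):

docker run -d -v ~/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:5.6.9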
filebeat.yml example (collecting local log files and writing to Elasticsearch):

# input source configuration
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/services/testlogs/*.log

# output to Elasticsearch
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["172.16.0.39:9200"]
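By default Filebeat writes to daily filebeat-* indices, so a quick check (a sketch using the cluster address from the example above) is:

curl "http://172.16.0.39:9200/_cat/indices/filebeat-*?v"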