ELK: Filebeat

Contents

  • Quick Links
  • Preface
  • I. Concepts
    • 1. Key Features
    • 2. Architecture
    • 3. Use Cases
    • 4. Modules
    • 5. Monitoring and Management
  • II. Download Links
  • III. Installing 7.6.2 on Linux
    • filebeat.yml reference config (do not copy-paste blindly)
    • Multiline matching configuration
    • Filtering configuration
    • Final config (1: multiline matching, reading log files directly, EFK setup)
    • Final config (2: multiline matching, reading log files directly, ELFK setup)
  • IV. Comprehensive Case Study

Quick Links

SpringMVC Source Code Analysis (featured)
Spring 6 Source Code Analysis (featured)
SpringBoot 3 Framework (featured)
MyBatis Framework (featured)
MyBatis-Plus
SpringDataJPA
SpringCloudNetflix
SpringCloudAlibaba (featured)
Shiro
SpringSecurity
Java Logging Frameworks
Activiti (coming soon)
JDK 8 New Features
JDK 9 New Features
JDK 10 New Features
JDK 11 New Features
JDK 12 New Features
JDK 13 New Features
JDK 14 New Features
JDK 15 New Features
JDK 16 New Features
JDK 17 New Features
JDK 18 New Features
JDK 19 New Features
JDK 20 New Features
JDK 21 New Features
Index of Other Technical Articles

Preface

Once ELK is set up it is very convenient for collecting logs, and it is not limited to log collection; it is a powerful platform with full-text search and much more.

The following articles are updated from time to time.

ELK: Elastic Stack Concepts
ELK: Elastic Stack Syntax
ELK: Elastic Stack Installation
ELK: Logstash
ELK: Kibana
ELK: Filebeat

I. Concepts

Filebeat is a lightweight data shipper in the Elastic Stack, designed to forward and centralize log data. Its purpose is to read log files efficiently and forward them to Elasticsearch or Logstash, so that users can analyze and monitor their logs. The sections below describe Filebeat in more detail, including its key features, architecture, use cases, and configuration examples.

1. Key Features

1. Lightweight: Filebeat is a lightweight agent that puts little extra load on the source system, so it can be deployed in almost any environment.
2. Log collection: it monitors and collects local log files and forwards them to a configured target.
3. Data transformation: it supports simple processing of log data, such as parsing and formatting.
4. Fault tolerance: when the target is unavailable, Filebeat buffers the data so that no log entries are lost.

2. Architecture

Filebeat's architecture consists mainly of the following parts:

1. Input: defines which log files to collect and their paths.
2. Processing: the collected data can be processed, for example parsed or enriched with extra fields.
3. Output: the processed data is sent to a target such as Elasticsearch or Logstash.
4. Modules: Filebeat ships with predefined modules that simplify log collection for common applications and services (a minimal end-to-end sketch follows this list).
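
As a rough sketch of how these parts map onto filebeat.yml (the path, field, and host values below are placeholders, not values from this article):

filebeat.inputs:            # 1. input: which files to read
  - type: log
    enabled: true
    paths:
      - /var/log/app/*.log  # placeholder path
    fields:                 # optional extra fields attached to every event
      env: staging
    fields_under_root: true

processors:                 # 2. processing: enrich events before they are shipped
  - add_host_metadata: ~

output.elasticsearch:       # 3. output: where the events are sent
  hosts: ["localhost:9200"]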

3. Use Cases

1. Centralized log management: collect logs from many servers in one place for unified management and analysis.
2. Application and system log monitoring: monitor and analyze logs from applications, systems, and network devices in real time.
3. Security event monitoring: collect and analyze security-related logs to detect and respond to security incidents.

4. Modules

Filebeat provides a number of predefined modules that simplify the log collection setup for common applications and services.
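
As an illustration only (the nginx module and the paths below are an assumed example, not something used later in this article), enabling a module usually amounts to a small file under modules.d/, activated with `filebeat modules enable nginx`:

- module: nginx             # modules.d/nginx.yml, illustrative example only
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # override the default log path if needed
  error:
    enabled: true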

5. Monitoring and Management

Filebeat integrates with the other tools in the Elastic Stack (such as Kibana) for log visualization and monitoring. Users can build Kibana dashboards to watch the collected log data in real time.

Conclusion

Filebeat is a powerful tool for efficient log collection and forwarding in a wide range of environments. Combined with Elasticsearch and Kibana, it lets users centralize and analyze log data easily, helping them identify and resolve problems quickly.

The following is an annotated reference of the most common options for the `log` input type:

type: log  # input type is log
enabled: true  # whether this log input configuration is enabled
paths:  # log files to monitor; patterns are resolved with Go's glob function and directories are NOT recursed. For example, with:
- /var/log/*/*.log  # Filebeat looks for files ending in ".log" in every subdirectory of /var/log, but not for ".log" files directly under /var/log.
recursive_glob.enabled:  # enable recursive glob expansion, e.g. /foo/** covers /foo, /foo/*, /foo/*/* and so on
encoding:  # encoding of the monitored files; both plain and utf-8 handle Chinese logs
exclude_lines: ['^DBG']  # drop lines matching any of these regular expressions
include_lines: ['^ERR', '^WARN']  # keep only lines matching any of these regular expressions
harvester_buffer_size: 16384  # buffer size in bytes used by each harvester when reading a file
max_bytes: 10485760  # maximum number of bytes a single log message may have; bytes beyond max_bytes are discarded and not sent. Default 10MB (10485760)
exclude_files: ['\.gz$']  # list of regular expressions matching files Filebeat should ignore
ignore_older: 0  # default 0 (disabled); accepts values such as 2h or 2m. Note: ignore_older must be greater than close_inactive.
  # Files that have not been updated within this window, or that were never picked up by a harvester, are ignored.
close_*  # the close_* options close the harvester once certain criteria or a timeout are reached. Closing a harvester means closing the file handle.
  # If the file is updated after the harvester is closed, it is picked up again after scan_frequency. If, however, the file is moved or deleted while
  # the harvester is closed, Filebeat cannot pick it up again and any data the harvester had not yet read is lost.
close_inactive  # when enabled, the file handle is closed if the file has not been read for the specified time.
  # The offset of the last line read, not the file's modification time, is the starting point for the next read.
  # If a closed file changes, a new harvester is started after the next scan_frequency.
  # Set it to a value larger than how often new lines are written, and use separate inputs for log files with different update rates.
  # An internal timestamp tracks reads; the countdown restarts every time the last line is read. Values such as 2h or 5m are typical.
close_renamed  # when enabled, Filebeat closes the file handle if the file is renamed or moved
close_removed  # when enabled, Filebeat closes the file handle when the file is deleted; if this is enabled, clean_removed must be enabled as well
close_eof  # closes the file as soon as the end of file is reached; suitable for files that are written only once
close_timeout  # when enabled, each harvester gets a predefined lifetime; once it expires the file is closed regardless of how far it has been read.
  # close_timeout must not equal ignore_older, otherwise updated files will not be read. If the output never emits an event, this timeout never starts;
  # at least one event has to be sent before the harvester is closed. A value of 0 disables it.
clean_inactive  # removes the state of previously harvested files from the registry.
  # Must be greater than ignore_older + scan_frequency so that no state is removed while a file is still being collected.
  # This helps keep the registry file small, especially when large numbers of new files are generated every day, and it also works around the
  # inode-reuse problem Filebeat has on Linux.
clean_removed  # when enabled, Filebeat removes a file's state from the registry if the file can no longer be found on disk.
  # If close_removed is disabled, clean_removed must be disabled as well.
scan_frequency  # how often the input checks the configured paths for new files; default 10s
tail_files:  # if true, Filebeat starts reading at the end of each file and ships only newly appended lines as events,
  # instead of re-sending the whole file from the beginning.
symlinks:  # allows Filebeat to harvest symlinks in addition to regular files; the original file is opened and read even though the symlink path is reported.
backoff:  # controls how aggressively Filebeat checks a file for updates; it is the time Filebeat waits before checking the file again after reaching EOF. Default 1s.
max_backoff:  # maximum time Filebeat waits before checking a file again after reaching EOF
backoff_factor:  # factor by which the backoff wait time is multiplied on each retry; default 2
harvester_limit:  # limits the number of harvesters started in parallel for one input, which directly affects the number of open file handles

tags  # list of tags added to each event, useful for filtering, e.g. tags: ["json"]
fields  # optional extra fields added to the output; values can be scalars, tuples, dictionaries, or other nested types.
  # By default they are stored under a "fields" sub-dictionary, for example:
  # filebeat.inputs:
  # - type: log
  #   fields:
  #     app_id: query_engine_12
fields_under_root  # if true, the custom fields are stored at the top level of the output document instead of under "fields"

multiline.pattern  # the regular expression that lines are matched against
multiline.negate  # whether the pattern match is negated; default false.
  # With pattern '^b', negate: false and match: after, consecutive lines that start with b are appended to the previous line;
  # with negate: true, it is the lines that do NOT start with b that are appended.
multiline.match  # whether matching lines are combined before or after the anchor line, in combination with negate
multiline.max_lines  # maximum number of lines that can be combined into one event; additional lines are discarded. Default 500
multiline.timeout  # if no new matching line arrives within this time, the pending event is sent anyway. Default 5s
max_procs  # maximum number of CPUs that can be used simultaneously; defaults to the number of logical CPUs in the system
name  # name of this Filebeat instance; defaults to the hostname
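
To make the reference above concrete, here is a rough sketch that wires a few of these options together in a single log input; the path and the specific values are illustrative assumptions, not a recommendation:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/app/*.log          # placeholder path
  encoding: utf-8
  exclude_files: ['\.gz$']        # skip rotated, compressed files
  ignore_older: 48h               # must be greater than close_inactive
  close_inactive: 2h              # close the file handle after 2h without reads
  scan_frequency: 10s
  tags: ["app"]
  fields:
    app_id: query_engine_12       # stored under "fields" unless fields_under_root is true
  fields_under_root: true
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after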

II. Download Links

https://www.elastic.co/cn/downloads/filebeat

https://www.elastic.co/cn/downloads/past-releases/filebeat-7-6-2

III. Installing 7.6.2 on Linux

cd /usr/local/filebeat
tar -zxvf filebeat-7.6.2-linux-x86_64.tar.gz  # extract the tarball; this takes a while
cd /usr/local/filebeat/filebeat-7.6.2-linux-x86_64  # company server (gubo)
cp /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.yml  /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.bak.yml  # back up the original config
vim /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.yml  # edit the config (back it up first)
  # Only the paths need to change; with a single-node ES nothing else has to be touched. multiline handles multi-line matching, e.g. merging exception stack traces.
  # Big pitfall: the indentation must be exact, otherwise the settings are silently ignored.
  paths:
    -  /java/logs/*/*.log

  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

/usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat test config  # validate the configuration file
nohup /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat -e -c filebeat.yml > /dev/null 2>&1 &  disown  # start in the background and discard the output; keep the disown at the end, it stops Filebeat from exiting when the Xshell session is closed
#### nohup /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat -e -c filebeat.yml > /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.txt  &  # alternative without /dev/null, so the log file can be inspected when troubleshooting
#### tail -n 10 -f  /usr/local/filebeat/filebeat-7.6.2-linux-x86_64/filebeat.txt
###  -c specifies a custom config file, -d enables debug mode; the default Beats port is 5044

filebeat.yml reference config (do not copy-paste blindly)

###################### Filebeat Configuration Example #########################
 
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
 
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
 
#=========================== Filebeat inputs =============================
 
filebeat.inputs:
 
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
 
- type: log
 
  # Change to true to enable this input configuration.
  enabled: true
 
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    -  /java/logs/*/*.log
    #- /var/logs/es_aaa_index_search_slowlog.log
    #- /var/logs/es_bbb_index_search_slowlog.log
    #- /var/logs/es_ccc_index_search_slowlog.log
    #- /var/logs/es_dddd_index_search_slowlog.log
    #- c:\programdata\elasticsearch\logs\*
 
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
 
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
 
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
 
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
 
  ### Multiline options
 
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
 
  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
 
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
 
  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after
 
 
#============================= Filebeat modules ===============================
 
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
 
  # Set to true to enable config reloading
  reload.enabled: false
 
  # Period on which files under path should be checked for changes
  #reload.period: 10s
 
#==================== Elasticsearch template setting ==========================
 
 
#================================ General =====================================
 
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: filebeat222
 
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
 
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
 
#cloud.auth:
 
#================================ Outputs =====================================
 
 
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.110.130:9200","92.168.110.131:9200"]
  hosts: ["localhost:9200"]
 
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"
 
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "${ES_PWD}"   #通过keystore设置密码

Multiline matching configuration

There are four possible combinations of negate and match, all anchored on the pattern line: continuation lines can be merged before or after it under either negate setting. The goal here is, for example, to keep the `at ...` lines of an exception in the same event instead of splitting them across lines.

Variant 1 (merge exception continuation lines):
multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
multiline.negate: false
multiline.match: after
Variant 2 (merge on the leading timestamp; used in production. When Kibana sorts everything in descending order, the lines inside one request still read in chronological order, which works well):
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after
Variant 3 (merges everything from one thread; the officially recommended example, but it merges too aggressively and the timestamps end up out of order, so it is not usable here):
multiline.pattern: '^\['
multiline.negate: true
multiline.match: after
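
As a quick illustration of how variant 2 behaves (the sample log lines in the comments are made up): every line that does not start with a date is appended to the preceding dated line, so an entire stack trace becomes one event:

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'  # anchor lines start with a date
multiline.negate: true                            # lines that do NOT match the pattern ...
multiline.match: after                            # ... are appended after the preceding anchor line
# 2024-01-01 12:00:00 ERROR request failed        -> starts event 1
# java.lang.NullPointerException: null            -> appended to event 1
#     at com.example.Foo.bar(Foo.java:42)         -> appended to event 1
# 2024-01-01 12:00:01 INFO next request           -> starts event 2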

Filtering configuration

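The original screenshots for this section are not available. As a hedged sketch only, filtering in Filebeat is usually done either at the input (include_lines / exclude_lines, as listed in the reference above) or with processors such as drop_event and drop_fields; the condition and field names below are placeholders:

filebeat.inputs:
- type: log
  paths:
    - /java/logs/*/*.log
  exclude_lines: ['^DBG']           # drop debug lines at the input
  include_lines: ['^ERR', '^WARN']  # keep only error and warning lines

processors:
  - drop_event:                     # drop whole events that match a condition
      when:
        contains:
          message: "health-check"   # placeholder condition
  - drop_fields:
      fields: ["agent.ephemeral_id"]  # placeholder field to remove from every event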

Final config (1: multiline matching, reading log files directly, EFK setup)

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   # - /var/log/*.log
    - /java/logs/*/*.log
    - /java/logs/auth/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

 # multiline.pattern: '^\['
 # multiline.negate: true
 # multiline.match: after
 
 # multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
 # multiline.negate: false
 # multiline.match: after


  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Final config (2: multiline matching, reading log files directly, ELFK setup)

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   # - /var/log/*.log
    - /java/logs/*/*.log
    - /java/logs/auth/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

 # multiline.pattern: '^\['
 # multiline.negate: true
 # multiline.match: after
 
 # multiline.pattern: '^[[:space:]]+(at|\.{3})\b|^Caused by:'
 # multiline.negate: false
 # multiline.match: after


  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

1. Console output never worked for me; I assumed the whole setup was broken and redeployed on another Linux box, with the same result.
2. Output to ES also failed at first. Going back to the stock filebeat.yml finally got events into ES; the root cause was a badly formatted paths section pointing at my project's log directories. Compare your config against the reference carefully, spacing and formatting must be exact, and type the changes in by hand rather than copy-pasting.
3. Big pitfall: when started with nohup and output redirected to /dev/null, Filebeat kept exiting on its own. According to what I found, after starting it with nohup you have to exit the shell before closing Xshell (easy to forget, and others may slip up). Another workaround is an extra config file (also not great). The best fix is appending disown to the nohup command: it removes the started process from the current shell's job list, so it no longer exits when the shell is closed.

IV. Comprehensive Case Study

Case: multi-VM ELFK + Redis + TCP

(Figure) Logstash config v1; it is refined step by step later. A stdout output makes testing easier.

(Figure) Logstash config v2

(Figure) Filebeat Redis output configuration
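
The screenshot is missing here; as an assumption-based sketch, a Filebeat Redis output typically looks like the following (the host, key, and db values are placeholders):

output.redis:
  hosts: ["192.168.110.130:6379"]  # placeholder Redis address
  key: "filebeat"                  # Redis list the events are pushed to
  db: 0
  timeout: 5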

(Figure) Filebeat configuration for the Beats output to Logstash

For the TCP case, Logstash reads from TCP directly, so no Filebeat configuration is needed.

(Figure) Kibana shows the three resulting indices
