Setting Up an EFK (Elasticsearch + Filebeat + Kibana) Instance on Windows

Resource downloads

Elasticsearch: the latest version is fine
Kibana
Filebeat: be sure to pick the ZIP archive download

Update elasticsearch.yml (default port 9200):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 20-02-2024 07:30:46
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["RAYL"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

Update kibana.yml (default port 5601):

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
i18n.locale: "zh-CN"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

Update filebeat.yml. Then, from a command line, run install-service-filebeat.ps1 to install Filebeat as a Windows service; you can find it afterwards by searching in the Windows Services manager.
Note: if PowerShell reports that the script is not signed, change the Windows ExecutionPolicy to RemoteSigned by running Set-ExecutionPolicy RemoteSigned.
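
A minimal sketch of those commands, assuming the Filebeat ZIP was extracted to E:\elastic\filebeat (a hypothetical path; use your own location):

# Run from an elevated PowerShell prompt in the extracted Filebeat folder
cd E:\elastic\filebeat

# Only needed if PowerShell refuses to run the unsigned script
Set-ExecutionPolicy RemoteSigned

# Installs a Windows service named "filebeat"
.\install-service-filebeat.ps1

# Start the service now (or start it later from the Services manager)
Start-Service filebeat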

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - your log path (replace this with your log directory; * is a wildcard that matches every file in the folder, e.g. E:\Project\log\*)

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


Run Elasticsearch: open a command line in the bin folder of the extracted package and run .\elasticsearch.bat
Once it has started, open http://localhost:9200 in a browser.
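If Elasticsearch came up correctly, the browser returns a small JSON document describing the node, roughly like the sketch below (the values are illustrative: the node name RAYL matches the cluster.initial_master_nodes entry in the elasticsearch.yml above, and your version number will differ):

{
  "name" : "RAYL",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "8.x.x",
    ...
  },
  "tagline" : "You Know, for Search"
}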

Kibana: open a command line in the bin folder of the extracted package and run .\kibana.bat
Then open http://localhost:5601 in a browser.

Filebeat: start it directly from the Windows Services manager.

In Kibana, create a data view. On the right you will see the Filebeat data source; set the index pattern so that it matches the name of that data source. This name can also be customized in filebeat.yml:
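By default Filebeat writes to an index or data stream named filebeat-<version>, so an index pattern such as filebeat-* will match it. If you want a custom name, a minimal sketch of the relevant filebeat.yml settings follows (myapp-logs is a hypothetical name, not from the original post):

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "myapp-logs-%{[agent.version]}"

# Overriding the index name also requires a matching template name and pattern
setup.template.name: "myapp-logs"
setup.template.pattern: "myapp-logs-*"

After restarting the Filebeat service, the data view's index pattern would then be myapp-logs-*.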
By the way, make sure a Java runtime (Java 8 or later) is available first; Elasticsearch needs a Java environment (recent Elasticsearch releases ship with a bundled JDK, in which case a separate install is not required).
