Setting Up a Simple Kafka Cluster on Windows

Table of Contents

  • Setting Up a Simple Kafka Cluster on Windows
    • 1. Environment
    • 2. ZooKeeper Cluster Setup
      • 2.1 Downloading ZooKeeper
      • 2.2 Installing ZooKeeper
        • 2.2.1 Extract zookeeper-3.4.8.tar.gz
        • 2.2.2 In the conf directory, copy zoo_sample.cfg to zoo.cfg
        • 2.2.3 Edit zoo.cfg
        • 2.2.4 Create the myid files
        • 2.2.5 Notes
      • 2.3 Starting ZooKeeper
    • 3. Kafka Cluster Setup
      • 3.1 Downloading Kafka
      • 3.2 Installing Kafka
        • 3.2.1 Extract kafka_2.12-3.7.0.tgz
        • 3.2.2 Edit server.properties
      • 3.3 Starting Kafka
        • 3.3.1 Notes
    • 4. Using the Offset Explorer GUI
      • 4.1 Connecting

1. Environment

Item                    Version
Operating system        Windows 11, 64-bit
ZooKeeper               zookeeper-3.4.8
Kafka                   kafka_2.12-3.7.0

2. ZooKeeper Cluster Setup

Before building the Kafka cluster, you first need a ZooKeeper cluster.

2.1 Downloading ZooKeeper

ZooKeeper download page
This guide uses version 3.4.8.

2.2 Installing ZooKeeper

Make three copies of the ZooKeeper distribution, one per server: create the directories Server-A, Server-B, and Server-C and place a copy of ZooKeeper in each.

2.2.1 Extract zookeeper-3.4.8.tar.gz

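A minimal Command Prompt sketch of steps 2.2 and 2.2.1, assuming everything is placed under E:\zookeeper and using the tar that ships with Windows 10 and later:

cd /d E:\zookeeper
tar -xf zookeeper-3.4.8.tar.gz

rem One copy of the program per server
xcopy /E /I zookeeper-3.4.8 Server-A\zookeeper-3.4.8
xcopy /E /I zookeeper-3.4.8 Server-B\zookeeper-3.4.8
xcopy /E /I zookeeper-3.4.8 Server-C\zookeeper-3.4.8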

2.2.2 In the conf directory, copy zoo_sample.cfg to zoo.cfg

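From a Command Prompt, for Server-A this is simply the following (repeat for Server-B and Server-C; the path assumes the layout sketched in 2.2.1):

cd /d E:\zookeeper\Server-A\zookeeper-3.4.8\conf
copy zoo_sample.cfg zoo.cfg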

2.2.3 Edit zoo.cfg

In zoo.cfg, each server.N=host:quorumPort:electionPort line lists a cluster member; because all three instances run on the same machine here, every instance gets its own clientPort and its own quorum/election ports. The corresponding data folders (E:/zookeeper/tmp/zookeeper-A, -B, -C) must be created before startup; see the commands below.
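For example, the three data directories can be created up front from a Command Prompt:

mkdir E:\zookeeper\tmp\zookeeper-A
mkdir E:\zookeeper\tmp\zookeeper-B
mkdir E:\zookeeper\tmp\zookeeper-C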

Server A:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=E:/zookeeper/tmp/zookeeper-A
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Server B:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=E:/zookeeper/tmp/zookeeper-B
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Server C:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=E:/zookeeper/tmp/zookeeper-C
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=localhost:2887:3887
server.2=localhost:2888:3888
server.3=localhost:2889:3889
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

2.2.4 Create the myid files

Go into each data folder (e.g. E:\zookeeper\tmp\zookeeper-A), right-click, choose "Open in Terminal", and create a myid file whose content is that node's id from the server.N entries in zoo.cfg: 1 for Server-A, 2 for Server-B, 3 for Server-C (for example, echo 1 > myid in zookeeper-A).

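A sketch of the three files created from a plain Command Prompt. Note that echo 1 > myid writes a trailing space before the newline, and the same redirection in PowerShell may produce a UTF-16 file that ZooKeeper cannot parse; wrapping the echo in parentheses avoids the trailing space:

rem Run these in cmd.exe, not PowerShell, so the files are plain text
(echo 1)> E:\zookeeper\tmp\zookeeper-A\myid
(echo 2)> E:\zookeeper\tmp\zookeeper-B\myid
(echo 3)> E:\zookeeper\tmp\zookeeper-C\myid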

2.2.5 Notes

1. The myid file generated with echo may contain a trailing space; remove it, otherwise ZooKeeper will not start.
2. Java environment: if the Java environment variables (e.g. JAVA_HOME) are not configured, startup will also fail.

2.3 Starting ZooKeeper

Double-click the zkServer.cmd script in each server's bin directory to start that instance.
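Instead of double-clicking, the three instances can also be launched from a single Command Prompt, and connectivity can be checked with the bundled CLI client (directory layout as assumed above):

rem Launch each ZooKeeper instance in its own window
start "zk-A" cmd /k E:\zookeeper\Server-A\zookeeper-3.4.8\bin\zkServer.cmd
start "zk-B" cmd /k E:\zookeeper\Server-B\zookeeper-3.4.8\bin\zkServer.cmd
start "zk-C" cmd /k E:\zookeeper\Server-C\zookeeper-3.4.8\bin\zkServer.cmd

rem Verify that a client can connect (any of the three client ports works)
E:\zookeeper\Server-A\zookeeper-3.4.8\bin\zkCli.cmd -server 127.0.0.1:2181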

3. Kafka Cluster Setup

3.1 Downloading Kafka

Kafka download page

3.2 Installing Kafka

Make three copies of the Kafka distribution, one per broker: create the directories kafka_A, kafka_B, and kafka_C and place a copy of Kafka in each.

3.2.1 Extract kafka_2.12-3.7.0.tgz

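As with ZooKeeper, this can be scripted from a Command Prompt; a sketch assuming everything lives under E:\kafka and the distribution contents are placed directly inside each kafka_X directory (so that bin\windows sits at E:\kafka\kafka_A\bin\windows):

cd /d E:\kafka
tar -xf kafka_2.12-3.7.0.tgz

rem One copy of the distribution per broker
xcopy /E /I kafka_2.12-3.7.0 kafka_A
xcopy /E /I kafka_2.12-3.7.0 kafka_B
xcopy /E /I kafka_2.12-3.7.0 kafka_C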

3.2.2 Edit server.properties


kafka_A broker (the three brokers use the same server.properties except for broker.id, port, listeners, advertised.listeners, and log.dirs):

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
host.name=192.168.8.99
# Listener port (legacy setting; in Kafka 3.x the effective address comes from the listeners entries below)
port=9097
# The three ZooKeeper instances (IP and port)
zookeeper.connect=192.168.8.99:2181,192.168.8.99:2182,192.168.8.99:2183
listeners=PLAINTEXT://192.168.8.99:9097
advertised.listeners=PLAINTEXT://192.168.8.99:9097
############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=E:\kafka\kafka_A\kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

kafka_B broker:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
host.name=192.168.8.99
# Listener port
port=9098
# The three ZooKeeper instances (IP and port)
zookeeper.connect=192.168.8.99:2181,192.168.8.99:2182,192.168.8.99:2183
listeners=PLAINTEXT://192.168.8.99:9098
advertised.listeners=PLAINTEXT://192.168.8.99:9098

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=E:\kafka\kafka_B\kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

kafka_C broker:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
host.name=192.168.8.99
# Listener port
port=9099
# The three ZooKeeper instances (IP and port)
zookeeper.connect=192.168.8.99:2181,192.168.8.99:2182,192.168.8.99:2183
listeners=PLAINTEXT://192.168.8.99:9099
advertised.listeners=PLAINTEXT://192.168.8.99:9099

############################# Socket Server Settings #############################

# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=E:\kafka\kafka_C\kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
#log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

3.3 Starting Kafka

In each broker's bin\windows directory, run:

kafka-server-start.bat ..\..\config\server.properties
3.3.1 Notes

If the installation path is nested too deeply, the startup command can fail, typically because the classpath built by the startup script exceeds the Windows command-line length limit.
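Once all three brokers are running, a quick way to verify the cluster is to create a topic replicated across all of them. The commands below use the broker addresses from the configs above; the topic name cluster-test is just an example:

cd /d E:\kafka\kafka_A\bin\windows

rem Create a topic with one replica on each broker
kafka-topics.bat --create --topic cluster-test --partitions 3 --replication-factor 3 --bootstrap-server 192.168.8.99:9097,192.168.8.99:9098,192.168.8.99:9099

rem Each partition should report a leader and an ISR containing broker ids 0, 1 and 2
kafka-topics.bat --describe --topic cluster-test --bootstrap-server 192.168.8.99:9097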

4. Using the Offset Explorer GUI

Offset Explorer download page

4.1 Connecting

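In Offset Explorer, add a new connection and fill in the cluster details. The exact field names can differ slightly between versions; with this guide's setup the values would look roughly like:

Cluster name      : windows-kafka-cluster   (any label)
Zookeeper Host    : 192.168.8.99
Zookeeper Port    : 2181
Bootstrap servers : 192.168.8.99:9097,192.168.8.99:9098,192.168.8.99:9099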
