Installing Kafka on a Kunpeng Server

Due to project requirements, Kafka needed to be installed on a Kunpeng cloud host, with the version pinned to 2.3.x. The installation is described in the following steps:
1. Download the Kafka installation package
2. Upload it to the server
3. Modify the configuration
4. Start the services
5. Test with client tools

Server Information

CPU Information

[root@ecs02 ~]# lscpu
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              1
Core(s) per socket:              16
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
CPU max MHz:                     2400.0000
CPU min MHz:                     2400.0000
BogoMIPS:                        200.00
L1d cache:                       2 MiB
L1i cache:                       2 MiB
L2 cache:                        16 MiB
L3 cache:                        64 MiB
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm

Operating System Information

[root@ecs02 ~]# cat /etc/kylin-release 
Kylin Linux Advanced Server release V10 (Tercel)

Java Version Information

[root@ecs02 ~]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
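
Kafka 2.3 requires Java 8 or newer. If no JDK is present yet, Kylin V10 is yum/dnf based, so something along these lines should work (the exact package name depends on the configured repositories, so treat this as an assumption rather than a verified command for your image):

yum install -y java-1.8.0-openjdk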

Download the Kafka Installation Package

Visit the Apache site and open the Kafka downloads page at https://kafka.apache.org/downloads. After downloading you will have the file kafka_2.12-2.3.1.tgz. Pick whichever release matches your requirements. If you also need a standalone ZooKeeper, its version must match the Kafka release; the matching version can be read off the ZooKeeper jar bundled in the Kafka package's libs/ directory (for the release used here it is 3.4.14).
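If the server itself has internet access, the same release can also be fetched directly from the Apache archive; a minimal sketch (the URL follows Apache's standard archive layout for the 2.3.1 release, so verify it before relying on it):

wget https://archive.apache.org/dist/kafka/2.3.1/kafka_2.12-2.3.1.tgz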

Upload to the Server

Upload the file to the server directory /data/public/.
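One way to copy the archive from a local machine is scp; a minimal sketch, assuming SSH access as root and the host IP used elsewhere in this article:

scp kafka_2.12-2.3.1.tgz root@192.168.1.100:/data/public/

Then log in to the server remotely and run the following command: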

cd /data/public/

Extract the archive with the following command:

tar -xvf kafka_2.12-2.3.1.tgz

Modify the Configuration

Extracting produces /data/public/kafka_2.12-2.3.1/. Enter that directory and edit config/server.properties: search for the keyword listeners=PLAINTEXT (the line is commented out by default) and change it to:

listeners=PLAINTEXT://192.168.1.100:9092

Note that 192.168.1.100 here stands for the Kunpeng host's IP address, i.e., the address that client applications will later connect to. Do not configure an address that is unreachable from the rest of the network.
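On a cloud host where clients connect through a public or elastic IP that differs from the address bound on the machine itself, Kafka can advertise a different address to clients via advertised.listeners; a minimal sketch (the public IP is a placeholder):

listeners=PLAINTEXT://192.168.1.100:9092
advertised.listeners=PLAINTEXT://<public-ip>:9092

The complete server.properties used here follows (on this machine the listener address is 10.16.39.14):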

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://10.16.39.14:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
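
One caveat in the file above: log.dirs is still the default /tmp/kafka-logs, and /tmp is typically cleared on reboot. For anything beyond a quick test it is safer to point it at persistent storage; a minimal sketch (the path is only an example):

log.dirs=/data/public/kafka-logs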

Startup

To start Kafka, ZooKeeper must be running first; here we use the ZooKeeper bundled with Kafka. Start it with the following command:

nohup /data/public/kafka_2.12-2.3.1/bin/zookeeper-server-start.sh /data/public/kafka_2.12-2.3.1/config/zookeeper.properties > /dev/null 2>&1 &

Then start Kafka with the following command:

nohup /data/public/kafka_2.12-2.3.1/bin/kafka-server-start.sh /data/public/kafka_2.12-2.3.1/config/server.properties > /dev/null 2>&1 &
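
Both commands discard stdout, so if the broker does not come up, check Kafka's own log files; a quick look, assuming the default log4j configuration that ships with the distribution:

tail -f /data/public/kafka_2.12-2.3.1/logs/server.log

As an alternative to nohup with output redirection, both start scripts also accept a -daemon flag, e.g. kafka-server-start.sh -daemon config/server.properties.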

Once both are up, use netstat to check that the ports are listening:

[root@test public]# netstat -nltp | grep 9092
tcp6       0      0 10.16.39.14:9092        :::*                    LISTEN      1569191/java        
[root@test public]# netstat -nltp | grep 2181
tcp6       0      0 :::2181                 :::*                    LISTEN      1568749/java          

Or, in a single command:

[root@test public]# netstat -nltp | grep '2181\|9092'
tcp6       0      0 10.16.39.14:9092        :::*                    LISTEN      1569191/java        
tcp6       0      0 :::2181                 :::*                    LISTEN      1568749/java        

The output above indicates that Kafka has started successfully.

Enable Remote Access

Open the two ports in the firewall by running:

firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload
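
To confirm the rules took effect, list the open ports; a quick check (firewalld is assumed to be the active firewall, as the commands above imply):

firewall-cmd --zone=public --list-ports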

Remote Access

As a GUI client for Kafka you can use Offset Explorer 2.
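
Independently of any GUI tool, the broker can also be smoke-tested with the command-line clients that ship with Kafka; a minimal sketch, assuming the listener address configured earlier (the topic name test-topic is arbitrary):

# Create a test topic (kafka-topics.sh accepts --bootstrap-server since Kafka 2.2)
/data/public/kafka_2.12-2.3.1/bin/kafka-topics.sh --create \
  --bootstrap-server 10.16.39.14:9092 \
  --replication-factor 1 --partitions 1 --topic test-topic

# Produce a few messages (type lines, Ctrl+C to exit;
# the console producer in 2.3 still uses --broker-list)
/data/public/kafka_2.12-2.3.1/bin/kafka-console-producer.sh \
  --broker-list 10.16.39.14:9092 --topic test-topic

# Read the messages back from the beginning
/data/public/kafka_2.12-2.3.1/bin/kafka-console-consumer.sh \
  --bootstrap-server 10.16.39.14:9092 --topic test-topic --from-beginning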

For Java client access, see: https://support.huaweicloud.com/usermanual-kafka/kafka-ug-0010.html

