1. Foreword
openGauss 6.0.0-RC1 is the innovation release that openGauss published in March 2024, with a lifecycle of six months. According to the openGauss website, 6.0.0-RC1 keeps feature compatibility with earlier releases and, on top of that, adds many new capabilities, such as partitioned-table performance optimizations, multi-language logging, the SPQ multi-node parallel query framework, enhanced MySQL migration and compatibility, and in particular many new DataKit features; it also fixes a number of CVEs.
The release notes page https://docs-opengauss.osinfra.cn/zh/docs/6.0.0-RC1/docs/ReleaseNotes/版本介绍.html describes in detail what openGauss 6.0.0-RC1 inherits from earlier versions and what is new.
For installation and deployment, openGauss 6.0.0 provides a one-stop interactive installation that greatly simplifies the process and lowers the learning cost.
According to the openGauss website, version numbers follow a dotted scheme (XX.Y.0), with "-RCx" appended for innovation releases: XX.0.0 denotes an LTS release, Y denotes a patch release, and XX.0.0-RCx denotes an innovation release. An LTS release normally ships every two years; innovation releases are intended for joint innovation and testing, while LTS releases are long-term supported and suitable for large-scale production. Patch releases are published as needed for major fixes.
The initial openGauss lifecycle plan is:
- LTS releases ship every 2 years and are maintained by the community for 3 years.
- Innovation releases ship every 0.5 years and are maintained for 0.5 years.
I previously tried the cluster installation of openGauss 5.0; this time I want to try installing 6.0 and, in addition, use a migration tool called zcbus to migrate Oracle data to openGauss 6.0.0.
2. Installation Preparation
2.1 Installation Requirements
All servers in the cluster must share the same architecture (a quick check is sketched below); for example:
- 64-bit and 32-bit systems cannot be mixed in one cluster
- ARM and x86 systems cannot be mixed in one cluster
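A minimal way to confirm the servers match is to run the following on every node and compare the output; this is just a sanity check, not part of the official procedure:
# Run as root on every node; the output (e.g. x86_64 / 64) must be identical across the cluster
[root@xsky-nodexxx ~]# uname -m
[root@xsky-nodexxx ~]# getconf LONG_BIT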
2.2 Installation Flow
The openGauss installation follows the flow described in the official documentation (installation flow diagram omitted here).
2.3 Hardware Requirements
Each server in the cluster must meet the minimum hardware requirements below; production environments should size hardware to the workload. This deployment uses three x86 servers running CentOS 7.9 for an openGauss 6.0.0 one-primary-two-standby cluster.
Item | Configuration | Notes |
---|---|---|
Number of servers | 3 | |
Memory | >= 32 GB | 32 GB or more is recommended for functional testing; at least 128 GB per server is recommended for performance testing and production deployment |
CPU | >= 1 x 8 cores, 2.0 GHz | At least 1 x 16 cores @ 2.0 GHz per server is recommended for performance testing and production deployment; both hyper-threaded and non-hyper-threaded modes are supported, but use the same mode on all nodes |
Disk | > 1 GB for the openGauss application, > 300 MB for metadata, > 70% free space reserved for database data | The disks used for openGauss must at least provide 1 GB for the openGauss application and about 300 MB per host for metadata, and keep more than 70% free space for data. RAID1 is recommended for the system disk and RAID5 for data disks, with 4 RAID5 data-disk groups planned for openGauss |
Network | >= 300 Mbps Ethernet | Dual-NIC redundant bonding is recommended |
2.4 Operating System Requirements
2.4.1 Software Environment Requirements
Software Type | Configuration | Notes |
---|---|---|
Operating system | x86: CentOS 7.6 or later | CentOS 7 series is recommended for production |
inode count | Remaining inodes > 1.5 billion | |
Tools | bzip2 | |
Python | Python 3.6.x | Python must be compiled with --enable-shared |
2.4.2 Software Dependency Requirements
Required Software | Suggested Version | Notes |
---|---|---|
libaio-devel | suggested: 0.3.109-13 | |
flex | required: 2.5.31 or later | |
bison | suggested: 2.7-4 | |
ncurses-devel | suggested: 5.9-13.20130511 | |
glibc-devel | suggested: 2.17-111 | |
patch | suggested: 2.7.1-10 | |
redhat-lsb-core | suggested: 4.1 | |
readline-devel | suggested: 7.0-13 |
2.5 Cluster Planning
2.5.1 Hostname Planning
Hostname | Description |
---|---|
xsky-node1 | hostname of the primary node |
xsky-node2 | hostname of standby node 1 |
xsky-node3 | hostname of standby node 2 |
2.5.2 Host Address Planning
IP Address | Description |
---|---|
10.110.7.39 | primary node IP address |
10.110.7.40 | standby node 1 IP address |
10.110.7.41 | standby node 2 IP address |
2.5.3 Port Planning
Port | Parameter | Description |
---|---|---|
15000 | cmServerPortBase | primary CM Server port |
15400 | dataPortBase | database primary node (datanode) port |
2.5.4 User and Group Planning
Item | Name | Type | Recommendation |
---|---|---|---|
User | omm | operating system | use the same password and user ID on all cluster nodes |
Group | dbgrp | operating system | use the same group ID on all cluster nodes |
2.5.5 Software Directory Planning
Compared with 5.0, openGauss 6.0.0 greatly reduces the number of directory options in the XML configuration file.
The official documentation also provides optional directory parameters such as dataNodeXlogPath for the xlog directory.
Directory | Parameter | Purpose |
---|---|---|
/opt/huawei/data/cmserver | cmDir | CM data file path, holding the data and parameter files used by CM Server and CM Agent |
/opt/huawei/install/data/dn | dataNode1 | data directory of the database primary node and of the standby nodes |
2.6 Software Environment Preparation
2.6.1 Install Python 3
openGauss 6.0.0 requires a Python environment of a specific version, and Python must be installed on every node.
This installation uses Python 3.6.10, compiled with --enable-shared.
# Run as root [all nodes]
-- Install build dependencies
[root@xsky-nodexxx ~]# yum install -y gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
-- Create the installation directory
[root@xsky-nodexxx ~]# mkdir /usr/local/python3
[root@xsky-nodexxx ~]# cd /home/soft
-- If the server has Internet access, fetch the package with wget; otherwise download it elsewhere and upload it to the server
[root@xsky-nodexxx soft]# wget https://www.python.org/ftp/python/3.6.10/Python-3.6.10.tar.xz
-- Extract the source package
[root@xsky-nodexxx soft]# tar xvJf Python-3.6.10.tar.xz
[root@xsky-nodexxx soft]# cd Python-3.6.10
-- Configure the build
[root@xsky-nodexxx Python-3.6.10]# ./configure --prefix=/usr/local/python3 --enable-optimizations --enable-shared CFLAGS=-fPIC --with-ssl
-- Build and install
[root@xsky-nodexxx Python-3.6.10]# make && make install
-- Create a symlink
[root@xsky-nodexxx Python-3.6.10]# ln -s /usr/local/python3/bin/python3 /usr/bin/python3
-- Check the Python version
[root@xsky-nodexxx ~]# python3
Python 3.6.10 (default, Jul 12 2023, 17:08:53)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
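To confirm that this build really is a shared-library build (which the openGauss tooling expects), the following quick check can be used; Py_ENABLE_SHARED should print 1. If python3 instead fails with "error while loading shared libraries: libpython3.6m.so", register the library path with ldconfig as shown (paths follow the build above).
[root@xsky-nodexxx ~]# python3 -c "import sysconfig; print(sysconfig.get_config_var('Py_ENABLE_SHARED'))"
[root@xsky-nodexxx ~]# echo "/usr/local/python3/lib" > /etc/ld.so.conf.d/python3.conf
[root@xsky-nodexxx ~]# ldconfig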
2.6.2 Install the Dependency Packages
The openGauss 6.0.0 installation requires a set of dependency packages.
If the servers have Internet access, install them from a yum repository; otherwise mount the OS ISO and configure a local yum repository.
# Run as root [all nodes]
[root@xsky-nodexxx ~]# yum install -y libaio-devel flex bison ncurses-devel glibc-devel patch redhat-lsb-core readline-devel zlib readline
-- Verify that the packages are installed
[root@xsky-nodexxx ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep -E "libaio-devel|flex|bison|ncurses-devel|glibc-devel|patch|redhat-lsb-core|readline-devel|zlib|readline|expect"
2.7 Adjust the Operating System Configuration
2.7.1 Disable the Firewall
It is recommended to disable the firewall on all nodes. If the firewall must stay on, add the IP addresses and ports used by the openGauss services and protocols to the firewall whitelist on each node (see the sketch after the status output below).
# Run as root [all nodes]
-- Stop the firewalld service
[root@xsky-nodexxx ~]# systemctl stop firewalld.service
-- Disable the firewalld service
[root@xsky-nodexxx ~]# systemctl disable firewalld.service
-- Check the firewalld status
[root@xsky-nodexxx ~]# systemctl status firewalld
-- Output like the following means the firewall is stopped and disabled
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
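If the firewall must stay enabled, the base ports planned in section 2.5.3 can be whitelisted instead; a minimal sketch with firewalld. openGauss derives several additional service ports from these base ports, so the ranges below are only an illustration and the official port list should be checked.
# Run as root on every node; 15000 is cmServerPortBase and 15400 is dataPortBase in this deployment
[root@xsky-nodexxx ~]# firewall-cmd --permanent --add-port=15000-15020/tcp
[root@xsky-nodexxx ~]# firewall-cmd --permanent --add-port=15400-15420/tcp
[root@xsky-nodexxx ~]# firewall-cmd --reload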
2.7.2 Disable SELinux
SELinux is usually disabled; it can be turned off on every node as follows.
# Run as root [all nodes]
-- Disable SELinux for the current boot
[root@xsky-nodexxx ~]# setenforce 0
-- Disable SELinux permanently
[root@xsky-nodexxx ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
-- Check the SELinux status
[root@xsky-nodexxx ~]# getenforce
-- "Disabled" means SELinux is off
Disabled
2.7.3 Set the Character Set
All database nodes should use the same character set; for example, set it to UTF-8 as follows.
# Run as root [all nodes]
[root@xsky-nodexxx ~]# cat>> /etc/profile<<EOF
export LANG=en_US.UTF-8
EOF
[root@xsky-nodexxx ~]# source /etc/profile
-- Check the character set
[root@xsky-nodexxx ~]# env |grep -i lang
2.7.4 Set the Time Zone and Time
All database nodes must have the same time zone and time. The steps below configure them with ntp; chrony can be used for clock synchronization instead (see the sketch after this block).
# Run as root [all nodes]
# Configure clock synchronization with ntp
-- Install the ntp service
[root@xsky-nodexxx ~]# yum install -y ntp
-- Enable ntp at boot
[root@xsky-nodexxx ~]# systemctl enable ntpd
-- Start the ntp service
[root@xsky-nodexxx ~]# systemctl start ntpd
-- Set the time zone to Asia/Shanghai
[root@xsky-nodexxx ~]# timedatectl set-timezone Asia/Shanghai
-- Check the time zone
[root@xsky-nodexxx ~]# timedatectl |grep -i zone
-- Enable NTP synchronization
[root@xsky-nodexxx ~]# timedatectl set-ntp yes
-- Edit the crontab
[root@xsky-nodexxx ~]# crontab -e
-- Add the following line with vi/vim, then save and exit
0 12 * * * ntpdate cn.pool.ntp.org
-- Check the time and time zone
[root@xsky-nodexxx ~]# timedatectl status
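If chrony is preferred over ntp (CentOS 7 ships both), an equivalent hedged setup would be the following; cn.pool.ntp.org is only an example time source.
# Run as root on every node
[root@xsky-nodexxx ~]# yum install -y chrony
[root@xsky-nodexxx ~]# echo "server cn.pool.ntp.org iburst" >> /etc/chrony.conf
[root@xsky-nodexxx ~]# systemctl enable chronyd
[root@xsky-nodexxx ~]# systemctl restart chronyd
[root@xsky-nodexxx ~]# chronyc sources -v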
2.7.5 Set the Hardware Clock
The hardware clock, also called the real-time clock, is an independent hardware component (backed by a battery or capacitor); at boot the operating system normally synchronizes its time from the hardware clock.
# Run as root [all nodes]
-- Write the current system time to the hardware clock
[root@xsky-nodexxx ~]# hwclock --systohc
-- Check the hardware clock
[root@xsky-nodexxx ~]# hwclock
2.7.6 Disable the Swap Partition
Swap is disabled to protect database performance and to prevent database buffer memory from being evicted to disk.
# Run as root [all nodes]
-- Disable the current swap partitions
[root@xsky-nodexxx ~]# swapoff -a
# Disable swap permanently
-- Edit /etc/fstab with vi/vim and comment out the following line (a non-interactive alternative is sketched after this block)
UUID=<swap_partition_uuid> swap swap defaults 0 0
-- Save and exit; the change takes effect at the next reboot
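The manual edit above can also be done non-interactively; a hedged one-liner that comments out any active swap entry in /etc/fstab, followed by verification:
[root@xsky-nodexxx ~]# sed -ri '/\sswap\s/ s/^([^#])/#\1/' /etc/fstab
[root@xsky-nodexxx ~]# grep swap /etc/fstab
[root@xsky-nodexxx ~]# free -h | grep -i swap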
2.7.7 Allow Remote root Login
# Run as root [all nodes]
[root@xsky-nodexxx ~]# cat >>/etc/ssh/sshd_config<<EOF
PermitRootLogin yes
EOF
-- Verify the setting
[root@xsky-nodexxx ~]# cat /etc/ssh/sshd_config |grep PermitRootLogin
2.7.8 Configure SSH
# Run as root [all nodes]
-- Comment out any existing Banner directive in sshd_config
[root@xsky-nodexxx ~]# sed -i '/Banner/s/^/#/' /etc/ssh/sshd_config
-- Comment out any existing PermitRootLogin directive
[root@xsky-nodexxx ~]# sed -i '/PermitRootLogin/s/^/#/' /etc/ssh/sshd_config
[root@xsky-nodexxx ~]# echo -e "\n" >> /etc/ssh/sshd_config
[root@xsky-nodexxx ~]# echo "Banner none " >> /etc/ssh/sshd_config
# Setting Banner none removes the login welcome message; the banner would pollute the output of remote commands during installation and break the installer
[root@xsky-nodexxx ~]# echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
[root@xsky-nodexxx ~]# cat /etc/ssh/sshd_config |grep -v ^#|grep -E 'PermitRootLogin|Banner'
-- Restart sshd for the changes to take effect
[root@xsky-nodexxx ~]# systemctl restart sshd.service
-- Check the sshd status
[root@xsky-nodexxx ~]# systemctl status sshd.service
2.7.9 Add hosts Entries
# Run as root [all nodes]
[root@xsky-nodexxx ~]# cat >> /etc/hosts<<EOF
10.110.7.39 xsky-node1
10.110.7.40 xsky-node2
10.110.7.41 xsky-node3
10.110.7.42 xsky-node4
EOF
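After editing /etc/hosts, a quick reachability check against the planned hostnames helps catch typos early:
# Run as root on any node; every hostname should resolve and answer a single ping
[root@xsky-nodexxx ~]# for h in xsky-node1 xsky-node2 xsky-node3; do ping -c 1 -W 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"; done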
2.7.10 Tune Kernel Parameters
The "Preparing the installation environment" page on the official site (osinfra.cn), in its "Configuring OS parameters" section, lists the required operating system parameter settings. The table below is excerpted from that page.
Parameter | Description | Set automatically by gs_preinstall | Recommended Value |
---|---|---|---|
net.ipv4.tcp_max_tw_buckets | Maximum number of TCP/IP connections kept in TIME_WAIT simultaneously; beyond this value TIME_WAIT sockets are released immediately and a warning is printed. | Yes | 10000 |
net.ipv4.tcp_tw_reuse | Allows sockets in TIME-WAIT state to be reused for new TCP connections. 0 = off, 1 = on. | Yes | 1 |
net.ipv4.tcp_tw_recycle | Enables fast recycling of TIME-WAIT sockets. 0 = off, 1 = on. | Yes | 1 |
net.ipv4.tcp_keepalive_time | How often TCP sends keepalive messages when keepalive is enabled. | Yes | 30 |
net.ipv4.tcp_keepalive_probes | Number of keepalive probes sent before a connection is declared dead. Multiplied by tcp_keepalive_intvl, it determines how long a connection may remain unanswered after keepalives start. | Yes | 9 |
net.ipv4.tcp_keepalive_intvl | Interval for resending probes when a probe is not acknowledged. | Yes | 30 |
net.ipv4.tcp_retries1 | Maximum number of TCP retries during connection establishment. | No | 5 |
net.ipv4.tcp_syn_retries | Maximum number of SYN retransmissions. | No | 5 |
net.ipv4.tcp_synack_retries | Maximum number of SYN-ACK retransmissions. | No | 5 |
net.ipv4.tcp_retries2 | Number of times the kernel retransmits data to an established remote host; a lower value detects dead peers sooner so the server can release the connection faster. Increasing it can work around "connection reset by peer" issues. | Yes | 12 |
vm.overcommit_memory | Controls how the kernel checks memory allocations. 0: estimate available memory as precisely as possible; 1: always succeed without checking; 2: fail if the request exceeds total memory x vm.overcommit_ratio/100 + swap. The kernel default of 2 is too conservative; 0 is recommended, or 1 under heavy memory pressure. | Yes | 0 |
net.ipv4.tcp_rmem | Memory available to the TCP receive buffer, as three values for the no-pressure, pressure, and high-pressure ranges, in pages. | Yes | 8192 250000 16777216 |
net.ipv4.tcp_wmem | Memory available to the TCP send buffer, as three values for the no-pressure, pressure, and high-pressure ranges, in pages. | Yes | 8192 250000 16777216 |
net.core.wmem_max | Maximum socket send buffer size. | Yes | 21299200 |
net.core.rmem_max | Maximum socket receive buffer size. | Yes | 21299200 |
net.core.wmem_default | Default socket send buffer size. | Yes | 21299200 |
net.core.rmem_default | Default socket receive buffer size. | Yes | 21299200 |
net.ipv4.ip_local_port_range | Ephemeral port range available on the host. | No | 26000-65535 |
kernel.sem | Kernel semaphore settings. | Yes | 250 6400000 1000 25600 |
vm.min_free_kbytes | Keeps enough free physical memory to avoid sudden paging bursts. | Yes | 5% of total system memory |
net.core.somaxconn | Maximum listen queue length per port (global). | Yes | 65535 |
net.ipv4.tcp_syncookies | Enables SYN cookies when the SYN backlog overflows, mitigating small-scale SYN floods. 0 = off, 1 = on. | Yes | 1 |
net.core.netdev_max_backlog | Maximum number of packets queued when an interface receives packets faster than the kernel can process them. | Yes | 65535 |
net.ipv4.tcp_max_syn_backlog | Maximum number of connection requests not yet acknowledged by the client. | Yes | 65535 |
net.ipv4.tcp_fin_timeout | Default FIN timeout. | No | 60 |
kernel.shmall | Total shared memory available to the kernel. | Yes | 1152921504606846720 |
kernel.shmmax | Maximum size of a single shared memory segment. | Yes | 18446744073709551615 |
net.ipv4.tcp_sack | Enables selective acknowledgment, which improves performance by selectively acknowledging out-of-order segments so the sender retransmits only the missing ones; recommended for WANs, at the cost of extra CPU. 0 = off, 1 = on. | No | 1 |
net.ipv4.tcp_timestamps | TCP timestamps (add 12 bytes to the TCP header) enable a more accurate RTT calculation than retransmission timeouts (see RFC 1323) and improve performance. 0 = off, 1 = on. | No | 1 |
vm.extfrag_threshold | When memory runs short, Linux scores the current fragmentation; above this threshold kswapd triggers memory compaction. A value near 1000 means the system prefers swapping out old pages to satisfy allocations, while a value near 0 means it prefers memory compaction. | No | 500 |
vm.overcommit_ratio | When strict overcommit accounting is used (vm.overcommit_memory=2), total address space may not exceed swap + this percentage of RAM. | No | 90 |
MTU | Maximum transmission unit of the node NIC. The OS default is 1500; raising it to 8192 improves send/receive performance. | No | 8192 |
The key remaining items are the file-system parameters, transparent_hugepage, file handle limits, maximum process count, and NIC parameters; configure them as described in the official guide (a hedged sketch follows).
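A hedged sketch of those remaining items (exact values should be taken from the official guide; the NIC name em1 and the limit values below are assumptions for this environment):
# Disable transparent hugepages for the current boot; persist it via the kernel command line or rc.local as the official guide describes
[root@xsky-nodexxx ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@xsky-nodexxx ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
-- Raise the open-file and process limits (1000000 open files is the value commonly recommended for openGauss)
[root@xsky-nodexxx ~]# cat >> /etc/security/limits.conf <<EOF
* soft nofile 1000000
* hard nofile 1000000
* soft nproc unlimited
* hard nproc unlimited
EOF
-- Raise the NIC MTU to the recommended 8192 (replace em1 with the actual interface; add MTU=8192 to its ifcfg file to persist)
[root@xsky-nodexxx ~]# ip link set dev em1 mtu 8192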
2.8 Create Directories
/opt/software/openGauss is used to hold the downloaded openGauss packages.
# Run as root [primary node]
[root@xsky-node1 ~]# mkdir -p /opt/software/openGauss
[root@xsky-node1 ~]# chmod 755 -R /opt/software
3. Download the Installation Package
3.1 Download the Package
Log in to the openGauss download page https://www.opengauss.org/zh/download/ with a registered account and download the package matching the operating system to /opt/software/openGauss on the primary node; this walkthrough uses the openGauss 6.0.0 enterprise edition for CentOS.
You can right-click the "立即下载" (Download) button and choose "复制链接" (Copy link); if the database server has Internet access, fetch the openGauss 6.0.0 enterprise package directly with wget.
[root@xsky-node1 ~]# cd /opt/software/openGauss
[root@xsky-node1 openGauss]# wget https://opengauss.obs.cn-south-1.myhuaweicloud.com/6.0.0-RC1/x86/openGauss-6.0.0-RC1-CentOS-64bit-all.tar.gz
3.2 Verify the Package
Click the SHA256 link next to the openGauss 6.0.0-RC1 enterprise edition and copy the value into a text file; it reads 2dad94f35807c0d6945bf84f638693148a2de05b4fe51b420f04fd5d94015977. Then verify the downloaded file with sha256sum to confirm its integrity.
# Run as root [primary node]
[root@xsky-node1 openGauss]# sha256sum openGauss-6.0.0-RC1-CentOS-64bit-all.tar.gz
2dad94f35807c0d6945bf84f638693148a2de05b4fe51b420f04fd5d94015977 openGauss-6.0.0-RC1-CentOS-64bit-all.tar.gz
-- If the computed value matches the SHA256 published on the website, the file is intact
3.3 Extract the Package
# Run as root [primary node]
[root@xsky-node1 openGauss]# tar -zxvf openGauss-6.0.0-RC1-CentOS-64bit-all.tar.gz
[root@xsky-node1 openGauss]# tar -zxvf openGauss-6.0.0-RC1-CentOS-64bit-om.tar.gz
3.4 Set Up SSH Trust
A script is used here to establish root SSH trust between the nodes; it can also be done manually. Before setting up trust, set the same root password on all cluster nodes.
# Run as root [primary node]
-- 1) Create the trust script sshtrust.sh on the primary node (note the quoted 'EOF' so the variables inside the script are not expanded while it is being written)
[root@xsky-node1 ~]# tee -a /root/sshtrust.sh << 'EOF'
#!/bin/bash
HOSTLIST="
xsky-node1
xsky-node2
xsky-node3"
# Requires the sshpass package; check with rpm -qa|grep sshpass and install it if missing
rpm -q sshpass &> /dev/null || yum -y install sshpass
[ -f /root/.ssh/id_rsa ] || ssh-keygen -f /root/.ssh/id_rsa -P ''
export SSHPASS=root
for HOST in $HOSTLIST; do
{
HOSTNAME=$(getent hosts $HOST | awk '{ print $2 }')
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no root@$HOSTNAME
# Copy the primary node's /root/.ssh directory to the other nodes
scp -r /root/.ssh root@$HOSTNAME:/root
} &
done
wait
EOF
-- 2) Make /root/sshtrust.sh executable
[root@xsky-node1 ~]# chmod +x /root/sshtrust.sh
-- 3) Run /root/sshtrust.sh
[root@xsky-node1 ~]# sh sshtrust.sh
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:p8d8j8Lyj2iwpf1MgwFX62pBr9mJvxX1HSDbCPEzJRY root@xsky-node1
The key's randomart image is:
+---[RSA 2048]----+
| ooE.o |
| .+.B . |
| . o .* ... |
| + o o. .o|
| S + . o|
| . .# . . |
| *O.O o |
| o.+*o= o |
| ..+B+o . |
+----[SHA256]-----+
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Number of key(s) added: 1
Now try logging into the machine, with: "ssh -o 'StrictHostKeyChecking=no' 'root@xsky-node1'"
and check to make sure that only the key(s) you wanted were added.
Number of key(s) added: 1
Now try logging into the machine, with: "ssh -o 'StrictHostKeyChecking=no' 'root@xsky-node2'"
and check to make sure that only the key(s) you wanted were added.
Number of key(s) added: 1
Now try logging into the machine, with: "ssh -o 'StrictHostKeyChecking=no' 'root@xsky-node3'"
and check to make sure that only the key(s) you wanted were added.
id_rsa 100% 1679 2.8MB/s 00:00
id_rsa.pub 100% 397 1.1MB/s 00:00
known_hosts 100% 736 2.0MB/s 00:00
authorized_keys 100% 97 263.3KB/s 00:00
id_rsa 100% 1679 3.7MB/s 00:00
id_rsa.pub 100% 397 1.3MB/s 00:00
known_hosts 100% 736 2.0MB/s 00:00
authorized_keys 100% 97 348.7KB/s 00:00
id_rsa 100% 1679 2.8MB/s 00:00
id_rsa.pub 100% 397 783.1KB/s 00:00
known_hosts 100% 736 1.4MB/s 00:00
authorized_keys 100% 97 189.9KB/s 00:00
id_rsa 100% 1679 3.0MB/s 00:00
id_rsa.pub 100% 397 1.0MB/s 00:00
known_hosts 100% 736 1.7MB/s 00:00
authorized_keys 100% 97 281.1KB/s 00:00
-- 4) Test the trust relationship
[root@xsky-node1 ~]# ssh xsky-node2
Last login: Wed Jun 12 13:46:55 2024 from xsky-node1
[root@xsky-node2 ~]# ssh xsky-node3
Last login: Wed Jun 12 13:46:30 2024 from xsky-node2
[root@xsky-node3 ~]# ssh xsky-node1
Last login: Wed Jun 12 13:48:37 2024 from 192.168.xxx.xxx
[root@xsky-node1 ~]# ssh xsky-node2
Last login: Wed Jun 12 13:48:42 2024 from xsky-node1
4. Create the XML Configuration File
Create the cluster_config.xml file in /opt/software/openGauss on the planned primary node.
cluster_config.xml describes the servers, installation paths, IP addresses, and ports used to deploy openGauss; it tells the installer how to lay out the cluster. Different topologies (one primary and one standby, one primary with one standby and one cascaded standby, one primary and two standbys, and so on) use correspondingly different XML files. This deployment is a one-primary-two-standby cluster.
# Run as the omm user on the primary node
[root@xsky-node1 ~]# su - omm
[omm@xsky-node1 ~]$ cd /opt/software/openGauss/
[omm@xsky-node1 openGauss]$ tee -a /opt/software/openGauss/cluster_config.xml << EOF
<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
<!-- Cluster-wide openGauss information -->
<CLUSTER>
<PARAM name="clusterName" value="Cluster_GaussDB" />
<PARAM name="nodeNames" value="xsky-node1,xsky-node2,xsky-node3" />
<PARAM name="gaussdbAppPath" value="/opt/huawei/install/app" />
<PARAM name="gaussdbLogPath" value="/var/log/omm" />
<PARAM name="tmpMppdbPath" value="/opt/huawei/tmp"/>
<PARAM name="gaussdbToolPath" value="/opt/huawei/install/om" />
<PARAM name="corePath" value="/opt/huawei/corefile"/>
<PARAM name="backIp1s" value="10.110.7.39,10.110.7.40,10.110.7.41"/>
</CLUSTER>
<!-- Node deployment information for each server -->
<DEVICELIST>
<!-- Node deployment information on node1 -->
<DEVICE sn="xsky-node1">
<PARAM name="name" value="xsky-node1"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<!-- If the server has only one usable NIC, set backIp1 and sshIp1 to the same IP -->
<PARAM name="backIp1" value="10.110.7.39"/>
<PARAM name="sshIp1" value="10.110.7.39"/>
<!-- CM node deployment information -->
<PARAM name="cmsNum" value="1"/>
<PARAM name="cmServerPortBase" value="15000"/>
<PARAM name="cmServerListenIp1" value="10.110.7.39,10.110.7.40,10.110.7.41"/>
<PARAM name="cmServerHaIp1" value="10.110.7.39,10.110.7.40,10.110.7.41"/>
<PARAM name="cmServerlevel" value="1"/>
<PARAM name="cmServerRelation" value="xsky-node1,xsky-node2,xsky-node3"/>
<PARAM name="cmDir" value="/opt/huawei/data/cmserver"/>
<!--dn-->
<PARAM name="dataNum" value="1"/>
<PARAM name="dataPortBase" value="15400"/>
<PARAM name="dataNode1" value="/opt/huawei/install/data/dn,xsky-node2,/opt/huawei/install/data/dn,xsky-node3,/opt/huawei/install/data/dn"/>
<PARAM name="dataNode1_syncNum" value="0"/>
</DEVICE>
<!-- Node deployment information on node2; "name" is set to the hostname -->
<DEVICE sn="xsky-node2">
<PARAM name="name" value="xsky-node2"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<!-- If the server has only one usable NIC, set backIp1 and sshIp1 to the same IP -->
<PARAM name="backIp1" value="10.110.7.40"/>
<PARAM name="sshIp1" value="10.110.7.40"/>
<!-- cm -->
<PARAM name="cmServerPortStandby" value="15000"/>
<PARAM name="cmDir" value="/opt/huawei/data/cmserver"/>
</DEVICE>
<!-- Node deployment information on node3; "name" is set to the hostname -->
<DEVICE sn="xsky-node3">
<PARAM name="name" value="xsky-node3"/>
<PARAM name="azName" value="AZ1"/>
<PARAM name="azPriority" value="1"/>
<!-- If the server has only one usable NIC, set backIp1 and sshIp1 to the same IP -->
<PARAM name="backIp1" value="10.110.7.41"/>
<PARAM name="sshIp1" value="10.110.7.41"/>
<!-- cm -->
<PARAM name="cmServerPortStandby" value="15000"/>
<PARAM name="cmDir" value="/opt/huawei/data/cmserver"/>
</DEVICE>
</DEVICELIST>
</ROOT>
EOF
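Before running gs_preinstall it can save a round trip to confirm the XML is well formed; if xmllint (from libxml2, normally present on CentOS 7) is available:
[omm@xsky-node1 openGauss]$ xmllint --noout /opt/software/openGauss/cluster_config.xml && echo "cluster_config.xml is well formed"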
5. Run the Installation
5.1 Initialize the Installation Environment
The --non-interactive option is added so that the preinstall runs non-interactively.
--non-interactive
Runs the preinstall in non-interactive mode.
- Without this option, gs_preinstall runs in secure interactive mode and prompts for passwords.
- With this option, gs_preinstall runs non-interactively without any prompts.
# Run as root [primary node]
[root@xsky-node1 ~]# cd /opt/software/openGauss/script
-- Run the preinstall
[root@xsky-node1 script]# ./gs_preinstall -U omm -G dbgrp -X /opt/software/openGauss/cluster_config.xml --non-interactive
-- The command produces the following output
Parsing the configuration file.
Successfully parsed the configuration file.
Installing the tools on the local node.
Successfully installed the tools on the local node.
Setting host ip env
Successfully set host ip env.
Distributing package.
Begin to distribute package to tool path.
Successfully distribute package to tool path.
Begin to distribute package to package path.
Successfully distribute package to package path.
Successfully distributed package.
Preparing SSH service.
Successfully prepared SSH service.
Installing the tools in the cluster.
Successfully installed the tools in the cluster.
Checking hostname mapping.
Successfully checked hostname mapping.
Checking OS software.
Successfully check OS software.
Checking OS version.
Successfully checked OS version.
Checking cpu instructions.
Successfully checked cpu instructions.
Creating cluster's path.
Successfully created cluster's path.
Set and check OS parameter.
Setting OS parameters.
Successfully set OS parameters.
Warning: Installation environment contains some warning messages.
Please get more details by "/opt/software/openGauss/script/gs_checkos -i A -h xsky-node1,xsky-node2,xsky-node3 -X /opt/software/openGauss/cluster_config.xml --detail".
Set and check OS parameter completed.
Preparing CRON service.
Successfully prepared CRON service.
Setting user environmental variables.
Successfully set user environmental variables.
Setting the dynamic link library.
Successfully set the dynamic link library.
Setting Core file
Successfully set core path.
Setting pssh path
Successfully set pssh path.
Setting Cgroup.
Successfully set Cgroup.
Set ARM Optimization.
No need to set ARM Optimization.
Fixing server package owner.
Setting finish flag.
Successfully set finish flag.
Preinstallation succeeded.
# The command /opt/software/openGauss/script/gs_checkos -i A -h xsky-node1,xsky-node2,xsky-node3 -X /opt/software/openGauss/cluster_config.xml --detail shows the detailed check results, including any warnings; this run produced the following:
[root@xsky-node1 install]# /opt/software/openGauss/script/gs_checkos -i A -h xsky-node1,xsky-node2,xsky-node3 -X /opt/software/openGauss/cluster_config.xml --detail
Checking items:
A1. [ OS version status ] : Normal
[xsky-node1]
centos_7.9.2009_64bit
[xsky-node2]
centos_7.9.2009_64bit
[xsky-node3]
centos_7.9.2009_64bit
A2. [ Kernel version status ] : Warning
[xsky-node1]
3.10.0-1160.118.1.el7.x86_64
[xsky-node2]
3.10.0-1160.83.1.el7.x86_64
[xsky-node3]
3.10.0-1160.83.1.el7.x86_64
A3. [ Unicode status ] : Normal
The values of all unicode are same. The value is "LANG=en_US.UTF-8".
A4. [ Time zone status ] : Normal
The informations about all timezones are same. The value is "+0800".
A5. [ Swap memory status ] : Normal
The value about swap memory is correct.
A6. [ System control parameters status ] : Normal
All values about system control parameters are correct.
A7. [ File system configuration status ] : Normal
Both soft nofile and hard nofile are correct.
A8. [ Disk configuration status ] : Normal
The value about XFS mount parameters is correct.
A9. [ Pre-read block size status ] : Normal
The value about Pre-read block size is correct.
A10.[ IO scheduler status ] : Normal
The value of IO scheduler is correct.
A11.[ Network card configuration status ] : Warning
[xsky-node2]
BondMode Null
Warning reason: network 'em1' 'mtu' RealValue '1500' ExpectedValue '8192'
[xsky-node3]
BondMode Null
Warning reason: network 'em1' 'mtu' RealValue '1500' ExpectedValue '8192'
[xsky-node1]
BondMode Null
Warning reason: network 'em1' 'mtu' RealValue '1500' ExpectedValue '8192'
A12.[ Time consistency status ] : Warning
[xsky-node2]
The NTPD not detected on machine and local time is "2024-06-03 17:42:27".
[xsky-node1]
The NTPD not detected on machine and local time is "2024-06-03 17:42:27".
[xsky-node3]
The NTPD not detected on machine and local time is "2024-06-03 17:42:27".
A13.[ Firewall service status ] : Normal
The firewall service is stopped.
A14.[ THP service status ] : Normal
The THP service is stopped.
Total numbers:14. Abnormal numbers:0. Warning numbers:3.
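Two of the three warnings (network MTU and time consistency) correspond to items discussed earlier and can be cleared as sketched below; the kernel-version warning only reflects slightly different CentOS kernel builds across the nodes. The NIC name em1 comes from the checker output above.
# Run as root on every warned node
[root@xsky-nodexxx ~]# ip link set dev em1 mtu 8192
[root@xsky-nodexxx ~]# echo "MTU=8192" >> /etc/sysconfig/network-scripts/ifcfg-em1
[root@xsky-nodexxx ~]# systemctl enable ntpd && systemctl restart ntpd
[root@xsky-nodexxx ~]# ntpq -p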
5.2 Run the Installation
If the preinstall finished without serious warnings, run the software installation on the primary node.
# Run as omm [primary node]
-- Switch to the omm user
[root@xsky-node1 ~]# su - omm
# --gsinit-parameter="--locale=en_US.utf8" sets the character set at installation time; UTF-8 is used here
[omm@xsky-node1 ~]$ ./gs_install -X /opt/software/openGauss/cluster_config.xml --gsinit-parameter="--locale=en_US.utf8"
-- The command produces the following output
Parsing the configuration file.
Successfully checked gs_uninstall on every node.
Check preinstall on every node.
Successfully checked preinstall on every node.
Creating the backup directory.
Last time end with Install cluster.
Continue this step.
Successfully created the backup directory.
begin deploy..
Rolling back.
Rollback succeeded.
Installing the cluster.
begin prepare Install Cluster..
Checking the installation environment on all nodes.
begin install Cluster..
Installing applications on all nodes.
Successfully installed APP.
begin init Instance..
encrypt cipher and rand files for database.
Please enter password for database:
Please repeat for database:
begin to create CA cert files
The sslcert will be generated in /opt/huawei/install/app/share/sslcert/om
Create CA files for cm beginning.
Create CA files on directory [/opt/huawei/install/app_ed7f8e37/share/sslcert/cm]. file list: ['cacert.pem', 'server.key', 'server.crt', 'client.key', 'client.crt', 'server.key.cipher', 'server.key.rand', 'client.key.cipher', 'client.key.rand']
Non-dss_ssl_enable, no need to create CA for DSS
Cluster installation is completed.
Configuring.
Deleting instances from all nodes.
Successfully deleted instances from all nodes.
Checking node configuration on all nodes.
Initializing instances on all nodes.
Updating instance configuration on all nodes.
Check consistence of memCheck and coresCheck on database nodes.
Successfully check consistence of memCheck and coresCheck on all nodes.
Configuring pg_hba on all nodes.
Configuration is completed.
Starting cluster.
======================================================================
Successfully started primary instance. Wait for standby instance.
======================================================================
.
Successfully started cluster.
======================================================================
cluster_state : Normal
redistributing : No
node_count : 3
Datanode State
primary : 1
standby : 2
secondary : 0
cascade_standby : 0
building : 0
abnormal : 0
down : 0
Successfully installed application.
end deploy..
5.3 Verify the Installation
5.3.1 Check the Cluster Status
# Run as omm [any node]
[root@xsky-nodexxx ~]# su - omm
# Query the cluster status with cm_ctl
[omm@xsky-node1 ~]$ cm_ctl query -Cv
[ CMServer State ]
node instance state
------------------------------
1 xsky-node1 1 Primary
2 xsky-node2 2 Standby
3 xsky-node3 3 Standby
[ Cluster State ]
cluster_state : Normal
redistributing : No
balanced : Yes
current_az : AZ_ALL
[ Datanode State ]
node instance state | node instance state | node instance state
------------------------------------------------------------------------------------------------------------------------------
1 xsky-node1 6001 P Primary Normal | 2 xsky-node2 6002 S Standby Normal | 3 xsky-node3 6003 S Standby Normal
[omm@xsky-node1 ~]$
[omm@xsky-node1 ~]$ gs_om -t status
-----------------------------------------------------------------------
cluster_state : Normal
redistributing : No
balanced : Yes
-----------------------------------------------------------------------
[omm@xsky-node1 ~]$ gs_om -t status --detail
[ CMServer State ]
node node_ip instance state
------------------------------------------------------------------------------
1 xsky-node1 10.110.7.39 1 /opt/huawei/data/cmserver/cm_server Primary
2 xsky-node2 10.110.7.40 2 /opt/huawei/data/cmserver/cm_server Standby
3 xsky-node3 10.110.7.41 3 /opt/huawei/data/cmserver/cm_server Standby
[ Cluster State ]
cluster_state : Normal
redistributing : No
balanced : Yes
current_az : AZ_ALL
[ Datanode State ]
node node_ip instance state
---------------------------------------------------------------------------------------
1 xsky-node1 10.110.7.39 6001 15400 /opt/huawei/install/data/dn P Primary Normal
2 xsky-node2 10.110.7.40 6002 15400 /opt/huawei/install/data/dn S Standby Normal
3 xsky-node3 10.110.7.41 6003 15400 /opt/huawei/install/data/dn S Standby Normal
-- A state of Normal means the database is ready for use
5.3.2 Connect to the Database
# Run as omm
[root@xsky-node1 ~]# su - omm
-- 15400 is the database primary node port (dataPortBase); the first attempt below uses the documentation default 26000 and therefore fails
[omm@xsky-node1 ~]$ gsql -d gaussdb -p 26000
failed to connect /opt/huawei/tmp:26000.
[omm@xsky-node1 ~]$ gsql -d postgres -p 15400
gsql ((openGauss 6.0.0-RC1 build ed7f8e37) compiled at 2024-03-31 11:59:31 commit 0 last mr )
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.
openGauss=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+------------+----------+------------+------------+-------------------
xxx | xxxx | UTF8 | en_US.utf8 | en_US.utf8 | =Tc/xxxxx +
| | | | | xxxxx=CTc/xxxxxx+
| | | | | xxxxx=APm/xxxxxx
postgres | omm | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | omm | UTF8 | en_US.utf8 | en_US.utf8 | =c/omm +
| | | | | omm=CTc/omm
template1 | omm | UTF8 | en_US.utf8 | en_US.utf8 | =c/omm +
| | | | | omm=CTc/omm
(5 rows)
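Beyond listing the databases, a minimal read/write smoke test can be run from a new omm shell with gsql -c (the database and table names below are arbitrary examples):
[omm@xsky-node1 ~]$ gsql -d postgres -p 15400 -c "CREATE DATABASE testdb;"
[omm@xsky-node1 ~]$ gsql -d testdb -p 15400 -c "CREATE TABLE t1(id int, name varchar(32));"
[omm@xsky-node1 ~]$ gsql -d testdb -p 15400 -c "INSERT INTO t1 VALUES (1,'openGauss');"
[omm@xsky-node1 ~]$ gsql -d testdb -p 15400 -c "SELECT * FROM t1;"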
6. Install zcbus
6.1 About zcbus
zcbus, developed by 首科软创(北京)软件有限公司 (http://www.zcbus.net/index.html), is a data exchange platform that synchronizes data between heterogeneous databases based on real-time incremental change capture. It provides centralized synchronization management and solutions for data consolidation, distribution, exchange, cleansing, and processing.
zcbus currently supports many database products, including Oracle, MySQL, other open-source databases, and several mainstream domestic Chinese databases.
6.2 Download zcbus
The latest zcbus release is 2.0.1 and can be downloaded as shown below; both x86 and ARM packages are provided, so choose the one matching your operating system.
zcbus also ships in single-node and cluster editions; the single-node edition is installed here.
Minimum recommended configuration for a single-node deployment:
Host / exchange server | 1 server, ARM or x86, 4 CPU cores, 16 GB RAM, at least 50 GB storage |
---|---|
Operating system | RedHat/CentOS 7 (or Kylin) |
Network | The exchange server needs network connectivity to the databases or big-data platforms involved; if direct connectivity is unavailable, data can be exchanged through a jump host |
SELinux | disabled |
Data cache retention | 30 days by default, adjustable |
Data exchange scale | a single node supports up to 1000 tables and 1 to 5 replication links |
The latest zcbus package can be downloaded as follows.
[root@dsmart ~]# wget http://zbomc.com:8888/ZCBUS/2.0/zcbus.simple_server.docker.v2.0.1.x86_64.v1.zcbus.tar.gz
[root@dsmart ~]# wget http://zbomc.com:8888/ZCBUS/2.0/zcbus.simple_server.docker.v2.0.1.x86_64.v2.zcbus.tar.gz
6.3 Extract the zcbus Package
-- The downloaded zcbus packages were placed in /home/soft
[root@dsmart soft]# gunzip zcbus.simple_server.docker.v2.0.1.x86_64.v1.zcbus.tar.gz
[root@dsmart soft]# gunzip zcbus.simple_server.docker.v2.0.1.x86_64.v2.zcbus.tar.gz
[root@dsmart soft]# tar -xvf zcbus.simple_server.docker.v2.0.1.x86_64.v1.zcbus.tar
[root@dsmart soft]# tar -xvf zcbus.simple_server.docker.v2.0.1.x86_64.v2.zcbus.tar
6.4 Edit the Configuration File
[root@dsmart ~]# cd /home/soft/simple_server/common
-- Edit the zcbus.properties configuration file; the file used for this installation is shown below:
####################################################################
## Copyright(c) ZCBUS Corporation 2022. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file should be secured to have read ##
## permission only by the zcbus user or an administrator who ##
## own this installation to protect any sensitive input values. ##
## ##
####################################################################
#-------------------------------------------------------------------------------
# Specify the installation option.
# Specify ZCBUS INSTALL home ,for storage docker's cache and images
#-------------------------------------------------------------------------------
ZC_DATA_HOME=/data/zcbus
#-------------------------------------------------------------------------------
# Specify the installation option.
# Specify ZCBUS'S docker basic home ,for storage docker's cache and images
#-------------------------------------------------------------------------------
ZC_DOCKER_HOME=/data/docker
#-------------------------------------------------------------------------------
# Specify the installation option.
# Specify docker's username
#-------------------------------------------------------------------------------
ZC_DOCKER_USER=zcbus
#-------------------------------------------------------------------------------
# Specify a location to install ZCBUS'S TYPE ,Only support cloud_client
#-------------------------------------------------------------------------------
ZC_TYPE=server
#-------------------------------------------------------------------------------
# Specify a docker's listener port
#-------------------------------------------------------------------------------
ZC_DOCKER_SERVER_PORT=8899
#-------------------------------------------------------------------------------
# Zcbus client remote location API service URL connection service
#-------------------------------------------------------------------------------
ZC_CUSTOMER_URL=http://v2.zbomc.com
#-------------------------------------------------------------------------------
# Remote receiving zcbus data stream port service
#-------------------------------------------------------------------------------
ZC_CACHE_SERVER=zcbuskafka:9092
#-------------------------------------------------------------------------------
# Remote receiving zcbus data resource port service
#-------------------------------------------------------------------------------
ZC_NET_DB_SERVER_DBNAME=zcbus
ZC_NET_DB_SERVER_HOST=zcbusnet
ZC_NET_DB_SERVER_PORT=33060
ZC_NET_DB_SERVER_USER=QFlYT0k6
ZC_NET_DB_SERVER_PWD=e0twWGp8aVtWfGB8dn9YdTo
ZC_NET_DB_SERVER_ID=2
ZC_INSTALL_MODE=0
#-------------------------------------------------------------------------------
# Remote receiving zcbus data resource port service,include master and slave's ip
#-------------------------------------------------------------------------------
ZC_DB_IPPORT=zcbusdb:3306
#-------------------------------------------------------------------------------
# Remote zcbus'services nodes ip
#-------------------------------------------------------------------------------
ZC_NODE_IPS=zcbusdb
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
# Remote install zcbus type integrate/external
#-------------------------------------------------------------------------------
ZC_METHOD=integrate
#-------------------------------------------------------------------------------
# Remote install zcbus include database docker
# 0 is equal ZC_METHOD's integrate mode,all server include docker,and other is external mode
# 1 is include zcbusvue/zcbusrestapi
# 2 is include zcbusvue/zcbusrestapi,zcbusdb
# 3 is include zcbusvue/zcbusrestapi,zcbuskafka/zcbuszookeeper
#-------------------------------------------------------------------------------
ZC_SERV_LOCALTION=0
#-------------------------------------------------------------------------------
# Parameter set to zcbus_cache/kafka_cache
# zcbus_cache is zcbus's mq for zcbus single mode
# kafka_cache is kafka or zcbus cluster mode
#-------------------------------------------------------------------------------
ZC_CACHE_TYPE=zcbus_kafka
#-------------------------------------------------------------------------------
# for ZCBUS COM CODE default 1
#-------------------------------------------------------------------------------
ZC_COM_CODE=1
#-------------------------------------------------------------------------------
# Parameter set to http/https ,default http
#-------------------------------------------------------------------------------
ZC_HTTP_MODE=http
#-------------------------------------------------------------------------------
# Parameter set to 1, Kafka uses Sasl authentication, set to 0, no authentication method
#-------------------------------------------------------------------------------
ZC_CACHE_SASL=0
#-------------------------------------------------------------------------------
# if add zcbus's default container :1 is add container,0 is not add container
#-------------------------------------------------------------------------------
ZC_IF_ADD_DEFAULT_ZCBUS=1
#------------------------------------------------------------------------------
#- for check data file limit
#------------------------------------------------------------------------------
ZC_DATA_DIR_LIMIT=10G
#------------------------------------------------------------------------------
#- for check docker limit
#------------------------------------------------------------------------------
ZC_DOCKER_DIR_LIMIT=5G
#-------------------------------------------------------------------------------
# for ZCBUS use memory min limit set, Suggest setting the minimum value 4GB
#-------------------------------------------------------------------------------
ZC_MEMORY_LIMIT=0G
Notes on selected parameters:
1. Change the data storage paths by editing:
- ZC_DATA_HOME=/data/zcbus
- ZC_DOCKER_HOME=/data/docker
2. Point to an external Kafka by changing the IP address and port in:
- ZC_CACHE_SERVER
3. Choose between a full and a partial installation with:
- ZC_SERV_LOCALTION
4. Switch from the built-in database to an external database by changing:
- ZC_NET_DB_SERVER_DBNAME=zcbus
- ZC_NET_DB_SERVER_HOST=zcbusnet
- ZC_NET_DB_SERVER_PORT=33060
- ZC_NET_DB_SERVER_USER=QFlYT0k6
- ZC_NET_DB_SERVER_PWD=e0twWGp8aVtWfGB8dn9YdTo
- ZC_NET_DB_SERVER_ID=1
- ZC_INSTALL_MODE=1
5. Change the database port with:
- ZC_DB_IPPORT=zcbusdb:3306
6. Decide whether Kafka uses SASL authentication with:
- ZC_CACHE_SASL=0
6.5 Install zcbus
Before running the installer, make sure /etc/hosts contains the server's IP address and hostname.
# Install as root
[root@dsmart ~]# cd /home/soft/simple_server/
-- Run the installer
(base) [root@dsmart simple_server]# ./zcmgr.sh install
-- The installer output is shown below
10.110.8.42==? Check ZC_DOCKER_HOME=/data/docker ZC_DATA_HOME=/data/zcbus ok
10.110.8.42==? Check Memory 16 > 0 ok ...
10.110.8.42==? Check /etc/hosts Sucessfull...
10.110.8.42==[Step 1]: checking OS version/firewalld/seLinux and config ...
############################## check host ::: /etc/hosts #############################################
10.110.8.42==? check host ::: 1 ::: OK
10.110.8.42==? 10.110.8.42 dsmart
############################## check selinux ::: /etc/selinux/config #################################
10.110.8.42==? check selinux ::: disabled ::: OK
setenforce: SELinux is disabled
############################## check firewalld #######################################################
10.110.8.42==? check firewalld ::: not running ::: OK
############################## check sysctl ::: /etc/sysctl.conf #####################################
10.110.8.42==? vm.max_map_count ::: 2000000 (>=2000000) ::: OK
10.110.8.42==? kernel.shmall ::: 4294967296 (>=4294967296) ::: OK
10.110.8.42==? fs.aio-max-nr ::: 1048576 (>=1048576) ::: OK
10.110.8.42==? fs.file-max ::: 6815744 (>=6815744) ::: OK
10.110.8.42==? kernel.shmmax ::: 54975581388 (>=2070833152) ::: OK
10.110.8.42==? kernel.shmmni ::: 4096 (>=4096) ::: OK
10.110.8.42==? kernel.sem ::: 250 (>=250) 32000 (>=32000) 100 (>=100) 128 (>=128) ::: OK
10.110.8.42==? net.ipv4.ip_local_port_range ::: 1024 (>=1024) 65500 (>=65500) ::: OK
10.110.8.42==? net.core.rmem_default ::: 262144 (>=262144) ::: OK
10.110.8.42==? net.core.rmem_max ::: 4194304 (>=4194304) ::: OK
10.110.8.42==? net.core.wmem_default ::: 262144 (>=262144) ::: OK
10.110.8.42==? net.core.wmem_max ::: 1048576 (>=1048576) ::: OK
10.110.8.42==? kernel.threads-max ::: 999999 (>=999999) ::: OK
10.110.8.42==? kernel.pid_max ::: 999999 (>=999999) ::: OK
10.110.8.42==? vm.max_map_count ::: 2000000 (>=1999999) ::: OK
10.110.8.42==? net.ipv4.ip_forward ::: 1 (>=1) ::: OK
10.110.8.42==? fs.inotify.max_user_watches ::: 1048576 (>=1048576) ::: OK
10.110.8.42==? fs.inotify.max_user_instances ::: 1048576 (>=1048576) ::: OK
############################## check limits ::: /etc/security/limits.conf ############################
10.110.8.42==? soft-nofile ::: 1048576 (>=1048500) ::: OK
10.110.8.42==? hard-nofile ::: 1048576 (>=1048500) ::: OK
10.110.8.42==? soft-nproc ::: 1048576 (>=65536) ::: OK
10.110.8.42==? hard-nproc ::: 1048576 (>=65536) ::: OK
############################## CHECK RESULT ##########################################################
10.110.8.42==? OK : 25 ERROR : 0 WARNING : 0
[INFO] whether to start install zcbus ... Please input 'y/Y' to continue/press Ctrl+C to exit :y
-- Enter y at the prompt
[INFO] whether to install chinese[0]/english[1],defaut is 0:
-- Press Enter to accept the default
[INFO] whether to install integrate/external ,defaut is integrate:
####################install module####################
.......................................[ server ]
.......................................[ cloud_server ]
Please input Select Mode,default mode is [ server ]:
10.110.8.42==? ***************************************************************************************************
10.110.8.42==? ****************************** Ready Install for zcbus grid cluster ....***************************
10.110.8.42==? ****************************** check variabels for ....***************************
10.110.8.42==? ***************************************************************************************************
10.110.8.42==? check status ZC_DATA_HOME sucessfull..
10.110.8.42==? check status ZC_DOCKER_HOME sucessfull..
10.110.8.42==? check status ZC_DOCKER_SERVER_PORT sucessfull..
10.110.8.42==? check status ZC_TYPE sucessfull..
10.110.8.42==? check status ZC_CUSTOMER_URL sucessfull..
10.110.8.42==? check status ZC_CACHE_SERVER sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_DBNAME sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_HOST sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_PORT sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_USER sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_PWD sucessfull..
10.110.8.42==? check status ZC_NET_DB_SERVER_ID sucessfull..
10.110.8.42==? check status ZC_DB_IPPORT sucessfull..
10.110.8.42==? check status ZC_METHOD sucessfull..
10.110.8.42==? check status ZC_SERV_LOCALTION sucessfull..
10.110.8.42==? check status ZC_IF_ADD_DEFAULT_ZCBUS sucessfull..
10.110.8.42==? check status ZC_INSTALL_MODE sucessfull..
10.110.8.42==? check status ZC_CACHE_SASL sucessfull..
10.110.8.42==? check status ZC_CACHE_TYPE sucessfull..
will change 's restapi install mode ...
10.110.8.42==[Step 2]: ready for data dir path ...
[INFO] Please input docker path /data/zcbus:
10.110.8.42==? Load Path to /data/zcbus ...
check sucessfull for zclimit 10G < 304G[/data/zcbus]...
10.110.8.42==[Step 3]: add zcbus user ...
docker:x:995:zcbus
docker group exists ...
uid=54325(zcbus) gid=54334(zcbus) groups=54334(zcbus),995(docker)
10.110.8.42==[Step 4]: checking if docker is installed ...
check sucessfull for zclimit 5G < 304G[/data/docker]...
############################## docker version: 24.0.7 ################################################
10.110.8.42==[Step 5]: checking docker-compose is installed ...
############################## docker-compose version: 2.24.5 ########################################
10.110.8.42==[Step 6]: checking mysql directory ...
############################## Create directory mysql /data/zcbus/zcbusdata .... #####################
10.110.8.42==? Mysql data directory /data/zcbus/zcbusdata/mysql/data create Successful!
10.110.8.42==[Step 7]: checking kafka directory ...
############################## Create directory kafka /data/zcbus/zcbusdata/kafka .... ###############
10.110.8.42==? Kafka data directory /data/zcbus/zcbusdata/zcbuskafka/logs create Successful!
10.110.8.42==[Step 8]: checking cache directory ...
############################## Create directory cache /data/zcbus/zcbusdata .... #####################
10.110.8.42==? Mysql data directory /data/zcbus/zcbusdata/cache create Successful!
10.110.8.42==[Step 9]: checking zookeeper directory ...
############################## Create directory kafka /data/zcbus/zcbusdata/kafka .... ###############
10.110.8.42==? Zookeeoer data directory /data/zcbus/zcbusdata/zcbuszookeeper/data create Successful!
10.110.8.42==[Step 10]: loading zcbus images ...
[INFO] Please input if load images y/n:y
docker load -i /home/soft/simple_server/soft/images/prepare.tar.gz
Loaded image: reg.zbomc.com/zcbus/prepare:latest
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_vue.tar.gz
Loaded image: reg.zbomc.com/zcbus_vue:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_restapi.tar.gz
Loaded image: reg.zbomc.com/zcbus_restapi:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_server.tar.gz
Loaded image: reg.zbomc.com/zcbus_server:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_net.tar.gz
Loaded image: reg.zbomc.com/zcbus_net:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_mysql.tar.gz
Loaded image: reg.zbomc.com/zcbus_mysql:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_kafka.tar.gz
Loaded image: reg.zbomc.com/zcbus_kafka:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
docker load -i /home/soft/simple_server/soft/images/zcbus_zookeeper.tar.gz
Loaded image: reg.zbomc.com/zcbus_zookeeper:v2.0.1
10.110.8.42==? Load images to docker's sucessfull...
10.110.8.42==?
10.110.8.42==? network zcbus is already exists ...
10.110.8.42==? /home/soft/simple_server/common/zcbus.properties ZC_SERV_LOCALTION :[0]
Check hostname [dsmart]'s ip is :[10.110.8.42]
===> Ready basic info zctype server : integrate ,ymlbasic : docker-compose-server.yml.jinja
====zctype:[ server ],[cache_mode: zcbus_kafka ]
Append zcbuszookeeper sucessfull...
Append zcbuskafka sucessfull...
=====fulldict[ dict_keys(['zcbusdb', 'zcbusnet', 'zcbusrestapi', 'zcbusvue', 'zcbuszookeeper', 'zcbuskafka']) ]
Flush data to /compose_location/docker-compose.yml
===> Finish init basic ...
10.110.8.42==? prepare server Sucessfull...
10.110.8.42==?
10.110.8.42==?
10.110.8.42==[Step 11]: checking if ports is used ...
10.110.8.42==? Port 33060 is available!!
10.110.8.42==? Port 8890 is available!!
10.110.8.42==[Step 12]: ready basic soft for container ...
10.110.8.42==? Ready compare soft ...
10.110.8.42==? Read zcbusserver jdk sucessfull....
10.110.8.42==? Read zcbusserver jar sucessfull....
10.110.8.42==? Read zcbusserver bin sucessfull....
10.110.8.42==? Read zcbusserver lib sucessfull....
10.110.8.42==? /data/zcbus/module/lib to /data/zcbus/zcbusdata/zcbusserver/ is build sucessfull...
10.110.8.42==? /data/zcbus/module/bin to /data/zcbus/zcbusdata/zcbusserver/ is build sucessfull...
10.110.8.42==? /data/zcbus/module/jdk to /data/zcbus/zcbusdata/zcbusserver/ is build sucessfull...
10.110.8.42==? /data/zcbus/module/jar to /data/zcbus/zcbusdata/zcbusserver/ is build sucessfull...
10.110.8.42==[Step 13]: starting zcbus ...
[+] Running 6/6
? Container zcbuszookeeper Started 10.7s
? Container zcbusdb Started 10.7s
? Container zcbusnet Started 10.7s
? Container zcbuskafka Started 3.0s
? Container zcbusrestapi Started 3.0s
? Container zcbusvue Started 8.6s
10.110.8.42==? =============will load data mode for server================
10.110.8.42==? Will Install for load zcbus's data...
10.110.8.42==? Check zcbusdb Connect start ...
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
10.110.8.42==? Check zcbusdb Connect Failed ,wait 10 seconds and retry 1 times ...
10.110.8.42==? Check zcbusdb Connect start ...
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
10.110.8.42==? Check zcbusdb Connect Failed ,wait 10 seconds and retry 2 times ...
10.110.8.42==? Check zcbusdb Connect start ...
mysql: [Warning] Using a password on the command line interface can be insecure.
10.110.8.42==? Check zcbusdb Connect Sucessfull...
10.110.8.42==? Start Load data to zcbusdb ...
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1146 (42S02) at line 1: Table 'zcbus.bus_cluster_version' doesn't exist
INFO => initDB first time ...
INFO => change password for zcbus...
INFO => finished password for zcbus...
INFO => sql source /zcbus/createdb.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
INFO => DEAL sql file account_api...
INFO => DEAL sql file account_menu...
INFO => DEAL sql file account_rel_menu_api...
INFO => DEAL sql file account_rel_role_menu...
INFO => DEAL sql file bus_dict_item...
INFO => DEAL sql file bus_dict_type...
INFO => DEAL sql file bus_dict_style...
INFO => DEAL sql file bus_cluster_version...
INFO => DEAL sql file bus_parameter_module...
INFO => DEAL sql file custom_charset_to_big5...
INFO => DEAL sql file simple_server...
INFO => DEAL sql file bus_parameter_module_image...
INFO => DEAL sql file bus_sys_parameter...
INFO => DEAL sql file bus_sql_parameter...
INFO => DEAL sql file bus_service_type_model...
INFO => DEAL sql file bus_search_group...
INFO => DEAL sql file bus_search_group_sql...
INFO => DEAL sql file bus_search_group_sql_map...
INFO => DEAL sql file bus_tool_sql_record...
INFO => DEAL sql file bus_api_key_map...
INFO => DEAL sql file bus_dict_table_column...
INFO => DEAL sql file bus_dict_table_type...
INFO => DEAL sql file bus_aux_publish_down_tab_list...
INFO => DEAL sql file bus_aux_publish_up_tab_list...
INFO => DEAL sql file bus_msg_dispatch...
INFO => DEAL sql file bus_msg_model...
INFO => DEAL sql file sys_article...
INFO => DEAL sql file bus_dict_table_list...
INFO => DEAL sql file zbomc_sys_password_blacklist...
INFO => DEAL sql file update...
INFO => DEAL sql file p1_server...
INFO => DEAL sql file p2...
auto start is set to 0,not start ???
INFO => Will exec sql for /zcbus/zcbus.v2.0.1.sql...
mysql: [Warning] Using a password on the command line interface can be insecure.
INFO => sql :select version from bus_cluster_version
mysql: [Warning] Using a password on the command line interface can be insecure.
===================not found upgrade sql =====================
INFO => Not Found upgrade sql file
===============
10.110.8.42==? Finished Load data to zcbusdb sucessfull...
====>>>ZCBUS [ Sun Jun 9 10:39:58 CST 2024 ]
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
zcbusdb reg.zbomc.com/zcbus_mysql:v2.0.1 "docker-entrypoint.s…" zcbusdb 47 seconds ago Up 36 seconds 3306/tcp, 33060/tcp
zcbuskafka reg.zbomc.com/zcbus_kafka:v2.0.1 "docker-entrypoint.sh" zcbuskafka 37 seconds ago Up 34 seconds
zcbusnet reg.zbomc.com/zcbus_net:v2.0.1 "sh /run.sh" zcbusnet 47 seconds ago Up 36 seconds 0.0.0.0:33060->33060/tcp, :::33060->33060/tcp
zcbusrestapi reg.zbomc.com/zcbus_restapi:v2.0.1 "sh /run.sh" zcbusrestapi 37 seconds ago Up 34 seconds 7080/tcp
zcbusvue reg.zbomc.com/zcbus_vue:v2.0.1 "/docker-entrypoint.…" zcbusvue 37 seconds ago Up 28 seconds 0.0.0.0:8890->80/tcp, :::8890->80/tcp
zcbuszookeeper reg.zbomc.com/zcbus_zookeeper:v2.0.1 "docker-entrypoint.sh" zcbuszookeeper 47 seconds ago Up 36 seconds
10.110.8.42==?
10.110.8.42==[Step 13]: sync zcbus_docker to /data/zcbus/...
copy /home/soft/simple_server/bin /data/zcbus/...
10.110.8.42==[Step 13]: ready basic soft for basic zcbus_docker server ...
Check hostname [dsmart]'s ip is :[10.110.8.42]
10.110.8.42==? ZC_IPADDRESS :10.110.8.42
10.110.8.42==? ==============>/data/zcbus========10.110.8.42=======
Note: add zcbus_docker service
? add zcbus_docker service successfully ...
10.110.8.42==? Read Master database info to /home/soft/simple_server/config/zcbus_master.ini
[INF] load libmysqlclient.so
[LV0] 2024-06-09 10:40:03: connect to mysql zcbus/***@10.110.8.42:33060 ...
[INF] set client character set utf8mb4...
[INF] new client character set: utf8mb4
[INF] MYSQL VERSION: 50743
[INF] MYSQL INFO: 5.7.43-log
SET SESSION sql_mode='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
[INF] connection test ok.
zcbus_docker is Stopping ...
zcbus_docker is Finished ...
10.110.8.42==? sync basic info to /data/zcbus/ begin ...
copy /home/soft/simple_server/bin /data/zcbus/...
copy /home/soft/simple_server/yaml /data/zcbus/...
copy /home/soft/simple_server/common/add_zcbus_docker_service.sh /data/zcbus/common/...
copy /home/soft/simple_server/common/docker.service /data/zcbus/common/...
copy /home/soft/simple_server/common/zcbus_client.rsp /data/zcbus/common/...
copy /home/soft/simple_server/common/zcbus_docker.service /data/zcbus/common/...
copy /home/soft/simple_server/common/zcbus.properties /data/zcbus/common/...
copy /home/soft/simple_server/common/zcbus.yml /data/zcbus/common/...
copy /home/soft/simple_server/common/.check /data/zcbus/common/...
copy /home/soft/simple_server/common/.zcbus.common /data/zcbus/common/...
copy /home/soft/simple_server/zcmgr.sh /data/zcbus/...
copy /home/soft/simple_server/soft/docker-20.10.10.tgz /data/zcbus/soft/...
copy /home/soft/simple_server/soft/docker-compose-Linux-x86_64 /data/zcbus/soft/...
copy /home/soft/simple_server/soft/zcbus /data/zcbus/soft/...
10.110.8.42==? sync basic info to /data/zcbus/ end ...
10.110.8.42==? Test zcbus_docker Connect to zcbus_master.ini Sucessfull...
10.110.8.42==? =========================== ready restart zcbus_docker ====================
zcbus_docker is Stopping ...
zcbus_docker is Finished ...
[+] Running 6/0
? Container zcbusnet Running 0.0s
? Container zcbuszookeeper Running 0.0s
? Container zcbusrestapi Running 0.0s
? Container zcbusdb Running 0.0s
? Container zcbuskafka Running 0.0s
? Container zcbusvue Running 0.0s
10.110.8.42==? sync config info to /data/zcbus/ begin ...
copy /home/soft/simple_server/config /data/zcbus/...
copy /home/soft/simple_server/yaml /data/zcbus/...
10.110.8.42==? sync config info to /data/zcbus/ end ...
[+] Restarting 3/3
? Container zcbusnet Started 10.8s
? Container zcbusrestapi Started 10.7s
? Container zcbusvue Started 10.9s
10.110.8.42==? =============================================================
10.110.8.42==? Manager console Website Address: http://10.110.8.42:8890
10.110.8.42==? Default login account : admin
10.110.8.42==? Default login password: 123456
10.110.8.42==?
10.110.8.42==? =============================================================
10.110.8.42==? ----Zcbus has been installed and started successfully.----
no such service: zcbus
10.110.8.42==? Zcbus Container zcbus Not Exists and install...
10.110.8.42==? /home/soft/simple_server/common/zcbus.properties ZC_SERV_LOCALTION :[0]
Check hostname [dsmart]'s ip is :[10.110.8.42]
Append zcbus sucessfull...
Flush data to /compose_location/docker-compose.yml
[+] Running 7/7
? Container zcbus Started 0.4s
? Container zcbuszookeeper Running 0.0s
? Container zcbusdb Running 0.0s
? Container zcbusnet Running 0.0s
? Container zcbuskafka Running 0.0s
? Container zcbusrestapi Running 0.0s
? Container zcbusvue Running 0.0s
10.110.8.42==? =============will load data mode for zcbus================
====>>>ZCBUS [ Sun Jun 9 10:40:28 CST 2024 ]
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
zcbus reg.zbomc.com/zcbus_server:v2.0.1 "/bin/bash -c ${ZCBU…" zcbus Less than a second ago Up Less than a second
zcbusdb reg.zbomc.com/zcbus_mysql:v2.0.1 "docker-entrypoint.s…" zcbusdb About a minute ago Up About a minute 3306/tcp, 33060/tcp
zcbuskafka reg.zbomc.com/zcbus_kafka:v2.0.1 "docker-entrypoint.sh" zcbuskafka About a minute ago Up About a minute
zcbusnet reg.zbomc.com/zcbus_net:v2.0.1 "sh /run.sh" zcbusnet About a minute ago Up 11 seconds 0.0.0.0:33060->33060/tcp, :::33060->33060/tcp
zcbusrestapi reg.zbomc.com/zcbus_restapi:v2.0.1 "sh /run.sh" zcbusrestapi About a minute ago Up 11 seconds 7080/tcp
zcbusvue reg.zbomc.com/zcbus_vue:v2.0.1 "/docker-entrypoint.…" zcbusvue About a minute ago Up 11 seconds 0.0.0.0:8890->80/tcp, :::8890->80/tcp
zcbuszookeeper reg.zbomc.com/zcbus_zookeeper:v2.0.1 "docker-entrypoint.sh" zcbuszookeeper About a minute ago Up About a minute
root 36000 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -log_level 2
root 36058 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -manager -log_level 2
root 36124 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -listener -log_level 2
10.110.8.42==? sync config info to /data/zcbus/ begin ...
copy /home/soft/simple_server/config /data/zcbus/...
copy /home/soft/simple_server/yaml /data/zcbus/...
10.110.8.42==? sync config info to /data/zcbus/ end ...
[+] Restarting 3/3
? Container zcbusvue Started 11.1s
? Container zcbusnet Started 11.1s
? Container zcbusrestapi Started 10.9s
10.110.8.42==? =============================================================
10.110.8.42==? Manager console Website Address: http://10.110.8.42:8890
10.110.8.42==? Default login account : admin
10.110.8.42==? Default login password: 123456
10.110.8.42==?
10.110.8.42==? =============================================================
10.110.8.42==? ----Zcbus has been installed and started successfully.----
When the installation finishes, the zcbus login information, including the account and password, is printed at the end:
Manager console Website Address: http://10.110.8.42:8890
Default login account : admin
Default login password: 123456
6.6 Check the Installation
-- The installation can be checked as follows
(base) [root@dsmart simple_server]# ./zcmgr.sh check
10.110.8.42==? Check ZC_DOCKER_HOME=/data/docker ZC_DATA_HOME=/data/zcbus ok
====>>>ZCBUS [ Sun Jun 9 10:50:24 CST 2024 ]
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
zcbus reg.zbomc.com/zcbus_server:v2.0.1 "/bin/bash -c ${ZCBU…" zcbus 9 minutes ago Up 9 minutes
zcbusdb reg.zbomc.com/zcbus_mysql:v2.0.1 "docker-entrypoint.s…" zcbusdb 11 minutes ago Up 11 minutes 3306/tcp, 33060/tcp
zcbuskafka reg.zbomc.com/zcbus_kafka:v2.0.1 "docker-entrypoint.sh" zcbuskafka 11 minutes ago Up 11 minutes
zcbusnet reg.zbomc.com/zcbus_net:v2.0.1 "sh /run.sh" zcbusnet 11 minutes ago Up 9 minutes 0.0.0.0:33060->33060/tcp, :::33060->33060/tcp
zcbusrestapi reg.zbomc.com/zcbus_restapi:v2.0.1 "sh /run.sh" zcbusrestapi 11 minutes ago Up 9 minutes 7080/tcp
zcbusvue reg.zbomc.com/zcbus_vue:v2.0.1 "/docker-entrypoint.…" zcbusvue 11 minutes ago Up 9 minutes 0.0.0.0:8890->80/tcp, :::8890->80/tcp
zcbuszookeeper reg.zbomc.com/zcbus_zookeeper:v2.0.1 "docker-entrypoint.sh" zcbuszookeeper 11 minutes ago Up 11 minutes
root 36000 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -log_level 2
root 36058 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -manager -log_level 2
root 36124 1 0 10:40 pts/2 00:00:00 /data/zcbus/bin/zcbus_docker -listener -log_level 2
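A couple of extra spot checks beyond zcmgr.sh check (assuming curl is installed and 8890 is the console port reported above):
(base) [root@dsmart simple_server]# docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
(base) [root@dsmart simple_server]# curl -sI http://10.110.8.42:8890 | head -n 1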
7. Data Synchronization
7.1 Log in to zcbus
Log in to the zcbus management console with the address, account, and password from the installation above.
On first login you are prompted to change the initial password before logging in again.
(Screenshot of the console home page omitted.)
7.2 Data Synchronization Steps
7.2.1 Add a Container
Click "数据同步" (Data Sync) and then the "新增" (Add) button.
In the dialog that appears, enter a name for the new container.
Click "提交" (Submit).
Then click the "配置" (Configure) button.
7.2.2 Select the Source Database
Click "选择源端数据库" (Select source database) and choose Oracle.
7.2.3 Enter the Source Connection Information
On the next screen, fill in the Oracle connection settings as prompted.
Click "测试连接" (Test connection); a success message is shown if the connection works.
Note: the Oracle account used here must be granted the appropriate privileges (a hedged example follows).
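Exactly which privileges are needed depends on the capture mode zcbus uses; purely as a hedged illustration (zcbus_user is a placeholder account, and the authoritative list is in the zcbus documentation), a log-based capture account in Oracle is often granted something like:
-- Run by a DBA on the source Oracle database; adjust to the zcbus documentation
sqlplus / as sysdba <<'EOF'
GRANT CREATE SESSION TO zcbus_user;
GRANT SELECT ANY TABLE TO zcbus_user;
GRANT SELECT ANY DICTIONARY TO zcbus_user;
GRANT EXECUTE ON DBMS_LOGMNR TO zcbus_user;
-- supplemental logging is commonly required for log-based change capture
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
EOF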
7.2.4 Select Databases and Tables
Click "下一步,配置源端表" (Next: configure source tables).
Then click "已选表" (Selected tables) and choose which tables to publish.
The source database and its tables are then listed.
7.2.5 Select the Target Database
Choose openGauss as the target database.
7.2.6 Enter the Target Connection Information
Then run the connection test.
7.2.7 Enable Full Subscription
Select "开启全量订阅" (Enable full subscription).
7.2.8 Run the Synchronization
After the initial full synchronization, incremental synchronization follows, and the whole process can be watched from the monitoring screen.
8. Summary
Compared with earlier openGauss releases, installing 6.0.0 is simpler and more convenient, which lowers the learning cost.
With the zcbus data synchronization software, both full and incremental synchronization from Oracle to openGauss can be achieved, and other products such as MySQL can also be synchronized to openGauss.
During the synchronization I could clearly see how much openGauss 6.0.0 has improved its support for and compatibility with Oracle.
zcbus itself is a capable data exchange platform with many additional features, such as multi-point synchronization and data cleansing; I plan to cover it in more detail later.