Installing a 5-Node ClickHouse Cluster

In this architecture, five servers are configured. Two of them host copies of the data, and the other three coordinate the replication of that data. In this example, we will create a database and a table that are replicated across the two data nodes using the ReplicatedMergeTree table engine.

Official documentation: https://clickhouse.com/docs/en/architecture/replication

Deployment Environment

[Architecture diagram: one shard with two replicas on clickhouse-01/clickhouse-02, coordinated by three ClickHouse Keeper nodes]

Node inventory:

Hostname               Node IP         OS            Spec               Role
clickhouse-01          192.168.72.51   Ubuntu 22.04  2C/4G/100G disk    ClickHouse server, client
clickhouse-02          192.168.72.52   Ubuntu 22.04  2C/4G/100G disk    ClickHouse server, client
clickhouse-keeper-01   192.168.72.53   Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper
clickhouse-keeper-02   192.168.72.54   Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper
clickhouse-keeper-03   192.168.72.55   Ubuntu 22.04  2C/4G/100G disk    ClickHouse Keeper

Note:

In production environments we strongly recommend using dedicated hosts for ClickHouse Keeper. In test environments it is acceptable to run ClickHouse Server and ClickHouse Keeper combined on the same server; the other basic example, "Scaling out", uses that approach. This example presents the recommended approach of separating Keeper from ClickHouse Server. The Keeper servers can be smaller: 4 GB of RAM is generally enough for each Keeper server until your ClickHouse servers grow very large.

Set the hostname on all nodes (run the matching command on each node)

hostnamectl set-hostname clickhouse-01
hostnamectl set-hostname clickhouse-02
hostnamectl set-hostname clickhouse-keeper-01
hostnamectl set-hostname clickhouse-keeper-02
hostnamectl set-hostname clickhouse-keeper-03

Edit the /etc/hosts file on all nodes

# Append the cluster entries; do not overwrite the existing localhost lines
cat >>/etc/hosts<<EOF
192.168.72.51 clickhouse-01 clickhouse-01.example.com
192.168.72.52 clickhouse-02 clickhouse-02.example.com
192.168.72.53 clickhouse-keeper-01 clickhouse-keeper-01.example.com
192.168.72.54 clickhouse-keeper-02 clickhouse-keeper-02.example.com
192.168.72.55 clickhouse-keeper-03 clickhouse-keeper-03.example.com
EOF
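
To confirm that name resolution works on every node, a minimal shell loop can ping each entry once (the host list matches the inventory above):

for h in clickhouse-01 clickhouse-02 clickhouse-keeper-01 clickhouse-keeper-02 clickhouse-keeper-03; do
    ping -c 1 -W 1 "$h" >/dev/null && echo "$h ok" || echo "$h FAILED"
done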

Install ClickHouse

Run on clickhouse-01 and clickhouse-02

Install both clickhouse-server and clickhouse-client on clickhouse-01 and clickhouse-02. Note that apt-key is deprecated on Ubuntu 22.04; the commands below still work but print a warning, and the keyring-based method in the official ClickHouse installation docs is the modern alternative.

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
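
To confirm the installation, you can print the installed versions (exact output depends on the version pulled from the stable channel):

clickhouse-server --version
clickhouse-client --version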

Run on clickhouse-keeper-01 through clickhouse-keeper-03

On the clickhouse-keeper-01 through 03 nodes, install only clickhouse-keeper.

sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-keeper
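
A quick sanity check that the Keeper package is present, and its version, using dpkg (not required by the procedure):

dpkg -s clickhouse-keeper | grep -E '^(Package|Version|Status)'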

Create the ClickHouse Keeper directories

mkdir -p /etc/clickhouse-keeper/config.d
mkdir -p /var/log/clickhouse-keeper
mkdir -p /var/lib/clickhouse-keeper/coordination/log
mkdir -p /var/lib/clickhouse-keeper/coordination/snapshots
mkdir -p /var/lib/clickhouse-keeper/cores
chown -R clickhouse:clickhouse /etc/clickhouse-keeper /var/log/clickhouse-keeper /var/lib/clickhouse-keeper

clickhouse-01 configuration

There are four configuration files for clickhouse-01. You can choose to combine them into a single file, but for clarity it may be simpler to look at them separately. As you read through the configuration files you will see that most of the configuration is the same between clickhouse-01 and clickhouse-02; the differences will be pointed out.

Network and logging configuration

These values can be customized as you wish. This example configuration gives you:

  • A debug log that rolls over at 1000M, keeping 3 rolled files
  • The name displayed when you connect with clickhouse-client is cluster_1S_2R node 1
  • ClickHouse listens on ports 8123 and 9000 on IPv4 networks

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 1</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macro configuration

The macros shard and replica reduce the complexity of distributed DDL. The configured values are substituted into your DDL queries automatically, simplifying your DDL. The macros in this configuration specify the shard and replica number of each node.
In this 1-shard, 2-replica example, the replica macro is 01 on clickhouse-01 and 02 on clickhouse-02, matching the XML below. The shard macro is 01 on both clickhouse-01 and clickhouse-02, as there is only one shard.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>01</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>
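
Once the server is running (see the start-up section below), you can confirm the effective macros from clickhouse-client; the system.macros table holds each macro and its substitution:

clickhouse-client --query "SELECT * FROM system.macros"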

Replication and sharding configuration

Starting from the top:

  • The remote_servers section of the XML specifies each of the clusters in the environment. The attribute replace=true replaces the sample remote_servers in the default ClickHouse configuration with the remote_servers configuration specified in this file. Without this attribute, the remote servers in this file would be appended to the list of samples in the default configuration.
  • In this example, there is one cluster named cluster_1S_2R.
  • A secret is created for the cluster named cluster_1S_2R with the value mysecretphrase. The secret is shared across all of the remote servers in the environment to ensure that the correct servers are joined together.
  • The cluster cluster_1S_2R has one shard and two replicas. Take a look at the architecture diagram toward the beginning of this document and compare it with the shard definition in the XML below. The shard definition contains the two replicas; the host and port for each replica are specified. One replica is stored on clickhouse-01 and the other on clickhouse-02.
  • Internal replication for the shard is set to true. Each shard can have the internal_replication parameter defined in the config file. If this parameter is set to true, the write operation selects the first healthy replica and writes data to it.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>
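
After both servers are started, you can confirm that this cluster definition was loaded by querying system.clusters; both replicas of the single shard should be listed:

clickhouse-client --query "SELECT cluster, shard_num, replica_num, host_name, port FROM system.clusters WHERE cluster = 'cluster_1S_2R'"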

Configuring the use of Keeper

This configuration file, use-keeper.xml, configures ClickHouse Server to use ClickHouse Keeper for the coordination of replication and distributed DDL. This file specifies that ClickHouse Server should use Keeper on nodes clickhouse-keeper-01 through 03 on port 9181, and the file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-01

root@clickhouse-01:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>
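
Once the Keeper ensemble and the servers are running, the Keeper connection can be verified by reading the coordination root from clickhouse-client; note that system.zookeeper requires a path condition in the WHERE clause:

clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"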

clickhouse-02 configuration

As the configuration on clickhouse-01 and clickhouse-02 is very similar, only the differences are pointed out here.

Network and logging configuration

This file is the same on both clickhouse-01 and clickhouse-02, with the exception of display_name.

/etc/clickhouse-server/config.d/network-and-logging.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/network-and-logging.xml
<clickhouse>
    <logger>
        <level>debug</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <display_name>cluster_1S_2R node 2</display_name>
    <listen_host>0.0.0.0</listen_host>
    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
</clickhouse>

Macro configuration

The macro configuration differs between clickhouse-01 and clickhouse-02: replica is set to 02 on this node.

/etc/clickhouse-server/config.d/macros.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/macros.xml
<clickhouse>
    <macros>
        <shard>01</shard>
        <replica>02</replica>
        <cluster>cluster_1S_2R</cluster>
    </macros>
</clickhouse>

Replication and sharding configuration

This file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/remote-servers.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/remote-servers.xml
<clickhouse>
    <remote_servers replace="true">
        <cluster_1S_2R>
            <secret>mysecretphrase</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>clickhouse-01</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>clickhouse-02</host>
                    <port>9000</port>
                </replica>
            </shard>
        </cluster_1S_2R>
    </remote_servers>
</clickhouse>

Configuring the use of Keeper

This file is identical on clickhouse-01 and clickhouse-02.

/etc/clickhouse-server/config.d/use-keeper.xml on clickhouse-02

root@clickhouse-02:~# cat /etc/clickhouse-server/config.d/use-keeper.xml
<clickhouse>
    <zookeeper>
        <!-- where are the ZK nodes -->
        <node>
            <host>clickhouse-keeper-01</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-02</host>
            <port>9181</port>
        </node>
        <node>
            <host>clickhouse-keeper-03</host>
            <port>9181</port>
        </node>
    </zookeeper>
</clickhouse>

clickhouse-keeper-01 configuration

Best practice

When configuring ClickHouse Keeper by editing configuration files, you should:

  • Back up /etc/clickhouse-keeper/keeper_config.xml
  • Edit the /etc/clickhouse-keeper/keeper_config.xml file

ClickHouse Keeper provides the coordination system for data replication and distributed DDL query execution, and it is compatible with Apache ZooKeeper. This configuration enables ClickHouse Keeper on port 9181. The server_id element specifies that this Keeper instance is server 1; this is the only difference in the keeper_config.xml file across the three servers. server_id is set to 2 on clickhouse-keeper-02 and to 3 on clickhouse-keeper-03. The raft_configuration section is the same on all three servers; it is what relates each server_id to a server instance in the raft configuration. Note that the storage paths below point at the /var/lib/clickhouse-keeper directories created earlier.

Note

If a Keeper node is replaced or rebuilt for any reason, do not reuse an existing server_id. For example, if the Keeper node with server_id 2 is rebuilt, give it server_id 4 or higher.

Back up the keeper_config.xml configuration on all Keeper nodes

# Back up the config
cp /etc/clickhouse-keeper/keeper_config.xml{,.bak}
# Empty the default config
echo > /etc/clickhouse-keeper/keeper_config.xml

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-01

root@clickhouse-keeper-01:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

clickhouse-keeper-02 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-02: server_id is set to 2 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-02

root@clickhouse-keeper-02:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>2</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

clickhouse-keeper-03 configuration

There is only one line of difference between clickhouse-keeper-01 and clickhouse-keeper-03: server_id is set to 3 on this node.

/etc/clickhouse-keeper/keeper_config.xml on clickhouse-keeper-03

root@clickhouse-keeper-03:~# cat /etc/clickhouse-keeper/keeper_config.xml
<clickhouse>
    <logger>
        <level>trace</level>
        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
        <size>1000M</size>
        <count>3</count>
    </logger>
    <listen_host>0.0.0.0</listen_host>
    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse-keeper/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse-keeper/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>clickhouse-keeper-01</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>clickhouse-keeper-02</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>clickhouse-keeper-03</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

Start the services

Run on the clickhouse-keeper-01 through 03 nodes to enable and start the Keeper service:

systemctl enable --now clickhouse-keeper.service
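
Once each Keeper is up, a quick liveness probe is the four-letter ruok command sent to the Keeper port; a healthy instance replies with imok:

echo ruok | nc localhost 9181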

Verify the service status on clickhouse-keeper-01

root@clickhouse-keeper-01:~# systemctl status clickhouse-keeper.service 
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:26 CST; 3h 0min ago
   Main PID: 3460 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 58.8M
        CPU: 1min 13.000s
     CGroup: /system.slice/clickhouse-keeper.service
             └─3460 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:26 clickhouse-keeper-01 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:26 clickhouse-keeper-01 clickhouse-keeper[3460]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Verify the service status on clickhouse-keeper-02

root@clickhouse-keeper-02:~# systemctl status clickhouse-keeper.service 
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:28 CST; 3h 0min ago
   Main PID: 3053 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 44.7M
        CPU: 1min 557ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─3053 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:28 clickhouse-keeper-02 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:28 clickhouse-keeper-02 clickhouse-keeper[3053]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Verify the service status on clickhouse-keeper-03

root@clickhouse-keeper-03:~# systemctl status clickhouse-keeper.service
● clickhouse-keeper.service - ClickHouse Keeper - zookeeper compatible distributed coordination server
     Loaded: loaded (/lib/systemd/system/clickhouse-keeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:44:30 CST; 3h 0min ago
   Main PID: 2991 (clickhouse-keep)
      Tasks: 41 (limit: 4556)
     Memory: 43.4M
        CPU: 1min 336ms
     CGroup: /system.slice/clickhouse-keeper.service
             └─2991 /usr/bin/clickhouse-keeper --config=/etc/clickhouse-keeper/keeper_config.xml --pid-file=/run/clickhouse-keeper/clickhouse-keeper.pid

Oct 27 19:44:30 clickhouse-keeper-03 systemd[1]: Started ClickHouse Keeper - zookeeper compatible distributed coordination server.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Processing configuration file '/etc/clickhouse-keeper/keeper_config.xml'.
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging trace to /var/log/clickhouse-keeper/clickhouse-keeper.log
Oct 27 19:44:30 clickhouse-keeper-03 clickhouse-keeper[2991]: Logging errors to /var/log/clickhouse-keeper/clickhouse-keeper.err.log

Run on the clickhouse-01 and clickhouse-02 nodes (the restart ensures the configuration files added above are loaded even if the server was already started during package installation):

systemctl enable --now clickhouse-server.service
systemctl restart clickhouse-server.service
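
A simple end-to-end check on each server node is the HTTP ping endpoint, which returns "Ok." when the server is accepting connections:

curl http://localhost:8123/ping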

Verify the service status on clickhouse-01

root@clickhouse-01:~# systemctl status clickhouse-server.service 
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 802.6M
        CPU: 25min 4.495s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-01 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-01 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-01 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-01 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-01:~# 

Verify the service status on clickhouse-02

root@clickhouse-02:~# systemctl status clickhouse-server.service 
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/lib/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-10-27 19:55:27 CST; 2h 51min ago
   Main PID: 3107 (clickhouse-serv)
      Tasks: 701 (limit: 4556)
     Memory: 759.0M
        CPU: 25min 6.801s
     CGroup: /system.slice/clickhouse-server.service
             ├─3104 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─3107 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

Oct 27 19:55:26 clickhouse-02 systemd[1]: Starting ClickHouse Server (analytic DBMS for big data)...
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Processing configuration file '/etc/clickhouse-server/config.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/macros.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/network-and-logging.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/remote-servers.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Merging configuration file '/etc/clickhouse-server/config.d/use-keeper.xml'.
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging debug to /var/log/clickhouse-server/clickhouse-server.log
Oct 27 19:55:26 clickhouse-02 clickhouse-server[3104]: Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
Oct 27 19:55:26 clickhouse-02 systemd[1]: clickhouse-server.service: Supervising process 3107 which is not our child. We'll most likely not notice when it exits.
Oct 27 19:55:27 clickhouse-02 systemd[1]: Started ClickHouse Server (analytic DBMS for big data).
root@clickhouse-02:~# 

Test the cluster

To gain experience with ReplicatedMergeTree and ClickHouse Keeper, you can run the following steps, which:

  • Create a database on the cluster configured above
  • Create a table on the database using the ReplicatedMergeTree table engine
  • Insert data on one node and query it on another node
  • Stop one ClickHouse server node
  • Insert more data on the running node
  • Restart the stopped node
  • Verify that the data is available when querying the restarted node

Verify that ClickHouse Keeper is running

The mntr command is used to verify that ClickHouse Keeper is running and to get state information about the relationship of the three Keeper nodes. In the configuration used in this example, the three nodes work together: they elect a leader, and the remaining nodes become followers. The mntr command gives information related to performance and whether a particular node is a follower or a leader.

Tip

You may need to install netcat in order to send the mntr command to Keeper. See the nmap.org page for download information.

Run from a shell on clickhouse-keeper-01, clickhouse-keeper-02, and clickhouse-keeper-03

echo mntr | nc localhost 9181

Response from a follower

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state follower
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   46
zk_max_file_descriptor_count    18446744073709551615

Response from the leader

zk_version  v23.3.1.2823-testing-46e85357ce2da2a99f56ee83a079e892d7ec3726
zk_avg_latency  0
zk_max_latency  0
zk_min_latency  0
zk_packets_received 0
zk_packets_sent 0
zk_num_alive_connections    0
zk_outstanding_requests 0
zk_server_state leader
zk_znode_count  6
zk_watch_count  0
zk_ephemerals_count 0
zk_approximate_data_size    1271
zk_key_arena_size   4096
zk_latest_snapshot_size 0
zk_open_file_descriptor_count   48
zk_max_file_descriptor_count    18446744073709551615
zk_followers    2
zk_synced_followers 2

Verify ClickHouse cluster functionality

In one shell, connect to node clickhouse-01 with clickhouse-client, and in another shell connect to node clickhouse-02 with clickhouse-client.

1. Create a database on the cluster configured above

Run on either node clickhouse-01 or clickhouse-02

CREATE DATABASE db1 ON CLUSTER cluster_1S_2R
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2. Create a table on the database using the ReplicatedMergeTree table engine

Run on either node clickhouse-01 or clickhouse-02

CREATE TABLE db1.table1 ON CLUSTER cluster_1S_2R
(
    `id` UInt64,
    `column1` String
)
ENGINE = ReplicatedMergeTree
ORDER BY id
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ clickhouse-02 │ 9000 │      0 │       │                   1 │                0 │
│ clickhouse-01 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
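
At this point the replication state of the new table can be inspected from either node; system.replicas has one row per replicated table (a supplementary check, not part of the original walkthrough):

SELECT database, table, is_leader, total_replicas, active_replicas
FROM system.replicas
WHERE table = 'table1'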

3. Insert data on one node and query it on another node

Run on node clickhouse-01

INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc');

4. Query the table on node clickhouse-02

Run on node clickhouse-02

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘

5. Insert data on the other node and query it on node clickhouse-01

Run on node clickhouse-02

INSERT INTO db1.table1 (id, column1) VALUES (2, 'def');

Run on node clickhouse-01

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘

6. Stop one ClickHouse server node

Stop one of the ClickHouse server nodes by running an operating-system command analogous to the one used to start the node. If you used systemctl start to start the node, use systemctl stop to stop it.

root@clickhouse-01:~# systemctl stop clickhouse-server.service

7. Insert more data on the running node

Run on the running node

INSERT INTO db1.table1 (id, column1) VALUES (3, 'ghi');

Select the data:

Run on the running node

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘

8. Restart the stopped node and select from it

root@clickhouse-01:~# systemctl start clickhouse-server.service

Run on the restarted node

SELECT *
FROM db1.table1
┌─id─┬─column1─┐
│  1 │ abc     │
└────┴─────────┘
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
┌─id─┬─column1─┐
│  3 │ ghi     │
└────┴─────────┘
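
To confirm that the restarted replica has fully caught up, check its replication delay and queue; absolute_delay and queue_size should both return to 0 once the replica is in sync (a supplementary check beyond the original steps):

SELECT table, absolute_delay, queue_size
FROM system.replicas
WHERE table = 'table1'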

文章目录 1、什么是多任务?1.1 进程和线程的概念1.2 多线程与多进程的区别1.3 并发和并行2、python中的全局解释器锁3、多线程执行机制4、python中实现多线程(threading模块)4.1 模块介绍4.2 模块的使用5、python实现多进行程(Multiprocessing模块)5.1 导入模块5.2 模块的…