[k8s] kubeasz 3.6.3 + VirtualBox: a local three-node openEuler 22.03 VM cluster, offline approach

kubeasz project source code:

GitHub - easzlab/kubeasz: install a Kubernetes cluster with Ansible scripts; explains how the components interact; simple and direct, unaffected by the network restrictions in mainland China

Clone the code and check out the latest release:

git clone https://github.com/easzlab/kubeasz
cd kubeasz
git checkout 3.6.3

Distribute the SSH key to each node with ssh-copy-id, run as the root user:

ssh-copy-id root@10.47.76.73
ssh-copy-id root@10.47.76.74
ssh-copy-id root@10.47.76.76
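
If the deploy machine does not yet have an SSH key pair for root, ssh-copy-id has nothing to distribute; a key can be generated first (a minimal sketch using the usual defaults):

ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa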

Install ansible on the local machine (Ubuntu 22.04 x86_64):

sudo apt install ansible -y
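
A quick sanity check that ansible is installed and on the PATH, since the ad-hoc ansible commands later in this post are run with this host-side installation:

ansible --version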

Download the resources:
yeqiang@yeqiang-MS-7B23:~/Downloads/src/kubeasz$ sudo ./ezdown -D
2024-03-25 10:03:40 INFO Action begin: download_all
2024-03-25 10:03:40 INFO downloading docker binaries, arch:x86_64, version:24.0.7
--2024-03-25 10:03:40--  https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/static/stable/x86_64/docker-24.0.7.tgz
正在解析主机 mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.15.130, 2402:f000:1:400::2
正在连接 mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.15.130|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度: 69831072 (67M) [application/octet-stream]
正在保存至: ‘docker-24.0.7.tgz’

docker-24.0.7.tgz                          100%[=====================================================================================>]  66.60M  1.54MB/s    用时 41s   

2024-03-25 10:04:21 (1.64 MB/s) - 已保存 ‘docker-24.0.7.tgz’ [69831072/69831072])

2024-03-25 10:04:22 WARN docker is already running.
2024-03-25 10:04:22 INFO downloading kubeasz: 3.6.3
3.6.3: Pulling from easzlab/kubeasz
f56be85fc22e: Pull complete 
ea5757f4b3f8: Pull complete 
bd0557c686d8: Pull complete 
37d4153ce1d0: Pull complete 
b39eb9b4269d: Pull complete 
a3cff94972c7: Pull complete 
b66d4ab4ee64: Pull complete 
Digest: sha256:13135e1ef95ecdb392677b9b7067923cf41fc4371cd0c1eb8b024cf442512a63
Status: Downloaded newer image for easzlab/kubeasz:3.6.3
docker.io/easzlab/kubeasz:3.6.3
2024-03-25 10:05:39 DEBUG  run a temporary container
7b65d19edc6efd95cc4bc646401407fcff91e0aa7681a60ae1c84a5108a30ee8
2024-03-25 10:05:42 DEBUG cp kubeasz code from the temporary container
Successfully copied 2.89MB to /etc/kubeasz
2024-03-25 10:05:42 DEBUG stop&remove temporary container
temp_easz
2024-03-25 10:05:44 INFO downloading kubernetes: v1.29.0 binaries
v1.29.0: Pulling from easzlab/kubeasz-k8s-bin
1b7ca6aea1dd: Already exists 
1cf75602dde9: Pull complete 
4ae371062546: Pull complete 
Digest: sha256:adf57dbaec3f7c08b2276aac03a1bb4feae5e0ef294dfdc191c6603e85cf6ccd
Status: Downloaded newer image for easzlab/kubeasz-k8s-bin:v1.29.0
docker.io/easzlab/kubeasz-k8s-bin:v1.29.0
2024-03-25 10:21:01 DEBUG run a temporary container
09a9a5c39ea39f98176ff67418654965bb6885e33db405ae84371dbe2a2861dc
2024-03-25 10:21:13 DEBUG cp k8s binaries
Successfully copied 515MB to /etc/kubeasz/k8s_bin_tmp
2024-03-25 10:21:14 DEBUG stop&remove temporary container
temp_k8s_bin
2024-03-25 10:21:14 INFO downloading extral binaries kubeasz-ext-bin:1.9.0
1.9.0: Pulling from easzlab/kubeasz-ext-bin
070eb51debd9: Pull complete 
824ac05263f5: Pull complete 
6ab8bf2594e2: Pull complete 
cb81b024c20f: Pull complete 
e4d14742b324: Pull complete 
f84999fd6cee: Pull complete 
50eb857ee625: Pull complete 
89e5b14263dd: Pull complete 
Digest: sha256:aaf5296518cb3f03602e545bac9216925184dbfcbb6c70e4bde76f9751cf21c3
Status: Downloaded newer image for easzlab/kubeasz-ext-bin:1.9.0
docker.io/easzlab/kubeasz-ext-bin:1.9.0
2024-03-25 10:25:26 DEBUG run a temporary container
fa445258b9b658dfe599946d00f1e4e570994d3a8e69a88dfc92aba420cae614
2024-03-25 10:25:30 DEBUG cp extral binaries
Successfully copied 648MB to /etc/kubeasz/extra_bin_tmp
2024-03-25 10:25:31 DEBUG stop&remove temporary container
temp_ext_bin
2: Pulling from library/registry
619be1103602: Pull complete 
5daf2fb85fb9: Pull complete 
ca5f23059090: Pull complete 
8f2a82336004: Pull complete 
68c26f40ad80: Pull complete 
Digest: sha256:fb9c9aef62af3955f6014613456551c92e88a67dcf1fc51f5f91bcbd1832813f
Status: Downloaded newer image for registry:2
docker.io/library/registry:2
2024-03-25 10:25:47 INFO start local registry ...
c3830f310e24c9a3cb310cec259a74f438fb438103612e4919842a41016f7dae
2024-03-25 10:25:49 INFO download default images, then upload to the local registry
v3.26.4: Pulling from calico/cni
2a2cc8873d88: Pull complete 
f689a1b6ffc9: Pull complete 
222ddc102977: Pull complete 
bb231ec660e2: Pull complete 
c274814db7a5: Pull complete 
c04ab43d8c14: Pull complete 
56e4809beb2c: Pull complete 
82a9d7b9ead4: Pull complete 
2e8423cc9523: Pull complete 
dbb2b79785d1: Pull complete 
15e4b2899800: Pull complete 
4f4fb700ef54: Pull complete 
Digest: sha256:7c5895c5d6ed3266bcd405fbcdbb078ca484688673c3479f0f18bf072d58c242
Status: Downloaded newer image for calico/cni:v3.26.4
docker.io/calico/cni:v3.26.4
v3.26.4: Pulling from calico/kube-controllers
312c81d49b31: Pull complete 
21f1655e08ac: Pull complete 
807fead6050f: Pull complete 
1abfcfa9d8cd: Pull complete 
9398ffacf522: Pull complete 
3379ce07ff21: Pull complete 
f5745fd91cba: Pull complete 
b2d1ec87e4a2: Pull complete 
9ebe38a91c19: Pull complete 
d92a41934dc3: Pull complete 
7427cd509920: Pull complete 
1726ce00d070: Pull complete 
dcd892b22925: Pull complete 
8b58b0d1e6a1: Pull complete 
Digest: sha256:5fce14b4dfcd63f1a4663176be4f236600b410cd896d054f56291c566292c86e
Status: Downloaded newer image for calico/kube-controllers:v3.26.4
docker.io/calico/kube-controllers:v3.26.4
v3.26.4: Pulling from calico/node
c596d07e602a: Pull complete 
9ae8e7f0c0b3: Pull complete 
Digest: sha256:a8b77a5f27b167501465f7f5fb7601c44af4df8dccd1c7201363bbb301d1fe40
Status: Downloaded newer image for calico/node:v3.26.4
docker.io/calico/node:v3.26.4
The push refers to repository [easzlab.io.local:5000/calico/cni]
5f70bf18a086: Pushed 
7dff43aa1268: Pushed 
14fdc63b97b8: Pushed 
ae844ae009c7: Pushed 
3d2540981e86: Pushed 
5743eb3b1640: Pushed 
6c2e5970601b: Pushed 
50fa5e13eb34: Pushed 
468901d6015e: Pushed 
e4dea417b6a9: Pushed 
fbe0fc515554: Pushed 
8a287df44e83: Pushed 
v3.26.4: digest: sha256:3540aa94aea8fcd41edd8490a82847bbf6a9a52215f0550c27e196441d234f57 size: 2823
The push refers to repository [easzlab.io.local:5000/calico/kube-controllers]
15e2f86dd9c8: Pushed 
6de775fe835c: Pushed 
2bf7b670d125: Pushed 
c40c18a1888a: Pushed 
f65cfcb50057: Pushed 
999a8e768b19: Pushed 
04873e012646: Pushed 
73e66a55b78b: Pushed 
aff2e5741039: Pushed 
69fff1fdf097: Pushed 
1fe60555ee28: Pushed 
1e3024c01822: Pushed 
ff28c98ce459: Pushed 
2235e9b55c14: Pushed 
v3.26.4: digest: sha256:b7625323054de4420ba27761d4120ad300d3aa7e0109c8bc41a24ca4bcdd3471 size: 3240
The push refers to repository [easzlab.io.local:5000/calico/node]
c0eef34472c4: Pushed 
f4270759c5ec: Pushed 
v3.26.4: digest: sha256:0b242b133d70518988a5a36c1401ee4f37bf937743ecceafd242bd821b6645c6 size: 737
1.11.1: Pulling from coredns/coredns
dd5ad9c9c29f: Pull complete 
960043b8858c: Pull complete 
b4ca4c215f48: Pull complete 
eebb06941f3e: Pull complete 
02cd68c0cbf6: Pull complete 
d3c894b5b2b0: Pull complete 
b40161cd83fc: Pull complete 
46ba3f23f1d3: Pull complete 
4fa131a1b726: Pull complete 
860aeecad371: Pull complete 
c54d895c1975: Pull complete 
Digest: sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
Status: Downloaded newer image for coredns/coredns:1.11.1
docker.io/coredns/coredns:1.11.1
The push refers to repository [easzlab.io.local:5000/coredns/coredns]
545a68d51bc4: Pushed 
aec96fc6d10e: Pushed 
4cb10dd2545b: Pushed 
d2d7ec0f6756: Pushed 
1a73b54f556b: Pushed 
e624a5370eca: Pushed 
d52f02c6501c: Pushed 
ff5700ec5418: Pushed 
7bea6b893187: Pushed 
6fbdf253bbc2: Pushed 
e023e0e48e6e: Pushed 
1.11.1: digest: sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870 size: 2612
1.22.23: Pulling from easzlab/k8s-dns-node-cache
6c4682e3383e: Pull complete 
89414bc462f0: Pull complete 
e11308cddc2e: Pull complete 
ac73bbef8d7c: Pull complete 
07a0455b7f8d: Pull complete 
772dc49a1658: Pull complete 
Digest: sha256:9fced15a756c8cec1fd8347a268958d49a2927f713bf742a821752b9f39bcead
Status: Downloaded newer image for easzlab/k8s-dns-node-cache:1.22.23
docker.io/easzlab/k8s-dns-node-cache:1.22.23
The push refers to repository [easzlab.io.local:5000/easzlab/k8s-dns-node-cache]
4f165a38d33f: Pushed 
71ff73bde640: Pushed 
0b6ea7c7e5fa: Pushed 
5c3659a2da85: Pushed 
66673051b8a2: Pushed 
2e1e0b8e464d: Pushed 
1.22.23: digest: sha256:9cebf9ba45e040b2b4bc3a3c6e9e2662a080e4a588750bc3a3477fec51f9f395 size: 1571
v2.7.0: Pulling from kubernetesui/dashboard
ee3247c7e545: Pull complete 
8e052fd7e2d0: Pull complete 
Digest: sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Status: Downloaded newer image for kubernetesui/dashboard:v2.7.0
docker.io/kubernetesui/dashboard:v2.7.0
The push refers to repository [easzlab.io.local:5000/kubernetesui/dashboard]
c88361932af5: Pushed 
bd8a70623766: Pushed 
v2.7.0: digest: sha256:ef134f101e8a4e96806d0dd839c87c7f76b87b496377422d20a65418178ec289 size: 736
v1.0.8: Pulling from kubernetesui/metrics-scraper
978be80e3ee3: Pull complete 
5866d2c04d96: Pull complete 
Digest: sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.8
docker.io/kubernetesui/metrics-scraper:v1.0.8
The push refers to repository [easzlab.io.local:5000/kubernetesui/metrics-scraper]
bcec7eb9e567: Pushed 
d01384fea991: Pushed 
v1.0.8: digest: sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a size: 736
v0.6.4: Pulling from easzlab/metrics-server
a7ca0d9ba68f: Pull complete 
fe5ca62666f0: Pull complete 
b02a7525f878: Pull complete 
fcb6f6d2c998: Pull complete 
e8c73c638ae9: Pull complete 
1e3d9b7d1452: Pull complete 
4aa0ea1413d3: Pull complete 
7c881f9ab25e: Pull complete 
5627a970d25e: Pull complete 
c11e15826cd6: Pull complete 
Digest: sha256:08b3388f924fa52a3c9d0a9bad43746250a3e82c1414e6cefb7966dd29a9e760
Status: Downloaded newer image for easzlab/metrics-server:v0.6.4
docker.io/easzlab/metrics-server:v0.6.4
The push refers to repository [easzlab.io.local:5000/easzlab/metrics-server]
2e843aeae1b3: Pushed 
4cb10dd2545b: Mounted from coredns/coredns 
d2d7ec0f6756: Mounted from coredns/coredns 
1a73b54f556b: Mounted from coredns/coredns 
e624a5370eca: Mounted from coredns/coredns 
d52f02c6501c: Mounted from coredns/coredns 
ff5700ec5418: Mounted from coredns/coredns 
7bea6b893187: Mounted from coredns/coredns 
6fbdf253bbc2: Mounted from coredns/coredns 
e023e0e48e6e: Mounted from coredns/coredns 
v0.6.4: digest: sha256:3f9cbdca6bedc8cac2d7575d29ceb2be5d17ea3dc812de9631f95ba48205d1b3 size: 2402
3.9: Pulling from easzlab/pause
61fec91190a0: Pull complete 
Digest: sha256:d5fee2a95eaaefc3a0b8a914601b685e4170cb870ac319ac5a9bfb7938389852
Status: Downloaded newer image for easzlab/pause:3.9
docker.io/easzlab/pause:3.9
The push refers to repository [easzlab.io.local:5000/easzlab/pause]
e3e5579ddd43: Pushed 
3.9: digest: sha256:3ec9d4ec5512356b5e77b13fddac2e9016e7aba17dd295ae23c94b2b901813de size: 527
2024-03-25 10:38:57 INFO Action successed: download_all
 

Main resources downloaded

Note that ezdown does not support openEuler 22.03, so the system offline packages have to be downloaded separately later.
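
A hypothetical sketch of preparing those packages by hand, run on an openEuler 22.03 machine that still has internet access. The package list is only illustrative, and where kubeasz expects the files for its offline roles should be checked against the 3.6.3 tree (e.g. roles/prepare), so treat this as an assumption rather than the official procedure:

# the 'download' subcommand needs dnf-plugins-core
dnf install -y dnf-plugins-core
# fetch the RPMs plus all dependencies into a local directory
dnf download --resolve --alldeps --destdir=/root/openeuler-offline \
    conntrack-tools socat ipset ipvsadm chrony nfs-utils
# later, on the offline nodes, install straight from that directory
dnf install -y /root/openeuler-offline/*.rpm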

Switch to root and cd into the directory the resources were downloaded to:

sudo su
cd /etc/kubeasz/

Run kubeasz in a container:

root@yeqiang-MS-7B23:/etc/kubeasz# ./ezdown -S
2024-03-25 10:49:31 INFO Action begin: start_kubeasz_docker
Loaded image: easzlab/kubeasz:3.6.3
2024-03-25 10:49:32 INFO try to run kubeasz in a container
2024-03-25 10:49:32 DEBUG get host IP: 10.47.76.45
77883f725775c025a4cde5aa1d8d148089499d292795c5899cdb9cef5bebb832
2024-03-25 10:49:33 INFO Action successed: start_kubeasz_docker
 

The started kubeasz container can now be seen:
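
For example, filtering on the container name that ezdown -S uses:

docker ps --filter name=kubeasz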

Create a cluster named k8s-local:

root@yeqiang-MS-7B23:/etc/kubeasz# docker exec -it kubeasz ezctl new k8s-local
2024-03-25 10:54:05 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-local
2024-03-25 10:54:05 DEBUG set versions
2024-03-25 10:54:06 DEBUG disable registry mirrors
2024-03-25 10:54:06 DEBUG cluster k8s-local: files successfully created.
2024-03-25 10:54:06 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-local/hosts'
2024-03-25 10:54:06 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-local/config.yml'
 

Configure /etc/kubeasz/clusters/k8s-local/hosts:

# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.47.76.73
10.47.76.74
10.47.76.76

# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
10.47.76.73 k8s_nodename='master-01'
10.47.76.74 k8s_nodename='master-02'
10.47.76.76 k8s_nodename='master-03'

# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
10.47.76.73 k8s_nodename='worker-01'
10.47.76.74 k8s_nodename='worker-02'
10.47.76.76 k8s_nodename='worker-03'

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
10.47.76.73 NEW_INSTALL=true

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
10.47.76.73 LB_ROLE=backup EX_APISERVER_VIP=10.47.76.201 EX_APISERVER_PORT=8443
10.47.76.74 LB_ROLE=master EX_APISERVER_VIP=10.47.76.201 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
10.47.76.73

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
CONTAINER_RUNTIME="containerd"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="30000-32767"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-local"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Default 'k8s_nodename' is empty
k8s_nodename=''

# Default python interpreter
ansible_python_interpreter=/usr/bin/python3
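
With the inventory in place, a quick connectivity check from the deploy machine (run from /etc/kubeasz) confirms the SSH keys and python interpreter settings work before launching the full playbooks:

ansible -i clusters/k8s-local/hosts all -m ping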

Configure /etc/kubeasz/clusters/k8s-local/config.yml:

############################
# prepare
############################
# 可选离线安装系统软件包 (offline|online)
INSTALL_SOURCE: "offline"

# 可选进行系统安全加固 github.com/dev-sec/ansible-collection-hardening
# (deprecated) 未更新上游项目,未验证最新k8s集群安装,不建议启用
OS_HARDEN: false


############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# force to recreate CA and other certs, not suggested to set 'true'
CHANGE_CA: false

# kubeconfig 配置参数
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.29.0"

# set unique 'k8s_nodename' for each node, if not set(default:'') ip add will be used
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character (e.g. 'example.com'),
# regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
K8S_NODENAME: "{%- if k8s_nodename != '' -%} \
                    {{ k8s_nodename|replace('_', '-')|lower }} \
               {%- else -%} \
                    {{ inventory_hostname }} \
               {%- endif -%}"

############################
# role:etcd
############################
# 设置不同的wal目录,可以避免磁盘io竞争,提高性能
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# [.]启用拉取加速镜像仓库
ENABLE_MIRROR_REGISTRY: false

# [.]添加信任的私有仓库
INSECURE_REG:
  - "http://easzlab.io.local:5000"
  - "https://{{ HARBOR_REGISTRY }}"

# [.]基础容器镜像
SANDBOX_IMAGE: "easzlab.io.local:5000/easzlab/pause:3.9"

# [containerd]容器持久化存储目录
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# [docker]容器存储目录
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker]开启Restful API
DOCKER_ENABLE_REMOTE_API: false


############################
# role:kube-master
############################
# k8s 集群 master 节点证书配置,可以添加多个ip和域名(比如增加公网ip和域名)
MASTER_CERT_HOSTS:
  - "10.47.76.73"
  

# node 节点上 pod 网段掩码长度(决定每个节点最多能分配的pod ip地址)
# 如果flannel 使用 --kube-subnet-mgr 参数,那么它将读取该设置为每个节点分配pod网段
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# Kubelet 根目录
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# node节点最大pod 数
MAX_PODS: 110

# 配置为kube组件(kubelet,kube-proxy,dockerd等)预留的资源量
# 数值设置详见templates/kubelet-config.yaml.j2
KUBE_RESERVED_ENABLED: "no"

# k8s 官方不建议草率开启 system-reserved, 除非你基于长期监控,了解系统的资源占用状况;
# 并且随着系统运行时间,需要适当增加资源预留,数值设置详见templates/kubelet-config.yaml.j2
# 系统预留设置基于 4c/8g 虚机,最小化安装系统服务,如果使用高性能物理机可以适当增加预留
# 另外,集群安装时候apiserver等资源占用会短时较大,建议至少预留1g内存
SYS_RESERVED_ENABLED: "no"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel]设置flannel 后端"host-gw","vxlan"等
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] 
flannel_ver: "v0.22.2"

# ------------------------------------------- calico
# [calico] IPIP隧道模式可选项有: [Always, CrossSubnet, Never],跨子网可以配置为Always与CrossSubnet(公有云建议使用always比较省事,其他的话需要修改各自公有云的网络配置,具体可以参考各个公有云说明)
# 其次CrossSubnet为隧道+BGP路由混合模式可以提升网络性能,同子网配置为Never即可.
CALICO_IPV4POOL_IPIP: "Always"

# [calico]设置 calico-node使用的host IP,bgp邻居通过该地址建立,可手工指定也可以自动发现
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico]设置calico 网络 backend: bird, vxlan, none
CALICO_NETWORKING_BACKEND: "bird"

# [calico]设置calico 是否使用route reflectors
# 如果集群规模超过50个节点,建议启用该特性
CALICO_RR_ENABLED: false

# CALICO_RR_NODES 配置route reflectors的节点,如果未设置默认使用集群master节点 
# CALICO_RR_NODES: ["192.168.1.1", "192.168.1.2"]
CALICO_RR_NODES: []

# [calico]更新支持calico 版本: ["3.19", "3.23"]
calico_ver: "v3.26.4"

# [calico]calico 主版本
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# ------------------------------------------- cilium
# [cilium]镜像版本
cilium_ver: "1.14.5"
cilium_connectivity_check: true
cilium_hubble_enabled: false
cilium_hubble_ui_enabled: false

# ------------------------------------------- kube-ovn
# [kube-ovn]离线镜像tar包
kube_ovn_ver: "v1.11.5"

# ------------------------------------------- kube-router
# [kube-router]公有云上存在限制,一般需要始终开启 ipinip;自有环境可以设置为 "subnet"
OVERLAY_TYPE: "full"

# [kube-router]NetworkPolicy 支持开关
FIREWALL_ENABLE: true

# [kube-router]kube-router 镜像版本
kube_router_ver: "v1.5.4"


############################
# role:cluster-addon
############################
# coredns 自动安装
dns_install: "yes"
corednsVer: "1.11.1"
ENABLE_LOCAL_DNS_CACHE: true
dnsNodeCacheVer: "1.22.23"
# 设置 local dns cache 地址
LOCAL_DNS_CACHE: "169.254.20.10"

# metric server 自动安装
metricsserver_install: "yes"
metricsVer: "v0.6.4"

# dashboard 自动安装
dashboard_install: "yes"
dashboardVer: "v2.7.0"
dashboardMetricsScraperVer: "v1.0.8"

# prometheus 自动安装
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "45.23.0"

# kubeapps 自动安装,如果选择安装,默认同时安装local-storage(提供storageClass: "local-path")
kubeapps_install: "no"
kubeapps_install_namespace: "kubeapps"
kubeapps_working_namespace: "default"
kubeapps_storage_class: "local-path"
kubeapps_chart_ver: "12.4.3"

# local-storage (local-path-provisioner) 自动安装
local_path_provisioner_install: "no"
local_path_provisioner_ver: "v0.0.24"
# 设置默认本地存储路径
local_path_provisioner_dir: "/opt/local-path-provisioner"

# nfs-provisioner 自动安装
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

# network-check 自动安装
network_check_enabled: false 
network_check_schedule: "*/5 * * * *"

############################
# role:harbor
############################
# harbor version,完整版本号
HARBOR_VER: "v2.8.4"
HARBOR_DOMAIN: "harbor.easzlab.io.local"
HARBOR_PATH: /var/data
HARBOR_TLS_PORT: 8443
HARBOR_REGISTRY: "{{ HARBOR_DOMAIN }}:{{ HARBOR_TLS_PORT }}"

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false

Before installing, point the local registry domain at the deploy machine (10.47.76.45) in /etc/hosts on all three VMs:

ansible -i clusters/k8s-local/hosts etcd  -m shell -a "echo '10.47.76.45 easzlab.io.local' >> /etc/hosts"
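
To confirm the entry landed on every node (note that the echo/append above is not idempotent, so re-running it adds duplicate lines; ansible's lineinfile module would avoid that):

ansible -i clusters/k8s-local/hosts etcd -m shell -a "grep easzlab.io.local /etc/hosts"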

Install:

docker exec -it kubeasz ezctl setup k8s-local all
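
ezctl can also run the installation one stage at a time, which is handy when a later stage fails and only that stage needs repeating; the step numbers below are taken from ezctl help setup and should be verified against the 3.6.3 docs:

docker exec -it kubeasz ezctl setup k8s-local 01    # 01 = prepare
docker exec -it kubeasz ezctl setup k8s-local 02    # 02 = etcd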

etcd fails to start

{"changed": true, "cmd": "systemctl daemon-reload && systemctl restart etcd", "delta": "0:01:30.273138", "end": "2024-03-25 15:59:28.935152", "msg": "non-zero return code", "rc": 1, "start": "2024-03-25 15:57:58.662014", "stderr": "Job for etcd.service failed because a timeout was exceeded.\nSee \"systemctl status etcd.service\" and \"journalctl -xeu etcd.service\" for details.", "stderr_lines": ["Job for etcd.service failed because a timeout was exceeded.", "See \"systemctl status etcd.service\" and \"journalctl -xeu etcd.service\" for details."], "stdout": "", "stdout_lines": []}

Since these are brand-new virtual machines, problems with the OS environment itself can largely be ruled out; most likely the etcd cluster did not finish forming within the systemd start timeout. The underlying error is:

"error":"dial tcp 10.47.76.74:2380: connect: no route to host"

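"no route to host" on the etcd peer port 2380 usually means the connection is being actively rejected (for example by a host firewall) rather than a genuine routing problem. A quick check across the etcd nodes (a minimal sketch):

ansible -i clusters/k8s-local/hosts etcd -m shell -a "systemctl is-active firewalld"
ansible -i clusters/k8s-local/hosts etcd -m shell -a "ss -tlnp | grep 2380"
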
Manually restarting etcd also fails:

 root@yeqiang-MS-7B23:/etc/kubeasz# ansible -i clusters/k8s-local/hosts etcd -m shell -a "systemctl restart etcd"
10.47.76.76 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code
10.47.76.74 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code
10.47.76.73 | FAILED | rc=1 >>
Job for etcd.service failed because a timeout was exceeded.
See "systemctl status etcd.service" and "journalctl -xeu etcd.service" for details.non-zero return code

 

At this point it looked like a bug in the deployment scripts, so simply reboot the three VMs:

ansible -i clusters/k8s-local/hosts etcd -m shell -a "reboot"

After the reboot the failure persists. Checking the firewall shows that kubeasz never disabled it, so disable firewalld outright:

ansible -i clusters/k8s-local/hosts etcd -m shell -a "systemctl disable firewalld --now"

That did it: etcd now starts.
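
To double-check that the cluster itself is healthy and not just the service units, etcdctl can be queried directly from one of the etcd nodes. The binary and certificate paths below follow the kubeasz defaults (bin_dir=/opt/kube/bin, ca_dir=/etc/kubernetes/ssl) and may need adjusting:

ETCDCTL_API=3 /opt/kube/bin/etcdctl \
  --endpoints=https://10.47.76.73:2379,https://10.47.76.74:2379,https://10.47.76.76:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  endpoint health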

Re-run setup:

docker exec -it kubeasz ezctl setup k8s-local all

It errors out again:

TASK [kube-master : 创建user:kubernetes角色绑定] ************************************************************************************************************************
fatal: [10.47.76.73]: FAILED! => {"changed": true, "cmd": ["/etc/kubeasz/bin/kubectl", "create", "clusterrolebinding", "kubernetes-crb", "--clusterrole=system:kubelet-api-admin", "--user=kubernetes"], "delta": "0:00:07.036646", "end": "2024-03-25 16:17:22.368261", "msg": "non-zero return code", "rc": 1, "start": "2024-03-25 16:17:15.331615", "stderr": "error: failed to create clusterrolebinding: etcdserver: request timed out", "stderr_lines": ["error: failed to create clusterrolebinding: etcdserver: request timed out"], "stdout": "", "stdout_lines": []}
 

etcd is behind the problem again. Presumably the iptables rules loaded by firewalld were never flushed after the service was disabled, so reboot the three VMs and run setup once more.
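
An alternative to a full reboot is to flush the leftover rules directly; this assumes nothing else on these fresh VMs relies on existing iptables rules:

ansible -i clusters/k8s-local/hosts etcd -m shell -a "iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X"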

After the reboot, setup fails yet again:

TASK [kube-node : 轮询等待node达到Ready状态] ****************************************************************************************************************************
fatal: [10.47.76.73]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-01|awk 'NR>1{print $2}'", "delta": "0:00:00.044266", "end": "2024-03-25 16:29:30.470189", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.425923", "stderr": "Error from server (NotFound): nodes \"worker-01\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-01\" not found"], "stdout": "", "stdout_lines": []}
fatal: [10.47.76.74]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-02|awk 'NR>1{print $2}'", "delta": "0:00:00.047791", "end": "2024-03-25 16:29:30.473716", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.425925", "stderr": "Error from server (NotFound): nodes \"worker-02\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-02\" not found"], "stdout": "", "stdout_lines": []}
fatal: [10.47.76.76]: FAILED! => {"attempts": 8, "changed": true, "cmd": "/etc/kubeasz/bin/kubectl get node worker-03|awk 'NR>1{print $2}'", "delta": "0:00:00.057308", "end": "2024-03-25 16:29:30.483426", "msg": "", "rc": 0, "start": "2024-03-25 16:29:30.426118", "stderr": "Error from server (NotFound): nodes \"worker-03\" not found", "stderr_lines": ["Error from server (NotFound): nodes \"worker-03\" not found"], "stdout": "", "stdout_lines": []}
 

Log in to one of the nodes and try a basic kubectl command:

[root@localhost ~]# kubectl get ns
E0325 16:32:29.117598   29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.118017   29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.119581   29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.120518   29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0325 16:32:29.122964   29331 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# ls ~/.kube/
[root@localhost ~]# ls ~/.kube/ -la
总用量 8
drwxr-xr-x. 2 root root 4096  3月 25 15:57 .
dr-xr-x---. 5 root root 4096  3月 25 16:28 ..
 

No ~/.kube/config was generated for kubectl, so kubectl falls back to its default endpoint http://localhost:8080/api.
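
As a workaround for ad-hoc checks on a node, the admin kubeconfig that kubeasz sets up on the deploy machine can be copied over by hand. The source path assumes the default location /root/.kube/config on the deploy host; if it lives elsewhere (for example under /etc/kubeasz/clusters/k8s-local/), adjust accordingly:

# run on the deploy machine (10.47.76.45)
scp /root/.kube/config root@10.47.76.73:/root/.kube/config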

bug?

To be continued...

References:

https://github.com/easzlab/kubeasz/blob/3.6.3/docs/setup/00-planning_and_overall_intro.md
