KubeSphere Installation and Usage (Cluster Edition)
Official site: https://www.kubesphere.io/zh/
Creating a highly available cluster with KubeKey's built-in HAproxy: https://www.kubesphere.io/zh/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration/
- Pre-installation note
Set the DNS currently in use to 223.5.5.5 (or another public resolver). Do not point it at an internal DNS server, or you will run into a pile of problems.
cd /etc/sysconfig/network-scripts/
- Restart networking
systemctl restart network
The interface config file name differs from machine to machine, but this is where to look to set the DNS. I am not sure whether the host machine needs it as well, but I set it on the host and on all the virtual machines.
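On CentOS 7, for example, the DNS can be pinned in the interface's ifcfg file under /etc/sysconfig/network-scripts/; a minimal sketch, assuming the interface is named ens33 (yours will likely differ, so match the file name to your interface):

```ini
# /etc/sysconfig/network-scripts/ifcfg-ens33  (interface name varies per machine)
DNS1=223.5.5.5
```

Rerun `systemctl restart network` after editing the file so the change takes effect.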
1. Installation
Preparation before installing (required on all machines)
Before installing, install Docker on every machine and configure the registry mirrors and related settings. Although KubeKey detects Docker and automatically installs the latest version when it is missing, it is better to install it manually in advance.
- Install Docker
Refer to the Docker installation documentation.
- Stop the firewall
systemctl stop firewalld
- Disable the firewall permanently
systemctl disable firewalld
- Set the hostname (run the matching command on the corresponding machine; hostnamectl set-hostname persists across reboots, whereas a plain hostname command is lost on reboot)
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
- Check the hostname
hostname
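To avoid mixing up which command goes to which machine, the per-node hostname commands can be generated from the node list. A minimal sketch, assuming the sample IPs used later in the config file and key-based SSH login as root (both are assumptions; adjust to your environment):

```shell
# Hypothetical helper: print one hostnamectl command per node.
# Run each printed line on (or via ssh against) the matching machine.
hosts="master1=192.168.124.238 master2=192.168.124.76 node1=192.168.124.202 node2=192.168.124.223 node3=192.168.124.47"
for entry in $hosts; do
  name="${entry%%=*}"   # part before '=' is the node name
  ip="${entry#*=}"      # part after '=' is the node IP
  echo "ssh root@${ip} hostnamectl set-hostname ${name}"
done
```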
- Install required dependencies
yum install socat -y
yum install conntrack -y
yum install ebtables -y
yum install ipset -y
All of the steps above must be performed on every machine.
Download KubeKey
I used the "highly available cluster with KubeKey's built-in HAproxy" approach.
- Create a directory to hold KubeKey
mkdir -p /opt/kubesphere
cd /opt/kubesphere
- Download KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
The download above can be hard to complete at times; if one machine cannot fetch it, try another, then copy the binary to the master machine for installation. In fact it should be fine to run the installation from any machine, since the configuration file explicitly specifies which machines are master nodes and which are worker nodes.
- Make kk executable
chmod +x kk
Create the cluster configuration file
- Generate a sample configuration file with default settings
./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.22.12
Edit the cluster configuration file
config-sample.yaml is the cluster configuration file; all cluster information is configured in it.
- The full configuration is as follows
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts: # list every master and worker node; user/password are that node's Linux login credentials; address and internalAddress are both that machine's IP
  - {name: master1, address: 192.168.124.238, internalAddress: 192.168.124.238, user: root, password: "root"}
  - {name: master2, address: 192.168.124.76, internalAddress: 192.168.124.76, user: root, password: "root"}
  - {name: node1, address: 192.168.124.202, internalAddress: 192.168.124.202, user: root, password: "root"}
  - {name: node2, address: 192.168.124.223, internalAddress: 192.168.124.223, user: root, password: "root"}
  - {name: node3, address: 192.168.124.47, internalAddress: 192.168.124.47, user: root, password: "root"}
  roleGroups:
    etcd: # names of the master nodes, as defined in hosts above
    - master1
    - master2
    control-plane: # names of the master nodes, as defined in hosts above
    - master1
    - master2
    worker: # names of the worker nodes
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy # needed for high availability; commented out by default, so uncomment it
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true # enable the DevOps plugin
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true # enable the logging plugin
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
Once the configuration file above is set up, run the installation directly.
./kk create cluster -f config-sample.yaml
From this point on, the process behaves the same as an all-in-one installation.
- Verify the installation
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
The web console is reachable at port 30880 on either a master node's IP or a worker node's IP.
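As a quick reference, the console URL can be formed from any node's IP, since the console is exposed as a NodePort (30880) on every node. A minimal sketch using the sample master1 address from the config above (substitute your own IP; the default admin account is documented by KubeSphere as admin / P@88w0rd):

```shell
# Any master or worker IP works here -- 192.168.124.238 is the sample master1.
NODE_IP=192.168.124.238
CONSOLE_URL="http://${NODE_IP}:30880"
echo "KubeSphere console: ${CONSOLE_URL} (default login: admin / P@88w0rd)"
```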