Table of Contents
1. Introduction to Kubernetes
1.1 Evolution of application deployment
1.2 Container orchestration tools
1.3 Introduction to kubernetes
1.4 K8S architecture
1.4.1 Purpose of each K8S component
1.4.2 How the K8S components interact
1.4.3 Common K8S terms and concepts
1.4.4 K8S layered architecture
2. Building the K8S cluster environment
2.1 Container management options in k8s
2.2 k8s cluster deployment
2.2.1 k8s deployment environment
2.2.2 Cluster environment initialization
2.2.2.1 Disable swap and configure local name resolution on all nodes
2.2.2.2 Install docker on all nodes
2.2.2.3 Set docker's cgroup driver to systemd on all nodes
2.2.2.4 Copy the harbor registry certificate to all nodes and start docker
2.2.2.5 Install the K8S deployment tools
2.2.2.6 Enable kubectl command completion
2.2.2.7 Install cri-docker on all nodes
2.2.2.8 Pull the required K8S images on the master node
2.2.2.9 Cluster initialization
2.2.2.10 Install the flannel network plugin
2.2.2.11 Add worker nodes
1. Introduction to Kubernetes
1.1 Evolution of application deployment
Application deployment has gone through three main stages:
Traditional deployment: in the early days of the internet, applications were deployed directly on physical machines
Advantages: simple, no other technology involved
Disadvantages: resource boundaries cannot be defined per application, so compute resources are hard to allocate sensibly and programs easily interfere with each other
Virtualized deployment: multiple virtual machines run on a single physical machine, each an isolated environment
Advantages: programs do not interfere with each other, and a degree of security is provided
Disadvantages: every VM carries its own operating system, wasting some resources
Containerized deployment: similar to virtualization, but containers share the host operating system
Note:
Containerized deployment brings great convenience, but it also raises new problems, for example:
- When a container fails and stops, how do we start a replacement container immediately?
- When concurrent traffic grows, how do we scale the number of containers horizontally?
1.2 Container orchestration tools
To solve these container orchestration problems, orchestration software emerged:
Swarm: Docker's own container orchestration tool
Mesos: an Apache tool for unified resource management, used together with Marathon
Kubernetes: Google's open-source container orchestration tool
1.3 Introduction to kubernetes
- While Docker was developing rapidly as a high-level container engine, container technology had already been in use inside Google for many years:
- the Borg system ran and managed tens of thousands of container applications.
- Kubernetes grew out of Borg; it distills the essence of Borg's design and incorporates the experience and lessons learned from operating it.
- Kubernetes abstracts compute resources at a higher level and, by composing containers carefully, delivers the final application service to the user.
kubernetes is essentially a cluster of servers that runs specific programs on every node to manage the containers on that node. Its goal is to automate resource management, and it provides the following core features (a brief kubectl sketch follows this list):
Self-healing: when a container crashes, a replacement container is started in about a second
Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed
Service discovery: a service can locate the services it depends on automatically
Load balancing: if a service runs multiple containers, requests are load-balanced across them automatically
Version rollback: if a newly released version has problems, you can roll back to the previous version immediately
Storage orchestration: storage volumes can be created automatically based on the needs of containers
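Several of these features map directly to kubectl commands. A minimal sketch (it assumes a Deployment named web, carrying the default app=web label, already exists in the cluster):

# Elastic scaling: scale the Deployment to 3 replicas
kubectl scale deployment web --replicas=3

# Version rollback: roll out a new image, then undo the release
kubectl set image deployment/web '*=nginx:1.27'
kubectl rollout undo deployment/web

# Self-healing: delete a pod and watch a replacement get created
kubectl delete pod -l app=web
kubectl get pods -l app=web --watch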
1.4 K8S architecture
1.4.1 Purpose of each K8S component
A kubernetes cluster consists of control-plane nodes (master) and worker nodes (node), and different components are installed on each kind of node (a quick verification sketch follows this list).
1 master: the cluster's control plane, responsible for cluster decisions
ApiServer: the single entry point for resource operations; it receives user commands and provides authentication, authorization, API registration and discovery
Scheduler: responsible for cluster resource scheduling, placing Pods onto the appropriate node according to the configured scheduling policy
ControllerManager: responsible for maintaining cluster state, e.g. application rollout, failure detection, auto-scaling and rolling updates
Etcd: responsible for storing information about all the resource objects in the cluster
2 node: the cluster's data plane, responsible for providing the runtime environment for containers
kubelet: responsible for managing the container lifecycle, as well as volumes (CVI) and networking (CNI)
Container runtime: responsible for image management and for actually running Pods and containers (CRI)
kube-proxy: responsible for in-cluster service discovery and load balancing for Services
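On a running cluster, each of these components can be observed directly. A quick verification sketch (it assumes kubectl is already configured against the cluster):

# The control-plane components run as pods in the kube-system namespace
kubectl get pods -n kube-system -o wide

# kubelet itself runs as a systemd service on every node
systemctl status kubelet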
1.4.2 How the K8S components interact
Consider what happens when we run a web service:
After the kubernetes environment starts, both master and node store their own information in the etcd database.
The request to install the web service is first sent to the apiServer component on the master node.
apiServer calls the scheduler component to decide which node the service should be installed on: the scheduler reads the information about each node from etcd, selects one according to its scheduling algorithm, and reports the result back to apiServer.
apiServer then calls controller-manager to have the chosen node install the web service.
kubelet on that node receives the instruction and tells docker to start a pod running the web service.
To access the web service, requests go through kube-proxy, which proxies access to the pod.
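This flow can be exercised end to end with a few commands. A minimal sketch (the name web and the image nginx are illustrative):

# The request goes to apiServer, the scheduler picks a node, kubelet starts the pod
kubectl create deployment web --image nginx

# kube-proxy load-balances traffic to the pod(s) behind the Service
kubectl expose deployment web --port 80 --target-port 80
kubectl get pods -o wide    # shows which node the scheduler selected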
1.4.3 Common K8S terms and concepts
- Master: a cluster control node; every cluster needs at least one master node to manage the cluster
- Node: a workload node; the master assigns containers to these worker nodes, and the container runtime on each node then runs them
- Pod: the smallest unit of control in kubernetes; containers always run inside pods, and a pod can hold one or more containers
- Controller: the mechanism for managing pods, e.g. starting pods, stopping pods, scaling the number of pods
- Service: the unified entry point to a pod-backed service; behind it, it maintains a set of pods of the same kind
- Label: a tag used to classify pods; pods of the same kind carry the same labels
- NameSpace: a namespace, used to isolate the runtime environment of pods
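Labels and namespaces show up directly in everyday kubectl usage. A small sketch (the app=web label and the dev namespace are illustrative):

# Create a namespace and run a labeled pod inside it
kubectl create namespace dev
kubectl run web --image nginx --labels app=web -n dev

# Select pods by label and show their labels
kubectl get pods -l app=web -n dev --show-labels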
1.4.4 K8S layered architecture
- Core layer: the most central functionality of Kubernetes; it exposes an API for building higher-level applications and provides a pluggable application execution environment internally
- Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
- Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
- Interface layer: the kubectl command-line tool, client SDKs and cluster federation
- Ecosystem: the large ecosystem for container cluster management and scheduling that sits above the interface layer, which falls into two categories
- Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
- Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, and the cluster's own configuration and management
2. Building the K8S cluster environment
2.1 Container management options in k8s
A K8S cluster can be created with one of three container runtimes:
containerd
The runtime K8S uses by default when creating a cluster.
docker
Docker has the widest adoption. Although K8S removed kubelet's built-in support for docker in version 1.24, a cluster can still be created with docker via cri-docker.
cri-o
CRI-O is the most direct way for Kubernetes to create containers; when creating a cluster, the cri-o plugin is used to build the Kubernetes cluster.
Note:
Both the docker and cri-o options require setting kubelet's startup parameters accordingly (see the socket sketch below).
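In practice, "setting kubelet's startup parameters" mostly comes down to pointing kubeadm at the right CRI socket. A minimal sketch of the three choices (the cri-dockerd path matches what this guide installs later; the containerd and cri-o paths are their upstream defaults):

# containerd (the default runtime)
kubeadm init --cri-socket=unix:///run/containerd/containerd.sock

# docker via cri-dockerd (the option used in this guide)
kubeadm init --cri-socket=unix:///var/run/cri-dockerd.sock

# cri-o
kubeadm init --cri-socket=unix:///var/run/crio/crio.sock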
2.2 k8s cluster deployment
2.2.1 k8s deployment environment
K8S official site (Chinese): https://kubernetes.io/zh-cn/

| Hostname | IP | Role |
|---|---|---|
| reg.timinglee.org | 192.168.10.130 | harbor registry |
| k8s-master | 192.168.10.100 | master, k8s cluster control-plane node |
| k8s-node1 | 192.168.10.10 | worker, k8s cluster worker node |
| k8s-node2 | 192.168.10.20 | worker, k8s cluster worker node |

Disable selinux and the firewall on all nodes
Synchronize time and name resolution on all nodes
Install docker-ce on all nodes
Disable swap on all nodes, remembering to comment out its entry in /etc/fstab
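These prerequisites map to a handful of commands on each node. A minimal sketch (chronyd is assumed as the time-sync service; adjust to your environment):

# Disable the firewall and selinux on all nodes
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Keep time in sync across the nodes (assuming chrony is installed)
systemctl enable --now chronyd
chronyc sources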
2.2.2 Cluster environment initialization
Run the following steps on every node of the k8s cluster.
2.2.2.1 Disable swap and configure local name resolution on all nodes
[root@k8s-master ~]# swapon -s
Filename                Type        Size     Used    Priority
/dev/dm-1               partition   2113532  272896  -2
[root@k8s-master ~]# systemctl status /dev/dm-1
● dev-dm\x2d1.device - /dev/dm-1
    Follows: unit currently follows state of sys-devices-virtual-block-dm\x2d1.device
     Loaded: loaded
     Active: active (plugged) since Sat 2024-09-07 15:03:14 CST; 1h 29min ago
      Until: Sat 2024-09-07 15:03:14 CST; 1h 29min ago
     Device: /sys/devices/virtual/block/dm-1
[root@k8s-master ~]# dmsetup info /dev/dm-1
Name:              rhel-swap
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        2
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-LndQzTLc8UX8Hy8ELOVjaA3fog9Q23dJcyUPgIweSJ0che8017WvfWIeVd57jHxG
[root@k8s-master ~]# systemctl mask rhel-swap
Unit rhel-swap.service does not exist, proceeding anyway.
Created symlink /etc/systemd/system/rhel-swap.service → /dev/null.
[root@k8s-master ~]# swapoff /dev/dm-1
[root@k8s-master ~]# swapon -s
# If the rhel-swap unit does not exist, mask the swap unit listed below instead
[root@k8s-master ~]# systemctl list-unit-files | grep swap
rhel-swap.service            masked    disabled
swap.target                  static    -
[root@k8s-master ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Aug 31 07:47:36 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=4e2119dc-6069-4af9-9ede-c7aa94a3fc1a /boot xfs defaults 0 0
#/dev/mapper/rhel-swap none swap defaults 0 0
/dev/sr0 /rhel9.1 none defaults 0 0
[root@k8s-master ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.100 k8s-master.timinglee.org
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.130 reg.timinglee.org
192.168.10.100 k8s-master.timinglee.org
192.168.10.10 k8s-node1.timinglee.org
192.168.10.20 k8s-node2.timinglee.org
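A quick sanity check that local resolution works before pulling any images (reg.timinglee.org is this guide's harbor host):

# Verify name resolution for the registry and the cluster nodes
getent hosts reg.timinglee.org k8s-node1.timinglee.org
ping -c1 k8s-master.timinglee.org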
2.2.2.2 Install docker on all nodes
[root@k8s-master ~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/rhel/9/x86_64/stable/
enabled=1
gpgcheck=0
[root@k8s-master ~]# yum install docker-ce -y
2.2.2.3 Set docker's cgroup driver to systemd on all nodes
# Note: this step is for deployments on rhel7; rhel9 does not need it
[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://reg.westos.org"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
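After editing daemon.json, docker must be restarted for the cgroup driver to take effect; docker info confirms the result:

# Apply the new daemon configuration and verify the cgroup driver
systemctl restart docker
docker info | grep -i cgroup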
2.2.2.4 Copy the harbor registry certificate to all nodes and start docker
[root@k8s-master ~]# ls -l /etc/docker/certs.d/reg.timinglee.org/ca.crt
-rw-r--r--. 1 root root 2187 Aug 31 18:06 /etc/docker/certs.d/reg.timinglee.org/ca.crt
[root@k8s-master ~]# systemctl enable --now docker
[root@k8s-master ~]# docker login reg.timinglee.org
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
[root@k8s-master ~]# docker info
Client: Docker Engine - Community
Version: 27.2.0
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.2
 Path:     /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 10
Running: 9
Paused: 0
Stopped: 1
Images: 18
Server Version: 27.2.0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: 472731909fa34bd7bc9c087e4c27943f9835f111
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 5.14.0-162.6.1.el9_1.x86_64
Operating System: Red Hat Enterprise Linux 9.1 (Plow)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.703GiB
Name: k8s-master.timinglee.org
ID: 6a7ad2f2-4d31-4cf0-b509-850d6de5dadf
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
reg.timinglee.org
127.0.0.0/8
 Live Restore Enabled: false
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
"insecure-registries":["reg.timinglee.org"]
}
2.2.2.5 Install the K8S deployment tools
# Set up a software repository and add the K8S package source
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# vim k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
gpgcheck=0
[root@k8s-master yum.repos.d]# yum makecache
# Install the software
[root@k8s-master ~]# yum list kubeadm --showduplicates    # list the available versions
[root@k8s-master ~]# yum install kubelet-1.30.0 kubeadm-1.30.0 kubectl-1.30.0 -y
2.2.2.6 Enable kubectl command completion
[root@k8s-master ~]# yum install bash-completion -y
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@k8s-master ~]# source ~/.bashrc
2.2.2.7 Install cri-docker on all nodes
k8s removed dockershim in version 1.24, so the cri-docker plugin is required to keep using docker.
Download: https://github.com/Mirantis/cri-dockerd
[root@k8s-master ~]# ls
cri-dockerd-0.3.14-3.el8.x86_64.rpm  libcgroup-0.41-19.el8.x86_64.rpm
[root@k8s-master ~]# yum install *.rpm -y
[root@k8s-master ~]# systemctl enable --now cri-docker
Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /usr/lib/systemd/system/cri-docker.service.
[root@k8s-master ~]# vim /lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
# Specify the CNI network plugin and the base (pause) container image
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=reg.timinglee.org/k8s/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl start cri-docker
[root@k8s-master ~]# ll /var/run/cri-dockerd.sock
srw-rw---- 1 root docker 0 Aug 26 22:14 /var/run/cri-dockerd.sock    # cri-dockerd's socket file
2.2.2.8 Pull the required K8S images on the master node
# Pull the images needed by the k8s cluster
[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
# Re-tag the pulled images into the local harbor registry and push them
[root@k8s-master ~]# docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" reg.timinglee.org/k8s/"$3)}'
[root@k8s-master ~]# docker images | awk '/k8s/{system("docker push "$1":"$2)}'
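The two awk pipelines above re-tag every image pulled from registry.aliyuncs.com/google_containers into the reg.timinglee.org/k8s project and push it. The same effect written as a plain shell loop, for readability (a sketch, assuming the images were pulled as above):

# Re-tag each pulled image into the local harbor project and push it
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep google_containers); do
    new=reg.timinglee.org/k8s/${img##*/}    # keep only the name:tag after the last /
    docker tag "$img" "$new"
    docker push "$new"
done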
2.2.2.9 Cluster initialization
# Enable and start the kubelet service
[root@k8s-master ~]# systemctl status kubelet.service
[root@k8s-master ~]# systemctl enable --now kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

# Run the initialization command
# Note: flannel's net-conf.json (see 2.2.2.10) uses 10.244.0.0/16; --pod-network-cidr should
# match it, so either pass 10.244.0.0/16 here or edit net-conf.json in kube-flannel.yml
[root@k8s-master ~]# kubeadm init --pod-network-cidr=192.188.0.0/16 \
--image-repository reg.timinglee.org/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
# If initialization fails, reset the cluster before retrying
[root@k8s-master ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock

# Point kubectl at the cluster config file
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

# The node is not Ready yet: the network plugin is not installed, so its containers are not running
[root@k8s-master ~]# kubectl get node
NAME                       STATUS     ROLES           AGE     VERSION
k8s-master.timinglee.org   NotReady   control-plane   4m25s   v1.30.0
[root@k8s-master ~]# kubectl get pod -A
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-647dc95897-2sgn8                           0/1     Pending   0          6m13s
kube-system   coredns-647dc95897-bvtxb                           0/1     Pending   0          6m13s
kube-system   etcd-k8s-master.timinglee.org                      1/1     Running   0          6m29s
kube-system   kube-apiserver-k8s-master.timinglee.org            1/1     Running   0          6m30s
kube-system   kube-controller-manager-k8s-master.timinglee.org   1/1     Running   0          6m29s
kube-system   kube-proxy-fq85m                                   1/1     Running   0          6m14s
kube-system   kube-scheduler-k8s-master.timinglee.org            1/1     Running   0          6m29s
Note:
If the cluster join token generated at this stage gets lost, a new one can be created:
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 172.25.254.100:6443 --token 5hwptm.zwn7epa6pvatbpwf --discovery-token-ca-cert-hash sha256:52f1a83b70ffc8744db5570288ab51987ef2b563bf906ba4244a300f61e9db23
2.2.2.10 Install the flannel network plugin
Official site: https://github.com/flannel-io/flannel (flannel is a network fabric for containers, designed for Kubernetes)
[root@k8s-master ~]# ls
flannel-0.25.5.tag.gz  kube-flannel.yml
[root@k8s-master ~]# vim kube-flannel.yml    # the modified lines are the image: fields, which point at the local registry
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: reg.timinglee.org/flannel/flannel:v0.25.5
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: reg.timinglee.org/flannel/flannel:v0.25.5
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
[root@k8s-master ~]# docker load -i flannel-0.25.5.tag.gz
ef7a14b43c43: Loading layer 8.079MB/8.079MB
1d9375ff0a15: Loading layer 9.222MB/9.222MB
4af63c5dc42d: Loading layer 16.61MB/16.61MB
2b1d26302574: Loading layer 1.544MB/1.544MB
d3dd49a2e686: Loading layer 42.11MB/42.11MB
7278dc615b95: Loading layer 5.632kB/5.632kB
c09744fc6e92: Loading layer 6.144kB/6.144kB
0a2b46a5555f: Loading layer 1.923MB/1.923MB
5f70bf18a086: Loading layer 1.024kB/1.024kB
601effcb7aab: Loading layer 1.928MB/1.928MB
Loaded image: flannel/flannel:v0.25.5
21692b7dc30c: Loading layer 2.634MB/2.634MB
Loaded image: flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker tag flannel/flannel:v0.25.5 reg.timinglee.org/flannel/flannel:v0.25.5
[root@k8s-master ~]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel:v0.25.5
The push refers to repository [reg.timinglee.org/flannel/flannel]
601effcb7aab: Pushed
5f70bf18a086: Pushed
0a2b46a5555f: Pushed
c09744fc6e92: Pushed
7278dc615b95: Pushed
d3dd49a2e686: Pushed
2b1d26302574: Pushed
4af63c5dc42d: Pushed
1d9375ff0a15: Pushed
ef7a14b43c43: Pushed
v0.25.5: digest: sha256:89be0f0c323da5f3b9804301c384678b2bf5b1aca27e74813bfe0b5f5005caa7 size: 2414
[root@k8s-master ~]# docker push reg.timinglee.org/flannel/flannel-cni-plugin:v1.5.1-flannel1
The push refers to repository [reg.timinglee.org/flannel/flannel-cni-plugin]
21692b7dc30c: Pushed
ef7a14b43c43: Mounted from flannel/flannel
v1.5.1-flannel1: digest: sha256:c026a9ad3956cb1f98fe453f26860b1c3a7200969269ad47a9b5996f25ab0a18 size: 738
# Install the flannel network plugin
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
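Once the manifest is applied, the flannel pods should reach Running and the master node should turn Ready. A quick check:

# Verify the flannel daemonset pods and the node state
kubectl get pods -n kube-flannel
kubectl get nodes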
2.2.2.11 Add worker nodes
On every worker node, confirm the following are in place before joining:
1 Swap is disabled
2 The following packages are installed:
- kubelet-1.30.0
- kubeadm-1.30.0
- kubectl-1.30.0
- docker-ce
- cri-dockerd
3 The cri-dockerd startup file has these flags added:
- --network-plugin=cni
- --pod-infra-container-image=reg.timinglee.org/k8s/pause:3.9
4 These services are started:
- kubelet.service
- cri-docker.service
Once everything above is confirmed, the nodes can join the cluster:
# Run on both k8s-node1 and k8s-node2
[root@k8s-node1 ~]# kubeadm join 192.168.10.100:6443 --token wgq868.1cqpplspl3241123 --discovery-token-ca-cert-hash sha256:125e41a03007aec0ee43b733ed3c201c112d2217eaef5c4e4a0df05c65924a5f --cri-socket=unix:///var/run/cri-dockerd.sock
Check the status of all nodes from the master node:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   38m   v1.30.0
k8s-node     Ready    <none>          11m   v1.30.0
k8s-node2    Ready    <none>          11m   v1.30.0
Note:
If the STATUS of every node is Ready, congratulations: your kubernetes cluster is installed!
Test that the cluster works:
# Delete a node
[root@k8s-master ~]# kubectl delete nodes k8s-node
# Create a pod
[root@k8s-master ~]# kubectl run test --image nginx
pod/test created
# Check pod status
[root@k8s-master ~]# kubectl get pods -o wide
NAME    READY   STATUS              RESTARTS   AGE     IP       NODE        NOMINATED NODE   READINESS GATES
test    0/1     ContainerCreating   0          3m52s   <none>   k8s-node    <none>           <none>
test1   0/1     ContainerCreating   0          3m9s    <none>   k8s-node2   <none>           <none>
# Delete the pods
[root@k8s-master ~]# kubectl delete pod test
pod "test" deleted
[root@k8s-master ~]# kubectl delete pod test1 --force
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test1" force deleted