Container runtime: containerd
Following the official guide, you need to install runc and the CNI plugins. Three installation methods are suggested:
- prebuilt binary packages
- building from source
- apt-get or dnf
Here we use the third option, following the installation instructions that Docker provides.
ubuntu-containerd
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
Here I simply install the latest versions:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
You can also check which specific versions are available for installation:
apt-cache madison docker-ce | awk '{ print $3 }'
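If you want to pin a specific version instead, the pattern looks roughly like this (the version string below is only an illustration; use one of the values printed by the madison command):
VERSION_STRING=5:24.0.7-1~ubuntu.22.04~jammy
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin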
centos-containerd
Add the repo:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install the latest version:
sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Note: you need to generate the containerd config file and set SystemdCgroup to true:
containerd config default > /etc/containerd/config.toml
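A minimal sketch of flipping that flag, assuming the generated config contains the default `SystemdCgroup = false` line:
# set SystemdCgroup to true in the generated config, then restart containerd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd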
Point crictl at the containerd runtime endpoint:
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
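To confirm crictl can actually reach containerd over that socket, you can run for example:
# prints client and runtime versions if the endpoint is reachable
crictl version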
cni
The Docker installation above pulled in the containerd.io package, which contains runc but not the CNI plugins, so the CNI plugins have to be added separately.
You can find the download link on the official CNI plugins release page, or use the copy I downloaded earlier:
链接:https://pan.baidu.com/s/1eHV4KuM_1bUTuZ_2zW3UQg?pwd=ypdi
提取码:ypdi
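If you prefer to fetch it straight from the official containernetworking/plugins releases instead (assuming v1.3.0 and the arm64 build, matching the file name used below; pick the archive for your architecture):
wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz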
Copy cni-plugins-linux-arm64-v1.3.0.tgz to the node:
$ mkdir -p /opt/cni/bin
$ tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.3.0.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
kubelet
Prerequisites
Disable swap (swapoff)
# turn off swap
swapoff -a
# check: if the Swap row shows 0, swap is off
free -h
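swapoff -a only lasts until the next reboot; to keep swap off permanently you can also comment out the swap entry in /etc/fstab, roughly like this:
# comment out any non-commented swap lines so swap stays disabled after reboot
sed -ri 's/^([^#].*\sswap\s.*)/#\1/' /etc/fstab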
Disable the firewall
# disable the firewall
ufw disable
# check the status
ufw status
# 'Status: inactive' means it is disabled
Status: inactive
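ufw is the Ubuntu front end; on CentOS the equivalent is firewalld:
# disable and stop firewalld on CentOS
systemctl disable --now firewalld
# verify: should report inactive/dead
systemctl status firewalld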
Installation
Install the latest version from the Aliyun mirror:
ubuntu-k8s
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
You can also list the versions available for installation:
apt-cache madison kubelet | awk '{ print $3 }'
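To pin a specific version instead (the 1.28.2-00 string below is only an example; use a value from the madison output), and optionally hold the packages so a later apt upgrade does not move them:
apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00
apt-mark hold kubelet kubeadm kubectl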
centos-k8s
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Initialize the nodes
Control-plane (master) node
Dump the default init configuration to a file:
kubeadm config print init-defaults > kubeadm.yaml
Edit kubeadm.yaml:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.37 # LAN IP of this node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12 # default
  podSubnet: 10.244.0.0/16 # pod subnet
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
# KubeProxyConfiguration: set the proxy mode to ipvs; the default is iptables, which is less efficient
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
# KubeletConfiguration: set the kubelet cgroup driver to systemd
kind: KubeletConfiguration
cgroupDriver: systemd
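Optionally, you can pre-pull the control-plane images from the mirror before initializing:
kubeadm config images pull --config kubeadm.yaml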
Run the initialization:
kubeadm init --config kubeadm.yaml
It will print output like this:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.37:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ecbe8d9e29a5a255ab641d7b95fc643458b5575c33b057b6d56044f395ec92e2
You can follow these prompts exactly to finish setting up the cluster.
After running the first step you can already list the nodes:
root@k8s-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane 6m56s v1.28.0
Check the system pods: coredns is stuck in Pending because no pod network plugin is installed yet.
root@k8s-master:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-7f8cbcb969-9j5v6 0/1 Pending 0 9m38s <none> <none> <none> <none>
coredns-7f8cbcb969-vnz5h 0/1 Pending 0 9m38s <none> <none> <none> <none>
etcd-master 1/1 Running 0 10m 192.168.17.130 master <none> <none>
kube-apiserver-master 1/1 Running 0 10m 192.168.17.130 master <none> <none>
kube-controller-manager-master 1/1 Running 0 10m 192.168.17.130 master <none> <none>
kube-proxy-5hv9x 1/1 Running 0 9m39s 192.168.17.130 master <none> <none>
kube-proxy-gv5g8 1/1 Running 0 5m21s 192.168.17.132 node2 <none> <none>
kube-proxy-smk2m 1/1 Running 0 6m18s 192.168.17.131 node1 <none> <none>
kube-scheduler-master 1/1 Running 0 10m 192.168.17.130 master <none> <none>
Next, install the pod network plugin:
链接:https://pan.baidu.com/s/1eHV4KuM_1bUTuZ_2zW3UQg?pwd=ypdi
提取码:ypdi
Extract calico.yaml and apply it. With a single NIC this is enough; with multiple NICs you also need to edit calico.yaml.
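For the multi-NIC case, one common edit (a sketch; the interface pattern is an assumption, adjust it to your NIC names) is to set IP_AUTODETECTION_METHOD in the env section of the calico-node container inside calico.yaml:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"   # assumed NIC name pattern; change to match your interfaces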
kubectl apply -f calico.yaml
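After applying, you can watch the Calico pods come up and the node turn Ready:
kubectl get pods -n kube-system -w
kubectl get nodes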
At this point the control-plane node is set up. To add worker nodes to the cluster, run the third step (the join command) on each of them:
kubeadm join 192.168.10.37:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ecbe8d9e29a5a255ab641d7b95fc643458b5575c33b057b6d56044f395ec92e2
If you lose the join command, you can regenerate it on the control-plane node:
kubeadm token create --print-join-command
Problems you may run into
1
Failed to create pod sandbox: open /run/systemd/resolve/resolv.conf: no such file or directory
Copy the files under /run/systemd/resolve on the control-plane node to the same directory on the worker node; create the directory if it does not exist.
2
/proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Run the following to fix it:
modprobe br_netfilter
echo 1 > /proc/sys/net/ipv4/ip_forward
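To make these settings survive a reboot, the usual approach is a modules-load entry plus a sysctl drop-in, roughly:
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system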
3
Failed to create pod sandbox: open /run/systemd/resolve/resolv.conf: no such file or directory
This is the same error as problem 1; a fuller checklist (see the sketch after this list):
1. Copy the files under /run/systemd/resolve on the control-plane node to the same directory on the worker node; create the directory on the worker if it is missing.
2. If systemd-resolved is not installed, install it.
3. If systemd-resolved is not running, start it.
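A rough sketch of those steps on the worker node (the master address placeholder is only illustrative):
# on the worker node: copy resolv.conf from the control-plane node and make sure systemd-resolved runs
mkdir -p /run/systemd/resolve
scp root@<master-ip>:/run/systemd/resolve/resolv.conf /run/systemd/resolve/
systemctl enable --now systemd-resolved   # if installed but not running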
Some useful commands
Remove a node
First drain it:
kubectl drain --ignore-daemonsets <node-name>
Then delete it:
kubectl delete node <node-name>
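On the node that was removed, you would typically also wipe its local kubeadm state before reusing it:
kubeadm reset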
Clear ipvs rules:
ipvsadm -C
Flush iptables rules:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X