Deploying K8S the kubeadm way

1 Building K8S with kubeadm

1.1 kubeadm setup steps

kubeadm init

在使用kubeadm方式安装k8s集群是,可根据初始化配置文件或配置参数选项快速的初始化生成一个k8s的master管理平台

kubeadm join

Using the join command printed by kubeadm init, quickly add a worker node or another master node to the K8S cluster.

1) Initialize all nodes and install the container engine, kubeadm and kubelet
2) Run kubeadm config print init-defaults to generate the K8S cluster init configuration file, then edit it
3) Run kubeadm init --config with that configuration file to initialize the cluster and produce the master control-plane node
4) On the other nodes, run kubeadm join to add worker nodes or additional master nodes to the cluster
5) Install a CNI network plugin (flannel or calico); a condensed command sketch follows below
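
A condensed command sketch of the steps above (a summary only; repository setup, versions, file names and the endpoint are placeholders, and the full procedure with the exact configuration is in section 2):

# 1) on every node: container engine + kubeadm/kubelet/kubectl
yum install -y docker-ce kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
systemctl enable --now docker kubelet

# 2) generate the init configuration file and edit it
kubeadm config print init-defaults > kubeadm-config.yaml

# 3) initialize the first master from the configuration file
kubeadm init --config kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

# 4) on the other nodes, paste the join command printed by kubeadm init
kubeadm join <apiserver-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# 5) install a CNI plugin, e.g. flannel
kubectl apply -f kube-flannel.yml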

1.2 How to renew the certificates of a kubeadm-built K8S cluster

1) Back up the certificates and kubeconfig files

mkdir /root/k8s.bak
cp -r /etc/kubernetes/pki /root/k8s.bak/
cp /etc/kubernetes/*.conf /root/k8s.bak/

2) Renew the certificates

kubeadm alpha certs renew all --config=/opt/k8s/kubeadm-config.yaml

3) Regenerate the kubeconfig files

kubeadm init phase kubeconfig all --config=/opt/k8s/kubeadm-config.yaml

4) Restart kubelet and the static Pods of the other K8S components

systemctl restart kubelet
mv /etc/kubernetes/manifests/*.yaml /tmp
mv /tmp/*.yaml /etc/kubernetes/manifests/

5) Check the certificate validity period

kubeadm alpha certs check-expiration
openssl x509 -noout -dates -in /etc/kubernetes/pki/XXX.crt
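
On newer kubeadm releases the certificate subcommands have been promoted out of alpha; if kubeadm alpha certs is not available, the equivalent commands are (verify against your kubeadm version):

kubeadm certs renew all --config=/opt/k8s/kubeadm-config.yaml
kubeadm certs check-expiration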

2 Deploying a highly available K8S cluster with kubeadm in practice

Kubeadm - K8S 1.20 - HA cluster deployment
Notes:
Master nodes must have more than 2 CPU cores
●The newest release is not necessarily the best choice: its core features are stable relative to older releases, but the newly added features and interfaces are relatively unstable
●Once you have learned the HA deployment of one version, the other versions work much the same way
●Upgrade the hosts to CentOS 7.9 where possible
●Upgrade the kernel to a stable release such as 4.19+
●When picking a K8S version, prefer patch releases of 1.xx.5 or later (these are usually the more stable ones)

2.1 Environment preparation

On all nodes, disable the firewall rules, SELinux and the swap partition
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostnames, and update the hosts file on all nodes
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03
vim /etc/hosts
192.168.111.5 master01
192.168.111.6 master02
192.168.111.7 master03
192.168.111.8 node01
192.168.111.9 node02
192.168.111.55 node03

Synchronize the time on all nodes
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
​
systemctl enable --now crond
​
crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com

Raise the Linux resource limits on all nodes
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Upgrade the kernel on all nodes
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
​
cd /opt/
yum localinstall -y kernel-ml*

Set the new kernel as the default boot entry and reboot
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot

Tune the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
​
# Apply the parameters
sysctl --system  
​
​
Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
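
To make these modules load again after a reboot, an optional addition (not part of the original steps) is a systemd modules-load drop-in listing the ipvs modules:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service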

2.2 Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io
​
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF
​
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 
​
docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

2.3 Install kubeadm, kubelet and kubectl on all nodes

Define the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

Configure kubelet to use the Aliyun pause image
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF


Enable kubelet to start at boot
systemctl enable --now kubelet

2.4 Install and configure nginx plus keepalived as a highly available layer-4 proxy

Install and configure nginx as a layer-4 proxy on the two HA nodes
yum -y install nginx

vim /etc/nginx/nginx.conf

user  nginx;
worker_processes  auto;

events {
    worker_connections  1024;
}
stream {
     upstream k8s-apiservers {
      server 192.168.111.5:6443;
      server 192.168.111.6:6443;
      server 192.168.111.7:6443;
}
     server {
        listen 6443;
        proxy_pass k8s-apiservers;
 }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

systemctl enable --now nginx
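
After enabling nginx, a quick sanity check (a suggested verification, not in the original steps) confirms that the configuration is valid and that the layer-4 listener is up on both HA nodes:

nginx -t
ss -lntp | grep 6443    # nginx should be listening on 6443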

Install and configure keepalived for highly available failover
yum -y install keepalived

Create the nginx health-check script (the path matches the one referenced in keepalived.conf)
vim /etc/keepalived/check_nginx.sh

#!/bin/bash
# killall -0 only tests whether an nginx process exists; if none is found, stop keepalived so the VIP fails over
if ! killall -0 nginx
  then
  systemctl stop keepalived
fi

chmod +x /etc/keepalived/check_nginx.sh

Edit the keepalived configuration file (this example is for the MASTER HA node)
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_01
}
vrrp_script check_nginx {
   script "/etc/keepalived/check_nginx.sh"
   interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.111.100
    }
    track_script {
        check_nginx
    }
}
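
On the BACKUP HA node the configuration is identical apart from a few fields; the values below follow the usual MASTER/BACKUP convention and are an assumption, since the original only shows one node:

   router_id NGINX_02
    state BACKUP
    priority 90
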
Use ip a to check that the VIP has been created on the MASTER node
ip a
Stop the nginx service on the MASTER node and check whether the VIP drifts to the BACKUP node
systemctl stop nginx
Check on the BACKUP node
ip a

2.5 Deploy the K8S cluster

Create the cluster init configuration file on the master01 node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
localAPIEndpoint:
  advertiseAddress: 192.168.111.5		# IP address of the current master node
  bindPort: 6443
......
apiServer:
  certSANs:								# add a certSANs list under apiServer containing every master node IP and the cluster VIP
  - 192.168.111.100
  - 192.168.111.5
  - 192.168.111.6
  - 192.168.111.7
......
clusterName: kubernetes
controlPlaneEndpoint: "192.168.111.100:6443"		# cluster VIP address
controllerManager: {}
......
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers			# registry to pull the images from
kind: ClusterConfiguration
kubernetesVersion: v1.20.15				# Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"				# Pod network CIDR; 10.244.0.0/16 matches the flannel default
  serviceSubnet: 10.96.0.0/16			# Service network CIDR
scheduler: {}
# append the following at the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs									# change the default kube-proxy mode to ipvs

Supplement: an external etcd cluster can be configured instead of the local one (the two options are mutually exclusive)
etcd:
  external:
    endpoints:
    - https://192.168.80.10:2379
    - https://192.168.80.11:2379
    - https://192.168.80.12:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem

Copy the configuration to the other master nodes and pull the images
for i in master02 master03; do scp /opt/kubeadm-config.yaml $i:/opt/; done
# the worker nodes pull their images automatically after joining the cluster
kubeadm config images pull --config /opt/kubeadm-config.yaml
docker images

Initialize on the master01 node
kubeadm init --config kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.20.15
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 192.168.111.5 192.168.111.100 192.168.111.6 192.168.111.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.111.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.111.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.801874 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c
[mark-control-plane] Marking the node master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320 \
    --control-plane --certificate-key 4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320
# If the initialization fails, clean up first
kubeadm reset -f
ipvsadm --clear
rm -rf ~/.kube
then run the initialization again

Configure the environment on the master01 node
Set up kubectl
kubectl has to be authenticated and authorized by the API server before it can perform any management operation. A kubeadm-deployed cluster generates a kubeconfig file with administrator privileges, /etc/kubernetes/admin.conf, which kubectl loads from the default path "$HOME/.kube/config".
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Modify the kube-controller-manager and kube-scheduler static Pod manifests
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
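
The original does not spell out the exact change; on v1.20 kubeadm clusters the usual edit (an assumption here) is to comment out the --port=0 flag in both manifests so that kubectl get cs stops reporting the scheduler and controller-manager as unhealthy. kubelet recreates the static Pods automatically once the manifests are saved:

# in both kube-scheduler.yaml and kube-controller-manager.yaml, comment out this line:
#    - --port=0
# then verify that all components report Healthy:
kubectl get cs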

Join the remaining nodes to the cluster
Join the other master nodes
kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320 \
    --control-plane --certificate-key 4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Join the worker nodes (same VIP, token and hash printed by kubeadm init)
kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320

Check the cluster on master01
kubectl get nodes

Deploy the flannel CNI network plugin
Upload the flannel image flannel.tar and the CNI plugins package cni-plugins-linux-amd64-v0.8.6.tgz to the /opt directory on all nodes, and upload the kube-flannel.yml file to the master nodes
cd /opt
docker load < flannel.tar

mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

kubectl apply -f kube-flannel.yml 

kubectl get pods -A    # wait until all components are Running, then check the node status

Test creating a Pod
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

Expose a port to provide the service
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc

Test access (the NodePort, 32698 here, is the one shown by kubectl get svc)
curl http://node01:32698

Scale out to 3 replicas and test load balancing
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
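
To actually observe the requests being balanced across the three replicas, a simple check (a suggested addition; node01 and NodePort 32698 are the values used in the access test above) is to list the Service endpoints and curl the NodePort repeatedly:

kubectl get endpoints nginx
for i in $(seq 1 10); do curl -s -o /dev/null -w "%{http_code}\n" http://node01:32698; done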

2.6 Additional notes on the kubeadm installation

An alternative way to run kubeadm init, using command-line flags
kubeadm init \
--apiserver-advertise-address=192.168.111.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.15 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
--------------------------------------------------------------------------------------------
The cluster is initialized with kubeadm init, either by passing individual flags or by pointing at a configuration file.
Optional flags:
--apiserver-advertise-address: the IP address the apiserver advertises to the other components, normally the master node's IP on the cluster-internal network; 0.0.0.0 means every available address on the node
--apiserver-bind-port: the port the apiserver listens on, default 6443
--cert-dir: directory for the TLS certificates, default /etc/kubernetes/pki
--control-plane-endpoint: the shared endpoint of the control plane, either a load-balancer IP or a DNS name; required for a highly available cluster
--image-repository: the registry to pull images from, default k8s.gcr.io
--kubernetes-version: the Kubernetes version to install
--pod-network-cidr: the Pod network CIDR; it must match the network plugin's setting (flannel defaults to 10.244.0.0/16, Calico to 192.168.0.0/16)
--service-cidr: the Service network CIDR
--service-dns-domain: the suffix of the Service FQDN, default cluster.local
--token-ttl: the token is valid for 24 hours by default; pass --token-ttl=0 if it should never expire

After initializing with this second method, the kube-proxy ConfigMap has to be edited to enable ipvs
kubectl edit configmap kube-proxy -n kube-system
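
A sketch of the change and of restarting kube-proxy so it takes effect (k8s-app=kube-proxy is the label kubeadm puts on the kube-proxy Pods):

# in the editor, change the mode field to:
#     mode: "ipvs"
# then recreate the kube-proxy Pods so they reload the ConfigMap
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
kubectl -n kube-system get pod -l k8s-app=kube-proxy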

2.7 Common K8S issues

1) The Token used to join the cluster has expired

Note: the bootstrap token generated at cluster initialization is valid for 24 hours. To add nodes after it expires, generate a new token and join again; the certificate key uploaded with --upload-certs is only kept for 2 hours (see the init output above), so it also has to be regenerated when joining additional master nodes.

1.1 Generate a Token for a worker node to join the cluster

kubeadm token create --print-join-command
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

1.2 Generate the --certificate-key for a master node to join the cluster

kubeadm init phase upload-certs --upload-certs
I1105 12:33:08.201601   93226 version.go:254] remote version is much newer: v1.22.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

# Command for a master node to join the cluster
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

2) Master nodes cannot run non-system Pods

Explanation: the master nodes carry a taint that forbids scheduling non-system Pods on them. In a test environment the taint can be removed so that the master resources become schedulable and are not wasted.

2.1 View the taints

kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

2.2 Remove the taints

kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/master01 untainted
node/master02 untainted
node/master03 untainted

kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>

3) Change the default NodePort range

Background: by default K8S allocates NodePorts for external access from the range 30000-32767. This range can be changed in the kube-apiserver configuration.

Error message

The Service "nginx-svc" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

    - --service-cluster-ip-range=10.96.0.0/16
    - --service-node-port-range=1-65535        # add this flag below the existing one

No restart is needed: kubelet notices the change to the static Pod manifest and recreates kube-apiserver automatically.
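
With the widened range, a Service like the one in the error message can request port 80 directly; a sketch (the nginx-svc name and the app: nginx selector are assumptions based on the error message and the earlier nginx Deployment):

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 80        # allowed once the range is 1-65535
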
4) External etcd deployment configuration

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.14
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.80.100
  - 192.168.80.10
  - 192.168.80.11
  - 192.168.80.12
  - master01
  - master02
  - master03
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.80.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:                             # use an external etcd cluster
    endpoints:
    - https://192.168.80.10:2379
    - https://192.168.80.11:2379
    - https://192.168.80.12:2379
    caFile: /opt/etcd/ssl/ca.pem        # copy the etcd certificates to every master node first
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

3 The K8S graphical dashboard

3.1 Install the Dashboard

Operate on the master01 node

# Upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
# By default the Dashboard is only reachable from inside the cluster; change the Service to the NodePort type to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add this line
  type: NodePort          # add this line
  selector:
    k8s-app: kubernetes-dashboard
	
kubectl apply -f recommended.yaml

kubectl get pods -A

Create a service account and bind it to the default cluster-admin cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Test access from a browser
Open https://<NodeIP>:30001 and log in with the token printed by the command above. The Dashboard uses a self-signed certificate, so many browsers refuse the connection; use a browser that lets you proceed past the certificate warning (for example 360 Browser).

一、简介 1、了解docker的前生LXC LXC为Linux Container的简写。可以提供轻量级的虚拟化&#xff0c;以便隔离进程和资源&#xff0c;而且不需要提供指令解释机制以及全虚拟化的其他复杂性。相当于C中的NameSpace。容器有效地将由单个操作系统管理的资源划分到孤立的组中&…