Deploying Kubernetes (k8s) on CentOS

Environment Preparation

Four Linux servers:

Hostname         IP              Role
k8s-master-94    192.168.0.94    master
k8s-node1-95     192.168.0.95    node1
k8s-node2-96     192.168.0.96    node2
harbor           192.168.0.77    image registry (Harbor)

Run the following commands on all three Kubernetes machines (master and both nodes); the harbor host is set up separately at the end:

  • Check the CentOS version
[root@localhost Work]# cat /etc/redhat-release
CentOS Linux release 8.5.2111
  • Disable the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@localhost ~]# setenforce 0
  • Disable the swap partition (Kubernetes requires swap to be off for predictable performance)
# Turn swap off for the current session
[root@localhost ~]# swapoff -a
# Verify that swap is now off (the Swap line should show 0)
[root@localhost ~]# free
# Disable swap permanently by commenting out the swap entry in /etc/fstab
[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Set the hostnames
# Run on 192.168.0.94
[root@localhost ~]# hostnamectl set-hostname k8s-master-94
# Run on 192.168.0.95
[root@localhost ~]# hostnamectl set-hostname k8s-node1-95
# Run on 192.168.0.96
[root@localhost ~]# hostnamectl set-hostname k8s-node2-96
[root@localhost ~]# hostname # confirm the new name is in effect
  • Update /etc/hosts
[root@localhost ~]# cat >> /etc/hosts<<EOF
> 192.168.0.94 k8s-master-94
> 192.168.0.95 k8s-node1-95
> 192.168.0.96 k8s-node2-96
> EOF
  • Synchronize time
[root@localhost ~]# yum install chrony -y
[root@localhost ~]# systemctl start chronyd
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# chronyc sources
  • Let iptables see bridged traffic by passing bridged IPv4 traffic to the iptables chains; if net.ipv4.ip_forward is currently 0, change it to 1
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.ipv4.ip_forward = 1
> net.ipv4.tcp_tw_recycle = 0
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]# sysctl --system
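
If the two bridge-nf-call settings are rejected with "No such file or directory", the br_netfilter kernel module is probably not loaded yet. A quick check and a persistent load (the modules-load.d file name below is just a suggestion):

# Check whether the bridge netfilter module is loaded
lsmod | grep br_netfilter
# Load it now, and have it loaded automatically after a reboot
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Re-apply the sysctl settings
sysctl --system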
  • Install Docker; if you run into problems, see this post for fixes:

Centos 8安装Docker及报错解决办法 (duansamve, CSDN)

## Remove old Docker versions
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine


## Replace the default repos with the Aliyun mirror (this first repo file is replaced again below)
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-8.repo

### Go to the yum repo directory
cd /etc/yum.repos.d

## Remove all existing repo files (copy the command exactly; don't drop the "./" part)
rm -rf ./*

## Download the correct (vault) mirror repo for CentOS 8.5.2111
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo

## Rebuild the yum cache
yum makecache

## Install the required packages: yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2

## Add the Docker CE yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## Install Docker
yum install -y docker-ce

## Start Docker and enable it at boot
systemctl start docker
systemctl enable docker

## Verify the installation
docker version
docker info

## Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ccdkz6eh.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
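
To confirm the accelerator took effect, check the Registry Mirrors entry in docker info (the mirror URL above is one example Aliyun accelerator address; substitute your own):

docker info | grep -A1 "Registry Mirrors"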


Install kubeadm, kubelet, and kubectl

Run on all three machines

  • Add the Aliyun Kubernetes yum repo
[root@k8s-node1-80 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

# Clear the yum cache
[root@k8s-node1-80 ~]# yum clean all
# Download package metadata from the repos and build a local cache
[root@k8s-node1-80 ~]# yum makecache
# List the kubectl versions available in the repo
[root@k8s-node1-80 ~]# yum list kubectl --showduplicates | sort -r
  • Install kubeadm, kubelet, and kubectl (v1.21.0)
[root@k8s-node1-80 ~]# yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
[root@k8s-node1-80 ~]# systemctl start kubelet
[root@k8s-node1-80 ~]# systemctl enable kubelet

# Check that the packages are installed
[root@k8s-node2-92 ~]# yum list installed | grep kubelet
kubelet.x86_64                                     1.21.0-0                                      @kubernetes
[root@k8s-node2-92 ~]# yum list installed | grep kubeadm
kubeadm.x86_64                                     1.21.0-0                                      @kubernetes
[root@k8s-node2-92 ~]# yum list installed | grep kubectl
kubectl.x86_64                                     1.21.0-0                                      @kubernetes
  • Check the installed version
[root@k8s-node2-92 ~]# kubelet --version
Kubernetes v1.21.0
Kubelet: runs on every node in the cluster and is responsible for starting Pods and containers;
Kubeadm: a tool for bootstrapping (initializing) the cluster;
Kubectl: the Kubernetes command-line tool, used to deploy and manage applications, inspect resources, and create, delete, and update components.
  • Reboot CentOS
reboot

Initialize the Kubernetes Cluster

Deploy the master node; run on 192.168.0.94:

kubeadm init --apiserver-advertise-address=192.168.0.94 \
--apiserver-cert-extra-sans=127.0.0.1 \
--image-repository=registry.aliyuncs.com/google_containers \
--ignore-preflight-errors=all \
--kubernetes-version=v1.21.0 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.244.0.0/16

Parameter notes

--apiserver-advertise-address=192.168.0.94: the IP address of the master host (192.168.0.94 in this setup).

--image-repository=registry.aliyuncs.com/google_containers: the registry to pull control-plane images from; the default upstream registry is unreachable from mainland China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used.

--kubernetes-version=v1.21.0: the Kubernetes version to deploy; it must match the installed kubeadm/kubelet packages (v1.21.0 here).

--service-cidr=10.10.0.0/16: the Service network range; 10.10.0.0/16 can be reused as-is for later installs and does not need to be changed.

--pod-network-cidr=10.244.0.0/16: the IP range used by Pods inside the cluster. It must not be the same as the service CIDR; if unsure, keep 10.244.0.0/16.

About the two network ranges: the two /16 CIDRs must not overlap with each other, and neither should overlap with the network the machines themselves are on.

If you see this warning:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

The [WARNING IsDockerSystemdCheck] message appears because Docker's cgroup driver does not match the kubelet's; here we change Docker's driver to match the kubelet.


[root@k8s-master-94 ~]# docker info | grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1

[root@k8s-master-94 ~]# vim /usr/lib/systemd/system/docker.service   # add --exec-opt native.cgroupdriver=systemd to the ExecStart line

[root@k8s-master-94 ~]# systemctl daemon-reload
[root@k8s-master-94 ~]# systemctl restart docker
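
An alternative to editing the unit file is to set the cgroup driver in /etc/docker/daemon.json, which survives Docker package upgrades; a sketch that merges with the registry-mirror configuration created earlier:

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://ccdkz6eh.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info | grep "Cgroup Driver"   # should now report systemd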

 
# Re-initialize
[root@k8s-master-94 ~]# kubeadm reset # reset first
 
[root@k8s-master-94 ~]# docker info | grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1

# Repeat the kubeadm init command from the "Initialize the master node" step above

Initialization succeeded.

The output includes a kubeadm join command for adding worker nodes; record it, then run the following:

[root@k8s-master-94 ~]#   mkdir -p $HOME/.kube
[root@k8s-master-94 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-94 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the master node status:

[root@k8s-master-94 ~]# kubectl get node
NAME            STATUS   ROLES                  AGE   VERSION
k8s-master-94   Ready    control-plane,master   20m   v1.21.0

Install the Calico network plugin on the master

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

Then, in a temporary directory (or any directory you create), run:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
vim custom-resources.yaml

Change the cidr field to the value you passed as --pod-network-cidr when initializing the master, i.e. 10.244.0.0/16.
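
Alternatively, instead of editing in vim, a one-line substitution does the same thing (this assumes the manifest ships with Calico's default cidr of 192.168.0.0/16, which is what the v3.26.x custom-resources.yaml uses):

sed -i 's|192.168.0.0/16|10.244.0.0/16|' custom-resources.yaml
grep cidr custom-resources.yaml   # should now show 10.244.0.0/16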

Save the change, then apply it:

kubectl create -f custom-resources.yaml

Wait a moment; once all pods show Running in the output of the command below, the Calico network plugin is installed.

kubectl get pod --all-namespaces
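
With the operator-based install, the Calico pods land in the calico-system namespace, so a narrower watch is convenient:

watch kubectl get pods -n calico-system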

If some pods are not Running, see the CSDN post "pod calico CoreDNS 拉取不到镜像的问题的解决办法" for how to fix image-pull failures.

The master node is now ready:

[root@k8s-master-94 ~]# kubectl get node
NAME            STATUS   ROLES                  AGE    VERSION
k8s-master-94   Ready    control-plane,master   165m   v1.21.0

Deploy the worker nodes; run on 192.168.0.95 and 192.168.0.96:

kubeadm join 192.168.0.94:6443 --token faj2nf.5o3gwjtbst90k19y \
        --discovery-token-ca-cert-hash sha256:62d91aaef65e987702ddca804330d1fe721707fdf794d2494730636e616bda09

If the join command fails, see this fix: https://www.cnblogs.com/cloud-yongqing/p/16032596.html

If you did not record the join command, regenerate it with:

kubeadm token create --print-join-command

Join succeeded.

Check the deployment results

On a worker node:
[root@k8s-node1-95 ~]# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?
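
kubectl fails on the worker nodes because they have no kubeconfig; only the master holds /etc/kubernetes/admin.conf. If you want kubectl to work from a node, copy that file over from the master (a sketch; the scp destination path is an assumption):

# On the master
scp /etc/kubernetes/admin.conf root@192.168.0.95:/etc/kubernetes/admin.conf
# On the node
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile
kubectl get nodes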

On the master node:
[root@k8s-master-94 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
k8s-master-94   Ready    control-plane,master   14m     v1.21.0
k8s-node1-95    Ready    <none>                 4m      v1.21.0
k8s-node2-96    Ready    <none>                 5m10s   v1.21.0

Deploy the Dashboard (on the master)

Create recommended.yaml

cat > recommended.yaml << EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001       # Dashboard NodePort
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
EOF

[root@k8s-master-94 ~]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8s-master-94 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7c857855d9-xk4d4   1/1     Running   0          2m10s
kubernetes-dashboard-658b66597c-r59xp        1/1     Running   0          2m10s

Create a token to log in (note: the token is valid for 24 hours by default and must be regenerated after it expires)

Create a service account and bind it to the default cluster-admin cluster role
# Create the service account
kubectl create serviceaccount dashboard-admin -n kube-system
# Grant it cluster-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve its token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard by IP

https://masterip:30001/#/login
https://node1ip:30001/#/login
https://node2ip:30001/#/login
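
Port 30001 comes from the nodePort set in recommended.yaml above; you can confirm the exposed port with:

kubectl get svc -n kubernetes-dashboard kubernetes-dashboard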

Enter the token you obtained to log in. Optionally, configure the token so it never expires.
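
The Dashboard container accepts a --token-ttl argument (in seconds; 0 disables session expiry). One way to apply it to the Deployment created above, as a sketch:

kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--token-ttl=0"}]'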

Deploy metrics-server (on the master)

Create components.yaml

cat > components.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls  # added
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname # added
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0  # replaced with the Aliyun mirror image
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

[root@k8s-master-94 ~]# kubectl apply -f components.yaml
[root@k8s-master-94 ~]# kubectl get pods -n kube-system|grep metrics
metrics-server-5f85c44dcd-fcshj         1/1     Running   0          43s
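
Once the metrics-server pod is Running, the Metrics API usually starts answering within a minute; kubectl top is the quickest check:

kubectl top nodes
kubectl top pods -n kube-system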

Deploy the Harbor Registry

  • Requirements: the server must have docker and docker-compose installed
  • Install docker-compose
[root@localhost ~]# curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  4 11.6M    4  572k    0     0  15439      0  0:13:13  0:00:37  0:12:36  7942
100 11.6M  100 11.6M    0     0  27290      0  0:07:29  0:07:29 --:--:--  154k
[root@localhost ~]# chmod +x /usr/local/bin/docker-compose
[root@localhost ~]# docker-compose version
docker-compose version 1.26.0, build d4451659
docker-py version: 4.2.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019
  • Download the Harbor installer
[root@localhost ~]# wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.5.3.tgz
  • Unpack the installer and move it into place
tar -zxvf harbor-offline-installer-v1.5.3.tgz # unpack the offline installer
mv harbor /opt/ # move it under /opt
cd /opt # enter /opt
ls  # list the contents
cd harbor
  • In the harbor directory, edit the harbor.cfg configuration file
vim harbor.cfg
hostname = 192.168.0.77 # Harbor's address; set it to this host's IP
harbor_admin_password = Natux2019. # password for Harbor's admin user
  • Prepare the Harbor configuration; if this step fails, install Python 2.7
./prepare
  • Install Harbor
./install.sh
  • If installation fails

Change the version field on the first line of docker-compose.yml to 2.1, then re-run ./install.sh.

  • Open the Harbor web UI; the default port is 80: http://<harbor-ip>
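
Harbor here is served over plain HTTP, so every Docker host (master and both nodes) that will push to or pull from it needs the registry marked as insecure; a sketch, assuming the 192.168.0.77 address used above and merging with the daemon.json settings configured earlier (drop the exec-opts line if you changed the cgroup driver in the unit file instead):

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://ccdkz6eh.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.0.77"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker login 192.168.0.77   # admin / the password set in harbor.cfg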

