Installing Kubernetes v1.21.5 as a single-node cluster

A Kubernetes (k8s) cluster consists of Master nodes and Node (Worker) nodes.

1. Environment

  • OS: CentOS 7
  • Resources: 2 CPUs / 4 GB RAM (Kubernetes needs at least 2 CPUs, otherwise it will not start; a quick check follows this list)
  • Docker: 25.0.3
  • Kubernetes: 1.21.5
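
As an optional sanity check, the CPU count and memory can be confirmed before continuing:

[root@bertram-test ~]# nproc      # should print 2 or more
[root@bertram-test ~]# free -m    # total memory should be roughly 4096 MB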

2. Check the base environment

[root@bertram-test ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@bertram-test ~]# uname -a
Linux k8s-master 3.10.0-1160.102.1.el7.x86_64 #1 SMP Tue Oct 17 15:42:21 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
[root@bertram-test ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:38:c2:e5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.96/24 brd 172.16.0.255 scope global dynamic eth0
       valid_lft 311143308sec preferred_lft 311143308sec
    inet6 fe80::216:3eff:fe38:c2e5/64 scope link 
       valid_lft forever preferred_lft forever

3. Set the hostname

[root@bertram-test ~]# hostnamectl set-hostname k8s-master && bash

4. Add a hosts entry

The IP used here is the server's private (internal) IP.

[root@k8s-master ~]# cat >> /etc/hosts << EOF
172.16.0.96  k8s-master
EOF
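
As an optional check, the new entry can be verified with getent, which should resolve the hostname to the private IP added above:

[root@k8s-master ~]# getent hosts k8s-master    # should print: 172.16.0.96  k8s-master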

5. Disable swap

[root@k8s-master ~]# swapoff -a # temporary (until reboot)
[root@k8s-master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab # permanent
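
To confirm swap is really off, the Swap line in free should show all zeros; this is a quick optional check:

[root@k8s-master ~]# free -m | grep -i swap    # Swap:  0  0  0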

6. Pass bridged IPv4 traffic to iptables chains

[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]# sysctl --system # apply the settings
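
These net.bridge.* keys only exist while the br_netfilter kernel module is loaded. If sysctl reports "No such file or directory" for them, the module can be loaded and the values re-checked; this is an optional extra step, since many systems load the module automatically:

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# lsmod | grep br_netfilter                   # confirm the module is loaded
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables   # should print 1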

7. Time synchronization

[root@k8s-master ~]# yum install ntpdate -y
[root@k8s-master ~]# ntpdate time.windows.com
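
ntpdate performs a one-off sync only. If the clock should stay synchronized, a simple cron entry can be added; the schedule below is only illustrative:

[root@k8s-master ~]# (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -
[root@k8s-master ~]# crontab -l    # confirm the entry was added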

8. Install Docker

List all available versions:

yum list docker-ce --showduplicates | sort -r


Pick a version and install it:

yum install docker-ce-<version>

# By default the highest available version, 25.0.3-1.el7, is installed
# Note: the version string is 25.0.3-1.el7, not 3:25.0.3-1.el7
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-master ~]# yum -y install docker-ce-25.0.3-1.el7  # install 25.0.3, pinning the version explicitly
[root@k8s-master ~]# systemctl enable docker && systemctl start docker && systemctl status docker
[root@k8s-master ~]# docker --version  # the latest version at the time of writing is 25.0.3

9. Configure a Docker registry mirror

[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://qj799ren.mirror.aliyuncs.com", "http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
[root@k8s-master ~]# systemctl restart docker
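
Because the kubelet in this setup expects the systemd cgroup driver, it is worth verifying that the daemon.json changes took effect after the restart; this is an optional check:

[root@k8s-master ~]# docker info | grep -i "cgroup driver"          # should print: Cgroup Driver: systemd
[root@k8s-master ~]# docker info | grep -A 2 -i "registry mirrors"  # should list the mirrors configured above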

10. Add the Kubernetes yum repository

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

11. Install kubeadm, kubelet, and kubectl

[root@k8s-master ~]# yum -y install kubelet-1.21.5-0 kubeadm-1.21.5-0 kubectl-1.21.5-0  # pin to v1.21.5; this version is referenced again below

# Verify the installation and check the installed version; if the version output is correct, kubectl was installed successfully
[root@k8s-master ~]# kubectl version --client -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "21",
    "gitVersion": "v1.21.5",
    "gitCommit": "aea7bbadd2fc0cd689de94a54e5b7b758869d691",
    "gitTreeState": "clean",
    "buildDate": "2021-09-15T21:10:45Z",
    "goVersion": "go1.16.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
[root@k8s-master ~]# systemctl enable kubelet
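
Optionally, kubeadm and the kubelet can be checked the same way; both should report v1.21.5 to match the kubectl client above:

[root@k8s-master ~]# kubeadm version -o short    # v1.21.5
[root@k8s-master ~]# kubelet --version           # Kubernetes v1.21.5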

12. Deploy the Kubernetes master

[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=172.16.0.96  --image-repository registry.aliyuncs.com/google_containers  --kubernetes-version v1.21.5  --service-cidr=10.96.0.0/12  --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

Parameter description:

  • --apiserver-advertise-address=172.16.0.96: the master host's IP address; in this example the master's private IP is 172.16.0.96
  • --image-repository registry.aliyuncs.com/google_containers: the image registry to pull from; the default registry is unreachable from mainland China, so the Aliyun mirror registry.aliyuncs.com/google_containers is used
  • --kubernetes-version=v1.21.5: the Kubernetes version to deploy
  • --service-cidr=10.96.0.0/12: use 10.96.0.0/12 as-is for this and future installs; there is no need to change it
  • --pod-network-cidr=10.244.0.0/16: the IP range for pod-to-pod networking inside the cluster; it must not overlap with service-cidr. If unsure, keep 10.244.0.0/16
  • --ignore-preflight-errors=all: ignore preflight check errors (a cleanup-and-retry sketch follows this list in case init fails)
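
If kubeadm init fails partway (for example, because an image pull times out), the partially initialized state can be wiped before retrying. This is a minimal recovery sketch, not something needed when init succeeds on the first attempt:

[root@k8s-master ~]# kubeadm reset -f              # undo the failed initialization
[root@k8s-master ~]# rm -rf $HOME/.kube /etc/cni/net.d
[root@k8s-master ~]# systemctl restart kubelet docker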

After the command finishes, output like the following indicates that the control plane was initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.0.96:6443 --token wwgx1z.9cw9z0okg4dgnzy7 \
	--discovery-token-ca-cert-hash sha256:69e3f9b3d2a28e88c9273edfbe3dee59e551832ed177ba5380f1f361f23d5ab5 
[root@k8s-master ~]# docker --version
Docker version 25.0.3, build 4debf41

13. Configure kubectl access

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# kubectl get nodes    # the node status is NotReady until a network add-on is installed
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   19m   v1.21.5
# check the client and server version information
[root@k8s-master ~]# kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "21",
    "gitVersion": "v1.21.5",
    "gitCommit": "aea7bbadd2fc0cd689de94a54e5b7b758869d691",
    "gitTreeState": "clean",
    "buildDate": "2021-09-15T21:10:45Z",
    "goVersion": "go1.16.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "serverVersion": {
    "major": "1",
    "minor": "21",
    "gitVersion": "v1.21.5",
    "gitCommit": "aea7bbadd2fc0cd689de94a54e5b7b758869d691",
    "gitTreeState": "clean",
    "buildDate": "2021-09-15T21:04:16Z",
    "goVersion": "go1.16.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

14. Install a Pod network add-on (CNI)

[root@k8s-master ~]# wget https://docs.projectcalico.org/manifests/calico.yaml # Calico (as recommended in the official docs) is used here; the downloaded manifest needs many edits, so it is easier to use the manifest below directly.
cat > calico.yaml  << \EOF
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset
---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.11.3
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.11.3
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.11.3
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.11.3
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.11.3
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
EOF

Note: the only value that must be changed is the IP under CALICO_IPV4POOL_CIDR; it has to match the --pod-network-cidr=10.244.0.0/16 specified earlier in kubeadm init.

[root@k8s-master ~]# kubectl apply -f calico.yaml 
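
The Calico pods can take a few minutes to pull images and start. One optional way to watch the rollout is to stream the pod list until everything reaches Running (Ctrl-C to stop watching):

[root@k8s-master ~]# kubectl get pods -n kube-system -w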

15. Verify the network

[root@k8s-master ~]# kubectl get nodes  # Ready means the network is now working
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   29m   v1.21.5
[root@k8s-master ~]# kubectl get pods -n kube-system  # all pods should show Running
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5bcd7db644-gnccd   1/1     Running   0          5m36s
calico-node-58f64                          1/1     Running   0          5m36s
coredns-59d64cd4d4-mqnjw                   1/1     Running   0          28m
coredns-59d64cd4d4-njn8z                   1/1     Running   0          28m
etcd-k8s-master                            1/1     Running   0          29m
kube-apiserver-k8s-master                  1/1     Running   0          29m
kube-controller-manager-k8s-master         1/1     Running   0          29m
kube-proxy-h8h69                           1/1     Running   0          28m
kube-scheduler-k8s-master                  1/1     Running   0          29m

16. After a single-node install, workloads cannot be scheduled

By default the master node cannot run pods because it carries a taint; either remove the taint or add a worker node. Here the taint is removed.
[root@k8s-master ~]# kubectl get node -o yaml | grep taint -A 5 # any output here means the node has a taint
[root@k8s-master ~]# kubectl taint nodes --all node-role.kubernetes.io/master-   # run this to remove the taint
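
As an optional follow-up, the same check can be re-run to confirm the taint is gone; with no taint it should produce no output:

[root@k8s-master ~]# kubectl get node -o yaml | grep taint -A 5         # no output now
[root@k8s-master ~]# kubectl describe node k8s-master | grep -i taints  # should show: Taints: <none>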

17. Install command completion

[root@k8s-master ~]# yum -y install bash-completion  # install the bash-completion package
[root@k8s-master ~]# kubectl completion bash
[root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master ~]# kubectl completion bash >/etc/profile.d/kubectl.sh
[root@k8s-master ~]# source /etc/profile.d/kubectl.sh
[root@k8s-master ~]# cat  >>  /root/.bashrc <<EOF
source /etc/profile.d/kubectl.sh
EOF
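
Optionally, a short alias can be wired to the same completion; the alias name "k" below is just a common convention, not something the setup requires:

[root@k8s-master ~]# cat >> /root/.bashrc <<EOF
alias k=kubectl
complete -o default -F __start_kubectl k
EOF
[root@k8s-master ~]# source /root/.bashrc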

18. Test the Kubernetes cluster

Deploy an Nginx instance in the cluster:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-dgk5c   1/1     Running   0          49s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        33m
service/nginx        NodePort    10.107.95.10   <none>        80:30182/TCP   44s

Note: the service is exposed externally on NodePort 30182.
Since this host is an Alibaba Cloud instance and the port is not open to the public internet, Nginx is accessed here via the private IP:

[root@k8s-master ~]# curl -I 172.16.0.96:30182
HTTP/1.1 200 OK
Server: nginx/1.25.4
Date: Thu, 29 Feb 2024 03:24:43 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 14 Feb 2024 16:03:00 GMT
Connection: keep-alive
ETag: "65cce434-267"
Accept-Ranges: bytes

A 200 status code means the access succeeded.
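
Once the test succeeds, the test resources can optionally be cleaned up so they do not keep running:

[root@k8s-master ~]# kubectl delete service nginx
[root@k8s-master ~]# kubectl delete deployment nginx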

At this point, the single-node Kubernetes deployment is complete!

Summary

Following the steps above, the installation went through smoothly. Differences in OS, environment, network, or versions may still cause problems, but there is no need to worry: searching for the error message usually resolves them, and the overall process stays much the same.
