Installing Docker + Kubernetes + KubeSphere on CentOS 7

Lab environment (8 GB RAM recommended per server):

Server      IP address        Hostname
CentOS 7    192.168.80.171    k8s-master
CentOS 7    192.168.80.172    k8s-node1
CentOS 7    192.168.80.173    k8s-node2

Contents

Lab environment (8 GB RAM recommended per server)

I. Install Docker (script install)

II. Install Kubernetes

1. Base environment (script setup)

2. Install kubelet, kubeadm, and kubectl (script install)

3. Initialize the master node

1. Initialize (note: the Pod CIDR must not overlap with the host network segment)

2. Record the key output

3. Install the Calico network plugin

4. Join the worker nodes (run on node1 and node2)

III. Install the KubeSphere prerequisites

1. NFS file system

1. Install the NFS server

2. Configure the NFS client (optional; on node1 and node2)

3. Configure the default StorageClass (replace the IP with your own master's)

2. metrics-server: cluster metrics monitoring component

IV. Install KubeSphere

1. Download the core files

2. Modify cluster-configuration

3. Run the installation

4. Check the installation progress

Access port 30880 on any node


I. Install Docker (script install)

vim install_docker.sh

#!/bin/bash

# Remove any old Docker packages
sudo yum remove -y docker*

# Install yum-utils
sudo yum install -y yum-utils

# Configure the Docker yum repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install a pinned Docker version
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# Start Docker and enable it at boot
sudo systemctl enable docker --now

# Configure the registry mirror and the systemd cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ij054rhe.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

chmod +x install_docker.sh && bash install_docker.sh
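A quick sanity check after the script finishes (the cgroup driver line comes from the daemon.json written above):

docker --version
sudo docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd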

II. Install Kubernetes

1. Base environment (script setup)

vim k8s-envconf.sh

#!/bin/bash

# Set SELinux to permissive for this boot and disable the firewall
sudo setenforce 0
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Make the SELinux change persistent
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
sudo swapoff -a
sudo sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system

chmod +x k8s-envconf.sh && bash k8s-envconf.sh
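Verify that the kernel module and sysctl settings took effect (the sysctl keys only exist once br_netfilter is loaded):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1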

2. Install kubelet, kubeadm, and kubectl (script install)

vim install-k8s.sh

#!/bin/bash

# Add the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, and kubectl at a pinned version
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Start kubelet and enable it at boot
sudo systemctl enable --now kubelet

# Map the master hostname so "kubeadm join k8s-master:6443" resolves (run on every node)
echo "192.168.80.171  k8s-master" >> /etc/hosts

chmod +x install-k8s.sh && bash install-k8s.sh
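Confirm the pinned versions were installed:

kubeadm version -o short   # v1.20.9
kubelet --version          # Kubernetes v1.20.9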

3. Initialize the master node

1. Initialize (note: the Pod CIDR must not overlap with the host network segment; an overlap here cost me a long debugging session)

kubeadm init \
--apiserver-advertise-address=192.168.80.171 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16

2. Record the key output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token 0yiilp.p7x4rgys0niv52yn \
    --discovery-token-ca-cert-hash sha256:c93074aeb47bebf72978db3dc45812d516d614a56614535dbe8cc4867b1f110b \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token 0yiilp.p7x4rgys0niv52yn \
    --discovery-token-ca-cert-hash sha256:c93074aeb47bebf72978db3dc45812d516d614a56614535dbe8cc4867b1f110b 

Copy the cluster admin kubeconfig into the current user's ~/.kube/config:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check that the node is registered:
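kubectl get nodes

The master reports NotReady at this stage; it becomes Ready after the Calico plugin is installed in the next step.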

3. Install the Calico network plugin

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
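Watching the kube-system pods is a handy way to track progress; once the calico-node pods are Running, the nodes turn Ready:

kubectl get pods -n kube-system -w
kubectl get nodes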

4. Join the worker nodes (run on node1 and node2)

kubeadm join k8s-master:6443 --token 0yiilp.p7x4rgys0niv52yn \
    --discovery-token-ca-cert-hash sha256:c93074aeb47bebf72978db3dc45812d516d614a56614535dbe8cc4867b1f110b 
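Back on the master, all three nodes should now be listed, and they turn Ready once the Calico pods are scheduled onto them:

kubectl get nodes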

III. Install the KubeSphere prerequisites

1. NFS file system

1. Install the NFS server

# On every node
yum install -y nfs-utils

# On the master, export the shared directory
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory
mkdir -p /nfs/data

# On the master, start the NFS services and enable them at boot
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Apply the export configuration
exportfs -r

# Check that the export is active
exportfs
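If the export is active, exportfs prints the shared directory, roughly like this:

/nfs/data         <world>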

2. Configure the NFS client (optional; on node1 and node2)

showmount -e 192.168.80.171

mkdir -p /nfs/data

mount -t nfs 192.168.80.171:/nfs/data /nfs/data
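Confirm the mount from the client side:

df -h | grep /nfs/data   # expect: 192.168.80.171:/nfs/data mounted on /nfs/data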

3. Configure the default StorageClass (replace the IP with your own master's)

## Creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## Whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.80.171 ## Set this to your own NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## The directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.80.171
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# Confirm the StorageClass exists and is marked as the default
kubectl get sc

Create a test PVC: vi pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi

kubectl apply -f pvc.yaml
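The claim should be bound by the default StorageClass within a few seconds, which also proves the provisioner works:

kubectl get pvc nginx-pvc   # STATUS should be Bound
kubectl get pv              # a dynamically provisioned PV appears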

2. metrics-server: cluster metrics monitoring component

vi metrics.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

kubectl apply -f metrics.yaml

Once the metrics-server pod is Running, you can query node and pod resource usage:
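kubectl top nodes
kubectl top pods -A

Metrics can take a minute or two to appear after the pod starts.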

IV. Install KubeSphere


1. Download the core files

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

2. Modify cluster-configuration

Specify the features to enable in cluster-configuration.yaml.

Refer to "Enable Pluggable Components" in the official documentation:

https://kubesphere.com.cn/docs/pluggable-components/overview/

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""        # Add your private registry address if it is needed.
  etcd:
    monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: 192.168.80.171  # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379              # etcd port.
    tlsEnable: true
  common:
    redis:
      enabled: true
    openldap:
      enabled: true
    minioVolumeSize: 20Gi # Minio PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    monitoring:
      # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
      # elasticsearchDataReplicas: 1     # The total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
    port: 30880
  alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true        # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: true         # Enable or disable the KubeSphere Auditing Log System. 
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true             # Enable or disable the KubeSphere DevOps System.
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true         # Enable or disable the KubeSphere Events System.
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true         # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false                   # Enable or disable metrics-server.
  monitoring:
    storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
      # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
      enabled: true # Enable or disable network policies.
    ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: true # Enable or disable the KubeSphere App Store.
  servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: true    # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
  kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false   # Enable or disable KubeEdge.
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
          - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

3. Run the installation

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml

Fix the "etcd monitoring certificate not found" issue (etcd monitoring is enabled in the configuration above, which requires this secret):

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
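Confirm the secret exists:

kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs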

 

4. Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
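When the installer finishes, the tail of this log prints a welcome banner along these lines (the address and credentials depend on your environment):

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.80.171:30880
Account: admin
Password: P@88w0rd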

The installation is complete once every pod shows Running:

kubectl get pod -A
NAMESPACE                      NAME                                               READY   STATUS      RESTARTS   AGE
default                        nfs-client-provisioner-7c58cbf68b-ncqjv            1/1     Running     2          53m
istio-system                   istio-ingressgateway-7d7fd96b9-t7fv6               1/1     Running     1          53m
istio-system                   istiod-1-6-10-5c8cc86fb4-zhpxp                     1/1     Running     2          95m
istio-system                   jaeger-collector-557f8795cb-n4zvd                  1/1     Running     0          93s
istio-system                   jaeger-operator-79ddbc974-fkt4g                    1/1     Running     0          9m51s
istio-system                   jaeger-query-667b645b7d-6xdgh                      2/2     Running     0          92s
istio-system                   kiali-697755c579-zc5k4                             1/1     Running     1          23m
istio-system                   kiali-operator-76cd67fb59-l7tp6                    1/1     Running     1          26m
kube-system                    calico-kube-controllers-577f77cb5c-wt9jq           1/1     Running     3          3h24m
kube-system                    calico-node-g2m9w                                  1/1     Running     3          3h24m
kube-system                    calico-node-lvnsz                                  1/1     Running     4          3h24m
kube-system                    calico-node-tfzlf                                  1/1     Running     4          3h24m
kube-system                    coredns-5897cd56c4-7f62q                           1/1     Running     3          3h25m
kube-system                    coredns-5897cd56c4-hz97p                           1/1     Running     3          3h25m
kube-system                    etcd-k8s-master                                    1/1     Running     3          3h25m
kube-system                    kube-apiserver-k8s-master                          1/1     Running     3          3h25m
kube-system                    kube-controller-manager-k8s-master                 1/1     Running     12         3h25m
kube-system                    kube-proxy-99xl9                                   1/1     Running     3          3h24m
kube-system                    kube-proxy-c8xvh                                   1/1     Running     3          3h25m
kube-system                    kube-proxy-wzf5j                                   1/1     Running     4          3h24m
kube-system                    kube-scheduler-k8s-master                          1/1     Running     12         3h25m
kube-system                    metrics-server-6497cc6c5f-zw4jz                    1/1     Running     2          53m
kube-system                    snapshot-controller-0                              1/1     Running     2          64m
kubesphere-controls-system     default-http-backend-76d9fb4bb7-7chlt              1/1     Running     1          47m
kubesphere-controls-system     kubectl-admin-776b98f44f-xtsht                     1/1     Running     1          20m
kubesphere-devops-system       ks-jenkins-65db765f86-kn5wd                        1/1     Running     2          66m
kubesphere-devops-system       s2ioperator-0                                      1/1     Running     1          27m
kubesphere-logging-system      elasticsearch-logging-data-0                       1/1     Running     0          2m46s
kubesphere-logging-system      elasticsearch-logging-data-1                       1/1     Running     0          95s
kubesphere-logging-system      elasticsearch-logging-discovery-0                  1/1     Running     0          2m42s
kubesphere-logging-system      fluent-bit-5vlzk                                   1/1     Running     3          147m
kubesphere-logging-system      fluent-bit-6twx5                                   1/1     Running     10         147m
kubesphere-logging-system      fluent-bit-wdl99                                   1/1     Running     3          147m
kubesphere-logging-system      fluentbit-operator-5576bbdcff-xpdl5                1/1     Running     2          53m
kubesphere-logging-system      ks-events-exporter-74b5c8d8b7-spg59                2/2     Running     4          55m
kubesphere-logging-system      ks-events-operator-76b99948b6-7n7x8                1/1     Running     1          47m
kubesphere-logging-system      ks-events-ruler-5d75bddbff-njrq8                   2/2     Running     2          55m
kubesphere-logging-system      ks-events-ruler-5d75bddbff-z6p7m                   2/2     Running     2          55m
kubesphere-logging-system      kube-auditing-operator-7967969d69-htd2r            1/1     Running     1          47m
kubesphere-logging-system      kube-auditing-webhook-deploy-85bbbbd9bb-2jnw2      1/1     Running     1          47m
kubesphere-logging-system      kube-auditing-webhook-deploy-85bbbbd9bb-4jzpb      1/1     Running     1          47m
kubesphere-logging-system      logsidecar-injector-deploy-5b484f575d-2mrzn        2/2     Running     2          47m
kubesphere-logging-system      logsidecar-injector-deploy-5b484f575d-vm55b        2/2     Running     2          47m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running     0          2m50s
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running     0          2m48s
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running     0          2m49s
kubesphere-monitoring-system   kube-state-metrics-67588479db-jfng9                3/3     Running     3          55m
kubesphere-monitoring-system   node-exporter-89cjb                                2/2     Running     2          55m
kubesphere-monitoring-system   node-exporter-bqlvj                                2/2     Running     4          55m
kubesphere-monitoring-system   node-exporter-mn72c                                2/2     Running     2          55m
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-9jgkf   1/1     Running     1          9m51s
kubesphere-monitoring-system   notification-manager-deployment-7bd887ffb4-v494r   1/1     Running     1          9m51s
kubesphere-monitoring-system   notification-manager-operator-78595d8666-ntsmt     2/2     Running     2          54m
kubesphere-monitoring-system   prometheus-k8s-0                                   3/3     Running     1          2m47s
kubesphere-monitoring-system   prometheus-k8s-1                                   3/3     Running     1          3m18s
kubesphere-monitoring-system   prometheus-operator-d7fdfccbf-mzp5k                2/2     Running     2          55m
kubesphere-monitoring-system   thanos-ruler-kubesphere-0                          2/2     Running     0          3m18s
kubesphere-monitoring-system   thanos-ruler-kubesphere-1                          2/2     Running     0          3m17s
kubesphere-system              ks-apiserver-6d86454ddf-wqfgl                      1/1     Running     2          24m
kubesphere-system              ks-console-6d6d765964-kcw9b                        1/1     Running     2          98m
kubesphere-system              ks-controller-manager-84f6bc569f-ksrv4             1/1     Running     3          24m
kubesphere-system              ks-installer-54c6bcf76b-j85t9                      1/1     Running     1          53m
kubesphere-system              minio-f69748945-sqg7d                              1/1     Running     0          9m51s
kubesphere-system              openldap-0                                         1/1     Running     3          100m
kubesphere-system              openpitrix-import-job-tth2d                        0/1     Completed   0          29m
kubesphere-system              redis-58f948c9b4-69gbk                             1/1     Running     2          100m

Access port 30880 on any node.

Account: admin

Password: P@88w0rd

