6. Hands-on: deploying an enterprise-grade ThingsBoard Professional Edition cluster on local Kubernetes from scratch

1. Pull the ThingsBoard PE images from Docker Hub (all nodes)

1.1 Check the Kubernetes cluster status (master node)

kubectl cluster-info       # show cluster information
kubectl get node           # list cluster nodes
kubectl get pod -A         # list pods in all namespaces (system components)

1.2 Pull the ThingsBoard PE images from Docker Hub (all nodes)

  • Run the following commands to pull the images from Docker Hub.
docker pull thingsboard/tb-pe-node:3.6.3PE
docker pull thingsboard/tb-pe-web-report:3.6.3PE
docker pull thingsboard/tb-pe-web-ui:3.6.3PE
docker pull thingsboard/tb-pe-js-executor:3.6.3PE
docker pull thingsboard/tb-pe-http-transport:3.6.3PE
docker pull thingsboard/tb-pe-mqtt-transport:3.6.3PE
docker pull thingsboard/tb-pe-coap-transport:3.6.3PE
docker pull thingsboard/tb-pe-lwm2m-transport:3.6.3PE
docker pull thingsboard/tb-pe-snmp-transport:3.6.3PE
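
The same pulls can be done with a short loop over the image names (identical tags to the list above):

for img in tb-pe-node tb-pe-web-report tb-pe-web-ui tb-pe-js-executor \
           tb-pe-http-transport tb-pe-mqtt-transport tb-pe-coap-transport \
           tb-pe-lwm2m-transport tb-pe-snmp-transport; do
  docker pull "thingsboard/${img}:3.6.3PE"
done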

2. Create the Kubernetes PersistentVolumes (master node)

2.1 Create the database PVs: thingsboard-db-pv.yml

vi thingsboard-db-pv.yml

Copy the following content into thingsboard-db-pv.yml:

#postgres
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-claim
  namespace: thingsboard
  labels:
    app: postgres
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/postgres
  persistentVolumeReclaimPolicy: Recycle
---
#cassandra
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-0
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-1
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-2
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-2
  persistentVolumeReclaimPolicy: Recycle

Create the directories:

mkdir -p /tmp/data/postgres
mkdir -p /tmp/data/cassandra-0
mkdir -p /tmp/data/cassandra-1
mkdir -p /tmp/data/cassandra-2

2.2 Create the third-party service PVs: thingsboard-third-pv.yml

vi thingsboard-third-pv.yml

Copy the following content into thingsboard-third-pv.yml:

#zookeeper
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-0
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-0
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-1
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-1
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-2
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-2
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-2
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-2
  persistentVolumeReclaimPolicy: Recycle
---
#kafka
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-logs
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/logs
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-app-logs
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/app-logs
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-config
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/config
  persistentVolumeReclaimPolicy: Recycle

Create the directories:

mkdir -p /tmp/data/zookeeper/data-0
mkdir -p /tmp/data/zookeeper/datalog-0
mkdir -p /tmp/data/zookeeper/data-1
mkdir -p /tmp/data/zookeeper/datalog-1
mkdir -p /tmp/data/zookeeper/data-2
mkdir -p /tmp/data/zookeeper/datalog-2
mkdir -p /tmp/data/tb-kafka/logs
mkdir -p /tmp/data/tb-kafka/app-logs
mkdir -p /tmp/data/tb-kafka/config

2.3 Create the ThingsBoard service PVs: thingsboard-tb-pv.yml

  • Copy the following content into thingsboard-tb-pv.yml (create it with vi thingsboard-tb-pv.yml):
#tb-node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-node-0
  namespace: thingsboard
  labels:
    app: tb-node
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-node/node0
  persistentVolumeReclaimPolicy: Recycle
---
#tb-mqtt-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-mqtt-transport-0
  namespace: thingsboard
  labels:
    app: tb-mqtt-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-mqtt-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
#tb-mqtt-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-mqtt-transport-1
  namespace: thingsboard
  labels:
    app: tb-mqtt-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-mqtt-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
---
#tb-coap-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-coap-transport-0
  namespace: thingsboard
  labels:
    app: tb-coap-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-coap-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
#tb-coap-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-coap-transport-1
  namespace: thingsboard
  labels:
    app: tb-coap-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-coap-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
---
#tb-http-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-http-transport-0
  namespace: thingsboard
  labels:
    app: tb-http-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-http-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
#tb-http-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-http-transport-1
  namespace: thingsboard
  labels:
    app: tb-http-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-http-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
---

Create the directories:

mkdir -p /tmp/data/tb-node/node0
mkdir -p /tmp/data/tb-mqtt-transport/transport0
mkdir -p /tmp/data/tb-mqtt-transport/transport1
mkdir -p /tmp/data/tb-coap-transport/transport0
mkdir -p /tmp/data/tb-coap-transport/transport1
mkdir -p /tmp/data/tb-http-transport/transport0
mkdir -p /tmp/data/tb-http-transport/transport1
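
Because hostPath volumes are node-local, each directory must exist on whichever node the corresponding pod is scheduled to. If the pods are not pinned to the master, a sketch like the following creates the whole tree on every worker node (node1 and node2 are hypothetical hostnames; assumes SSH access and bash on the remote hosts for brace expansion):

for host in node1 node2; do    # replace with your actual worker node hostnames
  ssh "$host" 'mkdir -p /tmp/data/{postgres,cassandra-0,cassandra-1,cassandra-2} \
    /tmp/data/zookeeper/{data,datalog}-{0,1,2} \
    /tmp/data/tb-kafka/{logs,app-logs,config} \
    /tmp/data/tb-node/node0 \
    /tmp/data/tb-{mqtt,coap,http}-transport/{transport0,transport1}'
done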

At this point the three PV manifest files (thingsboard-db-pv.yml, thingsboard-third-pv.yml, thingsboard-tb-pv.yml) are ready to apply.

3. Apply the PV manifests (master node)

Run the following commands in order:
kubectl apply -f thingsboard-db-pv.yml
kubectl apply -f thingsboard-third-pv.yml
kubectl apply -f thingsboard-tb-pv.yml

Check that the PVs were created:
kubectl get pv
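
The newly created PVs should all show STATUS Available at this point. Because each manifest carries an app label, you can also list them per component, for example:

kubectl get pv -l app=cassandra     # the three cassandra-data-cassandra-* volumes
kubectl get pv -l app=zookeeper     # the six zookeeper data/datalog volumes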

4. Clone the ThingsBoard PE Kubernetes scripts

4.1 Clone the scripts

yum install -y git
git clone -b release-3.6.3 https://github.com/thingsboard/thingsboard-pe-k8s.git --depth 1
cd thingsboard-pe-k8s/minikube

4.2 Modify five configuration files under thingsboard-pe-k8s/minikube

4.2.1 Configure .env

cd thingsboard-pe-k8s/minikube
vi .env
# Database used by ThingsBoard, can be either postgres (PostgreSQL) or hybrid (PostgreSQL for entities database and Cassandra for timeseries database).
# According to the database type corresponding kubernetes resources will be deployed (see postgres.yml, cassandra.yml for details).
# DATABASE=postgres      <- comment out the original value and add the line below
DATABASE=hybrid
# Replication factor for Cassandra database (will be ignored if PostgreSQL was selected as the database).
# Must be less or equals to the number of Cassandra nodes which can be configured in ./common/cassandra.yml ('StatefulSet.spec.replicas' property)
CASSANDRA_REPLICATION_FACTOR=1

The DATABASE setting in .env offers two options:

  1. postgres: PostgreSQL only; typically used for learning, research, and development.
  2. hybrid: PostgreSQL for the entity database and Cassandra for the time-series database; intended for enterprise production environments.

This guide deploys an enterprise-grade ThingsBoard PE cluster, so choose hybrid. Two settings are involved (the resulting .env is sketched after this list):

  1. DATABASE=hybrid
  2. CASSANDRA_REPLICATION_FACTOR=2 (note: this value was not actually changed in this walkthrough; it remains 1, as shown above)
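
For reference, a minimal sketch of the relevant part of the resulting .env, assuming the replication factor is left at 1 as noted above:

# Database used by ThingsBoard: postgres or hybrid
DATABASE=hybrid
# Ignored when DATABASE=postgres; must be <= the Cassandra replica count in cassandra.yml
CASSANDRA_REPLICATION_FACTOR=1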

4.2.2 Configure cassandra.yml

vi cassandra.yml

Because CASSANDRA_REPLICATION_FACTOR must be less than or equal to the number of Cassandra replicas, change the Cassandra replica count in cassandra.yml to 3.

spec:
  serviceName: cassandra
  replicas: 3


  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 8Gi
  • Change replicas to 3

  • Change storage to 8Gi
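
Before deploying, you can confirm both edits with a quick grep (this only inspects the file and assumes the field names shown above):

grep -nE 'replicas:|storage:' cassandra.yml    # expect replicas: 3 and storage: 8Gi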

4.2.3 Configure database-setup.yml

vi database-setup.yml

# Change Always to Never: database initialization creates the tb-db-setup pod, which uses the initialization dataset bundled in the thingsboard/tb-pe-node:3.6.3PE image, so the image should be taken from the local cache instead of being re-pulled.
tb-db-setup.imagePullPolicy=Never

apiVersion: v1
kind: Pod
metadata:
  name: tb-db-setup
  namespace: thingsboard
spec:
  volumes:
  - name: tb-node-config
    configMap:
      name: tb-node-config
      items:
      - key: conf
        path: thingsboard.conf
      - key: logback
        path: logback.xml
  - name: tb-node-logs
    emptyDir: {}
  containers:
  - name: tb-db-setup
    imagePullPolicy: Never    # changed from Always to Never
    image: thingsboard/tb-pe-node:3.6.3PE
    env:
      - name: TB_SERVICE_ID
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    envFrom:
    - configMapRef:
        name: tb-node-db-config
    volumeMounts:
      - mountPath: /config
        name: tb-node-config
      - mountPath: /var/log/thingsboard
        name: tb-node-logs
    command: ['sh', '-c', 'while [ ! -f /tmp/install-finished ]; do sleep 2; done;']
  restartPolicy: Never

4.2.4 Configure thingsboard.yml to pull images from the local cache

vi thingsboard.yml
# change imagePullPolicy: Always to imagePullPolicy: Never
:%s/imagePullPolicy: Always/imagePullPolicy: Never/g

4.2.5 Configure tb-node.yml to pull images from the local cache

vi tb-node.yml
# change imagePullPolicy: Always to imagePullPolicy: Never
:%s/imagePullPolicy: Always/imagePullPolicy: Never/g
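
If you prefer a non-interactive edit, the same substitution can be applied to both files with sed (a sketch; it modifies the files in place, so keep a backup if unsure):

sed -i 's/imagePullPolicy: Always/imagePullPolicy: Never/g' thingsboard.yml tb-node.yml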

5. Obtain and configure the license key

5.1 Obtain the license key

  • Follow the steps on the official website:

Pricing | ThingsBoard: https://thingsboard.io/pricing/

5.2 Configure the license key

cd thingsboard-pe-k8s/minikube
vi tb-node.yml
/TB_LICENSE_SECRET            # in vi, type / followed by the pattern to find the line containing TB_LICENSE_SECRET
  • Set the TB_LICENSE_SECRET parameter:
# tb-node StatefulSet configuration

- name: TB_LICENSE_SECRET
  value: "PUT_YOUR_LICENSE_SECRET_HERE"
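
To confirm the key is in place, grep the file (inspection only):

grep -n -A1 'TB_LICENSE_SECRET' tb-node.yml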

6. Run the database initialization scripts (master node)

Note: the following steps must be executed in order; wait for each step to finish before starting the next.

6.1 Deploy the database resources

cd thingsboard-pe-k8s/minikube
./k8s-install-tb.sh --loadDemo

When it completes, the script should print:

Installation finished successfully!
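
While the script runs, you can watch the setup pod and the databases come up in a second terminal (the manifests above place everything in the thingsboard namespace):

kubectl get pods -n thingsboard -w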

6.2 Deploy the third-party resources

cd thingsboard-pe-k8s/minikube
./k8s-deploy-thirdparty.sh

Note: if the Kubernetes version is 1.18.0, this step may fail with: error: unable to recognize "routes.yml": no matches for kind "Ingress" in version "networking.k8s.io/v1" (the networking.k8s.io/v1 Ingress API is only available from Kubernetes 1.19 onward).

6.3 Deploy the ThingsBoard resources

cd thingsboard-pe-k8s/minikube
./k8s-deploy-resources.sh

7. Check the cluster service status

kubectl get pods

kubectl get deployment,pod,svc

Note: all services start up normally except tb-node; check the individual pod logs. The tb-node pod shuts itself down because license validation fails, so a purchased license key is required.
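
To see the exact shutdown reason, inspect the pod log (assuming the first replica of the tb-node StatefulSet is named tb-node-0 and runs in the thingsboard namespace):

kubectl logs tb-node-0 -n thingsboard | tail -n 50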

8. Set up external access

8.1 Create /root/mandatory.yaml

vi mandatory.yaml
  • Copy the following content into mandatory.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-beijing.aliyuncs.com/kole_chang/controller:v1.0.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --watch-ingress-without-class=true
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

8.2 Deploy mandatory.yaml

Run the following commands:

cd /root
kubectl apply -f mandatory.yaml
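
Before continuing, verify that the ingress-nginx controller pods are running and note which nodes host them:

kubectl get pods -n ingress-nginx -o wide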

8.3 Re-apply the ThingsBoard resources

cd thingsboard-pe-k8s/minikube
./k8s-deploy-resources.sh

Try logging in (the ingress is only reachable through nodes that run an ingress-nginx-controller pod, since the controller uses hostNetwork): http://192.168.8.100:8080/login
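
A quick reachability check from the shell (replace the IP with a node that actually hosts a controller pod):

curl -sI http://192.168.8.100:8080/login | head -n 1    # expect an HTTP 200 or redirect status line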
