1. Pull the ThingsBoard PE images from Docker Hub (all nodes)
1.1. Check the Kubernetes cluster information (master node)
kubectl cluster-info    # show cluster information
kubectl get node        # show node status
kubectl get pod -A      # show internal components
1.2. Pull the ThingsBoard PE images from Docker Hub (all nodes)
- Run the following commands to pull the images from Docker Hub; a loop version follows the list.
docker pull thingsboard/tb-pe-node:3.6.3PE
docker pull thingsboard/tb-pe-web-report:3.6.3PE
docker pull thingsboard/tb-pe-web-ui:3.6.3PE
docker pull thingsboard/tb-pe-js-executor:3.6.3PE
docker pull thingsboard/tb-pe-http-transport:3.6.3PE
docker pull thingsboard/tb-pe-mqtt-transport:3.6.3PE
docker pull thingsboard/tb-pe-coap-transport:3.6.3PE
docker pull thingsboard/tb-pe-lwm2m-transport:3.6.3PE
docker pull thingsboard/tb-pe-snmp-transport:3.6.3PE
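Since the same set of images is needed on every node, the pulls can be scripted. A minimal sketch, assuming Docker is installed on each node and that all PE images share the 3.6.3PE tag:
for image in tb-pe-node tb-pe-web-report tb-pe-web-ui tb-pe-js-executor \
             tb-pe-http-transport tb-pe-mqtt-transport tb-pe-coap-transport \
             tb-pe-lwm2m-transport tb-pe-snmp-transport; do
  docker pull "thingsboard/${image}:3.6.3PE"   # same tag for every PE image
done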
2. Create the PV storage for the K8S cluster (master node)
2.1. Create the database PVs: thingsboard-db-pv.yml
vi thingsboard-db-pv.yml
Copy the following content into thingsboard-db-pv.yml:
# postgres
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-claim
  namespace: thingsboard
  labels:
    app: postgres
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/postgres
  persistentVolumeReclaimPolicy: Recycle
---
# cassandra
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-0
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-1
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-cassandra-2
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-2
  persistentVolumeReclaimPolicy: Recycle
Create the directories (a one-line equivalent follows the list):
mkdir -p /tmp/data/postgres
mkdir -p /tmp/data/cassandra-0
mkdir -p /tmp/data/cassandra-1
mkdir -p /tmp/data/cassandra-2
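Equivalently, shell brace expansion creates all four directories at once (a convenience sketch; since these are hostPath volumes, the directories must exist on whichever node the pods get scheduled to):
mkdir -p /tmp/data/postgres /tmp/data/cassandra-{0,1,2}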
2.2. Create the third-party PVs: thingsboard-third-pv.yml
vi thingsboard-third-pv.yml
Copy the following content into thingsboard-third-pv.yml:
# zookeeper
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-0
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-0
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-0
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-1
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-1
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-1
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-2
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/data-2
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-2
  labels:
    type: local
    app: zookeeper
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/zookeeper/datalog-2
  persistentVolumeReclaimPolicy: Recycle
---
# kafka
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-logs
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/logs
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-app-logs
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/app-logs
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-kafka-config
  labels:
    type: local
    app: tb-kafka
spec:
  capacity:
    storage: 50Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-kafka/config
  persistentVolumeReclaimPolicy: Recycle
Create the directories:
mkdir -p /tmp/data/zookeeper/data-0
mkdir -p /tmp/data/zookeeper/datalog-0
mkdir -p /tmp/data/zookeeper/data-1
mkdir -p /tmp/data/zookeeper/datalog-1
mkdir -p /tmp/data/zookeeper/data-2
mkdir -p /tmp/data/zookeeper/datalog-2
mkdir -p /tmp/data/tb-kafka/logs
mkdir -p /tmp/data/tb-kafka/app-logs
mkdir -p /tmp/data/tb-kafka/config
2.3. Create the ThingsBoard PVs: thingsboard-tb-pv.yml
vi thingsboard-tb-pv.yml
- Copy the following content into thingsboard-tb-pv.yml:
# tb-node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-node-0
  namespace: thingsboard
  labels:
    app: tb-node
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-node/node0
  persistentVolumeReclaimPolicy: Recycle
---
# tb-mqtt-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-mqtt-transport-0
  namespace: thingsboard
  labels:
    app: tb-mqtt-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-mqtt-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
# tb-mqtt-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-mqtt-transport-1
  namespace: thingsboard
  labels:
    app: tb-mqtt-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-mqtt-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
---
# tb-coap-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-coap-transport-0
  namespace: thingsboard
  labels:
    app: tb-coap-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-coap-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
# tb-coap-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-coap-transport-1
  namespace: thingsboard
  labels:
    app: tb-coap-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-coap-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
---
# tb-http-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-http-transport-0
  namespace: thingsboard
  labels:
    app: tb-http-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-http-transport/transport0
  persistentVolumeReclaimPolicy: Recycle
---
# tb-http-transport
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tb-http-transport-1
  namespace: thingsboard
  labels:
    app: tb-http-transport
    type: local
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/tb-http-transport/transport1
  persistentVolumeReclaimPolicy: Recycle
Create the directories:
mkdir -p /tmp/data/tb-node/node0
mkdir -p /tmp/data/tb-mqtt-transport/transport0
mkdir -p /tmp/data/tb-mqtt-transport/transport1
mkdir -p /tmp/data/tb-coap-transport/transport0
mkdir -p /tmp/data/tb-coap-transport/transport1
mkdir -p /tmp/data/tb-http-transport/transport0
mkdir -p /tmp/data/tb-http-transport/transport1
With that, all three PV manifests and their backing directories are in place.
3. Apply the PV manifests (master node)
kubectl apply -f thingsboard-db-pv.yml
kubectl apply -f thingsboard-third-pv.yml
kubectl apply -f thingsboard-tb-pv.yml
kubectl get pv
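Right after creation every PV should report STATUS Available (and Bound once claimed). A quick filter on the fifth column of the kubectl get pv output can flag anything else; this is a convenience check, not part of the official scripts:
kubectl get pv --no-headers | awk '$5 != "Available" && $5 != "Bound" {print "PV not ready:", $1, $5}'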
4. Clone the ThingsBoard PE Kubernetes scripts
4.1. Clone the scripts
yum install -y git
git clone -b release-3.6.3 https://github.com/thingsboard/thingsboard-pe-k8s.git --depth 1
cd thingsboard-pe-k8s/minikube
4.2. Modify five configuration files under thingsboard-pe-k8s/minikube
4.2.1. Configure .env
cd thingsboard-pe-k8s/minikube
vi .env
# Database used by ThingsBoard, can be either postgres (PostgreSQL) or hybrid (PostgreSQL for entities database and Cassandra for timeseries database).
# According to the database type corresponding kubernetes resources will be deployed (see postgres.yml, cassandra.yml for details).
# DATABASE=postgres   <- comment this line out and add the line below
DATABASE=hybrid
# Replication factor for Cassandra database (will be ignored if PostgreSQL was selected as the database).
# Must be less or equals to the number of Cassandra nodes which can be configured in ./common/cassandra.yml ('StatefulSet.spec.replicas' property)
CASSANDRA_REPLICATION_FACTOR=1
Modify the environment configuration in .env. There are two options:
- postgres: PostgreSQL only; generally used for learning, research, and development.
- hybrid: PostgreSQL for the entities database and Cassandra for the time-series database; intended for enterprise production environments.
Since this guide deploys an enterprise-grade ThingsBoard PE cluster on a local K8S installation, the hybrid option is used. Two values should change (see the sketch after this list):
- DATABASE=hybrid
- CASSANDRA_REPLICATION_FACTOR=2 (note: this walkthrough actually left the value unchanged at 1!)
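For reference, the relevant lines of .env after editing might look like the following (a sketch assuming the replication factor really is raised to 2; it must stay less than or equal to the Cassandra replica count configured in the next step):
# DATABASE=postgres
DATABASE=hybrid
CASSANDRA_REPLICATION_FACTOR=2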
4.2.2. Configure cassandra.yml
vi cassandra.yml
Because CASSANDRA_REPLICATION_FACTOR must be less than or equal to the number of Cassandra replicas, change the replica count in cassandra.yml to 3.
spec:
  serviceName: cassandra
  replicas: 3
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 8Gi
- Set replicas: 3
- Set storage: 8Gi
4.2.3. Configure database-setup.yml
vi database-setup.yml
# Change imagePullPolicy from Always to Never: database initialization creates the tb-db-setup pod, which uses the initialization dataset inside the thingsboard/tb-pe-node:3.6.3PE image that was already pulled, so the image should come from the local cache.
Set imagePullPolicy: Never for the tb-db-setup container:
apiVersion: v1
kind: Pod
metadata:
  name: tb-db-setup
  namespace: thingsboard
spec:
  volumes:
    - name: tb-node-config
      configMap:
        name: tb-node-config
        items:
          - key: conf
            path: thingsboard.conf
          - key: logback
            path: logback.xml
    - name: tb-node-logs
      emptyDir: {}
  containers:
    - name: tb-db-setup
      imagePullPolicy: Never # changed from Always to Never
      image: thingsboard/tb-pe-node:3.6.3PE
      env:
        - name: TB_SERVICE_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      envFrom:
        - configMapRef:
            name: tb-node-db-config
      volumeMounts:
        - mountPath: /config
          name: tb-node-config
        - mountPath: /var/log/thingsboard
          name: tb-node-logs
      command: ['sh', '-c', 'while [ ! -f /tmp/install-finished ]; do sleep 2; done;']
  restartPolicy: Never
4.2.4. Configure thingsboard.yml to use local images
vi thingsboard.yml
# Change imagePullPolicy: Always to imagePullPolicy: Never
:%s/imagePullPolicy: Always/imagePullPolicy: Never/g
4.2.5. Configure tb-node.yml to use local images
vi tb-node.yml
# Change imagePullPolicy: Always to imagePullPolicy: Never
:%s/imagePullPolicy: Always/imagePullPolicy: Never/g
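Instead of editing each file in vi, the same substitution can be applied non-interactively with sed (a convenience sketch, run from the thingsboard-pe-k8s/minikube directory):
sed -i 's/imagePullPolicy: Always/imagePullPolicy: Never/g' thingsboard.yml tb-node.yml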
5. Obtain and configure the license key
5.1. Obtain the license key
- Follow the steps on the official pricing page: https://thingsboard.io/pricing/
5.2. Configure the license key
cd thingsboard-pe-k8s/minikube
vi tb-node.yml
In vi, type /TB_LICENSE_SECRET and press Enter to jump to the line containing TB_LICENSE_SECRET in tb-node.yml.
- Set the TB_LICENSE_SECRET parameter:
# tb-node StatefulSet configuration
- name: TB_LICENSE_SECRET
  value: "PUT_YOUR_LICENSE_SECRET_HERE"
6. Run the database initialization scripts (master node)
Note: the following steps must be executed in order; each step must finish before the next one starts!
6.1. Deploy the database resources
cd thingsboard-pe-k8s/minikube
./k8s-install-tb.sh --loadDemo
Wait until the script reports: Installation finished successfully!
6.2. Deploy the third-party resources
cd thingsboard-pe-k8s/minikube
./k8s-deploy-thirdparty.sh
Note: on Kubernetes 1.18.0 this step may fail with: error: unable to recognize "routes.yml": no matches for kind "Ingress" in version "networking.k8s.io/v1" (the networking.k8s.io/v1 Ingress API is only available from Kubernetes 1.19 onward).
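Before moving on, confirm that the third-party pods (Zookeeper, Kafka, and so on) reach the Running state; a simple watch works (press Ctrl+C to stop):
kubectl get pods -n thingsboard --watch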
6.3. Deploy the ThingsBoard resources
cd thingsboard-pe-k8s/minikube
./k8s-deploy-resources.sh
7. Check the cluster service status
kubectl get pods
kubectl get deployment,pod,svc
Note: every service except tb-node should start normally; inspect the individual pod logs to confirm. The tb-node pod shuts itself down because license validation fails, so a valid license must be purchased.
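To inspect the tb-node failure, stream the pod's log; a sketch, assuming the default StatefulSet pod name tb-node-0:
kubectl logs -f tb-node-0 -n thingsboard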
8. Deploy external access
8.1. Create /root/mandatory.yaml
vi mandatory.yaml
- Copy the following content into mandatory.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-beijing.aliyuncs.com/kole_chang/controller:v1.0.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --watch-ingress-without-class=true
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.1
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.0.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-beijing.aliyuncs.com/kole_chang/kube-webhook-certgen:v1.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
8.2. Deploy mandatory.yaml
Run the commands:
cd /root
kubectl apply -f mandatory.yaml
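Verify that the controller pod is running, and note which node it landed on, since only that node will serve the ingress:
kubectl get pods -n ingress-nginx -o wide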
8.3. Update the ThingsBoard resources
cd thingsboard-pe-k8s/minikube
./k8s-deploy-resources.sh
Try logging in (the ingress is only reachable through nodes running the ingress-nginx-controller): http://192.168.8.100:8080/login
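A quick reachability test from a shell, assuming 192.168.8.100 is a node running the ingress-nginx-controller (the login page should answer with an HTTP 200 or a redirect):
curl -I http://192.168.8.100:8080/login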