Kubernetes: Service

Contents

I. Attribute Reference

II. Definition and Basic Configuration

 1. Definition

 2. Creating a Service

 2.1 type=ClusterIP

 2.2 type=NodePort

 2.3 Session Affinity (Sticky Access)

III. The Relationship Between Service, Endpoints, and Pod

IV. Service Discovery

 1. Accessing an External Service by IP Through a Service

 2. Accessing an External Service by Domain Name Through a Service

V. Installing and Using Ingress

 1. What Is Ingress?

 2. Installing Ingress

 2.1 Installing helm

 2.2 Environment Preparation

 2.3 Configuring an SSL Certificate

VI. Appendix

 1. helm Installation Package

 2. Modified values.yml

 3. Helm and Kubernetes Version Compatibility

 4. Ingress-nginx and k8s Version Compatibility


I. Attribute Reference

apiVersion (String): v1
kind (String): Service
metadata (Object): metadata
metadata.name (String): name of the Service
metadata.namespace (String): namespace; defaults to default when not specified
metadata.labels[] (list): list of custom labels
metadata.annotations[] (list): list of custom annotations
spec (Object): detailed description
spec.selector[] (list): Label Selector configuration; Pods carrying the specified labels are selected as the management scope
spec.type (String): type of the Service; defaults to ClusterIP.

ClusterIP: a virtual service IP address used by Pods inside the Kubernetes cluster; on each Node, kube-proxy forwards traffic to it according to the iptables rules it sets up.

NodePort: uses a port on the host, so that external clients able to reach a Node can access the service through the Node's IP address and port.

LoadBalancer: uses an external load balancer to distribute traffic to the service; the external load balancer's IP address must be set in the status.loadBalancer field, and nodePort and clusterIP are defined as well. Used in public cloud environments.

spec.clusterIP (String): virtual service IP address. When type=ClusterIP it is allocated automatically if left unset, but can also be specified manually; when type=LoadBalancer it must be specified.
spec.sessionAffinity (String): whether to support sessions; the only optional value is ClientIP, and the default is None.

ClientIP: requests from the same client (identified by its IP address) are forwarded to the same backend Pod.

spec.ports[] (list): list of Service ports
spec.ports[].name (String): port name
spec.ports[].protocol (String): port protocol; TCP and UDP are supported, with TCP as the default
spec.ports[].port (int): port the Service listens on
spec.ports[].targetPort (int): Pod port to forward to
spec.ports[].nodePort (int): when spec.type=NodePort, the host port the service is mapped to
status (Object): when spec.type=LoadBalancer, holds the external load balancer's address; used in public clouds
status.loadBalancer (Object): external load balancer
status.loadBalancer.ingress (Object): external load balancer ingress point
status.loadBalancer.ingress.ip (String): IP of the external load balancer
status.loadBalancer.ingress.hostname (String): hostname of the external load balancer
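To tie these attributes together, here is a minimal sketch of a Service manifest using the fields described above; every name and port number is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # metadata.name (placeholder)
  namespace: default        # metadata.namespace
spec:
  type: NodePort            # spec.type: ClusterIP (default), NodePort or LoadBalancer
  selector:
    app: demo               # spec.selector: pods carrying this label back the service
  sessionAffinity: None     # or ClientIP to pin a client to one pod
  ports:
    - name: http            # spec.ports[].name
      protocol: TCP         # spec.ports[].protocol
      port: 80              # port the service listens on
      targetPort: 8080      # pod port to forward to
      nodePort: 30080       # host port, only meaningful when type=NodePort
```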

II. Definition and Basic Configuration

 1. Definition

        A Service provides network access. Through a Service definition, client applications get a stable access address (a domain name or IP) and load balancing, and are shielded from changes in the backend Endpoints; it is the core resource with which Kubernetes implements microservices. Our services are usually distributed, so they are never backed by a single Pod; Pods also scale out and in, and they fail over, all of which changes Pod IPs. A Service routes requests to the Pods through its own load-balancing policy, so callers do not need to track Pod IP changes.

 2. Creating a Service

#create the deploy
 kubectl create -f nginx-deploy.yaml

#create the service
 kubectl create -f nginx-svc.yaml 
apiVersion: apps/v1  #API version
kind: Deployment  #resource type: Deployment
metadata: #metadata
  labels:  #labels
    app: my-nginx
  name: nginx-deploy
spec: #spec
  replicas: 3 #number of replicas
  revisionHistoryLimit: 10  #number of old revisions kept for rollback; 0 disables rollback
  selector: #selector
    matchLabels: #match by label
      app: my-nginx #label value
  template:
    metadata:
      labels:
        app: my-nginx
    spec: #container spec
      containers:
        - name: nginx-container
          image: nginx:1.21.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: nginx-port
              protocol: TCP
apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc  #name of the service
  labels:
    app: nginx-svc #the service's own label
spec:
  selector:
    app: my-nginx  #every pod matching this label is reachable through the service
  ports:
    - protocol: TCP  #TCP, UDP or SCTP; defaults to TCP
      port: 80 #the service's own port, used for access inside the cluster
      targetPort: 80 #target port
  type: ClusterIP #ClusterIP is the default
                  # NodePort binds a port in the 30000-32767 range on every node, exposing the service externally

#create the resources
kubectl create -f nginx-deploy.yaml 

kubectl create -f nginx-svc.yaml 

#list services
kubectl get services

kubectl get svc

#list endpoints
kubectl get endpoints

kubectl get ep

#list pods
kubectl get po -o wide

 

 Modify the nginx content in each of the three Pods

#exec into each container and change the nginx welcome page
kubectl exec nginx-deploy-6c648dd6dd-867b6 -it -- /bin/sh

kubectl exec nginx-deploy-6c648dd6dd-pj5tp -it -- /bin/sh 

kubectl exec nginx-deploy-6c648dd6dd-tgl82 -it -- /bin/sh  


echo "10.244.1.145   node2" > /usr/share/nginx/html/index.html

echo "10.244.1.144   node2" > /usr/share/nginx/html/index.html

echo "172.17.0.3   node1" > /usr/share/nginx/html/index.html

2.1 type=ClusterIP

 This address is only reachable by Pods inside the Kubernetes cluster. Access nginx through the service:

#access nginx through the service
while true ;do curl 10.97.61.178 ; sleep 1;done;

 You can see that the Service forwards requests to the pods at random.

#edit the deploy to scale up, changing the replica count to 4
kubectl edit deploy nginx-deploy

#list pods
kubectl get po -o wide

 

 Accessing the pods through the service again, the newly scaled-out pod is reachable as well.

 2.2 type=NodePort

Uses a port on the host, so that external clients can reach the service through any Node's IP address and port.

apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc  #name of the service
  labels:
    app: nginx-svc #the service's own label
spec:
  selector:
    app: my-nginx  #every pod matching this label is reachable through the service
  ports:
    - protocol: TCP  #TCP, UDP or SCTP; defaults to TCP
      port: 80 #the service's own port, used for access inside the cluster
      targetPort: 80 #target port
      nodePort: 30080  #fixed port for external access, bound on the host nodes; valid range 30000-32767
  type: NodePort
#create the resource
kubectl create -f nginx-svc.yaml 

#list services
kubectl get svc

 

 2.3 Session Affinity (Sticky Access)

apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc  #name of the service
  labels:
    app: nginx-svc #the service's own label
spec:
  selector:
    app: my-nginx  #every pod matching this label is reachable through the service
  ports:
    - protocol: TCP  #TCP, UDP or SCTP; defaults to TCP
      port: 80 #the service's own port, used for access inside the cluster
      targetPort: 80 #target pod port
  type: ClusterIP
  sessionAffinity: ClientIP #pin each client IP to one pod; the default is None
  sessionAffinityConfig:  #session configuration
    clientIP:
      timeoutSeconds: 3600 #maximum session sticky time, in seconds

#create the resource
kubectl create -f nginx-svc.yaml

#check
kubectl get svc

#accessing the pods through the service now always hits the same pod
while true ;do curl 10.111.86.26 ; sleep 1;done;

 

 

III. The Relationship Between Service, Endpoints, and Pod

Endpoints plays a role similar to the service registries we studied earlier (Eureka, Nacos). Endpoints is a Kubernetes resource object, stored in etcd, that records the access addresses of all the pods behind a service.

A Service consists of a group of Pods (associated through labels); these Pods are exposed through Endpoints, the collection of endpoints that implement the actual service.

In plain terms: a Kubernetes Service obtains the actual backend addresses from Endpoints, then routes and forwards traffic to a concrete pod.
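As an illustration, the Endpoints object that Kubernetes maintains for the nginx-svc above simply lists the ready pod IPs behind the service; a sketch (the third address is hypothetical):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx-svc          # always the same name as the Service
subsets:
  - addresses:             # ready pod IPs matched by the service's label selector
      - ip: 10.244.1.144
      - ip: 10.244.1.145
      - ip: 10.244.2.10    # hypothetical third pod IP
    ports:
      - port: 80           # the service's targetPort
        protocol: TCP
```

When a pod is rescheduled or scaled, the endpoints controller rewrites this list, which is why clients that go through the Service never need to track pod IPs themselves.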

IV. Service Discovery

1. Accessing an External Service by IP Through a Service

       An ordinary Service abstracts its backend Endpoints list through a Label Selector. If the backend Endpoints are not provided by Pod replicas, a Service can also be defined over any other service: a known service outside the Kubernetes cluster can be defined as a Service inside Kubernetes, for other in-cluster applications to access. Common scenarios:

  • A service already deployed outside the cluster, such as a database or a cache.
  • A service in another Kubernetes cluster.
  • Validating in-cluster access by service name to a service during a migration.
apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc-external  #name of the service
  labels:
    app: nginx #the service's own label
spec:
  ports:
    - port: 80 #the service's own port, used for access inside the cluster
      targetPort: 80 #target pod port
      name: web
  type: ClusterIP

---
#custom Endpoints

apiVersion: v1
kind: Endpoints  #resource type
metadata:
  labels:
    app: nginx #keep consistent with the service above
  name: nginx-svc-external #same name as the service
  namespace: default #namespace
subsets:
  - addresses:
      - ip: 192.168.139.1 #target ip; in-cluster traffic to this service is forwarded here; this test uses the local machine's ip, but a public ip works the same way
    ports:
      - port: 8080  #port; tomcat in this example
        name: web  #port name, matching the service above
        protocol: TCP #matching the Service


#create the resource
 kubectl create -f nginx-svc-external-ip.yaml

#list services and endpoints
kubectl get svc,ep

#test with a busybox container; create one if you don't have it
kubectl run -it --image busybox:1.28.4 dns-test  -- /bin/sh

#if it already exists, exec into the pod instead
kubectl exec -it dns-test -- sh

#inside the pod, test with wget; nginx-svc-external is the service name. Cross-namespace access is supported as <serviceName>.<namespace>
wget http://nginx-svc-external


 

Conclusion: accessing the service now reaches the Tomcat service.

2. Accessing an External Service by Domain Name Through a Service

apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc-external-domain  #name of the service
  labels:
    app: nginx-svc-external-domain #the service's own label
spec:
  type: ExternalName
  externalName: www.wssnail.com



#create the resource
 kubectl create -f nginx-svc-external-domain.yaml

#check
kubectl get svc

 

V. Installing and Using Ingress

1. What Is Ingress?

        Ingress provides HTTP and HTTPS routes from outside the cluster to services inside it. Traffic routing is controlled by the rules defined on the Ingress resource. When routing through Ingress, the Ingress Controller forwards client requests directly to the backend Endpoints of a Service based on the Ingress rules, bypassing the forwarding rules set up by kube-proxy and improving network forwarding efficiency. The following illustrates Ingress network access:

  •  Requests to www.wssnail.com/api are routed to the api service, which forwards them to the pods it manages
  •  Requests to www.wssnail.com/web are routed to the web service, which forwards them to the pods it manages
  •  Requests to www.wssnail.com/doc are routed to the doc service, which forwards them to the pods it manages
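The fan-out routing above can be sketched as a single Ingress with three path rules; the service names api-svc, web-svc and doc-svc are hypothetical placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-example
spec:
  ingressClassName: nginx
  rules:
    - host: www.wssnail.com
      http:
        paths:
          - path: /api          # routed to the api service
            pathType: Prefix
            backend:
              service:
                name: api-svc   # hypothetical service name
                port:
                  number: 80
          - path: /web          # routed to the web service
            pathType: Prefix
            backend:
              service:
                name: web-svc   # hypothetical service name
                port:
                  number: 80
          - path: /doc          # routed to the doc service
            pathType: Prefix
            backend:
              service:
                name: doc-svc   # hypothetical service name
                port:
                  number: 80
```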

2. Installing Ingress

2.1 Installing helm

#official install docs
https://helm.sh/zh/docs/intro/install/

#create a helm directory
mkdir helm

#enter the directory and download the helm package
wget https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz

#unpack the archive; the helm binary ends up in linux-amd64/
tar -zxf helm-v3.11.3-linux-amd64.tar.gz 

#move the binary to /usr/local/bin
mv linux-amd64/helm /usr/local/bin/

#check the version
helm version

#add the repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

#list repos
helm repo list

#search
helm search repo ingress-nginx

#pull the chart at a specific version
helm pull ingress-nginx/ingress-nginx --version 4.5.0

#move the chart into the helm directory
mv ingress-nginx-4.5.0.tgz /root/helm/

#unpack the chart
tar -xf ingress-nginx-4.5.0.tgz 

#enter the ingress-nginx directory and edit values.yml; the main changes are the image registry and image, plus the node label selector


The complete modified values.yml is listed in Appendix 2.


#create the namespace
 kubectl create ns ingress-nginx

#label the node
kubectl label node node1 ingress=true

#install; note the trailing dot
helm install ingress-nginx -n ingress-nginx .

#check
kubectl get po -n ingress-nginx



#if the installation failed, uninstall the helm release

#list helm releases
 helm list -n <namespace>

#uninstall
helm delete ingress-nginx -n  <namespace>

 

2.2 Environment Preparation

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
#  annotations:
#    kubernetes.io/ingress.class: "nginx"
#    nginx.ingress.kubernetes.io/enable-cors: "true"     # enable CORS
#    nginx.ingress.kubernetes.io/cors-allow-origin: "*"  # allow every origin
#    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"  # allowed HTTP methods
spec:
  ingressClassName: nginx
  rules:
    - host: test.wssnail.com
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
            path: /
  tls:
    - hosts:
        - test.wssnail.com
      secretName: ingress-secret
#---
#apiVersion: v1
#kind: Secret
#metadata:
#    name: example-tls
#data:
#    tls.crt: <base64 encoded cert>
#    tls.key: <base64 encoded key>
#type: kubernetes.io/tls

apiVersion: apps/v1  #API version
kind: Deployment  #resource type: Deployment
metadata: #metadata
  name: nginx-deploy-test-ingress
spec: #spec
  replicas: 3 #number of replicas
  selector: #selector
    matchLabels:
      app: nginx-test-ingress #label value
  template:
    metadata:
      labels:
        app: nginx-test-ingress
    spec: #container spec
      containers:
        - name: nginx-container
          image: nginx:1.21.6
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: nginx-port
              protocol: TCP
---
apiVersion: v1
kind: Service #resource type
metadata: #metadata
  name: nginx-svc  #name of the service
  labels:
    app: nginx-test-ingress #the service's own label
spec:
  selector:
    app: nginx-test-ingress  #every pod matching this label is reachable through the service
  ports:
    - protocol: TCP  #TCP, UDP or SCTP; defaults to TCP
      port: 80 #the service's own port, used for access inside the cluster
      name: web
  type: NodePort
#create the resources
kubectl create -f ingress-nginx.yaml 
kubectl create -f nginx-svc-test-ingress.yaml 

 Configure the hosts file: under C:\Windows\System32\drivers\etc, add a dns entry 

192.168.139.207  test.wssnail.com  #the ip of the node where the ingress pod runs

 Accessing this domain in a browser ran into a cross-origin problem that I have not solved yet, but the host pings fine, which proves that ingress forwards to the corresponding service.
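One thing worth trying for the browser cross-origin errors is to uncomment and apply the CORS annotations shown in the manifest above; a sketch of the relevant fragment (the values shown are permissive examples, narrow them for real deployments):

```yaml
metadata:
  name: example
  annotations:
    # handled by the ingress-nginx controller
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"   # example: allow every origin
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
```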

 

2.3 Configuring an SSL Certificate

 Generate the certificate

#generate a self-signed certificate with openssl; this produces the certificate files
 openssl req -x509 -nodes -days 500 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=test.wssnail.com"

 Create the secret

#create the secret; ingress-secret is the certificate name, and tls.key / tls.crt are the files generated in the previous step
kubectl create secret tls ingress-secret --key tls.key --cert tls.crt

Create the ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
#  annotations:
#    kubernetes.io/ingress.class: "nginx"
#    nginx.ingress.kubernetes.io/enable-cors: "true"     # enable CORS
#    nginx.ingress.kubernetes.io/cors-allow-origin: "*"  # allow every origin
#    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"  # allowed HTTP methods
spec:
  ingressClassName: nginx
  rules:
    - host: test.wssnail.com
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
            path: /
  tls:
    - hosts:
        - test.wssnail.com
      secretName: ingress-secret  #name of the secret created above


#---
#apiVersion: v1
#kind: Secret
#metadata:
#    name: example-tls
#data:
#    tls.crt: <base64 encoded cert>  use the base64-encoded content directly
#    tls.key: <base64 encoded key>
#type: kubernetes.io/tls
#create the resource; the domain can then be reached over https
kubectl create -f ingress-nginx.yaml 

VI. Appendix

1. helm Installation Package

Link: https://pan.baidu.com/s/1Pve4W3cMGh9HvasapL-81A?pwd=iafb  Extraction code: iafb 

2. Modified values.yml

## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
##

## Overrides for generated resource names
# See templates/_helpers.tpl
# nameOverride:
# fullnameOverride:

## Labels to apply to all resources
##
commonLabels: {}
# scmhash: abc123
# myLabel: aakkmd

controller:
    name: controller
    image:
        ## Keep false as default for now!
        chroot: false
        registry: registry.cn-hangzhou.aliyuncs.com  #image registry changed here
        image: google_containers/nginx-ingress-controller #image changed here
        ## for backwards compatibility consider setting the full image url via the repository value below
        ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
        ## repository:
        tag: "v1.6.3"
        #digest: sha256:b92667e0afde1103b736e6a3f00dd75ae66eec4e71827d19f19f471699e909d2  #digest verification commented out here
        #digestChroot: sha256:4b4a249c9a35ac16a8ec0e22f6c522b8707f7e59e656e64a4ad9ace8fea830a4 #digest verification commented out here
        pullPolicy: IfNotPresent
        # www-data -> uid 101
        runAsUser: 101
        allowPrivilegeEscalation: true
    # -- Use an existing PSP instead of creating one
    existingPsp: ""
    # -- Configures the controller container name
    containerName: controller
    # -- Configures the ports that the nginx-controller listens on
    containerPort:
        http: 80
        https: 443
    # -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
    config: {}
    # -- Annotations to be added to the controller config configuration configmap.
    configAnnotations: {}
    # -- Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/custom-headers
    proxySetHeaders: {}
    # -- Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
    addHeaders: {}
    # -- Optionally customize the pod dnsConfig.
    dnsConfig: {}
    # -- Optionally customize the pod hostname.
    hostname: {}
    # -- Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
    # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
    # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
    #dnsPolicy: ClusterFirst
    dnsPolicy: ClusterFirstWithHostNet #changed here
    # -- Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
    # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
    reportNodeInternalIp: false
    # -- Process Ingress objects without ingressClass annotation/ingressClassName field
    # Overrides value for --watch-ingress-without-class flag of the controller binary
    # Defaults to false
    watchIngressWithoutClass: false
    # -- Process IngressClass per name (additionally as per spec.controller).
    ingressClassByName: false
    # -- This configuration enables Topology Aware Routing feature, used together with service annotation service.kubernetes.io/topology-aware-hints="auto"
    # Defaults to false
    enableTopologyAwareRouting: false
    # -- This configuration defines if Ingress Controller should allow users to set
    # their own *-snippet annotations, otherwise this is forbidden / dropped
    # when users add those annotations.
    # Global snippets in ConfigMap are still respected
    allowSnippetAnnotations: true
    # -- Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
    # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
    # is merged
    hostNetwork: true  #changed here
    ## Use host ports 80 and 443
    ## Disabled by default
    hostPort:
        # -- Enable 'hostPort' or not
        enabled: false
        ports:
            # -- 'hostPort' http port
            http: 80
            # -- 'hostPort' https port
            https: 443
    # -- Election ID to use for status update, by default it uses the controller name combined with a suffix of 'leader'
    electionID: ""
    ## This section refers to the creation of the IngressClass resource
    ## IngressClass resources are supported since k8s >= 1.18 and required since k8s >= 1.19
    ingressClassResource:
        # -- Name of the ingressClass
        name: nginx
        # -- Is this ingressClass enabled or not
        enabled: true
        # -- Is this the default ingressClass for the cluster
        default: false
        # -- Controller-value of the controller that is processing this ingressClass
        controllerValue: "k8s.io/ingress-nginx"
        # -- Parameters is a link to a custom resource containing additional
        # configuration for the controller. This is optional if the controller
        # does not require extra parameters.
        parameters: {}
    # -- For backwards compatibility with ingress.class annotation, use ingressClass.
    # Algorithm is as follows, first ingressClassName is considered, if not present, controller looks for ingress.class annotation
    ingressClass: nginx
    # -- Labels to add to the pod container metadata
    podLabels: {}
    #  key: value

    # -- Security Context policies for controller pods
    podSecurityContext: {}
    # -- See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for notes on enabling and using sysctls
    sysctls: {}
    # sysctls:
    #   "net.core.somaxconn": "8192"

    # -- Allows customization of the source of the IP address or FQDN to report
    # in the ingress status field. By default, it reads the information provided
    # by the service. If disable, the status field reports the IP address of the
    # node or nodes where an ingress controller pod is running.
    publishService:
        # -- Enable 'publishService' or not
        enabled: true
        # -- Allows overriding of the publish service to bind to
        # Must be <namespace>/<service_name>
        pathOverride: ""
    # Limit the scope of the controller to a specific namespace
    scope:
        # -- Enable 'scope' or not
        enabled: false
        # -- Namespace to limit the controller to; defaults to $(POD_NAMESPACE)
        namespace: ""
        # -- When scope.enabled == false, instead of watching all namespaces, we watching namespaces whose labels
        # only match with namespaceSelector. Format like foo=bar. Defaults to empty, means watching all namespaces.
        namespaceSelector: ""
    # -- Allows customization of the configmap / nginx-configmap namespace; defaults to $(POD_NAMESPACE)
    configMapNamespace: ""
    tcp:
        # -- Allows customization of the tcp-services-configmap; defaults to $(POD_NAMESPACE)
        configMapNamespace: ""
        # -- Annotations to be added to the tcp config configmap
        annotations: {}
    udp:
        # -- Allows customization of the udp-services-configmap; defaults to $(POD_NAMESPACE)
        configMapNamespace: ""
        # -- Annotations to be added to the udp config configmap
        annotations: {}
    # -- Maxmind license key to download GeoLite2 Databases.
    ## https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
    maxmindLicenseKey: ""
    # -- Additional command line arguments to pass to nginx-ingress-controller
    # E.g. to specify the default SSL certificate you can use
    extraArgs: {}
    ## extraArgs:
    ##   default-ssl-certificate: "<namespace>/<secret_name>"

    # -- Additional environment variables to set
    extraEnvs: []
    # extraEnvs:
    #   - name: FOO
    #     valueFrom:
    #       secretKeyRef:
    #         key: FOO
    #         name: secret-resource

    # -- Use a `DaemonSet` or `Deployment`
    kind: DaemonSet  #changed to DaemonSet here
    # -- Annotations to be added to the controller Deployment or DaemonSet
    ##
    annotations: {}
    #  keel.sh/pollSchedule: "@every 60m"

    # -- Labels to be added to the controller Deployment or DaemonSet and other resources that do not have option to specify labels
    ##
    labels: {}
    #  keel.sh/policy: patch
    #  keel.sh/trigger: poll

    # -- The update strategy to apply to the Deployment or DaemonSet
    ##
    updateStrategy: {}
    #  rollingUpdate:
    #    maxUnavailable: 1
    #  type: RollingUpdate

    # -- `minReadySeconds` to avoid killing pods before we are ready
    ##
    minReadySeconds: 0
    # -- Node tolerations for server scheduling to nodes with taints
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    ##
    tolerations: []
    #  - key: "key"
    #    operator: "Equal|Exists"
    #    value: "value"
    #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

    # -- Affinity and anti-affinity rules for server scheduling to nodes
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
    ##
    affinity: {}
    # # An example of preferred pod anti-affinity, weight is in the range 1-100
    # podAntiAffinity:
    #   preferredDuringSchedulingIgnoredDuringExecution:
    #   - weight: 100
    #     podAffinityTerm:
    #       labelSelector:
    #         matchExpressions:
    #         - key: app.kubernetes.io/name
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/instance
    #           operator: In
    #           values:
    #           - ingress-nginx
    #         - key: app.kubernetes.io/component
    #           operator: In
    #           values:
    #           - controller
    #       topologyKey: kubernetes.io/hostname

    # # An example of required pod anti-affinity
    # podAntiAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #   - labelSelector:
    #       matchExpressions:
    #       - key: app.kubernetes.io/name
    #         operator: In
    #         values:
    #         - ingress-nginx
    #       - key: app.kubernetes.io/instance
    #         operator: In
    #         values:
    #         - ingress-nginx
    #       - key: app.kubernetes.io/component
    #         operator: In
    #         values:
    #         - controller
    #     topologyKey: "kubernetes.io/hostname"

    # -- Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in.
    ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
    ##
    topologySpreadConstraints: []
    # - maxSkew: 1
    #   topologyKey: topology.kubernetes.io/zone
    #   whenUnsatisfiable: DoNotSchedule
    #   labelSelector:
    #     matchLabels:
    #       app.kubernetes.io/instance: ingress-nginx-internal

    # -- `terminationGracePeriodSeconds` to avoid killing pods before we are ready
    ## wait up to five minutes for the drain of connections
    ##
    terminationGracePeriodSeconds: 300
    # -- Node labels for controller pod assignment
    ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector:
        kubernetes.io/os: linux
        ingress: "true"  #changed here: node selector label
    ## Liveness and readiness probe values
    ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
    ##
    ## startupProbe:
    ##   httpGet:
    ##     # should match container.healthCheckPath
    ##     path: "/healthz"
    ##     port: 10254
    ##     scheme: HTTP
    ##   initialDelaySeconds: 5
    ##   periodSeconds: 5
    ##   timeoutSeconds: 2
    ##   successThreshold: 1
    ##   failureThreshold: 5
    livenessProbe:
        httpGet:
            # should match container.healthCheckPath
            path: "/healthz"
            port: 10254
            scheme: HTTP
        initialDelaySeconds: 10
        periodSeconds: 10
        timeoutSeconds: 1
        successThreshold: 1
        failureThreshold: 5
    readinessProbe:
        httpGet:
            # should match container.healthCheckPath
            path: "/healthz"
            port: 10254
            scheme: HTTP
        initialDelaySeconds: 10
        periodSeconds: 10
        timeoutSeconds: 1
        successThreshold: 1
        failureThreshold: 3
    # -- Path of the health check endpoint. All requests received on the port defined by
    # the healthz-port parameter are forwarded internally to this path.
    healthCheckPath: "/healthz"
    # -- Address to bind the health check endpoint.
    # It is better to set this option to the internal node address
    # if the ingress nginx controller is running in the `hostNetwork: true` mode.
    healthCheckHost: ""
    # -- Annotations to be added to controller pods
    ##
    podAnnotations: {}
    replicaCount: 1
    # -- Define either 'minAvailable' or 'maxUnavailable', never both.
    minAvailable: 1
    # -- Define either 'minAvailable' or 'maxUnavailable', never both.
    # maxUnavailable: 1

    ## Define requests resources to avoid probe issues due to CPU utilization in busy nodes
    ## ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
    ## Ideally, there should be no limits.
    ## https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
    resources:
        ##  limits:
        ##    cpu: 100m
        ##    memory: 90Mi
        requests:
            cpu: 100m
            memory: 90Mi
    # Mutually exclusive with keda autoscaling
    autoscaling:
        apiVersion: autoscaling/v2
        enabled: false
        annotations: {}
        minReplicas: 1
        maxReplicas: 11
        targetCPUUtilizationPercentage: 50
        targetMemoryUtilizationPercentage: 50
        behavior: {}
        # scaleDown:
        #   stabilizationWindowSeconds: 300
        #   policies:
        #   - type: Pods
        #     value: 1
        #     periodSeconds: 180
        # scaleUp:
        #   stabilizationWindowSeconds: 300
        #   policies:
        #   - type: Pods
        #     value: 2
        #     periodSeconds: 60
    autoscalingTemplate: []
    # Custom or additional autoscaling metrics
    # ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics
    # - type: Pods
    #   pods:
    #     metric:
    #       name: nginx_ingress_controller_nginx_process_requests_total
    #     target:
    #       type: AverageValue
    #       averageValue: 10000m

    # Mutually exclusive with hpa autoscaling
    keda:
        apiVersion: "keda.sh/v1alpha1"
        ## apiVersion changes with keda 1.x vs 2.x
        ## 2.x = keda.sh/v1alpha1
        ## 1.x = keda.k8s.io/v1alpha1
        enabled: false
        minReplicas: 1
        maxReplicas: 11
        pollingInterval: 30
        cooldownPeriod: 300
        restoreToOriginalReplicaCount: false
        scaledObject:
            annotations: {}
            # Custom annotations for ScaledObject resource
            #  annotations:
            # key: value
        triggers: []
        #     - type: prometheus
        #       metadata:
        #         serverAddress: http://<prometheus-host>:9090
        #         metricName: http_requests_total
        #         threshold: '100'
        #         query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))

        behavior: {}
    #     scaleDown:
    #       stabilizationWindowSeconds: 300
    #       policies:
    #       - type: Pods
    #         value: 1
    #         periodSeconds: 180
    #     scaleUp:
    #       stabilizationWindowSeconds: 300
    #       policies:
    #       - type: Pods
    #         value: 2
    #         periodSeconds: 60

    # -- Enable mimalloc as a drop-in replacement for malloc.
    ## ref: https://github.com/microsoft/mimalloc
    ##
    enableMimalloc: true
    ## Override NGINX template
    customTemplate:
        configMapName: ""
        configMapKey: ""
    service:
        enabled: true
        # -- If enabled is adding an appProtocol option for Kubernetes service. An appProtocol field replacing annotations that were
        # using for setting a backend protocol. Here is an example for AWS: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # It allows choosing the protocol for each backend specified in the Kubernetes service.
        # See the following GitHub issue for more details about the purpose: https://github.com/kubernetes/kubernetes/issues/40244
        # Will be ignored for Kubernetes versions older than 1.20
        ##
        appProtocol: true
        annotations: {}
        labels: {}
        # clusterIP: ""

        # -- List of IP addresses at which the controller services are available
        ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
        ##
        externalIPs: []
        # -- Used by cloud providers to connect the resulting `LoadBalancer` to a pre-existing static IP according to https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
        loadBalancerIP: ""
        loadBalancerSourceRanges: []
        enableHttp: true
        enableHttps: true
        ## Set external traffic policy to: "Local" to preserve source IP on providers supporting it.
        ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
        # externalTrafficPolicy: ""

        ## Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
        ## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
        # sessionAffinity: ""

        ## Specifies the health check node port (numeric port number) for the service. If healthCheckNodePort isn’t specified,
        ## the service controller allocates a port from your cluster’s NodePort range.
        ## Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
        # healthCheckNodePort: 0

        # -- Represents the dual-stack-ness requested or required by this Service. Possible values are
        # SingleStack, PreferDualStack or RequireDualStack.
        # The ipFamilies and clusterIPs fields depend on the value of this field.
        ## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
        ipFamilyPolicy: "SingleStack"
        # -- List of IP families (e.g. IPv4, IPv6) assigned to the service. This field is usually assigned automatically
        # based on cluster configuration and the ipFamilyPolicy field.
        ## Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
        ipFamilies:
            - IPv4
        ports:
            http: 80
            https: 443
        targetPorts:
            http: http
            https: https
        #type: LoadBalancer  
        type: ClusterIP  #changed here
        ## type: NodePort
        ## nodePorts:
        ##   http: 32080
        ##   https: 32443
        ##   tcp:
        ##     8080: 32808
        nodePorts:
            http: ""
            https: ""
            tcp: {}
            udp: {}
        external:
            enabled: true
        internal:
            # -- Enables an additional internal load balancer (besides the external one).
            enabled: false
            # -- Annotations are mandatory for the load balancer to come up. Varies with the cloud service.
            annotations: {}
            # loadBalancerIP: ""

            # -- Restrict access For LoadBalancer service. Defaults to 0.0.0.0/0.
            loadBalancerSourceRanges: []
            ## Set external traffic policy to: "Local" to preserve source IP on
            ## providers supporting it
            ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
            # externalTrafficPolicy: ""
    # shareProcessNamespace enables process namespace sharing within the pod.
    # This can be used for example to signal log rotation using `kill -USR1` from a sidecar.
    shareProcessNamespace: false
    # -- Additional containers to be added to the controller pod.
    # See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
    extraContainers: []
    #  - name: my-sidecar
    #    image: nginx:latest
    #  - name: lemonldap-ng-controller
    #    image: lemonldapng/lemonldap-ng-controller:0.2.0
    #    args:
    #      - /lemonldap-ng-controller
    #      - --alsologtostderr
    #      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
    #    env:
    #      - name: POD_NAME
    #        valueFrom:
    #          fieldRef:
    #            fieldPath: metadata.name
    #      - name: POD_NAMESPACE
    #        valueFrom:
    #          fieldRef:
    #            fieldPath: metadata.namespace
    #    volumeMounts:
    #    - name: copy-portal-skins
    #      mountPath: /srv/var/lib/lemonldap-ng/portal/skins

    # -- Additional volumeMounts to the controller main container.
    extraVolumeMounts: []
    #  - name: copy-portal-skins
    #   mountPath: /var/lib/lemonldap-ng/portal/skins

    # -- Additional volumes to the controller pod.
    extraVolumes: []
    #  - name: copy-portal-skins
    #    emptyDir: {}

    # -- Containers, which are run before the app containers are started.
    extraInitContainers: []
    # - name: init-myservice
    #   image: busybox
    #   command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

    # -- Modules, which are mounted into the core nginx image. See values.yaml for a sample to add opentelemetry module
    extraModules: []
    #   containerSecurityContext:
    #     allowPrivilegeEscalation: false
    #
    # The image must contain a `/usr/local/bin/init_module.sh` executable, which
    # will be executed as initContainers, to move its config files within the
    # mounted volume.

    opentelemetry:
        enabled: false
        image: registry.k8s.io/ingress-nginx/opentelemetry:v20230107-helm-chart-4.4.2-2-g96b3d2165@sha256:331b9bebd6acfcd2d3048abbdd86555f5be76b7e3d0b5af4300b04235c6056c9
        containerSecurityContext:
            allowPrivilegeEscalation: false
    admissionWebhooks:
        annotations: {}
        # ignore-check.kube-linter.io/no-read-only-rootfs: "This deployment needs write access to root filesystem".

        ## Additional annotations to the admission webhooks.
        ## These annotations will be added to the ValidatingWebhookConfiguration and
        ## the Jobs Spec of the admission webhooks.
        enabled: false  # changed here: disable the admission webhooks (no SSL in this setup)
        # -- Additional environment variables to set
        extraEnvs: []
        # extraEnvs:
        #   - name: FOO
        #     valueFrom:
        #       secretKeyRef:
        #         key: FOO
        #         name: secret-resource
        # -- Admission Webhook failure policy to use
        failurePolicy: Fail
        # timeoutSeconds: 10
        port: 8443
        certificate: "/usr/local/certificates/cert"
        key: "/usr/local/certificates/key"
        namespaceSelector: {}
        objectSelector: {}
        # -- Labels to be added to admission webhooks
        labels: {}
        # -- Use an existing PSP instead of creating one
        existingPsp: ""
        networkPolicyEnabled: false
        service:
            annotations: {}
            # clusterIP: ""
            externalIPs: []
            # loadBalancerIP: ""
            loadBalancerSourceRanges: []
            servicePort: 443
            type: ClusterIP
        createSecretJob:
            securityContext:
                allowPrivilegeEscalation: false
            resources: {}
            # limits:
            #   cpu: 10m
            #   memory: 20Mi
            # requests:
            #   cpu: 10m
            #   memory: 20Mi
        patchWebhookJob:
            securityContext:
                allowPrivilegeEscalation: false
            resources: {}
        patch:
            enabled: true
            image:
                registry: registry.cn-hangzhou.aliyuncs.com  # changed here: use the Aliyun mirror registry
                image: google_containers/kube-webhook-certgen # changed here: mirrored image path
                ## for backwards compatibility consider setting the full image url via the repository value below
                ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
                ## repository:
                #tag: v20220916-gd32f8c343
                tag: v1.3.0
                #digest: sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
                pullPolicy: IfNotPresent
            # -- Provide a priority class name to the webhook patching job
            ##
            priorityClassName: ""
            podAnnotations: {}
            nodeSelector:
                kubernetes.io/os: linux
            tolerations: []
            # -- Labels to be added to patch job resources
            labels: {}
            securityContext:
                runAsNonRoot: true
                runAsUser: 2000
                fsGroup: 2000
        # Use certmanager to generate webhook certs
        certManager:
            enabled: false
            # self-signed root certificate
            rootCert:
                # default to be 5y
                duration: ""
            admissionCert:
                # default to be 1y
                duration: ""
                # issuerRef:
                #   name: "issuer"
                #   kind: "ClusterIssuer"
    metrics:
        port: 10254
        portName: metrics
        # if this port is changed, change healthz-port: in extraArgs: accordingly
        enabled: false
        service:
            annotations: {}
            # prometheus.io/scrape: "true"
            # prometheus.io/port: "10254"
            # -- Labels to be added to the metrics service resource
            labels: {}
            # clusterIP: ""

            # -- List of IP addresses at which the stats-exporter service is available
            ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
            ##
            externalIPs: []
            # loadBalancerIP: ""
            loadBalancerSourceRanges: []
            servicePort: 10254
            type: ClusterIP
            # externalTrafficPolicy: ""
            # nodePort: ""
        serviceMonitor:
            enabled: false
            additionalLabels: {}
            ## The label to use to retrieve the job name from.
            ## jobLabel: "app.kubernetes.io/name"
            namespace: ""
            namespaceSelector: {}
            ## Default: scrape .Release.Namespace only
            ## To scrape all, use the following:
            ## namespaceSelector:
            ##   any: true
            scrapeInterval: 30s
            # honorLabels: true
            targetLabels: []
            relabelings: []
            metricRelabelings: []
        prometheusRule:
            enabled: false
            additionalLabels: {}
            # namespace: ""
            rules: []
            # # These are just examples rules, please adapt them to your needs
            # - alert: NGINXConfigFailed
            #   expr: count(nginx_ingress_controller_config_last_reload_successful == 0) > 0
            #   for: 1s
            #   labels:
            #     severity: critical
            #   annotations:
            #     description: bad ingress config - nginx config test failed
            #     summary: uninstall the latest ingress changes to allow config reloads to resume
            # - alert: NGINXCertificateExpiry
            #   expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time()) < 604800
            #   for: 1s
            #   labels:
            #     severity: critical
            #   annotations:
            #     description: ssl certificate(s) will expire in less then a week
            #     summary: renew expiring certificates to avoid downtime
            # - alert: NGINXTooMany500s
            #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
            #   for: 1m
            #   labels:
            #     severity: warning
            #   annotations:
            #     description: Too many 5XXs
            #     summary: More than 5% of all requests returned 5XX, this requires your attention
            # - alert: NGINXTooMany400s
            #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
            #   for: 1m
            #   labels:
            #     severity: warning
            #   annotations:
            #     description: Too many 4XXs
            #     summary: More than 5% of all requests returned 4XX, this requires your attention
    # -- Improve connection draining when ingress controller pod is deleted using a lifecycle hook:
    # With this new hook, we increased the default terminationGracePeriodSeconds from 30 seconds
    # to 300, allowing the draining of connections up to five minutes.
    # If the active connections end before that, the pod will terminate gracefully at that time.
    # To effectively take advantage of this feature, the Configmap feature
    # worker-shutdown-timeout new value is 240s instead of 10s.
    ##
    lifecycle:
        preStop:
            exec:
                command:
                    - /wait-shutdown
    priorityClassName: ""
# -- Rollback limit
##
revisionHistoryLimit: 10
## Default 404 backend
##
defaultBackend:
    ##
    enabled: false
    name: defaultbackend
    image:
        registry: registry.k8s.io
        image: defaultbackend-amd64
        ## for backwards compatibility consider setting the full image url via the repository value below
        ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
        ## repository:
        tag: "1.5"
        pullPolicy: IfNotPresent
        # nobody user -> uid 65534
        runAsUser: 65534
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
    # -- Use an existing PSP instead of creating one
    existingPsp: ""
    extraArgs: {}
    serviceAccount:
        create: true
        name: ""
        automountServiceAccountToken: true
    # -- Additional environment variables to set for defaultBackend pods
    extraEnvs: []
    port: 8080
    ## Readiness and liveness probes for default backend
    ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
    ##
    livenessProbe:
        failureThreshold: 3
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 5
    readinessProbe:
        failureThreshold: 6
        initialDelaySeconds: 0
        periodSeconds: 5
        successThreshold: 1
        timeoutSeconds: 5
    # -- The update strategy to apply to the Deployment or DaemonSet
    ##
    updateStrategy: {}
    #  rollingUpdate:
    #    maxUnavailable: 1
    #  type: RollingUpdate

    # -- `minReadySeconds` to avoid killing pods before we are ready
    ##
    minReadySeconds: 0
    # -- Node tolerations for server scheduling to nodes with taints
    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    ##
    tolerations: []
    #  - key: "key"
    #    operator: "Equal|Exists"
    #    value: "value"
    #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

    affinity: {}
    # -- Security Context policies for controller pods
    # See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
    # notes on enabling and using sysctls
    ##
    podSecurityContext: {}
    # -- Security Context policies for controller main container.
    # See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
    # notes on enabling and using sysctls
    ##
    containerSecurityContext: {}
    # -- Labels to add to the pod container metadata
    podLabels: {}
    #  key: value

    # -- Node labels for default backend pod assignment
    ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
    ##
    nodeSelector:
        kubernetes.io/os: linux
    # -- Annotations to be added to default backend pods
    ##
    podAnnotations: {}
    replicaCount: 1
    minAvailable: 1
    resources: {}
    # limits:
    #   cpu: 10m
    #   memory: 20Mi
    # requests:
    #   cpu: 10m
    #   memory: 20Mi

    extraVolumeMounts: []
    ## Additional volumeMounts to the default backend container.
    #  - name: copy-portal-skins
    #   mountPath: /var/lib/lemonldap-ng/portal/skins

    extraVolumes: []
    ## Additional volumes to the default backend pod.
    #  - name: copy-portal-skins
    #    emptyDir: {}

    autoscaling:
        annotations: {}
        enabled: false
        minReplicas: 1
        maxReplicas: 2
        targetCPUUtilizationPercentage: 50
        targetMemoryUtilizationPercentage: 50
    service:
        annotations: {}
        # clusterIP: ""

        # -- List of IP addresses at which the default backend service is available
        ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
        ##
        externalIPs: []
        # loadBalancerIP: ""
        loadBalancerSourceRanges: []
        servicePort: 80
        type: ClusterIP
    priorityClassName: ""
    # -- Labels to be added to the default backend resources
    labels: {}
## Enable RBAC as per https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/rbac.md and https://github.com/kubernetes/ingress-nginx/issues/266
rbac:
    create: true
    scope: false
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
    enabled: false
serviceAccount:
    create: true
    name: ""
    automountServiceAccountToken: true
    # -- Annotations for the controller service account
    annotations: {}
# -- Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName

# -- TCP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp: {}
#  8080: "default/example-tcp-svc:9000"

# -- UDP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
udp: {}
#  53: "kube-system/kube-dns:53"

# -- Prefix for TCP and UDP ports names in ingress controller service
## Some cloud providers, like Yandex Cloud may have a requirements for a port name regex to support cloud load balancer integration
portNamePrefix: ""
# -- (string) A base64-encoded Diffie-Hellman parameter.
# This can be generated with: `openssl dhparam 4096 2> /dev/null | base64`
## Ref: https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/customization/ssl-dh-param
dhParam:
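
The modified values.yaml above is applied by pointing Helm at it. A minimal sketch — the chart directory `./ingress-nginx` and the target namespace `ingress-nginx` are assumptions; adjust both to your environment:

```shell
# Install (or upgrade) ingress-nginx using the modified values.yaml.
# Assumed: the chart was unpacked to ./ingress-nginx and the release
# should live in the ingress-nginx namespace.
helm upgrade --install ingress-nginx ./ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    -f values.yaml
```

`helm upgrade --install` is idempotent: it installs the release on the first run and applies values changes on subsequent runs.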

3、Helm and Kubernetes version compatibility

Helm version    Supported Kubernetes versions
3.12.x          1.27.x - 1.24.x
3.11.x          1.26.x - 1.23.x
3.10.x          1.25.x - 1.22.x
3.9.x           1.24.x - 1.21.x
3.8.x           1.23.x - 1.20.x
3.7.x           1.22.x - 1.19.x
3.6.x           1.21.x - 1.18.x
3.5.x           1.20.x - 1.17.x
3.4.x           1.19.x - 1.16.x
3.3.x           1.18.x - 1.15.x
3.2.x           1.18.x - 1.15.x
3.1.x           1.17.x - 1.14.x
3.0.x           1.16.x - 1.13.x
2.16.x          1.16.x - 1.15.x
2.15.x          1.15.x - 1.14.x
2.14.x          1.14.x - 1.13.x
2.13.x          1.13.x - 1.12.x
2.12.x          1.12.x - 1.11.x
2.11.x          1.11.x - 1.10.x
2.10.x          1.10.x - 1.9.x
2.9.x           1.10.x - 1.9.x
2.8.x           1.9.x - 1.8.x
2.7.x           1.8.x - 1.7.x
2.6.x           1.7.x - 1.6.x
2.5.x           1.6.x - 1.5.x
2.4.x           1.6.x - 1.5.x
2.3.x           1.5.x - 1.4.x
2.2.x           1.5.x - 1.4.x
2.1.x           1.5.x - 1.4.x
2.0.x           1.4.x - 1.3.x
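
Before picking a Helm release from the table, it helps to confirm the versions actually in use. A quick check (querying the API server version additionally requires access to a running cluster):

```shell
# Print the Helm client version (compare against the left column above)
helm version --short

# Print the kubectl client version; drop --client to also query
# the API server for the cluster version
kubectl version --client
```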

4、Ingress-NGINX and Kubernetes version compatibility

Supported  Ingress-NGINX version  Supported Kubernetes versions    Alpine version  Nginx version  Helm chart version
🔄         v1.11.2                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.11.2
🔄         v1.11.1                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.11.1
🔄         v1.11.0                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.11.0
🔄         v1.10.4                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.10.4
🔄         v1.10.3                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.10.3
🔄         v1.10.2                1.30, 1.29, 1.28, 1.27, 1.26     3.20.0          1.25.5         4.10.2
🔄         v1.10.1                1.30, 1.29, 1.28, 1.27, 1.26     3.19.1          1.25.3         4.10.1
🔄         v1.10.0                1.29, 1.28, 1.27, 1.26           3.19.1          1.25.3         4.10.0
           v1.9.6                 1.29, 1.28, 1.27, 1.26, 1.25     3.19.0          1.21.6         4.9.1
           v1.9.5                 1.28, 1.27, 1.26, 1.25           3.18.4          1.21.6         4.9.0
           v1.9.4                 1.28, 1.27, 1.26, 1.25           3.18.4          1.21.6         4.8.3
           v1.9.3                 1.28, 1.27, 1.26, 1.25           3.18.4          1.21.6         4.8.*
           v1.9.1                 1.28, 1.27, 1.26, 1.25           3.18.4          1.21.6         4.8.*
           v1.9.0                 1.28, 1.27, 1.26, 1.25           3.18.2          1.21.6         4.8.*
           v1.8.4                 1.27, 1.26, 1.25, 1.24           3.18.2          1.21.6         4.7.*
           v1.7.1                 1.27, 1.26, 1.25, 1.24           3.17.2          1.21.6         4.6.*
           v1.6.4                 1.26, 1.25, 1.24, 1.23           3.17.0          1.21.6         4.5.*
           v1.5.1                 1.25, 1.24, 1.23                 3.16.2          1.21.6         4.4.*
           v1.4.0                 1.25, 1.24, 1.23, 1.22           3.16.2          1.19.10†       4.3.0
           v1.3.1                 1.24, 1.23, 1.22, 1.21, 1.20     3.16.2          1.19.10†       4.2.5
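
To check which controller build is actually running (and therefore which row of the table applies), the controller binary can report its own version. A sketch, assuming the release was installed into the ingress-nginx namespace with the chart's standard labels:

```shell
# Find the controller pod by its standard chart label, then ask the
# controller binary inside it for its version.
POD=$(kubectl -n ingress-nginx get pods \
      -l app.kubernetes.io/name=ingress-nginx \
      -o jsonpath='{.items[0].metadata.name}')
kubectl -n ingress-nginx exec "$POD" -- /nginx-ingress-controller --version
```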

 
