K8S Log Collection Solution: EFK Deployment

EFK Architecture Workflow

(Figure: EFK architecture workflow. A Fluentd DaemonSet collects container logs on every node and ships them to Elasticsearch; Kibana queries and visualizes them.)

Deployment Environment

ECK (Elastic Cloud on Kubernetes): 2.7.0
Kubernetes: 1.23.0

File Preparation

crds.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/crds.yaml

operator.yaml
Download: https://download.elastic.co/downloads/eck/2.7.0/operator.yaml
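
Both manifests can be fetched ahead of time, for example:

# download the ECK 2.7.0 CRD and operator manifests
wget https://download.elastic.co/downloads/eck/2.7.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.7.0/operator.yaml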

elastic.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: efk-elastic
  namespace: elastic-system
spec:
  version: 8.12.2
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
            storageClassName: openebs-hostpath
      podTemplate:
        spec:
          containers:
          - name: elasticsearch
            env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            resources:
              requests:
                memory: 2Gi
                cpu: 2
              limits:
                memory: 4Gi
                cpu: 2
          # https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]


---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-elastic-nodeport
  name: efk-elastic-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: efk-elastic
  ports:
    - name: http-9200
      protocol: TCP
      targetPort: 9200
      port: 9200
      nodePort: 30920
    - name: http-9300
      protocol: TCP
      targetPort: 9300
      port: 9300
      nodePort: 30921
  type: NodePort


---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: efk-kibana
  namespace: elastic-system
spec:
  version: 8.12.2
  count: 1
  elasticsearchRef:
    name: efk-elastic
    namespace: elastic-system

---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic-system
  labels:
    app: efk-kibana-nodeport
  name: efk-kibana-nodeport
spec:
  sessionAffinity: None
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: efk-kibana
  ports:
    - name: http-5601
      protocol: TCP
      targetPort: 5601
      port: 5601
      nodePort: 30922
  type: NodePort
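
After elastic.yaml is applied (see the deployment steps below), the quickest health check is the operator's own status columns; ECK reports HEALTH and PHASE directly on the Elasticsearch and Kibana custom resources:

# ECK adds HEALTH/PHASE columns to its CRDs
kubectl get elasticsearch,kibana -n elastic-system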

fluentd-es-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config
  namespace: elastic-system
data:
  fluent.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/fluent.conf

    @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
    @include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
    @include kubernetes.conf
    @include conf.d/*.conf

    <match kubernetes.**>
      # https://github.com/kubernetes/kubernetes/issues/23001
      @type elasticsearch_dynamic
      @id kubernetes_elasticsearch
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix logstash-${record['kubernetes']['namespace_name']}
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>

    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'logstash'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
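  # Stubs for the files included by fluent.conf above. Mounting this ConfigMap
  # at /fluentd/etc hides the copies bundled in the image, so empty ones are
  # provided here to keep the @include directives from failing at startup.
  systemd.conf: ""
  disable.conf: ""
  prometheus.conf: ""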
  kubernetes.conf: |-
    # https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/docker-image/v1.11/debian-elasticsearch7/conf/kubernetes.conf

    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
        @id ignore_fluent_logs
      </match>
    </label>

    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

    <source>
      @type tail
      @id in_tail_minion
      path /var/log/salt/minion
      pos_file /var/log/fluentd-salt.pos
      tag salt
      <parse>
        @type regexp
        expression /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
        time_format %Y-%m-%d %H:%M:%S
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_startupscript
      path /var/log/startupscript.log
      pos_file /var/log/fluentd-startupscript.log.pos
      tag startupscript
      <parse>
        @type syslog
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_docker
      path /var/log/docker.log
      pos_file /var/log/fluentd-docker.log.pos
      tag docker
      <parse>
        @type regexp
        expression /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_etcd
      path /var/log/etcd.log
      pos_file /var/log/fluentd-etcd.log.pos
      tag etcd
      <parse>
        @type none
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kubelet
      multiline_flush_interval 5s
      path /var/log/kubelet.log
      pos_file /var/log/fluentd-kubelet.log.pos
      tag kubelet
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_proxy
      multiline_flush_interval 5s
      path /var/log/kube-proxy.log
      pos_file /var/log/fluentd-kube-proxy.log.pos
      tag kube-proxy
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_apiserver
      multiline_flush_interval 5s
      path /var/log/kube-apiserver.log
      pos_file /var/log/fluentd-kube-apiserver.log.pos
      tag kube-apiserver
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_controller_manager
      multiline_flush_interval 5s
      path /var/log/kube-controller-manager.log
      pos_file /var/log/fluentd-kube-controller-manager.log.pos
      tag kube-controller-manager
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_kube_scheduler
      multiline_flush_interval 5s
      path /var/log/kube-scheduler.log
      pos_file /var/log/fluentd-kube-scheduler.log.pos
      tag kube-scheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_rescheduler
      multiline_flush_interval 5s
      path /var/log/rescheduler.log
      pos_file /var/log/fluentd-rescheduler.log.pos
      tag rescheduler
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_glbc
      multiline_flush_interval 5s
      path /var/log/glbc.log
      pos_file /var/log/fluentd-glbc.log.pos
      tag glbc
      <parse>
        @type kubernetes
      </parse>
    </source>

    <source>
      @type tail
      @id in_tail_cluster_autoscaler
      multiline_flush_interval 5s
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/fluentd-cluster-autoscaler.log.pos
      tag cluster-autoscaler
      <parse>
        @type kubernetes
      </parse>
    </source>

    # Example:
    # 2017-02-09T00:15:57.992775796Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" ip="104.132.1.72" method="GET" user="kubecfg" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
    # 2017-02-09T00:15:57.993528822Z AUDIT: id="90c73c7c-97d6-4b65-9461-f94606ff825f" response="200"
    <source>
      @type tail
      @id in_tail_kube_apiserver_audit
      multiline_flush_interval 5s
      path /var/log/kubernetes/kube-apiserver-audit.log
      pos_file /var/log/kube-apiserver-audit.log.pos
      tag kube-apiserver-audit
      <parse>
        @type multiline
        format_firstline /^\S+\s+AUDIT:/
        # Fields must be explicitly captured by name to be parsed into the record.
        # Fields may not always be present, and order may change, so this just looks
        # for a list of key="\"quoted\" value" pairs separated by spaces.
        # Unknown fields are ignored.
        # Note: We can't separate query/response lines as format1/format2 because
        #       they don't always come one after the other for a given query.
        format1 /^(?<time>\S+) AUDIT:(?: (?:id="(?<id>(?:[^"\\]|\\.)*)"|ip="(?<ip>(?:[^"\\]|\\.)*)"|method="(?<method>(?:[^"\\]|\\.)*)"|user="(?<user>(?:[^"\\]|\\.)*)"|groups="(?<groups>(?:[^"\\]|\\.)*)"|as="(?<as>(?:[^"\\]|\\.)*)"|asgroups="(?<asgroups>(?:[^"\\]|\\.)*)"|namespace="(?<namespace>(?:[^"\\]|\\.)*)"|uri="(?<uri>(?:[^"\\]|\\.)*)"|response="(?<response>(?:[^"\\]|\\.)*)"|\w+="(?:[^"\\]|\\.)*"))*/
        time_format %Y-%m-%dT%T.%L%Z
      </parse>
    </source>
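
One detail worth calling out in the <match kubernetes.**> block of fluent.conf above: logstash_prefix interpolates the record's namespace, so each namespace writes to its own daily index (e.g. logstash-default-2024.03.15). Once logs are flowing, this can be confirmed by listing indices through the NodePort, a sketch assuming the node IP 192.168.1.180 used later in this article and the elastic password in $PW (see the Credentials section):

# -k skips verification of the self-signed certificate
curl -k -u "elastic:${PW}" "https://192.168.1.180:30920/_cat/indices/logstash-*?v"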

fluentd-es-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
rules:
  - apiGroups:
      - ""
    resources:
      - "namespaces"
      - "pods"
    verbs:
      - "get"
      - "watch"
      - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    app: fluentd-es
subjects:
  - kind: ServiceAccount
    name: fluentd-es
    namespace: elastic-system
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: elastic-system
  labels:
    app: fluentd-es
spec:
  selector:
    matchLabels:
      app: fluentd-es
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      serviceAccountName: fluentd-es
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd-es
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-2
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: efk-elastic-es-http
            # default user
            - name: FLUENT_ELASTICSEARCH_USER
              value: elastic
            # the password secret is created automatically by the Elasticsearch deployment
            - name: FLUENT_ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: efk-elastic-es-elastic-user
                  key: elastic
            # elasticsearch standard port
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            # the Elastic operator serves HTTPS by default
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "https"
            # we don't need systemd logs for now
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
            # the certs are self-signed, so TLS verification must be disabled
            - name: FLUENT_ELASTICSEARCH_SSL_VERIFY
              value: "false"
            # to avoid issue https://github.com/uken/fluent-plugin-elasticsearch/issues/525
            - name: FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS
              value: "false"
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config-volume
              mountPath: /fluentd/etc
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config-volume
          configMap:
            name: fluentd-es-config
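
Once the DaemonSet is applied (steps below), it should schedule one collector pod per node, and the pod logs will quickly surface any TLS or authentication problems:

# expect DESIRED to equal the number of nodes
kubectl get ds fluentd-es -n elastic-system
# check a collector for connection errors against Elasticsearch
kubectl logs -n elastic-system -l app=fluentd-es --tail=50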

index-cleaner.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: index-cleaner
  namespace: elastic-system
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          containers:
            - name: index-cleaner
              image: hcd1129/es-index-cleaner:latest
              env:
                - name: DAYS
                  value: "2"
                - name: PREFIX
                  value: logstash-*
                - name: ES_HOST
                  value: https://efk-elastic-es-http:9200
                - name: ES_USER
                  value: elastic
                - name: ES_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: elastic
                      name: efk-elastic-es-elastic-user
          restartPolicy: Never
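
The schedule fires at midnight every day; to verify the job without waiting, a one-off run can be created from the CronJob template:

# create an ad-hoc job from the CronJob and inspect its output
kubectl create job --from=cronjob/index-cleaner index-cleaner-manual -n elastic-system
kubectl logs -n elastic-system job/index-cleaner-manual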

es-index-cleaner image
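
The cleaner image referenced above is a community build and its source is not reproduced here. Functionally it only needs to delete indices matching PREFIX whose date stamp is older than DAYS. A minimal bash sketch of that logic, assuming GNU date and the env vars from index-cleaner.yaml (the actual image may differ):

#!/bin/bash
# hypothetical equivalent of the cleaner's job, driven by DAYS/PREFIX/ES_* env vars
CUTOFF=$(date -d "-${DAYS} days" +%Y.%m.%d)
for IDX in $(curl -sk -u "${ES_USER}:${ES_PASSWORD}" "${ES_HOST}/_cat/indices/${PREFIX}?h=index"); do
  STAMP=${IDX##*-}   # trailing date stamp, e.g. 2024.03.15
  # %Y.%m.%d stamps sort lexicographically, so string comparison is enough
  if [[ "${STAMP}" < "${CUTOFF}" ]]; then
    curl -sk -u "${ES_USER}:${ES_PASSWORD}" -XDELETE "${ES_HOST}/${IDX}"
  fi
done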

Deployment Steps

# run the following in order

# install the Operator
kubectl create -f crds.yaml

kubectl apply -f operator.yaml

# install Elasticsearch and Kibana
kubectl apply -f elastic.yaml

# install Fluentd
kubectl apply -f fluentd-es-configmap.yaml

kubectl apply -f fluentd-es-ds.yaml

# deploy the scheduled job that cleans up old log indices
kubectl apply -f index-cleaner.yaml

Deployment Result

kubectl get all -n elastic-system

(Screenshot: the resources created in the elastic-system namespace.)

Credentials

# ES username: elastic
# view the ES password
kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode; echo
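
The decoded password can be used directly against the HTTPS NodePort to confirm the cluster is healthy:

PW=$(kubectl get secret efk-elastic-es-elastic-user -n elastic-system -o=jsonpath='{.data.elastic}' | base64 --decode)
# -k skips verification of the self-signed certificate
curl -k -u "elastic:${PW}" "https://192.168.1.180:30920/_cluster/health?pretty"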

Results

Elasticsearch: https://192.168.1.180:30920/
(Screenshot: Elasticsearch in the browser.)
Kibana: https://192.168.1.180:30922/
(Screenshot: Kibana in the browser.)
