Deploying and testing k8s and deepflow

Deploying k8s and deepflow on Ubuntu 22.04 LTS

Environment details:
Static hostname: k8smaster.example.net
Icon name: computer-vm
Chassis: vm
Machine ID: 22349ac6f9ba406293d0541bcba7c05d
Boot ID: 605a74a509724a88940bbbb69cde77f2
Virtualization: vmware
Operating System: Ubuntu 22.04.4 LTS
Kernel: Linux 5.15.0-106-generic
Architecture: x86-64
Hardware Vendor: VMware, Inc.
Hardware Model: VMware Virtual Platform

To install a Kubernetes cluster on Ubuntu 22.04, follow these steps:

  1. Set hostnames and add entries to the hosts file

    • Log in to the master node and set its hostname with the hostnamectl command:

      hostnamectl set-hostname "k8smaster.example.net"
      
    • On the worker nodes, run the following to set the hostnames (first and second worker node respectively):

      hostnamectl set-hostname "k8sworker1.example.net"  # first worker node
      hostnamectl set-hostname "k8sworker2.example.net"  # second worker node
      
    • Add the following entries to the /etc/hosts file on every node:

      10.1.1.70 k8smaster.example.net k8smaster
      10.1.1.71 k8sworker1.example.net k8sworker1
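The hosts entries need to be added on every node, so it is convenient to make the step idempotent. A minimal sketch, demonstrated on a temporary file rather than the real /etc/hosts (the helper name add_host_entry is mine, not a standard tool):

```shell
# add_host_entry FILE IP FQDN SHORTNAME -- append the mapping only if the
# FQDN is not already present, so the script is safe to re-run on any node
add_host_entry() {
  local file="$1" ip="$2" fqdn="$3" short="$4"
  grep -q " $fqdn " "$file" 2>/dev/null || echo "$ip $fqdn $short" >> "$file"
}

hosts_file=$(mktemp)   # stand-in for /etc/hosts in this demo
add_host_entry "$hosts_file" 10.1.1.70 k8smaster.example.net  k8smaster
add_host_entry "$hosts_file" 10.1.1.71 k8sworker1.example.net k8sworker1
add_host_entry "$hosts_file" 10.1.1.70 k8smaster.example.net  k8smaster  # duplicate, skipped
cat "$hosts_file"
```

On a real node, replace "$hosts_file" with /etc/hosts and run as root.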
      
  2. Disable swap and add kernel settings

    • Run the following on all nodes to disable swap:

      swapoff -a
      sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
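The sed expression comments out every /etc/fstab line containing "swap" so swap stays off after a reboot. A self-contained illustration of what it does, run against a temporary sample fstab instead of the real one:

```shell
# '/swap/ s/^\(.*\)$/#\1/g' -- for lines matching /swap/, replace the whole
# line with itself prefixed by '#', i.e. comment the swap entry out
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed -i '/swap/ s/^\(.*\)$/#\1/g' "$fstab"
cat "$fstab"
```

The root filesystem line is untouched; only the swap line gains a leading '#'.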
      
    • Load the following kernel modules:

      tee /etc/modules-load.d/containerd.conf <<EOF
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      
    • Set the following kernel parameters for Kubernetes:

      tee /etc/sysctl.d/kubernetes.conf <<EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      EOF
      sysctl --system
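After sysctl --system it is worth confirming that all three parameters are present in the conf file. A small sketch of that check (demonstrated here on a temporary copy; on a real node point conf at /etc/sysctl.d/kubernetes.conf):

```shell
# verify every kernel parameter Kubernetes needs is set to 1 in the conf file
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

missing=0
for key in net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward; do
  grep -q "^$key *= *1$" "$conf" || { echo "missing: $key"; missing=1; }
done
echo "missing=$missing"
```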
      
  3. Install the containerd runtime

    • First install containerd's dependencies:

      apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      
    • Enable the Docker repository:

      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      
    • Install containerd:

      apt update
      apt install -y containerd.io
      
    • Configure containerd to use the systemd cgroup driver:

      containerd config default | tee /etc/containerd/config.toml > /dev/null 2>&1
      sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
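Note that containerd config default writes the assignment with spaces around the equals sign (SystemdCgroup = false), so the substitution pattern must include them. A self-contained check on a sample fragment:

```shell
# containerd's generated config spells it "SystemdCgroup = false";
# verify the substitution flips it, using a temporary sample fragment
toml=$(mktemp)
cat > "$toml" <<'EOF'
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
EOF
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$toml"
grep SystemdCgroup "$toml"
```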
      

      Parts of the config are then modified by hand (the full /etc/containerd/config.toml follows):

      disabled_plugins = []
      imports = []
      oom_score = 0
      plugin_dir = ""
      required_plugins = []
      root = "/var/lib/containerd"
      state = "/run/containerd"
      temp = ""
      version = 2
      
      [cgroup]
      path = ""
      
      [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
      
      [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0
      
      [metrics]
      address = ""
      grpc_histogram = false
      
      [plugins]
      
      [plugins."io.containerd.gc.v1.scheduler"]
          deletion_threshold = 0
          mutation_threshold = 100
          pause_threshold = 0.02
          schedule_delay = "0s"
          startup_delay = "100ms"
      
      [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = false
          disable_apparmor = false
          disable_cgroup = false
          disable_hugetlb_controller = true
          disable_proc_mount = false
          disable_tcp_service = true
          drain_exec_sync_io_timeout = "0s"
          enable_selinux = false
          enable_tls_streaming = false
          enable_unprivileged_icmp = false
          enable_unprivileged_ports = false
          ignore_deprecation_warnings = []
          ignore_image_defined_volumes = false
          max_concurrent_downloads = 3
          max_container_log_line_size = 16384
          netns_mounts_under_state_dir = false
          restrict_oom_score_adj = false
          # modify the following line
          sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
          selinux_category_range = 1024
          stats_collect_period = 10
          stream_idle_timeout = "4h0m0s"
          stream_server_address = "127.0.0.1"
          stream_server_port = "0"
          systemd_cgroup = false
          tolerate_missing_hugetlb_controller = true
          unset_seccomp_profile = ""
      
          [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1
      
          [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"
      
          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                  BinaryName = ""
                  CriuImagePath = ""
                  CriuPath = ""
                  CriuWorkPath = ""
                  IoGid = 0
                  IoUid = 0
                  NoNewKeyring = false
                  NoPivotRoot = false
                  Root = ""
                  ShimCgroup = ""
                  SystemdCgroup = true
      
          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
      
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
      
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
      
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
      
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
      
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              # add the following 4 lines
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://docker.mirrors.ustc.edu.cn"]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
              endpoint = ["https://registry.aliyuncs.com/google_containers"]
      
          [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
      
      [plugins."io.containerd.internal.v1.opt"]
          path = "/opt/containerd"
      
      [plugins."io.containerd.internal.v1.restart"]
          interval = "10s"
      
      [plugins."io.containerd.internal.v1.tracing"]
          sampling_ratio = 1.0
          service_name = "containerd"
      
      [plugins."io.containerd.metadata.v1.bolt"]
          content_sharing_policy = "shared"
      
      [plugins."io.containerd.monitor.v1.cgroups"]
          no_prometheus = false
      
      [plugins."io.containerd.runtime.v1.linux"]
          no_shim = false
          runtime = "runc"
          runtime_root = ""
          shim = "containerd-shim"
          shim_debug = false
      
      [plugins."io.containerd.runtime.v2.task"]
          platforms = ["linux/amd64"]
          sched_core = false
      
      [plugins."io.containerd.service.v1.diff-service"]
          default = ["walking"]
      
      [plugins."io.containerd.service.v1.tasks-service"]
          rdt_config_file = ""
      
      [plugins."io.containerd.snapshotter.v1.aufs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.btrfs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.devmapper"]
          async_remove = false
          base_image_size = ""
          discard_blocks = false
          fs_options = ""
          fs_type = ""
          pool_name = ""
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.native"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
          mount_options = []
          root_path = ""
          sync_remove = false
          upperdir_label = false
      
      [plugins."io.containerd.snapshotter.v1.zfs"]
          root_path = ""
      
      [plugins."io.containerd.tracing.processor.v1.otlp"]
          endpoint = ""
          insecure = false
          protocol = ""
      
      [proxy_plugins]
      
      [stream_processors]
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar"
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar+gzip"
      
      [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
      
      [ttrpc]
      address = ""
      gid = 0
      uid = 0
      
    • Restart and enable the containerd service:

      systemctl restart containerd
      systemctl enable containerd
      
    • Configure crictl:

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///var/run/containerd/containerd.sock
      image-endpoint: unix:///var/run/containerd/containerd.sock
      timeout: 10
      debug: false
      pull-image-on-create: false
      EOF
      
  4. Add the Aliyun Kubernetes apt repository

    • First, import the Aliyun GPG key:

      curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
      
    • Then add the Aliyun Kubernetes repository:

      tee /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
      EOF
      
  5. Install the Kubernetes components

    • Update the package index and install kubelet, kubeadm, and kubectl:

      apt-get update
      apt-get install -y kubelet kubeadm kubectl
      
    • Kubelet cgroup driver: recent kubeadm versions already default the kubelet to the systemd driver, so no change is needed (note that the commented sed below would switch it to cgroupfs, not systemd):

      # can be skipped
      # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env
      # systemctl daemon-reload
      # systemctl restart kubelet
      
  6. Initialize the Kubernetes cluster

    • Initialize the cluster with kubeadm, pointing it at the Aliyun image repository:

      # kubeadm init --image-repository registry.aliyuncs.com/google_containers
      I0513 14:16:59.740096   17563 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
      [init] Using Kubernetes version: v1.28.9
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      W0513 14:17:01.440936   17563 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [k8smaster.example.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.70]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [apiclient] All control plane components are healthy after 4.002079 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
      [bootstrap-token] Using token: m9z4yq.dok89ro6yt23wykr
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Alternatively, if you are the root user, you can run:
      
        export KUBECONFIG=/etc/kubernetes/admin.conf
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join 10.1.1.70:6443 --token m9z4yq.dok89ro6yt23wykr \
              --discovery-token-ca-cert-hash sha256:17c3f29bd276592e668e9e6a7a187140a887254b4555cf7d293c3313d7c8a178 
      
  7. Configure kubectl

    • Set up kubectl access for the current user:

      mkdir -p $HOME/.kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      chown $(id -u):$(id -g) $HOME/.kube/config
      
  8. Install a network plugin

    • Install a Pod network plugin such as Calico or Flannel. For example, with Calico:

      kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      # once the network plugin has finished initializing, the coredns pods become healthy
      kubectl logs -n kube-system -l k8s-app=kube-dns
      
  9. Verify the cluster

    • Start an nginx pod:

      # vim nginx_pod.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-nginx-pod
        namespace: test
        labels:
          app: nginx
      spec:
        containers:
        - name: test-nginx-container
          image: nginx:latest
          ports:
          - containerPort: 80
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
      ---
      
      apiVersion: v1
      kind: Service
      # the Service and the Pod must be in the same namespace
      metadata:
        name: nginx-service
        namespace: test
      spec:
        type: NodePort
        # the selector must match the Pod's labels
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          nodePort: 30007
          targetPort: 80
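Both objects above are created in the test namespace, which must exist before the apply succeeds; create it first with kubectl create namespace test, or with an equivalent manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
```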
      

      Apply:

      kubectl apply -f nginx_pod.yml
      

Deploying opentelemetry-collector for testing

otel-collector and otel-agent require applications to integrate the OpenTelemetry API: instrumented programs send data to the otel-agent running as a DaemonSet on every node, the otel-agent forwards it to the otel-collector for aggregation, and the collector then exports it to a backend that can process OTLP trace data, such as Zipkin or Jaeger.
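As a concrete illustration of what an instrumented program emits, here is a minimal single-span trace payload in the OTLP/HTTP JSON encoding (a hand-written sketch; the service name, span name, and hex IDs are dummy values):

```shell
# minimal OTLP/JSON trace payload; trace/span IDs are dummy hex values
payload='{
  "resourceSpans": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "demo-service" }
      }]
    },
    "scopeSpans": [{
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174",
        "name": "demo-span",
        "kind": 2,
        "startTimeUnixNano": "1544712660000000000",
        "endTimeUnixNano": "1544712661000000000"
      }]
    }]
  }]
}'
# sanity-check the JSON; in a real test it would be POSTed to the agent, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        -d "$payload" http://<agent-ip>:4318/v1/traces
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
```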

Custom test YAML file

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: default
data:
  # your configuration data
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  type: NodePort
  ports:
    - port: 4317
      targetPort: 4317
      nodePort: 30080
      name: otlp-grpc
    - port: 8888
      targetPort: 8888
      name: metrics
  selector:
    component: otel-collector

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector:latest
        # point the collector at the mounted ConfigMap; without an explicit
        # command the image would use its built-in /etc/otelcol/config.yaml
        # and silently ignore the file mounted at /conf
        command:
          - "/otelcol"
          - "--config=/conf/config.yaml"
        ports:
        - containerPort: 4317
        - containerPort: 8888
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
      volumes:
      - configMap:
          name: otel-collector-conf
        name: otel-collector-config-vol

Apply (no mkdir /conf is needed on the host; the ConfigMap volume is mounted into the container automatically):

kubectl apply -f otel-collector.yaml
kubectl get -f otel-collector.yaml

Delete

kubectl delete -f otel-collector.yaml

Using the official example

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/main/examples/k8s/otel-config.yaml

Modify the file as needed

otel-config.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-agent-conf
data:
  otel-agent-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    exporters:
      otlp:
        endpoint: "otel-collector.default:4317"
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 400
        # 25% of limit up to 2G
        spike_limit_mib: 100
        check_interval: 5s
    extensions:
      zpages: {}
    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-agent
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-agent-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 55679 # ZPages endpoint.
        - containerPort: 4317 # Default OpenTelemetry receiver port.
        - containerPort: 8888  # Metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 400MiB
        volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
      volumes:
        - configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
          name: otel-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
    extensions:
      zpages: {}
    exporters:
      otlp:
        endpoint: "http://someotlp.target.com:4317" # Replace with a real endpoint.
        tls:
          insecure: true
      zipkin:
        endpoint: "http://10.1.1.10:9411/api/v2/spans"
        format: "proto"
    service:
      extensions: [zpages]
      pipelines:
        traces/1:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [zipkin]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
    port: 4318
    protocol: TCP
    targetPort: 4318
  - name: metrics # Default endpoint for querying metrics.
    port: 8888
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-collector-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-collector
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 55679 # Default endpoint for ZPages.
        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        - containerPort: 8888  # Default endpoint for querying metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 1600MiB
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
#        - name: otel-collector-secrets
#          mountPath: /secrets
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#        - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem

Deploying deepflow to monitor a single k8s cluster

Official documentation
Official demo

Install helm

snap install helm --classic

Set up PV storage

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
## config default storage class
kubectl patch storageclass openebs-hostpath  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Deploy deepflow

helm repo add deepflow https://deepflowio.github.io/deepflow
helm repo update deepflow # use `helm repo update` when helm < 3.7.0
helm install deepflow -n deepflow deepflow/deepflow --create-namespace
# output as follows
NAME: deepflow
LAST DEPLOYED: Tue May 14 14:13:50 2024
NAMESPACE: deepflow
STATUS: deployed
REVISION: 1
NOTES:
██████╗ ███████╗███████╗██████╗ ███████╗██╗      ██████╗ ██╗    ██╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██║     ██╔═══██╗██║    ██║
██║  ██║█████╗  █████╗  ██████╔╝█████╗  ██║     ██║   ██║██║ █╗ ██║
██║  ██║██╔══╝  ██╔══╝  ██╔═══╝ ██╔══╝  ██║     ██║   ██║██║███╗██║
██████╔╝███████╗███████╗██║     ██║     ███████╗╚██████╔╝╚███╔███╔╝
╚═════╝ ╚══════╝╚══════╝╚═╝     ╚═╝     ╚══════╝ ╚═════╝  ╚══╝╚══╝ 

An automated observability platform for cloud-native developers.

# deepflow-agent Port for receiving trace, metrics, and log

deepflow-agent service: deepflow-agent.deepflow
deepflow-agent Host listening port: 38086

# Get the Grafana URL to visit by running these commands in the same shell

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Install deepflow-ctl on the node

curl -o /usr/bin/deepflow-ctl https://deepflow-ce.oss-cn-beijing.aliyuncs.com/bin/ctl/stable/linux/$(arch | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')/deepflow-ctl
chmod a+x /usr/bin/deepflow-ctl
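The download URL embeds a Go-style architecture name, and the arch | sed pipeline maps the kernel's name onto it. The mapping in isolation:

```shell
# kernel arch name -> Go arch name used in the deepflow-ctl download path
for a in x86_64 aarch64; do
  echo "$a -> $(echo "$a" | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')"
done
```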

Access the Grafana page

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Ubuntu-22-LTS部署k8s和deepflow

环境详情:
Static hostname: k8smaster.example.net
Icon name: computer-vm
Chassis: vm
Machine ID: 22349ac6f9ba406293d0541bcba7c05d
Boot ID: 605a74a509724a88940bbbb69cde77f2
Virtualization: vmware
Operating System: Ubuntu 22.04.4 LTS
Kernel: Linux 5.15.0-106-generic
Architecture: x86-64
Hardware Vendor: VMware, Inc.
Hardware Model: VMware Virtual Platform

当您在 Ubuntu 22.04 上安装 Kubernetes 集群时,您可以遵循以下步骤:

  1. 设置主机名并在 hosts 文件中添加条目

    • 登录到主节点并使用 hostnamectl 命令设置主机名:

      hostnamectl set-hostname "k8smaster.example.net"
      
    • 在工作节点上,运行以下命令设置主机名(分别对应第一个和第二个工作节点):

      hostnamectl set-hostname "k8sworker1.example.net"  # 第一个工作节点
      hostnamectl set-hostname "k8sworker2.example.net"  # 第二个工作节点
      
    • 在每个节点的 /etc/hosts 文件中添加以下条目:

      10.1.1.70 k8smaster.example.net k8smaster
      10.1.1.71 k8sworker1.example.net k8sworker1
      
  2. 禁用 swap 并添加内核设置

    • 在所有节点上执行以下命令以禁用交换功能:

      swapoff -a
      sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab
      
    • 加载以下内核模块:

      tee /etc/modules-load.d/containerd.conf <<EOF
      overlay
      br_netfilter
      EOF
      modprobe overlay
      modprobe br_netfilter
      
    • 为 Kubernetes 设置以下内核参数:

      tee /etc/sysctl.d/kubernetes.conf <<EOF
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      EOF
      sysctl --system
      
  3. 安装 containerd 运行时

    • 首先安装 containerd 的依赖项:

      apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
      
    • 启用 Docker 存储库:

      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      
    • 安装 containerd:

      apt update
      apt install -y containerd.io
      
    • 配置 containerd 使用 systemd 作为 cgroup:

      containerd config default | tee /etc/containerd/config.toml > /dev/null 2>&1
      sed -i 's/SystemdCgroup\\=false/SystemdCgroup\\=true/g' /etc/containerd/config.toml
      

      部分配置手动修改

      disabled_plugins = []
      imports = []
      oom_score = 0
      plugin_dir = ""
      required_plugins = []
      root = "/var/lib/containerd"
      state = "/run/containerd"
      temp = ""
      version = 2
      
      [cgroup]
      path = ""
      
      [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
      
      [grpc]
      address = "/run/containerd/containerd.sock"
      gid = 0
      max_recv_message_size = 16777216
      max_send_message_size = 16777216
      tcp_address = ""
      tcp_tls_ca = ""
      tcp_tls_cert = ""
      tcp_tls_key = ""
      uid = 0
      
      [metrics]
      address = ""
      grpc_histogram = false
      
      [plugins]
      
      [plugins."io.containerd.gc.v1.scheduler"]
          deletion_threshold = 0
          mutation_threshold = 100
          pause_threshold = 0.02
          schedule_delay = "0s"
          startup_delay = "100ms"
      
      [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = false
          disable_apparmor = false
          disable_cgroup = false
          disable_hugetlb_controller = true
          disable_proc_mount = false
          disable_tcp_service = true
          drain_exec_sync_io_timeout = "0s"
          enable_selinux = false
          enable_tls_streaming = false
          enable_unprivileged_icmp = false
          enable_unprivileged_ports = false
          ignore_deprecation_warnings = []
          ignore_image_defined_volumes = false
          max_concurrent_downloads = 3
          max_container_log_line_size = 16384
          netns_mounts_under_state_dir = false
          restrict_oom_score_adj = false
          # modify the following line
          sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"
          selinux_category_range = 1024
          stats_collect_period = 10
          stream_idle_timeout = "4h0m0s"
          stream_server_address = "127.0.0.1"
          stream_server_port = "0"
          systemd_cgroup = false
          tolerate_missing_hugetlb_controller = true
          unset_seccomp_profile = ""
      
          [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.d"
          conf_template = ""
          ip_pref = ""
          max_conf_num = 1
      
          [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          disable_snapshot_annotations = true
          discard_unpacked_layers = false
          ignore_rdt_not_enabled_errors = false
          no_pivot = false
          snapshotter = "overlayfs"
      
          [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = "io.containerd.runc.v2"
      
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                  BinaryName = ""
                  CriuImagePath = ""
                  CriuPath = ""
                  CriuWorkPath = ""
                  IoGid = 0
                  IoUid = 0
                  NoNewKeyring = false
                  NoPivotRoot = false
                  Root = ""
                  ShimCgroup = ""
                  SystemdCgroup = true
      
          [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
              base_runtime_spec = ""
              cni_conf_dir = ""
              cni_max_conf_num = 0
              container_annotations = []
              pod_annotations = []
              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
      
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
      
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
      
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
      
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
      
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              # add the following 4 lines
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://docker.mirrors.ustc.edu.cn"]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
              endpoint = ["https://registry.aliyuncs.com/google_containers"]
      
          [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
      
      [plugins."io.containerd.internal.v1.opt"]
          path = "/opt/containerd"
      
      [plugins."io.containerd.internal.v1.restart"]
          interval = "10s"
      
      [plugins."io.containerd.internal.v1.tracing"]
          sampling_ratio = 1.0
          service_name = "containerd"
      
      [plugins."io.containerd.metadata.v1.bolt"]
          content_sharing_policy = "shared"
      
      [plugins."io.containerd.monitor.v1.cgroups"]
          no_prometheus = false
      
      [plugins."io.containerd.runtime.v1.linux"]
          no_shim = false
          runtime = "runc"
          runtime_root = ""
          shim = "containerd-shim"
          shim_debug = false
      
      [plugins."io.containerd.runtime.v2.task"]
          platforms = ["linux/amd64"]
          sched_core = false
      
      [plugins."io.containerd.service.v1.diff-service"]
          default = ["walking"]
      
      [plugins."io.containerd.service.v1.tasks-service"]
          rdt_config_file = ""
      
      [plugins."io.containerd.snapshotter.v1.aufs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.btrfs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.devmapper"]
          async_remove = false
          base_image_size = ""
          discard_blocks = false
          fs_options = ""
          fs_type = ""
          pool_name = ""
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.native"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
          mount_options = []
          root_path = ""
          sync_remove = false
          upperdir_label = false
      
      [plugins."io.containerd.snapshotter.v1.zfs"]
          root_path = ""
      
      [plugins."io.containerd.tracing.processor.v1.otlp"]
          endpoint = ""
          insecure = false
          protocol = ""
      
      [proxy_plugins]
      
      [stream_processors]
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar"
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar+gzip"
      
      [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
      
      [ttrpc]
      address = ""
      gid = 0
      uid = 0
      
    • Restart and enable the containerd service:

      systemctl restart containerd
      systemctl enable containerd
      
    • Configure crictl:

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///var/run/containerd/containerd.sock
      image-endpoint: unix:///var/run/containerd/containerd.sock
      timeout: 10
      debug: false
      pull-image-on-create: false
      EOF
      
  4. Add the Aliyun Kubernetes apt repository

    • First, import the Aliyun GPG key:

      curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
      
    • Then add the Aliyun Kubernetes repository:

      tee /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
      EOF
      
  5. Install the Kubernetes components

    • Update the package index and install kubelet, kubeadm, and kubectl:

      apt-get update
      apt-get install -y kubelet kubeadm kubectl
      
    • Set kubelet to use systemd as its cgroup driver:

      # optional, can be skipped
      # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env
      # systemctl daemon-reload
      # systemctl restart kubelet
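As an alternative to patching kubeadm-flags.env afterwards, both the image repository used in step 6 and the kubelet cgroup driver can be pinned up front in a kubeadm config file. A sketch (passed as `kubeadm init --config kubeadm-config.yaml`; recent kubeadm versions already default the kubelet to the systemd driver):

```yaml
# kubeadm-config.yaml (sketch): pin the image repo and kubelet cgroup driver
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```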
      
  6. Initialize the Kubernetes cluster

    • Initialize the cluster with kubeadm, specifying the Aliyun image repository:

      # kubeadm init --image-repository registry.aliyuncs.com/google_containers
      I0513 14:16:59.740096   17563 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
      [init] Using Kubernetes version: v1.28.9
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      W0513 14:17:01.440936   17563 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [k8smaster.example.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.70]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [apiclient] All control plane components are healthy after 4.002079 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
      [bootstrap-token] Using token: m9z4yq.dok89ro6yt23wykr
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Alternatively, if you are the root user, you can run:
      
        export KUBECONFIG=/etc/kubernetes/admin.conf
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join 10.1.1.70:6443 --token m9z4yq.dok89ro6yt23wykr \
              --discovery-token-ca-cert-hash sha256:17c3f29bd276592e668e9e6a7a187140a887254b4555cf7d293c3313d7c8a178 
      
  7. Configure kubectl

    • Set up kubectl access for the current user:

      mkdir -p $HOME/.kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      chown $(id -u):$(id -g) $HOME/.kube/config
      
  8. Install a network plugin

    • Install a Pod network plugin such as Calico or Flannel. For example, with Calico:

      kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      # once the network plugin is initialized, the coredns pods come up normally
      kubectl logs -n kube-system -l k8s-app=kube-dns
      
  9. Verify the cluster

    • Start an nginx pod:

      # vim nginx_pod.yml   (the 'test' namespace must exist first: kubectl create namespace test)
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-nginx-pod
        namespace: test
        labels:
          app: nginx
      spec:
        containers:
        - name: test-nginx-container
          image: nginx:latest
          ports:
          - containerPort: 80
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
      ---
      
      apiVersion: v1
      kind: Service
      # the Service and the Pod must be in the same namespace
      metadata:
        name: nginx-service
        namespace: test
      spec:
        type: NodePort
        # the selector must match the Pod's labels
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          nodePort: 30007
          targetPort: 80
      

      Start it:

      kubectl apply -f nginx_pod.yml
      

Deploying opentelemetry-collector for testing

Applications integrate the OpenTelemetry API and send telemetry to the otel-agent, which runs as a DaemonSet on every node; the otel-agent forwards the data to the otel-collector for aggregation, and the collector exports it to a backend that can handle OTLP trace data, such as Zipkin or Jaeger.
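As a minimal sketch of what an instrumented application emits, the following builds a one-span OTLP/HTTP JSON trace payload and validates it locally. The service name, trace ID, and span ID are made-up examples, and the `<node-ip>` in the curl hint is a placeholder for a real node address:

```shell
# Build a minimal single-span OTLP/HTTP JSON payload (IDs/names are made up)
# and check it parses. To actually send it to an agent's HTTP receiver:
#   curl -X POST http://<node-ip>:4318/v1/traces -H 'Content-Type: application/json' -d @/tmp/trace.json
cat > /tmp/trace.json <<'EOF'
{
  "resourceSpans": [{
    "resource": {
      "attributes": [{"key": "service.name", "value": {"stringValue": "demo-app"}}]
    },
    "scopeSpans": [{
      "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174",
        "name": "demo-span",
        "kind": 2,
        "startTimeUnixNano": "1715600000000000000",
        "endTimeUnixNano": "1715600001000000000"
      }]
    }]
  }]
}
EOF
python3 -m json.tool /tmp/trace.json > /dev/null && echo "payload OK"
```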

Custom test YAML file

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: default
data:
  # your configuration data
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  type: NodePort
  ports:
    - port: 4317
      targetPort: 4317
      nodePort: 30080
      name: otlp-grpc
    - port: 8888
      targetPort: 8888
      name: metrics
  selector:
    component: otel-collector

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector:latest
        # load the mounted ConfigMap; the image's default config path is not /conf
        command: ["/otelcol", "--config=/conf/config.yaml"]
        ports:
        - containerPort: 4317
        - containerPort: 8888
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
      volumes:
      - configMap:
          name: otel-collector-conf
        name: otel-collector-config-vol

Start:

kubectl apply -f otel-collector.yaml
kubectl get -f otel-collector.yaml

Delete:

kubectl delete -f otel-collector.yaml

Using the official example

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/main/examples/k8s/otel-config.yaml

Modify the file as needed

otel-config.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-agent-conf
data:
  otel-agent-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    exporters:
      otlp:
        endpoint: "otel-collector.default:4317"
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 400
        # 25% of limit up to 2G
        spike_limit_mib: 100
        check_interval: 5s
    extensions:
      zpages: {}
    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-agent
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-agent-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 55679 # ZPages endpoint.
        - containerPort: 4317 # Default OpenTelemetry receiver port.
        - containerPort: 8888  # Metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 400MiB
        volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
      volumes:
        - configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
          name: otel-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
    extensions:
      zpages: {}
    exporters:
      otlp:
        endpoint: "http://someotlp.target.com:4317" # Replace with a real endpoint.
        tls:
          insecure: true
      zipkin:
        endpoint: "http://10.1.1.10:9411/api/v2/spans"
        format: "proto"
    service:
      extensions: [zpages]
      pipelines:
        traces/1:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [zipkin]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
    port: 4318
    protocol: TCP
    targetPort: 4318
  - name: metrics # Default endpoint for querying metrics.
    port: 8888
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-collector-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-collector
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 55679 # Default endpoint for ZPages.
        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        - containerPort: 8888  # Default endpoint for querying metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 1600MiB
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
#        - name: otel-collector-secrets
#          mountPath: /secrets
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#        - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem

Deploying DeepFlow to monitor a single k8s cluster

Official documentation
Official demo

Install helm

snap install helm --classic

Set up the PV

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
## config default storage class
kubectl patch storageclass openebs-hostpath  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Deploy deepflow

helm repo add deepflow https://deepflowio.github.io/deepflow
helm repo update deepflow # use `helm repo update` when helm < 3.7.0
helm install deepflow -n deepflow deepflow/deepflow --create-namespace
# output as follows
NAME: deepflow
LAST DEPLOYED: Tue May 14 14:13:50 2024
NAMESPACE: deepflow
STATUS: deployed
REVISION: 1
NOTES:
██████╗ ███████╗███████╗██████╗ ███████╗██╗      ██████╗ ██╗    ██╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██║     ██╔═══██╗██║    ██║
██║  ██║█████╗  █████╗  ██████╔╝█████╗  ██║     ██║   ██║██║ █╗ ██║
██║  ██║██╔══╝  ██╔══╝  ██╔═══╝ ██╔══╝  ██║     ██║   ██║██║███╗██║
██████╔╝███████╗███████╗██║     ██║     ███████╗╚██████╔╝╚███╔███╔╝
╚═════╝ ╚══════╝╚══════╝╚═╝     ╚═╝     ╚══════╝ ╚═════╝  ╚══╝╚══╝ 

An automated observability platform for cloud-native developers.

# deepflow-agent Port for receiving trace, metrics, and log

deepflow-agent service: deepflow-agent.deepflow
deepflow-agent Host listening port: 38086

# Get the Grafana URL to visit by running these commands in the same shell

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Install deepflow-ctl on the node

curl -o /usr/bin/deepflow-ctl https://deepflow-ce.oss-cn-beijing.aliyuncs.com/bin/ctl/stable/linux/$(arch | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')/deepflow-ctl
chmod a+x /usr/bin/deepflow-ctl
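The `$(arch | sed ...)` fragment in the download URL maps the kernel's architecture name onto the release directory name. In isolation (`uname -m` is equivalent to `arch` and available everywhere):

```shell
# Map the machine architecture to the release-directory name used in the URL
ARCH=$(uname -m | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')
echo "$ARCH"   # amd64 on an x86_64 host, arm64 on aarch64
```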

Access the Grafana page

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

              privileged_without_host_devices = false
              runtime_engine = ""
              runtime_path = ""
              runtime_root = ""
              runtime_type = ""
      
              [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
      
          [plugins."io.containerd.grpc.v1.cri".image_decryption]
          key_model = "node"
      
          [plugins."io.containerd.grpc.v1.cri".registry]
          config_path = ""
      
          [plugins."io.containerd.grpc.v1.cri".registry.auths]
      
          [plugins."io.containerd.grpc.v1.cri".registry.configs]
      
          [plugins."io.containerd.grpc.v1.cri".registry.headers]
      
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              # Add the following 4 lines (registry mirrors for mainland China)
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
              endpoint = ["https://docker.mirrors.ustc.edu.cn"]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
              endpoint = ["https://registry.aliyuncs.com/google_containers"]
      
          [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
          tls_cert_file = ""
          tls_key_file = ""
      
      [plugins."io.containerd.internal.v1.opt"]
          path = "/opt/containerd"
      
      [plugins."io.containerd.internal.v1.restart"]
          interval = "10s"
      
      [plugins."io.containerd.internal.v1.tracing"]
          sampling_ratio = 1.0
          service_name = "containerd"
      
      [plugins."io.containerd.metadata.v1.bolt"]
          content_sharing_policy = "shared"
      
      [plugins."io.containerd.monitor.v1.cgroups"]
          no_prometheus = false
      
      [plugins."io.containerd.runtime.v1.linux"]
          no_shim = false
          runtime = "runc"
          runtime_root = ""
          shim = "containerd-shim"
          shim_debug = false
      
      [plugins."io.containerd.runtime.v2.task"]
          platforms = ["linux/amd64"]
          sched_core = false
      
      [plugins."io.containerd.service.v1.diff-service"]
          default = ["walking"]
      
      [plugins."io.containerd.service.v1.tasks-service"]
          rdt_config_file = ""
      
      [plugins."io.containerd.snapshotter.v1.aufs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.btrfs"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.devmapper"]
          async_remove = false
          base_image_size = ""
          discard_blocks = false
          fs_options = ""
          fs_type = ""
          pool_name = ""
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.native"]
          root_path = ""
      
      [plugins."io.containerd.snapshotter.v1.overlayfs"]
          mount_options = []
          root_path = ""
          sync_remove = false
          upperdir_label = false
      
      [plugins."io.containerd.snapshotter.v1.zfs"]
          root_path = ""
      
      [plugins."io.containerd.tracing.processor.v1.otlp"]
          endpoint = ""
          insecure = false
          protocol = ""
      
      [proxy_plugins]
      
      [stream_processors]
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar"
      
      [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
          accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
          args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
          env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
          path = "ctd-decoder"
          returns = "application/vnd.oci.image.layer.v1.tar+gzip"
      
      [timeouts]
      "io.containerd.timeout.bolt.open" = "0s"
      "io.containerd.timeout.shim.cleanup" = "5s"
      "io.containerd.timeout.shim.load" = "5s"
      "io.containerd.timeout.shim.shutdown" = "3s"
      "io.containerd.timeout.task.state" = "2s"
      
      [ttrpc]
      address = ""
      gid = 0
      uid = 0
      
    • Restart and enable the containerd service:

      systemctl restart containerd
      systemctl enable containerd
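Before moving on, it is worth confirming that the two critical overrides actually landed in the file. A minimal sketch; the stub fallback exists only so it runs on machines without containerd installed, and on a real node it greps the live config:

```shell
cfg=/etc/containerd/config.toml
# Fall back to a stub when containerd is not installed, so the check
# can be exercised anywhere; on a real node the grep hits the live file.
if [ ! -f "$cfg" ]; then
  cfg=$(mktemp)
  printf 'SystemdCgroup = true\nsandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"\n' > "$cfg"
fi
grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```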
      
    • Configure crictl to talk to containerd:

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///var/run/containerd/containerd.sock
      image-endpoint: unix:///var/run/containerd/containerd.sock
      timeout: 10
      debug: false
      pull-image-on-create: false
      EOF
      
  4. Add the Alibaba Cloud Kubernetes apt repository

    • First, import the Alibaba Cloud GPG key:

      curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
      
    • Then add the repository itself:

      tee /etc/apt/sources.list.d/kubernetes.list <<EOF
      deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
      EOF
      
  5. Install the Kubernetes components

    • Update the package index and install kubelet, kubeadm, and kubectl:

      apt-get update
      apt-get install -y kubelet kubeadm kubectl
      
    • kubelet cgroup driver: with kubeadm >= 1.22 the kubelet already defaults to the systemd driver, so no change is needed. (For reference, the commented commands below would switch it to cgroupfs.)

      # Can be skipped
      # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env
      # systemctl daemon-reload
      # systemctl restart kubelet
      
  6. Initialize the Kubernetes cluster

    • Initialize the cluster with kubeadm, pointing it at the Alibaba Cloud image repository:

      # kubeadm init --image-repository registry.aliyuncs.com/google_containers
      I0513 14:16:59.740096   17563 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.28
      [init] Using Kubernetes version: v1.28.9
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      W0513 14:17:01.440936   17563 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
      [certs] Using certificateDir folder "/etc/kubernetes/pki"
      [certs] Generating "ca" certificate and key
      [certs] Generating "apiserver" certificate and key
      [certs] apiserver serving cert is signed for DNS names [k8smaster.example.net kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.70]
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [k8smaster.example.net localhost] and IPs [10.1.1.70 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [apiclient] All control plane components are healthy after 4.002079 seconds
      [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
      [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
      [upload-certs] Skipping phase. Please see --upload-certs
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
      [mark-control-plane] Marking the node k8smaster.example.net as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
      [bootstrap-token] Using token: m9z4yq.dok89ro6yt23wykr
      [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
      [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
      [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
      [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
      [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
      [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
      [addons] Applied essential addon: CoreDNS
      [addons] Applied essential addon: kube-proxy
      
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
      Alternatively, if you are the root user, you can run:
      
        export KUBECONFIG=/etc/kubernetes/admin.conf
      
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      kubeadm join 10.1.1.70:6443 --token m9z4yq.dok89ro6yt23wykr \
              --discovery-token-ca-cert-hash sha256:17c3f29bd276592e668e9e6a7a187140a887254b4555cf7d293c3313d7c8a178 
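The `--discovery-token-ca-cert-hash` value is just the SHA-256 of the cluster CA's public key in DER form, so it can be recomputed at any time from `/etc/kubernetes/pki/ca.crt` if the join command is lost. A sketch of the computation; it generates a throwaway self-signed cert so it runs anywhere, and on the control plane you would point it at the real `ca.crt` instead:

```shell
# Generate a throwaway CA cert (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# SHA-256 over the DER-encoded public key: the value kubeadm prints after "sha256:"
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:${hash}"
```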
      
  7. Configure kubectl

    • Set up kubectl access for the current user:

      mkdir -p $HOME/.kube
      cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      chown $(id -u):$(id -g) $HOME/.kube/config
      
  8. Install a network plugin

    • Install a Pod network add-on, such as Calico or Flannel. For example, with Calico:

      kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      # once the network plugin is initialized, the coredns pods become Ready
      kubectl logs -n kube-system -l k8s-app=kube-dns
      
  9. Verify the cluster

    • Launch an nginx Pod:

      # vim nginx_pod.yml
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-nginx-pod
        namespace: test
        labels:
          app: nginx
      spec:
        containers:
        - name: test-nginx-container
          image: nginx:latest
          ports:
          - containerPort: 80
        tolerations:
          - key: "node-role.kubernetes.io/control-plane"
            operator: "Exists"
            effect: "NoSchedule"
      ---
      
      apiVersion: v1
      kind: Service
      # the Service must be in the same namespace as the Pod
      metadata:
        name: nginx-service
        namespace: test
      spec:
        type: NodePort
        # the selector must match the Pod's labels
        selector:
          app: nginx
        ports:
        - protocol: TCP
          port: 80
          nodePort: 30007
          targetPort: 80
      

      Apply the manifest. Note that the namespace must exist before the apply, or it fails:

      kubectl create namespace test
      kubectl apply -f nginx_pod.yml
      

Deploying opentelemetry-collector for testing

In this pattern, applications integrate the OpenTelemetry API and send telemetry to an otel-agent running as a DaemonSet on every node; each agent forwards its data to a central otel-collector for aggregation, which then exports to a backend that can handle OTLP trace data, such as Zipkin or Jaeger.

A custom test YAML file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: default
data:
  # collector configuration
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  type: NodePort
  ports:
    - port: 4317
      targetPort: 4317
      nodePort: 30080
      name: otlp-grpc
    - port: 8888
      targetPort: 8888
      name: metrics
  selector:
    component: otel-collector

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: otel-collector
        # pin a version that still ships the `logging` exporter used above
        image: otel/opentelemetry-collector:0.94.0
        # point the binary at the mounted ConfigMap; otherwise the image's
        # built-in config is used and the ConfigMap is ignored
        command: ["/otelcol", "--config=/conf/config.yaml"]
        ports:
        - containerPort: 4317
        - containerPort: 8888
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
      volumes:
      - configMap:
          name: otel-collector-conf
        name: otel-collector-config-vol

Apply (no host-side /conf directory is needed; the ConfigMap is mounted at /conf inside the container):

kubectl apply -f otel-collector.yaml
kubectl get -f otel-collector.yaml
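For a quick end-to-end check you can POST a hand-built OTLP/HTTP trace to the collector's 4318 receiver. The payload below is a minimal sketch (the service name, IDs, and timestamps are made up); actually sending it needs a reachable collector that exposes OTLP/HTTP on 4318, which the official example manifest does, while the minimal Service above exposes only gRPC 4317:

```shell
# Minimal OTLP/HTTP JSON trace payload (all values are illustrative)
payload='{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"demo"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8aa5a2d2c872e8321cf37308d69df2","spanId":"051581bf3cb55c13","name":"demo-span","kind":1,"startTimeUnixNano":"1","endTimeUnixNano":"2"}]}]}]}'

# Sanity-check the JSON locally before sending
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# To actually send it (assumes a port-forward to the collector is running):
#   kubectl port-forward svc/otel-collector 4318:4318 &
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d "$payload" http://127.0.0.1:4318/v1/traces
```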

Tear down:

kubectl delete -f otel-collector.yaml

Using the official example:

kubectl apply -f https://raw.githubusercontent.com/open-telemetry/opentelemetry-collector/main/examples/k8s/otel-config.yaml

Modify the file as needed:

otel-config.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-conf
  labels:
    app: opentelemetry
    component: otel-agent-conf
data:
  otel-agent-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    exporters:
      otlp:
        endpoint: "otel-collector.default:4317"
        tls:
          insecure: true
        sending_queue:
          num_consumers: 4
          queue_size: 100
        retry_on_failure:
          enabled: true
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 400
        # 25% of limit up to 2G
        spike_limit_mib: 100
        check_interval: 5s
    extensions:
      zpages: {}
    service:
      extensions: [zpages]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
  labels:
    app: opentelemetry
    component: otel-agent
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-agent
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-agent
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-agent-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-agent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 55679 # ZPages endpoint.
        - containerPort: 4317 # Default OpenTelemetry receiver port.
        - containerPort: 8888  # Metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 400MiB
        volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
      volumes:
        - configMap:
            name: otel-agent-conf
            items:
              - key: otel-agent-config
                path: otel-agent-config.yaml
          name: otel-agent-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: ${env:MY_POD_IP}:4317
          http:
            endpoint: ${env:MY_POD_IP}:4318
    processors:
      batch:
      memory_limiter:
        # 80% of maximum memory up to 2G
        limit_mib: 1500
        # 25% of limit up to 2G
        spike_limit_mib: 512
        check_interval: 5s
    extensions:
      zpages: {}
    exporters:
      otlp:
        endpoint: "http://someotlp.target.com:4317" # Replace with a real endpoint.
        tls:
          insecure: true
      zipkin:
        endpoint: "http://10.1.1.10:9411/api/v2/spans"
        format: "proto"
    service:
      extensions: [zpages]
      pipelines:
        traces/1:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [zipkin]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
    port: 4318
    protocol: TCP
    targetPort: 4318
  - name: metrics # Default endpoint for querying metrics.
    port: 8888
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - command:
          - "/otelcol"
          - "--config=/conf/otel-collector-config.yaml"
        image: otel/opentelemetry-collector:0.94.0
        name: otel-collector
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 55679 # Default endpoint for ZPages.
        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        - containerPort: 8888  # Default endpoint for querying metrics.
        env:
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: GOMEMLIMIT
            value: 1600MiB
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
#        - name: otel-collector-secrets
#          mountPath: /secrets
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#        - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem

Deploying DeepFlow to monitor a single Kubernetes cluster

Official documentation
Official demo

Install Helm

snap install helm --classic

Set up PVs (default StorageClass)

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
## config default storage class
kubectl patch storageclass openebs-hostpath  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Deploy DeepFlow

helm repo add deepflow https://deepflowio.github.io/deepflow
helm repo update deepflow # use `helm repo update` when helm < 3.7.0
helm install deepflow -n deepflow deepflow/deepflow --create-namespace
# output:
NAME: deepflow
LAST DEPLOYED: Tue May 14 14:13:50 2024
NAMESPACE: deepflow
STATUS: deployed
REVISION: 1
NOTES:
██████╗ ███████╗███████╗██████╗ ███████╗██╗      ██████╗ ██╗    ██╗
██╔══██╗██╔════╝██╔════╝██╔══██╗██╔════╝██║     ██╔═══██╗██║    ██║
██║  ██║█████╗  █████╗  ██████╔╝█████╗  ██║     ██║   ██║██║ █╗ ██║
██║  ██║██╔══╝  ██╔══╝  ██╔═══╝ ██╔══╝  ██║     ██║   ██║██║███╗██║
██████╔╝███████╗███████╗██║     ██║     ███████╗╚██████╔╝╚███╔███╔╝
╚═════╝ ╚══════╝╚══════╝╚═╝     ╚═╝     ╚══════╝ ╚═════╝  ╚══╝╚══╝ 

An automated observability platform for cloud-native developers.

# deepflow-agent Port for receiving trace, metrics, and log

deepflow-agent service: deepflow-agent.deepflow
deepflow-agent Host listening port: 38086

# Get the Grafana URL to visit by running these commands in the same shell

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"

Install deepflow-ctl on the node

curl -o /usr/bin/deepflow-ctl https://deepflow-ce.oss-cn-beijing.aliyuncs.com/bin/ctl/stable/linux/$(arch | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')/deepflow-ctl
chmod a+x /usr/bin/deepflow-ctl
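The download URL embeds an architecture token, and the `arch | sed` pipeline maps kernel architecture names to the Go-style names used in the download path. Its behavior for the two common cases:

```shell
# Map uname-style architecture names to the names used in the URL path
for a in x86_64 aarch64; do
  echo "$a -> $(echo "$a" | sed 's|x86_64|amd64|' | sed 's|aarch64|arm64|')"
done
# → x86_64 -> amd64
#   aarch64 -> arm64
```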

Access the Grafana page

NODE_PORT=$(kubectl get --namespace deepflow -o jsonpath="{.spec.ports[0].nodePort}" services deepflow-grafana)
NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
echo -e "Grafana URL: http://$NODE_IP:$NODE_PORT  \nGrafana auth: admin:deepflow"


FAQ

How do I expose a Pod's port?

In Kubernetes, to expose a Pod's port to users or services outside the cluster, you create a Service. Services come in several types that cover different use cases and network requirements. The common options:

  1. NodePort: this Service type opens a given port (normally in the 30000-32767 range) on every node of the cluster, and any traffic arriving on that port is forwarded to the matching Pods. It is the simplest option, but it consumes a port on every node.

    Example YAML:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30007
    
  2. LoadBalancer: this Service type is normally backed by a cloud provider. It allocates an external IP address for the Service, and all traffic to that IP is forwarded to the Pods. Suitable when the service must be reachable directly from the internet.

    Example YAML:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer
    
  3. Ingress: an Ingress is an API object that defines rules for external access to Services in the cluster. It can provide load balancing, SSL termination, and name-based virtual hosting. It is the most flexible option, letting a single entry point manage access to multiple services.

    Example YAML:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
    
  4. Port forwarding: if you only need temporary access from your local machine, use kubectl port-forward to forward a local port to a port in the Pod.

    Example command:

    kubectl port-forward pods/my-pod 8080:80
    

Choose among these methods based on your specific requirements and environment.
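One detail that trips people up with method 1: the nodePort must fall inside the API server's NodePort range, which is 30000-32767 by default (configurable via the apiserver's --service-node-port-range flag). A local sanity check, no cluster needed:

```shell
node_port=30007   # the value used in the example above
# The API server rejects a Service whose nodePort is outside its range.
if [ "$node_port" -ge 30000 ] && [ "$node_port" -le 32767 ]; then
  echo "nodePort $node_port is inside the default range"
else
  echo "nodePort $node_port is outside 30000-32767" >&2
fi
```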

Is DeepFlow an observability platform?

Yes. DeepFlow is an observability platform developed by Yunshan Networks, designed to provide deep observability for complex cloud infrastructure and cloud-native applications. Built on eBPF, it collects application performance metrics, distributed traces, and continuous profiling data with zero instrumentation, and uses smart tag technology to correlate all observability signals across the full stack and store them efficiently.

DeepFlow provides the following core features:

  • Universal service map: zero-instrumentation mapping of the production service topology using eBPF.
  • Distributed tracing: zero-instrumentation distributed tracing based on eBPF and Wasm.
  • Continuous profiling: low-overhead, zero-instrumentation collection of profiling data from production processes via eBPF.
  • Storage optimization: the SmartEncoding mechanism substantially reduces storage costs.
  • Seamless integration: it can serve as a storage backend for Prometheus, OpenTelemetry, SkyWalking, and Pyroscope, and also exposes SQL, PromQL, and OTLP data interfaces.

DeepFlow aims to eliminate the instrumentation burden and observability blind spots of traditional APM, offering a more efficient, automated approach to observability.

Does DeepFlow automatically monitor Pod applications in Kubernetes?

Yes. DeepFlow automatically monitors Pod applications running in Kubernetes. As an observability platform, it uses eBPF to collect application performance metrics, distributed traces, and continuous profiling data with zero instrumentation, and correlates all of these signals across the full stack via smart tags.

Features relevant to automatic Pod monitoring:

  1. Zero-instrumentation collection:
    Using eBPF, DeepFlow gathers application performance metrics, distributed-trace data, and continuous profiling information without any changes to application code.

  2. Full-stack correlation:
    Through smart tags, DeepFlow links all observability signals (metrics, logs, traces) to the components involved across the stack, giving a clearer picture of overall application performance.

  3. Automatic tagging of K8s resources:
    DeepFlow automatically tags Kubernetes resources (Pods, Services, Deployments, and so on), so monitoring data can easily be sliced by resource dimension.

  4. Continuous profiling:
    DeepFlow collects profiling data from production processes at low overhead, helping you drill into application performance bottlenecks.

In short, DeepFlow is a powerful observability tool that monitors Kubernetes applications automatically, helping you understand and optimize application performance.

FAQ

如何将pod的端口暴露出来

在 Kubernetes 中,如果你想要将 Pod 的端口暴露给集群外部的用户或服务访问,你可以通过创建 Service 来实现。Service 提供了几种不同的类型来支持不同的用例和网络需求。以下是一些常见的方法:

  1. NodePort:这种类型的 Service 会在集群的所有节点上开放一个指定的端口(通常在 30000-32767 范围内),任何发送到这个端口的流量都会被转发到对应的 Pod。这是最简单的方式,但它会占用节点上的端口。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30007
    
  2. LoadBalancer:这种类型的 Service 通常由云提供商支持,它会为 Service 分配一个外部 IP 地址,所有发送到这个 IP 地址的流量都会被转发到 Pod。这种方式适用于需要直接从互联网访问服务的场景。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer
    
  3. Ingress:Ingress 是一个 API 对象,它定义了外部访问到集群服务的规则。它可以提供负载均衡、SSL 终端和基于名称的虚拟托管。这是一个更高级的暴露方式,可以让你使用一个单一的入口点来管理多个服务的访问。

    示例 YAML 配置:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
    
  4. Port Forwarding:如果你只是临时需要从本地机器访问 Pod,可以使用 kubectl port-forward 命令将本地端口转发到 Pod 中的端口。

    示例命令:

    kubectl port-forward pods/my-pod 8080:80
    

这些方法可以根据你的具体需求和环境来选择使用。

deepflow是一个可观测平台吗

是的,DeepFlow 是一个可观测性平台。它是由云杉网络开发的,旨在为复杂的云基础设施及云原生应用提供深度可观测性。DeepFlow 基于 eBPF 技术实现了应用性能指标、分布式追踪、持续性能剖析等观测信号的零侵扰采集,并结合智能标签技术实现了所有观测信号的全栈关联和高效存取¹。

DeepFlow 提供了以下核心特性:

  • 全景图:利用 eBPF 技术零侵扰绘制生产环境的服务全景图。
  • 分布式追踪:基于 eBPF 和 Wasm 技术零侵扰实现分布式追踪。
  • 持续性能剖析:利用 eBPF 技术以低开销零侵扰采集生产环境进程的性能剖析数据。
  • 存储性能优化:通过 SmartEncoding 机制大幅降低存储成本。
  • 无缝集成:可以作为 Prometheus、OpenTelemetry、SkyWalking、Pyroscope 的存储后端,也提供 SQL、PromQL、OTLP 等数据接口¹。

DeepFlow 旨在解决传统 APM 方案中的侵扰性和观测盲点问题,提供一种更高效、自动化的可观测性解决方案。

deepflow会自动监控k8s中的pod应用吗

是的,DeepFlow 可以自动监控 Kubernetes 中的 Pod 应用。DeepFlow 是一个可观测性平台,它基于 eBPF 技术实现了应用性能指标、分布式追踪、持续性能剖析等观测信号的零侵扰采集,并结合智能标签技术实现了所有观测信号的全栈关联和高效存取。

以下是 DeepFlow 在 Kubernetes 中自动监控 Pod 应用的一些特性:

  1. 零侵扰采集:
    DeepFlow 使用 eBPF 技术,无需修改应用程序代码,即可自动收集应用性能指标、分布式追踪数据和持续性能剖析信息。

  2. 全栈关联:
    DeepFlow 通过智能标签技术,将所有观测信号(指标、日志、追踪)关联到全栈中的不同组件,帮助您更好地理解应用程序的整体性能。

  3. 自动标记 K8s 资源:
    DeepFlow 自动标记 Kubernetes 资源(如 Pod、Service、Deployment 等),使您可以轻松地按照资源维度进行监控。

  4. 持续性能剖析:
    DeepFlow 可以低开销地采集生产环境进程的性能剖析数据,帮助您深入分析应用程序的性能瓶颈。

总之,DeepFlow 是一个强大的可观测性工具,可以自动监控 Kubernetes 中的应用程序,帮助您更好地理解和优化应用性能。

FAQ

如何将pod的端口暴露出来

在 Kubernetes 中,如果你想要将 Pod 的端口暴露给集群外部的用户或服务访问,你可以通过创建 Service 来实现。Service 提供了几种不同的类型来支持不同的用例和网络需求。以下是一些常见的方法:

  1. NodePort:这种类型的 Service 会在集群的所有节点上开放一个指定的端口(通常在 30000-32767 范围内),任何发送到这个端口的流量都会被转发到对应的 Pod。这是最简单的方式,但它会占用节点上的端口。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: NodePort
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30007
    
  2. LoadBalancer:这种类型的 Service 通常由云提供商支持,它会为 Service 分配一个外部 IP 地址,所有发送到这个 IP 地址的流量都会被转发到 Pod。这种方式适用于需要直接从互联网访问服务的场景。

    示例 YAML 配置:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer
    
  3. Ingress:Ingress 是一个 API 对象,它定义了外部访问到集群服务的规则。它可以提供负载均衡、SSL 终端和基于名称的虚拟托管。这是一个更高级的暴露方式,可以让你使用一个单一的入口点来管理多个服务的访问。

    示例 YAML 配置:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      rules:
      - host: www.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
    
  4. Port forwarding: if you only need temporary access to a Pod from your local machine, the kubectl port-forward command forwards a local port to a port in the Pod.

    Example command:

    kubectl port-forward pods/my-pod 8080:80
    

Choose among these methods based on your specific requirements and environment.
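To make the three port fields in the NodePort example concrete, here is a minimal, self-contained Python sketch (an illustration, not part of any Kubernetes tooling) that builds a NodePort Service manifest as a dictionary, validates that nodePort falls in Kubernetes' default 30000-32767 range, and prints it as JSON, a format kubectl apply -f also accepts:

```python
import json

NODE_PORT_MIN, NODE_PORT_MAX = 30000, 32767  # Kubernetes' default NodePort range

def nodeport_service(name, app_label, port, target_port, node_port):
    """Build a NodePort Service manifest; raise if node_port is out of range."""
    if not NODE_PORT_MIN <= node_port <= NODE_PORT_MAX:
        raise ValueError(f"nodePort {node_port} outside {NODE_PORT_MIN}-{NODE_PORT_MAX}")
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "type": "NodePort",
            "selector": {"app": app_label},
            "ports": [{
                "protocol": "TCP",
                "port": port,                # Service port inside the cluster
                "targetPort": target_port,   # container port on the Pod
                "nodePort": node_port,       # port opened on every node
            }],
        },
    }

# Same values as the YAML example above
manifest = nodeport_service("my-service", "MyApp", 80, 9376, 30007)
print(json.dumps(manifest, indent=2))
```

The check mirrors what the API server would reject anyway; building manifests programmatically like this is handy when many similar Services differ only in a few fields.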

Is DeepFlow an observability platform?

Yes, DeepFlow is an observability platform. Developed by Yunshan Networks (云杉网络), it aims to provide deep observability for complex cloud infrastructure and cloud-native applications. DeepFlow uses eBPF to collect application performance metrics, distributed traces, continuous profiling, and other observability signals with zero instrumentation, and combines this with smart-label technology to correlate all signals across the full stack and store them efficiently.

DeepFlow provides the following core features:

  • Universal service map: uses eBPF to draw a service map of the production environment with zero instrumentation.
  • Distributed tracing: implements zero-instrumentation distributed tracing based on eBPF and Wasm.
  • Continuous profiling: uses eBPF to collect profiling data from production processes at low overhead with zero instrumentation.
  • Optimized storage: the SmartEncoding mechanism substantially reduces storage cost.
  • Seamless integration: can serve as a storage backend for Prometheus, OpenTelemetry, SkyWalking, and Pyroscope, and also exposes SQL, PromQL, and OTLP data interfaces.

DeepFlow is designed to eliminate the instrumentation burden and observation blind spots of traditional APM solutions, offering a more efficient, automated approach to observability.

