Kubernetes Binary Deployment of a High-Availability Cluster (Failed; See the Errors)

Overview

The deployment failed because of a problem with the openssl certificates; how to create the private keys correctly with openssl was never resolved. The ansible parts can still be used as a reference.

Deploy a Kubernetes binary high-availability cluster inside a private LAN.

ETCD

Openssl ==> CA certificates

Haproxy

Keepalived

Kubernetes

Host planning

No.  Name        Role       VMNET 1 IP       Components
0    origin      entry      192.168.164.10   haproxy, keepalived
1    repository  repo       192.168.164.16   yum repo, registry, haproxy, keepalived
2    master01    H-K8S-1    192.168.164.11   kube-api, controller, scheduler, etcd
3    master02    H-K8S-2    192.168.164.12   kube-api, controller, scheduler, etcd
4    master03    H-K8S-3    192.168.164.13   kube-api, controller, scheduler, etcd
5    node04      H-K8S-1    192.168.164.14   kube-proxy, kubelet, docker
6    node05      H-K8S-2    192.168.164.15   kube-proxy, kubelet, docker
7    node07      H-K8S-3    192.168.164.17   kube-proxy, kubelet, docker

Diagram

Steps

0. Environment preparation: firewalld + selinux + system tuning + ansible installation

ansible configuration

Configure the host inventory

ansible]# cat hostlist
[k8s:children]
k8sm
k8ss

[lb:children]
origin
repo

[k8sm]
192.168.164.[11:13]

[k8ss]
192.168.164.[14:15]
192.168.164.17

[origin]
192.168.164.10

[repo]
192.168.164.16

Configure ansible.cfg

hk8s]# cat ansible.cfg
[defaults]
inventory   = /root/ansible/hk8s/hostlist
roles_path  = /root/ansible/hk8s/roles
host_key_checking = False

firewalld + selinux

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld

System tuning
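The tuning items themselves are not recorded in the original. As a hedged sketch, a commonly used minimal set for binary Kubernetes deployments looks like the following (an assumption, not from the original; adjust to the actual hosts):

```shell
# Kernel settings commonly required by kube-proxy / CNI bridging
modprobe br_netfilter
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# kubelet refuses to start with swap enabled by default
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
```

These could also be pushed with ansible like the other steps, e.g. via the copy module plus a shell step for `sysctl --system`.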

1. Create the CA root certificate

Reference: 【k8s学习2】二进制文件方式安装 Kubernetes之etcd集群部署 (温殿飞, CSDN blog)

Create a CA root certificate to provide security authentication and connectivity for both etcd and Kubernetes.

Use openssl to create one CA root certificate and share it across both. Private key: ca.key + certificate: ca.crt.

If different CA root certificates were used instead, authorization and management could be partitioned between clusters.

# Create the private key
openssl genrsa -out ca.key 2048

# Create a self-signed certificate from the private key
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.164.11" -days 36500 -out ca.crt

# -subj "/CN=ip" identifies the master host
# -days sets the certificate validity period

# The certificates are kept in /etc/kubernetes/pki
mkdir -p /etc/kubernetes/ && mv ~/ca /etc/kubernetes/pki
ls /etc/kubernetes/pki
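Before moving on, it is worth confirming what the CA actually contains. A small sketch, run in a scratch directory so it cannot clobber the real /etc/kubernetes/pki:

```shell
# Rerun the same two commands in a scratch directory and inspect the result
set -e
mkdir -p /tmp/ca-demo && cd /tmp/ca-demo
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.164.11" -days 36500 -out ca.crt
# Show the subject (the CN) and the validity window
openssl x509 -noout -subject -dates -in ca.crt
```

On a real master, point the last command at /etc/kubernetes/pki/ca.crt to confirm the CN and expiry date.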

2. Deploy the etcd high-availability cluster

Download from Tags · etcd-io/etcd · GitHub

Release v3.4.26 · etcd-io/etcd · GitHub
https://storage.googleapis.com/etcd/v3.4.26/etcd-v3.4.26-linux-amd64.tar.gz

Download the tar package

ansible unarchive

Push the tar package to each master node

# Unpack the tar package into ~ on each master
# ansible k8sm -m unarchive -a "src=/var/ftp/localrepo/etcd/etcd-3.4.26.tar.gz dest=~ copy=yes mode=0755"
ansible k8sm -m unarchive -a "src=/var/ftp/localrepo/etcd/etcd-v3.4.26-linux-amd64.tar.gz dest=~ copy=yes mode=0755"

# Check that the files exist
ansible k8sm -m shell -a "ls -l ~"
# If something went wrong, delete and retry (the file module does not expand globs, so use shell)
ansible k8sm -m shell -a "rm -rf ~/etcd*"

# Install the etcd and etcdctl binaries into /usr/bin
ansible k8sm -m shell -a "cp ~/etcd-v3.4.26-linux-amd64/etcd /usr/bin/"
ansible k8sm -m shell -a "cp ~/etcd-v3.4.26-linux-amd64/etcdctl /usr/bin/"

ansible k8sm -m shell -a "ls -l /usr/bin/etcd"
ansible k8sm -m shell -a "ls -l /usr/bin/etcdctl"

 

Analysis of the official etcd install script, used to find the correct download URL for the package

#!/bin/bash

# Define the version and the download URLs
ETCD_VER=v3.4.26
# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}


# Clean the environment: remove any old etcd tarball and test directory under /tmp
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test

# Download the tarball for the chosen version
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# Unpack into the target directory
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
# Remove the tarball
# rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# Verify the binaries
/tmp/etcd-download-test/etcd --version
/tmp/etcd-download-test/etcdctl version

Create and configure etcd.service

The official unit file can be seen at etcd/etcd.service at v3.4.26 · etcd-io/etcd · GitHub

etcd.service is saved under the /usr/lib/systemd/system/ directory

The config directory /etc/etcd/ and the data directory /var/lib/etcd must be created

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
Environment=ETCD_DATA_DIR=/var/lib/etcd
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Use ansible to push the file + check that it exists

Reference: Ansible 检查文件是否存在 (harber_king, 51CTO blog)

# Copy the unit file out
ansible k8sm -m copy -a "src=/root/ansible/hk8s/etcd/etcd.service dest=/usr/lib/systemd/system/ mode=0644"
# Check that it arrived
ansible k8sm -m shell -a "ls -l /usr/lib/systemd/system/etcd.service"

# Create the directories
ansible k8sm -m shell -a "mkdir -p /etc/etcd"
ansible k8sm -m shell -a "mkdir -p /var/lib/etcd"

Create the etcd CA certificates


The certificates must be created on a single master host: certificates created on different hosts come out different. After creation, copy them to the same directory on the other nodes.

Save etcd_server.key + etcd_server.crt + etcd_server.csr + etcd_client.key + etcd_client.crt + etcd_client.csr under /etc/etcd/pki.

I also keep etcd_ssl.cnf inside /etc/etcd/pki, for a total of 7 files.
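Since everything hinges on every node holding byte-identical copies, it is worth comparing checksums after the copy (on the real hosts, run `sha256sum /etc/etcd/pki/*` on each master and compare the output). A self-contained sketch of the idea with throwaway stand-in files:

```shell
set -e
mkdir -p /tmp/pki-sync-demo/src /tmp/pki-sync-demo/dst
cd /tmp/pki-sync-demo

# A stand-in file for one of the certificates
echo "demo-cert-content" > src/etcd_server.crt
# Stand-in for the scp/ansible copy step to another master
cp src/etcd_server.crt dst/etcd_server.crt

# Identical hashes mean the copy is faithful
sha256sum src/etcd_server.crt dst/etcd_server.crt
```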

 etcd_ssl.cnf

[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.164.11
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13

The specific commands

# Create and enter the target directory
mkdir -p /etc/etcd/pki &&  cd /etc/etcd/pki

# Create the server key and certificate
openssl genrsa -out etcd_server.key 2048

openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr

openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt

# Create the client key and certificate
openssl genrsa -out etcd_client.key 2048

openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr

openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
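To sanity-check the issued certificates before distributing them, the chain and the SANs can be verified with openssl. The sketch below reproduces the steps end to end with a throwaway CA under /tmp so it can be run anywhere (short validity, demo paths, not the real /etc/etcd/pki files):

```shell
set -e
mkdir -p /tmp/etcd-cert-demo && cd /tmp/etcd-cert-demo

# Throwaway CA standing in for /etc/kubernetes/pki/ca.{key,crt}
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 1 -out ca.crt

# Same etcd_ssl.cnf layout as above
cat > etcd_ssl.cnf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 192.168.164.11
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13
EOF

openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 1 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt

# The certificate must chain to the CA and must carry all three master IPs
openssl verify -CAfile ca.crt etcd_server.crt
openssl x509 -noout -text -in etcd_server.crt | grep "IP Address"
```

On the real masters, run the last two commands against /etc/etcd/pki/etcd_server.crt with -CAfile /etc/kubernetes/pki/ca.crt; a missing IP in the SAN list is exactly the kind of problem the troubleshooting checklist later asks about.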

etcd.conf.yml.sample parameter configuration

etcd/etcd.conf.yml.sample at v3.4.26 · etcd-io/etcd · GitHub

配置 - etcd官方文档中文版 (gitbook.io)

k8s-二进制安装v1.25.8 - du-z - 博客园 (cnblogs.com)

二进制安装Kubernetes(k8s) v1.25.0 IPv4/IPv6双栈-阿里云开发者社区 (aliyun.com)

Use ansible to update the configuration file on all nodes; the IP, name, etc. must be changed per node, either with shell or by passing ansible variables.

master01 is used as the example below; master02 and master03 need the corresponding changes.
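One way to produce those per-node edits (a sketch under assumptions; none of this is in the original, which edits the files by hand) is to render an ansible template once per master:

```shell
# Hypothetical etcd.conf.j2; the variable name etcd_name and the file layout
# are illustrative assumptions, not taken from the original setup.
cat > /tmp/etcd.conf.j2 <<'EOF'
ETCD_NAME={{ etcd_name }}
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_PEER_URLS=https://{{ inventory_hostname }}:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://{{ inventory_hostname }}:2380
ETCD_LISTEN_CLIENT_URLS=https://{{ inventory_hostname }}:2379
ETCD_ADVERTISE_CLIENT_URLS=https://{{ inventory_hostname }}:2379
EOF

# Rendered once per master: {{ inventory_hostname }} becomes each host's IP,
# because the inventory lists the masters by IP; etcd_name would come from host_vars.
# ansible k8sm -m template -a "src=/tmp/etcd.conf.j2 dest=/etc/etcd/etcd.conf mode=0644"
```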

Parameter                     Env variable                        Default / change                       Planned value
name:                         ETCD_NAME                           hostname                               master01
data-dir:                     ETCD_DATA_DIR                       /var/lib/etcd                          /var/lib/etcd
listen-peer-urls              ETCD_LISTEN_PEER_URLS               https://ip:2380                        https://192.168.164.11:2380
listen-client-urls            ETCD_LISTEN_CLIENT_URLS             http://ip:2379 -> https://ip:2379      "http://192.168.164.11:2379,https://192.168.164.11:2380"
initial-advertise-peer-urls   ETCD_INITIAL_ADVERTISE_PEER_URLS    https://ip:2380                        "https://192.168.164.11:2380"
advertise-client-urls         ETCD_ADVERTISE_CLIENT_URLS          https://ip:2379                        https://192.168.164.11:2379
initial-cluster               ETCD_INITIAL_CLUSTER                <node>=https://ip:2380 per node        'master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380'
initial-cluster-state         ETCD_INITIAL_CLUSTER_STATE          new (bootstrap) / existing (join)      new

Under client-transport-security:
  cert-file:          /etc/etcd/pki/etcd_server.crt
  key-file:           /etc/etcd/pki/etcd_server.key
  client-cert-auth:   false -> true
  trusted-ca-file:    /etc/kubernetes/pki/ca.crt
  auto-tls:           false -> true

Under peer-transport-security:
  cert-file:          /etc/etcd/pki/etcd_server.crt
  key-file:           /etc/etcd/pki/etcd_server.key
  client-cert-auth:   false -> true
  trusted-ca-file:    /etc/kubernetes/pki/ca.crt
  auto-tls:           false -> true

This configuration was NOT tested

# This is the configuration file for the etcd server.
# Reference: https://doczhcn.gitbook.io/etcd/index/index-1/configuration

# Human-readable name for this member.
# Recommended: use the hostname; must be unique. Env variable: ETCD_NAME
name: "master01"

# Path to the data directory.
# Data directory; must match etcd.service. Env variable: ETCD_DATA_DIR
data-dir: /var/lib/etcd

# Path to the dedicated wal directory.
# Env variable: ETCD_WAL_DIR
wal-dir:

# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000

# Time (in milliseconds) of a heartbeat interval.
# Env variable: ETCD_HEARTBEAT_INTERVAL
heartbeat-interval: 100

# Time (in milliseconds) for an election to timeout.
# Env variable: ETCD_ELECTION_TIMEOUT
election-timeout: 1000

# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0

# List of comma separated URLs to listen on for peer traffic.
# Env variable: ETCD_LISTEN_PEER_URLS
listen-peer-urls: "https://192.168.164.11:2380"

# List of comma separated URLs to listen on for client traffic.
# Env variable: ETCD_LISTEN_CLIENT_URLS
listen-client-urls: "http://192.168.164.11:2379,https://192.168.164.11:2380"

# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5

# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5

# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:

# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
# Env variable: ETCD_INITIAL_ADVERTISE_PEER_URLS
initial-advertise-peer-urls: "https://192.168.164.11:2380"

# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: https://192.168.164.11:2379

# Discovery URL used to bootstrap the cluster.
discovery:

# Valid values include 'exit', 'proxy'
discovery-fallback: "proxy"

# HTTP proxy to use for traffic to discovery service.
discovery-proxy:

# DNS domain used to bootstrap initial cluster.
discovery-srv:

# Initial cluster configuration for bootstrapping.
# Bootstrap cluster configuration. Env variable: ETCD_INITIAL_CLUSTER
initial-cluster: "master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380"

# Initial cluster token for the etcd cluster during bootstrap.
# Initial cluster token used during bootstrap. Env variable: ETCD_INITIAL_CLUSTER_TOKEN
initial-cluster-token: "etcd-cluster"

# Initial cluster state ('new' or 'existing').
# Env variable: ETCD_INITIAL_CLUSTER_STATE. 'new' to bootstrap, 'existing' to join an existing cluster
initial-cluster-state: "new"

# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false

# Accept etcd V2 client requests
enable-v2: true

# Enable runtime profiling data via HTTP server
enable-pprof: true

# Valid values include 'on', 'readonly', 'off'
proxy: "off"

# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000

# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000

# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000

# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000

# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0

client-transport-security:
  # Reference: https://doczhcn.gitbook.io/etcd/index/index-1/security
  # Path to the client server TLS cert file.
  cert-file: /etc/etcd/pki/etcd_server.crt

  # Path to the client server TLS key file.
  key-file: /etc/etcd/pki/etcd_server.key

  # Enable client cert authentication.
  client-cert-auth: true

  # Path to the client server TLS trusted CA cert file.
  trusted-ca-file: /etc/kubernetes/pki/ca.crt

  # Client TLS using generated certificates
  auto-tls: true

peer-transport-security:
  # Path to the peer server TLS cert file.
  cert-file: /etc/etcd/pki/etcd_server.crt

  # Path to the peer server TLS key file.
  key-file: /etc/etcd/pki/etcd_server.key

  # Enable peer client cert authentication.
  client-cert-auth: true

  # Path to the peer server TLS trusted CA cert file.
  trusted-ca-file: /etc/kubernetes/pki/ca.crt

  # Peer TLS using generated certificates.
  auto-tls: true

# Enable debug-level logging for etcd.
debug: false

logger: zap

# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]

# Force to create a new one member cluster.
force-new-cluster: false

auto-compaction-mode: periodic
auto-compaction-retention: "1"

Tested configuration: /etc/etcd/etcd.conf

ETCD_NAME=master03
ETCD_DATA_DIR=/var/lib/etcd

# [Cluster Flags]
# ETCD_AUTO_COMPACTION_RETENTION=0


ETCD_LISTEN_PEER_URLS=https://192.168.164.13:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.164.13:2380

ETCD_LISTEN_CLIENT_URLS=https://192.168.164.13:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.164.13:2379

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER="master01=https://192.168.164.11:2380,master02=https://192.168.164.12:2380,master03=https://192.168.164.13:2380"

# [Proxy Flags]
ETCD_PROXY=off

# [Security flags]
# etcd server certificate and private key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/etc/etcd/pki/etcd_server.key
ETCD_CLIENT_CERT_AUTH=true

# Certificate and private key for etcd peer communication
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/etc/etcd/pki/etcd_server.key

Sync the pki files to all nodes

Path                                   Purpose                           Contents
/etc/kubernetes/pki                    kubernetes CA root certificate    ca.crt  ca.key
/etc/etcd                              etcd config file and certs        etcd.conf.yml  pki
/etc/etcd/pki                          etcd certificates                 etcd_client.crt  etcd_client.csr  etcd_client.key  etcd_server.crt  etcd_server.csr  etcd_server.key  etcd_ssl.cnf
/var/lib/etcd                          data directory
/usr/bin/etcd                          etcd binary
/usr/bin/etcdctl                       etcdctl binary
/usr/lib/systemd/system/etcd.service   systemd unit managing etcd

Start the service

ansible k8sm -m systemd -a "name=etcd state=restarted enabled=yes"

Check the cluster status

etcdctl --cacert="/etc/kubernetes/pki/ca.crt" \
--cert="/etc/etcd/pki/etcd_client.crt" \
--key="/etc/etcd/pki/etcd_client.key" \
--endpoints=https://192.168.164.11:2379,https://192.168.164.12:2379,https://192.168.164.13:2379 endpoint health -w table

Errors:

Troubleshooting checklist

Is firewalld stopped and disabled?

Is SELinux permissive, and has /etc/selinux/config been updated?

Did the CA certificate creation succeed? Any mistyped commands or wrong arguments?

Is etcd_ssl.cnf configured correctly? Are the IP.1 + IP.2 + IP.3 addresses all listed?

Is the CA certificate the same file everywhere? It is generated on only one host and then copied to the others, so confirm the transfer succeeded and the copies are identical.

Has etcd.conf been fully edited with the per-node changes?

Is the verification command typed correctly (https, not http)?

3. Build the Kubernetes high-availability cluster

kubernetes/CHANGELOG-1.20.md at v1.20.13 · kubernetes/kubernetes · GitHub

Download version 1.20.10

Software preparation and deployment

# Push the software packages to the nodes
ansible k8s -m unarchive -a "src=/var/ftp/localrepo/k8s/hk8s/kubernetes-server-linux-amd64.tar.gz dest=~ copy=yes mode=0755"

# Check that the package contents are complete
ansible k8sm -m shell -a "ls -l /root/kubernetes/server/bin"
File                                  Description
kube-apiserver                        kube-apiserver main binary
kube-apiserver.docker_tag             tag of the kube-apiserver docker image
kube-apiserver.tar                    kube-apiserver docker image file
kube-controller-manager               kube-controller-manager main binary
kube-controller-manager.docker_tag    tag of the kube-controller-manager docker image
kube-controller-manager.tar           kube-controller-manager docker image file
kube-scheduler                        kube-scheduler main binary
kube-scheduler.docker_tag             tag of the kube-scheduler docker image
kube-scheduler.tar                    kube-scheduler docker image file
kubelet                               kubelet main binary
kube-proxy                            kube-proxy main binary
kube-proxy.docker_tag                 tag of the kube-proxy docker image
kube-proxy.tar                        kube-proxy docker image file
kubectl                               client command-line tool
kubeadm                               Kubernetes cluster installation tool
apiextensions-apiserver               extension API server implementing custom resource objects
kube-aggregator                       aggregated API server program

Deploy the relevant binaries to /usr/bin on the masters and the workers respectively

# Deploy components on the masters
ansible k8sm -m shell -a "cp -r /root/kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} /usr/bin"

# Deploy components on the workers
ansible k8ss -m shell -a "cp -r /root/kubernetes/server/bin/kube{let,-proxy} /usr/bin"

# Verify on the masters
ansible k8sm -m shell -a "ls -l /usr/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} "

# Verify on the workers
ansible k8ss -m shell -a "ls -l /usr/bin/kube{let,-proxy} "

3.1 kube-apiserver

Deploy the kube-apiserver service: CA certificate configuration

Run the commands on the master01 host.

File path: /etc/kubernetes/pki

The specific commands:

openssl genrsa -out apiserver.key 2048

openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.164.11" -out apiserver.csr

openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

Contents of master_ssl.cnf

[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 169.169.0.1
IP.2 = 192.168.164.12
IP.3 = 192.168.164.13
IP.4 = 192.168.164.11
IP.5 = 192.168.164.200
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master01
DNS.6 = master02
DNS.7 = master03
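The apiserver certificate must carry the service ClusterIP (169.169.0.1), every master IP, the VIP, and the kubernetes.* DNS names, or clients will fail TLS verification. The sketch below reruns the issuance against a throwaway CA under /tmp and checks the SAN extension (demo paths, short validity, an abbreviated alt_names list):

```shell
set -e
mkdir -p /tmp/apiserver-cert-demo && cd /tmp/apiserver-cert-demo

# Throwaway CA standing in for /etc/kubernetes/pki/ca.{key,crt}
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=demo-ca" -days 1 -out ca.crt

# Abbreviated master_ssl.cnf: one service IP, one master IP, two DNS names
cat > master_ssl.cnf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 169.169.0.1
IP.2 = 192.168.164.11
DNS.1 = kubernetes
DNS.2 = kubernetes.default.svc.cluster.local
EOF

openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.164.11" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 1 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt

# Both the IPs and the DNS names must appear in the SAN extension
openssl x509 -noout -text -in apiserver.crt | grep -A1 "Subject Alternative Name"
```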

Configure kube-apiserver.service

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

cat /etc/kubernetes/apiserver

KUBE_API_ARGS="--insecure-port=0  \
--secure-port=6443  \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt  \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key  \
--client-ca-file=/etc/kubernetes/pki/ca.crt  \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.164.11:2379,https://192.168.164.12:2379,https://192.168.164.13:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt  \
--etcd-certfile=/etc/etcd/pki/etcd_client.crt  \
--etcd-keyfile=/etc/etcd/pki/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16  \
--service-node-port-range=30000-32767  \
--allow-privileged=true  \
--logtostderr=false  --log-dir=/var/log/kubernetes --v=0"

systemctl stop kube-apiserver  && systemctl daemon-reload && systemctl restart kube-apiserver && systemctl status kube-apiserver

ansible k8sm -m shell  -a " systemctl daemon-reload && systemctl restart kube-apiserver && systemctl status kube-apiserver "

3.1.1 Troubleshooting errors via tail -n 30 -f /var/log/messages

cat /var/log/messages|grep kube-apiserver|grep -i error

error='no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"'. Try to set the AdvertiseAddress directly or provide a valid BindAddress to fix this.

A default gateway needs to be configured for the VMs.

Error: [--etcd-servers must be specified, service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]

This may be a version issue: using openssl for the certificate setup is not recommended, and the official docs use cfssl instead. I re-downloaded K8s 1.19 and that made the "service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags" error go away, but this is not the root cause.

The other cause:

The CA certificate setup was wrong: each master server needs its own individually issued certificate; a single shared certificate does not work.

3.2 Create the controller + scheduler + kubelet + kube-proxy certificates

kube-controller-manager, kube-scheduler, kubelet, and kube-proxy are all clients of the apiserver.

kube-controller-manager + kube-scheduler + kubelet + kube-proxy can each be given its own CA-signed certificate for connecting to kube-apiserver, depending on the situation. Below, one shared certificate is created for all of them as an example.

Create the certificate with openssl, put it in /etc/kubernetes/pki/, and copy it to the other servers in the same cluster.

-subj "/CN=admin" identifies the user name of the client connecting to kube-apiserver.

cd   /etc/kubernetes/pki

openssl genrsa -out client.key 2048

openssl req -new -key client.key -subj "/CN=admin" -out client.csr

openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out client.crt -days 36500

 

The kubeconfig file

Create the kubeconfig file that clients need to connect to the kube-apiserver service.

kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and kubectl all use this same file to connect to kube-api.

The file is stored under /etc/kubernetes.

# Copy the files to the jump host
scp -r ./client.* root@192.168.164.16:/root/ansible/hk8s/

Official docs: Organizing Cluster Access Using kubeconfig Files | Kubernetes

PKI certificates and requirements | Kubernetes

Configure Access to Multiple Clusters | Kubernetes

apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      server: https://192.168.164.200:9443 # virtual IP (haproxy address) + the haproxy listen port
      certificate-authority: /etc/kubernetes/pki/ca.crt

users:
  - name: admin # the user name used to connect to the apiserver
    user:
      client-certificate: /etc/kubernetes/pki/client.crt
      client-key: /etc/kubernetes/pki/client.key

contexts:
  - name: default
    context:
      cluster: default
      user: admin # the user name used to connect to the apiserver

current-context: default
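For reference, once this file is installed, kubectl can be pointed at it with `kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes`, or by exporting KUBECONFIG. A frequent mistake is a context that references a cluster or user name that was never declared. The sketch below writes a throwaway copy and checks that cross-reference offline, with plain grep so no cluster or kubectl is needed:

```shell
set -e
mkdir -p /tmp/kubeconfig-demo && cd /tmp/kubeconfig-demo

# Throwaway copy of the kubeconfig above
cat > kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      server: https://192.168.164.200:9443
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: admin
    user:
      client-certificate: /etc/kubernetes/pki/client.crt
      client-key: /etc/kubernetes/pki/client.key
contexts:
  - name: default
    context:
      cluster: default
      user: admin
current-context: default
EOF

# The context's cluster/user must match a declared cluster and a declared user
grep -c "name: default" kubeconfig   # the cluster and the context
grep -c "name: admin" kubeconfig     # the user
```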

Deploy the files with ansible

ansible k8s -m copy -a "src=/root/ansible/hk8s/kubeconfig dest=/etc/kubernetes/"

ansible k8s -m shell -a "ls -l /etc/kubernetes/kubeconfig"

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.csr  dest=/etc/kubernetes/pki/" >> /dev/null

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.crt  dest=/etc/kubernetes/pki/" >> /dev/null

ansible k8ss,192.168.164.12,192.168.164.13 -m copy -a "src=/root/ansible/hk8s/client.key  dest=/etc/kubernetes/pki/" >> /dev/null

3.3 kube-controller-manager

Deploy the kube-controller-manager service

kube-controller-manager.service is stored under /usr/lib/systemd/system/

kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
 
[Install]
WantedBy=multi-user.target

controller-manager (the file referenced by EnvironmentFile=/etc/kubernetes/controller-manager)

KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt \
--v=0"

Push with ansible

ansible k8sm -m copy -a "src=./kube-controller-manager/controller-manager  dest=/etc/kubernetes/ "

ansible k8sm -m copy -a "src=./kube-controller-manager/kube-controller-manager.service dest=/usr/lib/systemd/system/ "

Start the service

systemctl daemon-reload && systemctl start kube-controller-manager && systemctl status kube-controller-manager && systemctl enable --now kube-controller-manager

ansible k8sm -m shell -a "systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager"

Error encountered: KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

3.4 kube-scheduler

Configured in the same way.

kube-scheduler.service is stored under /usr/lib/systemd/system/

The scheduler environment file is stored under /etc/kubernetes/

kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always
 
[Install]
WantedBy=multi-user.target

scheduler

KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--v=0"

ansible commands

ansible k8sm -m copy -a "src=./kube-scheduler/kube-scheduler.service  dest=/usr/lib/systemd/system/ "

ansible k8sm -m copy -a "src=./kube-scheduler/scheduler  dest=/etc/kubernetes/ "

ansible k8sm -m shell -a "systemctl daemon-reload && systemctl start kube-scheduler && systemctl status kube-scheduler"

Links

Tags · etcd-io/etcd · GitHub

v3.4 docs | etcd

Introduction - etcd官方文档中文版 (gitbook.io)

 二进制安装Kubernetes(k8s) v1.25.0 IPv4/IPv6双栈-阿里云开发者社区 (aliyun.com)

