1. Planning

| Node role | Node IP | Data directory |
|---|---|---|
| Server | 192.168.99.210 | /data |
| Client | 192.168.99.211 | |
2. Set Up the NFS Service

2.1 Install the server

If NFS is already set up, you can skip this section.

Run the following steps on the server node.

Install the NFS and RPC services:

```shell
yum install -y nfs-utils rpcbind
```

Create the shared directory:

```shell
# requires root (or equivalent) privileges
mkdir /data
```

Edit the configuration file with `vim /etc/exports` and add the following line:

```
/data *(rw,sync,insecure,no_subtree_check,no_root_squash)
```

Start the RPC and NFS services and enable them at boot:

```shell
systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
```

Check that the server has loaded the export configuration:

```shell
showmount -e localhost
```

Expected output:

```
Export list for localhost:
/data *
```
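If you change `/etc/exports` later, there is no need to restart the services; the kernel export table can be refreshed in place with the standard `exportfs` tool (run on the server node):

```shell
# re-read /etc/exports and apply any additions/removals without restarting nfs-server
exportfs -ra
# list the currently active exports together with their effective options
exportfs -v
```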
2.2 Install the client

Install the NFS client package nfs-utils; run this on every node of the k8s cluster:

```shell
yum install nfs-utils -y
```

List the directories the server exports:

```shell
# 192.168.99.210 is the NFS server IP; /data is the NFS data directory
showmount -e 192.168.99.210
```

Expected output:

```
Export list for 192.168.99.210:
/data *
```
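Before wiring NFS into Kubernetes, it is worth confirming from a client node that the share can actually be mounted and written to. A manual smoke test (the mount point `/mnt/nfstest` is an arbitrary choice for this check):

```shell
mkdir -p /mnt/nfstest
mount -t nfs 192.168.99.210:/data /mnt/nfstest
# a successful touch proves write access through no_root_squash
touch /mnt/nfstest/client-write-test && echo "write OK"
rm /mnt/nfstest/client-write-test
umount /mnt/nfstest
```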
3. Configure NFS as the Kubernetes Storage Class
3.1 Create the provisioner

```shell
vim nfs-rbac.yaml
```

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# Deployment that runs the nfs-client provisioner
# (could also be kept in a separate nfs-deployment.yaml)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: gmoney23/nfs-client-provisioner:1.1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # the provisioner name is arbitrary, but every later
            # reference to it must match exactly
            - name: PROVISIONER_NAME
              value: storage.pri/nfs
            - name: NFS_SERVER
              value: 192.168.99.210
            - name: NFS_PATH
              value: /data
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.99.210
            path: /data
```

Note: adjust the NFS server address and data directory to match your environment.

```shell
kubectl apply -f nfs-rbac.yaml
```
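Before moving on, it helps to confirm that the provisioner pod actually came up; the `app=nfs-client-provisioner` label comes from the Deployment above:

```shell
# the pod should reach the Running state
kubectl get pods -l app=nfs-client-provisioner
# if it is not Running, the provisioner logs usually show why
kubectl logs deployment/nfs-client-provisioner
```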
3.2 Create the storage class

```shell
vim storageclass-nfs.yaml
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: storage.pri/nfs  # must match PROVISIONER_NAME in the provisioner deployment
reclaimPolicy: Delete
```

The storage class is named storage-nfs, and the is-default-class annotation makes it the cluster's default.

```shell
kubectl apply -f storageclass-nfs.yaml
```
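You can confirm the class was created and registered as the default:

```shell
# storage-nfs should appear and be marked "(default)" in the NAME column
kubectl get storageclass
```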
3.3 Verify NFS dynamic provisioning

```shell
vim pvc.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # do not set volumeName here: pinning the claim to a fixed PV would
  # bypass dynamic provisioning and leave the PVC Pending
  storageClassName: "storage-nfs"
```
```shell
vim testpod.yaml
```

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-pvc
```
Apply both manifests:

```shell
kubectl apply -f pvc.yaml
kubectl apply -f testpod.yaml
```

Check the result:
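A sketch of what to verify (the directory name on the server follows the nfs-client provisioner's `${namespace}-${pvcName}-${pvName}` naming convention, assuming the PVC was created in the default namespace):

```shell
# the PVC should be Bound, with a dynamically created PV behind it
kubectl get pvc nfs-pvc
kubectl get pv
# the test pod should have run to completion
kubectl get pod test-pod
# on the NFS server, the provisioned subdirectory should contain the SUCCESS file
ls /data/default-nfs-pvc-*/
```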