To install Longhorn on Kubernetes, follow these steps:
Preparation
- Deploy Longhorn only on the k8s-worker5 node.
Taint the node
$. kubectl taint nodes k8s-worker5 longhorn:PreferNoSchedule
# ======================== Reference ======================== #
# Remove the taint
# kubectl taint nodes k8s-worker5 longhorn:PreferNoSchedule-
# Check the taints on this node
# kubectl describe node k8s-worker5 | grep Taints
Label the node
$. kubectl label node k8s-worker5 longhorn=deploy
# ======================== Reference ======================== #
# List node labels
# kubectl get no -o wide --show-labels | grep k8s-worker5
# Delete the label (label key followed by a minus sign)
# kubectl label nodes k8s-worker5 longhorn-
# Modify the label
# kubectl label nodes k8s-worker5 longhorn=deploy --overwrite
# or
# kubectl edit nodes k8s-worker5
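Before moving on, you can verify that both the taint and the label are in place. A minimal check (the jsonpath queries below are just one way to read them back):
# Should print the node's taint list, including longhorn:PreferNoSchedule
$. kubectl get node k8s-worker5 -o jsonpath='{.spec.taints}{"\n"}'
# Should print "deploy"
$. kubectl get node k8s-worker5 -o jsonpath='{.metadata.labels.longhorn}{"\n"}'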
Deploy Longhorn
Official docs: Longhorn installation guide
Download longhorn.yaml
# use this (pinned to v1.6.0; keep a backup copy)
$. wget https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml
$. cp longhorn.yaml longhorn.yaml-bak
# old (apply directly from the master branch)
$. kubectl apply \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
# ========================= Reference ======================== #
# Official docs: https://longhorn.io/docs/1.6.0/deploy/install/install-with-kubectl/
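Optionally, before editing, you can list the container images the manifest references, for example to pre-pull them on k8s-worker5 if that node's registry access is slow. A quick sketch:
# List the unique images referenced in longhorn.yaml
$. grep -E 'image: ' longhorn.yaml | sort -u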
vi longhorn.yaml
Search for template: and add the following inside each pod template's spec (a quick way to locate all of them is shown after the snippet):
...
      tolerations:                    # added
      - key: longhorn
        operator: Exists
        effect: PreferNoSchedule
      affinity:
        nodeAffinity:                 # added
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: longhorn
                operator: In
                values:
                - deploy
...
For the resulting file after these additions, see: longhorn.yaml
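The manifest contains several workloads (Deployments and a DaemonSet), so the snippet above has to be added under each pod template's spec. To locate every template: section that needs the edit:
# Print the line number of each template: occurrence so none are missed
$. grep -n 'template:' longhorn.yaml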
Start the deployment
$. kubectl apply -f longhorn.yaml
# Dry run (does not actually create or modify any resources in the cluster)
$. kubectl apply -f longhorn.yaml --dry-run=client
# ========================= Reference ========================== #
# Delete
$. kubectl delete -f longhorn.yaml
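If you later re-apply an edited manifest, kubectl diff can show what would change against the live cluster without modifying anything:
# Show the difference between longhorn.yaml and the objects currently in the cluster
$. kubectl diff -f longhorn.yaml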
Check pod status
Wait for the pods to start: once the resources are created successfully, the Longhorn system starts a series of pods.
Check the status of the pods:
$ kubectl get pods -n longhorn-system
# use
kubectl get pods -o wide -n longhorn-system
Confirm that all pods are in the "Running" state.
# All pods should be Running, and the two numbers in the READY column should match
[root@master ~]# kubectl get pod \
-n longhorn-system \
-o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-attacher-7bf4b7f996-ffhxm 1/1 Running 0 45m 10.244.166.146 node1 <none> <none>
csi-attacher-7bf4b7f996-kts4l 1/1 Running 0 45m 10.244.104.20 node2 <none> <none>
csi-attacher-7bf4b7f996-vtzdb 1/1 Running 0 45m 10.244.166.142 node1 <none> <none>
csi-provisioner-869bdc4b79-bjf8q 1/1 Running 0 45m 10.244.166.150 node1 <none> <none>
csi-provisioner-869bdc4b79-m2swk 1/1 Running 0 45m 10.244.104.18 node2 <none> <none>
csi-provisioner-869bdc4b79-pmkfq 1/1 Running 0 45m 10.244.166.143 node1 <none> <none>
csi-resizer-869fb9dd98-6ndgb 1/1 Running 0 8m47s 10.244.104.28 node2 <none> <none>
csi-resizer-869fb9dd98-czvzh 1/1 Running 0 45m 10.244.104.16 node2 <none> <none>
csi-resizer-869fb9dd98-pq2p5 1/1 Running 0 45m 10.244.166.149 node1 <none> <none>
csi-snapshotter-7d59d56b5c-85cr6 1/1 Running 0 45m 10.244.104.19 node2 <none> <none>
csi-snapshotter-7d59d56b5c-dlwjk 1/1 Running 0 45m 10.244.166.148 node1 <none> <none>
csi-snapshotter-7d59d56b5c-xsc6s 1/1 Running 0 45m 10.244.166.147 node1 <none> <none>
engine-image-ei-f9e7c473-ld6zp 1/1 Running 0 46m 10.244.104.15 node2 <none> <none>
engine-image-ei-f9e7c473-qw96n 1/1 Running 1 (6m5s ago) 8m13s 10.244.166.154 node1 <none> <none>
instance-manager-e-16be548a213303f54febe8742dc8e307 1/1 Running 0 8m19s 10.244.104.29 node2 <none> <none>
instance-manager-e-1e8d2b6ac4bdab53558aa36fa56425b5 1/1 Running 0 7m58s 10.244.166.155 node1 <none> <none>
instance-manager-r-16be548a213303f54febe8742dc8e307 1/1 Running 0 46m 10.244.104.14 node2 <none> <none>
instance-manager-r-1e8d2b6ac4bdab53558aa36fa56425b5 1/1 Running 0 7m31s 10.244.166.156 node1 <none> <none>
longhorn-admission-webhook-69979b57c4-rt7rh 1/1 Running 0 52m 10.244.166.136 node1 <none> <none>
longhorn-admission-webhook-69979b57c4-s2bmd 1/1 Running 0 52m 10.244.104.11 node2 <none> <none>
longhorn-conversion-webhook-966d775f5-st2ld 1/1 Running 0 52m 10.244.166.135 node1 <none> <none>
longhorn-conversion-webhook-966d775f5-z5m4h 1/1 Running 0 52m 10.244.104.12 node2 <none> <none>
longhorn-csi-plugin-gcngb 3/3 Running 0 45m 10.244.166.145 node1 <none> <none>
longhorn-csi-plugin-z4pr6 3/3 Running 0 45m 10.244.104.17 node2 <none> <none>
longhorn-driver-deployer-5d74696c6-g2p7p 1/1 Running 0 52m 10.244.104.9 node2 <none> <none>
longhorn-manager-j69pn 1/1 Running 0 52m 10.244.166.139 node1 <none> <none>
longhorn-manager-rnzwr 1/1 Running 0 52m 10.244.104.10 node2 <none> <none>
longhorn-recovery-backend-6576b4988d-l4lmb 1/1 Running 0 52m 10.244.104.8 node2 <none> <none>
longhorn-recovery-backend-6576b4988d-v79b4 1/1 Running 0 52m 10.244.166.138 node1 <none> <none>
longhorn-ui-596d5f6876-ms4dn 1/1 Running 0 52m 10.244.166.137 node1 <none> <none>
longhorn-ui-596d5f6876-td7ww 1/1 Running 0 52m 10.244.104.7 node2 <none> <none>
[root@master ~]#
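Instead of polling manually, kubectl wait can block until every pod reports Ready (the 10-minute timeout below is an arbitrary choice):
# Wait until all pods in longhorn-system are Ready, or give up after 10 minutes
$. kubectl wait --for=condition=Ready pod --all -n longhorn-system --timeout=600s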
Configure the Service (svc)
kubectl get svc -n longhorn-system
# Filter
[root@master ~]# kubectl get svc -n longhorn-system|grep longhorn-frontend
longhorn-frontend ClusterIP 10.106.154.54 <none> 80/TCP 55m
[root@master ~]#
## After installation, the longhorn-frontend Service defaults to ClusterIP,
## which is not reachable from outside the cluster, so change it to NodePort.
$. kubectl edit svc longhorn-frontend -n longhorn-system
## type: NodePort
# Change type from ClusterIP to NodePort, then save and exit
# Check the port exposed after changing to NodePort
[root@master ~]# kubectl get svc \
-n longhorn-system|grep longhorn-frontend
longhorn-frontend NodePort 10.106.154.54 <none> 80:32146/TCP 58m
[root@master ~]#
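As a non-interactive alternative to kubectl edit, the Service type can also be switched with kubectl patch:
# Change longhorn-frontend from ClusterIP to NodePort in one command
$. kubectl patch svc longhorn-frontend -n longhorn-system -p '{"spec":{"type":"NodePort"}}'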
Access from a browser
Open this port in a browser:
http://ip:32146 # use the IP address of your own k8s host
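A quick reachability check from the command line (replace <node-ip> with one of your node IPs and 32146 with the NodePort shown above):
# Expect an HTTP 200 response from the Longhorn UI
$. curl -I http://<node-ip>:32146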
Using Longhorn
Create a Longhorn StorageClass (SC):
Next, create a Longhorn StorageClass to provide block storage for your Kubernetes applications. Save the following as longhorn-storageclass.yaml:
# vi longhorn-storageclass.yaml
# kubectl apply -f longhorn-storageclass.yaml
# kubectl delete -f longhorn-storageclass.yaml
# kubectl get sc
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-ns
  # namespace: rpp-ns  # a StorageClass is cluster-scoped and has no namespace field
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
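After applying, verify that the StorageClass exists. Note that the Longhorn install itself also creates a default StorageClass named longhorn; if you prefer longhorn-ns as the cluster default, it can be annotated as shown below (in that case you would normally remove the default annotation from longhorn so there is only one default):
# Both "longhorn" (from the install) and "longhorn-ns" should be listed
$. kubectl get sc
# Optional: mark longhorn-ns as the default StorageClass
$. kubectl patch storageclass longhorn-ns \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'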
Create a PersistentVolumeClaim (PVC):
Now create a PersistentVolumeClaim so your application can use the block storage provided by Longhorn. Save the following as longhorn-pvc.yaml (storageClassName: longhorn refers to the default StorageClass created by the Longhorn install; change it to longhorn-ns if you want to use the one created above):
# vi longhorn-pvc.yaml
# kubectl apply -f longhorn-pvc.yaml --dry-run=client
# kubectl apply -f longhorn-pvc.yaml
# kubectl delete -f longhorn-pvc.yaml
# kubectl get pvc -n rpp-ns
# kubectl delete pvc longhorn-pvc -n rpp-ns
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
  namespace: rpp-ns
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
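Once applied, the PVC should bind to a dynamically provisioned Longhorn volume. A quick check:
# STATUS should become "Bound", and a matching PV should appear
$. kubectl get pvc longhorn-pvc -n rpp-ns
$. kubectl get pv | grep longhorn-pvc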
Use the Longhorn storage:
Once the PVC has been created, you can bind it into your application. Add an example application Pod that mounts the PVC as a volume. For example, save the following as app-pod.yaml:
# vi app-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: rpp-ns            # must be in the same namespace as the PVC
spec:
  containers:
    - name: app-container
      image: your-app-image
      volumeMounts:
        - name: longhorn-volume
          mountPath: /data
  volumes:
    - name: longhorn-volume
      persistentVolumeClaim:
        claimName: longhorn-pvc   # must match the PVC created above
---
# use
# vi rpp-longhorn-pod.yaml
# kubectl apply -f rpp-longhorn-pod.yaml --dry-run=client
# kubectl apply -f rpp-longhorn-pod.yaml
# kubectl get pod -n rpp-ns
# kubectl describe pod rpp-longhorn-pod -n rpp-ns
# kubectl exec -it -n rpp-ns rpp-longhorn-pod -- sh
# kubectl delete -f rpp-longhorn-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rpp-longhorn-pod
  namespace: rpp-ns
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-worker5   # must run on the node where Longhorn is deployed
  containers:
    - name: springboot-docker-longhorn    # container name
      image: harbor.echo01.com/my-project/spring-boot-docker2:0.0.1-SNAPSHOT
      volumeMounts:
        - name: loghorn-volume
          mountPath: /data_loghorn
      ports:                               # ports
        - containerPort: 8600              # port exposed by the container
          name: business-port
        - containerPort: 8800
          name: actuator-port
  volumes:
    - name: loghorn-volume
      persistentVolumeClaim:
        claimName: longhorn-pvc
---
# use
# Create a second pod and check whether the file created by the first pod is visible in the Longhorn-mounted directory
# vi rpp-longhorn-pod2.yaml
# kubectl apply -f rpp-longhorn-pod2.yaml --dry-run=client
# kubectl apply -f rpp-longhorn-pod2.yaml
# kubectl get pod -n rpp-ns
# kubectl describe pod rpp-longhorn-pod2 -n rpp-ns
# kubectl exec -it -n rpp-ns rpp-longhorn-pod2 -- sh
# kubectl delete -f rpp-longhorn-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rpp-longhorn-pod2
  namespace: rpp-ns
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-worker5   # must run on the node where Longhorn is deployed
  containers:
    - name: springboot-docker-longhorn2   # container name
      image: harbor.echo01.com/my-project/spring-boot-docker2:0.0.1-SNAPSHOT
      volumeMounts:
        - name: loghorn-volume
          mountPath: /data_loghorn
      ports:                               # ports
        - containerPort: 8600              # port exposed by the container
          name: business-port
        - containerPort: 8800
          name: actuator-port
  volumes:
    - name: loghorn-volume
      persistentVolumeClaim:
        claimName: longhorn-pvc
Then create the Pod with the following command:
kubectl apply -f app-pod.yaml
Your application can now use the block storage provided by Longhorn.
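To confirm that both rpp-longhorn pods really share the same Longhorn volume, write a file from the first pod and read it from the second; this works because a ReadWriteOnce volume can be mounted by multiple pods on the same node, and both pods are pinned to k8s-worker5. A sketch using the mount path /data_loghorn from the pod specs above:
# Write a test file from the first pod
$. kubectl exec -n rpp-ns rpp-longhorn-pod -- sh -c 'echo hello-longhorn > /data_loghorn/test.txt'
# Read it back from the second pod
$. kubectl exec -n rpp-ns rpp-longhorn-pod2 -- cat /data_loghorn/test.txt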