Using GPU Operator with KubeVirt (GPU Passthrough)
Following "GPU Operator with KubeVirt", this article records the process of running GPU Operator under KubeVirt, including configuring GPU passthrough for virtual machines and mounting GPU devices into containers.
KubeVirt provides a mechanism to assign host devices to virtual machines. The mechanism is generic and allows various types of PCI devices to be assigned, such as accelerators (including GPUs) or any other device attached to the PCI bus.
1. Preparation
Hardware
Three servers, each with the following main hardware configuration:
- Model: H3C UniServer R5200 G3
- CPU: Xeon® Gold 5118 CPU @ 2.30GHz, 2 sockets x 12 cores x 2 threads
- Memory: 512GB
- Disk: 1.75TB x 2, one as the system disk and one as the Ceph data disk
- GPU: GV100GL_TESLA_V100_PCIE_16GB x 4
Kubernetes cluster environment
The k8s environment is as follows: version v1.27.6 is used, and each node serves as both a control-plane node and a worker node.
]# kubectl get node
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,master 7d18h v1.27.6
node2 Ready control-plane,master 7d18h v1.27.6
node3 Ready control-plane,master 7d18h v1.27.6
rook-ceph installation
Version v1.13.5 is used; the components are as follows:
]# kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-5k87k 2/2 Running 4 (5d22h ago) 12d
csi-cephfsplugin-88rmz 2/2 Running 0 5d22h
csi-cephfsplugin-np9lb 2/2 Running 4 (5d22h ago) 12d
csi-cephfsplugin-provisioner-5556f68f89-bsphz 5/5 Running 0 5d22h
csi-cephfsplugin-provisioner-5556f68f89-jswvj 5/5 Running 0 5d22h
csi-rbdplugin-5v8dm 2/2 Running 0 5d22h
csi-rbdplugin-provisioner-76f966fdd8-6jwdk 5/5 Running 0 5d22h
csi-rbdplugin-provisioner-76f966fdd8-sjf6c 5/5 Running 3 (4d21h ago) 5d22h
csi-rbdplugin-s8k4x 2/2 Running 4 (5d22h ago) 12d
csi-rbdplugin-s97mc 2/2 Running 4 (5d22h ago) 12d
os-rook-set-cronjob-28865084-h5jdf 0/1 Completed 0 118s
rook-ceph-agent-7bdf69b4f7-wzl7r 1/1 Running 0 5d22h
rook-ceph-crashcollector-node1-c9bc54894-s7lps 1/1 Running 0 5d22h
rook-ceph-crashcollector-node2-6448cdd8f9-7dlgs 1/1 Running 0 5d22h
rook-ceph-crashcollector-node3-56d876f9c6-6bjlr 1/1 Running 0 5d22h
rook-ceph-exporter-node1-7c7d659d96-6k55c 1/1 Running 0 5d22h
rook-ceph-exporter-node2-7bc85dfdf-xdt8g 1/1 Running 0 5d22h
rook-ceph-exporter-node3-f45f5db9d-8jrgs 1/1 Running 0 5d22h
rook-ceph-mds-ceph-filesystem-a-5bbf4d5d79-qwsd2 2/2 Running 0 5d22h
rook-ceph-mds-ceph-filesystem-b-69b5fc4f7-fhlth 2/2 Running 0 5d22h
rook-ceph-mgr-a-5f5768988-6dsm2 3/3 Running 0 5d22h
rook-ceph-mgr-b-5c96dcf465-7gcrk 3/3 Running 0 5d22h
rook-ceph-mon-a-5dcb9b69c5-p5mh9 2/2 Running 0 5d22h
rook-ceph-mon-b-6575d4f46b-gsthp 2/2 Running 0 5d22h
rook-ceph-mon-c-7ff969d568-gqzr4 2/2 Running 0 5d22h
rook-ceph-operator-86d5cb7c46-nx4jc 1/1 Running 0 5d22h
rook-ceph-osd-0-69c8c7fb45-nvvsx 2/2 Running 0 5d22h
rook-ceph-osd-1-5fcdbc57bf-dh8cf 2/2 Running 0 5d22h
rook-ceph-osd-2-7445bdc885-sxqbc 2/2 Running 0 5d22h
rook-ceph-rgw-ceph-objectstore-a-795c4c64cf-xbhkl 2/2 Running 0 5d22h
rook-ceph-tools-5877f9f669-tndwc 1/1 Running 0 5d22h
KubeVirt installation
The virtualization add-on was installed by following the official KubeVirt documentation; the version used here is v1.2.0-amd64.
]# kubectl get pod -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-api-74d58d7fc8-5v5t4 1/1 Running 0 5d21h
virt-api-74d58d7fc8-m7lhw 1/1 Running 0 5d21h
virt-controller-55d7978dc-d9tk2 1/1 Running 0 5d21h
virt-controller-55d7978dc-xsvtm 1/1 Running 0 5d21h
virt-exportproxy-795d79f86b-qgc4c 1/1 Running 0 5d21h
virt-exportproxy-795d79f86b-wxt4b 1/1 Running 0 5d21h
virt-handler-4x55q 1/1 Running 0 5d21h
virt-handler-b9b27 1/1 Running 0 5d21h
virt-handler-n8bf8 1/1 Running 0 5d21h
virt-operator-79bb89f7bd-zrxx6 1/1 Running 0 5d21h
Disable the nouveau driver
- Remove the module:
]# modprobe -r nouveau
- Create the file /etc/modprobe.d/blacklist-nouveau.conf with the following content:
blacklist nouveau
options nouveau modeset=0
- Rebuild the initramfs (errors here can be ignored):
]# sudo dracut --force
- Check that the nouveau module is no longer loaded:
]# lsmod | grep nouveau
Uninstall the NVIDIA driver
GPU passthrough uses the vfio-pci driver, so the node OS does not need the NVIDIA driver installed. If it is already installed, remove it with:
nvidia-uninstall
In this environment, nodes 1 and 2 are configured with the vfio-pci driver and no NVIDIA driver; node 3 has the NVIDIA driver pre-installed (as determined by the chosen gpu-operator deployment mode).
Configure PCI passthrough on the host
Host device passthrough requires the virtualization extensions and the IOMMU extension (Intel VT-d or AMD IOMMU) to be enabled in the BIOS.
To enable the IOMMU, the host must be booted with an additional kernel parameter depending on the CPU type: intel_iommu=on for Intel CPUs, or amd_iommu=on for AMD CPUs.
Modify the GRUB boot configuration; the following two approaches are given for reference.
Enabling the Intel IOMMU via the GRUB configuration
Edit the GRUB configuration template and append the parameter to the end of the GRUB_CMDLINE_LINUX line.
]# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... intel_iommu=on"
...
]# grub2-mkconfig -o /boot/grub2/grub.cfg
]# reboot
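After the reboot, it is worth confirming that the flag actually made it onto the kernel command line. A minimal sketch (the `check_iommu` helper name is ours, not part of any tool; pass it the contents of /proc/cmdline):

```shell
# check_iommu: report whether an IOMMU flag is present in a kernel
# command-line string passed as the first argument.
check_iommu() {
    case "$1" in
        *intel_iommu=on*|*amd_iommu=on*) echo "enabled" ;;
        *) echo "disabled" ;;
    esac
}

# Typical use on the host after rebooting:
#   check_iommu "$(cat /proc/cmdline)"
```

On success, `dmesg | grep -i -e DMAR -e IOMMU` should also show the IOMMU being initialized.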
Editing the final boot configuration directly
The location of the boot configuration depends on the system; on H3Linux, for example, it is /boot/efi/EFI/H3Linux/grub.cfg. Append intel_iommu=on to the end of the linux command:
...
### BEGIN /etc/grub.d/10_linux ###
menuentry 'H3Linux (5.10.0-136.12.0.86.4.hl202.x86_64) 2.0.2-SP01' --class h3linux --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-5.10.0-136.12.0.86.4.hl202.x86_64-advanced-fdd76f12-53d6-41d2-aaab-be7a10e2009c' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 bb0505b3-0529-4916-af54-c39fca87a752
else
search --no-floppy --fs-uuid --set=root bb0505b3-0529-4916-af54-c39fca87a752
fi
echo 'Loading Linux 5.10.0-136.12.0.86.4.hl202.x86_64 ...'
linux /vmlinuz-5.10.0-136.12.0.86.4.hl202.x86_64 root=/dev/mapper/cloudos-root ro rd.lvm.lv=cloudos/root cgroup_disable=files panic=3 nmi_watchdog=1 console=tty0 crashkernel=512M rhgb quiet no-kvmclock user_namespace.enable=1 intel_pstate=disable cpufreq.off=1 loglevel=3 intel_iommu=on
echo 'Loading initial ramdisk ...'
initrd /initramfs-5.10.0-136.12.0.86.4.hl202.x86_64.img
}
...
Enabling Intel VT-d in the BIOS
Using an H3C server BIOS as an example: enter the BIOS and enable VT-d under [Socket Configuration] → [IIO Configuration].
Load the vfio driver
]# modprobe vfio_pci
Under normal circumstances the vfio kernel modules are now loaded:
]# lsmod | grep vfio_
vfio_pci 81920 0
vfio_virqfd 16384 1 vfio_pci
vfio_iommu_type1 53248 0
vfio 45056 2 vfio_iommu_type1,vfio_pci
irqbypass 16384 2 vfio_pci,kvm
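modprobe only loads the module for the current boot. To have vfio_pci loaded automatically on every boot, a modules-load.d entry can be created (a sketch, assuming a systemd-based distribution; the file name is arbitrary):

```
# /etc/modules-load.d/vfio-pci.conf
# systemd-modules-load reads one module name per line at boot.
vfio_pci
```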
2. Installing the GPU Operator
This article uses the latest GPU Operator, 24.9.0; refer to its installation documentation for deployment. The "pre-installed NVIDIA GPU drivers" scenario is used, i.e. driver.enabled=false.
Check the GPU devices on a node:
]# kubectl get node node3 -ojson | jq -r ".status.capacity"
# Example output:
{
"cpu": "48",
"devices.kubevirt.io/kvm": "1k",
"devices.kubevirt.io/tun": "1k",
"devices.kubevirt.io/vhost-net": "1k",
"ephemeral-storage": "514937088Ki",
"h3c.com/vcuda-core": "0",
"h3c.com/vcuda-memory": "0",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "790609008Ki",
"nvidia.com/GV100GL_TESLA_V100_PCIE_16GB": "4",
"nvidia.com/gpu": "0",
"pods": "250"
}
3. Configure GPU workloads on the nodes
Syntax:
# VM GPU-passthrough workload
kubectl label node <node-name> --overwrite nvidia.com/gpu.workload.config=vm-passthrough
# Container workload
kubectl label node <node-name> --overwrite nvidia.com/gpu.workload.config=container
Check a node's allocatable GPU resources:
]# kubectl get node node1 -o json | jq '.status.allocatable | with_entries(select(.key | startswith("nvidia.com/"))) | with_entries(select(.value != "0"))'
{
"nvidia.com/GV100GL_TESLA_V100_PCIE_16GB": "4"
}
In this test environment, node1 and node2 are configured for VM GPU-passthrough workloads and node3 for container GPU workloads:
kubectl label node node1 --overwrite nvidia.com/gpu.workload.config=vm-passthrough
kubectl label node node2 --overwrite nvidia.com/gpu.workload.config=vm-passthrough
kubectl label node node3 --overwrite nvidia.com/gpu.workload.config=container
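The labeling step can be wrapped in a small helper that rejects unsupported values before calling kubectl. A sketch (the `label_gpu_workload` name is ours; container, vm-passthrough and vm-vgpu are the workload values described by the GPU Operator documentation):

```shell
# label_gpu_workload: validate the workload value, then apply the label.
label_gpu_workload() {
    node=$1
    workload=$2
    case "$workload" in
        container|vm-passthrough|vm-vgpu) ;;  # accepted workload types
        *) echo "unknown workload: $workload" >&2; return 1 ;;
    esac
    kubectl label node "$node" --overwrite \
        "nvidia.com/gpu.workload.config=$workload"
}

# Usage: label_gpu_workload node1 vm-passthrough
```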
4. Add GPU resources to the KubeVirt CR
- Determine the resource name of the GPU device:
[root@node3 ~]# kubectl get node node1 -o json | jq '.status.allocatable | with_entries(select(.key | startswith("nvidia.com/"))) | with_entries(select(.value != "0"))'
Example output:
{
"nvidia.com/GV100GL_TESLA_V100_PCIE_16GB": "4"
}
Alternatively, check with the following command:
]# kubectl get node node1 -ojson | jq -r ".status.capacity"
{
"cpu": "48",
"devices.kubevirt.io/kvm": "1k",
"devices.kubevirt.io/tun": "1k",
"devices.kubevirt.io/vhost-net": "1k",
"ephemeral-storage": "514937088Ki",
"h3c.com/vcuda-core": "0",
"h3c.com/vcuda-memory": "0",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "790609008Ki",
"nvidia.com/GV100GL_TESLA_V100_PCIE_16GB": "4",
"nvidia.com/gpu": "0",
"pods": "250"
}
- Determine the PCI device ID of the GPU.
You can search for it by device name in the PCI IDs database.
If you have host access to the node, list the NVIDIA GPU devices with:
lspci -nnk -d 10de:
Example output:
]# lspci -nnk -d 10de:
60:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
64:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
65:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
66:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
Alternatively, check the device's vendor and device IDs with:
]# lspci -nnv | grep -i nvidia
60:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
64:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
65:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
66:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] [10de:1db4] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1214]
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia
- Edit the KubeVirt CR and add the GPU resource configuration:
kubectl edit kubevirts.kubevirt.io -n kubevirt kubevirt
Add the following content:
...
spec:
certificateRotateStrategy: {}
configuration:
developerConfiguration:
cpuAllocationRatio: 2
featureGates:
- HotplugVolumes
- LiveMigration
- Snapshot
- AutoResourceLimitsGate
- VMExport
- ExpandDisks
- HotplugNICs
- VMLiveUpdateFeatures
- GPU # added
- DisableMDEVConfiguration # added: set the DisableMDEVConfiguration feature gate
liveUpdateConfiguration:
maxCpuSockets: 48
maxGuest: 128Gi
migrations:
parallelMigrationsPerCluster: 30
parallelOutboundMigrationsPerNode: 20
permittedHostDevices:
pciHostDevices:
- externalResourceProvider: true
pciVendorSelector: 10DE:1DB4 # added: vendor:device ID (10de:1db4 per the lspci output above)
resourceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB # added: GPU model
vmRolloutStrategy: LiveUpdate
customizeComponents: {}
imagePullPolicy: IfNotPresent
...
Replace the values in the YAML to match your devices:
- Under pciHostDevices, replace pciVendorSelector and resourceName with the values for your GPU model.
- Set externalResourceProvider=true to indicate that this resource is provided by an external device plugin (the sandbox-device-plugin deployed by the GPU Operator).
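The same change can be kept in version control and applied with kubectl patch instead of editing interactively. A sketch of a merge patch (the file name is ours; it assumes the CR is named kubevirt in namespace kubevirt, and the vendor:device ID follows the lspci output above). Only permittedHostDevices is patched here: a merge patch replaces list fields wholesale, so the featureGates list is safer to keep managing with kubectl edit.

```yaml
# kubevirt-gpu-patch.yaml (hypothetical file name)
# Apply with:
#   kubectl patch kubevirt kubevirt -n kubevirt --type=merge --patch-file kubevirt-gpu-patch.yaml
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
        - externalResourceProvider: true
          pciVendorSelector: "10DE:1DB4"
          resourceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
```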
5. Configure GPU passthrough for virtual machines
Once the GPU Operator has deployed the sandbox device plugin and vfio-manager pods to the worker nodes, and the GPU resource has been added to the KubeVirt allow list, a GPU can be assigned to a virtual machine by editing the spec.domain.devices.gpus field of the VirtualMachineInstance manifest.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
...
spec:
domain:
devices:
gpus:
- deviceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
name: gpu1
...
deviceName is the resource name that identifies the device; name is the name used to identify the device inside the virtual machine.
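For reference, a complete minimal manifest built around this field might look as follows (a sketch: the VMI name, container-disk image and resource sizes are illustrative, not taken from this environment):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-gpu-demo
spec:
  domain:
    cpu:
      cores: 4
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
      gpus:
        # deviceName matches the resourceName added to the KubeVirt CR above
        - deviceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
          name: gpu1
    resources:
      requests:
        memory: 8Gi
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/fedora-cloud-container-disk-demo
```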
Configuring a single passthrough GPU on an existing VM
Taking an already-created virtual machine as an example:
kubectl edit vm -n vm-a34ceaea i-1hwrg6bn
Add a gpus entry under spec.domain.devices:
...
- cdrom:
bus: sata
name: img-vm-0p9i3k9z
- bootOrder: 3
cdrom:
bus: sata
name: img-vm-rg29oicy
gpus:
- deviceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
name: gpu1
interfaces:
- macAddress: 00:00:00:78:5F:66
...
Configuring multiple passthrough GPUs
If the virtual machine needs multiple GPUs, add multiple entries to the gpus list under spec.domain.devices:
...
- cdrom:
bus: sata
name: img-vm-0p9i3k9z
- bootOrder: 3
cdrom:
bus: sata
name: img-vm-rg29oicy
gpus:
- deviceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
name: gpu1
- deviceName: nvidia.com/GV100GL_TESLA_V100_PCIE_16GB
name: gpu2
interfaces:
- macAddress: 00:00:00:78:5F:66
...
Note:
The number of GPUs configured for a VM must not exceed the number of GPUs on the node where it runs.
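This constraint can be checked before editing the VM. A sketch (the `fits_on_node` helper name is ours; the jsonpath expression in the comment fetches the allocatable count shown earlier):

```shell
# fits_on_node: succeed when the requested GPU count does not exceed the
# node's allocatable count (both passed as plain integers).
fits_on_node() {
    requested=$1
    allocatable=$2
    [ "$requested" -le "$allocatable" ]
}

# Typical use against a live cluster:
#   alloc=$(kubectl get node node1 -o jsonpath='{.status.allocatable.nvidia\.com/GV100GL_TESLA_V100_PCIE_16GB}')
#   fits_on_node 2 "${alloc:-0}" || echo "not enough GPUs on node1"
```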
6. Verify GPU passthrough
If the VM was powered on while it was being configured, restart it after the change, then log into the guest OS and check whether the GPU was passed through successfully:
]# lspci | grep -i nvidia
The gpu-operator does not install the NVIDIA driver inside the VM; to use the GPU in the guest, install the driver yourself.
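The lspci check can be turned into a device count, which is easy to compare against the number of gpus entries in the VM spec. A sketch (`count_nvidia_gpus` is our helper name; run it inside the guest):

```shell
# count_nvidia_gpus: count NVIDIA PCI devices in lspci output read from
# stdin; the exit status is non-zero when no NVIDIA device is found.
count_nvidia_gpus() {
    grep -ic nvidia
}

# Inside the guest:
#   lspci | count_nvidia_gpus
```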
7. Changing a node's workload type
To reconfigure a GPU-passthrough node as a container-workload node:
- Detach the GPU resources from the VMs on that node and shut the VMs down
- Relabel the node's workload type as container:
kubectl label node node3 --overwrite nvidia.com/gpu.workload.config=container
- Install the NVIDIA driver on the node
- The gpu-operator components then running across the three nodes are listed below
Note that node3, which runs container GPU workloads, runs a different set of components than node1/node2, which run VM GPU-passthrough workloads.
dcgm-exporter-k48tk 1/1 Running 0 node3
gg-node-feature-discovery-master-8c4456d6d-d5rh2 1/1 Running 0 node3
gg-node-feature-discovery-worker-h25wc 1/1 Running 0 node3
gg-node-feature-discovery-worker-kqsqc 1/1 Running 0 node2
gg-node-feature-discovery-worker-sw74t 1/1 Running 0 node1
gpu-feature-discovery-24vgc 1/1 Running 0 node3
gpu-operator-56b9f58d9c-dzsk6 1/1 Running 0 node3
nvidia-container-toolkit-daemonset-htlvx 1/1 Running 0 node3
nvidia-device-plugin-daemonset-hdmwc 1/1 Running 0 node3
nvidia-operator-validator-m7kch 1/1 Running 0 node3
nvidia-sandbox-device-plugin-daemonset-fwb2q 1/1 Running 0 node2
nvidia-sandbox-device-plugin-daemonset-h4dtn 1/1 Running 0 node1
nvidia-sandbox-validator-294bd 1/1 Running 0 node1
nvidia-sandbox-validator-s4qrg 1/1 Running 0 node2
nvidia-vfio-manager-8f7xx 1/1 Running 0 node2
nvidia-vfio-manager-jszqg 1/1 Running 0 node1
smi-exporter-9s6x2 1/1 Running 0 node1
smi-exporter-b8wzc 1/1 Running 0 node3
smi-exporter-xg7rs 1/1 Running 0 node2
Note
Since the gpu-operator was installed with driver.enabled=false, no driver-installation pod runs on node3 by default.
8. References
- GPU Operator with KubeVirt: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/gpu-operator-kubevirt.html
- Host Devices Assignment: https://kubevirt.io/user-guide/compute/host-devices/#listing-permitted-devices