How RBD works:
1. The client creates a pool with a chosen number of PGs, then creates an RBD device and maps it into its filesystem.
2. When the user writes data, Ceph splits it into objects, 4 MiB each by default, each named with the image's object prefix plus a sequence number.
3. Each object is assigned its replica locations through a PG.
4. The PG uses the CRUSH algorithm to pick 3 OSDs (with the default replica count) and the object is stored on those 3 OSDs (a quick way to check this mapping yourself is sketched right after this list).
5. On the OSD side the disk is actually formatted with an XFS filesystem (this describes the older FileStore backend), and storing an object there is essentially storing a file such as rbd0.object1.file.
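If you want to see this object → PG → OSD mapping for yourself, Ceph will compute it for any object name. A minimal sketch, assuming the rbd-pool pool and the block1 image that get created later in this article (the object name is the image's block_name_prefix from rbd info plus a sequence number):
# show which PG and which 3 OSDs the first 4 MiB object of block1 maps to
[root@server153 ~]# ceph osd map rbd-pool rbd_data.117cfcb49bc4.0000000000000000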
Let's get straight to the hands-on part.
Following on from the previous article, where the Ceph cluster was deployed, we will now use Ceph's RBD block storage.
First, take a look at the cluster that was set up in that article:
[root@server153 ~]# ceph -s
  cluster:
    id:     e86b8687-5af1-4c9e-a816-c1b0c0855349
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum server153,server154,server155 (age 5h)
    mgr: server153(active, since 23h), standbys: server154, server155
    osd: 6 osds: 6 up (since 23h), 6 in (since 23h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 114 GiB / 120 GiB avail
    pgs:
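To see how those 6 OSDs are spread across the three hosts (this host/OSD hierarchy is what CRUSH walks when it places the 3 replicas), you can also run the following; output is omitted here:
[root@server153 ~]# ceph osd tree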
Now let's put the cluster to use. First, create a storage pool. With 6 OSDs and 3 replicas, the usual rule of thumb ((100 × OSDs) / replicas ≈ 200, rounded up to the next power of two) gives 256 PGs, which is what we pass below:
[root@server153 ceph-cluster]# ceph osd pool create rbd-pool 256 256
pool 'rbd-pool' created
Check the details of the pool we just created, including its replica count:
[root@server153 ceph-cluster]# ceph osd pool ls
rbd-pool
[root@server153 ceph-cluster]# ceph osd pool ls detail
pool 1 'rbd-pool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 28 flags hashpspool stripe_width 0
[root@server153 ceph-cluster]# ceph osd pool get rbd-pool size
size: 3
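The pool came up with the default of 3 replicas (size 3) and stays writable as long as at least 2 of them are available (min_size 2). If you ever need different values, they can be changed per pool; a sketch that simply re-applies the defaults shown above:
[root@server153 ceph-cluster]# ceph osd pool set rbd-pool size 3
[root@server153 ceph-cluster]# ceph osd pool set rbd-pool min_size 2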
Tag the pool so Ceph knows it will be used for RBD:
[root@server153 ceph-cluster]# ceph osd pool application enable rbd-pool rbd
enabled application 'rbd' on pool 'rbd-pool'
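You can verify the tag afterwards, and on recent releases rbd pool init both sets the rbd tag and initializes the pool for RBD use in one step, so either approach works here:
[root@server153 ceph-cluster]# ceph osd pool application get rbd-pool
# alternative on newer Ceph releases:
[root@server153 ceph-cluster]# rbd pool init rbd-pool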
Then create a 10 GiB block device (image) in this pool:
[root@server153 ceph-cluster]# rbd create --size 10240 rbd-pool/block1
[root@server153 ceph-cluster]# rbd ls rbd-pool
block1
Check the details of the block1 image:
[root@server153 ceph-cluster]# rbd info rbd-pool/block1
rbd image 'block1':
    size 10 GiB in 2560 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 117cfcb49bc4
    block_name_prefix: rbd_data.117cfcb49bc4
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Sat Nov 25 19:54:09 2023
    access_timestamp: Sat Nov 25 19:54:09 2023
    modify_timestamp: Sat Nov 25 19:54:09 2023
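The block_name_prefix ties straight back to the workflow at the top: every 4 MiB chunk of this image is stored as a RADOS object named rbd_data.117cfcb49bc4.<sequence>. The image is also thin-provisioned, so the 10 GiB is only a logical size and objects appear as data is written. Two commands to check this yourself (expect little or nothing until the image has been written to):
# RADOS objects backing the image
[root@server153 ceph-cluster]# rados -p rbd-pool ls | grep rbd_data.117cfcb49bc4
# provisioned vs. actually used space
[root@server153 ceph-cluster]# rbd du rbd-pool/block1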
Suppose the disks on node 155 are filling up; we will use the block1 device to add storage on that node.
First check the disks on node 155:
lsblk
Then map the block device locally. A few advanced image features have to be disabled first, because the kernel RBD client does not support them; otherwise you get the following error:
[root@server155 ~]# rbd map rbd-pool/block1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd-pool/block1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
[root@server155 ~]# rbd feature disable rbd-pool/block1 object-map fast-diff deep-flatten
[root@server155 ~]# rbd map rbd-pool/block1
/dev/rbd0
[root@server155 ~]#
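You can confirm the mapping with rbd showmapped. And if you would rather not disable features after the fact, the kernel-incompatible ones can be left out at creation time by requesting only layering; a sketch, with block2 as a purely hypothetical second image name:
[root@server155 ~]# rbd showmapped
# block2 is just an example name, not part of this article's setup
[root@server153 ceph-cluster]# rbd create --size 10240 --image-feature layering rbd-pool/block2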
Check the disks on node 155 again.
Notice that sdb and sdc show up as LVM devices: that is because ceph-volume builds each OSD on top of LVM. The new rbd0 device, on the other hand, is not a local disk at all; it is a network block device backed by the rbd-pool we created, which is exactly why it can be carved out and resized so flexibly.
Now that the device is there, using it works like any other disk:
format it and mount it as usual.
[root@server155 ~]# mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0 isize=512 agcount=16, agsize=163840 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@server155 ~]# mount /dev/rbd /mnt/
mount: /dev/rbd is not a block device
[root@server155 ~]# mount /dev0/rbd /mnt/
mount: special device /dev0/rbd does not exist
[root@server155 ~]# mount /dev/rbd0 /mnt/
You can see that the disk is now mounted.
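Because the device is backed by the pool rather than by a fixed local disk, growing it later is straightforward. A sketch of extending block1 to 20 GiB online and letting XFS pick up the new space (run on node 155, where the image is mapped and mounted):
# grow the image to 20 GiB (20480 MiB)
[root@server155 ~]# rbd resize --size 20480 rbd-pool/block1
# xfs_growfs takes the mount point and expands the filesystem into the new space
[root@server155 ~]# xfs_growfs /mnt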
That is the basic workflow for using Ceph's RBD block storage.
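One housekeeping note: the rbd map and the mount above do not survive a reboot on their own. The ceph-common package ships an rbdmap service for re-mapping images at boot; a minimal sketch, assuming the default client.admin keyring (the filesystem mount itself still needs an fstab entry or a script that runs after the service):
# /etc/ceph/rbdmap -- one image per line
rbd-pool/block1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
[root@server155 ~]# systemctl enable rbdmap.service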
I hope you find it helpful.