Ceph Pacific deployment on ARM architecture

Background

The partner lab's Huawei private cloud originally used a single-node NFS server as its storage backend. Two new requirements prompted a change: the business now needs S3-style object storage (OSS), and the Kubernetes cluster and other machines need a scalable distributed file system.

Deploying Ceph

Initial machine plan

IP           Spec                  Hostname               Role
10.17.3.14   4C8G, 1TB data disk   ceph-node01.xx.local   mon1 mgr1 node01
10.17.3.15   4C8G, 1TB data disk   ceph-node02.xx.local   mon2 mgr2 node02
10.17.3.16   4C8G, 1TB data disk   ceph-node03.xx.local   mon3 mgr3 node03

Run on all nodes:

Any disk on a node that will be used as a Ceph OSD must be unmounted first.
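A minimal sketch of freeing a disk for OSD use (assuming the data disk is /dev/vdb with one mounted partition, as used later in this post; adjust to the actual layout):

lsblk /dev/vdb                     # inspect partitions and mount points
umount /dev/vdb1                   # unmount any mounted partition (hypothetical)
sed -i '\|/dev/vdb|d' /etc/fstab   # drop the fstab entry so it stays unmounted after reboot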

Node time synchronization

apt install apt-transport-https ca-certificates curl software-properties-common chrony -y
vim /etc/chrony/chrony.conf
server ntp.xx.xx.cn minpoll 4 maxpoll 10 iburst # internal NTP server
systemctl restart chronyd

root@ceph-node01:/etc/ceph-cluster# chronyc sources -v
210 Number of sources = 1
 
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample              
===============================================================================
^* 100.xx.0.35                  4   8   377   177  +3929ns[+1073ns] +/-  273ms

root@ceph-node01:/etc/ceph-cluster# tail -n 3 /etc/hosts
10.17.3.14 ceph-node01.xx.local ceph-node01
10.17.3.15 ceph-node02.xx.local ceph-node02
10.17.3.16 ceph-node03.xx.local ceph-node03
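ceph-deploy expects passwordless SSH from the deploy node to every host; the password prompts in the logs below are exactly what happens without it. A minimal sketch, assuming the cephadmin account created in the next section:

# run as cephadmin on the deploy node (ceph-node01)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in ceph-node01 ceph-node02 ceph-node03; do ssh-copy-id cephadmin@$host; done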

Deploy with ceph-deploy

curl -x socks5://10.17.3.154:7891 -LO https://download.ceph.com/keys/release.asc
apt-key add release.asc
echo "deb https://download.ceph.com/debian-pacific/ bionic main" | tee /etc/apt/sources.list.d/ceph.list
# create a regular admin account
groupadd -r -g 2088 cephadmin && useradd -r -m -s /bin/bash -u 2088 -g 2088 cephadmin && echo "cephadmin:xx" | chpasswd
echo "cephadmin ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers
su cephadmin
 
apt install ceph-common -y
mkdir -pv /etc/ceph-cluster
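# Note: ceph-deploy itself is not among the apt packages above; judging by the
# /usr/local/bin/ceph-deploy path and version 2.0.1 in the logs below, it was
# presumably installed via pip (assumption):
pip install ceph-deploy==2.0.1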
 
 
ceph-deploy install --release pacific ceph-node01
ceph-deploy install --release pacific ceph-node02


Initialize the cluster from the deploy node

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy new --cluster-network 10.17.3.0/24 --public-network 10.17.3.0/24 ceph-node1.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb6791c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1.xx.local']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb6772410>
[ceph_deploy.cli][INFO  ]  public_network                : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 10.17.3.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node01
[ceph-node1.xx.local][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-node1.xx.local
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO  ] will connect again with password prompt
root@ceph-node1.xx.local's password:
Permission denied, please try again.
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.new][INFO  ] adding public keys to authorized_keys
[ceph-node1.xx.local][DEBUG ] append contents to file
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph-node1.xx.local][DEBUG ] find the location of an executable
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip link show
[ceph-node1.xx.local][INFO  ] Running command: /bin/ip addr show
[ceph-node1.xx.local][DEBUG ] IP addresses found: [u'10.108.101.32', u'10.104.61.120', u'10.98.52.88', u'10.244.24.0', u'10.244.24.1', u'10.99.115.16', u'10.106.43.191', u'10.104.75.139', u'10.105.7.41', u'10.100.142.181', u'10.97.252.180', u'10.110.23.237', u'10.98.213.254', u'10.96.0.1', u'10.101.27.103', u'10.99.3.237', u'10.97.241.24', u'10.17.3.14', u'10.110.31.40', u'10.109.24.221', u'10.97.44.182', u'10.99.46.158', u'10.100.68.217', u'10.96.87.174', u'10.97.255.233', u'10.111.118.0', u'10.96.0.10', u'10.96.23.220', u'10.105.34.53', u'10.106.170.182', u'10.106.145.33']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1.xx.local
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 10.17.3.14
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are [u'10.17.3.14']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
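The generated ceph.conf is minimal. Based on the options passed above and the fsid that appears in the later logs, it should look roughly like this (a sketch, not a verbatim copy):

[global]
fsid = 5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
public_network = 10.17.3.0/24
cluster_network = 10.17.3.0/24
mon_initial_members = ceph-node1
mon_host = 10.17.3.14
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx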

Initialize the nodes

sudo ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
 
sudo: unable to resolve host ceph-node01: Resource temporarily unavailable
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --no-adjust-repos --nogpgcheck ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f33dc80>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : False
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffff9f3fac50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node1.xx.local', 'ceph-node2.xx.local', 'ceph-node3.xx.local']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : True
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-node1.xx.local ceph-node2.xx.local ceph-node3.xx.local
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node1.xx.local ...
root@ceph-node1.xx.local's password:
root@ceph-node1.xx.local's password:
[ceph-node1.xx.local][DEBUG ] connected to host: ceph-node1.xx.local
[ceph-node1.xx.local][DEBUG ] detect platform information from remote host
[ceph-node1.xx.local][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node1.xx.local][INFO  ] installing Ceph on ceph-node1.xx.local
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node1.xx.local][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node1.xx.local][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 340 not upgraded.
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node1.xx.local][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node1.xx.local][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:3 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node1.xx.local][DEBUG ] Hit:4 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node1.xx.local][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node1.xx.local][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ceph ceph-osd ceph-mds ceph-mon radosgw
[ceph-node1.xx.local][DEBUG ] Reading package lists...
[ceph-node1.xx.local][DEBUG ] Building dependency tree...
[ceph-node1.xx.local][DEBUG ] Reading state information...
[ceph-node1.xx.local][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node1.xx.local][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node1.xx.local][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node1.xx.local][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node1.xx.local][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node1.xx.local][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node1.xx.local][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node1.xx.local][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node1.xx.local][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node1.xx.local][DEBUG ]   python-webtest python-werkzeug
[ceph-node1.xx.local][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node1.xx.local][DEBUG ] The following additional packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-base ceph-common ceph-mgr ceph-mgr-modules-core libcephfs2 libjaeger
[ceph-node1.xx.local][DEBUG ]   liblua5.3-0 librabbitmq4 librados2 libradosstriper1 librbd1 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 librgw2 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] Suggested packages:
[ceph-node1.xx.local][DEBUG ]   python3-influxdb python3-crypto python3-beaker python-mako-doc httpd-wsgi
[ceph-node1.xx.local][DEBUG ]   libapache2-mod-python libapache2-mod-scgi libjs-mochikit python-pecan-doc
[ceph-node1.xx.local][DEBUG ]   python-waitress-doc python-webob-doc python-webtest-doc ipython3
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-termcolor python3-watchdog python-werkzeug-doc
[ceph-node1.xx.local][DEBUG ] Recommended packages:
[ceph-node1.xx.local][DEBUG ]   nvme-cli smartmontools ceph-fuse ceph-mgr-dashboard
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-diskprediction-local ceph-mgr-k8sevents ceph-mgr-cephadm
[ceph-node1.xx.local][DEBUG ]   python3-lxml python3-routes python3-simplejson python3-pastescript
[ceph-node1.xx.local][DEBUG ]   python3-pyinotify
[ceph-node1.xx.local][DEBUG ] The following packages will be REMOVED:
[ceph-node1.xx.local][DEBUG ]   python-cephfs python-rados python-rgw
[ceph-node1.xx.local][DEBUG ] The following NEW packages will be installed:
[ceph-node1.xx.local][DEBUG ]   ceph-mgr-modules-core libjaeger liblua5.3-0 librabbitmq4 librdkafka1
[ceph-node1.xx.local][DEBUG ]   librdmacm1 libsqlite3-mod-ceph python3-bcrypt python3-bs4
[ceph-node1.xx.local][DEBUG ]   python3-ceph-argparse python3-ceph-common python3-cephfs python3-cherrypy3
[ceph-node1.xx.local][DEBUG ]   python3-dateutil python3-distutils python3-jwt python3-lib2to3
[ceph-node1.xx.local][DEBUG ]   python3-logutils python3-mako python3-markupsafe python3-paste
[ceph-node1.xx.local][DEBUG ]   python3-pastedeploy python3-pecan python3-prettytable python3-rados
[ceph-node1.xx.local][DEBUG ]   python3-rbd python3-rgw python3-simplegeneric python3-singledispatch
[ceph-node1.xx.local][DEBUG ]   python3-tempita python3-waitress python3-webob python3-webtest
[ceph-node1.xx.local][DEBUG ]   python3-werkzeug
[ceph-node1.xx.local][DEBUG ] The following packages will be upgraded:
[ceph-node1.xx.local][DEBUG ]   ceph ceph-base ceph-common ceph-mds ceph-mgr ceph-mon ceph-osd libcephfs2
[ceph-node1.xx.local][DEBUG ]   librados2 libradosstriper1 librbd1 librgw2 radosgw
[ceph-node1.xx.local][DEBUG ] 13 upgraded, 34 newly installed, 3 to remove and 327 not upgraded.
[ceph-node1.xx.local][DEBUG ] Need to get 70.2 MB of archives.
[ceph-node1.xx.local][DEBUG ] After this operation, 117 MB of additional disk space will be used.
[ceph-node1.xx.local][DEBUG ] Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 librdmacm1 arm64 17.1-1ubuntu0.2 [49.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 liblua5.3-0 arm64 5.3.3-1ubuntu0.18.04.1 [105 kB]
[ceph-node1.xx.local][DEBUG ] Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 librabbitmq4 arm64 0.8.0-1ubuntu0.18.04.2 [30.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:4 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 librdkafka1 arm64 0.11.3-1build1 [245 kB]
[ceph-node1.xx.local][DEBUG ] Get:5 https://download.ceph.com/debian-pacific bionic/main arm64 libradosstriper1 arm64 16.2.15-1bionic [387 kB]
[ceph-node1.xx.local][DEBUG ] Get:6 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-dateutil all 2.6.1-1 [52.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:7 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-bcrypt arm64 3.1.4-2 [25.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:8 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-cherrypy3 all 8.9.1-2 [160 kB]
[ceph-node1.xx.local][DEBUG ] Get:9 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-lib2to3 all 3.6.9-1~18.04 [77.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:10 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-distutils all 3.6.9-1~18.04 [144 kB]
[ceph-node1.xx.local][DEBUG ] Get:11 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-jwt all 1.5.3+ds1-1ubuntu0.1 [16.6 kB]
[ceph-node1.xx.local][DEBUG ] Get:12 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-logutils all 0.3.3-5 [16.7 kB]
[ceph-node1.xx.local][DEBUG ] Get:13 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-markupsafe arm64 1.0-1build1 [13.2 kB]
[ceph-node1.xx.local][DEBUG ] Get:14 http://ports.ubuntu.com/ubuntu-ports bionic-updates/main arm64 python3-mako all 1.0.7+ds1-1ubuntu0.2 [59.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:15 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-simplegeneric all 0.8.1-1 [11.5 kB]
[ceph-node1.xx.local][DEBUG ] Get:16 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-singledispatch all 3.4.0.3-2 [7,022 B]
[ceph-node1.xx.local][DEBUG ] Get:17 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webob all 1:1.7.3-2fakesync1 [64.3 kB]
[ceph-node1.xx.local][DEBUG ] Get:18 https://download.ceph.com/debian-pacific bionic/main arm64 radosgw arm64 16.2.15-1bionic [9,564 kB]
[ceph-node1.xx.local][DEBUG ] Get:19 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-bs4 all 4.6.0-1 [67.8 kB]
[ceph-node1.xx.local][DEBUG ] Get:20 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-waitress all 1.0.1-1 [53.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:21 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-tempita all 0.5.2-2 [13.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:22 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-paste all 2.0.3+dfsg-4ubuntu1 [456 kB]
[ceph-node1.xx.local][DEBUG ] Get:23 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-pastedeploy all 1.5.2-4 [13.4 kB]
[ceph-node1.xx.local][DEBUG ] Get:24 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-webtest all 2.0.28-1ubuntu1 [27.9 kB]
[ceph-node1.xx.local][DEBUG ] Get:25 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 python3-pecan all 1.2.1-2 [86.1 kB]
[ceph-node1.xx.local][DEBUG ] Get:26 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 python3-werkzeug all 0.14.1+dfsg1-1ubuntu0.2 [175 kB]
[ceph-node1.xx.local][DEBUG ] Get:27 http://ports.ubuntu.com/ubuntu-ports bionic/main arm64 python3-prettytable all 0.7.2-3 [19.7 kB]
 
.....
[ceph-node3.xx.local][DEBUG ] Setting up ceph (16.2.15-1bionic) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for systemd (237-3ubuntu10.31) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
[ceph-node3.xx.local][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...
[ceph-node3.xx.local][DEBUG ] ureadahead will be reprofiled on next reboot
[ceph-node3.xx.local][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1) ...
[ceph-node3.xx.local][INFO  ] Running command: ceph --version
[ceph-node3.xx.local][DEBUG ] ceph version 16.2.15 (618f440892089921c3e944a991122ddc44e60516) pacific (stable)

Add the ceph-mon service to the cluster and initialize the monitor

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy  mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff82b5ceb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff82bc9cd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] determining if provided host has same hostname in remote
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] deploying mon to ceph-node01
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] remote hostname: ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][DEBUG ] create the mon path if it does not exist
[ceph-node01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node01/done
[ceph-node01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node01][DEBUG ] create the init path if it does not exist
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl start ceph-mon@ceph-node01
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][DEBUG ] status for monitor: mon.ceph-node01
[ceph-node01][DEBUG ] {
[ceph-node01][DEBUG ]   "election_epoch": 3,
[ceph-node01][DEBUG ]   "extra_probe_peers": [],
[ceph-node01][DEBUG ]   "feature_map": {
[ceph-node01][DEBUG ]     "mon": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "features": "0x3f01cfbdfffdffff",
[ceph-node01][DEBUG ]         "num": 1,
[ceph-node01][DEBUG ]         "release": "luminous"
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "features": {
[ceph-node01][DEBUG ]     "quorum_con": "4540138314316775423",
[ceph-node01][DEBUG ]     "quorum_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "required_con": "2449958747317026820",
[ceph-node01][DEBUG ]     "required_mon": [
[ceph-node01][DEBUG ]       "kraken",
[ceph-node01][DEBUG ]       "luminous",
[ceph-node01][DEBUG ]       "mimic",
[ceph-node01][DEBUG ]       "osdmap-prune",
[ceph-node01][DEBUG ]       "nautilus",
[ceph-node01][DEBUG ]       "octopus",
[ceph-node01][DEBUG ]       "pacific",
[ceph-node01][DEBUG ]       "elector-pinging"
[ceph-node01][DEBUG ]     ]
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "monmap": {
[ceph-node01][DEBUG ]     "created": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "disallowed_leaders: ": "",
[ceph-node01][DEBUG ]     "election_strategy": 1,
[ceph-node01][DEBUG ]     "epoch": 1,
[ceph-node01][DEBUG ]     "features": {
[ceph-node01][DEBUG ]       "optional": [],
[ceph-node01][DEBUG ]       "persistent": [
[ceph-node01][DEBUG ]         "kraken",
[ceph-node01][DEBUG ]         "luminous",
[ceph-node01][DEBUG ]         "mimic",
[ceph-node01][DEBUG ]         "osdmap-prune",
[ceph-node01][DEBUG ]         "nautilus",
[ceph-node01][DEBUG ]         "octopus",
[ceph-node01][DEBUG ]         "pacific",
[ceph-node01][DEBUG ]         "elector-pinging"
[ceph-node01][DEBUG ]       ]
[ceph-node01][DEBUG ]     },
[ceph-node01][DEBUG ]     "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
[ceph-node01][DEBUG ]     "min_mon_release": 16,
[ceph-node01][DEBUG ]     "min_mon_release_name": "pacific",
[ceph-node01][DEBUG ]     "modified": "2024-10-08T10:12:42.715558Z",
[ceph-node01][DEBUG ]     "mons": [
[ceph-node01][DEBUG ]       {
[ceph-node01][DEBUG ]         "addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "crush_location": "{}",
[ceph-node01][DEBUG ]         "name": "ceph-node01",
[ceph-node01][DEBUG ]         "priority": 0,
[ceph-node01][DEBUG ]         "public_addr": "10.17.3.14:6789/0",
[ceph-node01][DEBUG ]         "public_addrs": {
[ceph-node01][DEBUG ]           "addrvec": [
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:3300",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v2"
[ceph-node01][DEBUG ]             },
[ceph-node01][DEBUG ]             {
[ceph-node01][DEBUG ]               "addr": "10.17.3.14:6789",
[ceph-node01][DEBUG ]               "nonce": 0,
[ceph-node01][DEBUG ]               "type": "v1"
[ceph-node01][DEBUG ]             }
[ceph-node01][DEBUG ]           ]
[ceph-node01][DEBUG ]         },
[ceph-node01][DEBUG ]         "rank": 0,
[ceph-node01][DEBUG ]         "weight": 0
[ceph-node01][DEBUG ]       }
[ceph-node01][DEBUG ]     ],
[ceph-node01][DEBUG ]     "removed_ranks: ": "",
[ceph-node01][DEBUG ]     "stretch_mode": false,
[ceph-node01][DEBUG ]     "tiebreaker_mon": ""
[ceph-node01][DEBUG ]   },
[ceph-node01][DEBUG ]   "name": "ceph-node01",
[ceph-node01][DEBUG ]   "outside_quorum": [],
[ceph-node01][DEBUG ]   "quorum": [
[ceph-node01][DEBUG ]     0
[ceph-node01][DEBUG ]   ],
[ceph-node01][DEBUG ]   "quorum_age": 77,
[ceph-node01][DEBUG ]   "rank": 0,
[ceph-node01][DEBUG ]   "state": "leader",
[ceph-node01][DEBUG ]   "stretch_mode": false,
[ceph-node01][DEBUG ]   "sync_provider": []
[ceph-node01][DEBUG ] }
[ceph-node01][DEBUG ] ********************************************************************************
[ceph-node01][INFO  ] monitor: mon.ceph-node01 is running
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-node01 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpWWGCyS
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] get remote short hostname
[ceph-node01][DEBUG ] fetch remote file
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-node01.asok mon_status
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.admin
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mds
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-mgr
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-osd
[ceph-node01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpWWGCyS

Verify the generated files

cephadmin@ceph-node01:/etc/ceph-cluster$ ls /etc/ceph/
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap  tmpAi40Po  tmpSILILE  tmpwq6jcL
cephadmin@ceph-node01:/etc/ceph-cluster$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-mgr.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

Distribute the ceph admin keyring to each machine

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff99fbb0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01', 'ceph-node02', 'ceph-node03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff9a0d5c50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node02
The authenticity of host 'ceph-node02 (10.17.3.15)' can't be established.
ECDSA key fingerprint is SHA256:G3fJV27edH5tu4HNY0ArPdlNDPO9eaIEQKOdd1MAcdo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node02' (ECDSA) to the list of known hosts.
root@ceph-node02's password:
root@ceph-node02's password:
[ceph-node02][DEBUG ] connected to host: ceph-node02
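After the push, the keyring under /etc/ceph is readable only by root. To let the cephadmin user run ceph commands without sudo, a read permission is commonly granted on each node (a sketch, assuming the acl package is acceptable; a plain chmod 644 also works but is less restrictive):

sudo apt install -y acl
sudo setfacl -m u:cephadmin:r /etc/ceph/ceph.client.admin.keyring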

Deploy the ceph-mgr node; node02 and node03 will be added later (see the sketch after the process listing below)

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy mgr create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy mgr create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-node01', 'ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9f0271e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff9f11b350>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-node01:ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] mgr keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-node01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node01.service → /lib/systemd/system/ceph-mgr@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-mgr@ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target

Verify on the corresponding node that the mgr service is running

cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep ceph-
root        4243       1  0 17:36 ?        00:00:00 /usr/bin/python2.7 /usr/bin/ceph-crash
ceph       11656       1  0 18:39 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
root       11707    5223  0 18:39 pts/1    00:00:00 tail -f /var/log/ceph/ceph-mon.ceph-node01.log
ceph       12301       1  9 18:45 ?        00:00:05 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
cephadm+   12529    9641  0 18:46 pts/0    00:00:00 grep --color=auto ceph-
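As noted above, node02 and node03 can later be added as standby managers; the likely command is (sketch):

sudo ceph-deploy mgr create ceph-node02 ceph-node03
# afterwards, "ceph -s" should report one active mgr plus two standbys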

Push the cluster admin credentials (shown here again for node01; node02 and node03 were covered by the earlier admin push)

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy admin ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy admin ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff83ba20f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff83cbcc50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Initialize the storage nodes, i.e. the machines that actually store data and host the bulk of the cluster's OSDs

# run for every storage node
ceph-deploy install --release pacific ceph-node02
ceph-deploy install --release pacific ceph-node03
root@ceph-node01:/etc/ceph-cluster# ceph-deploy install --release pacific ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy install --release pacific ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffa437daf0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0xffffa4439c50>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-node01']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : pacific
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version pacific on cluster ceph hosts ceph-node01
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-node01 ...
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][INFO  ] installing Ceph on ceph-node01
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q update
[ceph-node01][DEBUG ] Hit:1 https://mirrors.ustc.edu.cn/kubernetes/core:/stable:/v1.30/deb  InRelease
[ceph-node01][DEBUG ] Hit:2 http://ports.ubuntu.com/ubuntu-ports bionic InRelease
[ceph-node01][DEBUG ] Hit:3 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease
[ceph-node01][DEBUG ] Hit:4 https://download.ceph.com/debian-pacific bionic InRelease
[ceph-node01][DEBUG ] Hit:5 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease
[ceph-node01][DEBUG ] Hit:6 http://ports.ubuntu.com/ubuntu-ports bionic-security InRelease
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https
[ceph-node01][DEBUG ] Reading package lists...
[ceph-node01][DEBUG ] Building dependency tree...
[ceph-node01][DEBUG ] Reading state information...
[ceph-node01][DEBUG ] ca-certificates is already the newest version (20230311ubuntu0.18.04.1).
[ceph-node01][DEBUG ] apt-transport-https is already the newest version (1.6.17).
[ceph-node01][DEBUG ] The following packages were automatically installed and are no longer required:
[ceph-node01][DEBUG ]   formencode-i18n libpython2.7 python-asn1crypto python-bcrypt python-bs4
[ceph-node01][DEBUG ]   python-ceph-argparse python-certifi python-cffi-backend python-chardet
[ceph-node01][DEBUG ]   python-cherrypy3 python-cryptography python-dnspython python-enum34
[ceph-node01][DEBUG ]   python-formencode python-idna python-ipaddress python-jinja2 python-logutils
[ceph-node01][DEBUG ]   python-mako python-markupsafe python-openssl python-paste python-pastedeploy
[ceph-node01][DEBUG ]   python-pecan python-pkg-resources python-prettytable python-rbd
[ceph-node01][DEBUG ]   python-requests python-simplegeneric python-simplejson python-singledispatch
[ceph-node01][DEBUG ]   python-six python-tempita python-urllib3 python-waitress python-webob
[ceph-node01][DEBUG ]   python-webtest python-werkzeug
[ceph-node01][DEBUG ] Use 'apt autoremove' to remove them.
[ceph-node01][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 327 not upgraded.
[ceph-node01][INFO  ] Running command: wget -O release.asc https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] --2024-10-08 19:02:09--  https://download.ceph.com/keys/release.asc
[ceph-node01][WARNIN] Resolving download.ceph.com (download.ceph.com)... 158.69.68.124, 2607:5300:201:2000::3:58a1
[ceph-node01][WARNIN] Connecting to download.ceph.com (download.ceph.com)|158.69.68.124|:443... connected.
[ceph-node01][WARNIN] HTTP request sent, awaiting response... 200 OK
[ceph-node01][WARNIN] Length: 1645 (1.6K) [application/octet-stream]
[ceph-node01][WARNIN] Saving to: ‘release.asc’
[ceph-node01][WARNIN]
[ceph-node01][WARNIN]      0K .                                                     100%  439M=0s
[ceph-node01][WARNIN]
[ceph-node01][WARNIN] 2024-10-08 19:02:10 (439 MB/s) - ‘release.asc’ saved [1645/1645]

List node disks and initialize them

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node01.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node02.xx.local
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk list ceph-node03.xx.local

Wipe the disks

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node01 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node02 /dev/vdb
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy disk zap ceph-node03 /dev/vdb

Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy disk zap ceph-node01 /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff93fe9f50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0xffff940514d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/vdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/vdb on ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph-node01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph-node01][DEBUG ] zeroing last few blocks of device
[ceph-node01][DEBUG ] find the location of an executable
[ceph-node01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/vdb
[ceph-node01][WARNIN] --> Zapping: /dev/vdb
[ceph-node01][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node01][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/vdb bs=1M count=10 conv=fsync
[ceph-node01][WARNIN]  stderr: 10+0 records in
[ceph-node01][WARNIN] 10+0 records out
[ceph-node01][WARNIN]  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0246339 s, 426 MB/s
[ceph-node01][WARNIN] --> Zapping successful for: <Raw Device: /dev/vdb>

Add the OSDs. Here the data, the metadata (block.db), and the WAL (block.wal) are all placed on the same device; see the sketch after the log below for splitting them onto separate devices.

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff811c2aa0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node01.xx.local
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff81225450>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-node01.xx.local][DEBUG ] connected to host: ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] detect platform information from remote host
[ceph-node01.xx.local][DEBUG ] detect machine type
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node01.xx.local
[ceph-node01.xx.local][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01.xx.local][WARNIN] osd keyring does not exist yet, creating one
[ceph-node01.xx.local][DEBUG ] create a keyring file
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: vgcreate --force --yes ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9 /dev/vdb
[ceph-node01.xx.local][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-node01.xx.local][WARNIN]  stdout: Volume group "ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9" successfully created
[ceph-node01.xx.local][WARNIN] Running command: lvcreate --yes -l 262143 -n osd-block-66fd9200-a35e-4a36-85a2-a512b09826de ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9
[ceph-node01.xx.local][WARNIN]  stdout: Logical volume "osd-block-66fd9200-a35e-4a36-85a2-a512b09826de" created.
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-node01.xx.local][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] --> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -s /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-node01.xx.local][WARNIN] 2024-10-08T19:14:26.763+0800 ffff8e3ea1f0 -1 AuthRegistry(0xffff8805c4d0) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-node01.xx.local][WARNIN]  stderr: got monmap epoch 1
[ceph-node01.xx.local][WARNIN] --> Creating keyring file for osd.0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 66fd9200-a35e-4a36-85a2-a512b09826de --setuser ceph --setgroup ceph
[ceph-node01.xx.local][WARNIN]  stderr: 2024-10-08T19:14:27.315+0800 ffffb41ab010 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-node01.xx.local][WARNIN] Running command: /bin/ln -snf /dev/ceph-cf974d8c-8a5a-47ae-beb6-c2d6902df8f9/osd-block-66fd9200-a35e-4a36-85a2-a512b09826de /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-66fd9200-a35e-4a36-85a2-a512b09826de.service → /lib/systemd/system/ceph-volume@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[ceph-node01.xx.local][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
[ceph-node01.xx.local][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-node01.xx.local][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-node01.xx.local][INFO  ] checking OSD status...
[ceph-node01.xx.local][DEBUG ] find the location of an executable
[ceph-node01.xx.local][INFO  ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node01.xx.local is now ready for osd use.
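Had faster devices been available, block.db and block.wal could be placed on them instead of co-locating everything, using the block_db/block_wal options visible in the option dump above (a sketch with hypothetical NVMe partitions):

sudo ceph-deploy osd create ceph-node01.xx.local --data /dev/vdb \
    --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2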

Verify

root@ceph-node01:/etc/ceph-cluster# ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            OSD count 2 < osd_pool_default_size 3
  
  services:
    mon: 1 daemons, quorum ceph-node01 (age 37m)
    mgr: ceph-node01(active, since 31m)
    osd: 2 osds: 2 up (since 21s), 2 in (since 30s)
  
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   580 MiB used, 2.0 TiB / 2.0 TiB avail
    pgs:
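Both warnings are expected at this stage: the OSD count warning clears once the third OSD is added, and the insecure global_id reclaim warning is standard on fresh Pacific clusters. Once all clients are up to date it can be disabled with the stock Pacific remediation:

ceph config set mon auth_allow_insecure_global_id_reclaim false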

Check monitor quorum status

root@ceph-node01:~# ceph quorum_status --format json-pretty
 
{
    "election_epoch": 20,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-node01",
        "ceph-node02",
        "ceph-node03"
    ],
    "quorum_leader_name": "ceph-node01",
    "quorum_age": 77,
    "features": {
        "quorum_con": "4540138314316775423",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus",
            "pacific",
            "elector-pinging"
        ]
    },
    "monmap": {
        "epoch": 3,
        "fsid": "5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5",
        "modified": "2024-10-08T11:29:51.477381Z",
        "created": "2024-10-08T10:12:42.715558Z",
        "min_mon_release": 16,
        "min_mon_release_name": "pacific",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "tiebreaker_mon": "",
        "removed_ranks: ": "",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus",
                "pacific",
                "elector-pinging"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph-node01",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.14:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.14:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.14:6789/0",
                "public_addr": "10.17.3.14:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph-node02",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.15:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.15:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.15:6789/0",
                "public_addr": "10.17.3.15:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 2,
                "name": "ceph-node03",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.17.3.16:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.17.3.16:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.17.3.16:6789/0",
                "public_addr": "10.17.3.16:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}
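
For a more compact view of the same quorum membership, ceph mon stat and ceph mon dump print the monmap summary without the full JSON:

ceph mon stat
ceph mon dump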

Deploying ceph-dashboard

# Check whether the dashboard module is installed
root@ceph-node01:~# dpkg -l |grep ceph-mgr
ii  ceph-mgr                              16.2.15-1bionic                    arm64        manager for the ceph distributed storage system
ii  ceph-mgr-modules-core                 16.2.15-1bionic                    all          ceph manager modules which are always enabled
root@ceph-node01:~# apt install ceph-mgr-dashboard
# List installed mgr modules and which ones can be enabled
root@ceph-node01:~# ceph mgr module ls > ceph-mgr-module.json
# Enable the dashboard module
root@ceph-node01:~# ceph mgr module enable dashboard
# Disable SSL
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ssl false
# Set the listen address
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_addr 10.17.3.14
# Set the listen port
root@ceph-node01:~# ceph config set mgr mgr/dashboard/ceph-node01/server_port 9009
# Verify the port is listening; if it is not, restart the mgr service
root@ceph-node01:~# systemctl restart ceph-mgr@ceph-node01.service
root@ceph-node01:~# systemctl status ceph-mgr@ceph-node01.service
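
To confirm the dashboard is actually up after the restart, check that the configured port is listening on the mgr node (ss and curl here are generic checks, not part of ceph-deploy):

ss -ltnp | grep 9009
curl -I http://10.17.3.14:9009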

Open port 9009 in the security group, then test from a browser.

Set the ceph-dashboard password

root@ceph-node01:/etc/ceph-cluster# echo "cephdashboard" > ceph-dashboard-passwd.txt
root@ceph-node01:/etc/ceph-cluster# cat ceph-dashboard-passwd.txt
cephdashboard
root@ceph-node01:/etc/ceph-cluster# ceph dashboard set-login-credentials ceph -i ceph-dashboard-passwd.txt
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
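
As the warning says, set-login-credentials is deprecated in Pacific; the ac-user-* equivalent for updating an existing user's password would be along these lines:

ceph dashboard ac-user-set-password ceph -i ceph-dashboard-passwd.txt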

Configure the radosgw object storage gateway

apt-cache madison radosgw
cd /etc/ceph-cluster/

cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/local/bin/ceph-deploy --overwrite-conf rgw create ceph-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph-node01', 'rgw.ceph-node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff890a19b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0xffff89142950>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-node01:rgw.ceph-node01
[ceph-node01][DEBUG ] connected to host: ceph-node01
[ceph-node01][DEBUG ] detect platform information from remote host
[ceph-node01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Ubuntu 18.04 bionic
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-node01
[ceph-node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node01][WARNIN] rgw keyring does not exist yet, creating one
[ceph-node01][DEBUG ] create a keyring file
[ceph-node01][DEBUG ] create path recursively if it doesn't exist
[ceph-node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-node01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-node01/keyring
[ceph-node01][INFO  ] Running command: systemctl enable ceph-radosgw@rgw.ceph-node01
[ceph-node01][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-node01.service → /lib/systemd/system/ceph-radosgw@.service.
[ceph-node01][INFO  ] Running command: systemctl start ceph-radosgw@rgw.ceph-node01
[ceph-node01][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph-node01 and default port 7480
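
The gateway listens on port 7480 by default. If another port is preferred, it can be set per instance in /etc/ceph/ceph.conf and the daemon restarted (a sketch, assuming the beast frontend that Pacific uses by default; port 8080 is only an example):

[client.rgw.ceph-node01]
rgw_frontends = "beast port=8080"

systemctl restart ceph-radosgw@rgw.ceph-node01.service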

Verify the service

cephadmin@ceph-node01:/etc/ceph-cluster$ systemctl status ceph-radosgw@rgw.ceph-node01.service
● ceph-radosgw@rgw.ceph-node01.service - Ceph rados gateway
   Loaded: loaded (/lib/systemd/system/ceph-radosgw@.service; indirect; vendor preset: enabled)
   Active: active (running) since Wed 2024-10-09 10:31:20 CST; 57s ago
 Main PID: 20282 (radosgw)
    Tasks: 602
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-node01.service
           └─20282 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$ ps -ef |grep radosgw
ceph       20282       1  0 10:31 ?        00:00:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-node01 --setuser ceph --setgroup ceph
cephadm+   21020   20167  0 10:32 pts/0    00:00:00 grep --color=auto radosgw

Verify rgw from a client

cephadmin@ceph-node01:/etc/ceph-cluster$ curl http://10.17.3.14:7480/
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$
cephadmin@ceph-node01:/etc/ceph-cluster$ sudo ceph -s
  cluster:
    id:     5a6fdfb7-81a1-40f6-97b7-c92f96de9ac5
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
  
  services:
    mon: 3 daemons, quorum ceph-node01,ceph-node02,ceph-node03 (age 39m)
    mgr: ceph-node01(active, since 15h)
    osd: 3 osds: 3 up (since 15h), 3 in (since 15h)
    rgw: 1 daemon active (1 hosts, 1 zones)
  
  data:
    pools:   5 pools, 129 pgs
    objects: 195 objects, 4.9 KiB
    usage:   872 MiB used, 3.0 TiB / 3.0 TiB avail
    pgs:     129 active+clean
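
The 5 pools reported here were created automatically when the gateway started; they can be listed to confirm (expect .rgw.root plus several default.rgw.* pools):

ceph osd pool ls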

S3 Browser client

Download address

https://s3browser.com/
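
Before S3 Browser can connect, an RGW user with an access/secret key pair has to exist; a minimal sketch (the uid and display name below are placeholders, not from this deployment):

radosgw-admin user create --uid=s3user --display-name="s3 test user"

The access_key and secret_key fields in the JSON this prints are what S3 Browser asks for, together with the endpoint http://10.17.3.14:7480.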
