Cross-cluster migration of Ceph RGW pools

1. Migrating Ceph RGW pools across clusters

What I am migrating here are the RGW pools.

Old environment

[root@ceph-1 data]# yum install s3cmd -y
[root@ceph-1 ~]# ceph config dump
WHO   MASK LEVEL    OPTION                                VALUE                                    RO 
  mon      advanced auth_allow_insecure_global_id_reclaim false                                       
  mgr      advanced mgr/dashboard/ALERTMANAGER_API_HOST   http://20.3.10.91:9093                   *  
  mgr      advanced mgr/dashboard/GRAFANA_API_PASSWORD    admin                                    *  
  mgr      advanced mgr/dashboard/GRAFANA_API_SSL_VERIFY  false                                    *  
  mgr      advanced mgr/dashboard/GRAFANA_API_URL         https://20.3.10.93:3000                  *  
  mgr      advanced mgr/dashboard/GRAFANA_API_USERNAME    admin                                    *  
  mgr      advanced mgr/dashboard/PROMETHEUS_API_HOST     http://20.3.10.91:9092                   *  
  mgr      advanced mgr/dashboard/RGW_API_ACCESS_KEY      9UYWS54KEGHPTXIZK61J                     *  
  mgr      advanced mgr/dashboard/RGW_API_HOST            20.3.10.91                               *  
  mgr      advanced mgr/dashboard/RGW_API_PORT            8080                                     *  
  mgr      advanced mgr/dashboard/RGW_API_SCHEME          http                                     *  
  mgr      advanced mgr/dashboard/RGW_API_SECRET_KEY      MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8 *  
  mgr      advanced mgr/dashboard/RGW_API_USER_ID         ceph-dashboard                           *  
  mgr      advanced mgr/dashboard/ceph-1/server_addr      20.3.10.91                               *  
  mgr      advanced mgr/dashboard/ceph-2/server_addr      20.3.10.92                               *  
  mgr      advanced mgr/dashboard/ceph-3/server_addr      20.3.10.93                               *  
  mgr      advanced mgr/dashboard/server_port             8443                                     *  
  mgr      advanced mgr/dashboard/ssl                     true                                     *  
  mgr      advanced mgr/dashboard/ssl_server_port         8443                                     *  
[root@ceph-1 ~]# cat /root/.s3cfg 
[default]
access_key = 9UYWS54KEGHPTXIZK61J
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_max_age = 5
connection_pooling = True
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = 20.3.10.91:8080
host_bucket = 20.3.10.91:8080%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_allow_unordered = False
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_copy_chunk_size_mb = 1024
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
ssl_client_cert_file =
ssl_client_key_file =
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
[root@ceph-1 ~]# s3cmd ls
2024-01-24 10:52  s3://000002
2024-02-01 08:20  s3://000010
2024-01-24 10:40  s3://cloudengine
2024-02-07 02:58  s3://component-000010
2024-01-24 10:52  s3://component-pub
2024-02-27 10:55  s3://deploy-2
2024-01-26 10:53  s3://digital-000002
2024-01-26 11:14  s3://digital-000010
2024-01-29 02:04  s3://docker-000010
2024-01-26 11:46  s3://docker-pub
2024-03-06 11:42  s3://warp-benchmark-bucket


[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd       900 GiB     154 GiB     740 GiB      746 GiB         82.86 
    TOTAL     900 GiB     154 GiB     740 GiB      746 GiB         82.86 
 
POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED      MAX AVAIL 
    cephfs_data                     1      16         0 B           0         0 B          0           0 B 
    cephfs_metadata                 2      16     1.0 MiB          23     4.7 MiB     100.00           0 B 
    .rgw.root                       3      16     3.5 KiB           8     1.5 MiB     100.00           0 B 
    default.rgw.control             4      16         0 B           8         0 B          0           0 B 
    default.rgw.meta                5      16     8.5 KiB          31     5.4 MiB     100.00           0 B 
    default.rgw.log                 6      16      64 KiB         207      64 KiB     100.00           0 B 
    default.rgw.buckets.index       7      16     3.5 MiB         192     3.5 MiB     100.00           0 B 
    default.rgw.buckets.data        8      16     146 GiB      51.55k     440 GiB     100.00           0 B 
    default.rgw.buckets.non-ec      9      16     123 KiB          10     2.0 MiB     100.00           0 B 

Migrating only default.rgw.buckets.data left the bucket information missing, so three pools have to be migrated:

default.rgw.buckets.data: the object data

default.rgw.meta: the user and bucket metadata

default.rgw.buckets.index: the mapping between buckets and their objects
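
If you are not sure which pools belong to RGW, listing the pools first is a quick sanity check (a minimal sketch; pool names can differ between deployments):

# list all pools and keep only the RGW ones
ceph osd pool ls | grep rgw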

1. Export each pool with rados -p <pool_name> export --all <file>

[root@ceph-1 data]# rados -p default.rgw.buckets.data  export  --all   rgwdata
[root@ceph-1 data]# rados -p default.rgw.buckets.index  export   --all   rgwindex
[root@ceph-1 data]# rados -p default.rgw.meta  export   --all   rgwmeta
[root@ceph-1 data]# ls
rgwdata  rgwindex  rgwmeta
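
The export files then have to reach a node of the new cluster. Plain scp is enough; the address 20.3.14.124 and the /data path below come from this article's environment, so adjust both for yours:

# copy the three dump files to the new cluster node
scp rgwdata rgwindex rgwmeta root@20.3.14.124:/data/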

2. Get the user information from the old cluster and record the access_key and secret_key of ceph-dashboard, because the ceph-dashboard user happens to live inside default.rgw.meta

[root@ceph-1 data]# radosgw-admin user list
[
    "registry",
    "ceph-dashboard"
]
[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard
{
    "user_id": "ceph-dashboard",
    "display_name": "Ceph dashboard",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-dashboard",
            "access_key": "9UYWS54KEGHPTXIZK61J",
            "secret_key": "MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": 1638400
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
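
If jq is installed, the two keys can be pulled straight out of the JSON instead of copying them by hand (a small convenience sketch, not required for the migration):

# extract the ceph-dashboard user's S3 credentials
radosgw-admin user info --uid=ceph-dashboard | jq -r '.keys[0].access_key'
radosgw-admin user info --uid=ceph-dashboard | jq -r '.keys[0].secret_key'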

New environment

Switch to the newly built cluster.

[root@ceph-1 data]# ceph -s
  cluster:
    id:     d073f5d6-6b4a-4c87-901b-a0f4694ee878
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
 
  services:
    mon: 1 daemons, quorum ceph-1 (age 46h)
    mgr: ceph-1(active, since 46h)
    mds: cephfs:1 {0=ceph-1=up:active}
    osd: 2 osds: 2 up (since 46h), 2 in (since 8d)
    rgw: 1 daemon active (ceph-1.rgw0)
 
  task status:
 
  data:
    pools:   9 pools, 144 pgs
    objects: 61.83k objects, 184 GiB
    usage:   1.9 TiB used, 3.3 TiB / 5.2 TiB avail
    pgs:     144 active+clean
 
  io:
    client:   58 KiB/s rd, 7 op/s rd, 0 op/s wr

Test whether RGW is usable.

[root@ceph-1 data]# yum install s3cmd -y
[root@ceph-1 data]# ceph config dump
WHO    MASK LEVEL    OPTION                               VALUE                                    RO 
global      advanced mon_warn_on_pool_no_redundancy       false                                       
  mgr       advanced mgr/dashboard/ALERTMANAGER_API_HOST  http://20.3.14.124:9093                  *  
  mgr       advanced mgr/dashboard/GRAFANA_API_PASSWORD   admin                                    *  
  mgr       advanced mgr/dashboard/GRAFANA_API_SSL_VERIFY false                                    *  
  mgr       advanced mgr/dashboard/GRAFANA_API_URL        https://20.3.14.124:3000                 *  
  mgr       advanced mgr/dashboard/GRAFANA_API_USERNAME   admin                                    *  
  mgr       advanced mgr/dashboard/PROMETHEUS_API_HOST    http://20.3.14.124:9092                  *  
  mgr       advanced mgr/dashboard/RGW_API_ACCESS_KEY     9UYWS54KEGHPTXIZK61J                     *  
  mgr       advanced mgr/dashboard/RGW_API_HOST           20.3.14.124                              *  
  mgr       advanced mgr/dashboard/RGW_API_PORT           8090                                     *  
  mgr       advanced mgr/dashboard/RGW_API_SCHEME         http                                     *  
  mgr       advanced mgr/dashboard/RGW_API_SECRET_KEY     MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8 *  
  mgr       advanced mgr/dashboard/RGW_API_USER_ID        ceph-dashboard                           *  
  mgr       advanced mgr/dashboard/ceph-1/server_addr     20.3.14.124                              *  
  mgr       advanced mgr/dashboard/server_port            8443                                     *  
  mgr       advanced mgr/dashboard/ssl                    true                                     *  
  mgr       advanced mgr/dashboard/ssl_server_port        8443                                     *  
[root@ceph-1 data]# s3cmd ls
# create a bucket
[root@ceph-1 data]# s3cmd mb s3://test
Bucket 's3://test/' created
# upload a test file
[root@ceph-1 data]# s3cmd put test.txt s3://test -r
upload: 'test.txt' -> 's3://test/1234'  [1 of 1]
 29498 of 29498   100% in    0s   634.42 KB/s  done
# delete the files in the bucket
[root@ceph-1 data]# s3cmd del s3://test --recursive --force
delete: 's3://test/1234'
# delete the bucket
[root@ceph-1 data]# s3cmd rb s3://test --recursive --force
Bucket 's3://test/' removed

[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    ssd       5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80 
    TOTAL     5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80 
 
POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    cephfs_data                     1      16      36 GiB       9.33k      36 GiB      1.16       3.0 TiB 
    cephfs_metadata                 2      16     137 KiB          23     169 KiB         0       3.0 TiB 
    .rgw.root                       3      16     1.2 KiB           4      16 KiB         0       3.0 TiB 
    default.rgw.control             4      16         0 B           8         0 B         0       3.0 TiB 
    default.rgw.meta                5      16     6.5 KiB          32     120 KiB         0       3.0 TiB 
    default.rgw.log                 6      16     1.6 MiB         207     1.6 MiB         0       3.0 TiB 
    default.rgw.buckets.index       7      16         0 B         192         0 B         0       3.0 TiB 
    default.rgw.buckets.data        8      16         0 B      52.04k         0 B      4.52       3.0 TiB 
    default.rgw.buckets.non-ec      9      16         0 B           0         0 B         0       3.0 TiB 


1. Transfer the files exported above to the current cluster

[root@ceph-1 data]# ls
rgwdata  rgwindex  rgwmeta
[root@ceph-1 data]# rados -p default.rgw.buckets.data import rgwdata
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
[root@ceph-1 data]# rados -p default.rgw.buckets.index import rgwindex
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
[root@ceph-1 data]# rados -p default.rgw.meta import rgwmeta
Importing pool
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000078:head#
***Overwrite*** #-9223372036854775808:00000000:gc::gc.30:head#
***Overwrite*** #-9223372036854775808:00000000:::obj_delete_at_hint.0000000070:head#
..............
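
Before relying on the imported pools it is worth comparing object counts with the source cluster; run the same loop on both sides and the numbers should line up with the OBJECTS column of ceph df (counts can drift if clients are still writing):

# count objects per pool to compare old and new cluster
for pool in default.rgw.buckets.data default.rgw.buckets.index default.rgw.meta; do
    echo -n "$pool: "
    rados -p "$pool" ls | wc -l
done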

[root@ceph-1 data]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    ssd       5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80 
    TOTAL     5.2 TiB     3.3 TiB     1.9 TiB      1.9 TiB         36.80 
 
POOLS:
    POOL                           ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL 
    cephfs_data                     1      16      36 GiB       9.33k      36 GiB      1.16       3.0 TiB 
    cephfs_metadata                 2      16     137 KiB          23     169 KiB         0       3.0 TiB 
    .rgw.root                       3      16     1.2 KiB           4      16 KiB         0       3.0 TiB 
    default.rgw.control             4      16         0 B           8         0 B         0       3.0 TiB 
    default.rgw.meta                5      16     6.5 KiB          32     120 KiB         0       3.0 TiB 
    default.rgw.log                 6      16     1.6 MiB         207     1.6 MiB         0       3.0 TiB 
    default.rgw.buckets.index       7      16         0 B         192         0 B         0       3.0 TiB 
    default.rgw.buckets.data        8      16     147 GiB      52.04k     147 GiB      4.52       3.0 TiB 
    default.rgw.buckets.non-ec      9      16         0 B           0         0 B         0       3.0 TiB 
[root@ceph-1 data]# 

2. After the import, the dashboard web page misbehaved and s3cmd stopped working.

1. The RGW_API_ACCESS_KEY and RGW_API_SECRET_KEY shown by ceph config dump no longer match the output of radosgw-admin user info --uid=ceph-dashboard.

radosgw-admin user info --uid=ceph-dashboard now returns the old cluster's access_key and secret_key, because they were imported along with default.rgw.meta.

Feed the imported access_key and secret_key from radosgw-admin user info --uid=ceph-dashboard to the dashboard:

[root@ceph-1 data]# radosgw-admin user info --uid=ceph-dashboard
[root@ceph-1 data]# echo 9UYWS54KEGHPTXIZK61J > access_key
[root@ceph-1 data]# echo MGaia4UnZhKO6DRRtRu89iKwUJjZ0KVS8IgjA2p8 > secret_key
[root@ceph-1 data]# ceph dashboard set-rgw-api-access-key -i access_key
[root@ceph-1 data]# ceph dashboard set-rgw-api-secret-key -i secret_key

The two now match.

The dashboard web page is back to normal.

Configure s3cmd with the new access_key and secret_key.
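
Concretely, only the endpoint lines of /root/.s3cfg need to change on the new cluster; the keys stay the ceph-dashboard ones. The values 20.3.14.124 and 8090 are the RGW_API_HOST and RGW_API_PORT from the ceph config dump above, and the sed sketch assumes the file still contains the defaults shown earlier:

# point s3cmd at the new cluster's RGW endpoint
sed -i 's/^host_base = .*/host_base = 20.3.14.124:8090/' /root/.s3cfg
sed -i 's/^host_bucket = .*/host_bucket = 20.3.14.124:8090%(bucket)/' /root/.s3cfg
# the imported buckets should now be visible
s3cmd ls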

After the migration the registry could not be used: failed to retrieve info about container registry (HTTP Error: 301: 301 Moved Permanently). This is the 404 error caused by the bucket's zone group not matching the RGW cluster's zone group, and it is resolved by setting the bucket metadata with the radosgw-admin metadata command. Check the RGW log:

[root@ceph-1 data]# vi /var/log/ceph/ceph-rgw-ceph-1.rgw0.log

Checking the zonegroup shows the mismatch.

radosgw-admin zonegroup list reports default_info as

79ee051e-ac44-4677-b011-c7f3ad0d1d75

but the zonegroup inside radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 is

3ea718b5-ddfe-4641-8f80-53152066e03e

[root@ceph-1 data]# radosgw-admin zonegroup list
{
    "default_info": "79ee051e-ac44-4677-b011-c7f3ad0d1d75",
    "zonegroups": [
        "default"
    ]
}

[root@ceph-1 data]# radosgw-admin metadata list bucket.instance
[
    "docker-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.4",
    "docker-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.3",
    "digital-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.1",
    "digital-000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.2",
    "000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.3",
    "component-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.1",
    "cloudengine:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.2",
    "deploy-2:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.5",
    "warp-benchmark-bucket:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.6",
    "registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.2",
    "component-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.4"
]
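
To see how many bucket instances still carry the old zonegroup, a loop over the metadata list works (a sketch, assuming jq is installed):

# print every bucket instance together with the zonegroup recorded on it
for b in $(radosgw-admin metadata list bucket.instance | jq -r '.[]'); do
    zg=$(radosgw-admin metadata get "bucket.instance:$b" | jq -r '.data.bucket_info.zonegroup')
    echo "$b  $zg"
done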

[root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1
{
    "key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "ver": {
        "tag": "_CMGeYR69ptByuWSkghrYCln",
        "ver": 1
    },
    "mtime": "2024-03-08 07:42:50.397826Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "registry",
                "marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2024-01-24 10:36:55.798976Z",
            "owner": "registry",
            "flags": 0,
            "zonegroup": "3ea718b5-ddfe-4641-8f80-53152066e03e",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": true,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 16,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"
            }
        ]
    }
}

Solution

1. Dump the registry bucket-instance metadata to a file

[root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 > conf.json

2. Get the current cluster's zonegroup

[root@ceph-1 data]# radosgw-admin zonegroup list
{
    "default_info": "79ee051e-ac44-4677-b011-c7f3ad0d1d75",
    "zonegroups": [
        "default"
    ]
}

3. Modify the zonegroup in conf.json

The result:

[root@ceph-1 data]# cat conf.json
{
    "key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "ver": {
        "tag": "_CMGeYR69ptByuWSkghrYCln",
        "ver": 1
    },
    "mtime": "2024-03-08 07:42:50.397826Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "registry",
                "marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2024-01-24 10:36:55.798976Z",
            "owner": "registry",
            "flags": 0,
            "zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75",  # replaced with the default_info from radosgw-admin zonegroup list
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": true,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 16,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"
            }
        ]
    }
}

4. Import the modified metadata

[root@ceph-1 data]# radosgw-admin metadata  put bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1 < conf.json
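
Every bucket imported from the old cluster carries the same stale zonegroup, so the get/edit/put cycle above can be scripted for all of them instead of editing each conf.json by hand. A sketch under the same assumptions (jq installed; /tmp/bucket.json is just a scratch file):

# rewrite the zonegroup of every bucket instance to the current default_info
NEW_ZG=$(radosgw-admin zonegroup list | jq -r '.default_info')
for b in $(radosgw-admin metadata list bucket.instance | jq -r '.[]'); do
    radosgw-admin metadata get "bucket.instance:$b" \
        | jq --arg zg "$NEW_ZG" '.data.bucket_info.zonegroup = $zg' > /tmp/bucket.json
    radosgw-admin metadata put "bucket.instance:$b" < /tmp/bucket.json
done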

5. Check that "zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75" is now consistent with the current cluster.

[root@ceph-1 data]# radosgw-admin metadata list bucket.instance
[
    "docker-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.4",
    "docker-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.3",
    "digital-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.1",
    "digital-000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.2",
    "000002:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.3",
    "component-pub:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.1",
    "cloudengine:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.2",
    "deploy-2:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.5",
    "warp-benchmark-bucket:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.6",
    "registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4801.2",
    "component-000010:204e1689-81f3-41e6-a487-8a0cfe918e2e.4743.4"
]

[root@ceph-1 data]# radosgw-admin metadata get bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1
{
    "key": "bucket.instance:registry:204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
    "ver": {
        "tag": "_CMGeYR69ptByuWSkghrYCln",
        "ver": 1
    },
    "mtime": "2024-03-08 07:42:50.397826Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "registry",
                "marker": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "bucket_id": "204e1689-81f3-41e6-a487-8a0cfe918e2e.4772.1",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2024-01-24 10:36:55.798976Z",
            "owner": "registry",
            "flags": 0,
            "zonegroup": "79ee051e-ac44-4677-b011-c7f3ad0d1d75",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": false,
                "check_on_raw": true,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "num_shards": 16,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgKTAAAAAwIYAAAACAAAAHJlZ2lzdHJ5CAAAAHJlZ2lzdHJ5BANvAAAAAQEAAAAIAAAAcmVnaXN0cnkPAAAAAQAAAAgAAAByZWdpc3RyeQUDPAAAAAICBAAAAAAAAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAICBAAAAA8AAAAIAAAAcmVnaXN0cnkAAAAAAAAAAAAAAAAAAAAA"
            }
        ]
    }
}

6. Restart the registry; problem solved.
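
As a final check from the storage side, the bucket should answer admin queries again (the bucket name is from this environment):

# confirm the registry bucket is reachable and its metadata is consistent
radosgw-admin bucket stats --bucket=registry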
