
005 Ceph Configuration Files and User Management

Published: 2025/5/22, by 豆豆

1. Ceph Configuration Files

A Ceph configuration file can configure every daemon in the storage cluster, or all daemons of a given type. To configure a group of daemons, the settings must appear under the section those daemons read. By default, both Ceph servers and clients read their configuration from /etc/ceph/ceph.conf.

If you change any parameter, /etc/ceph/ceph.conf must be kept identical on every node, clients included.

ceph.conf uses an INI-based file format with multiple sections covering Ceph daemon and client configuration. Each section has a name defined by a [name] header, followed by one or more key = value parameters.

[root@ceph2 ceph]# cat ceph.conf
[global]                  # settings shared by every daemon that reads the file, clients included
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d    # the same ID that "ceph -s" reports
mon initial members = ceph2,ceph3,ceph4        # the monitors defined at initial install; they must be up when the cluster starts
mon host = 172.25.250.11,172.25.250.12,172.25.250.13
public network = 172.25.250.0/24
cluster network = 172.25.250.0/24
[osd]                     # applies to every ceph-osd daemon in the cluster and overrides the same option in [global]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 5120

Note: configuration files use # and ; for comments. Spaces, underscores, and hyphens are interchangeable separators in parameter names, so osd journal size, osd_journal_size, and osd-journal-size are all valid and equivalent.
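Because the format is INI-based, standard INI tooling can read it, and the separator equivalence can be sketched with a small normalizer. This is an illustrative helper, not Ceph's actual parser:

```python
import configparser

def normalize_option(name: str) -> str:
    """Ceph treats spaces, underscores, and hyphens in option names
    as equivalent; normalize to underscores for lookup."""
    return name.strip().lower().replace("-", "_").replace(" ", "_")

# All three spellings collapse to the same canonical name.
assert normalize_option("osd journal size") == "osd_journal_size"
assert normalize_option("osd-journal-size") == "osd_journal_size"
assert normalize_option("osd_journal_size") == "osd_journal_size"

# A minimal ceph.conf fragment parsed with Python's stock INI parser.
sample = """
[global]
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d

[osd]
osd journal size = 5120
"""
cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg["osd"]["osd journal size"])   # -> 5120
```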

1.1 OSD overview

When a raw disk is handed to Ceph, it is formatted as XFS and split into two partitions: a data partition and a journal partition.

[root@ceph2 ceph]# fdisk -l

Mapping between OSD IDs and disks:

[root@ceph2 ceph]# df -hT

Ceph's configuration directory and working directory are /etc/ceph and /var/lib/ceph respectively:

[root@ceph2 ceph]# ceph osd tree

2. Deleting a Storage Pool

2.1 Edit the configuration file

First set the parameter mon_allow_pool_delete to true:

[root@ceph1 ceph-ansible]# vim /etc/ceph/ceph.conf
[global]
fsid = 35a91e48-8244-4e96-a7ee-980ab989d20d
mon initial members = ceph2,ceph3,ceph4
mon host = 172.25.250.11,172.25.250.12,172.25.250.13
public network = 172.25.250.0/24
cluster network = 172.25.250.0/24
[osd]
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=2048
osd mount options xfs = noatime,largeio,inode64,swalloc
osd journal size = 5120
[mon]                          # newly added section
mon_allow_pool_delete = true

2.2 Sync to every node

[root@ceph1 ceph-ansible]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

ceph3 | SUCCESS => {"changed": true, "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", "dest": "/etc/ceph/ceph.conf", "failed": false, "gid": 1001, "group": "ceph", "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", "mode": "0644", "owner": "ceph", "size": 500, "src": "/root/.ansible/tmp/ansible-tmp-1552807199.08-216306208753591/source", "state": "file", "uid": 1001}
ceph4 | SUCCESS => {"changed": true, "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", "dest": "/etc/ceph/ceph.conf", "failed": false, "gid": 1001, "group": "ceph", "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", "mode": "0644", "owner": "ceph", "size": 500, "src": "/root/.ansible/tmp/ansible-tmp-1552807199.09-46038387604349/source", "state": "file", "uid": 1001}
ceph2 | SUCCESS => {"changed": true, "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", "dest": "/etc/ceph/ceph.conf", "failed": false, "gid": 1001, "group": "ceph", "md5sum": "8415ae9d959d31fdeb23b06ea7f61b1b", "mode": "0644", "owner": "ceph", "size": 500, "src": "/root/.ansible/tmp/ansible-tmp-1552807199.04-33302205115898/source", "state": "file", "uid": 1001}
ceph1 | SUCCESS => {"changed": false, "checksum": "18ad6b3743d303bdd07b8655be547de35f9b4e55", "failed": false, "gid": 1001, "group": "ceph", "mode": "0644", "owner": "ceph", "path": "/etc/ceph/ceph.conf", "size": 500, "state": "file", "uid": 1001}

Alternatively, set the option in the ansible group variables:

[root@ceph1 ceph-ansible]# vim /usr/share/ceph-ansible/group_vars/all.yml

Then re-run the playbook:
[root@ceph1 ceph-ansible]# ansible-playbook site.yml

Changes to the configuration file do not take effect immediately; the relevant daemons must be restarted, for example all OSDs or all monitors.

[root@ceph2 ceph]# cat /etc/ceph/ceph.conf

The sync succeeded.

2.3 Restart the service on each node

[root@ceph2 ceph]# systemctl restart ceph-mon@serverc

or:

[root@ceph2 ceph]# systemctl restart ceph-mon.target

You can also restart them all at once with ansible:

[root@ceph1 ceph-ansible]# ansible mons -m shell -a 'systemctl restart ceph-mon.target'   # never do this in production
ceph2 | SUCCESS | rc=0 >>
ceph4 | SUCCESS | rc=0 >>
ceph3 | SUCCESS | rc=0 >>

2.4 Delete the pool

[root@ceph2 ceph]# ceph osd pool ls
testpool
EC-pool
[root@ceph2 ceph]# ceph osd pool delete EC-pool EC-pool --yes-i-really-really-mean-it
pool 'EC-pool' removed
[root@ceph2 ceph]# ceph osd pool ls
testpool

3. Modifying the Configuration File

3.1 Temporarily change a setting

[root@ceph2 ceph]# ceph tell mon.* injectargs '--mon_osd_nearfull_ratio 0.85'   # the output says a restart may be needed, but the change is already in effect
mon.ceph2: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart)
mon.ceph3: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart)
mon.ceph4: injectargs:mon_osd_nearfull_ratio = '0.850000' (not observed, change may require restart)
[root@ceph2 ceph]# ceph tell mon.* injectargs '--mon_osd_full_ratio 0.95'
mon.ceph2: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart)
mon.ceph3: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart)
mon.ceph4: injectargs:mon_osd_full_ratio = '0.950000' (not observed, change may require restart)
[root@ceph2 ceph]# ceph daemon osd.0 config show|grep nearfull
    "mon_osd_nearfull_ratio": "0.850000",
[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep mon_osd_full_ratio
    "mon_osd_full_ratio": "0.950000",

3.2 Metavariables

Metavariables are variables built into Ceph; they can be used to simplify the ceph.conf file:

$cluster: the name of the Ceph storage cluster. Defaults to ceph and is defined in /etc/sysconfig/ceph. For example, the default value of log_file is /var/log/ceph/$cluster-$name.log; after expansion it becomes /var/log/ceph/ceph-mon.ceph-node1.log.

$type: the daemon type. Monitors use mon, OSDs use osd, metadata servers use mds, managers use mgr, and client applications use client. If pid_file is set in [global] to /var/run/$cluster/$type.$id.pid, it expands to /var/run/ceph/osd.0.pid for the OSD with ID 0, and to /var/run/ceph/mon.ceph-node1.pid for the MON daemon running on ceph-node1.

$id: the daemon instance ID. For the MON on ceph-node1 it is ceph-node1; for osd.1 it is 1. For a client application it is the user name.

$name: the daemon name plus instance ID; shorthand for $type.$id.

$host: the name of the host the daemon runs on.
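The expansion rules above can be sketched in a few lines. This is an illustrative reimplementation, not Ceph's own code:

```python
def expand_metavars(template: str, cluster: str, daemon_type: str,
                    daemon_id: str, host: str) -> str:
    """Substitute Ceph-style metavariables into a config value.
    $name is shorthand for $type.$id; longer variable names are
    substituted first so that $cluster and $name are handled
    before the shorter $id."""
    values = {
        "$cluster": cluster,
        "$name": f"{daemon_type}.{daemon_id}",
        "$type": daemon_type,
        "$id": daemon_id,
        "$host": host,
    }
    for var in sorted(values, key=len, reverse=True):
        template = template.replace(var, values[var])
    return template

# The two examples from the text:
log_file = expand_metavars("/var/log/ceph/$cluster-$name.log",
                           "ceph", "mon", "ceph-node1", "ceph-node1")
print(log_file)  # -> /var/log/ceph/ceph-mon.ceph-node1.log

pid_file = expand_metavars("/var/run/$cluster/$type.$id.pid",
                           "ceph", "osd", "0", "ceph2")
print(pid_file)  # -> /var/run/ceph/osd.0.pid
```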

Aside:

Kill every ceph process at once:

[root@ceph2 ceph]# ps -ef|grep "ceph-"|grep -v grep|awk '{print $2}'|xargs kill -9

3.3 Enabling cephx authentication between components

[root@ceph2 ceph]# ceph daemon osd.3 config show|grep grace

[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep auth

Edit the configuration file:

[root@ceph1 ceph-ansible]# vim /etc/ceph/ceph.conf

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[root@ceph1 ceph-ansible]# ansible all -m copy -a 'src=/etc/ceph/ceph.conf dest=/etc/ceph/ceph.conf owner=ceph group=ceph mode=0644'

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-mon.target'

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-osd.target'

[root@ceph1 ceph-ansible]# ansible mons -m shell -a ' systemctl restart ceph-mgr.target'

[root@ceph2 ceph]# ceph daemon mon.ceph2 config show|grep auth

cephx is now verified.

4. Ceph User Management

User management revolves around a user name and its secret key:

[root@ceph2 ceph]# cat ceph.client.admin.keyring

[client.admin]
	key = AQD7fYxcnG+wCRAARyLuAewyDcGmTPb5wdNRvQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

Without the keyring, commands such as ceph -s fail. For example, on ceph1:

[root@ceph1 ceph-ansible]# ceph -s

2019-03-17 16:19:31.824428 7fc87b255700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 16:19:31.824437 7fc87b255700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 16:19:31.824439 7fc87b255700 0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster

4.1 Ceph authorization

Ceph stores data as objects in storage pools; a Ceph user must be granted access to a pool before it can read or write data.

Ceph describes a user's grants as caps (capabilities), which gate access to MON, OSD, and MDS functionality.

Caps can also restrict access to the data in a particular pool, or to a namespace within a pool.

An administrator assigns the appropriate caps when creating or updating a regular user.

Common capability flags:

r: grants read access. Any query of cluster state requires read permission on the monitor.

w: grants write access. Storing or modifying data on OSDs requires write permission on the OSDs.

x: grants permission to call object methods, both reads and writes, and to perform user authentication operations on the monitor.

class-read: a subset of x that allows calling a class's read methods; commonly used on rbd-type pools.

class-write: a subset of x that allows calling a class's write methods; commonly used on rbd-type pools.

*: grants full access (r, w, and x) to a given pool, plus permission to run administrative commands.

profile osd: authorizes a user to connect to other OSDs or to monitors as an OSD; used for OSD heartbeats and status reporting.

profile mds: authorizes a user to connect to other MDSs or to monitors as an MDS.

profile bootstrap-osd: allows a user to bootstrap OSDs. The ceph-deploy and ceph-disk tools both use the client.bootstrap-osd user, which is allowed to add keys for OSDs during bootstrapping.

profile bootstrap-mds: allows a user to bootstrap an MDS. The ceph-deploy tool uses the client.bootstrap-mds user, which is allowed to add keys for the MDS during bootstrapping.
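To make the grammar of these cap strings concrete, here is a toy parser for OSD capability strings of the form shown above. This is an illustrative sketch only; Ceph's real capability grammar (profiles, namespaces, object prefixes) is considerably richer:

```python
def parse_osd_caps(cap_string: str):
    """Parse an OSD capability string such as
    'allow rw pool=mytestpool,allow rw pool=testpool' into a list of
    (perms, pool) tuples; pool is None for an unrestricted grant.
    Toy illustration of the grammar described in the text."""
    grants = []
    for clause in cap_string.split(","):
        tokens = clause.strip().split()
        assert tokens[0] == "allow", "each clause starts with 'allow'"
        perms, pool = tokens[1], None
        for tok in tokens[2:]:
            if tok.startswith("pool="):
                pool = tok[len("pool="):]
        grants.append((perms, pool))
    return grants

caps = parse_osd_caps("allow rw pool=mytestpool,allow rw pool=testpool")
print(caps)  # -> [('rw', 'mytestpool'), ('rw', 'testpool')]
```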

4.2 Adding users

[root@ceph2 ceph]# ceph auth add client.ning mon 'allow r' osd 'allow rw pool=testpool'
added key for client.ning
# auth add: creates the user and grants caps if it does not exist; prints nothing if the
# user exists with identical caps; errors out if the user exists with different caps
[root@ceph2 ceph]# ceph auth add client.ning mon 'allow r' osd 'allow rw'
Error EINVAL: entity client.ning exists but cap osd does not match
[root@ceph2 ceph]# ceph auth get-or-create client.joy mon 'allow r' osd 'allow rw pool=mytestpool'
[client.joy]
	key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
# get-or-create: creates the user and returns the name and key if it does not exist;
# returns the existing name and key if the caps match; errors out if the caps differ
[root@ceph2 ceph]# cat ceph.client.admin.keyring
[client.admin]
	key = AQD7fYxcnG+wCRAARyLuAewyDcGmTPb5wdNRvQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[root@ceph2 ceph]# ceph auth get-or-create client.joy mon 'allow r' osd 'allow rw'
Error EINVAL: key for client.joy exists but cap osd does not match
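The create/exists/mismatch behavior of ceph auth add versus ceph auth get-or-create can be modeled with a small mock. The class below is hypothetical; it imitates the observable behavior in the transcript above, not the monitor's actual implementation:

```python
import secrets

class AuthDB:
    """Toy model of the monitor's auth table, illustrating the
    'auth add' vs 'auth get-or-create' semantics described above."""
    def __init__(self):
        self.entities = {}   # name -> {"key": ..., "caps": ...}

    def add(self, name, caps):
        ent = self.entities.get(name)
        if ent is None:
            self.entities[name] = {"key": secrets.token_hex(8), "caps": caps}
            return "added key for " + name
        if ent["caps"] == caps:
            return ""        # user exists with identical caps: no output
        raise ValueError(f"EINVAL: entity {name} exists but caps do not match")

    def get_or_create(self, name, caps):
        ent = self.entities.get(name)
        if ent is None:
            ent = {"key": secrets.token_hex(8), "caps": caps}
            self.entities[name] = ent
            return ent["key"]
        if ent["caps"] == caps:
            return ent["key"]   # existing key returned unchanged
        raise ValueError(f"EINVAL: key for {name} exists but caps do not match")

db = AuthDB()
db.add("client.ning", "mon allow r, osd allow rw pool=testpool")
try:
    db.add("client.ning", "mon allow r, osd allow rw")
except ValueError as e:
    print(e)   # caps differ, so the add is rejected, matching the transcript
```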

4.3 Deleting users

[root@ceph2 ceph]# ceph auth get-or-create client.xxx        # create a user xxx
[client.xxx]
	key = AQAOB45c4KIoCRAAF/kDd7r4uUjKEdEHOSP8Xw==
[root@ceph2 ceph]# ceph auth del client.xxx                  # delete it
updated
[root@ceph2 ceph]# ceph auth get client.xxx                  # the user is gone
Error ENOENT: failed to find client.xxx in keyring
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
	caps mon = "allow r"
	caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth get-key client.ning
AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==

4.4 Exporting users

[root@ceph2 ceph]# ceph auth get-or-create client.ning -o ./ceph.client.ning.keyring
[root@ceph2 ceph]# ll ./ceph.client.ning.keyring
-rw-r--r-- 1 root root 62 Mar 17 16:39 ./ceph.client.ning.keyring
[root@ceph2 ceph]# cat ./ceph.client.ning.keyring
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==

Verify the user on ceph1:

[root@ceph2 ceph]# scp ./ceph.client.ning.keyring 172.25.250.10:/etc/ceph/    # copy the keyring to ceph1
[root@ceph1 ceph-ansible]# ceph -s          # fails: the user or ID must be specified
2019-03-17 16:47:01.936939 7fbad2aae700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 16:47:01.936950 7fbad2aae700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 16:47:01.936951 7fbad2aae700 0 librados: client.admin initialization error (2) No such file or directory
[errno 2] error connecting to the cluster
[root@ceph1 ceph-ansible]# ceph -s --name client.ning       # works once the user is specified
  cluster:
    id: 35a91e48-8244-4e96-a7ee-980ab989d20d
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph4(active), standbys: ceph2, ceph3
    osd: 9 osds: 9 up, 9 in
  data:
    pools: 1 pools, 128 pgs
    objects: 3 objects, 21938 bytes
    usage: 972 MB used, 133 GB / 134 GB avail
    pgs: 128 active+clean
[root@ceph1 ceph-ansible]# ceph osd pool ls --name client.ning
testpool
[root@ceph1 ceph-ansible]# ceph osd pool ls --id ning
testpool
[root@ceph1 ceph-ansible]# rados -p testpool ls --id ning
test2
test
[root@ceph1 ceph-ansible]# rados -p testpool put aaa /etc/ceph/ceph.conf --id ning    # verify upload and download
[root@ceph1 ceph-ansible]# rados -p testpool ls --id ning
test2
aaa
test
[root@ceph1 ceph-ansible]# rados -p testpool get aaa /root/aaa.conf --name client.ning
[root@ceph1 ceph-ansible]# diff /root/aaa.conf /etc/ceph/ceph.conf

4.5 Importing users

[root@ceph2 ceph]# ceph auth export client.ning -o /etc/ceph/ceph.client.ning-1.keyring
export auth(auid = 18446744073709551615 key=AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw== with 2 caps)
[root@ceph2 ceph]# ll
total 20
-rw------- 1 ceph ceph 151 Mar 16 12:39 ceph.client.admin.keyring
-rw-r--r-- 1 root root 121 Mar 17 17:32 ceph.client.ning-1.keyring
-rw-r--r-- 1 root root  62 Mar 17 16:39 ceph.client.ning.keyring
-rw-r--r-- 1 ceph ceph 589 Mar 17 16:12 ceph.conf
drwxr-xr-x 2 ceph ceph  23 Mar 16 12:39 ceph.d
-rw-r--r-- 1 root root  92 Nov 23  2017 rbdmap
[root@ceph2 ceph]# cat ceph.client.ning-1.keyring            # includes the cap information
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
	caps mon = "allow r"
	caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
	caps mon = "allow r"
	caps osd = "allow rw pool=testpool"

4.6 Recovering a deleted user

[root@ceph2 ceph]# cat ceph.client.ning.keyring        # this keyring carries no cap information
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
[root@ceph2 ceph]# ceph auth del client.ning           # delete the user
updated
[root@ceph1 ceph-ansible]# ll /etc/ceph/ceph.client.ning.keyring    # on the client the keyring file still exists
-rw-r--r-- 1 root root 62 Mar 17 16:40 /etc/ceph/ceph.client.ning.keyring
[root@ceph1 ceph-ansible]# ceph -s --name client.ning  # but its user has been deleted, so it no longer works
2019-03-17 17:49:13.896609 7f841eb27700 0 librados: client.ning authentication error (1) Operation not permitted
[errno 1] error connecting to the cluster
[root@ceph2 ceph]# ceph auth import -i ./ceph.client.ning-1.keyring    # restore from ning-1.keyring
imported keyring
[root@ceph2 ceph]# ceph auth list |grep ning           # the user is back
installed auth entries:
client.ning
[root@ceph1 ceph-ansible]# ceph osd pool ls --name client.ning    # verify from the client: the key works again
testpool
EC-pool

4.7 Modifying user permissions

There are two approaches. The first is to delete the user and recreate it with the new permissions, but that invalidates every client currently connecting as that user. The method below avoids that.

ceph auth caps modifies a user's authorization in place. If the given user does not exist, it returns an error. If the user exists, the newly specified capabilities replace the existing ones wholesale. So to add a permission, restate the existing capabilities unchanged alongside the new one; to remove a permission, set that capability to an empty string.
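Because the new cap set replaces the old one wholesale, a helper that appends a grant to the existing OSD cap string captures the required workflow. This is a hypothetical convenience function, not part of any Ceph tooling:

```python
def add_pool_grant(existing_osd_caps: str, perms: str, pool: str) -> str:
    """'ceph auth caps' overwrites the whole cap set, so granting an
    extra pool means re-supplying the old grants plus the new one.
    Returns the string to pass as the user's new osd caps."""
    new_grant = f"allow {perms} pool={pool}"
    if not existing_osd_caps.strip():
        return new_grant
    return existing_osd_caps + "," + new_grant

print(add_pool_grant("allow rw pool=mytestpool", "rw", "testpool"))
# -> allow rw pool=mytestpool,allow rw pool=testpool
```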

[root@ceph2 ceph]# ceph auth get client.joy            # inspect the current caps
exported keyring for client.joy
[client.joy]
	key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
	caps mon = "allow r"
	caps osd = "allow rw pool=mytestpool"
[root@ceph2 ceph]# ceph osd pool ls
testpool
EC-pool
[root@ceph2 ceph]# ceph auth caps client.joy mon 'allow r' osd 'allow rw pool=mytestpool,allow rw pool=testpool'    # also grant joy access to testpool
updated caps for client.joy
[root@ceph2 ceph]# ceph auth get client.joy            # the grant was added
exported keyring for client.joy
[client.joy]
	key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
	caps mon = "allow r"
	caps osd = "allow rw pool=mytestpool,allow rw pool=testpool"
[root@ceph2 ceph]# rados -p testpool put joy /etc/ceph/ceph.client.admin.keyring --id joy    # fails: no local keyring yet, so export one first
2019-03-17 18:01:36.602310 7ff35e71ee40 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.joy.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-03-17 18:01:36.602337 7ff35e71ee40 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2019-03-17 18:01:36.602340 7ff35e71ee40 0 librados: client.joy initialization error (2) No such file or directory
couldn't connect to cluster: (2) No such file or directory
[root@ceph2 ceph]# ceph auth get-or-create client.joy -o /etc/ceph/ceph.client.joy.keyring    # export the key
[root@ceph2 ceph]# rados -p testpool put joy /etc/ceph/ceph.client.admin.keyring --id joy     # upload a test object
[root@ceph2 ceph]# rados -p testpool ls --id joy       # success: the permission change works
joy
test2
aaa
test
[root@ceph2 ceph]# ceph auth caps client.joy mon 'allow r' osd 'allow rw pool=testpool'       # drop the mytestpool grant
updated caps for client.joy
[root@ceph2 ceph]# ceph auth get client.joy            # confirmed
exported keyring for client.joy
[client.joy]
	key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
	caps mon = "allow r"
	caps osd = "allow rw pool=testpool"
[root@ceph2 ceph]# ceph auth caps client.ning mon '' osd ''    # clearing everything fails: the mon read cap must remain
Error EINVAL: moncap parse failed, stopped at end of ''
[root@ceph2 ceph]# ceph auth caps client.ning mon 'allow r' osd ''    # all caps cleared except the mon read cap
updated caps for client.ning
[root@ceph2 ceph]# ceph auth get client.ning
exported keyring for client.ning
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
	caps mon = "allow r"
	caps osd = ""
[root@ceph2 ceph]# ceph auth get-or-create joyning     # likewise, a user with no permissions at all (no monitor read cap) cannot be created
Error EINVAL: bad entity name

4.8 Pushing users to clients

Users are created mainly to authorize clients, so they need to be pushed out to the client machines. To push several users to the same client, write them all into a single keyring file and push that one file.

[root@ceph2 ceph]# ceph-authtool -C /etc/ceph/ceph.keyring        # create an empty keyring file
creating /etc/ceph/ceph.keyring
[root@ceph2 ceph]# ceph-authtool ceph.keyring --import-keyring ceph.client.ning.keyring    # add client.ning to it
importing contents of ceph.client.ning.keyring into ceph.keyring
[root@ceph2 ceph]# cat ceph.keyring
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
[root@ceph2 ceph]# ceph-authtool ceph.keyring --import-keyring ceph.client.joy.keyring     # add client.joy as well
importing contents of ceph.client.joy.keyring into ceph.keyring
[root@ceph2 ceph]# cat ceph.keyring        # both users present; push this file to a client to grant it both identities
[client.joy]
	key = AQBiBY5cJ2gBLBAA/ZCGDdp6JWkPuuU0YaLsrw==
[client.ning]
	key = AQAcBY5coY/rLxAAvq99xcSOrwLI1ip0WAw2Sw==
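Since keyring files are INI-style, the accumulation that ceph-authtool --import-keyring performs can be sketched with Python's configparser. This is a sketch under that assumption only; real keyrings may also carry caps lines, which ceph-authtool preserves:

```python
import configparser

def merge_keyrings(*keyring_texts: str) -> str:
    """Merge several INI-style keyring files into one, the way
    'ceph-authtool --import-keyring' accumulates [client.*] sections."""
    merged = configparser.ConfigParser()
    merged.optionxform = str          # preserve key case
    for text in keyring_texts:
        part = configparser.ConfigParser()
        part.optionxform = str
        part.read_string(text)
        for section in part.sections():
            if not merged.has_section(section):
                merged.add_section(section)
            for k, v in part.items(section):
                merged.set(section, k, v)
    out = []
    for section in merged.sections():
        out.append(f"[{section}]")
        for k, v in merged.items(section):
            out.append(f"\t{k} = {v}")
    return "\n".join(out)

# Hypothetical keyring contents standing in for the two exported files.
ning = "[client.ning]\nkey = AAAA\n"
joy = "[client.joy]\nkey = BBBB\n"
print(merge_keyrings(ning, joy))
```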

Author's note: the material in this post comes mainly from instructor 晏威 of Yutian Education; I verified all operations by running them myself. To repost, please obtain permission from Yutian Education (http://www.yutianedu.com/) or from the instructor (https://www.cnblogs.com/breezey/). Thanks!

轉(zhuǎn)載于:https://www.cnblogs.com/zyxnhr/p/10548454.html
