
Configuring a Single-Node Ceph Cluster over IPv6

  • Author: lemon
  • Original link: https://lemon2013.github.io/2016/11/06/配置基于IPv6的Ceph/
  • Copyright: unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 3.0. Please credit the source when reposting!

Introduction

Why set up a Ceph environment on IPv6 all of a sudden? Pure coincidence: a project needed a file system that works over IPv6, and unfortunately Hadoop does not support it (I had always thought of Hadoop as fairly powerful). After some fiddling around, Ceph gave me hope. Enough chatter, let's get straight to it.

Test environment

  • Linux distribution: CentOS Linux release 7.2.1511 (Core)
    • Minimal image: about 603 MB
    • Everything image: about 7.2 GB
  • Ceph version: 0.94.9 (the hammer release)
  • I had originally picked the latest jewel release. The deployment itself succeeded, but when I tried Ceph's object storage feature it could not be reached over IPv6 and produced an error like the one below. It turned out to be a jewel bug that was still being fixed at the time, so here is a general tip as well: in production, avoid the very latest release. (The frontend option this error relates to is sketched right after the error message.)

    set_ports_option:[::]8888:invalid port sport spec
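    For context, this is the radosgw civetweb frontend refusing an IPv6 bind. On releases where the parsing works, asking civetweb for an explicit IPv6 bind looks roughly like the following ceph.conf fragment; the client section name and the port 8888 are placeholders rather than values from this setup, and RGW is not configured anywhere else in this walkthrough:

    [client.rgw.ceph001]
    rgw_frontends = "civetweb port=[::]:8888"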

  • Pre-flight checks

    Network configuration

    Refer to an earlier article, “CentOS7 設置靜態IPv6/IPv4地址” (setting static IPv4/IPv6 addresses on CentOS 7), to finish the network configuration.

    Set the hostname


    [root@localhost ~]# hostnamectl set-hostname ceph001 #ceph001 is whatever hostname you want to set

    [root@localhost ~]# vim /etc/hosts

    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

    2001:250:4402:2001:20c:29ff:fe25:8888 ceph001 #new entry; the IPv6 address is ceph001's static IPv6 address
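    A quick sanity check (not in the original post) to confirm that the hostname now resolves to the IPv6 address:

    [root@localhost ~]# getent hosts ceph001
    [root@localhost ~]# ping6 -c 2 ceph001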

    Switch the yum repositories

    For various reasons the official yum mirrors can be slow when downloading packages, so we switch to the Aliyun mirrors.


    [root@localhost ~]# yum clean all #clear the yum cache

    [root@localhost ~]# rm -rf /etc/yum.repos.d/*.repo

    [root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo #download the Aliyun base repo

    [root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo #download the Aliyun EPEL repo

    [root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo

    [root@localhost ~]# sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo

    [root@localhost ~]# sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo

    Add the Ceph repository


    [root@localhost ~]# vim /etc/yum.repos.d/ceph.repo

    [ceph]

    name=ceph

    baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/ #pick the release you want to install here

    gpgcheck=0

    [ceph-noarch]

    name=cephnoarch

    baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/ #pick the release you want to install here

    gpgcheck=0

    [root@localhost ~]# yum makecache
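    After rebuilding the cache, it does no harm to confirm that the hammer packages are actually visible (an extra check, not in the original post):

    [root@localhost ~]# yum list ceph --showduplicates | grep 0.94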

    Install ceph and ceph-deploy


    [root@localhost ~]# yum install ceph ceph-deploy

    Loaded plugins: fastestmirror, langpacks

    Loading mirror speeds from cached hostfile

    Resolving Dependencies

    --> Running transaction check

    ---> Package ceph.x86_64 1:0.94.9-0.el7 will be installed

    --> Processing Dependency: librbd1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: python-rbd = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: python-cephfs = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: libcephfs1 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: librados2 = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: python-rados = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: ceph-common = 1:0.94.9-0.el7 for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: python-requests for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: python-flask for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: redhat-lsb-core for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: hdparm for package: 1:ceph-0.94.9-0.el7.x86_64

    --> Processing Dependency: libcephfs.so.1()(64bit) for package: 1:ceph-0.94.9-0.el7.x86_64

    .......

    Dependencies Resolved

    =======================================================================================

    Package Arch Version Repository Size

    =======================================================================================

    Installing:

    ceph x86_64 1:0.94.9-0.el7 ceph 20 M

    ceph-deploy noarch 1.5.36-0 ceph-noarch 283 k

    Installing for dependencies:

    boost-program-options x86_64 1.53.0-25.el7 base 155 k

    ceph-common x86_64 1:0.94.9-0.el7 ceph 7.2 M

    ...

    Transaction Summary

    =======================================================================================

    Install 2 Packages (+24 Dependent packages)

    Upgrade ( 2 Dependent packages)

    Total download size: 37 M

    Is this ok [y/d/N]: y

    Downloading packages:

    No Presto metadata available for ceph

    warning: /var/cache/yum/x86_64/7/base/packages/boost-program-options-1.53.0-25.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY

    Public key for boost-program-options-1.53.0-25.el7.x86_64.rpm is not installed

    (1/28): boost-program-options-1.53.0-25.el7.x86_64.rpm | 155 kB 00:00:00

    (2/28): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00:00

    (3/28): ceph-deploy-1.5.36-0.noarch.rpm | 283 kB 00:00:00

    (4/28): leveldb-1.12.0-11.el7.x86_64.rpm | 161 kB 00:00:00

    ...

    ---------------------------------------------------------------------------------------

    Total 718 kB/s | 37 MB 00:53

    Retrieving key from http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

    Importing GPG key 0xF4A80EB5:

    Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"

    Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5

    From : http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

    Is this ok [y/N]: y

    ...

    Complete!

    Verify the installed versions


    [root@localhost ~]# ceph-deploy --version

    1.5.36

    [root@localhost ~]# ceph -v

    ceph version 0.94.9 (fe6d859066244b97b24f09d46552afc2071e6f90)

    Install NTP (with multiple nodes you would also configure the NTP server and clients), and disable SELinux and firewalld


    [root@localhost ~]# yum install ntp

    [root@localhost ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

    [root@localhost ~]# setenforce 0

    [root@localhost ~]# systemctl stop firewalld

    [root@localhost ~]# systemctl disable firewalld

    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

    Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
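    The post only installs ntp; to actually keep the clock in sync you would still enable and start the daemon, roughly as follows (assuming ntpd rather than chronyd):

    [root@localhost ~]# systemctl enable ntpd
    [root@localhost ~]# systemctl start ntpd
    [root@localhost ~]# ntpq -p #verify that time sources are reachable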

    Create the Ceph cluster

    On the admin node (ceph001):

    [root@ceph001 ~]# mkdir cluster
    [root@ceph001 ~]# cd cluster/

    Create the cluster


    [root@ceph001 cluster]# ceph-deploy new ceph001

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy new ceph001

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] func : <function new at 0xfe0668>

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] overwrite_conf : False

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x104c680>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] ssh_copykey : True

    [ceph_deploy.cli][INFO ] mon : ['ceph001']

    [ceph_deploy.cli][INFO ] public_network : None

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] cluster_network : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.cli][INFO ] fsid : None

    [ceph_deploy.new][DEBUG ] Creating new cluster named ceph

    [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: /usr/sbin/ip link show

    [ceph001][INFO ] Running command: /usr/sbin/ip addr show

    [ceph001][DEBUG ] IP addresses found: [u'192.168.122.1', u'49.123.105.124']

    [ceph_deploy.new][DEBUG ] Resolving host ceph001

    [ceph_deploy.new][DEBUG ] Monitor ceph001 at 2001:250:4402:2001:20c:29ff:fe25:8888

    [ceph_deploy.new][INFO ] Monitors are IPv6, binding Messenger traffic on IPv6

    [ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph001']

    [ceph_deploy.new][DEBUG ] Monitor addrs are ['[2001:250:4402:2001:20c:29ff:fe25:8888]']

    [ceph_deploy.new][DEBUG ] Creating a random mon key...

    [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

    [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

    [root@ceph001 cluster]# ll

    total 12

    -rw-r--r--. 1 root root 244 Nov 6 21:54 ceph.conf

    -rw-r--r--. 1 root root 3106 Nov 6 21:54 ceph-deploy-ceph.log

    -rw-------. 1 root root 73 Nov 6 21:54 ceph.mon.keyring

    [root@ceph001 cluster]# cat ceph.conf

    [global]

    fsid = 865e6b01-b0ea-44da-87a5-26a4980aa7a8

    ms_bind_ipv6 = true

    mon_initial_members = ceph001

    mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]

    auth_cluster_required = cephx

    auth_service_required = cephx

    auth_client_required = cephx

    Since this is a single-node deployment, change the default replica count from 3 to 1:


    [root@ceph001 cluster]# echo "osd_pool_default_size = 1" >> ceph.conf

    [root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] overwrite_conf : True

    [ceph_deploy.cli][INFO ] subcommand : push

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x14f9710>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] client : ['ceph001']

    [ceph_deploy.cli][INFO ] func : <function config at 0x14d42a8>

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.config][DEBUG ] Pushing config to ceph001

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
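    For a single-host cluster it is also common to relax the CRUSH failure domain from host to OSD, so that any pool whose size is later raised above 1 can still place its replicas. This is a frequently recommended extra setting rather than something the original post needed with size = 1:

    [root@ceph001 cluster]# echo "osd_crush_chooseleaf_type = 0" >> ceph.conf
    [root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001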

    Create the monitor

    Use ceph001 as the monitor node:


    [root@ceph001 cluster]# ceph-deploy mon create-initial

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] overwrite_conf : False

    [ceph_deploy.cli][INFO ] subcommand : create-initial

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x23865a8>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] func : <function mon at 0x237e578>

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.cli][INFO ] keyrings : None

    [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph001

    [ceph_deploy.mon][DEBUG ] detecting platform for host ceph001 ...

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core

    [ceph001][DEBUG ] determining if provided host has same hostname in remote

    [ceph001][DEBUG ] get remote short hostname

    [ceph001][DEBUG ] deploying mon to ceph001

    [ceph001][DEBUG ] get remote short hostname

    [ceph001][DEBUG ] remote hostname: ceph001

    [ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

    [ceph001][DEBUG ] create the mon path if it does not exist

    [ceph001][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph001/done

    [ceph001][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph001/done

    [ceph001][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

    [ceph001][DEBUG ] create the monitor keyring file

    [ceph001][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i ceph001 --keyring /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

    [ceph001][DEBUG ] ceph-mon: mon.noname-a [2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0 is local, renaming to mon.ceph001

    [ceph001][DEBUG ] ceph-mon: set fsid to 865e6b01-b0ea-44da-87a5-26a4980aa7a8

    [ceph001][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph001 for mon.ceph001

    [ceph001][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph001.mon.keyring

    [ceph001][DEBUG ] create a done file to avoid re-doing the mon deployment

    [ceph001][DEBUG ] create the init path if it does not exist

    [ceph001][DEBUG ] locating the `service` executable...

    [ceph001][INFO ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph001

    [ceph001][DEBUG ] === mon.ceph001 ===

    [ceph001][DEBUG ] Starting Ceph mon.ceph001 on ceph001...

    [ceph001][WARNIN] Running as unit ceph-mon.ceph001.1478441156.735105300.service.

    [ceph001][DEBUG ] Starting ceph-create-keys on ceph001...

    [ceph001][INFO ] Running command: systemctl enable ceph

    [ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.

    [ceph001][WARNIN] Executing /sbin/chkconfig ceph on

    [ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

    [ceph001][DEBUG ] ********************************************************************************

    [ceph001][DEBUG ] status for monitor: mon.ceph001

    [ceph001][DEBUG ] {

    [ceph001][DEBUG ] "election_epoch": 2,

    [ceph001][DEBUG ] "extra_probe_peers": [],

    [ceph001][DEBUG ] "monmap": {

    [ceph001][DEBUG ] "created": "0.000000",

    [ceph001][DEBUG ] "epoch": 1,

    [ceph001][DEBUG ] "fsid": "865e6b01-b0ea-44da-87a5-26a4980aa7a8",

    [ceph001][DEBUG ] "modified": "0.000000",

    [ceph001][DEBUG ] "mons": [

    [ceph001][DEBUG ] {

    [ceph001][DEBUG ] "addr": "[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0",

    [ceph001][DEBUG ] "name": "ceph001",

    [ceph001][DEBUG ] "rank": 0

    [ceph001][DEBUG ] }

    [ceph001][DEBUG ] ]

    [ceph001][DEBUG ] },

    [ceph001][DEBUG ] "name": "ceph001",

    [ceph001][DEBUG ] "outside_quorum": [],

    [ceph001][DEBUG ] "quorum": [

    [ceph001][DEBUG ] 0

    [ceph001][DEBUG ] ],

    [ceph001][DEBUG ] "rank": 0,

    [ceph001][DEBUG ] "state": "leader",

    [ceph001][DEBUG ] "sync_provider": []

    [ceph001][DEBUG ] }

    [ceph001][DEBUG ] ********************************************************************************

    [ceph001][INFO ] monitor: mon.ceph001 is running

    [ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

    [ceph_deploy.mon][INFO ] processing monitor mon.ceph001

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph001.asok mon_status

    [ceph_deploy.mon][INFO ] mon.ceph001 monitor has reached quorum!

    [ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum

    [ceph_deploy.mon][INFO ] Running gatherkeys...

    [ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpgY2IT7

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] get remote short hostname

    [ceph001][DEBUG ] fetch remote file

    [ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph001.asok mon_status

    [ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.admin

    [ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-mds

    [ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-osd

    [ceph001][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph001/keyring auth get client.bootstrap-rgw

    [ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring

    [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring

    [ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists

    [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring

    [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring

    [ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpgY2IT7
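    Since the whole point of this exercise is IPv6, it is worth confirming that the monitor really is listening on the IPv6 address (an extra check, not in the original post):

    [root@ceph001 cluster]# ss -tlnp | grep 6789

    The listening socket should show the address [2001:250:4402:2001:20c:29ff:fe25:8888]:6789, matching the monmap above, rather than an IPv4 address.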

    Check the cluster status


    [root@ceph001 cluster]# ceph -s

    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

    health HEALTH_ERR

    64 pgs stuck inactive

    64 pgs stuck unclean

    no osds

    monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

    election epoch 2, quorum 0 ceph001

    osdmap e1: 0 osds: 0 up, 0 in

    pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects

    0 kB used, 0 kB / 0 kB avail

    64 creating

    Add OSDs

    List the available disks


    [root@ceph001 cluster]# ceph-deploy disk list ceph001

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk list ceph001

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] overwrite_conf : False

    [ceph_deploy.cli][INFO ] subcommand : list

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1c79bd8>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] func : <function disk at 0x1c70e60>

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.cli][INFO ] disk : [('ceph001', None, None)]

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

    [ceph_deploy.osd][DEBUG ] Listing disks on ceph001...

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: /usr/sbin/ceph-disk list

    [ceph001][DEBUG ] /dev/sda :

    [ceph001][DEBUG ] /dev/sda1 other, xfs, mounted on /boot

    [ceph001][DEBUG ] /dev/sda2 other, LVM2_member

    [ceph001][DEBUG ] /dev/sdb other, unknown

    [ceph001][DEBUG ] /dev/sdc other, unknown

    [ceph001][DEBUG ] /dev/sdd other, unknown

    [ceph001][DEBUG ] /dev/sr0 other, iso9660

    Add the first OSD (/dev/sdb)


    [root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdb

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy disk zap ceph001:/dev/sdb

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] overwrite_conf : False

    [ceph_deploy.cli][INFO ] subcommand : zap

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1b14bd8>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] func : <function disk at 0x1b0be60>

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]

    [ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph001

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

    [ceph001][DEBUG ] zeroing last few blocks of device

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/sdb

    [ceph001][DEBUG ] Creating new GPT entries.

    [ceph001][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or

    [ceph001][DEBUG ] other utilities.

    [ceph001][DEBUG ] Creating new GPT entries.

    [ceph001][DEBUG ] The operation has completed successfully.

    [ceph001][WARNIN] partx: specified range <1:0> does not make sense


    [root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdb

    [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

    [ceph_deploy.cli][INFO ] Invoked (1.5.36): /usr/bin/ceph-deploy osd create ceph001:/dev/sdb

    [ceph_deploy.cli][INFO ] ceph-deploy options:

    [ceph_deploy.cli][INFO ] username : None

    [ceph_deploy.cli][INFO ] disk : [('ceph001', '/dev/sdb', None)]

    [ceph_deploy.cli][INFO ] dmcrypt : False

    [ceph_deploy.cli][INFO ] verbose : False

    [ceph_deploy.cli][INFO ] bluestore : None

    [ceph_deploy.cli][INFO ] overwrite_conf : False

    [ceph_deploy.cli][INFO ] subcommand : create

    [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys

    [ceph_deploy.cli][INFO ] quiet : False

    [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19b6680>

    [ceph_deploy.cli][INFO ] cluster : ceph

    [ceph_deploy.cli][INFO ] fs_type : xfs

    [ceph_deploy.cli][INFO ] func : <function osd at 0x19aade8>

    [ceph_deploy.cli][INFO ] ceph_conf : None

    [ceph_deploy.cli][INFO ] default_release : False

    [ceph_deploy.cli][INFO ] zap_disk : False

    [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdb:

    [ceph001][DEBUG ] connected to host: ceph001

    [ceph001][DEBUG ] detect platform information from remote host

    [ceph001][DEBUG ] detect machine type

    [ceph001][DEBUG ] find the location of an executable

    [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core

    [ceph_deploy.osd][DEBUG ] Deploying osd to ceph001

    [ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

    [ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdb journal None activate True

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type

    [ceph001][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb

    [ceph001][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:ae307314-3a81-4da2-974b-b21c24d9bba1 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb

    [ceph001][DEBUG ] The operation has completed successfully.

    [ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

    [ceph001][WARNIN] partx: /dev/sdb: error adding partition 2

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle

    [ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

    [ceph001][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

    [ceph001][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:16a6298d-59bb-4190-867a-10a5b519e7c0 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdb

    [ceph001][DEBUG ] The operation has completed successfully.

    [ceph001][WARNIN] INFO:ceph-disk:calling partx on created device /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

    [ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/udevadm settle

    [ceph001][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1

    [ceph001][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks

    [ceph001][DEBUG ] = sectsz=512 attr=2, projid32bit=1

    [ceph001][DEBUG ] = crc=0 finobt=0

    [ceph001][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25

    [ceph001][DEBUG ] = sunit=0 swidth=0 blks

    [ceph001][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0

    [ceph001][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2

    [ceph001][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1

    [ceph001][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0

    [ceph001][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.2SMGIk with options noatime,inode64

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.2SMGIk

    [ceph001][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.2SMGIk

    [ceph001][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.2SMGIk/journal -> /dev/disk/by-partuuid/ae307314-3a81-4da2-974b-b21c24d9bba1

    [ceph001][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.2SMGIk

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.2SMGIk

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb

    [ceph001][DEBUG ] Warning: The kernel is still using the old partition table.

    [ceph001][DEBUG ] The new table will be used at the next reboot.

    [ceph001][DEBUG ] The operation has completed successfully.

    [ceph001][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb

    [ceph001][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

    [ceph001][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/partx -a /dev/sdb

    [ceph001][WARNIN] partx: /dev/sdb: error adding partitions 1-2

    [ceph001][INFO ] Running command: systemctl enable ceph

    [ceph001][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.

    [ceph001][WARNIN] Executing /sbin/chkconfig ceph on

    [ceph001][INFO ] checking OSD status...

    [ceph001][DEBUG ] find the location of an executable

    [ceph001][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json

    [ceph001][WARNIN] there is 1 OSD down

    [ceph001][WARNIN] there is 1 OSD out

    [ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
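    ceph-deploy reports the new OSD as down/out for now; listing the OSD tree is a quick way to see what has been registered so far (an extra check, not in the original post):

    [root@ceph001 cluster]# ceph osd tree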

    Check the cluster status


    [root@ceph001 cluster]# ceph -s

    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

    health HEALTH_WARN

    64 pgs stuck inactive

    64 pgs stuck unclean

    monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

    election epoch 1, quorum 0 ceph001

    osdmap e3: 1 osds: 0 up, 0 in

    pgmap v4: 64 pgs, 1 pools, 0 bytes data, 0 objects

    0 kB used, 0 kB / 0 kB avail

    64 creating

    Add the remaining OSDs


    [root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdc

    [root@ceph001 cluster]# ceph-deploy disk zap ceph001:/dev/sdd

    [root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdc

    [root@ceph001 cluster]# ceph-deploy osd create ceph001:/dev/sdd

    [root@ceph001 cluster]# ceph -s

    cluster 865e6b01-b0ea-44da-87a5-26a4980aa7a8

    health HEALTH_WARN

    64 pgs stuck inactive

    64 pgs stuck unclean

    monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

    election epoch 1, quorum 0 ceph001

    osdmap e7: 3 osds: 0 up, 0 in

    pgmap v8: 64 pgs, 1 pools, 0 bytes data, 0 objects

    0 kB used, 0 kB / 0 kB avail

    64 creating
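    The reboot in the next step is what finally brings the OSDs up. Before resorting to a reboot, activating the prepared data partitions by hand may be enough; a hedged alternative using hammer's ceph-disk, a path the original post did not take:

    [root@ceph001 cluster]# ceph-disk activate /dev/sdb1
    [root@ceph001 cluster]# ceph-disk activate /dev/sdc1
    [root@ceph001 cluster]# ceph-disk activate /dev/sdd1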

    Reboot the machine and check the cluster status


    [root@ceph001 ~]# ceph -s

    cluster 2818c750-8724-4a70-bb26-f01af7f6067f

    health HEALTH_WARN

    too few PGs per OSD (21 < min 30)

    monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

    election epoch 1, quorum 0 ceph001

    osdmap e9: 3 osds: 3 up, 3 in

    pgmap v11: 64 pgs, 1 pools, 0 bytes data, 0 objects

    102196 kB used, 284 GB / 284 GB avail

    64 active+clean

    Error handling

    As shown above, the cluster is currently in HEALTH_WARN, with the following warning:

    too few PGs per OSD (21 < min 30)

    Increase the PG count of the rbd pool (too few PGs per OSD (21 < min 30)):


    [root@ceph001 cluster]# ceph osd pool set rbd pg_num 128

    set pool 0 pg_num to 128

    [root@ceph001 cluster]# ceph osd pool set rbd pgp_num 128

    set pool 0 pgp_num to 128
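    The choice of 128 is not arbitrary. A common rule of thumb is to aim for roughly 100 PGs per OSD and round to a power of two: 3 OSDs × 100 / (pool size 1) ≈ 300 would suggest 256, but even the smaller 128 already gives 128 / 3 ≈ 42 PGs per OSD, comfortably above the minimum of 30 that triggered the warning. (This reasoning is added for context; the original post simply picks 128.)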

    Check the cluster status again


    [root@ceph001 ~]# ceph -s

    cluster 2818c750-8724-4a70-bb26-f01af7f6067f

    health HEALTH_OK

    monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}

    election epoch 1, quorum 0 ceph001

    osdmap e13: 3 osds: 3 up, 3 in

    pgmap v17: 128 pgs, 1 pools, 0 bytes data, 0 objects

    101544 kB used, 284 GB / 284 GB avail

    128 active+clean

    Summary

  • This walkthrough only builds a simple single-node Ceph environment; switching to multiple nodes is easy and the steps are much the same.
  • Configuring Ceph over IPv6 is, in my experience, not very different from IPv4; only two things need attention (recapped in the snippet below):
  • Configure a static IPv6 address.
  • Set the hostname and add a hosts entry mapping it to that static IPv6 address.
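    In ceph.conf terms, the IPv6-specific part boils down to the two lines ceph-deploy generated earlier once the hostname resolved to an IPv6 address:

    ms_bind_ipv6 = true
    mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]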