Integrating Kubernetes with Ceph


There are two useful references:
Reference 1: more complete coverage
Reference 2: RBD-focused, more detailed

Ceph configuration

Run the following commands on the Ceph cluster:

[root@node1 ~]# ceph -s
  cluster:
    id:     365b02aa-db0c-11ec-b243-525400ce981f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 41h)
    mgr: node2.dqryin(active, since 2d), standbys: node1.umzcuv
    osd: 12 osds: 10 up (since 2d), 10 in (since 2d)

  data:
    pools:   2 pools, 33 pgs
    objects: 1 objects, 19 B
    usage:   10 GiB used, 4.9 TiB / 4.9 TiB avail
    pgs:     33 active+clean

[root@node1 ~]# ceph mon stat
e11: 3 mons at {node1=[v2:172.70.10.181:3300/0,v1:172.70.10.181:6789/0],node2=[v2:172.70.10.182:3300/0,v1:172.70.10.182:6789/0],node3=[v2:172.70.10.183:3300/0,v1:172.70.10.183:6789/0]}, election epoch 100, leader 0 node1, quorum 0,1,2 node1,node2,node3

Choosing the ceph-csi version to deploy

Three components' versions are involved: Ceph (Octopus), Kubernetes (v1.24.0), and ceph-csi.
The current Ceph CSI / Kubernetes compatibility matrix is:

Ceph CSI Version    Container Orchestrator Name    Version Tested
v3.6.1              Kubernetes                     v1.21, v1.22, v1.23
v3.6.0              Kubernetes                     v1.21, v1.22, v1.23
v3.5.1              Kubernetes                     v1.21, v1.22, v1.23
v3.5.0              Kubernetes                     v1.21, v1.22, v1.23
v3.4.0              Kubernetes                     v1.20, v1.21, v1.22

The Kubernetes version in use is 1.24, so the latest ceph-csi release, v3.6.1, is used.
The Ceph version in use is Octopus; the full Ceph / Ceph CSI compatibility matrix is too long to reproduce here, so refer to the ceph-csi project documentation.
In summary, deploying ceph-csi v3.6.1 is sufficient.

Downloading ceph-csi

Download the ceph-csi 3.6.1 source code (download link).
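A quick way to fetch exactly this release is to clone the upstream ceph-csi repository at the v3.6.1 tag (a minimal sketch; the GitHub URL is the upstream project, not the elided link above):

git clone --branch v3.6.1 --depth 1 https://github.com/ceph/ceph-csi.git ceph-csi-3.6.1
cd ceph-csi-3.6.1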
The contents of the deploy directory (the rbd/kubernetes subdirectory is the part we need):

[root@node1 ceph-csi-3.6.1]# tree -L 3 deploy
deploy
├── cephcsi
│   └── image
│       └── Dockerfile
├── cephfs
│   └── kubernetes
│       ├── csi-cephfsplugin-provisioner.yaml
│       ├── csi-cephfsplugin.yaml
│       ├── csi-config-map.yaml
│       ├── csidriver.yaml
│       ├── csi-nodeplugin-psp.yaml
│       ├── csi-nodeplugin-rbac.yaml
│       ├── csi-provisioner-psp.yaml
│       └── csi-provisioner-rbac.yaml
├── Makefile
├── nfs
│   └── kubernetes
│       ├── csi-config-map.yaml
│       ├── csidriver.yaml
│       ├── csi-nfsplugin-provisioner.yaml
│       ├── csi-nfsplugin.yaml
│       ├── csi-nodeplugin-psp.yaml
│       ├── csi-nodeplugin-rbac.yaml
│       ├── csi-provisioner-psp.yaml
│       └── csi-provisioner-rbac.yaml
├── rbd
│   └── kubernetes
│       ├── csi-config-map.yaml
│       ├── csidriver.yaml
│       ├── csi-nodeplugin-psp.yaml
│       ├── csi-nodeplugin-rbac.yaml
│       ├── csi-provisioner-psp.yaml
│       ├── csi-provisioner-rbac.yaml
│       ├── csi-rbdplugin-provisioner.yaml
│       └── csi-rbdplugin.yaml
└── scc.yaml

Deploying RBD

  • Copy all of the YAML files under ceph-csi/deploy/rbd/kubernetes/ to a local working directory.
  • Create csi-config-map.yaml
    clusterID is the cluster ID, which can be obtained with ceph -s
    monitors can be found in /var/lib/ceph/365b02aa-db0c-11ec-b243-525400ce981f/mon.node1/config
  • ---
    apiVersion: v1
    kind: ConfigMap
    data:
      config.json: |-
        [
          {
            "clusterID": "365b02aa-db0c-11ec-b243-525400ce981f",
            "monitors": [
              "172.70.10.181:6789",
              "172.70.10.182:6789",
              "172.70.10.183:6789"
            ]
          }
        ]
    metadata:
      name: ceph-csi-config
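    If these values need to be looked up again later, they can be read straight from the cluster (a sketch; run on any Ceph admin node):

    # clusterID is simply the cluster fsid
    ceph fsid
    # monitor addresses (use the v1 port 6789 entries in the CSI config)
    ceph mon dump
    # or print a minimal ceph.conf containing fsid and mon_host in one go
    ceph config generate-minimal-conf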
  • Create csi-kms-config-map.yaml. This file is optional, but if it is omitted, the kms-related sections in csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml must be commented out.
  • ---
    apiVersion: v1
    kind: ConfigMap
    data:
      config.json: |-
        {}
    metadata:
      name: ceph-csi-encryption-kms-config
  • Create ceph-config-map.yaml
    ceph.conf is copied from the Ceph cluster's configuration file, i.e. the corresponding content of /etc/ceph/ceph.conf
  • ---
    apiVersion: v1
    kind: ConfigMap
    data:
      ceph.conf: |
        [global]
        fsid = 365b02aa-db0c-11ec-b243-525400ce981f
        mon_host = [v2:172.70.10.181:3300/0,v1:172.70.10.181:6789/0] [v2:172.70.10.182:3300/0,v1:172.70.10.182:6789/0] [v2:172.70.10.183:3300/0,v1:172.70.10.183:6789/0]
        #public_network = 172.70.10.0/24
        auth_cluster_required = cephx
        auth_service_required = cephx
        auth_client_required = cephx
      # keyring is a required key and its value should be empty
      keyring: |
    metadata:
      name: ceph-config
  • Create csi-rbd-secret.yaml
    Get the admin key:
  • [root@node1 ~]# ceph auth get client.admin
    exported keyring for client.admin
    [client.admin]
        key = AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

    Create csi-rbd-secret.yaml:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-rbd-secret
      namespace: default
    stringData:
      userID: kubernetes
      userKey: AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
      encryptionPassphrase: test_passphrase
  • Create the k8s_rbd block storage pool
  • [root@node1 ~]# ceph osd pool create k8s_rbd
    pool 'k8s_rbd' created
    [root@node1 ~]# rbd pool init k8s_rbd

    Create an authorized user (in practice, the admin account can also be used):

    ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=k8s_rbd' mgr 'profile rbd pool=k8s_rbd'
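    If the dedicated client.kubernetes user is used instead of admin, its key (which would then go into csi-rbd-secret.yaml as userKey) can be read back as follows (a sketch):

    # print just the key for the kubernetes client
    ceph auth get-key client.kubernetes
    # or the full keyring entry including caps
    ceph auth get client.kubernetes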
  • Apply all of the files above
    The full list is:
  • csi-config-map.yaml
    csi-kms-config-map.yaml (optional)
    ceph-config-map.yaml
    csi-rbd-secret.yaml
  • Create the CSI plugin components
  • kubectl apply -f csi-provisioner-rbac.yaml
    kubectl apply -f csi-nodeplugin-rbac.yaml

    When applying these, the images hosted on k8s.gcr.io cannot be pulled (for well-known network reasons); the affected images are:

    k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
    k8s.gcr.io/sig-storage/csi-snapshotter:v5.0.1
    k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
    k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
    k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.4.0

    Replace k8s.gcr.io/sig-storage with registry.aliyuncs.com/google_containers in the manifests that reference these images (csi-rbdplugin-provisioner.yaml and csi-rbdplugin.yaml).
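    A non-interactive way to do the replacement is a sed one-liner (a sketch, assuming it is run from the directory the YAML files were copied into):

    sed -i 's#k8s.gcr.io/sig-storage#registry.aliyuncs.com/google_containers#g' \
        csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml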
    Then apply the RBD plugin manifests:

    kubectl apply -f csi-rbdplugin-provisioner.yaml
    kubectl apply -f csi-rbdplugin.yaml

    Because Kubernetes 1.24.0 removed the Docker engine (dockershim), the image list after the apply completes is viewed with crictl:

    [root@node1 kubernetes]# crictl images
    IMAGE                                                             TAG       IMAGE ID        SIZE
    docker.io/calico/cni                                              v3.23.1   90d97aa939bbf   111MB
    docker.io/calico/node                                             v3.23.1   fbfd04bbb7f47   76.6MB
    docker.io/calico/pod2daemon-flexvol                               v3.23.1   01dda8bd1b91e   8.67MB
    docker.io/library/nginx                                           latest    de2543b9436b7   56.7MB
    quay.io/tigera/operator                                           v1.27.1   02245817b973b   60.3MB
    registry.aliyuncs.com/google_containers/coredns                   v1.8.6    a4ca41631cc7a   13.6MB
    registry.aliyuncs.com/google_containers/etcd                      3.5.3-0   aebe758cef4cd   102MB
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.0   529072250ccc6   33.8MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.0   88784fb4ac2f6   31MB
    registry.aliyuncs.com/google_containers/kube-proxy                v1.24.0   77b49675beae1   39.5MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.0   e3ed7dee73e93   15.5MB
    registry.aliyuncs.com/google_containers/pause                     3.6       6270bb605e12e   302kB
    registry.aliyuncs.com/google_containers/pause                     3.7       221177c6082a8   311kB
    [root@node1 kubernetes]#

    Note that host firewall rules (the iptables/firewalld service) can block pods from reaching the cluster's service network, so it is best to turn that service off; otherwise you may run into the situation below:

    [root@node5 ~]# crictl pods
    POD ID          CREATED       STATE   NAME                                        NAMESPACE          ATTEMPT   RUNTIME
    5d6dd91fd83e2   5 hours ago   Ready   csi-rbdplugin-provisioner-c6d7486dd-2jk5w   default            0         (default)
    354b1a8fd8b52   7 hours ago   Ready   csi-rbdplugin-wjpzd                         default            0         (default)
    198dc5bf556df   5 days ago    Ready   calico-apiserver-6d4dd4bcf9-n8zgk           calico-apiserver   0         (default)
    244e05f6f9c67   5 days ago    Ready   calico-typha-b76b84965-bhsxn                calico-system      0         (default)
    4813ce009d806   5 days ago    Ready   calico-node-r6mv8                           calico-system      0         (default)
    2bca356d7a28a   5 days ago    Ready   kube-proxy-89lgc                            kube-system        0         (default)
    [root@node5 ~]# crictl ps -a
    CONTAINER       IMAGE           CREATED       STATE     NAME                       ATTEMPT   POD ID
    b798b5f047428   89f8fb0f77c15   4 hours ago   Running   csi-rbdplugin-controller   19        5d6dd91fd83e2
    ed133e74dec67   89f8fb0f77c15   4 hours ago   Exited    csi-rbdplugin-controller   18        5d6dd91fd83e2
    8ceafecc7d9f8   89f8fb0f77c15   5 hours ago   Running   liveness-prometheus        0         5d6dd91fd83e2
    7ede68a34c5ab   89f8fb0f77c15   5 hours ago   Running   csi-rbdplugin              0         5d6dd91fd83e2
    a3afb73c6ed92   551fd931edd5e   5 hours ago   Running   csi-resizer                0         5d6dd91fd83e2
    753ded0a3a413   03e115718d258   5 hours ago   Running   csi-attacher               0         5d6dd91fd83e2
    825eaf4f07fa8   53ae5b88a3380   5 hours ago   Running   csi-snapshotter            0         5d6dd91fd83e2
    2abb44295907a   c3dfb4b04796b   5 hours ago   Running   csi-provisioner            0         5d6dd91fd83e2
    a9e6846498a1b   89f8fb0f77c15   7 hours ago   Running   liveness-prometheus        0         354b1a8fd8b52
    39638d5c0961a   89f8fb0f77c15   7 hours ago   Running   csi-rbdplugin              0         354b1a8fd8b52
    1b12e9d273f68   f45c8a305a0bb   7 hours ago   Running   driver-registrar           0         354b1a8fd8b52
    797228d6b31ed   3bcf34f7d7d8d   5 days ago    Running   calico-apiserver           0         198dc5bf556df
    dc4bae329b42f   fbfd04bbb7f47   5 days ago    Running   calico-node                0         4813ce009d806
    d5d0be4a3ef2f   90d97aa939bbf   5 days ago    Exited    install-cni                0         4813ce009d806
    8c5853e9a0905   4ac3a9100f349   5 days ago    Running   calico-typha               0         244e05f6f9c67
    12f2be66fd320   01dda8bd1b91e   5 days ago    Exited    flexvol-driver             0         4813ce009d806
    f4663a0650d73   77b49675beae1   5 days ago    Running   kube-proxy                 0         2bca356d7a28a
    [root@node5 ~]# crictl logs ed133e74dec67
    I0531 08:37:48.420227       1 cephcsi.go:180] Driver version: v3.6.1 and Git version: 1bd6297ecbdf11f1ebe6a4b20f8963b4bcebe13b
    I0531 08:37:48.420443       1 cephcsi.go:229] Starting driver type: controller with name: rbd.csi.ceph.com
    E0531 08:37:48.422369       1 controller.go:70] failed to create manager Get "https://10.96.0.1:443/api?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host
    E0531 08:37:48.422450       1 cephcsi.go:296] Get "https://10.96.0.1:443/api?timeout=32s": dial tcp 10.96.0.1:443: connect: no route to host

    Stopping the iptables service resolves the problem above.
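    On CentOS nodes this usually means stopping and disabling the firewall service on every node (a sketch; use whichever of firewalld or the legacy iptables service is actually installed):

    systemctl stop firewalld && systemctl disable firewalld
    # or, for the legacy service:
    systemctl stop iptables && systemctl disable iptables
    # flush any leftover rules
    iptables -F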

  • Create the StorageClass
    Create storage.class.yaml:
  • ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-rbd-sc
    provisioner: rbd.csi.ceph.com
    parameters:
      clusterID: 365b02aa-db0c-11ec-b243-525400ce981f
      pool: k8s_rbd   # name of the pool created earlier
      imageFeatures: layering
      csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
      csi.storage.k8s.io/provisioner-secret-namespace: default
      csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: default
      csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
      csi.storage.k8s.io/node-stage-secret-namespace: default
      csi.storage.k8s.io/fstype: ext4
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    mountOptions:
      - discard

    Create rbd-pvc.yaml:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-rbd-sc
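    After applying the StorageClass and the claim, the PVC should reach Bound within a few seconds (a sketch of the verification, assuming the manifests above were saved as storage.class.yaml and rbd-pvc.yaml):

    kubectl apply -f storage.class.yaml
    kubectl apply -f rbd-pvc.yaml
    # STATUS should become Bound once the provisioner has created the backing RBD image
    kubectl get pvc rbd-pvc
    # on the Ceph side, the image should now be visible in the k8s_rbd pool
    rbd ls k8s_rbd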
  • Use the PVC
    Create nginx.yaml:
  • [root@node1 ~]# cat nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - name: rbd
              mountPath: /usr/share/rbd
          volumes:
          - name: rbd
            persistentVolumeClaim:   # reference the PVC
              claimName: rbd-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ngx-service
      labels:
        app: nginx
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80
        nodePort: 32500
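    Once the Deployment is applied, the RBD volume should show up inside the pod as an ext4 filesystem mounted at /usr/share/rbd (a sketch of the check):

    kubectl apply -f nginx.yaml
    kubectl get pods -l app=nginx
    # the mount should be backed by a /dev/rbdX device formatted as ext4
    kubectl exec deploy/my-nginx -- df -hT /usr/share/rbd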

    Deploying the file system (CephFS)

  • Create a file system on the Ceph cluster
  • [root@node1 ~]# ceph osd pool create cephfs_metadata 32 32
    pool 'cephfs_metadata' created
    [root@node1 ~]# ceph osd pool create cephfs_data 32 32
    pool 'cephfs_data' created
    [root@node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
    new fs with metadata pool 3 and data pool 4
    [root@node1 ~]# ceph -s
      cluster:
        id:     365b02aa-db0c-11ec-b243-525400ce981f
        health: HEALTH_ERR
                1 filesystem is offline
                1 filesystem is online with fewer MDS than max_mds
      services:
        mon: 3 daemons, quorum node1,node2,node3 (age 2d)
        mgr: node2.dqryin(active, since 6d), standbys: node1.umzcuv
        mds: cephfs:0
        osd: 12 osds: 10 up (since 6d), 10 in (since 6d)
      data:
        pools:   4 pools, 97 pgs
        objects: 23 objects, 15 MiB
        usage:   10 GiB used, 4.9 TiB / 4.9 TiB avail
        pgs:     97 active+clean
    [root@node1 ~]# ceph auth get client.admin
    exported keyring for client.admin
    [client.admin]
        key = AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
    [root@node1 ~]# ceph mon stat
    e11: 3 mons at {node1=[v2:172.70.10.181:3300/0,v1:172.70.10.181:6789/0],node2=[v2:172.70.10.182:3300/0,v1:172.70.10.182:6789/0],node3=[v2:172.70.10.183:3300/0,v1:172.70.10.183:6789/0]}, election epoch 104, leader 0 node1, quorum 0,1,2 node1,node2,node3
    [root@node1 mon.node1]# cat config
    # minimal ceph.conf for 365b02aa-db0c-11ec-b243-525400ce981f
    [global]
        fsid = 365b02aa-db0c-11ec-b243-525400ce981f
        mon_host = [v2:172.70.10.181:3300/0,v1:172.70.10.181:6789/0] [v2:172.70.10.182:3300/0,v1:172.70.10.182:6789/0] [v2:172.70.10.183:3300/0,v1:172.70.10.183:6789/0]
    # The following step must not be skipped: CephFS needs a running MDS service before it can serve clients.
    # It is usually easiest to run this from the cephadm shell.
    [root@node1 kubernetes]# cephadm shell
    Inferring fsid 365b02aa-db0c-11ec-b243-525400ce981f
    Inferring config /var/lib/ceph/365b02aa-db0c-11ec-b243-525400ce981f/mon.node1/config
    Using recent ceph image quay.io/ceph/ceph@sha256:f2822b57d72d07e6352962dc830d2fa93dd8558b725e2468ec0d07af7b14c95d
    [ceph: root@node1 /]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
    Scheduled mds.cephfs update...
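    After the MDS daemons have been scheduled, the filesystem should leave the error state (a sketch of the checks, run on the Ceph side):

    # one MDS should become active for cephfs, with the others as standbys
    ceph mds stat
    # overall filesystem status, including data/metadata pools and MDS ranks
    ceph fs status cephfs
    # health should return to HEALTH_OK once an MDS is active
    ceph health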
  • After downloading, go into deploy/cephfs/kubernetes
  • Reuse the ceph-csi-config ConfigMap that was already created for RBD
  • Install
    In vim, replace the image registry in csi-cephfsplugin-provisioner.yaml and csi-cephfsplugin.yaml:
    :%s#k8s.gcr.io/sig-storage#registry.aliyuncs.com/google_containers#g
  • kubectl apply -f deploy/cephfs/kubernetes/csi-provisioner-rbac.yaml
    kubectl apply -f deploy/cephfs/kubernetes/csi-nodeplugin-rbac.yaml
    kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin-provisioner.yaml
    kubectl apply -f deploy/cephfs/kubernetes/csi-cephfsplugin.yaml

  • Change into the examples/cephfs/ directory and start deploying the client side
  • [root@node1 fs]# cat secret.yaml
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: csi-cephfs-secret
      namespace: default
    stringData:
      # Required for statically provisioned volumes
      userID: admin
      userKey: AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
      # Required for dynamically provisioned volumes
      adminID: admin
      adminKey: AQDpR4xiIN/TJRAAIIv3DMRdm70RnsGs5/DW9g==
    [root@node1 fs]# kubectl apply -f secret.yaml
    secret/csi-cephfs-secret created
    [root@node1 fs]# k get secret
    NAME                TYPE     DATA   AGE
    csi-cephfs-secret   Opaque   4      11s
    csi-rbd-secret      Opaque   2      28h
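    The csi-cephfs-sc StorageClass and fs-pvc claim that appear below are not shown in this walkthrough; they correspond to the storageclass.yaml and pvc.yaml examples in the same examples/cephfs directory. A minimal sketch, assuming the clusterID used earlier, the cephfs filesystem created above, and the csi-cephfs-secret just applied (save as e.g. fs-pvc.yaml and apply with kubectl apply -f):

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-cephfs-sc
    provisioner: cephfs.csi.ceph.com
    parameters:
      clusterID: 365b02aa-db0c-11ec-b243-525400ce981f
      fsName: cephfs
      csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/provisioner-secret-namespace: default
      csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/controller-expand-secret-namespace: default
      csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
      csi.storage.k8s.io/node-stage-secret-namespace: default
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: fs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 11Gi
      storageClassName: csi-cephfs-sc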
  • Problems encountered
  • [root@node1 ~]# k get pvc
    NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    fs-pvc    Pending                                                                        csi-cephfs-sc   27m
    rbd-pvc   Bound     pvc-80d393f0-8664-4d70-8e0d-d7a0550d4417   10Gi       RWO            csi-rbd-sc      7h22m
    [root@node1 ~]# kd pvc/fs-pvc
    Name:          fs-pvc
    Namespace:     default
    StorageClass:  csi-cephfs-sc
    Status:        Pending
    Volume:
    Labels:        <none>
    Annotations:   volume.beta.kubernetes.io/storage-provisioner: cephfs.csi.ceph.com
                   volume.kubernetes.io/storage-provisioner: cephfs.csi.ceph.com
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:
    Access Modes:
    VolumeMode:    Filesystem
    Used By:       <none>
    Events:
      Type     Reason                Age                    From                                                                                                    Message
      ----     ------                ----                   ----                                                                                                    -------
      Normal   Provisioning          3m55s (x14 over 27m)   cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-794b8d9f95-jwmw4_ac816be6-acdb-4447-a41a-e034c43d1b2b  External provisioner is provisioning volume for claim "default/fs-pvc"
      Warning  ProvisioningFailed    3m55s (x4 over 24m)    cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-794b8d9f95-jwmw4_ac816be6-acdb-4447-a41a-e034c43d1b2b  failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
      Warning  ProvisioningFailed    3m55s (x10 over 24m)   cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-794b8d9f95-jwmw4_ac816be6-acdb-4447-a41a-e034c43d1b2b  failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-aaed8aa7-c202-44c2-8def-f3479ea27ffe already exists
      Normal   ExternalProvisioning  2m26s (x102 over 27m)  persistentvolume-controller                                                                             waiting for a volume to be created, either by external provisioner "cephfs.csi.ceph.com" or manually created by system administrator
    # Check the cluster health on the Ceph side
    [root@node1 fs]# ceph health
    HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
    # The problem is that no MDS has been started for cephfs; starting the MDS below restores normal operation
    [root@node1 kubernetes]# cephadm shell
    Inferring fsid 365b02aa-db0c-11ec-b243-525400ce981f
    Inferring config /var/lib/ceph/365b02aa-db0c-11ec-b243-525400ce981f/mon.node1/config
    Using recent ceph image quay.io/ceph/ceph@sha256:f2822b57d72d07e6352962dc830d2fa93dd8558b725e2468ec0d07af7b14c95d
    [ceph: root@node1 /]# ceph orch apply mds cephfs --placement="3 node1 node2 node3"
    Scheduled mds.cephfs update...
    # Back in the Kubernetes environment
    [root@node1 ~]# k delete pvc/fs-pvc
    persistentvolumeclaim "fs-pvc" deleted
    [root@node1 fs]# k apply -f fs-pvc.yaml
    persistentvolumeclaim/fs-pvc created
    [root@node1 fs]# k get pvc
    NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    fs-pvc    Bound    pvc-01d43d98-3375-4642-9bd2-b4818ce59f77   11Gi       RWX            csi-cephfs-sc   6s
    rbd-pvc   Bound    pvc-80d393f0-8664-4d70-8e0d-d7a0550d4417   10Gi       RWO            csi-rbd-sc      7h23m
  • Mount CephFS
  • [root@node1 fs]# cat fs-nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: fs-nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
            - name: fs
              mountPath: /usr/share/rbd
          volumes:
          - name: fs
            persistentVolumeClaim:   # reference the PVC
              claimName: fs-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: fs-nginx-service
      labels:
        app: nginx
    spec:
      type: NodePort
      selector:
        app: nginx
      ports:
      - port: 80
        targetPort: 80
        nodePort: 32511

    Pod status:

    [root@node1 fs]# pods
    NAME                                            READY   STATUS    RESTARTS      AGE
    csi-cephfsplugin-2wh9v                          3/3     Running   0             85m
    csi-cephfsplugin-dwswx                          3/3     Running   0             85m
    csi-cephfsplugin-n5js6                          3/3     Running   0             85m
    csi-cephfsplugin-provisioner-794b8d9f95-jwmw4   6/6     Running   0             85m
    csi-cephfsplugin-provisioner-794b8d9f95-rprrp   6/6     Running   0             85m
    csi-cephfsplugin-provisioner-794b8d9f95-sd848   6/6     Running   0             85m
    csi-rbdplugin-provisioner-c6d7486dd-2jk5w       7/7     Running   19 (9h ago)   10h
    csi-rbdplugin-provisioner-c6d7486dd-mk68w       7/7     Running   2 (8h ago)    12h
    csi-rbdplugin-provisioner-c6d7486dd-qlgkf       7/7     Running   0             12h
    csi-rbdplugin-tthrg                             3/3     Running   0             12h
    csi-rbdplugin-vtlbs                             3/3     Running   0             12h
    csi-rbdplugin-wjpzd                             3/3     Running   0             12h
    fs-nginx-6d86d5d84d-77gvt                       1/1     Running   0             4m16s
    fs-nginx-6d86d5d84d-b9twd                       1/1     Running   0             4m16s
    fs-nginx-6d86d5d84d-s4v85                       1/1     Running   0             4m16s
    my-nginx-549466b985-nzkxl                       1/1     Running   0             7h29m

    Exec into one of the pods and create a file fs.txt in the shared directory:

    [root@node1 fs]# k exec -it pod/fs-nginx-6d86d5d84d-77gvt -- /bin/bash
    root@fs-nginx-6d86d5d84d-77gvt:/# df -h
    Filesystem                                                 Size  Used  Avail  Use%  Mounted on
    overlay                                                    50G   15G   36G    30%   /
    tmpfs                                                      64M   0     64M    0%    /dev
    tmpfs                                                      7.9G  0     7.9G   0%    /sys/fs/cgroup
    shm                                                        64M   0     64M    0%    /dev/shm
    /dev/mapper/centos-root                                    50G   15G   36G    30%   /etc/hosts
    172.70.10.181:6789,172.70.10.182:6789,172.70.10.183:6789:/volumes/csi/csi-vol-194d8e9e-e108-11ec-a020-26b729f874ac/372a597b-867d-40e0-b246-6537208c8a9f  11G  0  11G  0%  /usr/share/rbd
    tmpfs                                                      16G   12K   16G    1%    /run/secrets/kubernetes.io/serviceaccount
    tmpfs                                                      7.9G  0     7.9G   0%    /proc/acpi
    tmpfs                                                      7.9G  0     7.9G   0%    /proc/scsi
    tmpfs                                                      7.9G  0     7.9G   0%    /sys/firmware
    root@fs-nginx-6d86d5d84d-77gvt:/# cd /usr/share/rbd
    root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# ls
    root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# echo "cephfs" > fs.txt
    root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# cat fs.txt
    cephfs
    root@fs-nginx-6d86d5d84d-77gvt:/usr/share/rbd# exit
    exit

    Exec into another pod; the same fs.txt is visible in the shared directory:

    [root@node1 fs]# k exec -it pod/fs-nginx-6d86d5d84d-b9twd -- /bin/bash
    root@fs-nginx-6d86d5d84d-b9twd:/# cd /usr/share/rbd/
    root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# ls
    fs.txt
    root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# cat fs.txt
    cephfs
    root@fs-nginx-6d86d5d84d-b9twd:/usr/share/rbd# exit
    exit

    At this point, the integration of RBD and CephFS is complete.

    Object storage

    For Ceph object storage, Ceph itself exposes a layer-7 (HTTP) interface, so applications can access it directly via the S3-compatible object storage protocol; there is no need to integrate it through CSI.
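    A minimal sketch of that path (assuming an RGW service is deployed with cephadm; the service, user, and bucket names here are illustrative only):

    # deploy a RADOS Gateway if one is not already running
    ceph orch apply rgw s3gw --placement="1 node1"
    # create an S3 user; note the access_key and secret_key printed in the output
    radosgw-admin user create --uid=k8s-demo --display-name="k8s demo user"
    # any S3 client can then talk to the RGW endpoint directly, e.g. the AWS CLI
    aws --endpoint-url http://node1:80 s3 mb s3://demo-bucket
    aws --endpoint-url http://node1:80 s3 cp ./fs.txt s3://demo-bucket/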

    Summary

    The above covers the whole process of integrating Kubernetes with Ceph; hopefully it helps you solve the problems you encounter.
