Kubernetes: creating a PV backed by a Ceph RBD image


1. Create the ceph-secret Kubernetes Secret object. This Secret is used by the Kubernetes volume plugin to access the Ceph cluster. Fetch the client.admin keyring value and base64-encode it; run this on master1-admin (the Ceph admin node):

[root@master1-admin ~]# ceph auth get-key client.admin | base64
QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
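The printed value is already base64-encoded, which is exactly what goes into the Secret's data.key field in the next step. As an optional sanity check (not part of the original steps), decoding it should return the raw keyring value:

echo 'QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==' | base64 -d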

2. Create the Ceph Secret, on the Kubernetes control-plane node:

[root@master ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
[root@master ceph]# kubectl apply -f ceph-secret.yaml
secret/ceph-secret created
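An optional check (not in the original write-up) that the Secret exists and carries the expected base64 key:

[root@master ceph]# kubectl get secret ceph-secret -o yaml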

3. Back on the Ceph admin node, create a pool and then an RBD image in it:

[root@master1-admin ~]# ceph osd pool create k8stest 56
pool 'k8stest' created
You have new mail in /var/spool/mail/root
[root@master1-admin ~]# rbd create rbda -s 1024 -p k8stest
[root@master1-admin ~]# rbd feature disable k8stest/rbda object-map fast-diff deep-flatten
[root@master1-admin ~]# ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
k8srbd1
k8stest
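To confirm the image exists and see which features remain enabled after the disable step, an optional check on the Ceph admin node (output varies by Ceph version, so it is omitted here):

[root@master1-admin ~]# rbd info k8stest/rbda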

4. Create the PV, then a PVC bound to it and a Deployment that mounts the PVC:

[root@master ceph]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '192.168.0.5:6789'
      - '192.168.0.6:6789'
      - '192.168.0.7:6789'
    pool: k8stest
    image: rbda
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@master ceph]# vim pv.yaml
[root@master ceph]# kubectl apply -f pv.yaml
persistentvolume/ceph-pv created
[root@master ceph]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM
ceph-pv   1Gi        RWO            Recycle          Available

[root@master ceph]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@master ceph]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc   Bound    ceph-pv   1Gi        RWO                           4s
[root@master ceph]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
ceph-pv   1Gi        RWO            Recycle          Bound    default/ceph-pvc                           5h23m

[root@master ceph]# cat pod-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2       # tells deployment to run 2 pods matching the template
  template:         # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/ceph-data"
          name: ceph-data
      volumes:
      - name: ceph-data
        persistentVolumeClaim:
          claimName: ceph-pvc
[root@master ceph]# kubectl apply -f pod-2.yaml
deployment.apps/nginx-deployment created
[root@master ceph]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-d9f89fd7c-v8rxb   1/1     Running   0          12s   10.233.90.179   node1   <none>           <none>
nginx-deployment-d9f89fd7c-zfwzj   1/1     Running   0          12s   10.233.90.178   node1   <none>           <none>

From the output above, Pods can share-mount the same ReadWriteOnce PVC; note that both replicas landed on the same node (node1).

Note the characteristics of Ceph RBD block storage:

  • Ceph RBD block storage can be share-mounted ReadWriteOnce across Pods on the same node
  • Ceph RBD block storage can be share-mounted ReadWriteOnce by multiple containers of the same Pod on the same node
  • Ceph RBD block storage cannot be share-mounted ReadWriteOnce across nodes
  • If the node running a Pod that uses Ceph RBD goes down, the Pod is rescheduled to another node, but because RBD cannot be attached on multiple nodes at once and the dead Pod cannot automatically release the PV, the new Pod will not run properly
# Shut down the node the old Pods were running on; the replacement Pods cannot use the PVC
[root@master ceph]# kubectl get pod
NAME                                      READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-867b44bd69-nhh4m   1/1     Running             0          9m30s
nfs-client-provisioner-867b44bd69-xscs6   1/1     Terminating         2          18d
nginx-deployment-d9f89fd7c-4rzjh          1/1     Terminating         0          15m
nginx-deployment-d9f89fd7c-79ncq          1/1     Terminating         0          15m
nginx-deployment-d9f89fd7c-p54p9          0/1     ContainerCreating   0          9m30s
nginx-deployment-d9f89fd7c-prgt5          0/1     ContainerCreating   0          9m30s

Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Normal   Scheduled           11m                  default-scheduler        Successfully assigned default/nginx-deployment-d9f89fd7c-p54p9 to node2
  Warning  FailedAttachVolume  11m                  attachdetach-controller  Multi-Attach error for volume "ceph-pv" Volume is already used by pod(s) nginx-deployment-d9f89fd7c-79ncq, nginx-deployment-d9f89fd7c-4rzjh
  Warning  FailedMount         25s (x5 over 9m26s)  kubelet, node2           Unable to attach or mount volumes: unmounted volumes=[ceph-data], unattached volumes=[ceph-data default-token-cwbdx]: timed out waiting for the condition


Deployment update behavior:

When a Deployment triggers a rolling update, it ensures that at least 75% of the desired Pods stay running (maxUnavailable defaults to 25%). So for a single-Pod Deployment, a new Pod is always created first, and only after the new Pod is running is the old one shut down.

By default it also ensures that at most 25% more Pods than the desired count are started (maxSurge defaults to 25%).
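These defaults correspond to the rollingUpdate strategy fields of a Deployment. A minimal sketch that writes them out explicitly (the values shown are simply the Kubernetes defaults; they are not set anywhere in the manifests above):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired Pods may be unavailable during an update
      maxSurge: 25%         # at most 25% extra Pods may be created above the desired count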

Problem:

Combining the sharing behavior of Ceph RBD with the Deployment update behavior, the cause of the failure is the following:

When a Deployment triggers an update, to keep the service available it first creates a new Pod and waits for it to run before deleting the old Pod. If the new Pod is scheduled onto a different node than the old one, the RBD volume cannot be attached there, and the update gets stuck exactly as shown above.

Solutions:

1. Use shared storage that supports mounting across nodes and Pods, such as CephFS or GlusterFS.

2. Label a node and constrain the Pods managed by the Deployment to that fixed node (not recommended: if that node goes down, the service goes down with it). A minimal sketch of this option follows below.
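The following is a hedged sketch of option 2, not something from the original article: the label key/value disk=ceph is made up for illustration, and the excerpt would be merged into the Pod template of the nginx-deployment shown earlier.

# label the chosen node (hypothetical label key/value)
kubectl label node node1 disk=ceph

# excerpt of the Deployment's Pod template, pinning all replicas to the labelled node
spec:
  template:
    spec:
      nodeSelector:
        disk: ceph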
