
Deploying rook-ceph on k8s


Introduction

  • Rook website: https://rook.io
  • Rook is an incubating-level project of the Cloud Native Computing Foundation (CNCF).
  • Rook is an open-source, cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support to integrate various storage solutions natively with cloud-native environments.
  • As for Ceph, its website is here: https://ceph.com/
  • I have never managed to get Ceph's official Helm deployment to work, so I switched to the approach Rook provides.

環(huán)境

CentOS 7.5, kernel 4.18.7-1.el7.elrepo.x86_64
Docker 18.06
Kubernetes v1.12.2, deployed with kubeadm
Network: canal
DNS: coredns
Cluster members:
192.168.1.1 kube-master
192.168.1.2 kube-node1
192.168.1.3 kube-node2
192.168.1.4 kube-node3
192.168.1.5 kube-node4
Every worker node has a spare 200 GB disk prepared: /dev/sdb

Preparation

  • Enable ip_forward on all nodes
cat <<EOF > /etc/sysctl.d/ceph.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
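To double-check that these settings are active you can read them back with plain sysctl (if the two bridge keys are missing, the br_netfilter module is probably not loaded yet):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# every value should come back as 1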

Deploying the Operator

  • Deploy the Rook Operator
# Unless stated otherwise, all commands are run on the master
cd $HOME
git clone https://github.com/rook/rook.git
cd rook
cd cluster/examples/kubernetes/ceph
kubectl apply -f operator.yaml
  • Check the Operator's status
# Wait a moment after the apply.
# The operator creates two pods on every host in the cluster: rook-discover and rook-ceph-agent
kubectl -n rook-ceph-system get pod -o wide

Labeling the nodes

  • Label the nodes that will run ceph-mon with ceph-mon=enabled
kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-mon=enabled
  • Label the nodes that will run ceph-osd (i.e. the storage nodes) with ceph-osd=enabled
kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-osd=enabled
  • Label the node that will run ceph-mgr with ceph-mgr=enabled
# mgr can only run on a single node; that is a limitation of running ceph inside k8s
kubectl label nodes kube-node1 ceph-mgr=enabled
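A quick way to confirm the labels landed on the intended nodes (plain kubectl, nothing Rook-specific):

kubectl get nodes -l ceph-mon=enabled
kubectl get nodes -l ceph-osd=enabled
kubectl get nodes -l ceph-mgr=enabled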

Configuring the cluster.yaml file

  • Official reference for the configuration file: https://rook.io/docs/rook/v0.8/ceph-cluster-crd.html

  • A few points in the file deserve attention:

    • dataDirHostPath: this path is created on the host and holds Ceph's configuration files. When rebuilding the cluster, make sure this directory is empty, otherwise the mons will fail to start.
    • useAllDevices: use every device. Set it to false, otherwise Rook will claim every usable disk on the hosts.
    • useAllNodes: use every node. Set it to false; you certainly will not be using every node in the k8s cluster to build Ceph.
    • databaseSizeMB and journalSizeMB: when the disks are larger than 100 GB, just leave these two commented out.
  • The cluster.yaml file used in this walkthrough is shown below:

apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
---
# Allow the operator to create resources in this cluster's namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph-system
---
# Allow the pods in this namespace to work with configmaps
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-cluster
subjects:
- kind: ServiceAccount
  name: rook-ceph-cluster
  namespace: rook-ceph
---
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
    # v12 is luminous, v13 is mimic, and v14 is nautilus.
    # RECOMMENDATION: In production, use a specific version tag instead of the general v13 flag, which pulls the latest release and could result in different
    # versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
    image: ceph/ceph:v13
    # Whether to allow unsupported versions of Ceph. Currently only luminous and mimic are supported.
    # After nautilus is released, Rook will be updated to support nautilus.
    # Do not set to true in production.
    allowUnsupported: false
  # The path on the host where configuration files will be persisted. If not specified, a kubernetes emptyDir will be created (not recommended).
  # Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook
  # The service account under which to run the daemon pods in this cluster if the default account is not sufficient (OSDs)
  serviceAccount: rook-ceph-cluster
  # set the amount of mons to be started
  # count sets how many ceph-mon pods run; the default of three is fine here
  mon:
    count: 3
    allowMultiplePerNode: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
  network:
    # toggle to use hostNetwork
    # host networking supposedly lets hosts outside the cluster mount ceph,
    # but I have not tried it; feel free to flip this to true and experiment.
    # This cluster is only consumed from inside k8s, so I leave it false.
    hostNetwork: false
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
  # tolerate taints with a key of 'storage-node'.
  placement:
#    all:
#      nodeAffinity:
#        requiredDuringSchedulingIgnoredDuringExecution:
#          nodeSelectorTerms:
#          - matchExpressions:
#            - key: role
#              operator: In
#              values:
#              - storage-node
#      podAffinity:
#      podAntiAffinity:
#      tolerations:
#      - key: storage-node
#        operator: Exists
# The above placement information can also be specified for mon, osd, and mgr components
#    mon:
#    osd:
#    mgr:
    # nodeAffinity restricts pods to nodes carrying the matching labels;
    # recommended here, so the mon/osd/mgr pods do not wander around the cluster
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
  resources:
# The requests and limits set here, allow the mgr pod to use half of one CPU core and 1 gigabyte of memory
#    mgr:
#      limits:
#        cpu: "500m"
#        memory: "1024Mi"
#      requests:
#        cpu: "500m"
#        memory: "1024Mi"
# The above example requests/limits can also be added to the mon and osd components
#    mon:
#    osd:
  storage: # cluster level storage configuration and selection
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    location:
    config:
      # The default and recommended storeType is dynamically set to bluestore for devices and filestore for directories.
      # Set the storeType explicitly only if it is required not to use the default.
      # storeType: bluestore
      # databaseSizeMB: "1024" # this value can be removed for environments with normal sized disks (100 GB or larger)
      # journalSizeMB: "1024" # this value can be removed for environments with normal sized disks (20 GB or larger)
# Cluster level list of directories to use for storage. These values will be set for all nodes that have no `directories` set.
#    directories:
#    - path: /rook/storage-dir
# Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    # Recommended way to declare the disks:
    # name: a node name, matching its kubernetes.io/hostname label (what `kubectl get nodes` shows)
    # devices: the disks to turn into OSDs
    # - name: "sdb": turns /dev/sdb into an OSD
    nodes:
    - name: "kube-node1"
      devices:
      - name: "sdb"
    - name: "kube-node2"
      devices:
      - name: "sdb"
    - name: "kube-node3"
      devices:
      - name: "sdb"
#      directories: # specific directories to use for storage can be specified for each node
#      - path: "/rook/storage-dir"
#      resources:
#        limits:
#          cpu: "500m"
#          memory: "1024Mi"
#        requests:
#          cpu: "500m"
#          memory: "1024Mi"
#    - name: "172.17.4.201"
#      devices: # specific devices to use for storage can be specified for each node
#      - name: "sdb"
#      - name: "sdc"
#      config: # configuration can be specified at the node level which overrides the cluster level config
#        storeType: filestore
#    - name: "172.17.4.301"
#      deviceFilter: "^sd."

Deploying Ceph

  • Deploy Ceph
kubectl apply -f cluster.yaml

# The cluster resources are created in the rook-ceph namespace.
# Watch the pods in that namespace and you will see them being created in order.
kubectl -n rook-ceph get pod -o wide -w

# Once every pod is Running you are done.
# Note which hosts the pods landed on: they match the nodes we labelled earlier.
kubectl -n rook-ceph get pod -o wide
  • Switch to the other hosts and take a look at the disks

    • Switch to kube-node1
    lsblk
    • Switch to kube-node3
    lsblk

Configuring the Ceph dashboard

  • Check which service the dashboard sits behind
kubectl -n rook-ceph get service
# You can see the dashboard listening on port 8443
  • Create a NodePort-type service so the dashboard can be reached from outside the cluster (a sketch of the manifest follows below)
kubectl apply -f dashboard-external-https.yaml

# Check which port the NodePort was assigned
ss -tanl
kubectl -n rook-ceph get service
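For reference, the manifest in the rook v0.8 examples directory is roughly the NodePort Service below; treat it as a sketch and compare it with the file actually shipped in the repo, since the labels and selectors can differ between releases:

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  type: NodePort
  ports:
  - name: dashboard
    port: 8443
    targetPort: 8443
    protocol: TCP
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph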
  • Find the dashboard login username and password
MGR_POD=`kubectl get pod -n rook-ceph | grep mgr | awk '{print $1}'`
kubectl -n rook-ceph logs $MGR_POD | grep password
  • Open a browser and enter any node's IP plus the NodePort port
  • In my case that is: https://192.168.1.2:30290

Configuring Ceph as a StorageClass

  • The official sample file is storageclass.yaml
  • It uses RBD block storage
  • Pool creation reference: https://rook.io/docs/rook/v0.8/ceph-pool-crd.html
apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  # this name becomes the pool name once the ceph pool has been created
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 1
    # size is the number of data replicas kept in the pool; 1 means no extra copies
  failureDomain: osd
  # failureDomain: the failure domain for the data chunks
  # host: each chunk is placed on a different host
  # osd: each chunk is placed on a different osd
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
  # name of the StorageClass; this is what PVCs reference
provisioner: ceph.rook.io/block
parameters:
  pool: replicapool
  # Specify the namespace of the rook cluster from which to create volumes.
  # If not specified, it will use `rook` as the default namespace of the cluster.
  # This is also the namespace where the cluster will be
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# set the reclaim policy to Retain
reclaimPolicy: Retain
  • Create the StorageClass
kubectl apply -f storageclass.yaml
kubectl get storageclasses.storage.k8s.io -n rook-ceph
kubectl describe storageclasses.storage.k8s.io -n rook-ceph
  • Create an nginx pod and try mounting a volume
cat << EOF > nginx.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    name: nginx-port
    targetPort: 80
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /html
          name: http-file
      volumes:
      - name: http-file
        persistentVolumeClaim:
          claimName: nginx-pvc
EOF

kubectl apply -f nginx.yaml
  • Check that the PV and PVC were created
kubectl get pv,pvc

# The nginx pod should be running as well
kubectl get pod
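To convince yourself the RBD volume is really writable (and not just bound), you can write a file onto the mounted path and read it back; the label selector below assumes the app=nginx label from the manifest above:

NGINX_POD=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec $NGINX_POD -- sh -c 'echo "hello from rbd" > /html/index.html'
kubectl exec $NGINX_POD -- cat /html/index.html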
  • Delete the pod and check whether the PV still exists
kubectl delete -f nginx.yaml

kubectl get pv,pvc
# The pod and the PVC are gone, but the PV is still there!!!
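Because the StorageClass was created with reclaimPolicy: Retain, the released PV (and the RBD image behind it) is kept on purpose. Once you are sure the data is no longer needed it has to be removed by hand; a minimal sketch, where the PV name is whatever kubectl get pv shows:

kubectl get pv
kubectl delete pv <pv-name>
# Deleting the PV object only removes the Kubernetes record;
# the RBD image may still exist in the replicapool pool and can be cleaned up on the Ceph side if required.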

Adding a new OSD to the cluster

  • This time we add node4 to the cluster; label it first
kubectl label nodes kube-node4 ceph-osd=enabled
  • Edit cluster.yaml again
# Add node4's entry on top of the existing config (see the sketch below)
cd $HOME/rook/cluster/examples/kubernetes/ceph/
vi cluster.yaml
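The edit itself is just one more entry under spec.storage.nodes in cluster.yaml, for example:

    nodes:
    - name: "kube-node1"
      devices:
      - name: "sdb"
    - name: "kube-node2"
      devices:
      - name: "sdb"
    - name: "kube-node3"
      devices:
      - name: "sdb"
    - name: "kube-node4"    # the new storage node
      devices:
      - name: "sdb"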
  • Apply cluster.yaml again
kubectl apply -f cluster.yaml

# Keep an eye on the rook-ceph namespace; the cluster will bring node4 in automatically
kubectl -n rook-ceph get pod -o wide -w
kubectl -n rook-ceph get pod -o wide
  • Go to node4 and take a look at the disk
lsblk
  • Open the dashboard again and take a look

Removing a node

  • Remove the label from node3
kubectl label nodes kube-node3 ceph-osd-
  • Edit cluster.yaml again
# Remove node3's entry (see the sketch below)
cd $HOME/rook/cluster/examples/kubernetes/ceph/
vi cluster.yaml
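After this edit the spec.storage.nodes list only keeps the nodes that should stay in the cluster, for example:

    nodes:
    - name: "kube-node1"
      devices:
      - name: "sdb"
    - name: "kube-node2"
      devices:
      - name: "sdb"
    - name: "kube-node4"
      devices:
      - name: "sdb"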
  • Apply cluster.yaml again
kubectl apply -f cluster.yaml

# Keep an eye on the rook-ceph namespace
kubectl -n rook-ceph get pod -o wide -w
kubectl -n rook-ceph get pod -o wide

# Finally, remember to delete the /var/lib/rook directory on the removed host

Common issues

  • Official answers: https://rook.io/docs/rook/v0.8/common-issues.html

  • After a machine reboots, its OSD never gets back to Running and keeps restarting

# Workaround:
# drain the node
kubectl drain <node-name> --ignore-daemonsets --delete-local-data

# then bring it back
kubectl uncordon <node-name>

