

High-Availability Cluster Technology: RHCS in Detail (Part 1)


Prerequisites:
1) This setup uses three test nodes: node1.samlee.com, node2.samlee.com, and node3.samlee.com, with IP addresses 172.16.100.6, 172.16.100.7, and 172.16.100.8 respectively; the OS is CentOS 6.5 x86_64.

2) node4.samlee.com (172.16.100.9) serves as the shared storage host.

3) director.samlee.com (172.16.100.3) serves as the RHCS management platform; it also acts as the ansible jump host from which the cluster nodes are managed.

4) The cluster service is apache's httpd.
5) The address of the web service (the VIP) is 172.16.100.1.
6) A yum repository is configured in advance on every cluster node.

The deployment architecture is shown in the diagram below:


1. Preparation
To configure a Linux host as an HA cluster node, the following preparation is usually required:

1) Hostname resolution and IP address resolution must work for all nodes, and each node's hostname must match the output of "uname -n". Therefore, the /etc/hosts file on every node must contain the following:

# vim /etc/hosts
172.16.100.6    node1.samlee.com    node1
172.16.100.7    node2.samlee.com    node2
172.16.100.8    node3.samlee.com    node3
172.16.100.9    node4.samlee.com    node4
172.16.100.3    director.samlee.com director
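As a quick sanity check, a small script like the following (a sketch; the hosts table is inlined here for illustration, whereas on a real node you would read /etc/hosts) can confirm that every expected hostname is resolvable:

```shell
#!/bin/sh
# Sketch: verify that every cluster hostname appears in a hosts table.
# The table is inlined for illustration; substitute /etc/hosts on a real node.
hosts='172.16.100.6 node1.samlee.com node1
172.16.100.7 node2.samlee.com node2
172.16.100.8 node3.samlee.com node3
172.16.100.9 node4.samlee.com node4
172.16.100.3 director.samlee.com director'

for h in node1.samlee.com node2.samlee.com node3.samlee.com \
         node4.samlee.com director.samlee.com; do
    # -w matches whole words, so partial names do not count as hits
    if printf '%s\n' "$hosts" | grep -qw "$h"; then
        echo "$h: ok"
    else
        echo "$h: MISSING"
    fi
done
```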

To make the hostname persist across reboots, run commands like the following on each node:
Node1:

# sed -i 's@\(HOSTNAME=\).*@\1node1.samlee.com@g' /etc/sysconfig/network
# hostname node1.samlee.com

Node2:

# sed -i 's@\(HOSTNAME=\).*@\1node2.samlee.com@g' /etc/sysconfig/network
# hostname node2.samlee.com

Node3:

# sed -i 's@\(HOSTNAME=\).*@\1node3.samlee.com@g' /etc/sysconfig/network
# hostname node3.samlee.com

Node4:

# sed -i 's@\(HOSTNAME=\).*@\1node4.samlee.com@g' /etc/sysconfig/network
# hostname node4.samlee.com


2) Set up key-based ssh communication between the nodes, which can be done with the following commands:
Node1:

# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node4
# for i in {1..4};do ssh node$i 'date';done

Node2:

# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node4
# for i in {1..4};do ssh node$i 'date';done

Node3:

# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node4
# for i in {1..4};do ssh node$i 'date';done

Node4:

# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
# for i in {1..4};do ssh node$i 'date';done

director:

# ssh-keygen -t rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node4
# for i in {1..4};do ssh node$i 'date';done

3) Set up automatic time synchronization every 5 minutes (required on every node):

# crontab -e
*/5 * * * * /sbin/ntpdate 172.16.100.10 &> /dev/null

4) Disable selinux (required on every node):

# setenforce 0
# vim /etc/selinux/config
SELINUX=disabled



2. Cluster installation (configuring with conga)
The core components of RHCS are cman and rgmanager, where cman is the openais-based "cluster infrastructure layer" and rgmanager is the resource manager. Cluster resources in RHCS are configured by editing the main configuration file /etc/cluster/cluster.conf, which many users find challenging. RHEL therefore provides luci, a web-based management tool that only needs to be installed on a single host, while cman and rgmanager (together with the ricci agent that luci talks to) must be installed on every cluster node. This can be done from the jump host with ansible:

# ansible rhcs -m ping
node3.samlee.com | success >> {
    "changed": false,
    "ping": "pong"
}
node1.samlee.com | success >> {
    "changed": false,
    "ping": "pong"
}
node2.samlee.com | success >> {
    "changed": false,
    "ping": "pong"
}
# ansible rhcs -m yum -a "name=ricci state=present"
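The `rhcs` host group used by these ansible commands is assumed to be defined in the inventory on the jump host; a minimal /etc/ansible/hosts sketch (group name and membership inferred from the ping output above) might look like:

```
[rhcs]
node1.samlee.com
node2.samlee.com
node3.samlee.com
```

node4 is deliberately left out of the group, since it serves as the iSCSI storage host rather than a cluster member.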

Note: if the epel repository is enabled, yum may pull the dependencies from epel, which RHCS treats as unsupported. Disable epel during installation:

# yum repolist
# ansible rhcs -m yum -a "name=ricci state=present disablerepo=epel"


After installation, enable the service at boot:

# ansible rhcs -m service -a "name=ricci state=started enabled=yes"

Check the listening port:

# ansible rhcs -m shell -a "ss -tunlp | grep ricci"


Installing the RHCS cluster management platform:

Install on the director host:

# yum -y install luci

Note: if the epel repository is enabled, yum may pull luci's dependencies from epel, which RHCS treats as unsupported, so disable epel when installing luci:

# yum -y install luci --disablerepo=epel
# service luci start
# ss -tunlp | grep 8084
tcp    LISTEN     0      5      *:8084      *:*      users:(("luci",3009,5))




3. Cluster configuration

Log in to the web interface to manage the cluster:

https://172.16.100.3:8084

Set the ricci user's password on all nodes; with ansible this is quick:

# ansible rhcs -m shell -a "echo samlee | passwd --stdin ricci"

Create the cluster as shown below:

Failover domain configuration:

Define the web service cluster resources:

(1) Install the httpd service on all nodes and set up a test page:

# ansible rhcs -m yum -a "name=httpd state=present"
# vim setindex.sh
#!/bin/bash
#
echo "<h1>`uname -n`</h1>" > /var/www/html/index.html
# chmod +x setindex.sh
# for i in {1..3}; do scp -p setindex.sh node$i:/tmp/;done
# ansible rhcs -m shell -a "/tmp/setindex.sh"


Create a resource group and define the floating VIP and the httpd service resources:

Create the resource group:

Check the running state of the cluster resources:

Test as shown below:
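Since the conga screenshots are not reproduced here, the following is a sketch of roughly what the resulting resource definitions might look like in /etc/cluster/cluster.conf. The failover-domain name and its priority/ordering settings are illustrative assumptions; the VIP 172.16.100.1 and the service name webservice are taken from this article:

```xml
<rm>
  <failoverdomains>
    <!-- "webdomain" and its settings are assumed for illustration -->
    <failoverdomain name="webdomain" ordered="1" restricted="1">
      <failoverdomainnode name="node1.samlee.com" priority="1"/>
      <failoverdomainnode name="node2.samlee.com" priority="2"/>
      <failoverdomainnode name="node3.samlee.com" priority="3"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="172.16.100.1" monitor_link="on"/>
    <script file="/etc/rc.d/init.d/httpd" name="httpd"/>
  </resources>
  <service autostart="1" domain="webdomain" name="webservice">
    <ip ref="172.16.100.1"/>
    <script ref="httpd"/>
  </service>
</rm>
```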

RHCS command-line cluster management tools:

Cluster configuration file: /etc/cluster/cluster.conf

1. Show the status of all cluster nodes:

# clustat
Cluster Status for mycluster @ Tue Aug 23 16:46:57 2016
Member Status: Quorate

 Member Name                          ID   Status
 ------ ----                          ---- ------
 node1.samlee.com                        1 Online, Local, rgmanager
 node2.samlee.com                        2 Online, rgmanager
 node3.samlee.com                        3 Online, rgmanager

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:webservice           node1.samlee.com             started

2. Relocate a cluster service to another node:

# clusvcadm -r webservice -m node3.samlee.com

3. Show detailed information about all nodes:

# cman_tool nodes -a


The above configured an RHCS cluster with conga; next we configure an RHCS cluster from the command line:

4. Cluster installation (configuring RHCS from the command line)

(1) Use ansible to install the cluster packages corosync, cman, and rgmanager on the nodes:

# ansible rhcs -m yum -a "name=corosync state=present"
# ansible rhcs -m yum -a "name=cman state=present"
# ansible rhcs -m yum -a "name=rgmanager state=present"
# ansible rhcs -m yum -a "name=ricci state=present"
# ansible rhcs -m service -a "name=ricci state=started enabled=yes"
# ansible rhcs -m service -a "name=corosync state=started enabled=yes"

(2) Generate the cluster configuration file on any one node of the cluster:

1) Create the skeleton of the cluster configuration file:

[root@node1 ~]# ccs_tool create mycluster

2) Define the cluster nodes:

[root@node1 ~]# ccs_tool addnode node1.samlee.com -n 1 -v 1
[root@node1 ~]# ccs_tool addnode node2.samlee.com -n 2 -v 1
[root@node1 ~]# ccs_tool addnode node3.samlee.com -n 3 -v 1

3) Copy the cluster configuration file to the other nodes (node2, node3), then start cman on all nodes and add it to the boot-time service list:

# for i in {1..3};do scp -p /etc/cluster/cluster.conf node$i:/etc/cluster/;done
# ansible rhcs -m shell -a "service cman start"
# ansible rhcs -m service -a "name=cman state=started enabled=yes"

4) Verify the node information:

[root@node1 ~]# ccs_tool lsnode
Cluster name: mycluster, config_version: 4

Nodename                        Votes Nodeid Fencetype
node1.samlee.com                   1    1
node2.samlee.com                   1    2
node3.samlee.com                   1    3

[root@node1 ~]# clustat
Cluster Status for mycluster @ Wed Aug 24 17:03:26 2016
Member Status: Quorate

 Member Name                          ID   Status
 ------ ----                          ---- ------
 node1.samlee.com                        1 Online, Local
 node2.samlee.com                        2 Online
 node3.samlee.com                        3 Online

Configuring iSCSI shared storage for the cluster

(1) On node4, prepare the partitions to share (/dev/sda5 and /dev/sda6, two 20G logical partitions):

# fdisk -l /dev/sda5 /dev/sda6

Disk /dev/sda5: 21.5 GB, 21483376128 bytes
255 heads, 63 sectors/track, 2611 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda6: 21.5 GB, 21484399104 bytes
255 heads, 63 sectors/track, 2611 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

(2) Install the target service on node4:

# yum -y install scsi-target-utils

(3) Configure /etc/tgt/targets.conf on node4 as follows:

# vim /etc/tgt/targets.conf
<target iqn.2016-08.com.samlee:iscsi.disk>
    backing-store /dev/sda5
    backing-store /dev/sda6
    initiator-address 172.16.0.0/16
</target>

(4) Start the target service on node4 and enable it at boot:

# chkconfig tgtd on
# service tgtd start
# ss -tunlp | grep tgt
tcp    LISTEN     0      128      :::3260      :::*      users:(("tgtd",1469,5),("tgtd",1472,5))
tcp    LISTEN     0      128      *:3260       *:*       users:(("tgtd",1469,4),("tgtd",1472,4))

(5) Install the iSCSI initiator utilities on each cluster node:

# ansible rhcs -m yum -a "name=iscsi-initiator-utils state=present"

(6) Configure the initiator on each cluster node:
The initiator configuration lives in /etc/iscsi/, which contains two files: iscsid.conf, the daemon's configuration file, and initiatorname.iscsi, which records the initiator's name. Configure them as follows:

# ansible rhcs -m shell -a 'echo "InitiatorName=`iscsi-iname -p iqn.2016-08.com.samlee:iscsi.disk`" > /etc/iscsi/initiatorname.iscsi'
# ansible rhcs -m shell -a 'echo "InitiatorAlias=initiator" >> /etc/iscsi/initiatorname.iscsi'

(7) Start the iSCSI client services on each cluster node and enable them at boot:

# ansible rhcs -m service -a "name=iscsi state=started enabled=yes"
# ansible rhcs -m service -a "name=iscsid state=started enabled=yes"

(8) Discover the target from all cluster nodes:

# ansible rhcs -m shell -a "iscsiadm -m discovery -t sendtargets -p 172.16.100.9"

(9) Log in to the target from all cluster nodes:

# ansible rhcs -m shell -a "iscsiadm -m node -T iqn.2016-08.com.samlee:iscsi.disk -p 172.16.100.9 -l"

(10) Check the disk state on all cluster nodes:

# ansible rhcs -m shell -a 'fdisk -l /dev/sd[a-z]'


Configuring the cluster file system gfs2

(1) Install gfs2-utils on each cluster node:

# ansible rhcs -m yum -a "name=gfs2-utils state=present"

(2) On the cluster nodes, check whether the gfs2 module is loaded:

# ansible rhcs -m shell -a 'lsmod | grep gfs'

(3) On one node, create a 10G partition /dev/sdb1:

# fdisk -l /dev/sdb1

Disk /dev/sdb1: 10.7 GB, 10738450432 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

(4) Format the cluster file system:

Using the gfs2 command-line tools:
mkfs.gfs2 is the gfs2 file system creation tool; its commonly used options are:

-b BlockSize: file system block size; minimum 512, default 4096
-J MegaBytes: size of each gfs2 journal area; default 128MB, minimum 8MB
-j Number: number of journal areas to create; in general, one journal area is needed for each client that will mount the file system
-p LockProtoName: lock protocol to use, normally lock_dlm or lock_nolock
-t LockTableName: lock table name, in the form clustername:fsname. A cluster file system needs a lock table name so that cluster nodes, when taking file locks, know which cluster file system the lock belongs to. clustername must match the cluster name in the cluster configuration file, so only nodes of that cluster can access the file system; in addition, within one cluster each file system name must be unique.

So, to create the gfs2 cluster file system on the /dev/sdb1 prepared above:

# mkfs.gfs2 -j 3 -t mycluster:webstore -p lock_dlm /dev/sdb1
Are you sure you want to proceed? [y/n] y
Device:                    /dev/sdb1
Blocksize:                 4096
Device Size                10.00 GB (2621692 blocks)
Filesystem Size:           10.00 GB (2621689 blocks)
Journals:                  3    -- number of journal areas; only 3 nodes can mount
Resource Groups:           41
Locking Protocol:          "lock_dlm"
Lock Table:                "mycluster:webstore"
UUID:                      fb0c4327-e2da-bad8-9a55-354b728a162d

(5) Mounting and use:

1) Mount on the current cluster node; no file system type needs to be specified when mounting:

[root@node1 ~]# mount /dev/sdb1 /mnt
[root@node1 ~]# mount | grep mnt
/dev/sdb1 on /mnt type gfs2 (rw,relatime,hostdata=jid=0)
[root@node1 ~]# touch /mnt/file{1..5}.xlsx
[root@node1 ~]# ls /mnt/
file1.xlsx  file2.xlsx  file3.xlsx  file4.xlsx  file5.xlsx

2) Mount on cluster node node2 (the test shows that reads and writes are synchronized across nodes in real time):

-- re-read the partition table
[root@node2 ~]# partx -a /dev/sdb
[root@node2 ~]# mount -t gfs2 /dev/sdb1 /mnt/
[root@node2 ~]# mount | grep mnt
/dev/sdb1 on /mnt type gfs2 (rw,relatime,hostdata=jid=1)
[root@node2 ~]# ls /mnt/
file1.xlsx  file2.xlsx  file3.xlsx  file4.xlsx  file5.xlsx

3) Mount on cluster node node3 (again, reads and writes are synchronized in real time):

-- re-read the partition table
[root@node3 ~]# partx -a /dev/sdb
[root@node3 ~]# mount -t gfs2 /dev/sdb1 /mnt/
[root@node3 ~]# mount | grep mnt
/dev/sdb1 on /mnt type gfs2 (rw,relatime,hostdata=jid=2)
[root@node3 ~]# ls /mnt/
file1.xlsx  file2.xlsx  file3.xlsx  file4.xlsx  file5.xlsx

Troubleshooting: if the configured journal areas are not enough, add new ones with gfs2_jadd:

[root@node3 ~]# gfs2_jadd -j 1 /dev/sdb1
Filesystem:            /mnt
Old Journals           3
New Journals           4

4) Temporarily freeze and unfreeze the cluster file system:

# gfs2_tool freeze /mnt/
# gfs2_tool unfreeze /mnt/

5) List the tunable parameters of a gfs2 mount point:

# gfs2_tool gettune /mnt
incore_log_blocks = 8192
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
statfs_quantum = 30
quota_scale = 1.0000   (1, 1)
new_files_jdata = 0

6) Change a mount point parameter -- adjust the log flush interval:

[root@node1 mnt]# gfs2_tool settune /mnt log_flush_secs 120
[root@node1 mnt]# gfs2_tool gettune /mnt | grep log_flush_secs
log_flush_secs = 120

7) List the journal areas of the file system:

# gfs2_tool journals /mnt
journal2 - 128MB
journal3 - 128MB
journal1 - 128MB
journal0 - 128MB
4 journal(s) found.

8) Mount the gfs2 file system automatically at boot:

# vim /etc/fstab
/dev/sdb1        /mnt            gfs2    defaults    0 0
# service gfs2 start
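Since /dev/sdb1 here is an iSCSI-backed device, the mount has to wait until networking and the iSCSI session are up; a common variant of the fstab entry (a sketch) adds the _netdev option for that reason:

```
/dev/sdb1        /mnt            gfs2    defaults,_netdev    0 0
```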


Creating a gfs2 file system on a clustered logical volume

1) On a cluster node, prepare a 10G LVM-type partition /dev/sdb2:

# fdisk -l /dev/sdb | grep "Linux LVM"
/dev/sdb2           10242       20482    10486784   8e  Linux LVM
# partx -a /dev/sdb

Note: `partx -a /dev/sdb` must be run on every node so that each one re-reads the partition table (it may need to be run twice before the new partition shows up):

# ansible rhcs -m shell -a 'partx -a /dev/sdb'

2) Install the clustered LVM package on each cluster node:

Package information for lvm2-cluster:

# yum info lvm2-cluster
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
c6-media                                          | 4.0 kB     00:00 ...
Available Packages
Name        : lvm2-cluster
Arch        : x86_64
Version     : 2.02.100
Release     : 8.el6
Size        : 424 k
Repo        : c6-media
Summary     : Cluster extensions for userland logical volume management tools
URL         : http://sources.redhat.com/lvm2
License     : GPLv2
Description : Extensions to LVM2 to support clusters.

Install lvm2-cluster:

# ansible rhcs -m yum -a "name=lvm2-cluster state=present"

3) Switch the LVM locking type on every cluster node to the cluster lock:

# ansible rhcs -m shell -a 'sed -i "s@^\([[:space:]]*locking_type\).*@\1 = 3@g" /etc/lvm/lvm.conf'
or:
# ansible rhcs -m shell -a 'lvmconf --enable-cluster'
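The sed expression above is worth dry-running against a sample line before pushing it to every node; a quick local sketch (the sample file path is illustrative):

```shell
# Sketch: dry-run the locking_type substitution on a sample line
# before applying it to /etc/lvm/lvm.conf on every node.
printf '    locking_type = 1\n' > /tmp/lvm.conf.sample
sed -i 's@^\([[:space:]]*locking_type\).*@\1 = 3@g' /tmp/lvm.conf.sample
cat /tmp/lvm.conf.sample
# expected:     locking_type = 3
```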

4) Start the clvmd service on each cluster node and enable it at boot:

# ansible rhcs -m service -a "name=clvmd state=started enabled=yes"

5) Create the clustered logical volume (on any one cluster node):

# pvcreate /dev/sdb2
# vgcreate clustervg /dev/sdb2
# lvcreate -L 5G -n clusterlv clustervg

6) Format the clustered logical volume with a gfs2 file system:

# mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:clvm /dev/clustervg/clusterlv

7) Test mounting:

Mount on node1:

[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /media/
[root@node1 ~]# cp /etc/issue /media/
[root@node1 ~]# ls /media/
issue

Mount on node2:

[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /media/
[root@node2 ~]# ls /media/
issue

Mount on node3 (the journal areas run out here, so add a new one and mount again):

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /media/
Too many nodes mounting filesystem, no free journals
[root@node2 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv
Filesystem:            /media
Old Journals           2
New Journals           3
[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /media/
[root@node3 ~]# ls /media/
issue

8) Extend the logical volume:

# lvextend -L +2G /dev/clustervg/clusterlv
# df -lh | grep media
/dev/mapper/clustervg-clusterlv  5.0G  388M  4.7G   8% /media
-- test run first (-T makes no changes), then grow for real
# gfs2_grow -T /dev/clustervg/clusterlv
# gfs2_grow /dev/clustervg/clusterlv
FS: Mount Point: /media
FS: Device:      /dev/dm-4
FS: Size:        1310718 (0x13fffe)
FS: RG size:     65533 (0xfffd)
DEV: Size:       1835008 (0x1c0000)
The file system grew by 2048MB.
gfs2_grow complete.
# df -lh | grep media
/dev/mapper/clustervg-clusterlv  7.0G  388M  6.7G   6% /media


Reposted from: https://blog.51cto.com/gzsamlee/1842337
