CloudStack Learning - 2


Environment preparation



This experiment pairs CloudStack with GlusterFS: two host machines form a Gluster replicated volume.

In VMware, add a machine configured identically to agent1.

OS: CentOS 6.6 x86_64; memory: 4 GB; network: NAT; disk: add an extra 50 GB disk after the OS install; also tick VT-x.

Set the hostname to agent2.

Getting started

Disable iptables and SELinux:

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
chkconfig iptables off
/etc/init.d/iptables stop

Configure a static IP address:

[root@agent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.153
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@agent2 ~]#

Set the hostname to agent2, then configure the hosts file:

cat >>/etc/hosts<<EOF
192.168.145.151 master1
192.168.145.152 agent1
192.168.145.153 agent2
EOF

Make sure the master and both agents end up with the following:

[root@agent2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.145.151 master1
192.168.145.152 agent1
192.168.145.153 agent2
[root@agent2 ~]#

Configure NTP:

yum install ntp -y
chkconfig ntpd on
/etc/init.d/ntpd start

Check hostname --fqdn:

[root@agent2 ~]# hostname --fqdn
agent2
[root@agent2 ~]#

Install the EPEL repository:

yum install epel-release -y

Perform the same steps on agent2; note that agent2 also needs the primary directory created:

[root@agent2 tools]# mkdir /export/primary -p
[root@agent2 tools]#

On agent2, format the disk:

[root@agent2 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@agent2 ~]#
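As the tail of the mkfs output itself suggests, the periodic forced checks can be switched off so a reboot months from now does not stall on fsck (an optional extra, not part of the original steps):

# disable mount-count- and interval-based checks, per the tune2fs hint above
tune2fs -c 0 -i 0 /dev/sdb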

Still on agent2:

[root@agent2 ~]# echo "/dev/sdb /export/primary ext4 defaults 0 0">>/etc/fstab
[root@agent2 ~]# mount -a
[root@agent2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.3G   31G   7% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
[root@agent2 ~]#

Clean up configuration left over from the previous experiment



On the master

Delete the old configuration. Start with the master and drop the old databases:

[root@master1 ~]# mysql -uroot -p123456
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 5.1.73-log Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cloud              |
| cloud_usage        |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.01 sec)

mysql> drop database cloud;
Query OK, 274 rows affected (1.34 sec)

mysql> drop database cloud_usage;
Query OK, 25 rows affected (0.09 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.00 sec)

mysql>

Stop the management service:

[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [FAILED]
[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management status
cloudstack-management is stopped
[root@master1 ~]#

Remove the Gluster packages on agent1. Starting with CentOS 6.6, removing glusterfs also removes the KVM-related dependency packages (before 6.5, KVM did not depend on glusterfs). If the replication package is not installed, run the removal below first; it automatically removes the KVM packages, and the libvirtd package is removed along with them:

[root@agent1 ~]# rpm -qa | grep gluster
[root@agent1 ~]# yum remove glusterfs*
[root@agent1 ~]# rpm -qa | grep kvm
[root@agent1 ~]# rpm -qa | grep libvirt
libvirt-python-0.10.2-60.el6.x86_64
libvirt-client-0.10.2-60.el6.x86_64
[root@agent1 ~]#
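To preview which installed packages a glusterfs removal would drag along, a dependency query helps (a side check, assuming the yum-utils package that provides repoquery is installed, which this walkthrough does not show):

# qemu-kvm on CentOS 6.6 links against glusterfs-api, so it shows up here
repoquery --installed --whatrequires glusterfs-api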

A side note

When agent1 was shut down after this experiment, the ifcfg-cloudbr0 file was not regenerated; the experiment had worked fine before, but after the reboot the cloudbr0 file was gone. Fix: copy ifcfg-eth0 to ifcfg-cloudbr0 and edit it using the working cloudbr0 file on agent2 as a reference.
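For reference, a sketch of what the recreated files might look like on agent1; the addresses are this lab's, but the authoritative reference is agent2's working copy as described above:

# /etc/sysconfig/network-scripts/ifcfg-cloudbr0 (sketch, modeled on agent2)
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11

# /etc/sysconfig/network-scripts/ifcfg-eth0 then enslaves eth0 to the bridge
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=cloudbr0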

Configure GlusterFS



Install the Gluster 3.7 repository (on both agent1 and agent2). The 3.6 or 3.8 packages would also work:

[root@agent1 ~]# yum install centos-release-gluster37 -y
Loaded plugins: fastestmirror, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * epel: mirrors.tuna.tsinghua.edu.cn
centos-gluster37                        | 2.9 kB     00:00
centos-gluster37/primary_db             |  99 kB     00:07
Package centos-release-gluster37-1.0-4.el6.centos.noarch already installed and latest version
Nothing to do
[root@agent1 ~]#

Inspect the yum repo file:

[root@agent1 ~]# cat /etc/yum.repos.d/CentOS-Gluster-3.7.repo
# CentOS-Gluster-3.7.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information

[centos-gluster37]
name=CentOS-$releasever - Gluster 3.7
baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/gluster-3.7/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[centos-gluster37-test]
name=CentOS-$releasever - Gluster 3.7 Testing
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-3.7/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage

[root@agent1 ~]#

Install the matching glusterfs packages from that repo (on both agent1 and agent2):

[root@agent1 ~]# yum --enablerepo=centos-gluster37-test install glusterfs-server glusterfs-cli glusterfs-geo-replication -y

Alternatively, the RPM packages can be downloaded from the path below and installed by hand:

https://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6.8/x86_64/


Delete leftover files on the agents. On agent1, delete the old files under the mount point (the old KVM images and so on):

[root@agent1 ~]# cd /export/primary/
[root@agent1 primary]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b  cf3dac7a-a071-4def-83aa-555b5611fb02
1685f81b-9ac9-4b21-981a-f1b01006c9ef  f3521c3d-fca3-4527-984d-5ff208e05b5c
99643b7d-aaf4-4c75-b7d6-832c060e9b77  lost+found
[root@agent1 primary]# rm -rf *
[root@agent1 primary]# ls
[root@agent1 primary]#

Do the same on agent2 and remove the extraneous content:

[root@agent2 ~]# cd /export/primary/
[root@agent2 primary]# ls
lost+found
[root@agent2 primary]# rm -rf *
[root@agent2 primary]# ls
[root@agent2 primary]#

Install the CloudStack packages on agent2 (the GlusterFS 3.7 packages were already installed beforehand). The agents do not need to be started by hand; they are handed over to the master, which actually manages them by connecting in over port 22.

[root@agent2 tools]# yum install cloudstack-agent-4.8.0-1.el6.x86_64.rpm cloudstack-common-4.8.0-1.el6.x86_64.rpm -y

Check the glusterfs version on the agents:

[root@agent1 ~]# glusterfs -V
glusterfs 3.7.20 built on Jan 30 2017 15:39:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@agent1 ~]#

Start glusterd and enable it at boot (on both agents):

[root@agent1 ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
[root@agent1 ~]# chkconfig glusterd on
[root@agent1 ~]#

Stop iptables on both agents:

[root@agent1 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: nat mangle filte[  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@agent1 ~]# chkconfig iptables off
[root@agent1 ~]#

Probe the other node into the Gluster pool and check its status; this only needs to be done on one agent:

[root@agent1 ~]# gluster peer probe agent2
peer probe: failed: Probe returned with Transport endpoint is not connected
[root@agent1 ~]# gluster peer probe agent2
peer probe: success.
[root@agent1 ~]# gluster peer status
Number of Peers: 1

Hostname: agent2
Uuid: 2778cb7a-32ef-4a3f-a34c-b97f5937bb49
State: Peer in Cluster (Connected)
[root@agent1 ~]#

Create the replicated volume. The name gv2 is arbitrary:

[root@agent1 ~]# gluster volume create gv2 replica 2 agent1:/export/primary agent2:/export/primary force
volume create: gv2: success: please start the volume to access data
[root@agent1 ~]#

Start the volume and check its status; a Type of Replicate confirms it is a replicated volume. Why use Gluster at all? Previously the primary storage was a local mount, so if a host died, every KVM guest on it died with it; with a replicated volume the data lives on both hosts.

[root@agent1 ~]# gluster volume start gv2
volume start: gv2: success
[root@agent1 ~]# gluster volume info

Volume Name: gv2
Type: Replicate
Volume ID: 3a23ab68-73da-4f1b-bc5c-3310ffa9e8b7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: agent1:/export/primary
Brick2: agent2:/export/primary
Options Reconfigured:
performance.readdir-ahead: on
[root@agent1 ~]#

That completes the Gluster setup; back to where we left off.


CloudStack configuration and UI operations



Continue on the master with the database initialization below, which imports the data:

[root@master1 ~]# cloudstack-setup-databases cloud:123456@localhost --deploy-as=root:123456
Mysql user name:cloud                                                   [ OK ]
Mysql user password:******                                              [ OK ]
Mysql server ip:localhost                                               [ OK ]
Mysql server port:3306                                                  [ OK ]
Mysql root user name:root                                               [ OK ]
Mysql root user password:******                                         [ OK ]
Checking Cloud database files ...                                       [ OK ]
Checking local machine hostname ...                                     [ OK ]
Checking SELinux setup ...                                              [ OK ]
Detected local IP address as 192.168.145.151, will use as cluster management server node IP [ OK ]
Preparing /etc/cloudstack/management/db.properties                      [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database.sql     [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema.sql       [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database-premium.sql [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema-premium.sql   [ OK ]
Applying /usr/share/cloudstack-management/setup/server-setup.sql        [ OK ]
Applying /usr/share/cloudstack-management/setup/templates.sql           [ OK ]
Processing encryption ...                                               [ OK ]
Finalizing setup ...                                                    [ OK ]

CloudStack has successfully initialized database, you can check your database configuration in /etc/cloudstack/management/db.properties

[root@master1 ~]#

Once the database is configured, start the master; it performs some one-time initialization. Do not start it this way afterwards; the initialization only needs to run once.

[root@master1 ~]# cloudstack-setup-management

Check the log: startup has completed, and port 8080 is listening:

[root@master1 ~]# tail -f /var/log/cloudstack/management/catalina.out
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:ctx-d2bdddaf) (logid:) Done Configuring CloudStack Components
INFO  [c.c.u.LogUtils] (main:ctx-d2bdddaf) (logid:) log4j configuration found at /etc/cloudstack/management/log4j-cloud.xml
Feb 12, 2017 7:59:25 PM org.apache.coyote.http11.Http11NioProtocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Feb 12, 2017 7:59:25 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:20400
Feb 12, 2017 7:59:25 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/18  config=null
Feb 12, 2017 7:59:25 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 63790 ms
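A quick reachability check from the shell (optional; assumes curl is available on the master):

# expect an HTTP response line once the UI is up
curl -sI http://192.168.145.151:8080/client | head -1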

Log in to the web UI at http://192.168.145.151:8080/client with the default credentials admin/password.


Next, create the system VMs (on the master). The system virtual router and the VNC console windows are provided by these VMs. Run the following on the master:

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /export/secondary \
-f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
-h kvm -F

This step imports the system VM template into secondary storage; the run looks like this:

[root@master1 tools]# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
> -m /export/secondary \
> -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
> -h kvm -F
Uncompressing to /usr/share/cloudstack-common/scripts/storage/secondary/9824edc4-61db-4ad8-a08a-61f051b9ebfe.qcow2.tmp (type bz2)...could take a long time
Moving to /export/secondary/template/tmpl/1/3///9824edc4-61db-4ad8-a08a-61f051b9ebfe.qcow2...could take a while
Successfully installed system VM template /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 to /export/secondary/template/tmpl/1/3/
[root@master1 ~]#

Log in to the CloudStack management UI and adjust the memory overcommit ratio; the service must be restarted after the change.


Infrastructure: adding a zone


Leave the rest at the defaults, as below: the management and guest network labels were not modified. Previously this was changed to eth0 and the two fields below to cloudbr0, but there is actually no need, since it is set up automatically, so we leave it alone here.

If you run many VMs, the value can be raised to 250.


Again, the agents do not need to be started by hand; they are handed over to the master, which in fact manages them by connecting in over port 22.


Configure this step as follows: because the agent nodes form a GlusterFS replicated volume, the protocol can be set to gluster and the server can simply be 127.0.0.1.


The resulting configuration is shown here; "sec" is an arbitrary name.


Click to launch the zone.


This is probably a software bug; just click the Cancel button above. The zone, pod, and cluster were all added successfully, and the hosts and storage can be added manually from here.


Hosts can be added from the host view.


agent2 was added successfully; agent1 failed.


Added successfully.

agent1's failure was caused by the cloudbr0 left over from the previous lesson: delete cloudbr0, restart agent1's network, and add the host again from the master.

[root@agent1 ~]# cd /etc/sysconfig/network-scripts/
[root@agent1 network-scripts]# ls ifcfg-cloudbr0
ifcfg-cloudbr0
[root@agent1 network-scripts]# rm -f ifcfg-cloudbr0
[root@agent1 network-scripts]# /etc/init.d/network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:                                [  OK  ]
[root@agent1 network-scripts]#

The full resolution went as follows. First delete the cloudbr0 file generated by the earlier experiment and restart the network:

rm -f ifcfg-cloudbr0
/etc/init.d/network restart

Adding the host from the web UI failed again. Checking KVM showed that the KVM guests from the previous experiment were still running, yet the kvm RPM packages could not be found, and the libvirtd package was gone too; this was the result of the earlier yum remove of gluster:

ps -ef | grep kvm
lsmod | grep kvm
rpm -qa | grep kvm
virsh list --all

Because removing glusterfs had dragged the KVM packages out with it, reinstalling the CloudStack packages pulled the kvm and libvirt dependencies back in:

cd /tools/
ls
yum install cloudstack-agent-4.8.0-1.el6.x86_64.rpm cloudstack-common-4.8.0-1.el6.x86_64.rpm -y
ps -ef | grep kvm
lsmod | grep kvm
rpm -qa | grep kvm
/etc/init.d/libvirtd status
/etc/init.d/libvirtd restart

The old experiment's KVM processes were still resident in memory, so they were killed directly (they disappear once killed):

virsh list
ps -ef | grep kvm
kill -9 2776
ps -ef | grep kvm
kill -9 2532
ps -ef | grep kvm
kill -9 2304
ps -ef | grep kvm
virsh list --all

agent1 was then added successfully as well.


The log keeps scrolling and the process gets stuck here (this is from someone else's run of the experiment); it is caused by the cloudbr0 file not having been deleted, and the error says File exists.


Adding storage

Add the primary storage first.


Added successfully.


Check: the addition succeeded. In older versions only a database record was added and nothing was mounted here; in the new version, once the database record is added, the volume is mounted as well:

[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.7G   31G   8% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
127.0.0.1:/gv2         50G   52M   47G   1% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent1 ~]#

[root@agent2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.7G   31G   9% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
127.0.0.1:/gv2         50G   52M   47G   1% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent2 ~]#
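If the mount ever fails to appear, the volume can be mounted by hand to rule out Gluster problems (a manual test, not part of the original steps; /mnt/test is an arbitrary directory, and the glusterfs FUSE client installed earlier is assumed):

mkdir -p /mnt/test
mount -t glusterfs 127.0.0.1:/gv2 /mnt/test
df -h /mnt/test
umount /mnt/test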

Add the secondary storage.

The infrastructure setup is now complete.


Before starting the zone, do a little tuning: adjust the overcommit ratios (see the previous lesson for details on overcommitting). After the change, the management service on the master must be restarted; the first restart may take a while, so to be sure it succeeds, run the restart a second time.

[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [FAILED]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [  OK  ]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]#

Some errors appear in the log during the restart; they are shown below for reference and can be ignored:

[root@master1 ~]# tail -f /var/log/cloudstack/management/catalina.out
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean VolumeDataStoreDaoImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean UsageDaoImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ManagementServerNode
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ConfigurationServerImpl
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean DatabaseIntegrityChecker
INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle] (Thread-85:null) (logid:) stopping bean ClusterManagerImpl
INFO  [c.c.c.ClusterManagerImpl] (Thread-85:null) (logid:) Stopping Cluster manager, msid : 52236852888
log4j:WARN No appenders could be found for logger (com.cloud.cluster.ClusterManagerImpl).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "SnapshotPollTask" java.lang.NullPointerException
        at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.getContext(ManagedContextRunnable.java:66)
        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
        at org.apache.cloudstack.managed.context.ManagedContextTimerTask.run(ManagedContextTimerTask.java:27)
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)
Feb 12, 2017 8:50:21 PM org.apache.catalina.core.AprLifecycleListener init
Feb 12, 2017 8:50:22 PM org.apache.catalina.session.StandardManager doLoad
SEVERE: IOException while loading persisted sessions: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: net.sf.cglib.proxy.NoOp$1
java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: net.sf.cglib.proxy.NoOp$1
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1354)
[root@master1 ~]#

Start the zone.


The point of starting the zone is to create two KVM system VMs: one serves secondary storage, the other is the VNC console proxy.


There are two ways to tell whether the system VMs started successfully: check the web UI, or log in over VNC. A quick CLI spot-check is sketched below as well.


Checking from the web UI: the VMs first show as starting, then as started.

To check via VNC, paste the address into vncviewer; display 0 corresponds to port 5900, counting up from there. The password is not "password": it is the VNC password, the last string in the line below, and the 1 is the VM's ID:

[root@agent1 ~]# virsh edit 1

Find the following content; the passwd value at the end is the VNC password:
<graphics type='vnc' port='-1' autoport='yes' listen='192.168.145.152' passwd='Pdf1sAQ2bIl0oVpKSRfxaA'>
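The same line can also be pulled out non-interactively; note that virsh dumpxml hides the VNC password unless --security-info is given (domain ID 1 as above):

virsh dumpxml --security-info 1 | grep passwd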


Copy that password string and use it to connect. Getting a console means the VM started successfully; log in with root/password, then refresh the web page.


After the VMs start, look at where they live. The directory holds the template plus those two system VMs, and because the two agents form a replicated volume, their contents are identical:

[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.7G   31G   8% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G  682M   46G   2% /export/primary
127.0.0.1:/gv2         50G  682M   46G   2% /mnt/6d915c5a-6640-354e-9209-d2c8479ca105
[root@agent1 ~]# cd /mnt/6d915c5a-6640-354e-9209-d2c8479ca105/
[root@agent1 6d915c5a-6640-354e-9209-d2c8479ca105]# ls
745865fe-545e-4430-98ac-0ffd5186a9b6  bc5bc6eb-4900-4076-9d5d-36fd0480b5e2
9824edc4-61db-4ad8-a08a-61f051b9ebfe
[root@agent1 6d915c5a-6640-354e-9209-d2c8479ca105]#

The contents match on agent2:

[root@agent2 ~]# cd /mnt/6d915c5a-6640-354e-9209-d2c8479ca105/
[root@agent2 6d915c5a-6640-354e-9209-d2c8479ca105]# ls
745865fe-545e-4430-98ac-0ffd5186a9b6  bc5bc6eb-4900-4076-9d5d-36fd0480b5e2
9824edc4-61db-4ad8-a08a-61f051b9ebfe
[root@agent2 6d915c5a-6640-354e-9209-d2c8479ca105]#


Live migration of virtual machines



Currently each host has been given one system VM:

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running

[root@agent1 ~]#
[root@agent2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running

[root@agent2 ~]#

Its IP is 192.168.145.180.

agent1 does not currently host this VM; we will migrate it from agent2 to agent1.

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running

[root@agent1 ~]#

Keep a ping running while the migration happens on the other side, to see whether any packets are dropped.

The UI reports that the migration has finished.

There is a small amount of packet loss:

64 bytes from 192.168.145.180: icmp_seq=132 ttl=64 time=2.97 ms
64 bytes from 192.168.145.180: icmp_seq=133 ttl=64 time=0.830 ms
64 bytes from 192.168.145.180: icmp_seq=134 ttl=64 time=0.640 ms
64 bytes from 192.168.145.180: icmp_seq=135 ttl=64 time=0.850 ms
64 bytes from 192.168.145.180: icmp_seq=136 ttl=64 time=1.43 ms
^C
--- 192.168.145.180 ping statistics ---
136 packets transmitted, 132 received, 2% packet loss, time 135432ms
rtt min/avg/max/mdev = 0.447/1.331/8.792/1.268 ms
[root@master1 ~]#

The migration is complete:

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 1     v-2-VM                         running
 3     s-1-VM                         running

[root@agent1 ~]#

How the migration works: a suspended copy of the VM is created on the new host, the disk and memory data are copied over, and when the copy finishes the VM resumes there. The network must be fast enough that no packets are lost. This is KVM's own capability; the UI merely wraps the underlying commands.
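Roughly the kind of underlying command the UI is wrapping (a sketch using this lab's host and VM names, for illustration only; CloudStack drives libvirt itself, and the libvirtd TCP transport it configures on the agents is assumed here):

# live-migrate s-1-VM from agent2 to agent1; the disk already lives on the shared Gluster volume
virsh migrate --live s-1-VM qemu+tcp://agent1/system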

Custom compute offerings



You can add custom KVM compute offerings.

First look at the host's cpuinfo; it shows 2200 MHz:

[root@agent1 ~]# cat /proc/cpuinfo | head -10
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 42
model name      : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
stepping        : 7
microcode       : 26
cpu MHz         : 2192.909
cache size      : 6144 KB
physical id     : 0
[root@agent1 ~]#

If your physical machine runs at 2.2 GHz, like the one above, you cannot create an offering with a single 3 GHz core, no matter how many cores the machine has. It is best to enter 2000 here; the value must not exceed the physical machine's 2200 MHz.
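The same offering could also be created through the API instead of the UI; a rough sketch with the CloudMonkey CLI (assumed to be installed and pointed at this management server, which this walkthrough does not cover; the offering name is hypothetical):

# hypothetical offering: 1 core capped at 2000 MHz, 1 GB RAM
cloudmonkey create serviceoffering name=custom-2000 \
  displaytext=1core-2000MHz-1GB cpunumber=1 cpuspeed=2000 memory=1024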


Added successfully. Now look at the disk offerings.

As for the write-cache type: cache=none is the default, meaning no caching.
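On the KVM side this setting surfaces as the cache attribute on the disk driver element in the guest's libvirt XML; roughly like this (an illustrative fragment, not dumped from this lab's VMs):

<!-- disk section of the domain XML; cache='none' bypasses the host page cache -->
<driver name='qemu' type='qcow2' cache='none'/>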

Accounts and projects in the UI



Account setup: user1/123456


Ordinary users cannot see the system VMs.


Ordinary users can also define their own security groups.


Ordinary users are subject to resource limits (for example, only 20 VMs); the admin user can change those limits. Projects are likewise tied to accounts and resources.

The project has been created.


The Events page records the operation log.


Important alerts are shown on the dashboard.


Reposted from: https://www.cnblogs.com/nmap/p/6392782.html
