CloudStack Learning - 3


This lab is mainly about combining CloudStack with Open vSwitch.

Background

The reason for introducing Open vSwitch: with the previous approach one Linux bridge is tied to one VLAN, so if a zone has 20 VLANs you would have to create a bridge for every single VLAN, which is painful to maintain.

Open vSwitch, originally driven by Nicira Networks, is a virtual switch that runs on virtualization platforms such as KVM and Xen. On those platforms OVS provides Layer 2 switching for dynamically changing endpoints and gives fine control over access policies, network isolation, traffic monitoring and so on inside the virtual network.

It is a switch implemented in software, and can be described as software-defined networking.

Where OVS applies: advanced networks, basic networks, and even hosts with only a single NIC.

The Open vSwitch official website is shown below.

Platforms supported by Open vSwitch:

KVM, Xen, OpenStack, VirtualBox and others. CloudStack is not on the list, but it is supported as well.

Click Download to fetch the source tarball; at the moment Open vSwitch is distributed only as source packages.

From here on Open vSwitch is referred to simply as OVS. OVS is a replacement for the Linux bridge; since no bridge was ever set up on the master, OVS is not installed on the master either. It is installed only on the hypervisor hosts.

IP Address Plan

Shut down the 3 hypervisor hosts and add 2 more NICs to each machine, so that every machine ends up with 3 NICs.

Each host has 3 NICs, simulating the management, storage and guest networks:
eth0: management network; carries the least traffic and is used to manage the hypervisor host
eth1: storage network; does not cross VLANs and connects the host to its storage
eth2: guest network; carries end-user requests and traffic coming from the public side

The master also gets a guest-network NIC so that, when the zone is added later, its address can serve as the gateway for the agents' guest network.

Shut down all three VMs, add vnet5 and vnet6, and set them to the subnets shown below.

Add the NICs on all 3 machines, the master included.

After the hardware is added, configure IP addresses on the new NICs.

It is easy to mix up eth1 and eth2 here. My approach is to add one NIC, reboot, then add the third and reboot again. Do not set a gateway on these extra NICs, because a machine can only have one default gateway.
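As a quick sanity check (not part of the original steps), you can confirm that only eth0 still carries the default route after the new NICs come up:

ip route | grep default
# expected: a single line such as "default via 192.168.145.2 dev eth0",
# i.e. the default gateway stays on the management NIC only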


The final eth1 and eth2 addresses on the 3 machines are listed below.
eth0 stays unchanged; make sure the 3 machines can ping each other over the newly added NICs.

master
[root@master1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
DEVICE=eth1
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
HWADDR=00:0c:29:8f:1e:a2
IPADDR=192.168.5.151
[root@master1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
DEVICE=eth2
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
IPADDR=192.168.6.151
[root@master1 ~]#

agent1
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
DEVICE=eth1
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
HWADDR=00:0c:29:ab:d5:b3
IPADDR=192.168.5.152
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
DEVICE=eth2
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
IPADDR=192.168.6.152
[root@agent1 ~]#

agent2
[root@agent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
IPADDR=192.168.5.153
[root@agent2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=none
NETMASK=255.255.255.0
TYPE=Ethernet
IPADDR=192.168.6.153
[root@agent2 ~]#
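Once the addresses are in place, a small loop like the following can verify cross-host reachability over the new NICs (a sketch using the IPs configured above; run it from any of the three machines):

for ip in 192.168.5.151 192.168.5.152 192.168.5.153 \
          192.168.6.151 192.168.6.152 192.168.6.153; do
    # one probe per address on the storage (192.168.5.x) and guest (192.168.6.x) networks
    ping -c 1 -W 1 "$ip" >/dev/null && echo "$ip OK" || echo "$ip FAILED"
done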

  


Clean Up Leftovers from the Previous Lab

Stop the system VMs

Log in to the web UI and click stop. The system VMs will keep coming back up on their own; ignore that for now.

Disable the zone

We do this because the data from the previous lab is about to be wiped.

Remove the eth0 bridging on the agents; do this on both agents.

Delete this line: BRIDGE=cloudbr0

[root@agent1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
cloud0          8000.fe00a9fe00e0       no              vnet0
cloudbr0        8000.000c29abd5a9       no              eth0
                                                        vnet1
                                                        vnet2
virbr0          8000.525400ea877d       yes             virbr0-nic
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
BRIDGE=cloudbr0
IPV6INIT=no
USERCTL=no
[root@agent1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
IPV6INIT=no
USERCTL=no
[root@agent1 ~]#
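If you prefer to make the change non-interactively instead of editing with vim, a one-liner along these lines does the same thing (a sketch; back the file up first if unsure):

# drop the BRIDGE=cloudbr0 line from the eth0 config
sed -i '/^BRIDGE=cloudbr0/d' /etc/sysconfig/network-scripts/ifcfg-eth0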


Delete the cloudbr0 config file

OVS and the Linux bridge are different things and will conflict with each other; do the same on agent2.

[root@agent1 ~]# cd /etc/sysconfig/network-scripts
[root@agent1 network-scripts]# ls
ifcfg-cloudbr0  ifdown-ipv6    ifup-bnep   ifup-routes
ifcfg-eth0      ifdown-isdn    ifup-eth    ifup-sit
ifcfg-eth1      ifdown-post    ifup-ippp   ifup-tunnel
ifcfg-eth2      ifdown-ppp     ifup-ipv6   ifup-wireless
ifcfg-lo        ifdown-routes  ifup-isdn   init.ipv6-global
ifdown          ifdown-sit     ifup-plip   net.hotplug
ifdown-bnep     ifdown-tunnel  ifup-plusb  network-functions
ifdown-eth      ifup           ifup-post   network-functions-ipv6
ifdown-ippp     ifup-aliases   ifup-ppp
[root@agent1 network-scripts]# rm -f ifcfg-cloudbr0
[root@agent1 network-scripts]#


Reboot the hypervisor hosts

The system VMs are still defined; use a reboot to let the hosts clean everything up.

[root@agent1 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     v-2-VM                         running

[root@agent1 ~]#
[root@agent2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     s-1-VM                         running

[root@agent2 ~]#

After the reboot, the vnet devices and cloudbr0 are gone (the same on both agents):

[root@agent1 ~]# ip ad 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet 192.168.145.152/24 brd 192.168.145.255 scope global eth0inet6 fe80::20c:29ff:feab:d5a9/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:b3 brd ff:ff:ff:ff:ff:ffinet 192.168.5.152/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:d5b3/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet 192.168.6.152/24 brd 192.168.6.255 scope global eth2inet6 fe80::20c:29ff:feab:d5bd/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff [root@agent1 ~]#

After the reboot, brctl likewise shows only the built-in bridge (the same on both agents):

[root@agent1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400ea877d       yes             virbr0-nic
[root@agent1 ~]#

The system VMs have disappeared as well (on both agents):

[root@agent1 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------

[root@agent1 ~]#

  


Stop the cloudstack-management service on the master

[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [FAILED]
[root@master1 ~]# /etc/init.d/cloudstack-management stop
Stopping cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management status
cloudstack-management is stopped
[root@master1 ~]#


Drop the CloudStack databases on the master

[root@master1 ~]# mysql -uroot -p123456 Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 13 Server version: 5.1.73-log Source distributionCopyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | cloud | | cloud_usage | | mysql | | test | +--------------------+ 5 rows in set (0.00 sec)mysql> drop database cloud; Query OK, 274 rows affected (1.17 sec)mysql> drop database cloud_usage; Query OK, 25 rows affected (0.13 sec)mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | test | +--------------------+ 3 rows in set (0.00 sec)mysql> exit Bye [root@master1 ~]#

  


Install the OVS packages on the agents


Install OVS on both agents.
Open vSwitch replaces the Linux bridge and is installed only on the hypervisor hosts; the last 2 RPMs in the listing below are the OVS packages.

[root@agent1 tools]# ls
cloudstack-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-cli-4.8.0-1.el6.x86_64.rpm
cloudstack-common-4.8.0-1.el6.x86_64.rpm
cloudstack-management-4.8.0-1.el6.x86_64.rpm
cloudstack-usage-4.8.0-1.el6.x86_64.rpm
kmod-openvswitch-2.3.1-1.el6.x86_64.rpm
openvswitch-2.3.1-1.x86_64.rpm

The installation goes like this (run on both agents; not needed on the master):

[root@agent1 tools]# yum install kmod-openvswitch-2.3.1-1.el6.x86_64.rpm openvswitch-2.3.1-1.x86_64.rpm -y Loaded plugins: fastestmirror, security Setting up Install Process Examining kmod-openvswitch-2.3.1-1.el6.x86_64.rpm: kmod-openvswitch-2.3.1-1.el6.x86_64 Marking kmod-openvswitch-2.3.1-1.el6.x86_64.rpm to be installed Loading mirror speeds from cached hostfile epel/metalink | 6.2 kB 00:00 * epel: mirror.premi.st base | 3.7 kB 00:00 centos-gluster37 | 2.9 kB 00:00 epel | 4.3 kB 00:00 epel/primary_db | 5.9 MB 00:24 extras | 3.4 kB 00:00 updates | 3.4 kB 00:00 Examining openvswitch-2.3.1-1.x86_64.rpm: openvswitch-2.3.1-1.x86_64 Marking openvswitch-2.3.1-1.x86_64.rpm to be installed Resolving Dependencies --> Running transaction check ---> Package kmod-openvswitch.x86_64 0:2.3.1-1.el6 will be installed ---> Package openvswitch.x86_64 0:2.3.1-1 will be installed --> Finished Dependency ResolutionDependencies Resolved=================================================================================Package Arch Version Repository Size ================================================================================= Installing:kmod-openvswitch x86_64 2.3.1-1.el6 /kmod-openvswitch-2.3.1-1.el6.x86_64 6.5 Mopenvswitch x86_64 2.3.1-1 /openvswitch-2.3.1-1.x86_64 8.0 MTransaction Summary ================================================================================= Install 2 Package(s)Total size: 14 M Installed size: 14 M Downloading Packages: Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running TransactionInstalling : kmod-openvswitch-2.3.1-1.el6.x86_64 1/2 Installing : openvswitch-2.3.1-1.x86_64 2/2 Verifying : openvswitch-2.3.1-1.x86_64 1/2 Verifying : kmod-openvswitch-2.3.1-1.el6.x86_64 2/2 Installed:kmod-openvswitch.x86_64 0:2.3.1-1.el6 openvswitch.x86_64 0:2.3.1-1 Complete! [root@agent1 tools]#

  

Configure cloudstack-agent to use OVS

Edit /etc/cloudstack/agent/agent.properties and add the following lines:

network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver

The first line sets the bridge type to OVS; the second points libvirt's VIF driver at the OVS implementation, so the VM network plumbing is done by OVS instead of the Linux bridge.

The original configuration file looks like this:

[root@agent1 tools]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Mon Feb 13 22:57:04 CST 2017
guest.network.device=cloudbr0
workers=5
private.network.device=cloudbr0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=a8be994b-26bd-39d6-a72f-693f06476873
public.network.device=cloudbr0
cluster=1
local.storage.uuid=cd049ede-7106-45ba-acd4-1f229405f272
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=4
host=192.168.145.151
[root@agent1 tools]#

Append the 2 lines above at the end of the file (do the same on agent2):

[root@agent1 tools]# cat /etc/cloudstack/agent/agent.properties
#Storage
#Mon Feb 13 22:57:04 CST 2017
guest.network.device=cloudbr0
workers=5
private.network.device=cloudbr0
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
hypervisor.type=kvm
guid=a8be994b-26bd-39d6-a72f-693f06476873
public.network.device=cloudbr0
cluster=1
local.storage.uuid=cd049ede-7106-45ba-acd4-1f229405f272
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=4
host=192.168.145.151
network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
[root@agent1 tools]#
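The two lines can also be appended non-interactively, for example (a sketch; run once on each agent):

cat >> /etc/cloudstack/agent/agent.properties <<'EOF'
network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
EOF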


Start the service so that both agents load the openvswitch kernel module:

[root@agent1 tools]# lsmod | grep openvswitch
[root@agent1 tools]# /etc/init.d/openvswitch start
/etc/openvswitch/conf.db does not exist ... (warning).
Creating empty database /etc/openvswitch/conf.db           [  OK  ]
Starting ovsdb-server                                      [  OK  ]
Configuring Open vSwitch system IDs                        [  OK  ]
Inserting openvswitch module                               [  OK  ]
Starting ovs-vswitchd                                      [  OK  ]
Enabling remote OVSDB managers                             [  OK  ]
[root@agent1 tools]# chkconfig openvswitch on
[root@agent1 tools]# lsmod | grep openvswitch
openvswitch            88783  0
libcrc32c               1246  1 openvswitch
[root@agent1 tools]#


Tune the GlusterFS configuration on the agents

Edit /etc/glusterfs/glusterd.vol.

Add: option rpc-auth-allow-insecure on
This is needed because glusterd by default only accepts requests coming from ports below 1024, while QEMU sends its requests from ports above 1024, so glusterd's security mechanism would reject QEMU's requests.

The volume that will be used (testvol in this example) needs the same treatment, followed by a restart of that volume:
gluster volume set testvol server.allow-insecure on
PS: if starting a virtual machine fails with an error like "SETVOLUME on remote-host failed: Authentication failed",
you can stop the bricks in the cluster, unmount all mount points, and then run:
gluster volume set <volname> auth.allow 'serverIp like 192.*'
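To confirm the volume option actually took effect, the reconfigured options can be listed; a rough check looks like this (testvol is the example volume name used above):

gluster volume info testvol
# "server.allow-insecure: on" should appear under "Options Reconfigured"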


After the change it looks like this:

[root@agent1 tools]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
#   option base-port 49152
end-volume
[root@agent1 tools]# vim /etc/glusterfs/glusterd.vol
[root@agent1 tools]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
#   option base-port 49152
end-volume
[root@agent1 tools]#

Add the same line on agent2 as well:

[root@agent2 tools]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
    option rpc-auth-allow-insecure on
#   option base-port 49152
end-volume
[root@agent2 tools]#


Apply the security setting from the command line

This command only needs to be run on one gluster node; it takes effect for the whole cluster.

[root@agent1 tools]# gluster volume set gv2 server.allow-insecure on
volume set: success
[root@agent1 tools]#


Restart glusterd on the gluster nodes

After the change, restart glusterd on both agents. Mind the service name: it is glusterd, not glusterfsd (the first, failed command below).
If glusterd had never been started before this point, the restart could be skipped; to be safe, just restart it.

Does restarting glusterfs affect KVM? Almost not at all. Doing a separate stop and then start, however, does have an impact.

[root@agent2 tools]# /etc/init.d/glusterfsd restart
Stopping glusterfsd:                                       [FAILED]
[root@agent2 tools]# /etc/init.d/glusterd restart
Stopping glusterd:                                         [  OK  ]
Starting glusterd:                                         [  OK  ]
[root@agent2 tools]#

That completes the GlusterFS part.

Configure the agent NIC files to work with OVS

Modify the agent NIC configuration and create the OVS bridge cloudbr2. The bridge name is arbitrary; it is called cloudbr2 here simply to match eth2.
Do this on both agents.

[root@agent2 tools]# cd /etc/sysconfig/network-scripts/
[root@agent2 network-scripts]# vim ifcfg-cloudbr2
[root@agent2 network-scripts]# vim ifcfg-eth2
[root@agent2 network-scripts]# cat ifcfg-cloudbr2
DEVICE=cloudbr2
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
STP=no
NM_CONTROLLED=no
USERCTL=no

In ifcfg-eth2, remove the IP address and related settings:

[root@agent2 network-scripts]# cat ifcfg-eth2
# Please read /usr/share/doc/initscripts-*/sysconfig.txt
# for the documentation of these parameters.
DEVICE=eth2
BOOTPROTO=none
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=cloudbr2
ONBOOT=yes
USERCTL=no
NM_CONTROLLED=no
[root@agent2 network-scripts]#
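For reference, the same bridge and port could also be created by hand with ovs-vsctl; the ifcfg files above just make the layout persistent across network restarts. A rough equivalent:

ovs-vsctl add-br cloudbr2         # create the OVS bridge
ovs-vsctl add-port cloudbr2 eth2  # attach the physical NIC as an uplink
ovs-vsctl show                    # verify the bridge/port layout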

After both files are modified, restart the network service on both agents:

[root@agent2 network-scripts]# /etc/init.d/network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down interface eth1:                              [  OK  ]
Shutting down interface eth2:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface cloudbr2:                            [  OK  ]
Bringing up interface eth0:  Determining if ip address 192.168.145.153 is already in use for device eth0...   [  OK  ]
Bringing up interface eth1:  Determining if ip address 192.168.5.153 is already in use for device eth1...   [  OK  ]
Bringing up interface eth2:                                [  OK  ]
[root@agent2 network-scripts]#

The interface list after the restart now includes cloudbr2:

[root@agent2 network-scripts]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:fa brd ff:ff:ff:ff:ff:ffinet 192.168.145.153/24 brd 192.168.145.255 scope global eth0inet6 fe80::20c:29ff:feab:95fa/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:04 brd ff:ff:ff:ff:ff:ffinet 192.168.5.153/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:9504/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:0e brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:950e/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:a1:8e:64 brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:a1:8e:64 brd ff:ff:ff:ff:ff:ff 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether b6:34:b6:35:e6:ca brd ff:ff:ff:ff:ff:ff 9: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:95:0e brd ff:ff:ff:ff:ff:ffinet6 fe80::4893:c0ff:fe7a:b14e/64 scope link valid_lft forever preferred_lft forever [root@agent2 network-scripts]#

Check the OVS virtual switch on both agents after the restart:

[root@agent2 network-scripts]# ovs-vsctl show
c17bcc03-f6d0-4368-9f41-004598ec7336
    Bridge "cloudbr2"
        Port "eth2"
            Interface "eth2"
        Port "cloudbr2"
            Interface "cloudbr2"
                type: internal
    ovs_version: "2.3.1"
[root@agent2 network-scripts]#
[root@agent1 network-scripts]# ovs-vsctl show
b8d2eae6-27c2-4f94-bb28-81635229141d
    Bridge "cloudbr2"
        Port "eth2"
            Interface "eth2"
        Port "cloudbr2"
            Interface "cloudbr2"
                type: internal
    ovs_version: "2.3.1"
[root@agent1 network-scripts]#


Configuration on the master

Create the databases and tables.
Run the database initialization and import the data:

[root@master1 ~]# cloudstack-setup-databases cloud:123456@localhost --deploy-as=root:123456 Mysql user name:cloud [ OK ] Mysql user password:****** [ OK ] Mysql server ip:localhost [ OK ] Mysql server port:3306 [ OK ] Mysql root user name:root [ OK ] Mysql root user password:****** [ OK ] Checking Cloud database files ... [ OK ] Checking local machine hostname ... [ OK ] Checking SELinux setup ... [ OK ] Detected local IP address as 192.168.145.151, will use as cluster management server node IP[ OK ] Preparing /etc/cloudstack/management/db.properties [ OK ] Applying /usr/share/cloudstack-management/setup/create-database.sql [ OK ] Applying /usr/share/cloudstack-management/setup/create-schema.sql [ OK ] Applying /usr/share/cloudstack-management/setup/create-database-premium.sql [ OK ] Applying /usr/share/cloudstack-management/setup/create-schema-premium.sql [ OK ] Applying /usr/share/cloudstack-management/setup/server-setup.sql [ OK ] Applying /usr/share/cloudstack-management/setup/templates.sql [ OK ] Processing encryption ... [ OK ] Finalizing setup ... [ OK ]CloudStack has successfully initialized database, you can check your database configuration in /etc/cloudstack/management/db.properties[root@master1 ~]#


Initialize the master configuration

After the database is set up, run the setup command; it performs some one-time initialization.
Do not start the service this way afterwards; the initialization only needs to run once.

[root@master1 ~]# /etc/init.d/cloudstack-management status
cloudstack-management is stopped
[root@master1 ~]# cloudstack-setup-management
Starting to configure CloudStack Management Server:
Configure Firewall ...        [OK]
Configure CloudStack Management Server ...[OK]
CloudStack Management Server setup is Done!
[root@master1 ~]#


Import the system VM template on the master

Run the command below on the master.
It copies the system VM template to the right path on secondary storage and writes a record into the database.

The previous lab already put the template in that path, but since the database was dropped the record is gone, so it has to be imported again.

/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /export/secondary \
    -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
    -h kvm -F

This step installs the system VM template into secondary storage; the run looks like this:

[root@master1 ~]# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
> -m /export/secondary \
> -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
> -h kvm -F
Uncompressing to /usr/share/cloudstack-common/scripts/storage/secondary/fa050b43-3f2e-4dd7-aecc-119ef1851039.qcow2.tmp (type bz2)...could take a long time
Moving to /export/secondary/template/tmpl/1/3///fa050b43-3f2e-4dd7-aecc-119ef1851039.qcow2...could take a while
Successfully installed system VM template /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 to /export/secondary/template/tmpl/1/3/
[root@master1 ~]#


In the master web UI, allow the agents to download ISO images

Change the setting here so that the whole subnet is allowed to download ISOs; then adding an ISO template will not fail with a "connection refused" error.
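The screenshot with the exact setting is missing here; the global setting involved is most likely secstorage.allowed.internal.sites (treat the exact name as an assumption), which takes the CIDR of the network the downloads come from, e.g.:

secstorage.allowed.internal.sites = 192.168.145.0/24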


Restart cloudstack-management:

[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [FAILED]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [  OK  ]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]#


Add the zone

Before doing anything else, stop iptables on all 3 machines:

[root@master1 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@master1 ~]#

Also make sure Gluster is healthy before continuing:

[root@agent1 ~]# gluster volume status
Status of volume: gv2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick agent1:/export/primary                49152     0          Y       27248
Brick agent2:/export/primary                49152     0          Y       27372
NFS Server on localhost                     2049      0          Y       27232
Self-heal Daemon on localhost               N/A       N/A        Y       27241
NFS Server on agent2                        2049      0          Y       27357
Self-heal Daemon on agent2                  N/A       N/A        Y       27367

Task Status of Volume gv2
------------------------------------------------------------------------------
There are no active volume tasks

[root@agent1 ~]#

  

This time choose the advanced network type with security groups.

The wizard shows a comparison of advanced and basic networking at this point.

10.0.1.11 is the local DNS server.


Leave everything else at the defaults.

Drag the traffic-type icons onto different physical networks to separate them. The default layout, shown below, needs to be changed.

eth0 is the management network, bridged onto cloudbr0; cloudbr0 will be created later.

Continue editing the guest network NIC. It is named cloudbr2 here to match eth2, but the name could be anything.

If the name is wrong, the NIC cannot be attached.

The storage traffic type is not used here; the final result is shown below.

These are the addresses reserved for the system VMs; 10 of them is enough, and you can do the same in production.

Since there is no switch in this environment, the master's NIC IP is used as the gateway just to get through the wizard; in production use the switch's real gateway address. The VLAN tag should be the real VLAN ID; 6 here means VLAN ID 6.

root/root01

We are using GlusterFS here and each agent runs glusterd locally, so 127.0.0.1 is used as the server.

For secondary storage, point to the NFS export shared by the master; in production GlusterFS could be used for this as well.

"Failed to add data store: iSCSI needs to have LUN number": this is a software bug. Click cancel; the storage can be added separately.

Add the primary storage separately

It was added successfully; the earlier failure does look like a CloudStack bug.

Continue by adding the secondary storage

The finished result looks like this

Add the agent2 host as well. In production it is recommended to start the zone first and then add new hosts; that makes troubleshooting easier.

Network information on agent1 after the host has been added:
the cloudbr0 and cloudbr2 bridge devices have appeared.

[root@agent1 ~]# ip ad 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5a9/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:b3 brd ff:ff:ff:ff:ff:ffinet 192.168.5.152/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:d5b3/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5bd/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 2e:c0:a6:d1:46:73 brd ff:ff:ff:ff:ff:ff 10: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0inet6 fe80::448a:8cff:fe77:e140/64 scope link valid_lft forever preferred_lft forever 11: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::30c6:80ff:fe79:a149/64 scope link valid_lft forever preferred_lft forever 13: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether c6:9e:e5:49:12:4e brd ff:ff:ff:ff:ff:ffinet 169.254.0.1/16 scope global cloud0inet6 fe80::c49e:e5ff:fe49:124e/64 scope link valid_lft forever preferred_lft forever [root@agent1 ~]#

  

Start the zone

No errors; check this page in the UI, where the two system VMs can be seen starting up.

After the zone is started, the system VMs are running fine:

[root@agent2 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     s-2-VM                         running
 3     v-1-VM                         running

[root@agent2 ~]#

  

The console cannot be opened here, probably because 192.168.6.151, which was given as the gateway, is really the master machine and not a valid gateway.

After the VMs start, OVS creates a number of bridges.


The management-network vnets are attached to cloudbr0;
the guest-network vnets are all attached to cloudbr2.

Below is the bridge information on agent1. The old bridge-utils bridges are no longer used; OVS is used instead.

[root@agent1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400ea877d       yes             virbr0-nic
[root@agent1 ~]# ovs-vsctl show
b8d2eae6-27c2-4f94-bb28-81635229141d
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    Bridge "cloudbr0"
        Port "eth0"
            Interface "eth0"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
    Bridge "cloudbr2"
        Port "eth2"
            Interface "eth2"
        Port "cloudbr2"
            Interface "cloudbr2"
                type: internal
    ovs_version: "2.3.1"
[root@agent1 ~]#

Below is the bridge information on agent2:

[root@agent2 ~]# ovs-vsctl show
c17bcc03-f6d0-4368-9f41-004598ec7336
    Bridge "cloudbr2"
        Port "eth2"
            Interface "eth2"
        Port "cloudbr2"
            Interface "cloudbr2"
                type: internal
        Port "vnet2"
            tag: 6
            Interface "vnet2"
        Port "vnet5"
            tag: 6
            Interface "vnet5"
    Bridge "cloud0"
        Port "vnet0"
            Interface "vnet0"
        Port "vnet3"
            Interface "vnet3"
        Port "cloud0"
            Interface "cloud0"
                type: internal
    Bridge "cloudbr0"
        Port "eth0"
            Interface "eth0"
        Port "vnet1"
            Interface "vnet1"
        Port "vnet4"
            Interface "vnet4"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
    ovs_version: "2.3.1"
[root@agent2 ~]#

Because both system VMs are currently running on agent2,
it has more bridge ports.

[root@agent2 ~]# ip ad 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:fa brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:95fa/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:04 brd ff:ff:ff:ff:ff:ffinet 192.168.5.153/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:9504/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:95:0e brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:950e/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:a1:8e:64 brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:a1:8e:64 brd ff:ff:ff:ff:ff:ff 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether b6:34:b6:35:e6:ca brd ff:ff:ff:ff:ff:ff 10: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:95:fa brd ff:ff:ff:ff:ff:ffinet 192.168.145.153/24 brd 192.168.145.255 scope global cloudbr0inet6 fe80::b454:44ff:fe91:8e45/64 scope link valid_lft forever preferred_lft forever 11: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:95:0e brd ff:ff:ff:ff:ff:ffinet6 fe80::c05d:eaff:fe39:2e45/64 scope link valid_lft forever preferred_lft forever 13: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 16:d7:4f:51:fc:4b brd ff:ff:ff:ff:ff:ffinet 169.254.0.1/16 scope global cloud0inet6 fe80::14d7:4fff:fe51:fc4b/64 scope link valid_lft forever preferred_lft forever 17: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:00:a9:fe:03:4e brd ff:ff:ff:ff:ff:ffinet6 fe80::fc00:a9ff:fefe:34e/64 scope link valid_lft forever preferred_lft forever 18: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:62:18:00:00:10 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc62:18ff:fe00:10/64 scope link valid_lft forever preferred_lft forever 19: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:28:42:00:00:02 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc28:42ff:fe00:2/64 scope link valid_lft forever preferred_lft forever 20: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:00:a9:fe:02:37 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc00:a9ff:fefe:237/64 scope link valid_lft forever preferred_lft forever 21: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:65:ce:00:00:12 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc65:ceff:fe00:12/64 scope link valid_lft forever preferred_lft forever 22: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:fa:e6:00:00:01 brd ff:ff:ff:ff:ff:ffinet6 fe80::fcfa:e6ff:fe00:1/64 scope link valid_lft forever preferred_lft forever [root@agent2 ~]#

Below is the network device information on agent1 at this point:

[root@agent1 ~]# ip ad 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5a9/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:b3 brd ff:ff:ff:ff:ff:ffinet 192.168.5.152/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:d5b3/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5bd/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 2e:c0:a6:d1:46:73 brd ff:ff:ff:ff:ff:ff 10: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0inet6 fe80::448a:8cff:fe77:e140/64 scope link valid_lft forever preferred_lft forever 11: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::30c6:80ff:fe79:a149/64 scope link valid_lft forever preferred_lft forever 13: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether c6:9e:e5:49:12:4e brd ff:ff:ff:ff:ff:ffinet 169.254.0.1/16 scope global cloud0inet6 fe80::c49e:e5ff:fe49:124e/64 scope link valid_lft forever preferred_lft forever [root@agent1 ~]#

After migrating v-1-VM to agent1, agent1 gains vnet0, vnet1 and vnet2.
That is because a system VM has 3 NICs by default, each corresponding to one virtual device on the host.

[root@agent1 ~]# ip ad 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host loinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5a9/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:b3 brd ff:ff:ff:ff:ff:ffinet 192.168.5.152/24 brd 192.168.5.255 scope global eth1inet6 fe80::20c:29ff:feab:d5b3/64 scope link valid_lft forever preferred_lft forever 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::20c:29ff:feab:d5bd/64 scope link valid_lft forever preferred_lft forever 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ffinet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 2e:c0:a6:d1:46:73 brd ff:ff:ff:ff:ff:ff 10: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ffinet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0inet6 fe80::448a:8cff:fe77:e140/64 scope link valid_lft forever preferred_lft forever 11: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:ab:d5:bd brd ff:ff:ff:ff:ff:ffinet6 fe80::30c6:80ff:fe79:a149/64 scope link valid_lft forever preferred_lft forever 13: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether c6:9e:e5:49:12:4e brd ff:ff:ff:ff:ff:ffinet 169.254.0.1/16 scope global cloud0inet6 fe80::c49e:e5ff:fe49:124e/64 scope link valid_lft forever preferred_lft forever 18: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:00:a9:fe:02:37 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc00:a9ff:fefe:237/64 scope link valid_lft forever preferred_lft forever 19: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:65:ce:00:00:12 brd ff:ff:ff:ff:ff:ffinet6 fe80::fc65:ceff:fe00:12/64 scope link valid_lft forever preferred_lft forever 20: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500link/ether fe:fa:e6:00:00:01 brd ff:ff:ff:ff:ff:ffinet6 fe80::fcfa:e6ff:fe00:1/64 scope link valid_lft forever preferred_lft forever

A VNC tool can also be used to confirm that the system VM has 3 NICs.

cloud0 is the built-in bridge for the 169.254.x.x link-local network that the system VMs attach to:

[root@agent1 ~]# ovs-vsctl show
b8d2eae6-27c2-4f94-bb28-81635229141d
    Bridge "cloud0"
        Port "cloud0"
            Interface "cloud0"
                type: internal
        Port "vnet0"
            Interface "vnet0"
    Bridge "cloudbr0"
        Port "eth0"
            Interface "eth0"
        Port "vnet1"
            Interface "vnet1"
        Port "cloudbr0"
            Interface "cloudbr0"
                type: internal
    Bridge "cloudbr2"
        Port "vnet2"
            tag: 6
            Interface "vnet2"
        Port "eth2"
            Interface "eth2"
        Port "cloudbr2"
            Interface "cloudbr2"
                type: internal
    ovs_version: "2.3.1"
[root@agent1 ~]#


Check the VLAN tags

The tag can be seen in the ovs-vsctl show output above.

virsh edit 1 also shows the tag that was applied.
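The interface section of the domain XML looks roughly like the following (a sketch rather than the exact dump from this lab): libvirt attaches the VIF to the OVS bridge through a virtualport of type openvswitch and carries the VLAN tag that ovs-vsctl show reported above.

<interface type='bridge'>
  <source bridge='cloudbr2'/>
  <virtualport type='openvswitch'/>  <!-- handled by the OvsVifDriver -->
  <vlan>
    <tag id='6'/>                    <!-- the guest network VLAN id from the zone wizard -->
  </vlan>
  <model type='virtio'/>
</interface>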


Packet flow

The br100 here can be thought of as VLAN 100.

1. The VM instance generates a packet and sends it to the virtual NIC inside the instance, eth0 of the instance in the diagram.
2. The packet reaches the corresponding VNIC on the physical node, the vnet interface in the diagram.
3. From the vnet NIC the packet arrives at the bridge (virtual switch) br100.
4. After the switch processes it, the packet leaves through a physical interface on the node, eth0 of the physical node in the diagram.
5. Once the packet leaves eth0 it follows the physical node's routing table and default gateway; from that point on it is effectively out of your control. (A capture sketch follows below.)
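To watch this flow on a host in practice, the packet can be captured at the hops that are visible from the hypervisor; a sketch, using the device names from this setup:

# on the guest's vif, as the packet leaves the VM
tcpdump -nn -i vnet2 icmp
# on the physical uplink; -e prints the 802.1Q header so the OVS-applied tag is visible
tcpdump -nn -e -i eth2 vlan and icmp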

 


ovs-vsctl usage

List all ports attached to a bridge:

[root@agent1 ~]# ovs-vsctl list-ports cloudbr2
eth2
vnet2
[root@agent1 ~]#
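A few other ovs-vsctl subcommands that come in handy (not from the original notes; br-test and eth3 are placeholder names):

ovs-vsctl list-br                  # list all bridges
ovs-vsctl add-br br-test           # create a bridge
ovs-vsctl add-port br-test eth3    # attach a port to it
ovs-vsctl port-to-br vnet2         # find which bridge a port belongs to
ovs-vsctl del-port br-test eth3    # detach a port
ovs-vsctl del-br br-test           # delete the bridge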

 

CloudStack HA

System reliability and availability

Management server HA
The CloudStack management server can be deployed in a multi-node configuration, so it is not vulnerable to the failure of a single server.
The management server itself (unlike the MySQL database) is stateless and can be placed behind a load balancer.
Stopping all management servers does not affect normal operation of the hosts; all guest VMs keep running.
While the management servers are down, no new VMs or users can be created, and the admin UI, API, dynamic load balancing and HA all stop working.


HA-enabled virtual machines
Users can enable the high-availability feature on selected VMs. By default, all virtual router VMs and elastic load balancing VMs have HA enabled automatically.
When CloudStack detects that an HA-enabled VM has crashed, it automatically restarts it within the same available resources.
HA does not operate across zones.
CloudStack restarts VMs conservatively, making sure that two instances of the same VM are never running at the same time.
The management server tries to start the VM on another host within the same cluster.
HA only works with shared primary storage; it is not supported when local storage is used as primary storage.


Below, enabling HA is simulated.

Here a system VM is used as if it were an ordinary VM (system VMs have HA enabled by default).

An HA tag is entered here; the value is arbitrary.

After changing the global setting, restart the management service on the master:

[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [FAILED]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [  OK  ]
Starting cloudstack-management:                            [  OK  ]
[root@master1 ~]#

  


Modify the VM's tag

After the tag is added and the page refreshed, HA shows as enabled right away.

API

The admin UI has no bulk VM creation feature, but it can be built on top of the API: generate API keys, then connect through the API to create the instances.

Once the keys have been generated, instances can be created through the API.
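For example, with CloudMonkey (the CloudStack CLI), a bulk deployment could look roughly like this; the endpoint matches the master in this lab, while the keys and UUIDs are placeholders you would fill in from your own account and from list serviceofferings / list templates / list zones:

cloudmonkey set url http://192.168.145.151:8080/client/api
cloudmonkey set apikey <api-key>
cloudmonkey set secretkey <secret-key>

# deploy 5 instances in a loop
for i in $(seq 1 5); do
    cloudmonkey deploy virtualmachine \
        serviceofferingid=<offering-uuid> \
        templateid=<template-uuid> \
        zoneid=<zone-uuid> \
        name=vm-$i displayname=vm-$i
done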


Additional notes, part 1

On choosing security groups when adding a zone

In production, a practical approach to security groups is to first create one that opens all ports and all protocols, and use it when creating instances.

VLANs can also be added from the admin UI.

Notes from my own testing

I ran halt -pf on the agent1 host to simulate a sudden power loss while a system VM was running on it. The VM was not started automatically on agent2; the admin UI stayed in the state shown below, and restarting the management service on the master did not help.

Clicking migrate on the VM did not work either.

Workaround:

Delete the agent1 host,
then click run on that system VM; it starts automatically on agent2.

(While the new system VM was coming up on host 2, there was roughly a minute of network interruption; my laptop could not reach that host and connectivity recovered after about a minute. This may be related to my VM configuration.)

Additional notes, part 2

1. Even with 2000+ KVM VMs, there is little pressure on MySQL.

So there is no need to go very deep into MySQL tuning; a solid primary/standby setup is enough.

2. Server configuration for a production private cloud
Suggested configuration for machines when a company builds its own private cloud:
CPU: 2 sockets x 10 cores (Intel Xeon E5-2650 v3)
Memory: 256 GB (16 GB x 16; no special requirement on single DIMMs, buy whatever gives the best price/performance)
NIC: 2 x 10 GbE (with modules matching the existing 10 GbE switches in the data center)
Disks: 2 x 600 GB SAS (system disks, size not critical, buy per company standard) plus 6 x 4 TB SATA

Speccing the machines much higher is pointless; heat becomes a problem, with power supplies heating up and fans running constantly.
Gigabit NICs are standard and come with the server; the 2 x 10 GbE here are an additional purchase.

3. Master configuration in production
Run two master servers.
Set up the database as primary/replica.
Secondary storage can be a dedicated NFS server (with DRBD), or GlusterFS exported as NFS.


4. Deployment architecture
The deployment architecture should consider hardware, network and storage together to keep the private cloud stable and secure as a whole.
The control plane needs 2 machines for high availability; the compute nodes consist of multiple machines (at least 2) forming one or more clusters, to keep the business continuous, stable and secure.


5. Control node architecture

The control plane consists of two machines in an active/standby pair running the CloudStack management server, MySQL, and a distributed file system used as secondary storage, each set up as one primary plus one standby.

One or more CloudStack management servers can be deployed as front ends connected to a single MySQL database. Depending on requirements, a pair of hardware load balancers can spread the web requests, and a standby management node can use MySQL replication to a remote site for better disaster recovery.


6. Overall private cloud architecture
The management server cluster (front-end load balancer nodes, management nodes, and MySQL database nodes) connects to the management network through two load balancer nodes.
The secondary storage servers connect to the management network.
Each rack forms a pod containing storage and compute servers.
Every storage and compute server needs redundant NICs connected to different switches.


7. About the mounted GlusterFS

In most cases we use distributed replicated volumes, which already behave much like RAID 10,
so the underlying disks only need to be RAID 5.

8. Advanced networking
Each zone uses either basic or advanced networking.
That choice holds for the whole lifetime of the zone: once the network type is selected and configured in CloudStack, it cannot be changed.

9. VM migration

It is recommended to migrate only within the same cluster, between hosts that share the same storage.


Reposted from: https://www.cnblogs.com/nmap/p/6401507.html
