Building and Deploying an OpenStack Cloud Platform
Contents
Building and Deploying an OpenStack Cloud Platform
Keywords: OpenStack, Cloud Computing, IaaS
一、 Introduction
1.1 Topic background
1.2 Purpose and significance of the research
二、 Related Technologies
2.1 Environment configuration
2.2 Installing the MariaDB database and the message queue
2.3 Installing Keystone (controller node)
2.4 Installing Glance
2.5 Installing Nova (controller node)
2.6 Installing Neutron
2.7 Installing the Dashboard
2.8 Deploying the compute node
2.9 Creating instances
2.10 Configuring Cinder
2.11 Configuring Swift
三、 System Requirements Analysis
3.1 Feasibility analysis
3.2 Functional analysis
3.3 Non-functional design
四、 System Design
五、 System Implementation
5.1 Logging in to the Dashboard
5.2 Tenant and user management
5.3 Creating networks
5.4 Viewing images
5.5 Launching a cloud host
六、 Conclusion
七、 Acknowledgements
Building and Deploying an OpenStack Cloud Platform
[Abstract]
OpenStack is an open-source cloud computing management platform project composed of several major components that work together to do the actual work. It supports almost all types of cloud environments, and the project's goal is to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers its Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.
[Keywords]
OpenStack, Cloud Computing, IaaS
一、 Introduction
1.1 Topic background
With the development of computational science and commercial computing, software models and architectures have been changing ever faster, and grid computing, parallel computing, and distributed computing have rapidly evolved into cloud computing. Cloud computing mainly comprises Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), through which computing resources are managed and scheduled in a unified way. As a new computing model, cloud computing has grown quickly thanks to its low cost and high efficiency, which in turn has driven the continuous development and refinement of open-source cloud architectures in recent years.
OpenStack is an open-source cloud computing project and toolset that provides an Infrastructure-as-a-Service (IaaS) solution. It can not only deploy a fully virtualized environment quickly, but also use that environment to build multiple interconnected virtual servers, letting users rapidly deploy applications on virtual machines.
1.2 Purpose and significance of the research
Research purpose: to simplify the cloud deployment process and give it good scalability.
Research significance: virtualization breaks the boundaries of time and space. Building a cloud platform with OpenStack enables on-demand allocation of images, networks, and other resources. At the same time, the resource data on a cloud system is enormous and updated quickly, and the platform also provides monitoring and other functions, which makes it convenient to use and manage and helps drive the development of the network era.
二、 Related Technologies
2.1 Environment configuration
Create three virtual machines in VMware: controller (4 GB RAM), computer1 (4 GB RAM), and cinder (3 GB RAM). Assign each a network interface (eth0, eth1, and eth3 respectively) and configure their IP addresses as follows.
Run the following on all three virtual machines:
hostnamectl set-hostname controller   # use the matching hostname on each node
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.11   controller
10.1.1.12   computer1
10.1.1.13   cinder
yum install -y ntp
Back up and edit the configuration file /etc/ntp.conf:
cp -a /etc/ntp.conf /etc/ntp.conf_bak
On the controller node:
vi /etc/ntp.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 0
On the other nodes, change it as follows:
vi /etc/ntp.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 9.1.1.11 prefer
systemctl restart ntpd
Verify: ntpq -p
Disable SELinux and the firewall, then install the base packages:
sed -i "s/\=enforcing/\=disabled/g" /etc/selinux/config
setenforce 0
chkconfig firewalld off
service firewalld stop
yum install -y vim net-tools
yum install openstack-selinux \
python-openstackclient yum-plugin-priorities -y
yum install -y openstack-utils
2.2 Installing the MariaDB database and the message queue
The MariaDB packages come from a foreign mirror and may download slowly; switching to a domestic mirror solves this:
vim /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl=https://mirrors.ustc.edu.cn/mariadb/yum/10.1/centos7-amd64
gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck=1
Set up MariaDB:
yum install -y MariaDB-server MariaDB-client
vim /etc/my.cnf.d/mariadb-openstack.cnf
Add the following to the [mysqld] block:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
bind-address = 10.1.1.11
systemctl enable mariadb.service
systemctl restart mariadb.service
systemctl status mariadb.service
systemctl list-unit-files |grep mariadb.service
mysql_secure_installation
Press Enter at the first prompt, then Y to set the MySQL root password, and answer y to the remaining prompts. The password used here is 123456.
Install RabbitMQ:
yum install -y erlang
yum install -y rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
systemctl status rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service
rabbitmqctl add_user openstack 123456
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
netstat -ntlp | grep 5672
/usr/lib/rabbitmq/bin/rabbitmq-plugins list
/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management mochiweb webmachine rabbitmq_web_dispatch amqp_client rabbitmq_management_agent
You can also log in to http://9.1.1.11:15672 in a browser as openstack/123456 to view status information.
2.3 Installing Keystone (controller node)
(1) Create the keystone database
mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'localhost' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
(2) Install the packages
yum -y install openstack-keystone \
openstack-utils python-openstackclient httpd mod_wsgi mod_ssl
(3) Edit the configuration file
cp -a /etc/keystone/keystone.conf /etc/keystone/keystone.conf_bak
vi /etc/keystone/keystone.conf
memcache_servers = 10.1.1.11:11211
[database]
connection = mysql+pymysql://keystone:123456@10.1.1.11/keystone
[token]
provider = fernet
driver = memcache
(4) Populate the database tables
su -s /bin/bash keystone -c "keystone-manage db_sync"
(5) Initialize the Fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
(6) Define the local IP and bootstrap Keystone
export controller=10.1.1.11
Replace 123456 below with your own admin password:
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://$controller:35357/v3/ \
--bootstrap-internal-url http://$controller:35357/v3/ \
--bootstrap-public-url http://$controller:5000/v3/ \
--bootstrap-region-id RegionOne
(7) Edit /etc/httpd/conf/httpd.conf
cp -a /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf_bak
Set ServerName controller in the file.
(8) Create a symbolic link and start httpd
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl start httpd
systemctl enable httpd
(9) Create and load the environment variables
vim ~/keystonerc
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://10.1.1.11:35357/v3
export OS_IDENTITY_API_VERSION=3
chmod 600 ~/keystonerc
source ~/keystonerc
(10) Create the service project
openstack project create --domain default \
--description "Service Project" service
(11) Create the demo project
openstack project create --domain default \
--description "Demo Project" demo
(12) Create the demo user
openstack user create --domain default \
--password-prompt demo
(13) Create the user role
openstack role create user
(14) Assign the role to the user
openstack role add --project demo --user demo user
(15) Verify
Unset the environment variables:
unset OS_AUTH_URL OS_PASSWORD
Verify admin:
openstack --os-auth-url http://10.1.1.11:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Verify demo:
openstack --os-auth-url http://10.1.1.11:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue
(16) Load the credentials on every login
echo "source ~/keystonerc" >> ~/.bash_profile
source ~/.bash_profile
2.4 Installing Glance
(1) Create the glance user
openstack user create --domain default --project service --password 123456 glance
(2) Grant the glance user the admin role
openstack role add --project service --user glance admin
(3) Create the glance service
openstack service create --name glance --description "OpenStack Image service" image
(4) Define the controller's management IP
export controller=10.1.1.11
(5) Create the public endpoint
openstack endpoint create --region RegionOne image public http://$controller:9292
(6) Create the internal endpoint
openstack endpoint create --region RegionOne image internal http://$controller:9292
(7) Create the admin endpoint
openstack endpoint create --region RegionOne image admin http://$controller:9292
(8) Create the database
mysql -uroot -p123456
MariaDB [(none)]> create database glance;
MariaDB [(none)]> grant all privileges on glance.* to glance@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on glance.* to glance@'%' identified by '123456';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit
(9) Install the packages
yum -y install openstack-glance
(10) Edit the configuration file glance-api.conf
mv /etc/glance/glance-api.conf /etc/glance/glance-api.conf.org
vi /etc/glance/glance-api.conf
[DEFAULT]
bind_host = 0.0.0.0
notification_driver = noop
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[database]
connection = mysql+pymysql://glance:123456@10.1.1.11/glance
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
(11) Edit the configuration file glance-registry.conf
mv /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.org
vi /etc/glance/glance-registry.conf
[DEFAULT]
bind_host = 0.0.0.0
notification_driver = noop
[database]
connection = mysql+pymysql://glance:123456@10.1.1.11/glance
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
(12) Fix file permissions and sync the database
chmod 640 /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
chown root:glance /etc/glance/glance-api.conf /etc/glance/glance-registry.conf
su -s /bin/bash glance -c "glance-manage db_sync"
systemctl start openstack-glance-api openstack-glance-registry
systemctl enable openstack-glance-api openstack-glance-registry
Load the environment variables:
source ~/keystonerc
Download a test image:
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Upload the image:
openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
List the images:
openstack image list
2.5 Installing Nova (controller node)
(1) Create the databases
mysql -u root -p123456
CREATE DATABASE nova_placement;
CREATE DATABASE nova_cell0;
CREATE DATABASE nova;
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
(2) Create the nova user
openstack user create --domain default --password 123456 nova
(3) Grant it the admin role
openstack role add --project service --user nova admin
(4) Create the compute service
openstack service create --name nova --description "OpenStack Compute" compute
(5) Create the compute endpoints
openstack endpoint create --region RegionOne compute public http://10.1.1.11:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://10.1.1.11:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://10.1.1.11:8774/v2.1
(6) Create the placement user
openstack user create --domain default --password 123456 placement
(7) Grant it the admin role
openstack role add --project service --user placement admin
(8) Create the placement service
openstack service create --name placement --description "Placement API" placement
(9) Create the placement endpoints
openstack endpoint create --region RegionOne placement public http://10.1.1.11:8778
openstack endpoint create --region RegionOne placement internal http://10.1.1.11:8778
openstack endpoint create --region RegionOne placement admin http://10.1.1.11:8778
(10) Install the packages
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api -y
(11) Edit the configuration file /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
[api_database]
connection = mysql+pymysql://nova:123456@10.1.1.11/nova_api
[database]
connection = mysql+pymysql://nova:123456@10.1.1.11/nova
# message queue
[DEFAULT]
transport_url = rabbit://openstack:123456@10.1.1.11
# keystone authentication
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[DEFAULT]
my_ip = 10.1.1.11
# management network IP
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://10.1.1.11:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.1.11:35357/v3
username = placement
password = 123456
(12) Write the placement API httpd configuration
vi /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778
<VirtualHost *:8778>
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
WSGIScriptAlias / /usr/bin/nova-placement-api
<IfVersion >= 2.4>
ErrorLogFormat "%M"
</IfVersion>
ErrorLog /var/log/nova/nova-placement-api.log
#SSLEngine On
#SSLCertificateFile ...
#SSLCertificateKeyFile ...
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
SetHandler wsgi-script
Options +ExecCGI
WSGIProcessGroup nova-placement-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>
(13) Restart httpd
systemctl restart httpd
(14) Verify that port 8778 is listening
ss -tanlp | grep 8778
(15) Sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
nova-manage cell_v2 list_cells
(16) Start the services and enable them at boot
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
2.6 Installing Neutron
(1) Create the neutron database
mysql -u root -p123456
CREATE DATABASE neutron;
(2) Create the database user and grant privileges
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
(3) Create the neutron user and grant it the admin role
source ~/keystonerc
openstack user create --domain default --password 123456 neutron
openstack role add --project service --user neutron admin
(4) Create the network service
openstack service create --name neutron --description "OpenStack Networking" network
(5) Create the endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
(6) Install the neutron packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge \
ebtables -y
(7) Edit the neutron configuration file /etc/neutron/neutron.conf (see the sketch below)
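The report does not reproduce the file's contents. A minimal sketch consistent with the values used elsewhere in this report (RabbitMQ and MariaDB on 10.1.1.11, service password 123456) for a linuxbridge/vxlan self-service layout might be as follows; treat every value as an assumption to check against your own environment:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@10.1.1.11
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:123456@10.1.1.11/neutron
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://10.1.1.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp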
(8) Edit /etc/neutron/plugins/ml2/ml2_conf.ini (see the sketch below)
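Again the contents are omitted in the report; a plausible sketch matching the flat provider network and vxlan tenant networks created in section 2.9 (the vni range is an assumption):
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true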
(9) Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini (see the sketch below)
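A hedged sketch; the provider NIC name (eth0 here) and the overlay IP are assumptions that must match the controller's actual interfaces:
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.1.1.11
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver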
(10) Edit /etc/neutron/l3_agent.ini (see the sketch below)
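The only setting this layout usually needs here is the interface driver; a minimal sketch:
[DEFAULT]
interface_driver = linuxbridge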
(11) Edit /etc/neutron/dhcp_agent.ini (see the sketch below)
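A minimal sketch; the dnsmasq_config_file line ties in with step (13) below:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf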
(12) Reconfigure /etc/nova/nova.conf so that the compute service can use the Neutron network (see the sketch below)
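The report omits the [neutron] block that this step adds to /etc/nova/nova.conf; a sketch consistent with the rest of the report (the shared secret is an assumption and must match metadata_agent.ini in step (14)):
[neutron]
url = http://10.1.1.11:9696
auth_url = http://10.1.1.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456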
(13) Write dhcp-option-force=26,1450 into /etc/neutron/dnsmasq-neutron.conf
echo "dhcp-option-force=26,1450" > /etc/neutron/dnsmasq-neutron.conf
(14) Edit /etc/neutron/metadata_agent.ini (see the sketch below)
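A minimal sketch; the shared secret is an assumption and must match the one in nova.conf above:
[DEFAULT]
nova_metadata_ip = 10.1.1.11
metadata_proxy_shared_secret = 123456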
(15) Create a symbolic link
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(16) Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(17) Restart the nova service, since nova.conf was just changed
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
(18) Restart the neutron services and enable them at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service
(19) Start neutron-l3-agent.service and enable it at boot
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
systemctl status neutron-l3-agent.service
(20) Verify
source ~/keystonerc
neutron ext-list
neutron agent-list
2.7 Installing the Dashboard
(1) Install the dashboard package
yum install openstack-dashboard -y
(2) Edit the configuration file /etc/openstack-dashboard/local_settings (see the sketch below)
vim /etc/openstack-dashboard/local_settings
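The report does not list the edits; for a single-controller setup like this one, the changes typically look roughly like the following (the host, cache location, and time zone are assumptions based on the rest of the report):
OPENSTACK_HOST = "10.1.1.11"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '10.1.1.11:11211',
    },
}
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
TIME_ZONE = "Asia/Shanghai"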
(3) Start the dashboard services and enable them at boot
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
The controller node is now complete; open Firefox and visit http://9.1.1.11/dashboard/ to reach the OpenStack login page.
2.8 Deploying the compute node
Installing Nova (compute node)
(1) Install the nova packages
First remove the old qemu-kvm:
yum -y remove qemu-kvm
Download the QEMU source:
wget https://download.qemu.org/qemu-3.1.0-rc0.tar.xz
Install the build dependencies:
yum -y install gcc gcc-c++ automake libtool zlib-devel glib2-devel bzip2-devel libuuid-devel spice-protocol spice-server-devel usbredir-devel libaio-devel
Build and install:
tar -xvJf qemu-3.1.0-rc0.tar.xz
cd qemu-3.1.0-rc0
./configure
make && make install
After the build, create the links:
ln -s /usr/local/bin/qemu-system-x86_64 /usr/bin/qemu-kvm
ln -s /usr/local/bin/qemu-system-x86_64 /usr/libexec/qemu-kvm
ln -s /usr/local/bin/qemu-img /usr/bin/qemu-img
Check the current qemu version:
qemu-img --version
qemu-kvm --version
yum install libvirt-client
yum install openstack-nova-compute -y
Edit the configuration file /etc/nova/nova.conf:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@10.1.1.11
my_ip = 10.1.1.12
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://10.1.1.11:6080/vnc_auto.html
[glance]
api_servers = http://10.1.1.11:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://10.1.1.11:35357/v3
username = placement
password = 123456
(2) Check that the CPU supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
(3) Start the services and enable them at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
On the controller node:
(4) Verify
openstack compute service list --service nova-compute
(5) Discover the compute node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
(6) Verify
List the services:
openstack compute service list
List the API endpoints in the Identity service to verify connectivity with it:
openstack catalog list
Check the image service:
openstack image list
Verify that the cells and placement APIs are working:
nova-status upgrade check
Installing Neutron (compute node)
(1) Install the packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
(2) Edit neutron.conf (see the sketch below)
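The contents are omitted; on the compute node the file mostly needs the message queue and Keystone settings. A sketch reusing the controller's values:
[DEFAULT]
transport_url = rabbit://openstack:123456@10.1.1.11
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp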
(3) Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini (see the sketch below)
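Same shape as on the controller, with the overlay IP switched to the compute node's management address (the NIC name is again an assumption):
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.1.1.12
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver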
(4) Edit nova.conf (see the sketch below), then restart the services
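A sketch of the [neutron] block added to the compute node's /etc/nova/nova.conf, reusing the controller's endpoints:
[neutron]
url = http://10.1.1.11:9696
auth_url = http://10.1.1.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456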
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
systemctl status libvirtd.service openstack-nova-compute.service neutron-linuxbridge-agent.service
2.9 Creating instances
Load the environment variables, then create a flat provider (public) network and its subnet, a private network with its private-subnet, three further named private networks with their subnets, and a router (a hedged router and instance example follows the commands below).
source ~/keystonerc
neutron --debug net-create --shared provider --router:external True \
--provider:network_type flat --provider:physical_network provider
neutron subnet-create provider 9.1.1.0/24 --name provider-sub \
--allocation-pool start=9.1.1.50,end=9.1.1.90 --dns-nameserver 8.8.8.8 --gateway 9.1.1.254
neutron net-create private --provider:network_type vxlan --router:external False --shared
neutron subnet-create private --name private-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron net-create private-office --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-office --name office-net --gateway 192.168.2.1 192.168.2.0/24
neutron net-create private-sale --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-sale --name sale-net --gateway 192.168.3.1 192.168.3.0/24
neutron net-create private-technology --provider:network_type vxlan --router:external False --shared
neutron subnet-create private-technology --name technology-net --gateway 192.168.4.1 192.168.4.0/24
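The router mentioned in the introduction and the instance promised by the section title are not shown in the report; a hedged sketch of what they might look like with the networks created above (the router name Router01, the flavor parameters, and the instance name test-vm are all assumptions):
neutron router-create Router01
neutron router-gateway-set Router01 provider
neutron router-interface-add Router01 private-subnet
openstack flavor create --vcpus 1 --ram 64 --disk 1 m1.tiny
openstack server create --flavor m1.tiny --image cirros \
--nic net-id=$(openstack network show private -f value -c id) test-vm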
2.10 Configuring Cinder
Controller node installation
(1) Create the database and grant privileges
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
(2) Create the cinder user and grant it the admin role
source ~/keystonerc
openstack user create --domain default --password 123456 cinder
openstack role add --project service --user cinder admin
(3) Create the volume services
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
(4) Create the endpoints
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
(5) Install the cinder packages
yum install openstack-cinder -y
(6) Edit the cinder configuration file /etc/cinder/cinder.conf (see the sketch below)
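The report omits the file's contents; a minimal controller-side sketch consistent with the values used elsewhere in this report:
[DEFAULT]
transport_url = rabbit://openstack:123456@10.1.1.11
auth_strategy = keystone
my_ip = 10.1.1.11
[database]
connection = mysql+pymysql://cinder:123456@10.1.1.11/cinder
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp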
(7) Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder
(8) Start the cinder services on the controller and enable them at boot
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
Storage (cinder) node installation
(9) Set up the cinder node; an extra disk (/dev/sdb) is added here as the backing store for the cinder volume service
yum install lvm2 -y
(10) Start the service and enable it at boot
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
(11) Create the LVM volume group; /dev/sdb is the extra disk
fdisk -l
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
(12) Edit the storage node's lvm.conf
vim /etc/lvm/lvm.conf
Add under the devices section: filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
Then restart the lvm2 service:
systemctl restart lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
(13) Install openstack-cinder and targetcli
yum install openstack-cinder openstack-utils targetcli python-keystone ntpdate -y
(14) Edit the cinder configuration file /etc/cinder/cinder.conf (see the sketch below)
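Again the contents are omitted; on the storage node the same file additionally needs the LVM backend pointing at the cinder-volumes group created in step (11). A sketch (my_ip is the storage node's management address):
[DEFAULT]
transport_url = rabbit://openstack:123456@10.1.1.11
auth_strategy = keystone
my_ip = 10.1.1.13
enabled_backends = lvm
glance_api_servers = http://10.1.1.11:9292
[database]
connection = mysql+pymysql://cinder:123456@10.1.1.11/cinder
[keystone_authtoken]
auth_uri = http://10.1.1.11:5000
auth_url = http://10.1.1.11:35357
memcached_servers = 10.1.1.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp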
(15) Start openstack-cinder-volume and target and enable them at boot
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
(16) Verify that the cinder services are healthy (compute node)
source ~/keystonerc
cinder service-list
Configuring cinder on the compute node
(1) For the compute node to use cinder, its nova configuration must be updated (note: this step is performed on the compute node)
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
systemctl restart libvirtd.service openstack-nova-compute.service
(2) Then restart the nova service on the controller
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
(3) Verify (controller node)
source ~/keystonerc
cinder service-list
2.11 Configuring Swift
Controller node
(1) Load the admin credentials
source ~/keystonerc
(2) Create the swift user and grant it the admin role:
openstack user create --domain default --password=swift swift
openstack role add --project service --user swift admin
(3) Create the swift service entry and the object-store API endpoints (the result can be checked in the dashboard)
openstack service create --name swift --description "OpenStack Object Storage" object-store
openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
(4) Install the packages
yum install openstack-swift-proxy \
python-swiftclient python-keystoneclient python-keystonemiddleware memcached -y
(5) Fetch the proxy service configuration file from the Object Storage source repository
curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/pike
(The downloaded sample replaces the original contents of the configuration file; it is then edited as follows.)
vi /etc/swift/proxy-server.conf
In the [DEFAULT] section, configure the port, user, and configuration directory used by the Swift object storage service:
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 8080
# bind_timeout = 30
# backlog = 4096
swift_dir = /etc/swift
user = swift
In the [pipeline:main] section, enable the required modules:
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
In the [app:proxy-server] section, enable automatic account creation:
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
In the [filter:keystoneauth] section, configure the operator roles:
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, user
In the [filter:authtoken] section, configure access to the Keystone identity service:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = swift
password = swift
delay_auth_decision = True
In the [filter:cache] section, configure the memcached location:
[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211
Storage node configuration
Shut down the virtual machine and add a new disk in its settings.
(1) Install the supporting packages
yum install xfsprogs rsync -y
(2) Prepare the data disk /dev/sdc
mkfs.xfs /dev/sdc        # format as XFS
mkdir -p /srv/node/sdc   # create the mount point
(3) Edit /etc/fstab and add the following line
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
(4) Mount the disk
mount /srv/node/sdc
(5) Check the disk
df -h
(6) Configure rsync
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 9.1.1.13
[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
(7) Start the rsyncd service and enable it at boot
systemctl enable rsyncd.service
systemctl start rsyncd.service
(8) Install and configure the Swift object storage components
Install the packages:
yum install openstack-swift-account openstack-swift-container \
openstack-swift-object
Fetch the account, container, and object service configuration files from the Object Storage source repository:
curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/pike
curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/pike
curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/pike
(9) Edit the configuration files (the downloaded samples replace the originals; each is then modified as follows)
vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 9.1.1.13
bind_port = 6002
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 9.1.1.13
bind_port = 6001
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 9.1.1.13
bind_port = 6000
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
(10) Create the cache directory and fix ownership
chown -R swift:swift /srv/node
mkdir -p /var/cache/swift
chown -R swift:swift /var/cache/swift
chmod -R 775 /var/cache/swift
Controller node:
(1) Create, distribute, and initialize the rings
# Create the account ring in /etc/swift/; with only one disk, only one replica can be created
# Create the base account.builder file
swift-ring-builder account.builder create 10 1 1
# Add each node to the ring
swift-ring-builder account.builder add --region 1 --zone 1 --ip 9.1.1.13 --port 6002 --device sdc --weight 100
# Verify the ring contents
swift-ring-builder account.builder
# Rebalance the ring
swift-ring-builder account.builder rebalance
# Verify the ring contents
swift-ring-builder account.builder

# Create the container ring in /etc/swift/; with only one disk, only one replica can be created
# Create the base container.builder file
swift-ring-builder container.builder create 10 1 1
# Add each node to the ring
swift-ring-builder container.builder add --region 1 --zone 1 --ip 9.1.1.13 --port 6001 --device sdc --weight 100
# Verify the ring contents
swift-ring-builder container.builder
# Rebalance the ring
swift-ring-builder container.builder rebalance
# Verify the ring contents
swift-ring-builder container.builder

# Create the object ring in /etc/swift/; with only one disk, only one replica can be created
# Create the base object.builder file
swift-ring-builder object.builder create 10 1 1
# Add each node to the ring
swift-ring-builder object.builder add --region 1 --zone 1 --ip 9.1.1.13 --port 6000 --device sdc --weight 100
# Verify the ring contents
swift-ring-builder object.builder
# Rebalance the ring
swift-ring-builder object.builder rebalance
# Verify the ring contents
swift-ring-builder object.builder
(2) Copy the account.ring.gz, container.ring.gz, and object.ring.gz files into the /etc/swift directory on every storage node and on any additional nodes running the proxy service (a hedged example follows).
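No copy command is given in the report; assuming the storage node is reachable at 9.1.1.13 as configured above, something like the following would do it:
cd /etc/swift
scp account.ring.gz container.ring.gz object.ring.gz root@9.1.1.13:/etc/swift/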
(3) Configure /etc/swift/swift.conf
# Fetch the /etc/swift/swift.conf sample from the Object Storage source repository
curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/pike
vi /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = start
swift_hash_path_prefix = end
[storage-policy:0]
name = Policy-0
default = yes
# Copy swift.conf to /etc/swift on every storage node and on any additional nodes running the proxy service, then fix the ownership (a hedged copy example follows the chown):
chown -R root:swift /etc/swift/*
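A hedged example of the copy, under the same assumption about the storage node's address:
scp /etc/swift/swift.conf root@9.1.1.13:/etc/swift/swift.conf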
(4) Start the services
# On the controller node
systemctl enable openstack-swift-proxy.service memcached.service
systemctl start openstack-swift-proxy.service memcached.service
# On the storage node
systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service
systemctl enable openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
systemctl start openstack-swift-container.service \
openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service
(5) Verify
swift stat
三、 System Requirements Analysis
3.1 Feasibility analysis
The virtualized resources in the OpenStack cloud platform are modeled, and the abstract resources are instantiated so they can be operated on, described at both the service level and the resource level. On this basis, a load-balancing scheduling algorithm driven by real-time monitoring feedback of compute resources is proposed: it analyzes, in real time, the proportion and weight of each compute resource in the data-center resource pool in order to allocate and schedule virtual machine instances reasonably. Comparative experiments on the CloudSim simulation platform verify that the algorithm effectively improves the load-balancing capability of the data center; the system adjusts resources automatically, improving operational stability and execution efficiency.
3.2 Functional analysis
A user logs in through the login page with an account and password to enter the OpenStack management pages.
Router management: Admin -> Network -> Routers opens the router management page, where routers can be added, deleted, searched, and edited.
Network management: Admin -> Network -> Networks opens the network management page, where networks can be added, deleted, searched, and edited.
Virtual machine management: Admin -> Compute -> Flavors opens the flavor management page, where flavors can be created, deleted, searched, and have their access rights modified. Under Instances, instances can be created, deleted, and started; once an instance is running, its virtual machine can be operated directly.
Container management: Project -> Object Store -> Containers lists the containers a project owns and the information inside them.
3.3 Non-functional design
Performance requirements:
Environment requirements:
Service requirements:
四、 System Design
Detailed design of the functional modules
Router management: after the required software is installed, open the router management page, click "Create Router", enter a router name, choose the previously configured external network, and click "Create Router" to finish. The page offers a range of functions that can be used as needed.
Network management: in router management, click a router's name to add an interface; selecting only the subnet is enough (the IP address can be left blank), which connects the network to the external network. The network topology page then shows the topology diagram along with the network's information and status.
Virtual machine management: open the management page and create an instance; the Details, Source, Flavor, and Network fields are mandatory, the rest are optional. After the instance is created, click its name and open the console tab to reach the virtual machine's console; log in to use it normally.
Container management: after the required software is installed, run the relevant file-upload commands (a hedged example follows), then open the container management page to see the container names; clicking a container shows its information, with the uploaded objects listed alongside.
五、 System Implementation
5.1 Logging in to the Dashboard
Open a browser and go to http://9.1.1.11/dashboard, then log in with user name admin and password 123456 to enter the management system.
5.2 Tenant and user management
After logging in, the user list shows all users, and operations can be performed on them.
5.3 Creating networks
After the networks have been created from the command line, their information can be viewed in the management system. After a router is created and an interface added, the selected network is connected to the external network, and the corresponding topology diagram can be viewed under Network Topology.
5.4 Viewing images
The management system lists the images that are available.
5.5 Launching a cloud host
Under Instances, start an instance, open its console, and log in to use it normally.
六、 Conclusion
Cloud computing is a new commercial computing model and service model that follows grid computing and distributed computing. Because the resources in a cloud data center are heterogeneous and diverse and the deployment architectures are complex, how to dynamically manage and allocate the virtualized shared resources of a cloud data center according to user demand, and how to raise resource utilization so that cloud computing can be applied widely, has become a key research topic.
This report set up and deployed OpenStack, currently the most popular open-source cloud platform, completing the installation of the Keystone, Glance, Nova, Neutron, and other components and laying the groundwork for further development of the platform. Many problems came up during the installation; all were eventually solved, whether by searching online or by asking classmates and teachers. I learned a great deal and also discovered many of my own shortcomings, which I will work to correct in my future studies.
七、 Acknowledgements