Nginx + Keepalived Dual-Machine Hot Standby (Master-Slave Mode)
Reference:
http://www.cnblogs.com/kevingrace/p/6138185.html
Dual-machine high availability is generally implemented with a virtual IP (floating IP), based on the IP aliasing mechanism of Linux/Unix.
There are currently two dual-machine HA modes:
1. Master-slave mode: two servers sit at the front end, one master and one hot standby. Under normal conditions the master binds a public virtual IP and provides the load-balancing service while the standby stays idle; when the master fails, the standby takes over the public virtual IP and continues the service. The drawback is that as long as the master stays healthy, the standby sits wasted, so for sites with few servers this scheme is not economical.
2. Master-master mode: two load balancers at the front end act as backups for each other, and both are active at the same time, each binding its own public virtual IP and providing the load-balancing service; when one fails, the other takes over the failed machine's public virtual IP (the surviving machine then carries all requests). This scheme is economical and well suited to the current architecture.
Below is an operation log of setting up master-slave high-availability load balancing with Nginx + Keepalived.
Keepalived can be regarded as an implementation of the VRRP protocol on Linux. It consists of three main modules: core, check, and vrrp.
The core module is the heart of Keepalived; it starts and maintains the main process and loads and parses the global configuration file.
The check module performs health checks, covering the various check methods that can be configured.
The vrrp module implements the VRRP protocol itself.
一、Environment
OS: CentOS release 6.9 (Final) minimal
web1: 172.16.12.223
web2: 172.16.12.224
vip: 172.16.12.226
svn: 172.16.12.225
二、Installation
Install the nginx and keepalived services (the installation is identical on web1 and web2).
2.1、Install dependencies

yum clean all
yum -y update
yum -y install gcc-c++ gd libxml2-devel libjpeg-devel libpng-devel net-snmp-devel wget telnet vim zip unzip
yum -y install curl-devel libxslt-devel pcre-devel libjpeg libpng libcurl4-openssl-dev
yum -y install libcurl-devel libcurl freetype-config freetype freetype-devel unixODBC libxslt
yum -y install gcc automake autoconf libtool openssl-devel
yum -y install perl-devel perl-ExtUtils-Embed
yum -y install cmake ncurses-devel.x86_64 openldap-devel.x86_64 lrzsz openssh-clients gcc-g77 bison
yum -y install libmcrypt libmcrypt-devel mhash mhash-devel bzip2 bzip2-devel
yum -y install ntpdate rsync svn patch iptables iptables-services
yum -y install libevent libevent-devel cyrus-sasl cyrus-sasl-devel
yum -y install gd-devel libmemcached-devel memcached git libssl-devel libyaml-devel automake
yum -y groupinstall "Server Platform Development" "Development tools"
yum -y install gcc pcre-devel zlib-devel openssl-devel

2.2、System tuning after installing CentOS 6
# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep SELINUX=disabled /etc/selinux/config
setenforce 0
getenforce

cat >> /etc/sysctl.conf << EOF
###custom
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65535
EOF

# Apply the settings
sysctl -p

cp /etc/security/limits.conf /etc/security/limits.conf.bak2017
cat >> /etc/security/limits.conf << EOF
###custom
* soft nofile 20480
* hard nofile 65535
* soft nproc 20480
* hard nproc 65535
EOF

2.3、Set the shell idle timeout

cp /etc/profile /etc/profile.bak2017
# Append the following line (1800 seconds; by default there is no timeout)
cat >> /etc/profile << EOF
export TMOUT=1800
EOF

2.4、Download the source packages
(Run on both the master and the slave load balancer.)
[root@web1 ~]# cd /usr/local/src/
[root@web1 src]# wget http://nginx.org/download/nginx-1.9.7.tar.gz
[root@web1 src]# wget http://www.keepalived.org/software/keepalived-1.3.2.tar.gz

2.5、Install nginx
(Run on both the master and the slave load balancer.)
[root@web1 src]# tar -zxvf nginx-1.9.7.tar.gz
[root@web1 src]# cd nginx-1.9.7
# Add the www user; -M skips creating a home directory, -s sets the login shell
[root@web1 nginx-1.9.7]# useradd www -M -s /sbin/nologin
[root@web1 nginx-1.9.7]# vim auto/cc/gcc
# Comment out this line (around line 179) to disable the debug build:
# debug
# CFLAGS="$CFLAGS -g"
[root@web1 nginx-1.9.7]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre
[root@web1 nginx-1.9.7]# make && make install

2.6、Install keepalived
(Run on both the master and the slave load balancer.)
[root@web1 nginx-1.9.7]# cd /usr/local/src/
[root@web1 src]# tar -zvxf keepalived-1.3.2.tar.gz
[root@web1 src]# cd keepalived-1.3.2
[root@web1 keepalived-1.3.2]# ./configure
[root@web1 keepalived-1.3.2]# make && make install
[root@web1 keepalived-1.3.2]# cp /usr/local/src/keepalived-1.3.2/keepalived/etc/init.d/keepalived /etc/rc.d/init.d/
[root@web1 keepalived-1.3.2]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@web1 keepalived-1.3.2]# mkdir /etc/keepalived
[root@web1 keepalived-1.3.2]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@web1 keepalived-1.3.2]# cp /usr/local/sbin/keepalived /usr/sbin/
# Start nginx and keepalived automatically at boot
[root@web1 keepalived-1.3.2]# echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.local
[root@web1 keepalived-1.3.2]# echo "/etc/init.d/keepalived start" >> /etc/rc.local

三、Configure the services
3.1、Disable SELinux

Disable SELinux first and sort out the firewall (on both the master and the slave):
[root@web1 keepalived-1.3.2]# cd /root/
[root@web1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@web1 ~]# grep SELINUX=disabled /etc/selinux/config
[root@web1 ~]# setenforce 0

3.2、Stop the firewall

[root@web1 ~]# /etc/init.d/iptables stop

3.3、Configure nginx
The nginx configuration is identical on the master and the slave. The main work is in the http block of /usr/local/nginx/conf/nginx.conf; you can also use a vhost directory for virtual hosts and keep the per-site configuration in a file there, such as vhost/LB.conf.
In this layout:
multiple domain names are served by virtual hosts (server blocks under http);
different virtual directories of the same domain are handled by different location blocks inside a server;
back-end servers are declared in an upstream block in vhost/LB.conf and referenced with proxy_pass in a server or location block.
To implement the access scheme planned above, LB.conf is configured as follows (the proxy_cache_path and proxy_temp_path lines enable nginx's proxy cache):
[root@web1 ~]# vim /usr/local/nginx/conf/nginx.conf
user  www;
worker_processes  8;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid  logs/nginx.pid;

events {
    worker_connections  65535;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    charset       utf-8;

    ######## access log format ########
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    ######## http settings ########
    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;
    keepalive_timeout  65;

    proxy_cache_path  /var/www/cache levels=1:2 keys_zone=mycache:20m max_size=2048m inactive=60m;
    proxy_temp_path   /var/www/cache/tmp;

    fastcgi_connect_timeout 3000;
    fastcgi_send_timeout 3000;
    fastcgi_read_timeout 3000;
    fastcgi_buffer_size 256k;
    fastcgi_buffers 8 256k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_intercept_errors on;

    #client_header_timeout 600s;
    client_body_timeout 600s;
    client_max_body_size 100m;        # largest single file a client may send in one request
    client_body_buffer_size 256k;     # request-body buffer: bodies are buffered locally before being passed upstream

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 9;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;

    ## include vhosts
    include vhosts/*.conf;
}

# Create the matching directories and raise the open-file limit
[root@web1 ~]# mkdir -p /usr/local/nginx/conf/vhosts
[root@web1 ~]# mkdir -p /var/www/cache
[root@web1 ~]# ulimit -n 65535

[root@web1 ~]# vim /usr/local/nginx/conf/vhosts/LB.conf
upstream LB-WWW {
    ip_hash;
    server 172.16.12.223:80 max_fails=3 fail_timeout=30s;   # max_fails=3: failures tolerated before the backend is marked down (default 1)
    server 172.16.12.224:80 max_fails=3 fail_timeout=30s;   # fail_timeout=30s: how long requests are kept away from a backend after max_fails failures
    server 172.16.12.225:80 max_fails=3 fail_timeout=30s;
}

upstream LB-OA {
    ip_hash;
    server 172.16.12.223:8080 max_fails=3 fail_timeout=30s;
    server 172.16.12.224:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen      80;
    server_name localhost;
    access_log  /usr/local/nginx/logs/dev-access.log main;
    error_log   /usr/local/nginx/logs/dev-error.log;

    location /svn {
        proxy_pass http://172.16.12.226/svn/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;        # timeout for the handshake with the backend
        proxy_send_timeout 300;           # time the backend has to receive the full request
        proxy_read_timeout 600;           # time to wait for the backend's response once connected
        proxy_buffer_size 256k;           # buffer for the first part of the response (headers)
        proxy_buffers 4 256k;             # number and size of response buffers per connection
        proxy_busy_buffers_size 256k;     # buffers that may be busy sending to the client under load
        proxy_temp_file_write_size 256k;  # chunk size for proxy temp files
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }

    location /submin {
        proxy_pass http://172.16.12.226/submin/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}

server {
    listen      80;
    server_name localhost;
    access_log  /usr/local/nginx/logs/www-access.log main;
    error_log   /usr/local/nginx/logs/www-error.log;

    location / {
        proxy_pass http://LB-WWW;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}

server {
    listen      80;
    server_name localhost;
    access_log  /usr/local/nginx/logs/oa-access.log main;
    error_log   /usr/local/nginx/logs/oa-error.log;

    location / {
        proxy_pass http://LB-OA;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 600;
        proxy_buffer_size 256k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
        proxy_max_temp_file_size 128m;
        proxy_cache mycache;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
    }
}
3.4、Prepare for verification

3.4.1、On the svn server
[root@svn ~]# cat > /usr/local/nginx/conf/vhosts/svn.conf << EOF
server {
    listen 80;
    server_name svn 172.16.12.225;
    access_log /usr/local/nginx/logs/svn-access.log main;
    error_log /usr/local/nginx/logs/svn-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF
[root@svn ~]# mkdir -p /var/www/html/svn /var/www/html/submin
[root@svn ~]# cat /var/www/html/svn/index.html
this is the page of svn/172.16.12.225
[root@svn ~]# cat /var/www/html/submin/index.html
this is the page of submin/172.16.12.225
[root@svn ~]# chown -R www.www /var/www/html/
[root@svn ~]# chmod -R 755 /var/www/html/
[root@svn ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@svn ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
# Start nginx
[root@svn ~]# /usr/local/nginx/sbin/nginx
# Fetch the test pages
[root@svn ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@svn ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225

3.4.2、On web1
[root@web1 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web1 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225
[root@web1 ~]# cat > /usr/local/nginx/conf/vhosts/web.conf << EOF
server {
    listen 80;
    server_name web 172.16.12.223;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF
[root@web1 ~]# mkdir -p /var/www/html/web
[root@web1 ~]# cat /var/www/html/web/index.html
this is the page of web/172.16.12.223
[root@web1 ~]# chown -R www.www /var/www/html/
[root@web1 ~]# chmod -R 755 /var/www/html/
[root@web1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web1 ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
[root@web1 ~]# /usr/local/nginx/sbin/nginx
[root@web1 ~]# curl http://172.16.12.223/web/
this is the page of web/172.16.12.223

3.4.3、On web2
[root@web2 ~]# curl http://172.16.12.225/submin/
this is the page of submin/172.16.12.225
[root@web2 ~]# curl http://172.16.12.225/svn/
this is the page of svn/172.16.12.225
[root@web2 ~]# cat > /usr/local/nginx/conf/vhosts/web.conf << EOF
server {
    listen 80;
    server_name web 172.16.12.224;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
EOF
[root@web2 ~]# mkdir -p /var/www/html/web
[root@web2 ~]# cat /var/www/html/web/index.html
this is the page of web/172.16.12.224
[root@web2 ~]# chown -R www.www /var/www/html/
[root@web2 ~]# chmod -R 755 /var/www/html/
[root@web2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.12.223 web1
172.16.12.224 web2
172.16.12.225 svn
[root@web2 ~]# tail -4 /etc/rc.local
touch /var/lock/subsys/local
/etc/init.d/iptables stop
/usr/local/nginx/sbin/nginx
/etc/init.d/keepalived start
# Start nginx
[root@web2 ~]# /usr/local/nginx/sbin/nginx
# Fetch the test page
[root@web2 ~]# curl http://172.16.12.224/web/
this is the page of web/172.16.12.224

3.4.4、Browser test
四、Keepalived configuration

4.1、On web1
[root@web1 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

# Global definitions
global_defs {
#   notification_email {               # mailboxes notified on events such as a failover;
#       ops@wangshibo.cn               # multiple addresses allowed, one per line; requires a local sendmail service
#       tech@wangshibo.cn
#   }
#   notification_email_from ops@wangshibo.cn   # sender address used when keepalived sends notification mail
#   smtp_server 127.0.0.1              # SMTP server used to send the mail
#   smtp_connect_timeout 30            # timeout for connecting to the SMTP server
    router_id master-node              # identifier of this keepalived node, usually the hostname;
                                       # shown in the subject of alert mails when a failure occurs
}

vrrp_script chk_http_port {            # checks whether nginx is running; many methods are possible
    script "/opt/chk_nginx.sh"         # here a script does the check
    interval 2                         # run the script every 2 seconds
    weight -5                          # priority change applied when the check fails (script returns non-zero)
    fall 2                             # 2 consecutive failures are required before the check counts as failed;
                                       # the priority (1-255) is then lowered by weight
    rise 1                             # 1 success is enough to consider it recovered; the priority is not changed
}

vrrp_instance VI_1 {
    # Within one virtual_router_id, the node with the highest priority (0-255) becomes MASTER
    # and takes over the VIP; when it fails, the next-highest priority takes over.
    state MASTER            # initial role of this node: MASTER or BACKUP. This only sets the initial
                            # state; the real role is decided by the priority election. A node configured
                            # MASTER but with a lower priority loses the election, and the node with the
                            # higher priority preempts the MASTER role when it sees the advertisements.
    interface eth1          # interface used for HA monitoring; the VIP is added on this existing NIC
#   mcast_src_ip 103.110.98.14   # source IP for the multicast VRRP advertisements; effectively the
                                 # "heartbeat" address, so choose a stable interface. Defaults to the
                                 # IP of the interface given above.
    virtual_router_id 226   # virtual router ID, a number unique per vrrp instance; MASTER and BACKUP
                            # of the same instance must use the same value
    priority 101            # priority: higher wins; the MASTER's priority must exceed the BACKUP's
    advert_int 1            # interval (seconds) between VRRP advertisements between MASTER and BACKUP
    authentication {        # authentication type and password; must match on MASTER and BACKUP
        auth_type PASS      # PASS or AH
        auth_pass 1111      # MASTER and BACKUP of the same instance need the same password to communicate
    }
    virtual_ipaddress {     # VRRP virtual address(es); add further VIPs on separate lines
        172.16.12.226
    }
    track_script {          # checks to run. Do not place this block immediately after the vrrp_script
                            # block (a pitfall hit during testing), or the nginx check silently stops working!
        chk_http_port       # references the vrrp_script by name; keepalived runs it periodically,
                            # adjusts the priority, and can ultimately trigger a master/backup switch
    }
}

4.2、On web2
[root@web2 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
#   notification_email {
#       ops@wangshibo.cn
#       tech@wangshibo.cn
#   }
#   notification_email_from ops@wangshibo.cn
#   smtp_server 127.0.0.1
#   smtp_connect_timeout 30
    router_id slave-node
}

vrrp_script chk_http_port {
    script "/opt/chk_nginx.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
#   mcast_src_ip 103.110.98.24
    virtual_router_id 226
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.12.226
    }
    track_script {
        chk_http_port
    }
}

4.3、Monitoring notes
Making keepalived watch the state of nginx:
1) With the configuration so far, if keepalived stops on the master, the slave takes over the VIP automatically; once keepalived on the master recovers, it reclaims the VIP. That is not quite what we need: the switch must also happen when nginx itself stops serving.
2) keepalived supports watch scripts. A script can check the state of nginx and, when it is unhealthy, run a series of recovery actions; if nginx still cannot be revived, the script kills keepalived so that the slave can take over the service.
How to check the state of nginx
The simplest approach is to check the nginx process; checking the nginx port is more dependable; the most dependable is to fetch one or more URLs and confirm that pages actually come back.
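The three levels of checking can be sketched as follows. This is an illustrative sketch, not the watch script used later in this article; the port number and the test URL are assumptions taken from this setup:

```shell
#!/bin/bash
# Three ways to decide whether the local nginx is healthy, from cheapest
# to most dependable. Port 80 and the /web/ URL are assumptions from
# this article's setup.

# 1) Process check: count running processes with the given name
check_process() {
    ps -C "$1" --no-heading | wc -l    # prints 0 when no such process exists
}

# 2) Port check: can we open a TCP connection locally? (bash /dev/tcp)
check_port() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# 3) URL check: does a real page come back with HTTP 200?
check_url() {
    [ "$(curl -s -o /dev/null -m 3 -w '%{http_code}' "$1")" = "200" ]
}

if [ "$(check_process nginx)" -eq 0 ]; then
    echo "nginx process is gone"
elif ! check_port 80; then
    echo "nothing listening on port 80"
elif ! check_url "http://127.0.0.1/web/"; then
    echo "port open but page not served"
else
    echo "nginx looks healthy"
fi
```

On a load balancer the URL check is the strongest signal, since a wedged worker can hold the port open without serving pages.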
Note: the script entry in the vrrp_script block of keepalived.conf is generally written in one of two styles:
1) The script's exit code adjusts the priority; keepalived keeps sending advertisements, and the backup compares priorities to decide who holds the VIP. This is the style used with a plain nginx process check.
2) The script detects a problem and kills the keepalived process outright; the backup stops receiving advertisements and claims the VIP. This is the style used with a port check.
Of the checks mentioned here, "killall -0 nginx" belongs to style 1 and "/opt/chk_nginx.sh" to style 2. I prefer a shell script that exits 1 on a problem and 0 otherwise, and lets keepalived elect the VIP holder from the dynamically adjusted vrrp_instance priorities:
If the script exits 0 and weight is greater than 0, the priority is increased accordingly.
If the script exits non-zero and weight is less than 0, the priority is decreased accordingly.
In every other case the priority keeps the value configured by priority.
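To make the election arithmetic concrete, here is a small sketch using this article's numbers (master priority 101, backup 99, weight -5). The helper function is purely illustrative; it mimics the rules above and is not part of keepalived:

```shell
#!/bin/bash
# Effective VRRP priority after one script check, following the rules above.
# base: configured priority; weight: the vrrp_script weight;
# script_rc: the check script's exit code (0 = healthy).
effective_priority() {
    local base=$1 weight=$2 script_rc=$3 prio=$1
    if [ "$script_rc" -eq 0 ] && [ "$weight" -gt 0 ]; then
        prio=$((base + weight))            # success and positive weight: raise
    elif [ "$script_rc" -ne 0 ] && [ "$weight" -lt 0 ]; then
        prio=$((base + weight))            # failure and negative weight: lower
    fi
    # the effective priority is clamped to [1,254]
    [ "$prio" -lt 1 ] && prio=1
    [ "$prio" -gt 254 ] && prio=254
    echo "$prio"
}

effective_priority 101 -5 0   # healthy master: stays 101, above the backup's 99, keeps the VIP
effective_priority 101 -5 1   # failed check: drops to 96, below 99, so the backup takes over
```

Note that weight -5 was chosen so that the drop (101 to 96) falls below the backup's 99; a weight of -1 would leave the master at 100 and no switch would happen.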
Notes:
The priority is not raised or lowered repeatedly without bound.
Multiple check scripts can be written, each with its own weight (just list them all in the configuration).
Whether raised or lowered, the resulting priority always stays within [1,254]; it never goes to 0 or below, nor to 255 or above.
With nopreempt configured in the MASTER node's vrrp_instance, the node will not preempt after it recovers even if its priority is higher, which avoids a pointless switch-back in normal operation.
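As a hedged sketch of what that could look like in this article's master configuration (note that, per the keepalived documentation, nopreempt is only honored when the instance's initial state is BACKUP, so both nodes would start as BACKUP and priority alone elects the master):

```
vrrp_instance VI_1 {
    state BACKUP          # nopreempt requires an initial state of BACKUP on both nodes;
                          # the election is then decided purely by priority
    nopreempt             # do not reclaim the VIP after recovering, even with a higher priority
    interface eth1
    virtual_router_id 226
    priority 101          # still higher than the peer's 99
    # remaining settings unchanged from section 4.1
}
```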
With this, a script can track the health of the business process and adjust the priority dynamically to drive the master/backup switch.
Also note: the default keepalived.conf contains virtual_server and real_server blocks; they are meant for LVS and are not used here.
How the script tries to recover the service
keepalived itself only detects whether the local and the peer keepalived processes are alive and floats the VIP accordingly; a failure of the local nginx alone does not move the VIP.
So a script is used to decide whether the local nginx is healthy: if nginx is down, restart it, wait a moment and check again; if it is still down, stop retrying and shut down keepalived, at which point the other node takes over the VIP.
A watch script implementing this policy is easy to write. Note that it only works while keepalived itself is running: if keepalived was stopped first, a stopped nginx can no longer be restarted automatically.
The script checks whether nginx is running, tries to restart nginx when the process is gone, and stops keepalived when the restart fails so that the other machine can take over.
4.4、The watch script

The watch script (both the master and the slave need it):
[root@web1 ~]# cat /opt/chk_nginx.sh
#!/bin/bash
# If nginx is not running, try to start it; if it is still not running
# two seconds later, stop keepalived so the backup can take over.
counter=$(ps -C nginx --no-heading | wc -l)
if [ "${counter}" = "0" ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    counter=$(ps -C nginx --no-heading | wc -l)
    if [ "${counter}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
[root@web1 ~]# chmod 755 /opt/chk_nginx.sh
[root@web1 ~]# sh /opt/chk_nginx.sh

The identical script is installed as /opt/chk_nginx.sh on web2, with the same permissions:
[root@web2 ~]# chmod 755 /opt/chk_nginx.sh
[root@web2 ~]# sh /opt/chk_nginx.sh

4.5、Points to consider
Behaviour this architecture has to cover:
1) While the master is healthy, it holds the VIP and nginx runs on it.
2) When the master dies, the slave claims the VIP and serves traffic with its own nginx.
3) When nginx on the master dies, the watch script restarts it automatically; if the restart fails, the script stops keepalived and the VIP resource moves to the slave.
4) Health checking of the back-end servers (handled by nginx itself via the max_fails/fail_timeout settings in the upstream blocks).
5) nginx runs on both the master and the slave. Whichever node's keepalived stops, the VIP floats to the node whose keepalived is still alive. To make a dead nginx move the VIP as well, a script (or shell commands embedded in the configuration) has to drive it: here a stopped nginx is restarted automatically, and if the restart fails keepalived is forcibly stopped, which pushes the VIP resource to the other machine.
五、Final verification

Final verification (with the back-end application domain names resolved to the VIP): stop keepalived or nginx on the master, and the VIP floats to the slave automatically.
Verifying a keepalived failure:
1) Start nginx and keepalived on the master and the slave and make sure both services are up:
[root@web2 ~]# /usr/local/nginx/sbin/nginx -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [FAILED]
[root@web1 ~]# /usr/local/nginx/sbin/nginx -s stop
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [FAILED]
[root@web1 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web1 ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
2) Check on the master that the virtual IP is bound:

[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link
       valid_lft forever preferred_lft forever

[root@web2 ~]# /usr/local/nginx/sbin/nginx
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[root@web2 ~]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link
       valid_lft forever preferred_lft forever
5.1、Adjust the site configuration

[root@web1 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
    listen 80;
    server_name localhost 172.16.12.223 172.16.12.226;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}
[root@web2 ~]# cat /usr/local/nginx/conf/vhosts/web.conf
server {
    listen 80;
    server_name localhost 172.16.12.224 172.16.12.226;
    access_log /usr/local/nginx/logs/web-access.log main;
    error_log /usr/local/nginx/logs/web-error.log;
    location / {
        root /var/www/html;
        index index.html index.php index.htm;
    }
}

5.2、Access check
5.3、Stop keepalived on the master
[root@web1 ~]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]
[root@web1 ~]# tail -f /var/log/messages
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:12 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:32:17 web1 Keepalived_vrrp[7959]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:43:51 web1 Keepalived[7956]: Stopping
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) sent 0 priority
Dec 14 13:43:51 web1 Keepalived_vrrp[7959]: VRRP_Instance(VI_1) removing protocol VIPs.
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Netlink reflector reports IP 172.16.12.226 removed
Dec 14 13:43:51 web1 Keepalived_healthcheckers[7958]: Stopped
Dec 14 13:43:52 web1 Keepalived_vrrp[7959]: Stopped
Dec 14 13:43:52 web1 Keepalived[7956]: Stopped Keepalived v1.3.2 (12/14,2017)

[root@web1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ca:99:56 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.223/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:feca:9956/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b3:a9:36 brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.223/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:feb3:a936/64 scope link
       valid_lft forever preferred_lft forever

5.4、Observe the takeover on web2
[root@web2 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:9a:0b:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.224/24 brd 10.0.2.255 scope global eth0
    inet 172.16.12.226/32 scope global eth0
    inet6 fe80::a00:27ff:fe9a:b97/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:63:26:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.12.224/24 brd 172.16.12.255 scope global eth1
    inet6 fe80::a00:27ff:fe63:261a/64 scope link
       valid_lft forever preferred_lft forever
[root@web2 ~]# tail -f /var/log/messages
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:33 web2 Keepalived_healthcheckers[8186]: Netlink reflector reports IP 172.16.12.226 added
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226
Dec 14 13:47:38 web2 Keepalived_vrrp[8187]: Sending gratuitous ARP on eth0 for 172.16.12.226

5.5、Verify in a browser
The page before the switch: (screenshot omitted)
The page after the switch: (screenshot omitted)
This confirms the switch completed.
Reposted from: https://www.cnblogs.com/bjx2020/p/8057776.html