

Vertica Cluster Expansion: Experiment Log

Published: 2025/7/14

Requirement:
Expand an existing 3-node Vertica cluster by adding 3 more nodes, for a total of 6.

Test environment:
RHEL 6.5 + Vertica 7.2.2-2

Steps:

  • 1. Create the 3-node Vertica cluster
  • 2. Create a minimal business test case
  • 3. Prepare for the expansion
  • 4. Expand the cluster: add 3 nodes to it
  • Reference

1. Create the 3-node Vertica cluster

IP address and hostname plan for the three nodes:

192.168.56.121 vnode01
192.168.56.122 vnode02
192.168.56.123 vnode03

Planned data directory and its owner/group:

mkdir -p /data/verticadb
chown -R dbadmin:verticadba /data/verticadb

The installation of the 3-node Vertica cluster itself is not repeated here; together, these earlier articles of mine cover it completely. ^_^
FYI:
Quickly configuring SSH mutual trust across a Linux cluster
Vertica 7.1 installation best practices (RHEL 6.4)
Vertica: installation, database creation, creating a test user and granting privileges, creating tables, and loading data

Tip: the 7.2 installer reports a dependency on the dialog package. If the system does not have it preinstalled, find it on the matching OS installation media and install it with rpm on every node, like this:

[root@vnode01 Packages]# cluster_copy_all_nodes /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm /root
dialog-1.1-9.20080819.1.el6.x86_64.rpm        100%  197KB 197.1KB/s  00:00
dialog-1.1-9.20080819.1.el6.x86_64.rpm        100%  197KB 197.1KB/s  00:00
[root@vnode01 Packages]# cluster_run_all_nodes "hostname; rpm -ivh /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm"
vnode01
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ##################################################
dialog                      ##################################################
vnode02
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ##################################################
dialog                      ##################################################
vnode03
warning: /root/dialog-1.1-9.20080819.1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ##################################################
dialog                      ##################################################
[root@vnode01 Packages]#
[root@vnode01 Packages]# cluster_run_all_nodes "hostname; rpm -q dialog"
vnode01
dialog-1.1-9.20080819.1.el6.x86_64
vnode02
dialog-1.1-9.20080819.1.el6.x86_64
vnode03
dialog-1.1-9.20080819.1.el6.x86_64

Once installation finishes, the cluster state should look like this:

dbadmin=> select * from nodes;
     node_name      |      node_id      | node_state |  node_address  | node_address_family | export_address | export_address_family |                        catalog_path                        | node_type | is_ephemeral | standing_in_for | node_down_since
--------------------+-------------------+------------+----------------+---------------------+----------------+-----------------------+------------------------------------------------------------+-----------+--------------+-----------------+-----------------
 v_testmpp_node0001 | 45035996273704982 | UP         | 192.168.56.121 | ipv4                | 192.168.56.121 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0001_catalog/Catalog | PERMANENT | f            |                 |
 v_testmpp_node0002 | 45035996273721500 | UP         | 192.168.56.122 | ipv4                | 192.168.56.122 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0002_catalog/Catalog | PERMANENT | f            |                 |
 v_testmpp_node0003 | 45035996273721504 | UP         | 192.168.56.123 | ipv4                | 192.168.56.123 | ipv4                  | /data/verticadb/TESTMPP/v_testmpp_node0003_catalog/Catalog | PERMANENT | f            |                 |
(3 rows)

dbadmin=>

2. Create a minimal business test case

To better simulate a database that already serves a business workload, create a minimal business test case.
FYI:

  • Vertica: loading data through a dedicated resource pool for a business user
  • Vertica partitioned table design (continued)

While following the resource-pool article, GRANTing read privileges on a directory failed, probably because of the version difference. The symptom and the fix:

-- The symptom:
dbadmin=> CREATE LOCATION '/tmp' NODE 'v_testmpp_node0001' USAGE 'USER';
CREATE LOCATION
dbadmin=> GRANT READ ON LOCATION '/tmp' TO test;
ROLLBACK 5365:  User available location ["/tmp"] does not exist on node ["v_testmpp_node0002"]
dbadmin=>

-- The fix: drop the location just created on node 1, then run CREATE LOCATION again, this time with ALL NODES:
dbadmin=> SELECT DROP_LOCATION('/tmp' , 'v_testmpp_node0001');
 DROP_LOCATION
---------------
 /tmp dropped.
(1 row)

dbadmin=> CREATE LOCATION '/tmp' ALL NODES USAGE 'USER';
CREATE LOCATION
dbadmin=> GRANT READ ON LOCATION '/tmp' TO test;
GRANT PRIVILEGE

3. Prepare for the expansion

Before expanding the cluster, each of the new nodes must be prepared.

3.1 Confirm the planned IP addresses, hostnames, and data directory

IP address and hostname plan:

192.168.56.124 vnode04
192.168.56.125 vnode05
192.168.56.126 vnode06
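These entries must end up in /etc/hosts on every node (section 3.2 shows the final file on node 1). As a small sketch, the append can be made idempotent so that re-running it never duplicates a line; here it writes to a local hosts.sample stand-in rather than the real /etc/hosts:

```shell
# Idempotently append the new host entries. HOSTS_FILE is a local stand-in
# for /etc/hosts in this sketch; point it at /etc/hosts on a real node.
HOSTS_FILE=hosts.sample
touch "$HOSTS_FILE"
for entry in "192.168.56.124 vnode04" \
             "192.168.56.125 vnode05" \
             "192.168.56.126 vnode06"; do
    # Append only when the exact line is not already present.
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
```

Re-running the loop leaves the file unchanged, so it is safe to include in a node-preparation script.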

Planned data directory and its owner/group:

mkdir -p /data/verticadb
# Change the directory's owner and group. Do not use -R here: on the
# already-installed nodes this directory contains many subdirectories.
chown dbadmin:verticadba /data/verticadb

3.2 Configure root SSH mutual trust

# Wipe the current root SSH-trust configuration on all nodes (run from node 1).
# This is safe because removing root's SSH trust does not affect the running Vertica cluster.
cluster_run_all_nodes "hostname ; rm -rf ~/.ssh"
rm -rf ~/.ssh

# /etc/hosts on node 1 (vi /etc/hosts):
192.168.56.121 vnode01
192.168.56.122 vnode02
192.168.56.123 vnode03
192.168.56.124 vnode04
192.168.56.125 vnode05
192.168.56.126 vnode06

# Environment variable on node 1 (vi ~/.bash_profile):
export NODE_LIST='vnode01 vnode02 vnode03 vnode04 vnode05 vnode06'
# Log in again, or source the file, for the variable to take effect:
source ~/.bash_profile

Then reconfigure root's SSH mutual trust by following "Quickly configuring SSH mutual trust across a Linux cluster".
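For reference, the key exchange that the linked article automates boils down to something like the following sketch, assuming NODE_LIST is exported as in 3.2 (ssh-copy-id prompts once for each node's root password):

```shell
# Generate a key for root if none exists, then push it to every node in NODE_LIST.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for node in $NODE_LIST; do
    ssh-copy-id "root@$node"    # prompts for that node's root password
done

# Verify: each hostname should print with no password prompt.
for node in $NODE_LIST; do
    ssh "root@$node" hostname
done
```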

3.3 Create the planned data directory on all nodes

cluster_run_all_nodes "hostname; mkdir -p /data/verticadb"

3.4 Confirm the firewall and SELinux are disabled on all nodes

cluster_run_all_nodes "hostname; service iptables status"
cluster_run_all_nodes "hostname; getenforce"
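If either check shows the service active, the new nodes can be brought in line with commands like these (a sketch for RHEL 6; making SELinux disablement permanent additionally requires SELINUX=disabled in /etc/selinux/config and a reboot):

```shell
# Stop the firewall now and keep it off across reboots.
cluster_run_all_nodes "hostname; service iptables stop; chkconfig iptables off"

# Put SELinux into permissive mode for the current boot.
cluster_run_all_nodes "hostname; setenforce 0"
```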

3.5 Confirm the dialog dependency is installed

cluster_run_all_nodes "hostname; rpm -q dialog"

4. Expand the cluster: add 3 nodes to it

4.1 Add the 3 nodes to the cluster

/opt/vertica/sbin/update_vertica --add-hosts host(s) --rpm package

In my case the three new nodes are specified by hostname:

/opt/vertica/sbin/update_vertica --add-hosts vnode04,vnode05,vnode06 --rpm /root/vertica-7.2.2-2.x86_64.RHEL6.rpm --failure-threshold=HALT -u dbadmin -p vertica

The run proceeds as follows:

[root@vnode01 ~]# /opt/vertica/sbin/update_vertica --add-hosts vnode04,vnode05,vnode06 --rpm /root/vertica-7.2.2-2.x86_64.RHEL6.rpm --failure-threshold=HALT -u dbadmin -p vertica
Vertica Analytic Database 7.2.2-2 Installation Tool

>> Validating options...

Mapping hostnames in --add-hosts (-A) to addresses...
	vnode04 => 192.168.56.124
	vnode05 => 192.168.56.125
	vnode06 => 192.168.56.126

>> Starting installation tasks.
>> Getting system information for cluster (this may take a while)...

Default shell on nodes:
192.168.56.126 /bin/bash
192.168.56.125 /bin/bash
192.168.56.124 /bin/bash
192.168.56.123 /bin/bash
192.168.56.122 /bin/bash
192.168.56.121 /bin/bash

>> Validating software versions (rpm or deb)...
>> Beginning new cluster creation...

successfully backed up admintools.conf on 192.168.56.123
successfully backed up admintools.conf on 192.168.56.122
successfully backed up admintools.conf on 192.168.56.121

>> Creating or validating DB Admin user/group...

Successful on hosts (6): 192.168.56.126 192.168.56.125 192.168.56.124 192.168.56.123 192.168.56.122 192.168.56.121
Provided DB Admin account details: user = dbadmin, group = verticadba, home = /home/dbadmin
Creating group... Group already exists
Validating group... Okay
Creating user... User already exists
Validating user... Okay

>> Validating node and cluster prerequisites...

Prerequisites not fully met during local (OS) configuration for verify-192.168.56.126.xml:
	HINT (S0151): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0151
		These disks do not have known IO schedulers: '/dev/mapper/vg_linuxbase-lv_root' ('') = ''
	HINT (S0305): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0305
		TZ is unset for dbadmin. Consider updating .profile or .bashrc
	WARN (S0170): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0170
		lsblk (LVM utility) indicates LVM on the data directory.
	FAIL (S0020): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0020
		Readahead size of (/dev/mapper/vg_linuxbase-lv_root) is too low for typical systems: 256 < 2048
	FAIL (S0030): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0030
		ntp daemon process is not running: ['ntpd', 'ntp', 'chronyd']
	FAIL (S0310): https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=S0310
		Transparent hugepages is set to 'always'. Must be 'never' or 'madvise'.

(The identical HINT/WARN/FAIL block is reported for each of the other five nodes: 192.168.56.121 through 192.168.56.125.)

System prerequisites passed. Threshold = HALT

>> Establishing DB Admin SSH connectivity...

Installing/Repairing SSH keys for dbadmin

>> Setting up each node and modifying cluster...

Creating Vertica Data Directory...
Updating agent...
Creating node node0004 definition for host 192.168.56.124 ... Done
Creating node node0005 definition for host 192.168.56.125 ... Done
Creating node node0006 definition for host 192.168.56.126 ... Done

>> Sending new cluster configuration to all nodes...

Starting agent...

>> Completing installation...

Running upgrade logic
No spread upgrade required: /opt/vertica/config/vspread.conf not found on any node
Installation complete.

Please evaluate your hardware using Vertica's validation tools:
	https://my.vertica.com/docs/7.2.x/HTML/index.htm#cshid=VALSCRIPT

To create a database:
1. Logout and login as dbadmin. (see note below)
2. Run /opt/vertica/bin/adminTools as dbadmin
3. Select Create Database from the Configuration Menu
Note: Installation may have made configuration changes to dbadmin
that do not take effect until the next session (logout and login).

To add or remove hosts, select Cluster Management from the Advanced Menu.
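Here --failure-threshold=HALT lets the install proceed despite the FAIL items; in a real deployment they are worth fixing on every node before installing. A hedged sketch of the fixes, run as root (the device name is taken from the log above and must be adjusted to your disks; the echo into /sys does not survive a reboot, so a persistent setup would also add it to rc.local or the boot line):

```shell
# S0020: raise the data disk's readahead to 2048 sectors.
cluster_run_all_nodes "blockdev --setra 2048 /dev/mapper/vg_linuxbase-lv_root"

# S0310: disable transparent hugepages for the current boot.
cluster_run_all_nodes "echo never > /sys/kernel/mm/transparent_hugepage/enabled"

# S0030: start the NTP daemon and enable it at boot.
cluster_run_all_nodes "service ntpd start; chkconfig ntpd on"
```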

4.2 Fix the data directory's owner and group

# After installing the software, change the directory's owner and group. Do not
# use -R: on the already-installed nodes this directory contains many subdirectories.
cluster_run_all_nodes "hostname; chown dbadmin:verticadba /data/verticadb"

4.3 Add the newly installed 3 nodes to the database

Log in as dbadmin and add the nodes with admintools:

7 Advanced Menu
  -> 6 Cluster Management
  -> 1 Add Host(s)
  -> Select database (press space to select the database)
  -> Select host(s) to add to database (press space to select the nodes to add)
  -> Are you sure you want to add ['192.168.56.124', '192.168.56.125', '192.168.56.126'] to the database?
  -> Failed to add nodes to database
     ROLLBACK 2382:  Cannot create another node. The current license permits 3 node(s)
     and the database catalog already contains 3 node(s)

This happens because the Vertica Community Edition permits at most 3 nodes.
With an official HP Vertica license (permanent or temporary) imported, the new nodes can then be added to the database.
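On a licensed system the key would be imported before retrying Add Host(s); a sketch using Vertica's license functions from vsql (the license file path here is hypothetical):

```shell
# Install the license file, then confirm what it permits.
vsql -U dbadmin -w vertica -c "SELECT INSTALL_LICENSE('/root/vlicense.dat');"
vsql -U dbadmin -w vertica -c "SELECT DISPLAY_LICENSE();"
```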
With a formal license in place, the prompts continue:

-> Successfully added nodes to the database.
-> Enter directory for Database Designer output: (enter /data/verticadb)
-> Database Designer - Proposed K-safety value: 1
-> +----------------------------------------------------------------------------------+
   | The Database Designer is ready to modify your projections in order to re-balance |
   | data across all nodes in the database.                                           |
   |                                                                                  |
   | Review the options you selected:                                                 |
   |                                                                                  |
   |  -The data will be automatically re-balanced with a k-safety value of 1.         |
   |                                                                                  |
   | Rebalance will take place using elastic cluster methodology.                     |
   |                                                                                  |
   | Re-balancing data could take a long time; allow it to complete uninterrupted.    |
   | Use Ctrl+C if you must cancel the session.                                       |
   |                                                                                  |
   | To change any of the options press <Cancel> to return to the Cluster Management  |
   | menu.                                                                            |
   +----------------------------------------------------------------------------------+
             <Proceed>    <Cancel >
-> Select Proceed
-> Starting Data Rebalancing tasks. Please wait....
   This process could take a long time; allow it to complete uninterrupted.
   Use Ctrl+C if you must cancel the session.

When the rebalance completes:

Data Rebalance completed successfully.
Press <Enter> to return to the Administration Tools menu.

At this point the Vertica cluster expansion is complete.
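As a final check, the nodes query from step 1 should now report six nodes, all UP; from the shell this can be run non-interactively, e.g.:

```shell
# Expect six rows, each with node_state = UP.
vsql -U dbadmin -w vertica -c "SELECT node_name, node_state FROM nodes;"
```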

Reference

  • Understanding Cluster Rebalancing in HP Vertica
