Building a Hadoop 3.1.3 High-Availability Cluster with Ansible


I. Node information

  • Kernel version: 3.10.0-1062.el7.x86_64
  • OS version: Red Hat Enterprise Linux Server release 7.7 (Maipo)
| Node   | IP            | Memory | JDK | Hadoop | ZK | NN | DN | RM | NM | JN | ZKFC |
|--------|---------------|--------|-----|--------|----|----|----|----|----|----|------|
| hdp-01 | 192.186.10.11 | 1G     | ✓   | ✓      |    | ✓  |    |    |    |    | ✓    |
| hdp-02 | 192.186.10.12 | 1G     | ✓   | ✓      |    | ✓  |    |    |    |    | ✓    |
| hdp-03 | 192.186.10.13 | 1G     | ✓   | ✓      |    |    |    | ✓  |    |    |      |
| hdp-04 | 192.186.10.14 | 1G     | ✓   | ✓      |    |    |    | ✓  |    |    |      |
| hdp-05 | 192.186.10.15 | 4G     | ✓   | ✓      | ✓  |    | ✓  |    | ✓  | ✓  |      |
| hdp-06 | 192.186.10.16 | 4G     | ✓   | ✓      | ✓  |    | ✓  |    | ✓  | ✓  |      |
| hdp-07 | 192.186.10.17 | 4G     | ✓   | ✓      | ✓  |    | ✓  |    | ✓  | ✓  |      |

II. Preparation

1. Login environment

Boot into the text (multi-user) console on startup:

```sh
systemctl set-default multi-user.target && \
systemctl isolate multi-user.target
```

2. Network interfaces

ens33 connects to the external network and is used to download software:

```ini
TYPE="Ethernet"
BOOTPROTO="dhcp"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
```

ens34 connects to the internal network and carries intra-cluster traffic:

```ini
BOOTPROTO=static
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=192.186.10.13
PREFIX=24
```

3. Firewall and SELinux

Disable the firewall and SELinux:

```sh
yum install -y iptables-services && \
iptables -F && \
service iptables save && \
systemctl stop firewalld && \
systemctl disable firewalld && \
setenforce 0 && \
sed -ri 's#(SELINUX=)(enforcing)#\1disabled#' /etc/selinux/config
```

4. Passwordless ssh login

Because hdp-01 and hdp-02 form the HDFS HA pair, each of them must be able to ssh without a password both to itself and to the other.

This is already configured in the playbook; a quick verification is sketched after the list below.

  • hdp-01 -> hdp-01
  • hdp-01 -> hdp-02
  • hdp-01 -> all other hosts
  • hdp-02 -> hdp-02
  • hdp-02 -> hdp-01
  • hdp-02 -> all other hosts
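A minimal sanity check, run from hdp-01 and again from hdp-02 once the playbook has distributed the keys (this assumes the hostnames above resolve via /etc/hosts):

```sh
# Each line should print the remote hostname without a password prompt.
# BatchMode makes ssh fail fast instead of falling back to password
# authentication, so a missing key shows up immediately.
for h in hdp-01 hdp-02 hdp-03 hdp-04 hdp-05 hdp-06 hdp-07; do
    ssh -o BatchMode=yes "$h" hostname
done
```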

5. Software installation

The installation of jdk, hadoop and zookeeper, and the configuration of their environment variables, are all written into the playbook.

6. hosts configuration

The hosts configuration is also written into the playbook.

III. Configuration file

  • When writing ansible.cfg, note that none of the section headers may be missing, otherwise ansible reports an error:

```ini
[defaults]
inventory = /root/ansible/inventory
roles_path = /root/ansible/roles
remote_user = root
ask_pass = False
forks = 10

[inventory]
[privilege_escalation]
[paramiko_connection]
[ssh_connection]
[persistent_connection]
[accelerate]
[selinux]
[colors]
[diff]
```
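With ansible.cfg and the inventory in place, an ad-hoc ping is a cheap way to confirm Ansible can reach every node before running the role:

```sh
# Every host should answer with SUCCESS and "ping": "pong".
ansible hdp -m ping
```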

IV. Directory layout

```
[root@hdp-01 ~]# tree ansible/
ansible/
├── hadoop_ha.yml                  # playbook that applies the role
├── inventory                      # host inventory
└── roles
    └── hadoop_ha
        ├── defaults
        ├── files
        ├── handlers
        ├── meta
        ├── README.md              # help document
        ├── tasks
        │   ├── 01-ssh.yml             # generate the hosts file, set hostnames, passwordless ssh from the NN hosts
        │   ├── 02-install-soft.yml    # install jdk, hadoop and zookeeper, configure environment variables
        │   ├── 03-config_zk.yml       # configure the zookeeper cluster
        │   ├── 04-copy_conf_file.yml  # copy the config files to all hosts
        │   ├── 05-init_ha.yml         # initialize the cluster
        │   ├── 06-start-cluster.yml   # start the cluster
        │   └── main.yml               # task entry point
        ├── templates
        │   ├── core-site.xml.j2       # core-site.xml template
        │   ├── hadoop-env.sh.j2       # hadoop-env.sh template
        │   ├── hdfs-site.xml.j2       # hdfs-site.xml template
        │   ├── mapred-site.xml.j2     # mapred-site.xml template
        │   ├── workers.j2             # workers template
        │   └── yarn-site.xml.j2       # yarn-site.xml template
        ├── tests
        └── vars
            ├── core.yml               # core-site.xml variables
            ├── hdfs.yml               # hdfs-site.xml variables
            ├── soft.yml               # software environment and network variables
            └── yarn.yml               # yarn-site.xml variables
```

V. Inventory

```ini
[hdp]
hdp-0[1:7] ansible_user=root ansible_ssh_pass="123456"

[nn]
hdp-0[1:2]

[rm]
hdp-0[3:4]

[zk]
hdp-0[5:7]

[jn]
hdp-0[5:7]

[dn]
hdp-0[5:7]

[nm]
hdp-0[5:7]

[nn1]
hdp-01

[nn2]
hdp-02
```
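ansible-inventory renders the same file as a tree, which is handy for checking that the range patterns (hdp-0[1:7] and friends) expand to the hosts you expect:

```sh
# Print the group/host hierarchy exactly as Ansible parses it.
ansible-inventory --graph
```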

VI. The role

tasks

main.yml

```yaml
- name: include vars
  include_vars:
    dir: vars/
    depth: 1
  tags: "always"

- name: config ssh yml
  import_tasks: "01-ssh.yml"
  tags: "config-ssh"

- name: install soft yml
  import_tasks: "02-install-soft.yml"
  tags: "install-soft"

- name: config zk
  import_tasks: "03-config_zk.yml"
  tags: "config-zk"

- name: copy config file
  import_tasks: "04-copy_conf_file.yml"
  tags: "copy-con-file"

- name: init ha
  import_tasks: "05-init_ha.yml"
  tags: "ini-ha"

- name: start cluster
  import_tasks: "06-start-cluster.yml"
  tags: "start-cluster"
```
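Because main.yml uses import_tasks (static imports), the tag on each import is inherited by every task in the imported file, which is what makes the per-step tags in section VII work. Before a full run, it is worth letting Ansible parse the play:

```sh
# Parse-only check of the play and its imports.
ansible-playbook hadoop_ha.yml --syntax-check

# List every task the play would run, together with its tags.
ansible-playbook hadoop_ha.yml --list-tasks
```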

01-ssh.yml

```yaml
# 1. Run the script that generates the hostname entries
- name: 1. make hosts
  script: hosts.sh
  register: r
  when: ansible_hostname in groups['nn1']

# 2. Write them into the hosts file
- name: 2. out vars
  lineinfile:
    path: /etc/hosts
    line: "{{ hostname }}"
    regexp: '^{{ hostname }}'
    owner: root
    group: root
    mode: '0644'
  with_items: "{{ r.stdout_lines }}"
  loop_control:
    loop_var: hostname
  when: ansible_hostname in groups['nn1']

# 3. Generate a key pair on the NameNode hosts
- name: gen-pub-key
  shell: echo 'y' | ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
  when: ansible_hostname in groups['nn']

# 4. Copy hdp-01's hosts file to all hosts
- name: copy-hosts
  copy:
    src: /etc/hosts
    dest: /etc/hosts
    mode: '0644'
    force: yes
  when: ansible_hostname in groups['nn1']

# 5. Set every host's hostname from its internal IP
- name: set-hostname
  shell: hostnamectl set-hostname $(cat /etc/hosts | grep `ifconfig | grep "inet " | awk '{print $2}' | grep "{{ network }}"` | cut -d " " -f2)

# 6. Push the NameNode hosts' public keys to all hosts
- name: ssh-pub-key-copy
  shell: sshpass -p "{{ ansible_ssh_pass }}" ssh-copy-id -i ~/.ssh/id_rsa.pub "{{ ansible_user }}"@"{{ host }}" -o StrictHostKeyChecking=no
  with_items: "{{ groups['hdp'] }}"
  loop_control:
    loop_var: host
  when: ansible_hostname in groups['nn']

# 7. Flush the iptables rules and disable selinux on all hosts
- name: clean
  shell: 'source /etc/profile ; iptables -F ; setenforce 0 ; sed -ri "s#(SELINUX=)(enforcing)#\1disabled#" /etc/selinux/config'
  ignore_errors: true
```
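The hosts.sh script referenced by task 1 is not included in the article. Judging from the addressing scheme in section I, it presumably prints one "IP hostname" pair per node for task 2 to write into /etc/hosts; a hypothetical sketch:

```sh
#!/bin/bash
# Hypothetical roles/hadoop_ha/files/hosts.sh (not shown in the original):
# emit one "IP hostname" line per cluster node on stdout; task 2 above
# appends these lines to /etc/hosts on hdp-01.
for i in $(seq 1 7); do
    echo "192.186.10.1${i} hdp-0${i}"
done
```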

02-install-soft.yml

```yaml
# 1. Create the software installation directory
- name: create apps directory
  file:
    path: "{{ soft_install_path }}"
    state: directory
    mode: '0755'

# 2. Install jdk and hadoop on all hosts
- name: install-jdk-hadoop
  unarchive:
    src: "{{ soft }}"
    dest: "{{ soft_install_path }}"
  with_items:
    - [ "{{ hadoop_soft }}", "{{ jdk_soft }}" ]
  loop_control:
    loop_var: soft
  tags: install-ha-jdk

# 3. Remove any existing jdk/hadoop/zookeeper environment variables
- name: clean jdk,hadoop env
  shell: sed -ri '/HADOOP_HOME/d;/JAVA_HOME/d;/ZOOKEEPER_HOME/d' "{{ env_file }}"
  tags: set-env

# 4. Configure the user's jdk and hadoop environment variables
- name: set jdk hadoop env
  lineinfile:
    dest: "{{ env_file }}"
    line: "{{ soft_env.env }}"
    regexp: "{{ soft_env.reg }}"
    state: present
  with_items:
    - { env: 'export JAVA_HOME={{ jdk_home }}', reg: '^export JAVA_HOME=' }
    - { env: 'export HADOOP_HOME={{ hdp_home }}', reg: '^export HADOOP_HOME' }
  loop_control:
    loop_var: soft_env
  tags: set-env

# 5. Install zookeeper on the designated host group
- name: install zookeeper
  unarchive:
    src: "{{ zookeeper_soft }}"
    dest: "{{ soft_install_path }}"
  when: ansible_hostname in groups['zk']
  tags: install-zookeeper

# 6. Set the zookeeper environment variable
- name: set zookeeper env
  lineinfile:
    dest: "{{ env_file }}"
    line: "{{ zk_env.env }}"
    regexp: "{{ zk_env.reg }}"
    state: present
  with_items:
    - { env: 'export ZOOKEEPER_HOME={{ zk_home }}', reg: '^export ZOOKEEPER_HOME=' }
  loop_control:
    loop_var: zk_env
  when: ansible_hostname in groups['zk']
  tags: set-env

# 7. Export the jdk and hadoop PATH entries on all hosts
- name: export jdk hadoop env
  lineinfile:
    dest: "{{ env_file }}"
    line: 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin'
    regexp: "^export PATH"
    state: present
  tags: set-env

# 8. Append zookeeper's bin directory to PATH on the zk hosts
- name: export zookeeper env
  replace:
    path: "{{ env_file }}"
    regexp: "^(export PATH=)(.+)$"
    replace: '\1\2:$ZOOKEEPER_HOME/bin'
  when: ansible_hostname in groups['zk']
  tags: set-env
```
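After this step every host should have a working JDK and Hadoop on its PATH (exported via /root/.bashrc, the env_file from the vars below). A quick ad-hoc spot check:

```sh
# java -version writes to stderr, hence the 2>&1.
ansible hdp -m shell -a 'source /root/.bashrc; java -version 2>&1 && hadoop version'
```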

03-config_zk.yml

```yaml
# 1. Copy the sample config file
- name: copy config file
  copy:
    src: "{{ zk_home }}/conf/zoo_sample.cfg"
    dest: "{{ zk_home }}/conf/zoo.cfg"
    remote_src: yes
  when: ansible_hostname in groups['zk']

# 2. Create zookeeper's runtime data directory
- name: create zk data directory
  file:
    path: "{{ zk_data_dir }}"
    state: directory
    mode: '0755'
  when: ansible_hostname in groups['zk']

# 3. Point the config file at the data directory
- name: set zookeeper dataDir
  lineinfile:
    dest: "{{ zk_home }}/conf/zoo.cfg"
    line: "dataDir={{ zk_data_dir }}"
    regexp: "^dataDir="
    state: present
  when: ansible_hostname in groups['zk']

# 4. Write the cluster membership
- name: set cluster info
  lineinfile:
    dest: "{{ zk_home }}/conf/zoo.cfg"
    line: "server.{{ item.0 + 1 }}={{ item.1 }}:2888:3888"
    regexp: "^server{{ item.0 + 1 }}"
  with_indexed_items: "{{ groups['zk'] }}"
  when: ansible_hostname in groups['zk']

# 5. Create the matching myid file from the cluster info
- name: make server id
  shell: 'cat {{ zk_home }}/conf/zoo.cfg | grep {{ ansible_hostname }} | cut -d "." -f2 | head -c1 > {{ zk_data_dir }}/myid'
  when: ansible_hostname in groups['zk']
```
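Given the zk group in the inventory, these tasks leave each of hdp-05..07 with the following lines in zoo.cfg (and a myid of 1, 2 or 3 respectively, extracted by task 5 from the matching server line):

```
dataDir=/root/zkdata
server.1=hdp-05:2888:3888
server.2=hdp-06:2888:3888
server.3=hdp-07:2888:3888
```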

04-copy_conf_file.yml

```yaml
# 1. Capture the hadoop classpath
- name: hadoopath
  shell: 'source {{ env_file }} ; hadoop classpath'
  register: r

# 2. Render the config templates onto every host
#    (hdp_classpath is consumed by yarn-site.xml.j2)
- name: template
  template:
    src: "{{ item }}"
    dest: "{{ hdp_conf }}/{{ item | replace('.j2','') }}"
    mode: '0644'
  vars:
    hdp_classpath: "{{ r.stdout }}"
  with_items: ["core-site.xml.j2","hdfs-site.xml.j2","mapred-site.xml.j2","yarn-site.xml.j2","hadoop-env.sh.j2","workers.j2"]
```

05-init_ha.yml

```yaml
# 1. On the zk hosts, first empty the hadoop data directory
- name: delete hdp data
  shell: "rm -rf {{ hdp_data }}/*"
  when: ansible_hostname in groups['zk']

# 2. Start zkServer
- name: start zookeeper
  shell: 'source {{ env_file }} && nohup zkServer.sh restart'
  when: ansible_hostname in groups['zk']

# 3. Start the journalnodes
- name: start journalnode
  shell: 'source {{ env_file }} ; nohup hdfs --daemon stop journalnode ; nohup hdfs --daemon start journalnode'
  when: ansible_hostname in groups['jn']

# 4. On the nn hosts, empty the hadoop data directory as well
- name: delete hdp data
  shell: "rm -rf {{ hdp_data }}/*"
  when: ansible_hostname in groups['nn']

# 5. Formatting requires reachable journalnodes with empty directories
- name: format namenode
  shell: 'source {{ env_file }} && nohup echo y | hdfs namenode -format'
  when: ansible_hostname in groups['nn1']

# 6. Start the namenode on nn1
- name: start namenode
  shell: 'source {{ env_file }} ; nohup hdfs --daemon stop namenode ; nohup hdfs --daemon start namenode'
  when: ansible_hostname in groups['nn1']

# 7. nn1's namenode must be running before nn2 copies its metadata
- name: copy meta data
  shell: 'source {{ env_file }} && nohup hdfs namenode -bootstrapStandby'
  when: ansible_hostname in groups['nn2']

# 8. Format zkfc on nn1
- name: format zkfc
  shell: 'source {{ env_file }} && nohup echo y | hdfs zkfc -formatZK'
  when: ansible_hostname in groups['nn1']
```
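The ordering here matters: the JournalNodes must be up and empty before nn1 formats, nn1's NameNode must be running before nn2 bootstraps, and the zkfc format needs the ZooKeeper ensemble. Condensed, the manual equivalent of this play looks like:

```sh
# On hdp-05..07 (zk + jn groups):
zkServer.sh restart
hdfs --daemon start journalnode

# On hdp-01 (nn1):
hdfs namenode -format            # requires reachable, empty JournalNodes
hdfs --daemon start namenode

# On hdp-02 (nn2):
hdfs namenode -bootstrapStandby  # copies nn1's metadata; nn1 must be up

# On hdp-01 (nn1):
hdfs zkfc -formatZK              # creates the HA znode in ZooKeeper
```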

06-start-cluster.yml

```yaml
# Restart zookeeper
- name: start zookeeper
  shell: "source {{ env_file }} ; zkServer.sh restart"
  when: ansible_hostname in groups['zk']

# Start dfs
- name: start dfs
  shell: "source {{ env_file }} ; nohup stop-dfs.sh ; nohup start-dfs.sh"
  when: ansible_hostname in groups['nn1']

# Start yarn
- name: start yarn
  shell: "source {{ env_file }} ; nohup stop-yarn.sh ; nohup start-yarn.sh"
  when: ansible_hostname in groups['nn1']
```

vars

soft.yml

```yaml
# Cluster subnet
network: "192.186.10."

# Software installation path
soft_install_path: "/root/apps"

# hadoop tarball
hadoop_soft: "/root/soft/hadoop-3.1.3.tar.gz"

# hadoop home directory
hdp_home: "{{ soft_install_path }}/hadoop-3.1.3"

# hadoop configuration directory
hdp_conf: "{{ hdp_home }}/etc/hadoop"

# hadoop data directory
hdp_data: "/root/hdpdata"

# user that runs hadoop
hdp_user: "root"

# jdk tarball
jdk_soft: "/root/soft/jdk1.8.0.tar.gz"

# jdk home directory
jdk_home: "{{ soft_install_path }}/jdk1.8.0"

# zookeeper tarball
zookeeper_soft: "/root/soft/apache-zookeeper-3.5.8-bin.tar.gz"

# zookeeper installation directory
zk_home: "{{ soft_install_path }}/apache-zookeeper-3.5.8-bin"

# zookeeper runtime data directory
zk_data_dir: "/root/zkdata"

# environment variable file
env_file: "/root/.bashrc"
```

core.yml

```yaml
# hdfs cluster name
dfs_cluster_name: "mycluster"

# hadoop temp directory
tmp_dir: "/root/hdpdata/tmp"

# zookeeper cluster address
zk_cluster: "hdp-05:2181,hdp-06:2181,hdp-07:2181"
```

hdfs.yml

```yaml
# namenode name directory
name_dir: "/root/hdpdata/name"

# datanode data directory
data_dir: "/root/hdpdata/data"

# namenode logical names
nn_names: ["nn1","nn2"]

# namenode rpc addresses
nn_rpc_address: ["hdp-01:9000","hdp-02:9000"]

# namenode http addresses
nn_http_address: ["hdp-01:9870","hdp-02:9870"]

# where the NameNode's shared edits metadata is stored
edits_dir: "qjournal://hdp-05:8485;hdp-06:8485;hdp-07:8485/{{ dfs_cluster_name }}"

# where the JournalNodes store their data
jn_data_dir: "/root/hdpdata/journaldata"

# ssh private key location
pri_key: /root/.ssh/id_rsa

# sshfence connect timeout
ssh_fen_con_timeout: 3000
```

yarn.yml

```yaml
# yarn cluster id
yarn_cluster_id: yrc

# resourcemanager logical names
rm_names: ["rm1","rm2"]

# resourcemanager hostnames
rm_hostnames: ["hdp-03","hdp-04"]

# resourcemanager web addresses
rm_webapp_address: ["hdp-03:8088","hdp-04:8088"]

# environment variable whitelist
env_whitelist: ["JAVA_HOME","HADOOP_HOME"]
```

templates

1.hadoop-env.sh.j2

```sh
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
export HADOOP_HOME={{ hdp_home }}
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export JAVA_LIBRARY_PATH=$HADOOP_COMMON_LIB_NATIVE_DIR:$JAVA_LIBRARY_PATH
export HDFS_NAMENODE_USER={{ hdp_user }}
export HDFS_DATANODE_USER={{ hdp_user }}
export YARN_NODEMANAGER_USER={{ hdp_user }}
export YARN_RESOURCEMANAGER_USER={{ hdp_user }}
export HDFS_JOURNALNODE_USER={{ hdp_user }}
export HDFS_ZKFC_USER={{ hdp_user }}
export JAVA_HOME={{ jdk_home }}
```

2.core-site.xml.j2

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Cluster address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://{{ dfs_cluster_name }}/</value>
    </property>
    <!-- hadoop temp directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>{{ tmp_dir }}</value>
    </property>
    <!-- zookeeper address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>{{ zk_cluster }}</value>
    </property>
</configuration>
```
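With the variables from vars/core.yml substituted, the rendered core-site.xml comes out as:

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/hdpdata/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hdp-05:2181,hdp-06:2181,hdp-07:2181</value>
    </property>
</configuration>
```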

3.hdfs-site.xml.j2

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- hdfs nameservice; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>{{ dfs_cluster_name }}</value>
    </property>
    <!-- namenode logical names -->
    <property>
        <name>dfs.ha.namenodes.{{ dfs_cluster_name }}</name>
        <value>
        {%- for nn in nn_names -%}
            {%- set sep = ',' -%}
            {%- if loop.last -%}{%- set sep = '' -%}{%- endif -%}
            {{ nn }}{{ sep }}
        {%- endfor -%}
        </value>
    </property>
{% for nn in nn_names %}
    <!-- RPC address of {{ nn }} -->
    <property>
        <name>dfs.namenode.rpc-address.{{ dfs_cluster_name }}.{{ nn }}</name>
        <value>{{ nn_rpc_address[loop.index0] }}</value>
    </property>
{% endfor %}
{% for nn in nn_names %}
    <!-- http address of {{ nn }} -->
    <property>
        <name>dfs.namenode.http-address.{{ dfs_cluster_name }}.{{ nn }}</name>
        <value>{{ nn_http_address[loop.index0] }}</value>
    </property>
{% endfor %}
    <!-- name directory -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>{{ name_dir }}</value>
    </property>
    <!-- data directory -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>{{ data_dir }}</value>
    </property>
    <!-- where the NameNode's shared edits metadata lives on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>{{ edits_dir }}</value>
    </property>
    <!-- where the JournalNodes store data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>{{ jn_data_dir }}</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Failover proxy provider -->
    <property>
        <name>dfs.client.failover.proxy.provider.{{ dfs_cluster_name }}</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)</value>
    </property>
    <!-- sshfence requires passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>{{ pri_key }}</value>
    </property>
    <!-- sshfence connect timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>{{ ssh_fen_con_timeout }}</value>
    </property>
</configuration>
```

4.yarn-site.xml.j2

```xml
<?xml version="1.0"?>
<configuration>
    <!-- Enable RM high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>{{ yarn_cluster_id }}</value>
    </property>
    <!-- RM logical names -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>
        {%- for rm in rm_names -%}
            {%- set sep = ',' -%}
            {%- if loop.last -%}{%- set sep = '' -%}{%- endif -%}
            {{ rm }}{{ sep }}
        {%- endfor -%}
        </value>
    </property>
{% for rm in rm_names %}
    <!-- Address of {{ rm }} -->
    <property>
        <name>yarn.resourcemanager.hostname.{{ rm }}</name>
        <value>{{ rm_hostnames[loop.index0] }}</value>
    </property>
{% endfor %}
    <!-- Critical: configure these even though defaults exist -->
{% for rm in rm_names %}
    <!-- webapp address of {{ rm }} -->
    <property>
        <name>yarn.resourcemanager.webapp.address.{{ rm }}</name>
        <value>{{ rm_webapp_address[loop.index0] }}</value>
    </property>
{% endfor %}
    <!-- zookeeper cluster address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>{{ zk_cluster }}</value>
    </property>
    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- Enable automatic failover -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Store resourcemanager state in the zookeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <!-- Auxiliary service on the NodeManagers; must be mapreduce_shuffle to run MapReduce jobs -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- NodeManager environment variable whitelist (joined, since env_whitelist is a list) -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>{{ env_whitelist | join(',') }}</value>
    </property>
    <!-- Classpath for yarn applications -->
    <property>
        <name>yarn.application.classpath</name>
        <value>{{ hdp_classpath }}</value>
    </property>
    <!-- Let the NodeManager auto-detect memory and CPU -->
    <property>
        <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
        <value>true</value>
    </property>
</configuration>
```

5.mapred-site.xml.j2

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

6.workers.j2

```
{% for host in groups['dn'] %}
{{ host }}
{% endfor %}
```
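Rendered against the dn group from the inventory, the workers file is simply:

```
hdp-05
hdp-06
hdp-07
```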

VII. Usage

1. Run everything

  • View the hadoop_ha role playbook:

```
[root@hdp-01 ansible]# cat hadoop_ha.yml
- hosts: all
  roles:
    - { role: hadoop_ha }
```

  • Run every step from the beginning; suited to a freshly initialized environment:

```
[root@hdp-01 ansible]# ansible-playbook hadoop_ha.yml
```

2. Targeted runs

  • List all tags defined in the role's tasks:

```
[root@hdp-01 ~]# ansible-playbook --list-tags hadoop_ha.yml
[always, config-ssh, config-zk, copy-con-file, ini-ha, install-ha-jdk, install-soft, install-zookeeper, set-env, start-cluster]
```

  • A tag runs only the corresponding step, which is handy for re-running a single piece of functionality:

```
ansible-playbook -t config-ssh hadoop_ha.yml
```
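Tags can also be combined, e.g. after editing a template you can push the new config files and bounce the cluster in a single run:

```sh
# Re-render the config files on all hosts, then restart the services.
ansible-playbook -t copy-con-file,start-cluster hadoop_ha.yml
```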

VIII. Testing the cluster

1. Check the cluster processes

```
[root@hdp-01 ~]# ansible -m shell -a 'jps' hdp
hdp-02 | CHANGED | rc=0 >>
13909 Jps
11597 NameNode
11663 DFSZKFailoverController
hdp-04 | CHANGED | rc=0 >>
11219 Jps
9802 ResourceManager
hdp-03 | CHANGED | rc=0 >>
9827 ResourceManager
11436 Jps
hdp-01 | CHANGED | rc=0 >>
2882 Jps
1829 NameNode
1957 DFSZKFailoverController
hdp-05 | CHANGED | rc=0 >>
12560 Jps
10281 JournalNode
10026 QuorumPeerMain
10219 DataNode
10475 NodeManager
hdp-06 | CHANGED | rc=0 >>
10197 JournalNode
9942 QuorumPeerMain
10135 DataNode
12430 Jps
10399 NodeManager
hdp-07 | CHANGED | rc=0 >>
10112 DataNode
12518 Jps
9927 QuorumPeerMain
10375 NodeManager
```

2. Test mapreduce

1). Check the yarn cluster state

```
[root@hdp-02 ~]# yarn rmadmin -getAllServiceState
hdp-03:8033        active
hdp-04:8033        standby
```

2). Enter the examples directory

```
[root@hdp-01 ~]# cd /root/apps/hadoop-3.1.3/share/hadoop/mapreduce
```

3). Run the pi mapreduce example

```
[root@hdp-01 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.1.3.jar pi 3 5
```

4). Result

```
Estimated value of Pi is 3.73333333333333333333
```
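The estimate is crude because pi 3 5 uses only 3 map tasks with 5 samples each; the point of the test is that the job schedules and completes on YARN. More samples converge toward π, for example:

```sh
# 10 map tasks, 1000 samples each: slower, but a much closer estimate.
hadoop jar hadoop-mapreduce-examples-3.1.3.jar pi 10 1000
```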

3. Test hdfs high availability

1). Upload a file to hdfs

```
[root@hdp-01 ~]# hadoop fs -put /var/log/messages /
```

2). Find the host whose namenode is active and kill that namenode

```
[root@hdp-01 ~]# hdfs haadmin -getAllServiceState
hdp-01:9000        standby
hdp-02:9000        active

[root@hdp-02 ~]# jps
14020 Jps
11597 NameNode
11663 DFSZKFailoverController

[root@hdp-02 ~]# kill -9 11597
```

3). Check the state of nn1, i.e. the namenode on hdp-01

```
[root@hdp-01 ~]# hdfs haadmin -getServiceState nn1
active
```

4). List the file in hdfs again: it is still accessible, so the failover worked

```
[root@hdp-01 ~]# hadoop fs -ls /messages
-rw-r--r--   3 root supergroup     684483 2020-08-10 14:48 /messages
```

5). Restart the namenode that was just killed and check the cluster state again: hdp-02 is now standby

```
[root@hdp-02 ~]# hdfs --daemon start namenode
[root@hdp-02 ~]# hdfs haadmin -getAllServiceState
hdp-01:9000        active
hdp-02:9000        standby
```

Reference: https://cloud.tencent.com/developer/article/1676789
