Five-Node Hadoop HA Installation Tutorial:
Role layout (ZooKeeper's QuorumPeerMain runs on all five nodes, matching zoo.cfg below):
Master1: NameNode, DataNode, JournalNode, NodeManager, ResourceManager, DFSZKFailoverController, QuorumPeerMain
Master2: NameNode, DataNode, JournalNode, NodeManager, ResourceManager, DFSZKFailoverController, QuorumPeerMain
Slave1: DataNode, JournalNode, NodeManager, QuorumPeerMain
Slave2: DataNode, JournalNode, NodeManager, QuorumPeerMain
Slave3: DataNode, JournalNode, NodeManager, QuorumPeerMain
1. Install the JDK
Configure the JAVA environment variables:
export JAVA_HOME=/home/zhouwang/jdk1.8.0_151
export PATH=$JAVA_HOME/bin:$PATH
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
2. Configure the /etc/hosts file on each node:
192.168.71.128 master1
192.168.71.132 master2
192.168.71.129 slave1
192.168.71.130 slave2
192.168.71.131 slave3
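A quick way to sanity-check the mapping is to look each hostname up directly in the hosts file; the `resolve_from_hosts` helper below is ours, not a system tool:

```shell
# resolve_from_hosts FILE HOST: print the first IP mapped to HOST in a
# hosts-format file (first field = IP, second field = hostname).
resolve_from_hosts() {
    awk -v host="$2" '$2 == host {print $1; exit}' "$1"
}

# Check every cluster hostname against /etc/hosts:
for h in master1 master2 slave1 slave2 slave3; do
    echo "$h -> $(resolve_from_hosts /etc/hosts "$h")"
done
```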
3. Configure passwordless SSH login
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys zhouwang@master1:~/.ssh
authorized_keys must not be group- or world-writable; 600 (or at most 644) is safe.
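The steps above can be gathered into one sketch to run on each node in turn. It assumes the zhouwang account and the /etc/hosts entries from step 2; if every node must be able to log in to every other one, generate a key on each node first and merge all the public keys before pushing:

```shell
# A sketch of passwordless-SSH setup on one node; hostnames and the
# zhouwang user are assumptions from earlier steps.
mkdir -p ~/.ssh

# Generate a passphrase-less key pair if none exists yet.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q

# Authorize our own key; sshd ignores an authorized_keys file that is
# group- or world-writable, so tighten the mode.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Push the merged file to the remaining nodes.
for h in master2 slave1 slave2 slave3; do
    scp ~/.ssh/authorized_keys "zhouwang@$h:~/.ssh/" || echo "could not reach $h"
done
```

Afterwards, `ssh master2` from master1 should log in without prompting for a password.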
4. Install ZooKeeper
Rename zoo_sample.cfg in the conf directory to zoo.cfg and set the following:
clientPort=2181
dataDir=/home/zhouwang/zookeeper/data
dataLogDir=/home/zhouwang/zookeeper/log
server.0=master1:2888:3888
server.1=master2:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
server.4=slave3:2888:3888
Under the zookeeper directory, create the data and log directories, plus the myid file inside data/ (dataDir):
mkdir data
mkdir log
vim data/myid and enter this node's id: the value must match the node's server.N line in zoo.cfg (master1 → 0, master2 → 1, slave1 → 2, slave2 → 3, slave3 → 4)
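Because dataDir is /home/zhouwang/zookeeper/data, myid must live inside that data directory. A sketch that derives each node's id from its hostname (the `myid_for` helper is ours):

```shell
# Map each hostname to its server.N number from zoo.cfg.
myid_for() {
    case "$1" in
        master1) echo 0 ;;
        master2) echo 1 ;;
        slave1)  echo 2 ;;
        slave2)  echo 3 ;;
        slave3)  echo 4 ;;
        *)       return 1 ;;
    esac
}

# On each node: create the data/log dirs and write the node's own id.
if id=$(myid_for "$(hostname)"); then
    mkdir -p ~/zookeeper/data ~/zookeeper/log
    echo "$id" > ~/zookeeper/data/myid
fi
```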
Configure the environment variables as follows:
ZOOKEEPER
export ZOOKEEPER_HOME=/home/zhouwang/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
5. Install Hadoop
Modify the following configuration files under etc/hadoop:
(1) core-site.xml
<property> <name>fs.defaultFS</name> <value>hdfs://master/</value>
</property>
<property> <name>hadoop.tmp.dir</name> <value>/home/zhouwang/hadoop/tmp</value>
</property>
<property> <name>ha.zookeeper.quorum</name> <value>master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181</value>
</property>
(2) hdfs-site.xml
<property> <name>dfs.namenode.name.dir</name> <value>/home/zhouwang/hadoop/dfs/name</value>
</property>
<property> <name>dfs.datanode.data.dir</name> <value>/home/zhouwang/hadoop/dfs/data</value>
</property>
<property> <name>dfs.replication</name> <value>3</value>
</property> <!-- HDFS HA configuration -->
<!-- The HDFS nameservice; must match the one used in core-site.xml -->
<property> <name>dfs.nameservices</name> <value>master</value>
</property>
<!-- Logical names of the two NameNodes under the nameservice -->
<property> <name>dfs.ha.namenodes.master</name> <value>nn1,nn2</value>
</property> <!-- nn1, nn2 RPC addresses -->
<property> <name>dfs.namenode.rpc-address.master.nn1</name> <value>master1:9000</value>
</property>
<property> <name>dfs.namenode.rpc-address.master.nn2</name> <value>master2:9000</value>
</property> <!-- nn1, nn2 HTTP addresses -->
<property> <name>dfs.namenode.http-address.master.nn1</name> <value>master1:50070</value>
</property>
<property> <name>dfs.namenode.http-address.master.nn2</name> <value>master2:50070</value>
</property> <!--========= NameNode synchronization =========-->
<!-- Guarantees edit-log availability for recovery -->
<property> <name>dfs.journalnode.http-address</name> <value>0.0.0.0:8480</value>
</property>
<property> <name>dfs.journalnode.rpc-address</name> <value>0.0.0.0:8485</value>
</property>
<property> <!-- Where the NameNode's edit log is stored on the JournalNodes --> <name>dfs.namenode.shared.edits.dir</name> <value>qjournal://master1:8485;master2:8485;slave1:8485;slave2:8485;slave3:8485/master</value>
</property> <property> <!-- Local directory where each JournalNode stores its data --> <name>dfs.journalnode.edits.dir</name> <value>/home/zhouwang/hadoop/dfs/journal</value>
</property>
<property> <!-- Proxy provider that clients use to locate the active NameNode on failover --> <name>dfs.client.failover.proxy.provider.master</name> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property> <!--========= NameNode fencing =========-->
<!-- Fencing methods; to use several, put each method on its own line -->
<property> <name>dfs.ha.fencing.methods</name> <value>sshfence
shell(/bin/true)</value>
</property>
<!-- The sshfence method requires passwordless SSH -->
<property> <name>dfs.ha.fencing.ssh.private-key-files</name> <value>/home/zhouwang/.ssh/id_rsa</value>
</property>
<!-- sshfence SSH connection timeout, in milliseconds -->
<property> <name>dfs.ha.fencing.ssh.connect-timeout</name> <value>30000</value>
</property> <!-- Enable automatic failover: ZKFC processes backed by ZooKeeper monitor whether a NameNode has died -->
<property> <name>dfs.ha.automatic-failover.enabled</name> <value>true</value>
</property>
(3) mapred-site.xml
<!-- Run the MapReduce framework on YARN -->
<property> <name>mapreduce.framework.name</name> <value>yarn</value>
</property>
<property> <name>mapreduce.jobhistory.address</name> <value>master1:10020</value>
</property>
<property> <name>mapreduce.jobhistory.webapp.address</name> <value>master1:19888</value>
</property>
(4) yarn-site.xml
<!-- Auxiliary service run by each NodeManager; must be mapreduce_shuffle for MapReduce jobs to work -->
<property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value>
</property>
<property> <name>yarn.resourcemanager.connect.retry-interval.ms</name> <value>2000</value>
</property>
<property> <name>yarn.resourcemanager.ha.enabled</name> <value>true</value>
</property>
<!-- Cluster id for the RM pair -->
<property> <name>yarn.resourcemanager.cluster-id</name> <value>cluster</value>
</property>
<!-- Logical ids of the two ResourceManagers -->
<property> <name>yarn.resourcemanager.ha.rm-ids</name> <value>rm1,rm2</value>
</property>
<!-- RM host 1 -->
<property> <name>yarn.resourcemanager.hostname.rm1</name> <value>master1</value>
</property>
<!-- RM host 2 -->
<property> <name>yarn.resourcemanager.hostname.rm2</name> <value>master2</value>
</property>
<!-- Automatic RM failover -->
<property> <name>yarn.resourcemanager.ha.automatic-failover.enabled</name> <value>true</value>
</property>
<!-- Automatic RM state recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name> <value>true</value>
</property>
<!-- RM state store: either in-memory (MemStore) or ZooKeeper-based (ZKStore) -->
<property> <name>yarn.resourcemanager.store.class</name> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- ZooKeeper ensemble addresses -->
<property> <name>yarn.resourcemanager.zk-address</name> <value>master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181</value>
</property>
<!-- Scheduler address for each RM -->
<property> <name>yarn.resourcemanager.scheduler.address.rm1</name> <value>master1:8030</value>
</property>
<property> <name>yarn.resourcemanager.scheduler.address.rm2</name> <value>master2:8030</value>
</property>
<!-- NodeManagers exchange information with the RM through this address -->
<property> <name>yarn.resourcemanager.resource-tracker.address.rm1</name> <value>master1:8031</value>
</property>
<property> <name>yarn.resourcemanager.resource-tracker.address.rm2</name> <value>master2:8031</value>
</property>
<!-- Clients submit applications to the RM through this address -->
<property> <name>yarn.resourcemanager.address.rm1</name> <value>master1:8032</value>
</property>
<property> <name>yarn.resourcemanager.address.rm2</name> <value>master2:8032</value>
</property>
<!-- Administrators send management commands to the RM through this address -->
<property> <name>yarn.resourcemanager.admin.address.rm1</name> <value>master1:8033</value>
</property>
<property> <name>yarn.resourcemanager.admin.address.rm2</name> <value>master2:8033</value>
</property>
<!-- RM HTTP address, for viewing cluster status in a browser -->
<property> <name>yarn.resourcemanager.webapp.address.rm1</name> <value>master1:8088</value>
</property>
<property> <name>yarn.resourcemanager.webapp.address.rm2</name> <value>master2:8088</value>
</property>
(5) slaves
The slaves file takes one bare hostname per line (no IPs, no localhost); every node runs a DataNode in this layout:
master1
master2
slave1
slave2
slave3
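Equivalently, the slaves file can be written in one step. The path assumes HADOOP_HOME=/home/zhouwang/hadoop as in the rest of this tutorial:

```shell
# slaves lists the hosts that run the DataNode/NodeManager daemons,
# one bare hostname per line.
HADOOP_HOME=${HADOOP_HOME:-$HOME/hadoop}
mkdir -p "$HADOOP_HOME/etc/hadoop"
cat > "$HADOOP_HOME/etc/hadoop/slaves" <<'EOF'
master1
master2
slave1
slave2
slave3
EOF
```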
Set the environment variables:
HADOOP
export HADOOP_HOME=/home/zhouwang/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
Distribute the hadoop directory to every other node:
scp -r ~/hadoop zhouwang@XXX:~/hadoop
6. Starting the cluster for the first time
On every ZooKeeper node, start the service: zkServer.sh start
Check with zkServer.sh status; a node reporting follower or leader has started successfully.
On every JournalNode host, start the JournalNode service: hadoop-daemon.sh start journalnode
On master1, format the NameNode: hdfs namenode -format
Then start the NameNode service: hadoop-daemon.sh start namenode
On master2, sync master1's metadata: hdfs namenode -bootstrapStandby
Then start the NameNode service on master2: hadoop-daemon.sh start namenode
On master1, format the ZKFC state in ZooKeeper:
hdfs zkfc -formatZK
On master1 and master2, run hadoop-daemon.sh start zkfc to launch the DFSZKFailoverController service. # ZKFC monitors which NameNode is active and which is standby
On master1, run hadoop-daemons.sh start datanode to start the DataNode service on every data node.
On master1, run start-yarn.sh to start the YARN services.
Installation is now complete.
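The first-start sequence can be collected into one ordered script sketch. It defaults to a dry run that only prints the plan; the `run` helper and its node annotations are ours, and each command must actually be executed on the node(s) named in its label (set DRY_RUN=0 there):

```shell
# Dry-run by default: print each step instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {  # run WHERE CMD... : execute CMD, or print it when dry-running
    where=$1; shift
    if [ "$DRY_RUN" = 1 ]; then echo "[$where] $*"; else "$@"; fi
}

run "all 5 nodes"       zkServer.sh start
run "all 5 nodes"       zkServer.sh status           # expect leader/follower
run "all 5 nodes"       hadoop-daemon.sh start journalnode
run "master1"           hdfs namenode -format
run "master1"           hadoop-daemon.sh start namenode
run "master2"           hdfs namenode -bootstrapStandby
run "master2"           hadoop-daemon.sh start namenode
run "master1"           hdfs zkfc -formatZK
run "master1 + master2" hadoop-daemon.sh start zkfc
run "master1"           hadoop-daemons.sh start datanode
run "master1"           start-yarn.sh
```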
7. Stopping the cluster for the first time
First stop HDFS: stop-dfs.sh
Then stop YARN: stop-yarn.sh
After that, subsequent starts and stops can simply use start-all.sh and stop-all.sh.
8. HDFS administration commands
hdfs dfsadmin -report    # show DataNode status
hdfs haadmin -getServiceState nn1    # show a NameNode's HA state (active/standby)
hdfs haadmin -transitionToActive/-transitionToStandby --forcemanual nn1    # force a NameNode into the active or standby state
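As a convenience, a small helper (ours, not part of Hadoop) can report which NameNode id is currently active by probing both ids from hdfs-site.xml:

```shell
# active_nn: print the NameNode id (nn1 or nn2) whose state is "active";
# returns non-zero if neither reports active.
active_nn() {
    for id in nn1 nn2; do
        if [ "$(hdfs haadmin -getServiceState "$id" 2>/dev/null)" = "active" ]; then
            echo "$id"
            return 0
        fi
    done
    return 1
}
```

Usage: `active_nn` prints e.g. `nn1`, which is handy in scripts that must target the active NameNode's host.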