Hadoop NameNode HA and ResourceManager HA

發(fā)布時(shí)間:2024/1/17 编程问答 39 豆豆
生活随笔 收集整理的這篇文章主要介紹了 hadoop NameNode HA 和ResouceManager HA 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

官網(wǎng)配置地址:

HDFS HA: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html

ResourceManager HA: http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

Install the JDK.

Disable the firewall.

hadoop自動(dòng)HA借助于zookeeper實(shí)現(xiàn),整體架構(gòu)如下:

m2 and m3, as the NameNode hosts, should be able to SSH to every other node without a password.

m4 and m5 should be able to SSH to m6, m7, and m8 without a password, as sketched below.
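
The original post does not show the SSH setup; a minimal sketch for one source/target pair, assuming the hadoop user and default key paths (repeat for each required pair):

# on m2, as the hadoop user (generate the key only once per source host)
ssh-keygen -t rsa
# copy the public key to each target node, e.g. m3
ssh-copy-id hadoop@m3
# verify that no password is prompted
ssh m3 hostname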

core-site.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/app/hadoop-2.7.3/tmp/data</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>m6:2181,m7:2181,m8:2181</value>
  </property>
</configuration>

hdfs-site.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn1</name>
    <value>m2:9820</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster.nn2</name>
    <value>m3:9820</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.nn1</name>
    <value>m2:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster.nn2</name>
    <value>m3:9870</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://m6:8485;m7:8485;m8:8485/cluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/app/hadoop-2.7.3/journalnode/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
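
Once this file is distributed, the HA settings can be sanity-checked on any node with the standard hdfs getconf tool (these commands are not in the original post; the expected output follows from the configuration above):

./bin/hdfs getconf -namenodes                   # should print: m2 m3
./bin/hdfs getconf -confKey dfs.nameservices    # should print: cluster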

yarn-site.xml configuration

<?xml version="1.0"?>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>m4</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>m5</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>m4:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>m5:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>m6:2181,m7:2181,m8:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

mapred-site.xml configuration

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

slaves file configuration (one host per line)

m6
m7
m8

Copy the Hadoop directory to m3, m4, m5, m6, m7, and m8:

scp -r hadoop-2.7.3/ m3:/home/hadoop/app/
scp -r hadoop-2.7.3/ m4:/home/hadoop/app/
scp -r hadoop-2.7.3/ m5:/home/hadoop/app/
scp -r hadoop-2.7.3/ m6:/home/hadoop/app/
scp -r hadoop-2.7.3/ m7:/home/hadoop/app/
scp -r hadoop-2.7.3/ m8:/home/hadoop/app/

ZooKeeper zoo.cfg configuration (on m6, m7, and m8)

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/hadoop/app/zookeeper-3.3.6/data
# the port at which the clients will connect
clientPort=2181
server.1=m6:2888:3888
server.2=m7:2888:3888
server.3=m8:2888:3888
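
The original post does not mention it, but each ZooKeeper server also needs a myid file in the dataDir configured above, matching its server.N entry:

echo 1 > /home/hadoop/app/zookeeper-3.3.6/data/myid    # on m6
echo 2 > /home/hadoop/app/zookeeper-3.3.6/data/myid    # on m7
echo 3 > /home/hadoop/app/zookeeper-3.3.6/data/myid    # on m8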


Startup order after configuration (a consolidated command sketch follows this list):

1. Start ZooKeeper on m6, m7, and m8: ./bin/zkServer.sh start

2. Start the JournalNodes on m6, m7, and m8: ./sbin/hadoop-daemon.sh start journalnode. The JournalNodes only need to be started manually the first time; on later runs they are started automatically when HDFS starts.

3. Format the NameNode on m2; after the format succeeds, copy the metadata to m3 (one way to do this is shown in the sketch after this list).

4. Format the ZKFC state in ZooKeeper (only needed once): ./bin/hdfs zkfc -formatZK

5. Start HDFS.

6. Start YARN.
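
The original post lists these steps mostly without commands. A minimal sketch of the whole sequence, assuming the hadoop-2.7.3 and zookeeper-3.3.6 layouts above and that each command is run from the corresponding install directory:

# 1. on m6, m7 and m8: start ZooKeeper and check the ensemble
./bin/zkServer.sh start
./bin/zkServer.sh status            # expect one leader and two followers

# 2. on m6, m7 and m8 (first startup only): start the JournalNodes
./sbin/hadoop-daemon.sh start journalnode

# 3. on m2: format and start the first NameNode
./bin/hdfs namenode -format
./sbin/hadoop-daemon.sh start namenode
#    on m3: pull the metadata from m2 (an alternative to copying the name directory by hand)
./bin/hdfs namenode -bootstrapStandby

# 4. on m2: initialize the HA state in ZooKeeper (only once)
./bin/hdfs zkfc -formatZK

# 5. on m2: start HDFS (NameNodes, DataNodes, JournalNodes and ZKFCs)
./sbin/start-dfs.sh

# 6. on m4: start YARN; in Hadoop 2.x the standby ResourceManager on m5
#    must be started separately
./sbin/start-yarn.sh
./sbin/yarn-daemon.sh start resourcemanager    # on m5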


驗(yàn)證:

通過(guò)kill命令殺死namenode進(jìn)程觀察namenode節(jié)點(diǎn)是否會(huì)自動(dòng)切換

Use yarn rmadmin -getServiceState rm1 (and rm2) to check which ResourceManager is active and which is standby.
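
As an additional check, not part of the original post, one can submit the example job shipped with Hadoop to confirm that YARN still accepts work after a ResourceManager failover (the jar path assumes the hadoop-2.7.3 binary distribution used above):

./bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10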


單獨(dú)啟動(dòng)namenode:?./sbin/hadoop-daemon.sh start namenode


轉(zhuǎn)載于:https://www.cnblogs.com/heml/p/5997190.html
