Hadoop Cluster Installation - CDH5 (5-Server Cluster)


CDH5 package download: http://archive.cloudera.com/cdh5/

Architecture design:

Host planning:

IP               Host                        Deployed modules                  Processes
192.168.254.151  Hadoop-NN-01                NameNode, ResourceManager         NameNode, DFSZKFailoverController, ResourceManager
192.168.254.152  Hadoop-NN-02                NameNode, ResourceManager         NameNode, DFSZKFailoverController, ResourceManager
192.168.254.153  Hadoop-DN-01, Zookeeper-01  DataNode, NodeManager, Zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.254.154  Hadoop-DN-02, Zookeeper-02  DataNode, NodeManager, Zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.254.155  Hadoop-DN-03, Zookeeper-03  DataNode, NodeManager, Zookeeper  DataNode, NodeManager, JournalNode, QuorumPeerMain

Explanation of each process:

  • NameNode
  • ResourceManager
  • DFSZKFC: DFS Zookeeper Failover Controller; activates the standby NameNode on failover
  • DataNode
  • NodeManager
  • JournalNode: shared editlog service for the NameNodes (if NFS shared storage is used instead, this process and all of its startup-related configuration can be omitted)
  • QuorumPeerMain: the Zookeeper server process

Directory planning:

Name          Path
$HADOOP_HOME  /home/hadoopuser/hadoop-2.6.0-cdh5.6.0
Data          $HADOOP_HOME/data
Log           $HADOOP_HOME/logs

Cluster installation:

1. Disable the firewall (it can be configured properly later)

2. Install the JDK (omitted)

3. Modify the hostname and configure the hosts file (all 5 servers)

[root@Linux01 ~]# vim /etc/sysconfig/network
[root@Linux01 ~]# vim /etc/hosts
192.168.254.151 Hadoop-NN-01
192.168.254.152 Hadoop-NN-02
192.168.254.153 Hadoop-DN-01 Zookeeper-01
192.168.254.154 Hadoop-DN-02 Zookeeper-02
192.168.254.155 Hadoop-DN-03 Zookeeper-03
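Before moving on, it helps to confirm that every hostname resolves from each machine; a minimal check (the host list simply mirrors the /etc/hosts entries above):

for h in Hadoop-NN-01 Hadoop-NN-02 Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03; do
    # one ping per host; report reachability only
    ping -c 1 "$h" > /dev/null 2>&1 && echo "$h OK" || echo "$h UNREACHABLE"
done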

4. For security, create a dedicated Hadoop login user (all 5 servers)

[root@Linux01 ~]# useradd hadoopuser
[root@Linux01 ~]# passwd hadoopuser
[root@Linux01 ~]# su - hadoopuser    # switch to the new user

5. Configure passwordless SSH login (both NameNodes)

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-keygen    # generate the key pair
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@Hadoop-NN-01

-i specifies which identity (public key) file to install; here ~/.ssh/id_rsa.pub is the public key generated above.

Or, abbreviated:

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id Hadoop-NN-01              # or an IP such as 10.10.51.231; copies your public key to the remote server
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id "-p 6000 Hadoop-NN-01"    # use this form if SSH listens on a non-default port

Note: if you use a non-default SSH port, also update the Hadoop configuration file:

vi hadoop-env.sh
export HADOOP_SSH_OPTS="-p 6000"

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh Hadoop-NN-01            # verify (exit the session with exit or logout)
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh Hadoop-NN-01 -p 6000    # use this form with a non-default port
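It is also worth verifying from each NameNode that every host accepts key-based login without prompting; a sketch (assumes the default SSH port; add -p 6000 to the ssh options if you changed it):

for h in Hadoop-NN-01 Hadoop-NN-02 Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03; do
    # BatchMode=yes makes ssh fail instead of falling back to a password prompt
    ssh -o BatchMode=yes "$h" hostname || echo "$h still requires a password"
done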

6. Configure environment variables: vi ~/.bashrc, then source ~/.bashrc (all 5 servers)

[hadoopuser@Linux01 ~]$ vi ~/.bashrc

# hadoop cdh5
export HADOOP_HOME=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

[hadoopuser@Linux01 ~]$ source ~/.bashrc    # apply the changes
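A quick sanity check that the variables took effect (the expected output is indicative only):

[hadoopuser@Linux01 ~]$ which hadoop      # should resolve to $HADOOP_HOME/bin/hadoop
[hadoopuser@Linux01 ~]$ hadoop version    # should report 2.6.0-cdh5.6.0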

7. Install Zookeeper (the 3 DataNode hosts)

  Installation guide: http://www.cnblogs.com/hunttown/p/5807383.html

8. Install and configure Hadoop (install on one node only; distribute to the other nodes after configuration)

1. Unpack

2. Modify the configuration files

Config file                 Type             Description
hadoop-env.sh               Bash script      Hadoop runtime environment variables
core-site.xml               XML              Hadoop Core configuration, such as I/O settings
hdfs-site.xml               XML              HDFS daemon configuration: NN, JN, DN
yarn-env.sh                 Bash script      Yarn runtime environment variables
yarn-site.xml               XML              Yarn framework configuration
mapred-site.xml             XML              MapReduce property settings
capacity-scheduler.xml      XML              Yarn scheduler properties
container-executor.cfg      cfg              Yarn container configuration
mapred-queues.xml           XML              MapReduce queue settings
hadoop-metrics.properties   Java properties  Hadoop Metrics configuration
hadoop-metrics2.properties  Java properties  Hadoop Metrics2 configuration
slaves                      Plain text       List of DataNode hosts
exclude                     Plain text       List of DataNode hosts to remove (decommission)
log4j.properties            Java properties  System logging settings
configuration.xsl           XSL              Stylesheet for the XML configuration files

(1) Modify $HADOOP_HOME/etc/hadoop/hadoop-env.sh

#--------------------Java Env------------------------------
export JAVA_HOME="/usr/java/jdk1.8.0_73"

#--------------------Hadoop Env----------------------------
#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_PREFIX="/home/hadoopuser/hadoop-2.6.0-cdh5.6.0"

#--------------------Hadoop Daemon Options-----------------
# export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
# export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

#--------------------Hadoop Logs---------------------------
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

#--------------------SSH PORT-------------------------------
export HADOOP_SSH_OPTS="-p 6000"    # if you changed the SSH login port, you must set this


(2) Modify $HADOOP_HOME/etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Yarn needs fs.defaultFS to locate the NameNode URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
        <description>This value comes from the nameservice configured in hdfs-site.xml</description>
    </property>
    <!-- HDFS superuser group -->
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>zero</value>
    </property>
    <!-- ============================== Trash mechanism =============================== -->
    <property>
        <!-- How often the checkpointer running on the NameNode creates a checkpoint from Current; default 0, which falls back to fs.trash.interval -->
        <name>fs.trash.checkpoint.interval</name>
        <value>0</value>
    </property>
    <property>
        <!-- How many minutes before a checkpoint directory under .Trash is deleted; the server-side setting takes precedence over the client's; default 0, never delete -->
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>
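With fs.trash.interval=1440, a deleted file sits in its owner's trash for 24 hours before being purged, so it can still be recovered. As an illustration once the cluster is running (the file path is hypothetical):

# deletion moves the file into the trash instead of destroying it
hdfs dfs -rm /user/hadoopuser/example.txt
# within 24 hours it can be moved back out
hdfs dfs -mv /user/hadoopuser/.Trash/Current/user/hadoopuser/example.txt /user/hadoopuser/
# or bypass the trash entirely
hdfs dfs -rm -skipTrash /user/hadoopuser/example.txt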


(3) Modify $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Enable WebHDFS -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/name</value>
        <description>Local directory where the NameNode stores the name table (fsimage); adjust as needed</description>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
        <description>Local directory where the NameNode stores the transaction file (edits); adjust as needed</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/data</value>
        <description>Local directory where the DataNode stores blocks; adjust as needed</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Number of file replicas; the default is 3</description>
    </property>
    <!-- Block size -->
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
        <description>256 MB block size</description>
    </property>
    <!-- ======================================================================= -->
    <!-- HDFS high-availability configuration -->
    <!-- Logical nameservice name -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <!-- NameNode IDs; this version supports at most two NameNodes -->
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID], RPC addresses -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>Hadoop-NN-01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>Hadoop-NN-02:8020</value>
    </property>
    <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID], HTTP addresses -->
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>Hadoop-NN-01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>Hadoop-NN-02:50070</value>
    </property>
    <!-- ================== NameNode editlog synchronization ====================== -->
    <!-- Guarantees data recovery -->
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <!-- JournalNode server addresses; the QuorumJournalManager stores the editlog here -->
        <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>, ports match journalnode.rpc-address -->
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://Hadoop-DN-01:8485;Hadoop-DN-02:8485;Hadoop-DN-03:8485/mycluster</value>
    </property>
    <property>
        <!-- Directory where the JournalNode stores its data -->
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/jn</value>
    </property>
    <!-- ================== DataNode/client failover ====================== -->
    <property>
        <!-- Strategy DataNodes and clients use to identify and select the active NameNode -->
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- ================== NameNode fencing ====================== -->
    <!-- Prevents a stopped NameNode from coming back up after failover and leaving two active services -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoopuser/.ssh/id_rsa</value>
    </property>
    <property>
        <!-- Milliseconds after which fencing is considered failed -->
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <!-- ================== NameNode auto failover via ZKFC and Zookeeper ====================== -->
    <!-- Enable automatic failover based on Zookeeper and the ZKFC process, which monitors whether a NameNode has died -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <!-- <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value> -->
        <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
    </property>
    <property>
        <!-- Zookeeper session timeout, in milliseconds -->
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
</configuration>


(4) Modify $HADOOP_HOME/etc/hadoop/yarn-env.sh

#Yarn Daemon Options
#export YARN_RESOURCEMANAGER_OPTS
#export YARN_NODEMANAGER_OPTS
#export YARN_PROXYSERVER_OPTS
#export HADOOP_JOB_HISTORYSERVER_OPTS

#Yarn Logs
export YARN_LOG_DIR="/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/logs"


(5) Modify $HADOOP_HOME/etc/hadoop/mapred-site.xml

<configuration>
    <!-- JVM heap size -->
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1000m</value>
        <final>true</final>
        <description>final=true prevents users from overriding the JVM size</description>
    </property>
    <!-- MapReduce Applications -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory Server ============================================================== -->
    <!-- MapReduce JobHistory Server address; default port 10020 -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>0.0.0.0:10020</value>
    </property>
    <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
    </property>
</configuration>

Configuration for HBase:

<!-- For HBase: start -->
<property>
    <name>mapred.remote.os</name>
    <value>Linux</value>
</property>
<property>
    <name>mapreduce.app-submission.cross-platform</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.application.classpath</name>
    <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/lib/*,/usr/local/hbase/lib/*</value>
</property>
<!-- For HBase: end -->


Alternatively, the JVM settings can be written like this:

<property>
    <name>mapred.task.java.opts</name>
    <value>-Xmx2000m</value>
</property>
<property>
    <name>mapred.child.java.opts</name>
    <value>${mapred.task.java.opts} -Xmx1000m</value>
    <final>true</final>
    <description>When the same JVM arg appears twice, e.g. "-Xmx2000m -Xmx1000m", the later one wins, so "-Xmx1000m" is what actually takes effect.</description>
</property>

Alternatively, to size the map and reduce JVMs separately:

<property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512M</value>
</property>
<property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx1024M</value>
</property>
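Note that the JobHistory Server configured above is not started by any of the later steps; if you want the history web UI on port 19888, it has its own daemon script in $HADOOP_HOME/sbin (a sketch for Hadoop 2.x):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ mr-jobhistory-daemon.sh start historyserver
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ mr-jobhistory-daemon.sh stop historyserver    # stop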


(6) Modify $HADOOP_HOME/etc/hadoop/yarn-site.xml

<configuration>
    <!-- NodeManager configuration ================================================= -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <description>Address where the localizer IPC is.</description>
        <name>yarn.nodemanager.localizer.address</name>
        <value>0.0.0.0:23344</value>
    </property>
    <property>
        <description>NM Webapp address.</description>
        <name>yarn.nodemanager.webapp.address</name>
        <value>0.0.0.0:23999</value>
    </property>
    <!-- HA configuration =============================================================== -->
    <!-- Resource Manager Configs -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Use embedded automatic failover; in an HA setup it works with the ZKRMStateStore to handle fencing -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <!-- Cluster name, so HA elections map to the right cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- The RM id can be pinned per node here (optional):
    <property>
        <name>yarn.resourcemanager.ha.id</name>
        <value>rm2</value>
    </property>
    -->
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
    </property>
    <!-- ZKRMStateStore configuration -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <!-- <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value> -->
        <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <!-- <value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value> -->
        <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
    </property>
    <!-- RPC address clients use to reach the RM (applications manager interface) -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>Hadoop-NN-01:23140</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>Hadoop-NN-02:23140</value>
    </property>
    <!-- RPC address AMs use to reach the RM (scheduler interface) -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>Hadoop-NN-01:23130</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>Hadoop-NN-02:23130</value>
    </property>
    <!-- RM admin interface -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>Hadoop-NN-01:23141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>Hadoop-NN-02:23141</value>
    </property>
    <!-- RPC port NMs use to reach the RM -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>Hadoop-NN-01:23125</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>Hadoop-NN-02:23125</value>
    </property>
    <!-- RM web application addresses -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>Hadoop-NN-01:23188</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>Hadoop-NN-02:23188</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>Hadoop-NN-01:23189</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>Hadoop-NN-02:23189</value>
    </property>
</configuration>

Configuration for HBase:

<!-- For HBase: start -->
<property>
    <name>mapreduce.application.classpath</name>
    <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/lib/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/*,/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/lib/*,/usr/local/hbase/lib/*</value>
</property>
<!-- For HBase: end -->


(7) Modify $HADOOP_HOME/etc/hadoop/slaves

Hadoop-DN-01
Hadoop-DN-02
Hadoop-DN-03

3. Distribute the program

# My SSH login uses a custom port, hence -P 6000
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-NN-02:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-01:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-02:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-03:/home/hadoopuser
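The same distribution can be written as a loop, which is less error-prone as the node list grows (drop -P 6000 if you kept the default SSH port):

for h in Hadoop-NN-02 Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03; do
    scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 "hadoopuser@$h:/home/hadoopuser"
done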

4. Start HDFS

(1) Start the JournalNodes

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out

Verify the JournalNode:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
5652 QuorumPeerMain
9076 Jps
9029 JournalNode

Stop the JournalNode:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode
stopping journalnode

(2) Format the NameNode:

On node Hadoop-NN-01: hdfs namenode -format

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs namenode -format

(3) Synchronize the NameNode metadata:

Synchronize the metadata from Hadoop-NN-01 to Hadoop-NN-02.

This mainly means dfs.namenode.name.dir and dfs.namenode.edits.dir; you should also make sure the shared storage directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ scp -P 6000 -r data/ hadoopuser@Hadoop-NN-02:/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
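An alternative to copying the data directory by hand is HDFS's own bootstrap command, run on the standby while the active NameNode is up (a sketch; the Linux02 prompt is illustrative):

[hadoopuser@Linux02 hadoop-2.6.0-cdh5.6.0]$ hdfs namenode -bootstrapStandby    # run on Hadoop-NN-02; pulls the fsimage from the active NN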

(4) Initialize ZKFC

This creates the ZNode that records the HA state information.

On node Hadoop-NN-01: hdfs zkfc -formatZK

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs zkfc -formatZK
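To confirm the znode exists, you can inspect Zookeeper directly with the zkCli.sh that ships with it (a sketch; ZKFC creates a child of /hadoop-ha named after the nameservice):

[hadoopuser@Linux03 ~]$ zkCli.sh -server Hadoop-DN-01:2181
[zk: Hadoop-DN-01:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]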

(5) Start

Cluster start, on Hadoop-NN-01: start-dfs.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh

Per-process start:

<1> NameNode (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start namenode

<2> DataNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start datanode

<3> JournalNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start journalnode

<4> ZKFC (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start zkfc

(6) Verify

<1> Processes

NameNode: jps

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
9329 JournalNode
9875 NameNode
10155 DFSZKFailoverController
10223 Jps

DataNode: jps

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps
9498 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain

<2> Web pages:

Active node: http://192.168.254.151:50070
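Besides the web UI, the HA state can be queried on the command line (nn1/nn2 are the NameNode IDs from hdfs-site.xml; the output shown is what a healthy cluster would report):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn1
active
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn2
standby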

(7) Stop: stop-dfs.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh

5. Start Yarn

(1) Start

<1> Cluster start

Start Yarn on Hadoop-NN-01; the command lives in $HADOOP_HOME/sbin:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh

Start the RM on the standby, Hadoop-NN-02:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager

<2> Per-process start

ResourceManager (Hadoop-NN-01, Hadoop-NN-02): yarn-daemon.sh start resourcemanager

NodeManager (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): yarn-daemon.sh start nodemanager

(2) Verify

<1> Processes:

ResourceManager (the MR1 "JobTracker" role): Hadoop-NN-01, Hadoop-NN-02

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps
9329 JournalNode
9875 NameNode
10355 ResourceManager
10646 Jps
10155 DFSZKFailoverController

NodeManager (the MR1 "TaskTracker" role): Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps
9552 NodeManager
9680 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain

<2> Web pages

ResourceManager (Active): http://192.168.254.151:23188

ResourceManager (Standby): http://192.168.254.152:23188
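Likewise, the RM HA state can be checked from the CLI (rm1/rm2 match yarn.resourcemanager.ha.rm-ids in yarn-site.xml; output is indicative):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm1
active
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm2
standby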

(3) Stop

On Hadoop-NN-01: stop-yarn.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh

On Hadoop-NN-02: yarn-daemon.sh stop resourcemanager

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager


Appendix: Summary of common Hadoop commands

# Step 1: start Zookeeper
[hadoopuser@Linux01 ~]$ zkServer.sh start
[hadoopuser@Linux01 ~]$ zkServer.sh stop    # stop

# Step 2: start the JournalNodes (on the JournalNode hosts, Hadoop-DN-01/02/03)
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-dir/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode    # stop
stopping journalnode

# Step 3: start DFS
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh    # stop

# Step 4: start Yarn
# Start Yarn on Hadoop-NN-01
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh    # stop
# Start the standby RM on Hadoop-NN-02
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager    # stop

# If HBase is installed
# Start HBase's Thrift server on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase-daemon.sh start thrift
[hadoopuser@Linux01 bin]$ hbase-daemon.sh stop thrift    # stop
# Start HBase on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase/bin/start-hbase.sh
[hadoopuser@Linux01 bin]$ hbase/bin/stop-hbase.sh    # stop

# If RHive is installed
# Start Rserve on Hadoop-NN-01:
[hadoopuser@Linux01 ~]$ Rserve --RS-conf /usr/local/lib64/R/Rserv.conf
# to stop it, kill the process directly
# Start the Hive remote service on Hadoop-NN-01 (RHive connects to the hiveserver via thrift, so the background thrift service must be running):
[hadoopuser@Linux01 ~]$ nohup hive --service hiveserver2 &    # note: hiveserver2 here


Appendix: Common Hadoop environment variable configuration

# JAVA
export JAVA_HOME=/usr/java/jdk1.8.0_73
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# MYSQL
export PATH=/usr/local/mysql/bin:/usr/local/mysql/lib:$PATH

# Hive
export HIVE_HOME=/home/hadoopuser/hive
export PATH=$PATH:$HIVE_HOME/bin

# Hadoop
export HADOOP_HOME=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
export HADOOP_CONF_DIR=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop
export HADOOP_CMD=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/bin/hadoop
export HADOOP_STREAMING=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.6.0.jar
export JAVA_LIBRARY_PATH=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/lib/native/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# R
export R_HOME=/usr/local/lib64/R
export PATH=$PATH:$R_HOME/bin
export RHIVE_DATA=/usr/local/lib64/R/rhive/data
export CLASSPATH=.:/usr/local/lib64/R/library/rJava/jri
export LD_LIBRARY_PATH=/usr/local/lib64/R/library/rJava/jri
export RServe_HOME=/usr/local/lib64/R/library/Rserve

# thrift
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/

# HBase
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Zookeeper
export ZOOKEEPER_HOME=/home/hadoopuser/zookeeper-3.4.5-cdh5.6.0
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Sqoop2
export SQOOP2_HOME=/home/hadoopuser/sqoop2-1.99.5-cdh5.6.0
export CATALINA_BASE=$SQOOP2_HOME/server
export PATH=$PATH:$SQOOP2_HOME/bin

# Scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:${SCALA_HOME}/bin

# Spark
export SPARK_HOME=/home/hadoopuser/spark-1.5.0-cdh5.6.0
export PATH=$PATH:${SPARK_HOME}/bin

# Storm
export STORM_HOME=/home/hadoopuser/apache-storm-0.9.6
export PATH=$PATH:$STORM_HOME/bin

# kafka
export KAFKA_HOME=/home/hadoopuser/kafka_2.10-0.9.0.1
export PATH=$PATH:$KAFKA_HOME/bin

