CDH 5.3.6 Cluster Installation


This installs CDH 5.3.6 on top of an existing environment where Apache Hadoop 2.7.3 had already been installed successfully.

Existing environment:

JDK version:

java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

Passwordless SSH is already set up between the three machines:

192.168.1.30 master
192.168.1.40 saver1
192.168.1.50 saver2
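For reference, a minimal sketch of how passwordless SSH is typically set up (run as the hadoop user on master, assuming the hostnames above resolve on every node):

# Generate a key pair on master (empty passphrase)
ssh-keygen -t rsa -P ''
# Push the public key to every node, including master itself
ssh-copy-id hadoop@master
ssh-copy-id hadoop@saver1
ssh-copy-id hadoop@saver2
# Verify: this should log in without a password prompt
ssh hadoop@saver1 hostname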

Installation packages on hand:

cdh5.3.6-snappy-lib-natirve.tar.gz
hadoop-2.5.0-cdh5.3.6.tar.gz
hive-0.13.1-cdh5.3.6.tar.gz
sqoop-1.4.5-cdh5.3.6.tar.gz

Installation steps:

1. Upload the four packages above to the soft directory

2. Set permissions

[hadoop@master soft]$ chmod 755 *

3. Extract to the target directory

[hadoop@master soft]$ tar -xvf hadoop-2.5.0-cdh5.3.6.tar.gz -C /home/hadoop/CDH5.3.6
[hadoop@master soft]$ tar -xvf hive-0.13.1-cdh5.3.6.tar.gz -C /home/hadoop/CDH5.3.6
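Note that tar -C does not create the destination directory, so /home/hadoop/CDH5.3.6 has to exist before extracting; a small sketch:

# Create the destination directory before extracting (tar -C will not create it)
mkdir -p /home/hadoop/CDH5.3.6
# Afterwards, confirm the extracted directories are in place
ls /home/hadoop/CDH5.3.6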

4. Configure hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8

5. Configure mapred-env.sh

export JAVA_HOME=/usr/local/jdk1.8
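If JAVA_HOME is not exported in the shell environment, yarn-env.sh usually needs the same setting; a hedged addition using the same JDK path as above:

# etc/hadoop/yarn-env.sh -- only needed if JAVA_HOME is not picked up from the environment
export JAVA_HOME=/usr/local/jdk1.8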

6. Configure core-site.xml

<configuration>
    <!-- Default HDFS filesystem URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.1.30:9000</value>
    </property>
    <!-- Size of read/write buffer used in SequenceFiles. -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <!-- Hadoop temp directory; create it yourself -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/data/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
</configuration>
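The hadoop.tmp.dir path configured above must exist and be writable by the hadoop user; a quick sketch that creates it up front:

# Pre-create the Hadoop temp directory referenced by hadoop.tmp.dir
mkdir -p /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/data/tmp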

7. Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.1.30:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/hdfs/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
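The NameNode and DataNode directories can likewise be created up front so the format and startup steps do not fail on missing paths or permissions; a sketch:

# Pre-create the NameNode and DataNode storage directories
mkdir -p /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/hdfs/name
mkdir -p /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/hdfs/data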

8. Configure slaves

master
saver1
saver2
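One way to write the file in a single step, assuming the current directory is the Hadoop home so the file ends up at etc/hadoop/slaves:

# Write the worker list
cat > etc/hadoop/slaves <<EOF
master
saver1
saver2
EOF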

9. Copy to the other nodes

scp -r /home/hadoop/CDH5.3.6 hadoop@saver1:/home/hadoop/
scp -r /home/hadoop/CDH5.3.6 hadoop@saver2:/home/hadoop/

10. Format the NameNode

[hadoop@master hadoop-2.5.0-cdh5.3.6]$ bin/hdfs namenode -format

An error occurred:

Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode

Solution:

The generated classpath was missing HADOOP_HOME/share/hadoop/hdfs/*, so I appended the following to libexec/hadoop-config.sh under the Hadoop installation directory:

# The classpath it builds has no entry for the HDFS jars, so add one manually
CLASSPATH=${CLASSPATH}:$HADOOP_HDFS_HOME'/share/hadoop/hdfs/*'
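After adding the line, the effective classpath can be checked; if the fix took effect, the share/hadoop/hdfs entries should show up in the output:

# Verify the HDFS jars are now on the classpath
bin/hadoop classpath | tr ':' '\n' | grep hdfs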

The format then completes successfully.

11. Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.1.30:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.1.30:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.1.30:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.1.30:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.1.30:8088</value>
    </property>
</configuration>

12. Configure mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.1.30:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.1.30:19888</value>
    </property>
</configuration>
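One note: in Hadoop 2.x mapred-site.xml usually ships only as a template, so it has to be created before editing; a sketch, assuming the standard template name:

# mapred-site.xml is shipped as a template in Hadoop 2.x
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

Since yarn-site.xml and mapred-site.xml were edited after the copy in step 9, they also need to be re-copied to saver1 and saver2 (the same scp commands work).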

13. Reboot the system

14. Start the services:

[hadoop@master hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-master.out
[hadoop@master hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-master.out
[hadoop@master hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/logs/yarn-hadoop-resourcemanager-master.out
[hadoop@master hadoop-2.5.0-cdh5.3.6]$ sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/logs/yarn-hadoop-nodemanager-master.out
[hadoop@master hadoop-2.5.0-cdh5.3.6]$ sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/CDH5.3.6/hadoop-2.5.0-cdh5.3.6/logs/mapred-hadoop-historyserver-master.out
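The commands above only start daemons on master; the DataNode and NodeManager also have to be started on saver1 and saver2, or the bundled cluster-wide scripts can be used instead. A sketch using those scripts, run from the Hadoop home on master:

# Start HDFS and YARN on every node listed in slaves
sbin/start-dfs.sh
sbin/start-yarn.sh
# The JobHistory server still has to be started separately
sbin/mr-jobhistory-daemon.sh start historyserver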

15. Check the running services

[hadoop@master hadoop-2.5.0-cdh5.3.6]$ jps
3269 NodeManager
3414 JobHistoryServer
3447 Jps
2922 DataNode
3021 ResourceManager
2831 NameNode
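Beyond jps, the web UIs are a quick sanity check; with the configuration above the following addresses should respond (50070 is the default NameNode UI port in this Hadoop version):

# NameNode web UI (default port 50070)
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.30:50070
# ResourceManager web UI (yarn.resourcemanager.webapp.address)
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.30:8088
# JobHistory web UI (mapreduce.jobhistory.webapp.address)
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.30:19888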

Reposted from: https://www.cnblogs.com/hello-wei/p/10964561.html
