
[Linux][Hadoop] Getting Hadoop up and running


The installation steps themselves are still to be written up. Once the Hadoop install is finished, it's time to run the relevant commands and get Hadoop going.


Start all the services with this command:

hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./sbin/start-all.sh

Of course, the hadoop-2.4.1/sbin directory holds many more start scripts:
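For reference, a representative listing of that directory from a Hadoop 2.4.x install (exact contents can vary slightly between releases, and Windows .cmd variants are omitted here):

hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ls sbin/
hadoop-daemon.sh    slaves.sh          start-dfs.sh   stop-all.sh       stop-yarn.sh
hadoop-daemons.sh   start-all.sh       start-yarn.sh  stop-balancer.sh  yarn-daemon.sh
mr-jobhistory-daemon.sh  start-balancer.sh  stop-dfs.sh    yarn-daemons.sh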

It contains a separate start command for each service, while start-all.sh starts all of them at once. Here is the content of that .sh file:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
# Note: the script declares itself deprecated and tells us to use
# start-dfs.sh and start-yarn.sh instead

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
# This sources hadoop/libexec/hadoop-config.sh, which is all path setup:
# CLASSPATH and the related exports

# What actually runs are the two scripts below, i.e. start-dfs.sh and
# start-yarn.sh; from now on, just run those two commands yourself

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
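Following the script's own deprecation notice, starting HDFS and YARN separately looks like this (same install layout as above):

hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./sbin/start-dfs.sh
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./sbin/start-yarn.sh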

After it finishes, run jps to check whether all the services have come up:
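On a healthy single-node setup like this one, jps should show something along these lines; the PIDs here are illustrative, only the six process names matter (Jps itself counts as one of the six):

hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ jps
3145 NameNode
3312 DataNode
3525 SecondaryNameNode
3676 ResourceManager
3780 NodeManager
4012 Jps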

Pay attention here: there must be six of them. When I first started the cluster I had only five; both web pages, http://192.168.1.107:50070/ and http://192.168.1.107:8088/, opened successfully, but the wordcount example failed to run. After tracking down the cause, I found that the DataNode service had not started.
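For completeness, one way to re-run the wordcount check once all six services are up; the /input and /output HDFS paths are just example names, not anything from the original post:

hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./bin/hdfs dfs -mkdir /input
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./bin/hdfs dfs -put etc/hadoop/*.xml /input
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /input /output
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./bin/hdfs dfs -cat /output/part-r-00000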

Below is the application status (the YARN page at http://192.168.1.107:8088/):

And below is the DFS health status (the page at http://192.168.1.107:50070/):

The page at http://192.168.1.107:50070/ also lets you browse Hadoop's startup and runtime logs, as follows:

This is where I found the cause of the problem. Opening the logs page lists a log file for each running service:

Opening datanode-ubuntu.log at that point showed the following exception:

2014-07-21 22:05:21,064 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/gz/hadoop-2.4.1/dfs/data/in_use.lock acquired by nodename 3312@ubuntu
2014-07-21 22:05:21,075 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /usr/local/gz/hadoop-2.4.1/dfs/data: namenode clusterID = CID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62; datanode clusterID = ID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
	at java.lang.Thread.run(Thread.java:722)
2014-07-21 22:05:21,084 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2014-07-21 22:05:21,102 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-07-21 22:05:23,103 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-21 22:05:23,106 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-21 22:05:23,112 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/

Searching on this error turned up the cause: formatting the namenode again after Hadoop has already been started leaves the datanode and the namenode with mismatched clusterIDs. Find the datanode and namenode directories configured in hadoop/etc/hadoop/hdfs-site.xml, open the ./current/VERSION file under each, and compare the clusterID values: they do not match. Change the datanode's clusterID to the namenode's value, restart, and the DataNode comes up normally.

Reference: http://www.cnblogs.com/kinglau/p/3796274.html
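The same fix as shell commands; a sketch assuming the data directories sit under /usr/local/gz/hadoop-2.4.1/dfs/ as in the log above. The dfs/name path is my assumption for the namenode side, and the sed one-liner stands in for editing VERSION by hand; use whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your hdfs-site.xml.

# Show both clusterIDs side by side (dfs/name is an assumed namenode path)
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ grep clusterID dfs/name/current/VERSION dfs/data/current/VERSION

# Copy the namenode's clusterID over the datanode's
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ sed -i "s/^clusterID=.*/clusterID=CID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62/" dfs/data/current/VERSION

# Restart HDFS and verify the DataNode stays up
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ ./sbin/stop-dfs.sh && ./sbin/start-dfs.sh
hadoop@ubuntu:/usr/local/gz/hadoop-2.4.1$ jps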

Reposted from: https://www.cnblogs.com/garinzhang/p/linux_hadoop_start_all.html
