

After starting a real HDFS cluster, the master's jps shows no DataNode

Published: 2023/12/31 · 豆豆

This article, collected and organized by 生活随笔, walks through why jps on the master showed no DataNode after starting a real HDFS cluster, and is shared here for reference.

Environment:

A real distributed HDFS cluster built from a desktop and a laptop (with only two machines, the Spark cluster running on top of it is effectively pseudo-distributed).

Symptom:

On the cluster built from the laptop and the desktop, even after carefully checking the setup against various tutorials, the jps output on the master never showed a DataNode process.
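To illustrate what the faulty state looks like, the snippet below filters a sample of jps-style output for a DataNode line. The process IDs and the process list are made-up sample text, not a live jps call:

```shell
# Hypothetical jps output on the master (sample text only):
sample_jps_output='12001 NameNode
12345 SecondaryNameNode
12789 ResourceManager
13001 Jps'

# Check whether any line ends in "DataNode" (note: "SecondaryNameNode"
# must not match, hence the anchored pattern):
if echo "$sample_jps_output" | grep -q 'DataNode$'; then
  echo "DataNode is running"
else
  echo "DataNode is missing"
fi
# → DataNode is missing
```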


Troubleshooting:

The startup entry point is /home/appleyuchi/bigdata/hadoop-2.7.7/sbin/start-all.sh, whose contents are:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
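One detail worth noting in this script is how HADOOP_LIBEXEC_DIR is resolved: the `${VAR:-default}` parameter expansion uses the variable if it is set, and falls back to the default otherwise. A minimal demonstration (the paths below are hypothetical, not the article's installation):

```shell
# ${VAR:-default}: use $VAR if set and non-empty, else the fallback.
unset HADOOP_LIBEXEC_DIR
DEFAULT_LIBEXEC_DIR=/opt/hadoop/libexec          # hypothetical default
echo "${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}"
# → /opt/hadoop/libexec

HADOOP_LIBEXEC_DIR=/custom/libexec               # hypothetical override
echo "${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}"
# → /custom/libexec
```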

start-all.sh delegates HDFS startup to start-dfs.sh, so look at that script next:

#!/usr/bin/env bash

# (Apache License 2.0 header, identical to the one in start-all.sh above, omitted)

# Start hadoop dfs daemons.
# Optinally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh

# get arguments
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      echo $usage
      exit 1
    ;;
  esac
fi

#Add other possible options
nameStartOpt="$nameStartOpt $@"

#---------------------------------------------------------
# namenodes

NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)

echo "Starting namenodes on [$NAMENODES]"

"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt

#---------------------------------------------------------
# datanodes (using default slaves file)

if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to start secure cluster, skipping datanodes. " \
    "Run start-secure-dns.sh as root to complete startup."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode $dataStartOpt
fi

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$SECONDARY_NAMENODES" \
      --script "$bin/hdfs" start secondarynamenode
fi

#---------------------------------------------------------
# quorumjournal nodes (if any)

SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)

case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled
AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" \
    --script "$bin/hdfs" start zkfc
fi

# eof
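As an aside, the journal-node block of the script extracts bare hostnames from a qjournal:// URI with a small sed pipeline. The snippet below runs that exact sed expression on a made-up URI (node names and port are placeholders, not from the article's cluster):

```shell
# Sample HA shared-edits URI (hypothetical hostnames):
SHARED_EDITS_DIR='qjournal://node1:8485;node2:8485/mycluster'

# Strip the scheme and path, split on ';', drop the ':port' suffixes:
JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
echo "$JOURNAL_NODES"
# → node1 node2
```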

The datanode section refers to the "default slaves file", which suggests that at startup this file determines which nodes get a DataNode.
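A minimal sketch of what hadoop-daemons.sh (via slaves.sh) does with that file: read one hostname per line and run the daemon-start command on each host over ssh. The file path and the hostnames below are demo stand-ins, and the ssh call is only printed, not executed:

```shell
# Demo stand-in for etc/hadoop/slaves:
HADOOP_SLAVES=./slaves_demo
printf 'Desktop\nLaptop\n' > "$HADOOP_SLAVES"

# One hostname per line; each listed host gets a DataNode at startup.
for slave in $(cat "$HADOOP_SLAVES"); do
  # the real script would run: ssh $slave "hadoop-daemon.sh start datanode"
  echo "would start a DataNode on: $slave"
done
```

This is why a host missing from the file never gets a DataNode, no matter how correct the rest of its configuration is.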

#---------------------------------------------------------------------------------------------------------------------------------

Final fix:

Edit the file /home/appleyuchi/bigdata/hadoop-2.7.7/etc/hadoop/slaves

from the original:

Laptop

to:

Desktop
Laptop

Here Desktop is the master's hostname and Laptop is the slave's hostname.
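A quick sanity check after the fix: every host that should run a DataNode must appear in the slaves file, one hostname per line. The snippet creates a sample copy of the fixed file to illustrate; on the real cluster the file lives under etc/hadoop/:

```shell
# Sample copy of the fixed slaves file (real path: etc/hadoop/slaves):
slaves_file=./slaves_fixed_demo
printf 'Desktop\nLaptop\n' > "$slaves_file"

missing=0
for host in Desktop Laptop; do
  # -x requires the whole line to match, so partial names don't pass
  grep -qx "$host" "$slaves_file" || { echo "$host missing"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "slaves file OK"
# → slaves file OK
```

After saving the real file, restart HDFS (stop-dfs.sh, then start-dfs.sh) and re-run jps on both machines; the master should now show a DataNode process.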

Summary

The above is the full walkthrough of "after starting a real HDFS cluster, the master's jps shows no DataNode"; hopefully it helps you resolve the same problem.