No DataNode in the master's jps output after starting a real HDFS cluster
Environment:
A real distributed HDFS cluster built from a desktop PC and a laptop (since there are only two machines, the Spark cluster running on top of it counts as pseudo-distributed).
Problem:
On this laptop-plus-desktop cluster, even after carefully cross-checking various tutorials, DataNode never appears in the master's jps output.
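For context, this is roughly what jps would be expected to show on each machine of a two-node cluster where the master also runs a DataNode (the process IDs and exact process list below are illustrative, not taken from the original post); in this failure, the DataNode line was missing on the master:

```
# On the master (Desktop), after start-all.sh -- DataNode expected but missing:
$ jps
2101 NameNode
2389 SecondaryNameNode
2554 ResourceManager
2910 Jps          # <-- no DataNode line: this is the symptom

# On the slave (Laptop):
$ jps
1843 DataNode
1975 NodeManager
2120 Jps
```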
Troubleshooting:
The cluster is brought up with start-all.sh, so start there. The contents of /home/appleyuchi/bigdata/hadoop-2.7.7/sbin/start-all.sh are:
```bash
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
```

start-all.sh simply delegates to start-dfs.sh (and start-yarn.sh), so next look at the contents of start-dfs.sh:
```bash
#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Start hadoop dfs daemons.
# Optinally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh

# get arguments
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      echo $usage
      exit 1
    ;;
  esac
fi

#Add other possible options
nameStartOpt="$nameStartOpt $@"

#---------------------------------------------------------
# namenodes

NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)

echo "Starting namenodes on [$NAMENODES]"

"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt

#---------------------------------------------------------
# datanodes (using default slaves file)

if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to start secure cluster, skipping datanodes. " \
    "Run start-secure-dns.sh as root to complete startup."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode $dataStartOpt
fi

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$SECONDARY_NAMENODES" \
      --script "$bin/hdfs" start secondarynamenode
fi

#---------------------------------------------------------
# quorumjournal nodes (if any)

SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)

case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled
AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" \
    --script "$bin/hdfs" start zkfc
fi

# eof
```

The DataNode section is marked by the comment "datanodes (using default slaves file)".
So the guess is that, at startup, the slaves file decides on which nodes a DataNode is launched.
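To back up that guess, here is a simplified sketch of what hadoop-daemons.sh effectively does through slaves.sh in Hadoop 2.7.x; it is a paraphrase of the logic, not the verbatim script:

```bash
# Paraphrased from slaves.sh (Hadoop 2.7.x): hadoop-daemons.sh hands the
# "hdfs ... start datanode" command to this loop, which runs it over ssh
# on every host listed in the slaves file -- and only on those hosts.
SLAVE_FILE=${HADOOP_SLAVES:-${HADOOP_CONF_DIR}/slaves}
SLAVE_NAMES=$(sed 's/#.*$//;/^$/d' "$SLAVE_FILE")   # drop comments and blank lines

for slave in $SLAVE_NAMES; do
  # start the requested daemon on the remote host, prefixing its output with the hostname
  ssh $HADOOP_SSH_OPTS "$slave" "$@" 2>&1 | sed "s/^/$slave: /" &
done
wait
```

So if the master's hostname is not listed in slaves, this loop never reaches it and no DataNode is started there, even though the NameNode itself runs fine.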
#---------------------------------------------------------------------------------------------------------------------------------
Final fix:
Edit the file /home/appleyuchi/bigdata/hadoop-2.7.7/etc/hadoop/slaves, changing it from the original

Laptop

to:

Desktop
Laptop

Here Desktop is the hostname of the master node and Laptop is the hostname of the slave node.
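After editing slaves, one way to confirm the fix (these commands are an assumed verification step, not part of the original post) is to restart HDFS and check jps on the master again:

```bash
# Restart HDFS so the updated slaves file is re-read
/home/appleyuchi/bigdata/hadoop-2.7.7/sbin/stop-dfs.sh
/home/appleyuchi/bigdata/hadoop-2.7.7/sbin/start-dfs.sh

# On Desktop (the master): DataNode should now be listed
jps

# Optionally, confirm that both DataNodes registered with the NameNode
hdfs dfsadmin -report
```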
Summary
start-dfs.sh only starts a DataNode on the hosts listed in the slaves file, so if the master node (Desktop) is also meant to run a DataNode, its hostname has to be added there; once it is, DataNode appears in the master's jps output.