Environment:

A real distributed HDFS cluster built from a desktop and a laptop (with only two machines, the Spark cluster running on top of it is effectively pseudo-distributed).

Symptom:

On this laptop-plus-desktop cluster, even after carefully cross-checking against various tutorials, jps on the master never showed a DataNode.
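For reference, this is roughly what jps should print on the master once everything is up (PIDs here are illustrative; ResourceManager and NodeManager appear because start-all.sh also starts YARN). In the failing state, the DataNode line was simply absent:

$ jps
4368 NameNode
4515 DataNode        <-- this line was missing
4721 SecondaryNameNode
5012 ResourceManager
5177 NodeManager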

Troubleshooting:

Startup is done with start-all.sh, so first look at the contents of /home/appleyuchi/bigdata/hadoop-2.7.7/sbin/start-all.sh:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi

start-all.sh above hands off to start-dfs.sh, so next look at the contents of start-dfs.sh:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start hadoop dfs daemons.
# Optinally upgrade or rollback dfs state.
# Run this on master node.

usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh

# get arguments
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      echo $usage
      exit 1
    ;;
  esac
fi

#Add other possible options
nameStartOpt="$nameStartOpt $@"

#---------------------------------------------------------
# namenodes

NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)

echo "Starting namenodes on [$NAMENODES]"

"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt

#---------------------------------------------------------
# datanodes (using default slaves file)

if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to start secure cluster, skipping datanodes. " \
    "Run start-secure-dns.sh as root to complete startup."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode $dataStartOpt
fi

#---------------------------------------------------------
# secondary namenodes (if any)

SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)

if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"

  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$SECONDARY_NAMENODES" \
      --script "$bin/hdfs" start secondarynamenode
fi

#---------------------------------------------------------
# quorumjournal nodes (if any)

SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)

case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled
AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" \
    --script "$bin/hdfs" start zkfc
fi

# eof

Inside start-dfs.sh there is the comment "datanodes (using default slaves file)".

So the guess is that, at startup, the slaves file decides which nodes a DataNode gets launched on.
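That guess can be checked one level deeper. From memory of the Hadoop 2.x scripts (worth verifying against your own copy), hadoop-daemons.sh is a thin wrapper around slaves.sh, and when no --hostnames list is passed in (exactly the datanode branch above), slaves.sh falls back to reading etc/hadoop/slaves and runs the command over SSH on every host listed there:

# The essential line of sbin/hadoop-daemons.sh (Hadoop 2.x):
# slaves.sh loops over the host list (default: the slaves file) and
# ssh-es into each host to run hadoop-daemon.sh ... start datanode
exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_PREFIX" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"

So a host that is missing from the slaves file never receives the "start datanode" command at all, which matches the symptom exactly.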

#---------------------------------------------------------------------------------------------------------------------------------

Final fix:

Change the file /home/appleyuchi/bigdata/hadoop-2.7.7/etc/hadoop/slaves

from the original:

Laptop

to:

Desktop
Laptop

Here Desktop is the hostname of the master node and Laptop is the hostname of the slave node. In a two-node cluster the master also has to run a DataNode, so it must be listed in slaves as well.
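A quick way to verify the fix (these are the stock Hadoop 2.7 commands; HADOOP_HOME is assumed to point at /home/appleyuchi/bigdata/hadoop-2.7.7, and the ssh target uses the Laptop hostname above):

# Restart HDFS so the updated slaves file takes effect
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# DataNode should now show up on the master (Desktop)...
jps

# ...and still be running on the slave
ssh Laptop jps

# Optionally, confirm both DataNodes registered with the NameNode
hdfs dfsadmin -report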
