Common Hadoop Cluster Configuration Errors: A Troubleshooting Summary
1. Formatting with hdfs namenode -format fails because JAVA_HOME cannot be found
Simply export JAVA_HOME in the shell you are working in. Note that the value must be the path of the JDK installed in your environment; here it is /usr/local/java.
[hadoop@hadoop0 var]$ export JAVA_HOME=/usr/local/java
Since the export has to be repeated in every new shell, it is better to add the following lines to /XXX/hadoop-xxx/etc/hadoop/hadoop-env.sh once and be done with it:
export JAVA_HOME=/usr/local/java
export HADOOP_CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
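A minimal sketch of making the setting permanent and verifying it (the /home/hadoop/hadoop-3.2.1 path below is an assumption based on this article's install location; adjust to yours):
# Append JAVA_HOME to hadoop-env.sh once:
echo 'export JAVA_HOME=/usr/local/java' >> /home/hadoop/hadoop-3.2.1/etc/hadoop/hadoop-env.sh
# Hadoop 3.x can print the environment it will actually use:
/home/hadoop/hadoop-3.2.1/bin/hadoop envvars | grep JAVA_HOME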
2. Formatting with hdfs namenode -format fails with "Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/yarn/server/nodemanager/NodeManager : Unsupported major.minor version 52.0"
The JDK is too old: class file major version 52 corresponds to Java 8, and Hadoop 3.x requires JDK 1.8.
Fix: in the directory where the JDKs are unpacked, remove the old symlink and create a new one pointing at JDK 1.8:
[root@hadoop2 ~]# cd /usr/local/
[root@hadoop2 local]# rm -f java
[root@hadoop2 local]# ln -sv jdk1.8.0_231 java
‘java’ -> ‘jdk1.8.0_231’
[root@hadoop2 local]# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
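Every node needs the upgrade, not just one. A hedged sketch for confirming the JDK across the cluster (the hostnames are this cluster's; java -version writes to stderr, hence the redirect):
for h in hadoop0 hadoop1 hadoop2 hadoop3; do
  echo "== $h =="
  ssh "$h" 'readlink -f /usr/local/java; java -version 2>&1 | head -1'
done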

3. Running the start-all.sh script produces a pile of "command not found" errors like the following
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-functions.sh: line 398: syntax error near unexpected token `<'
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-functions.sh: line 398: `  done < <(for text in "${input[@]}"; do'
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 70: hadoop_deprecate_envvar: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 87: hadoop_bootstrap: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 104: hadoop_parse_args: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 105: shift: : numeric argument required
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 110: hadoop_find_confdir: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 111: hadoop_exec_hadoopenv: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 112: hadoop_import_shellprofiles: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 113: hadoop_exec_userfuncs: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 119: hadoop_exec_user_hadoopenv: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 120: hadoop_verify_confdir: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 122: hadoop_deprecate_envvar: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 123: hadoop_deprecate_envvar: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 124: hadoop_deprecate_envvar: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 129: hadoop_os_tricks: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 131: hadoop_java_setup: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 133: hadoop_basic_init: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 140: hadoop_shellprofiles_init: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 143: hadoop_add_javalibpath: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 144: hadoop_add_javalibpath: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 146: hadoop_shellprofiles_nativelib: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 152: hadoop_add_common_to_classpath: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 153: hadoop_shellprofiles_classpath: command not found
/home/hadoop/hadoop-3.2.1/sbin/../libexec/hadoop-config.sh: line 157: hadoop_exec_hadooprc: command not found
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
^C[hadoop@hadoop0 sbin]$
This can happen when the users the daemons should run as have not been configured, or when the script is executed with the wrong shell.
The following two steps should resolve it:
1) In /XXX/hadoop-3.2.1/etc/hadoop/hadoop-env.sh, set the execution user to hadoop (the hadoop user must already exist and have passwordless SSH configured):
[hadoop@hadoop0 hadoop]$ tail hadoop-env.sh
# to only allow certain users to execute certain subcommands.
# It uses the format of (command)_(subcommand)_USER.
#
# For example, to limit who can execute the namenode command,
export HDFS_NAMENODE_USER="hadoop"
export HDFS_DATANODE_USER="hadoop"
export HDFS_SECONDARYNAMENODE_USER="hadoop"
export YARN_RESOURCEMANAGER_USER="hadoop"
export YARN_NODEMANAGER_USER="hadoop"
2) The shell used to run the script may be wrong: hadoop-functions.sh relies on bash-only syntax such as process substitution (the "done < <(...)" at line 398 above), which plain sh rejects. Do not launch it with sh; source it instead (alternatives are sketched after the example):
[hadoop@hadoop0 sbin]$ . start-all.sh
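A short sketch of the underlying shell issue (paths follow this article's install; what sh resolves to varies by distribution):
# hadoop-functions.sh line 398 uses process substitution, a bash-only feature:
bash -c 'while read -r x; do echo "$x"; done < <(echo ok)'   # prints ok under bash
# Any of the following runs the scripts under bash; "sh start-all.sh" may not:
bash /home/hadoop/hadoop-3.2.1/sbin/start-all.sh
/home/hadoop/hadoop-3.2.1/sbin/start-all.sh    # the shebang is #!/usr/bin/env bash
. start-all.sh                                 # source into the current bash session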

4. Starting the cluster fails with "Cannot set priority of datanode process 1620"; this message is generic, so the actual cause has to be found in the logs
[hadoop@hadoop0 sbin]$ . start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [hadoop0]
Starting datanodes
WARNING: 'slaves' file has been deprecated. Please use 'workers' file instead.
hadooo0: ssh: Could not resolve hostname hadooo0: Name or service not known
hadoop3: ERROR: Cannot set priority of datanode process 1620
hadoop2: ERROR: Cannot set priority of datanode process 1614
hadoop1: ERROR: Cannot set priority of datanode process 1606
Starting secondary namenodes [hadoop0]
Starting resourcemanager
Starting nodemanagers
WARNING: 'slaves' file has been deprecated. Please use 'workers' file instead.
hadooo0: ssh: Could not resolve hostname hadooo0: Name or service not known
hadoop3: ERROR: Cannot set priority of nodemanager process 1695
hadoop1: ERROR: Cannot set priority of nodemanager process 1682
hadoop2: ERROR: Cannot set priority of nodemanager process 1689
[hadoop@hadoop0 sbin]$
Here the DataNodes failed to start after the cluster came up, and the log showed java.lang.UnsupportedClassVersionError. As in problem 2, this means the JDK version is unsupported; upgrading to JDK 8 fixes it.
[hadoop@hadoop1 logs]$ cat hadoop-hadoop-nodemanager-hadoop1.out
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/yarn/server/nodemanager/NodeManager : Unsupported major.minor version 52.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3837
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3837
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[hadoop@hadoop1 logs]$
Fix: as in problem 2, remove the old symlink in the JDK directory and recreate it pointing at JDK 1.8:
[root@hadoop2 ~]# cd /usr/local/
[root@hadoop2 local]# rm -f java
[root@hadoop2 local]# ln -sv jdk1.8.0_231 java
‘java’ -> ‘jdk1.8.0_231’
[root@hadoop2 local]# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)

5. Starting the cluster warns: 'slaves' file has been deprecated. Please use 'workers' file instead.
[hadoop@hadoop0 sbin]$ . start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [hadoop0]
Starting datanodes
WARNING: 'slaves' file has been deprecated. Please use 'workers' file instead.
Starting secondary namenodes [hadoop0]
Starting resourcemanager
Starting nodemanagers
WARNING: 'slaves' file has been deprecated. Please use 'workers' file instead.
[hadoop@hadoop0 sbin]$
This is because in Hadoop 3.x the slaves file has been replaced by the workers file; cat slaves > workers fixes it (a fuller migration is sketched below).
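A minimal migration sketch (paths follow this article's install; removing the old slaves file is what silences the warning, assuming nothing else still references it):
cd /home/hadoop/hadoop-3.2.1/etc/hadoop
cp slaves workers    # same format: one worker hostname per line
rm slaves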

6. The browser cannot reach ports 50070 and 50030; curl returns Connection refused
[hadoop@hadoop1 hadoop]$  curl hadoop0:50070
curl: (7) Failed connect to hadoop0:50070; Connection refused
[hadoop@hadoop1 hadoop]$  curl hadoop0:50030
curl: (7) Failed connect to hadoop0:50030; Connection refused
[hadoop@hadoop1 hadoop]$
This is because many default ports changed in Hadoop 3.x; the cluster monitoring UI is now on port 8088 (YARN ResourceManager). The changed ports map as follows:
Namenode ports:
50470 --> 9871
50070 --> 9870
8020 --> 9820
Secondary NN ports:
50091 --> 9869
50090 --> 9868
Datanode ports:
50020 --> 9867
50010 --> 9866
50475 --> 9865
50075 --> 9864
These ports all matter, but the most familiar is probably 8020, the HDFS access (RPC) port, which is now 9820.
[hadoop@hadoop1 hadoop]$  curl http://hadoop0:8088
[hadoop@hadoop1 hadoop]$  curl http://hadoop0:9070
curl: (7) Failed connect to hadoop0:9070; Connection refused
[hadoop@hadoop1 hadoop]$  curl http://hadoop0:9870
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="0;url=dfshealth.html" />
<title>Hadoop Administration</title>
</head>
</html>
[hadoop@hadoop1 hadoop]$
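Rather than memorizing the table, the address a service is configured to listen on can be queried directly; a sketch using the standard getconf subcommand (the default shown comes from hdfs-default.xml):
hdfs getconf -confKey dfs.namenode.http-address    # NameNode web UI, default 0.0.0.0:9870
# or simply check what is listening on the master:
netstat -tlnp 2>/dev/null | grep -E ':9870|:8088'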

7. jps shows the expected processes on each node: on the master, NameNode, ResourceManager, SecondaryNameNode, and JobHistoryServer; on the data nodes, NodeManager and DataNode. Yet hdfs dfsadmin -report shows only one live node.
[hadoop@hadoop0 sbin]$ jps
6048 NodeManager
5923 ResourceManager
6435 Jps
5333 NameNode
5466 DataNode
5678 SecondaryNameNode
[hadoop@hadoop2 local]$ jps
1920 DataNode
2028 NodeManager
2125 Jps
[hadoop@hadoop2 local]$
[hadoop@hadoop1 hadoop]$ ps -ef|grep hadoop
root      1795  1187  0 17:59 pts/0    00:00:00 su hadoop
hadoop    1796  1795  0 17:59 pts/0    00:00:00 bash
hadoop    1886     1 47 18:02 ?        00:00:17 /usr/local/java/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/home/hadoop/hadoo-3.2.1/logs -Dyarn.log.file=hadoop-hadoop-datanode-hadoop1.log -Dyarn.home.dir=/home/hadoop/hadoop-3.2.1 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/hadoop/hadoop-3.2.1/lib/native -Dhadoop.log.dir=/home/hadoop/hadoop-3.2.1/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop1.log -Dhadoop.home.dir=/home/hadoop/hadoop-3.2.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop    1994     1 99 18:02 ?        00:00:28 /usr/local/java/bin/java -Dproc_nodemanager -Djava.net.preferIPv4Stack=true -Dyarn.log.dir=/home/hadoop/hadoop-3.2.1/logs -Dyarn.log.file=hadoop-hadoop-nodemanager-hadoop1.log -Dyarn.home.dir=/home/hadoop/hadoop-3.2.1 -Dyarn.root.logger=INFO,console -Djava.library.path=/home/hadoop/hadoop-3.2.1/lib/native -Dhadoop.log.dir=/home/hadoop/hadoop-3.2.1/logs -Dhadoop.log.file=hadoop-hadoop-nodemanager-hadoop1.log -Dhadoop.home.dir=/home/hadoop/hadoop-3.2.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
[hadoop@hadoop0 sbin]$  hdfs dfsadmin -report
Configured Capacity: 50432839680 (46.97 GB)
Present Capacity: 48499961856 (45.17 GB)
DFS Remaining: 48499957760 (45.17 GB)
DFS Used: 4096 (4 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:9866 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 1932877824 (1.80 GB)
DFS Remaining: 48499957760 (45.17 GB)
DFS Used%: 0.00%
DFS Remaining%: 96.17%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Mar 30 18:06:22 CST 2020
Last Block Report: Mon Mar 30 18:02:31 CST 2020
Num of Blocks: 0
[hadoop@hadoop0 sbin]$
This problem was puzzling for quite a while; the logs finally revealed the cause.
[hadoop@hadoop1 logs]$ tail -f  hadoop-hadoop-datanode-hadoop1.log
2020-03-30 18:10:07,322 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop0/192.168.2.130:49000
2020-03-30 18:10:13,325 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:14,327 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:15,329 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:16,331 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:17,333 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:18,335 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:19,337 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:20,339 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2020-03-30 18:10:21,341 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop0/192.168.2.130:49000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
^C
[hadoop@hadoop1 logs]$
The log shows that the DataNode itself starts fine, but when contacting the master it reports "Problem connecting to server: hadoop0/192.168.2.130:49000" and then keeps retrying.
So port 49000 on hadoop0 is unreachable; check what is listening there:
[root@hadoop0 ~]# netstat -anl |grep 49000
tcp        0      0 127.0.0.1:49000         0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:49000         127.0.0.1:44680         ESTABLISHED
tcp        0      0 127.0.0.1:44714         127.0.0.1:49000         TIME_WAIT  
tcp        0      0 127.0.0.1:44680         127.0.0.1:49000         ESTABLISHED
[root@hadoop0 ~]#
It is listening only on 127.0.0.1:49000, which points to a problem in /etc/hosts:
[root@hadoop0 ~]# cat /etc/hosts|grep '127.0.0.1'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 hadoop0
[root@hadoop0 ~]#
Sure enough, the author had appended hadoop0 to the 127.0.0.1 line during setup. Remove it (a cluster-wide check is sketched below), then restart and re-check:
[root@hadoop0 ~]# cat /etc/hosts|grep '127.0.0.1'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
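Before restarting, a quick sketch for verifying that no node resolves its cluster hostname to a loopback address (hostnames are this cluster's):
for h in hadoop0 hadoop1 hadoop2 hadoop3; do
  ssh "$h" 'echo -n "$(hostname): "; getent hosts "$(hostname)"'
done
# Each line should show the node's LAN IP (e.g. 192.168.2.x), not 127.0.0.1.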
[hadoop@hadoop0 sbin]$ . start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [hadoop0]
Starting datanodes
Starting secondary namenodes [hadoop0]
Starting resourcemanager
Starting nodemanagers
[hadoop@hadoop0 sbin]$  hdfs dfsadmin -report
Configured Capacity: 201731358720 (187.88 GB)
Present Capacity: 196838965248 (183.32 GB)
DFS Remaining: 196838948864 (183.32 GB)
DFS Used: 16384 (16 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.2.130:9866 (hadoop0)
Hostname: hadoop0
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 1933074432 (1.80 GB)
DFS Remaining: 48499761152 (45.17 GB)
DFS Used%: 0.00%
DFS Remaining%: 96.17%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Mar 30 18:21:22 CST 2020
Last Block Report: Mon Mar 30 18:20:34 CST 2020
Num of Blocks: 0

Name: 192.168.2.131:9866 (hadoop1)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 986411008 (940.71 MB)
DFS Remaining: 49446424576 (46.05 GB)
DFS Used%: 0.00%
DFS Remaining%: 98.04%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Mar 30 18:21:20 CST 2020
Last Block Report: Mon Mar 30 18:20:40 CST 2020
Num of Blocks: 0

Name: 192.168.2.132:9866 (hadoop2)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 986542080 (940.84 MB)
DFS Remaining: 49446293504 (46.05 GB)
DFS Used%: 0.00%
DFS Remaining%: 98.04%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Mar 30 18:21:20 CST 2020
Last Block Report: Mon Mar 30 18:20:41 CST 2020
Num of Blocks: 0

Name: 192.168.2.133:9866 (hadoop3)
Hostname: hadoop3
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 986365952 (940.67 MB)
DFS Remaining: 49446469632 (46.05 GB)
DFS Used%: 0.00%
DFS Remaining%: 98.04%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Mar 30 18:21:20 CST 2020
Last Block Report: Mon Mar 30 18:20:39 CST 2020
Num of Blocks: 0
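All four DataNodes now register. A one-liner sketch for keeping an eye on the live-node count after future restarts:
hdfs dfsadmin -report | grep 'Live datanodes'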
