Preparation

Use three hosts, each with JDK and Hadoop already installed.

See: Hadoop Study Notes – Single-Node Installation

Synchronization tips

scp – rsync – writing xsync

scp is a tool for securely copying data between hosts. The general syntax is:

scp  -r  $pdir/$fname  $user@$host:$pdir/$fname

where -r means copy recursively.

rsync is a remote synchronization tool; it only transfers the files that differ between source and destination.

rsync  -av  $pdir/$fname  $user@$host:$pdir/$fname

-a  archive mode (preserves permissions, timestamps, etc.)
-v  verbose, shows the copy progress

Write an xsync script to automate the synchronization:

#!/bin/bash

# 1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

# 2. Traverse all hosts
for host in h102 h103 h104    # all hosts involved
do
    echo ==============================$host==============================
    # 3. Traverse all folders/files and send them
    for file in "$@"
    do
        # 4. Check whether the file exists
        if [ -e "$file" ]
        then
            # 5. Get the absolute parent directory
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the name of the current file
            fname=$(basename "$file")
            ssh $host "mkdir -p $pdir"
            rsync -av "$pdir/$fname" $host:"$pdir"
        else
            echo "$file does NOT exist!"
        fi
    done
done

Remember to make the script executable (chmod +x) and place it somewhere on your PATH.
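The two path expansions the script relies on can be checked in isolation. This standalone sketch uses a throwaway temp path, not a real cluster file:

```shell
# Sketch: the parent-dir / file-name extraction that xsync relies on.
base=$(mktemp -d)
f="$base/dir/file.txt"
mkdir -p "$(dirname "$f")" && touch "$f"
pdir=$(cd -P "$(dirname "$f")"; pwd)   # absolute parent dir, symlinks resolved
fname=$(basename "$f")                 # bare file name
echo "$pdir/$fname"
```

Because pdir is always absolute, `ssh $host "mkdir -p $pdir"` recreates the same directory layout on the remote side regardless of where you invoked the script.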

Passwordless SSH login


Use the commands:

ssh-keygen -t rsa
ssh-copy-id $hostname

If ssh h102 still fails, check the log:

[zyi@h102 ~]$ sudo cat /var/log/secure
......
Jun  1 09:08:35 h102 sshd[5829]: Disconnected from 192.168.110.84 port 43308
Jun  1 09:08:35 h102 sshd[5825]: pam_unix(sshd:session): session closed for user zyi
Jun  1 09:09:23 h102 sshd[5853]: Connection closed by 192.168.110.83 port 43172 [preauth]
Jun  1 09:09:23 h102 sshd[5855]: Authentication refused: bad ownership or modes for directory /home/zyi

Fix:

chmod g-w /home/zyi
systemctl restart sshd

Besides the home-directory permissions, SSH authentication also restricts the permissions of the .ssh directory and the authorized_keys file. If the log complains about either of them, fix the permissions as follows:

chmod 700 /home/skyler/.ssh
chmod 600 /home/skyler/.ssh/authorized_keys
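The resulting modes can be verified with stat. This sketch builds a throwaway directory rather than touching a real home directory (paths are made up for illustration):

```shell
# Sketch: apply and verify the modes sshd's StrictModes check expects.
home=$(mktemp -d)
mkdir -p "$home/.ssh" && touch "$home/.ssh/authorized_keys"
chmod g-w "$home"                       # home dir must not be group-writable
chmod 700 "$home/.ssh"                  # .ssh: owner-only
chmod 600 "$home/.ssh/authorized_keys"  # key file: owner read/write only
stat -c '%a %n' "$home/.ssh" "$home/.ssh/authorized_keys"
```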

Example:

[zyi@h104 ~]$ sudo chmod g-w /home/zyi
[zyi@h104 ~]$ sudo chmod 700 ./.ssh
[zyi@h104 ~]$ sudo chmod 600 ./.ssh/authorized_keys
[zyi@h104 ~]$ sudo systemctl restart sshd
[zyi@h104 ~]$ ssh h102
Last failed login: Tue Jun  1 09:15:06 EDT 2021 from h103 on ssh:notty
There were 2 failed login attempts since the last successful login.
Last login: Tue Jun  1 09:13:29 2021 from h103

Cluster planning

      Hadoop102             Hadoop103                      Hadoop104
HDFS  NameNode, DataNode    DataNode                       SecondaryNameNode, DataNode
YARN  NodeManager           ResourceManager, NodeManager   NodeManager

➢ The NameNode and the SecondaryNameNode both consume a lot of memory; do not install them on the same server.
➢ The ResourceManager is also memory-hungry; do not place it on the same machine as the NameNode or SecondaryNameNode.

Preparing the configuration files

Hadoop ships with default configuration files:

These serve as a reference.
Normally we customize the configuration; the files to modify live in:

$hadoop_home/etc/hadoop

Lab environment: /opt/module/hadoop-3.1.3/etc/hadoop

[zyi@h102 hadoop-3.1.3]$ ll etc/hadoop/
total 172
......
-rw-r--r--. 1 zyi zyi  1048 Jun  1 22:58 core-site.xml
......
-rw-r--r--. 1 zyi zyi  1036 Jun  1 22:57 hdfs-site.xml
......
-rw-r--r--. 1 zyi zyi   982 Jun  1 23:29 mapred-site.xml
......
-rw-r--r--. 1 zyi zyi  1304 Jun  1 23:24 yarn-site.xml

The four files above are the ones that need to be modified.

core-site.xml, the core configuration file

Official reference: Configuring the Hadoop Daemons
Following the plan, we designate h102 as the HDFS NameNode, with internal port 8020.

[zyi@h102 hadoop-3.1.3]$ cat ./etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- file system properties -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h102:8020</value>
    </property>
    <!-- Folder for hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
</configuration>
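After editing (or after distributing with xsync), it is worth double-checking a value actually landed in the file. This is an illustrative grep/sed sketch run against a minimal copy of the file, not an official Hadoop tool:

```shell
# Sketch: pull a property value out of a *-site.xml with grep + sed.
tmp=$(mktemp -d)
cat > "$tmp/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h102:8020</value>
  </property>
</configuration>
EOF
grep -A1 '<name>fs.defaultFS</name>' "$tmp/core-site.xml" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

On a node with Hadoop on the PATH, `hdfs getconf -confKey fs.defaultFS` should report the same value through the real configuration loader.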

HDFS configuration file

Configure the NameNode's externally visible HTTP address and port, and place the secondary NameNode (2NN) on h104:

[zyi@h102 hadoop-3.1.3]$ cat ./etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- nn web address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>h102:9870</value>
    </property>
    <!-- 2nn web address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>h104:9868</value>
    </property>
</configuration>

YARN configuration file

Specify where the ResourceManager runs, the auxiliary shuffle service, and the environment variables containers inherit:

[zyi@h102 hadoop-3.1.3]$ cat ./etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <!-- RM uses shuffle: <value>mapreduce_shuffle</value> -->
    </property>
    <property>
        <description>The hostname of the RM.</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>h103</value>
    </property>
    <property>
        <description>Environment variables that containers may override rather than use NodeManager's default.</description>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

MapReduce configuration file


[zyi@h102 hadoop-3.1.3]$ cat ./etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>The runtime framework for executing MapReduce jobs.
        Can be one of local, classic or yarn.
        </description>
    </property>
</configuration>

Distribute the configuration to the other hosts in the cluster with xsync:

[zyi@h102 hadoop-3.1.3]$ cd etc/
[zyi@h102 etc]$ xsync hadoop/
==============================h102==============================
sending incremental file list

sent 878 bytes  received 18 bytes  358.40 bytes/sec
total size is 107,612  speedup is 120.10
==============================h103==============================
sending incremental file list
hadoop/
hadoop/core-site.xml
hadoop/hdfs-site.xml
hadoop/mapred-site.xml
hadoop/yarn-site.xml

sent 3,339 bytes  received 139 bytes  2,318.67 bytes/sec
total size is 107,612  speedup is 30.94
==============================h104==============================
sending incremental file list
hadoop/
hadoop/core-site.xml
hadoop/hdfs-site.xml
hadoop/mapred-site.xml
hadoop/yarn-site.xml

sent 3,339 bytes  received 139 bytes  2,318.67 bytes/sec
total size is 107,612  speedup is 30.94

Modify the workers file

This tells the Hadoop cluster which hosts participate:

[zyi@h102 hadoop-3.1.3]$ vi /opt/module/hadoop-3.1.3/etc/hadoop/workers
h102
h103
h104
[zyi@h102 hadoop-3.1.3]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/   # sync
==============================h102==============================
sending incremental file list

sent 887 bytes  received 18 bytes  1,810.00 bytes/sec
total size is 107,617  speedup is 118.91
==============================h103==============================
sending incremental file list
hadoop/
hadoop/workers

sent 952 bytes  received 46 bytes  665.33 bytes/sec
total size is 107,617  speedup is 107.83
==============================h104==============================
sending incremental file list
hadoop/
hadoop/workers

sent 952 bytes  received 46 bytes  1,996.00 bytes/sec
total size is 107,617  speedup is 107.83

Starting the cluster

Initialize (format) the NameNode

Note: formatting the NameNode generates a new cluster ID. If the NameNode's cluster ID no longer matches the DataNodes', the cluster cannot find its existing data. If errors during operation force you to reformat the NameNode, first stop the namenode and datanode processes, then delete the data and logs directories on every machine before formatting again.

[zyi@h102 hadoop-3.1.3]$ hdfs namenode -format
2021-06-02 05:01:45,180 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = h102/192.168.110.82
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.3
STARTUP_MSG:   classpath = /opt/module/hadoop-3.1.3/etc/hadoop:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/asm-5.0.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/avro-1.7.7.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-compress-1.18.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-io-2.5.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/commons-net-3.6.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/curator-recipes-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/error_prone_annotations-2.2.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/failureaccess-1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/gson-2.2.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/guava-27.0-jre.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/hadoop-annotations-3.1.3.jar:/opt/module/hadoop-3.1.3/sh
are/hadoop/common/lib/hadoop-auth-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jettison-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-util-9.3.24.v2018060
5.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/json-smart-2.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-server-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/netty-3.10.5.Final.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/paranamer-2.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/com
mon/lib/re2j-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/stax2-api-3.1.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3-tests.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-nfs-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-kms-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/hadoop-auth-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/
hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/curator-framework-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/error_prone_annotations-2.2.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/module/hadoop-3.1.3
/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/hadoop-annotations-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-core-as
l-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/paranamer-2.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-3.1.3-tests.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-client-3.1.3-tests.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-client-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.3-tests.ja
r:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.3-tests.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/fst-2.50.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/guice-4.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/guice
-servlet-4.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-api-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-client-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-common-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-registry-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-common-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-router-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/h
adoop-yarn-server-tests-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-services-api-3.1.3.jar:/opt/module/hadoop-3.1.3/share/hadoop/yarn/hadoop-yarn-services-core-3.1.3.jar
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579; compiled by 'ztang' on 2019-09-12T02:47Z
STARTUP_MSG:   java = 1.8.0_212
************************************************************/
2021-06-02 05:01:45,195 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-06-02 05:01:45,466 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-19932890-94da-4efb-8d46-f8f1b85efc3c
2021-06-02 05:01:47,460 INFO namenode.FSEditLog: Edit logging is async:true
2021-06-02 05:01:47,533 INFO namenode.FSNamesystem: KeyProvider: null
2021-06-02 05:01:47,537 INFO namenode.FSNamesystem: fsLock is fair: true
2021-06-02 05:01:47,537 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-06-02 05:01:47,566 INFO namenode.FSNamesystem: fsOwner             = zyi (auth:SIMPLE)
2021-06-02 05:01:47,567 INFO namenode.FSNamesystem: supergroup          = supergroup
2021-06-02 05:01:47,567 INFO namenode.FSNamesystem: isPermissionEnabled = true
2021-06-02 05:01:47,567 INFO namenode.FSNamesystem: HA Enabled: false
2021-06-02 05:01:47,799 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-06-02 05:01:47,901 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-06-02 05:01:47,968 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-06-02 05:01:47,975 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-06-02 05:01:47,976 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jun 02 05:01:47
2021-06-02 05:01:47,978 INFO util.GSet: Computing capacity for map BlocksMap
2021-06-02 05:01:47,979 INFO util.GSet: VM type       = 64-bit
2021-06-02 05:01:47,980 INFO util.GSet: 2.0% max memory 1.7 GB = 34.8 MB
2021-06-02 05:01:47,980 INFO util.GSet: capacity      = 2^22 = 4194304 entries
2021-06-02 05:01:48,393 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-06-02 05:01:48,408 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2021-06-02 05:01:48,408 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2021-06-02 05:01:48,408 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-06-02 05:01:48,409 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: defaultReplication         = 3
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: maxReplication             = 512
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: minReplication             = 1
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2021-06-02 05:01:48,410 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-06-02 05:01:48,963 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2021-06-02 05:01:49,164 INFO util.GSet: Computing capacity for map INodeMap
2021-06-02 05:01:49,164 INFO util.GSet: VM type       = 64-bit
2021-06-02 05:01:49,165 INFO util.GSet: 1.0% max memory 1.7 GB = 17.4 MB
2021-06-02 05:01:49,165 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2021-06-02 05:01:49,915 INFO namenode.FSDirectory: ACLs enabled? false
2021-06-02 05:01:49,916 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-06-02 05:01:49,916 INFO namenode.FSDirectory: XAttrs enabled? true
2021-06-02 05:01:49,916 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-06-02 05:01:49,944 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-06-02 05:01:49,952 INFO snapshot.SnapshotManager: SkipList is disabled
2021-06-02 05:01:49,978 INFO util.GSet: Computing capacity for map cachedBlocks
2021-06-02 05:01:49,978 INFO util.GSet: VM type       = 64-bit
2021-06-02 05:01:49,978 INFO util.GSet: 0.25% max memory 1.7 GB = 4.3 MB
2021-06-02 05:01:49,979 INFO util.GSet: capacity      = 2^19 = 524288 entries
2021-06-02 05:01:50,010 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-06-02 05:01:50,011 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-06-02 05:01:50,011 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-06-02 05:01:50,028 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-06-02 05:01:50,028 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-06-02 05:01:50,048 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-06-02 05:01:50,048 INFO util.GSet: VM type       = 64-bit
2021-06-02 05:01:50,048 INFO util.GSet: 0.029999999329447746% max memory 1.7 GB = 534.2 KB
2021-06-02 05:01:50,048 INFO util.GSet: capacity      = 2^16 = 65536 entries
2021-06-02 05:01:50,140 INFO namenode.FSImage: Allocated new BlockPoolId: BP-847004710-192.168.110.82-1622624510113
2021-06-02 05:01:50,217 INFO common.Storage: Storage directory /opt/module/hadoop-3.1.3/data/dfs/name has been successfully formatted.
2021-06-02 05:01:50,274 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/hadoop-3.1.3/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-06-02 05:01:50,576 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/hadoop-3.1.3/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 390 bytes saved in 0 seconds .
2021-06-02 05:01:50,620 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-06-02 05:01:50,637 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
2021-06-02 05:01:50,637 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at h102/192.168.110.82
************************************************************/

After the format completes, data and logs directories have been created inside the Hadoop installation directory.
Inspect the data directory:

[zyi@h102 hadoop-3.1.3]$ ls
bin  data  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  output  README.txt  sbin  share
[zyi@h102 hadoop-3.1.3]$ cd data/dfs/name/current/
[zyi@h102 current]$ ll
total 16
-rw-rw-r--. 1 zyi zyi 390 Jun  2 05:04 fsimage_0000000000000000000
-rw-rw-r--. 1 zyi zyi  62 Jun  2 05:04 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 zyi zyi   2 Jun  2 05:04 seen_txid
-rw-rw-r--. 1 zyi zyi 218 Jun  2 05:04 VERSION

View the VERSION file:

[zyi@h102 current]$ cat VERSION
#Wed Jun 02 05:04:26 EDT 2021
namespaceID=632455869
clusterID=CID-fd675fc9-998e-44d4-84f9-baa472f33d21
cTime=1622624666933
storageType=NAME_NODE
blockpoolID=BP-1568640629-192.168.110.82-1622624666933
layoutVersion=-64
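One thing worth checking after any re-format: the clusterID recorded in the NameNode's VERSION file must match the one in each DataNode's VERSION file, otherwise the DataNodes refuse to register. A minimal sketch of the comparison (the temp files and their contents here are illustrative stand-ins; on a real node the files live under data/dfs/name/current/ and data/dfs/data/current/):

```shell
# Compare the clusterID recorded by the NameNode and a DataNode.
# Sample VERSION files are created here for illustration; on a real
# cluster, point these paths at data/dfs/{name,data}/current/VERSION.
nn_version=$(mktemp)
dn_version=$(mktemp)
printf 'clusterID=CID-fd675fc9-998e-44d4-84f9-baa472f33d21\nstorageType=NAME_NODE\n' > "$nn_version"
printf 'clusterID=CID-fd675fc9-998e-44d4-84f9-baa472f33d21\nstorageType=DATA_NODE\n' > "$dn_version"

# Extract the value after "clusterID=" from each file
nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$dn_version" | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "clusterID match: $nn_id"
else
  echo "clusterID MISMATCH: NameNode=$nn_id DataNode=$dn_id"
fi
```

A mismatch typically appears after re-running the format command while old DataNode data directories are still present; the usual fix is to stop the cluster and clear the data and logs directories on every node before formatting again.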

Starting HDFS

The startup scripts are in the sbin directory:

[zyi@h102 current]$ cd /opt/module/hadoop-3.1.3/sbin/
[zyi@h102 sbin]$ ls
distribute-exclude.sh  httpfs.sh                start-all.cmd      start-dfs.sh         stop-all.cmd      stop-dfs.sh         workers.sh
FederationStateStore   kms.sh                   start-all.sh       start-secure-dns.sh  stop-all.sh       stop-secure-dns.sh  yarn-daemon.sh
hadoop-daemon.sh       mr-jobhistory-daemon.sh  start-balancer.sh  start-yarn.cmd       stop-balancer.sh  stop-yarn.cmd       yarn-daemons.sh
hadoop-daemons.sh      refresh-namenodes.sh     start-dfs.cmd      start-yarn.sh        stop-dfs.cmd      stop-yarn.sh
[zyi@h102 sbin]$ start-dfs.sh
Starting namenodes on [h102]
Starting datanodes
h103: WARNING: /opt/module/hadoop-3.1.3/logs does not exist. Creating.
h104: WARNING: /opt/module/hadoop-3.1.3/logs does not exist. Creating.
Starting secondary namenodes [h104]

Check the daemons running on each host against the plan:

       Hadoop102            Hadoop103    Hadoop104
HDFS   NameNode, DataNode   DataNode     SecondaryNameNode, DataNode
[zyi@h102 sbin]$ jps
19681 NameNode
20090 Jps
19821 DataNode
[zyi@h102 sbin]$ h103
bash: h103: command not found...
[zyi@h102 sbin]$ ssh h103
Last login: Tue Jun  1 23:32:00 2021 from h102
[zyi@h103 ~]$ jps
25846 Jps
25706 DataNode
[zyi@h103 ~]$ exit
logout
Connection to h103 closed.
[zyi@h102 sbin]$ ssh h104
Last login: Tue Jun  1 22:47:39 2021 from h102
[zyi@h104 ~]$ jps
25092 DataNode
25332 Jps
25206 SecondaryNameNode
[zyi@h104 ~]$
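Logging in to every host to run jps by hand gets tedious. The check can be scripted by comparing each host's jps output against the daemons planned for it. A hedged sketch (the jps output below is the h102 sample pasted from above; in practice you would substitute `actual=$(ssh $host jps)`):

```shell
# Verify that every expected daemon appears in a host's jps output.
# `actual` is a pasted sample; replace with: actual=$(ssh h102 jps)
expected="NameNode DataNode"
actual='19681 NameNode
20090 Jps
19821 DataNode'

# Collect any planned daemon that is absent from the output
missing=""
for daemon in $expected; do
  echo "$actual" | grep -qw "$daemon" || missing="$missing $daemon"
done

if [ -z "$missing" ]; then
  echo "h102: OK"
else
  echo "h102: missing$missing"
fi
```

Looping this over h102, h103, and h104 with each host's expected daemon list gives a one-shot health check against the plan table.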

Starting YARN

Note that YARN must be started on the host that is planned to run the ResourceManager:

       Hadoop102            Hadoop103                      Hadoop104
HDFS   NameNode, DataNode   DataNode                       SecondaryNameNode, DataNode
YARN   NodeManager          ResourceManager, NodeManager   NodeManager

So we need to start YARN on h103:

[zyi@h102 sbin]$ ssh h103
Last login: Wed Jun  2 05:13:13 2021 from h102
[zyi@h103 ~]$ cd /opt/module/hadoop-3.1.3/sbin/
[zyi@h103 sbin]$ ls
distribute-exclude.sh  httpfs.sh                start-all.cmd      start-dfs.sh         stop-all.cmd      stop-dfs.sh         workers.sh
FederationStateStore   kms.sh                   start-all.sh       start-secure-dns.sh  stop-all.sh       stop-secure-dns.sh  yarn-daemon.sh
hadoop-daemon.sh       mr-jobhistory-daemon.sh  start-balancer.sh  start-yarn.cmd       stop-balancer.sh  stop-yarn.cmd       yarn-daemons.sh
hadoop-daemons.sh      refresh-namenodes.sh     start-dfs.cmd      start-yarn.sh        stop-dfs.cmd      stop-yarn.sh
[zyi@h103 sbin]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers

Check the running daemons against the plan table:

[zyi@h103 sbin]$ jps
26679 ResourceManager
25706 DataNode
26798 NodeManager
27183 Jps
[zyi@h103 sbin]$ exit
logout
Connection to h103 closed.
[zyi@h102 sbin]$ jps
19681 NameNode
20772 NodeManager
20917 Jps
19821 DataNode
[zyi@h102 sbin]$ ssh h104
Last login: Wed Jun  2 05:12:43 2021 from h102
[zyi@h104 ~]$ jps
25092 DataNode
25206 SecondaryNameNode
26376 Jps
26155 NodeManager

With that, a three-node Hadoop cluster is up and running.
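Bringing the cluster up always follows the same two steps (start-dfs.sh on the NameNode host, start-yarn.sh on the ResourceManager host), so it can be wrapped in a small helper. A sketch under those assumptions (`cluster_ctl` is a hypothetical name, and this is a dry run: it only prints the commands; remove the `echo` prefixes to execute them once the hosts are verified):

```shell
# Dry-run wrapper around the per-host start/stop scripts.
# Prints the commands instead of running them.
cluster_ctl() {
  case "$1" in
    start)
      echo "ssh h102 start-dfs.sh"    # HDFS from the NameNode host
      echo "ssh h103 start-yarn.sh"   # YARN from the ResourceManager host
      ;;
    stop)
      echo "ssh h103 stop-yarn.sh"    # stop YARN first
      echo "ssh h102 stop-dfs.sh"
      ;;
    *)
      echo "usage: cluster_ctl {start|stop}" >&2
      return 1
      ;;
  esac
}

cluster_ctl start
```

Stopping reverses the order (YARN before HDFS), which the `stop` branch reflects.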

Web UIs

Hadoop ships with web UIs for HDFS and YARN. With the Hadoop 3.x default ports, the NameNode UI is served at http://h102:9870 and the ResourceManager UI at http://h103:8088.

That concludes the deployment.
