Table of Contents

1.1 Preface

2.1 Environment configuration list

2.2 Configuring a static IP on the CentOS 8 servers

2.2.1 Configure your PC's network settings

2.2.2 Configure VMware

2.2.3 Configure the servers [must be redone if your PC's IP changes]

2.3 Building a small CentOS 8 server cluster

2.3.1 Set up passwordless SSH login

2.4 Building a small Hadoop cluster

2.4.1 Modify the Hadoop configuration files


1.1 Preface

Written down so I don't forget it later.

Reference article
Category | Link
How to build a small cluster on CentOS 8? | contos8分布式集群部署hadoop-3.2.1_dp340823的博客-CSDN博客

2.1 Environment configuration list

The specific configuration is as follows; upgrade as needed.

Configuration list
Role | Hostname | Address | OS | Hardware
namenode, resourcemanager, datanode, nodemanager | hadoop01 | 192.168.109.131 | CentOS8 | 2 processors, 1 core each, 40 GB disk
datanode, nodemanager, secondarynamenode | hadoop02 | 192.168.109.132 | CentOS8 | 2 processors, 1 core each, 40 GB disk
datanode, nodemanager | hadoop03 | 192.168.109.133 | CentOS8 | 2 processors, 1 core each, 40 GB disk

(Note: the SecondaryNameNode is listed under hadoop02 to match the dfs.namenode.secondary.http-address setting and the startup log later in this article.)

2.2 Configuring a static IP on the CentOS 8 servers

2.2.1 Configure your PC's network settings

Open the Control Panel; the exact path is: Control Panel\Network and Internet\Network Connections

On Windows 11 the path is: Settings\Network & Internet\Advanced network settings\More network adapter options
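Before moving on, note down the host side of the VMnet8 network, since the static IP, gateway, and DNS configured inside the guest later must line up with it. A minimal check from a Windows command prompt:

ipconfig /all

Look for the "VMware Network Adapter VMnet8" entry (the usual default name; it may differ on your machine) and note its IPv4 address and subnet mask.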

2.2.2 Configure VMware

2.2.3 Configure the servers [must be redone if your PC's IP changes]

[Step 1]
As the root user, edit the ifcfg-ens33 configuration file:

su root
cd /etc/sysconfig/network-scripts/
cp ifcfg-ens33 backup_ifcfg-ens33
vim /etc/sysconfig/network-scripts/ifcfg-ens33

The contents are as follows.

Note: IPADDR=192.168.109.140 is the static IP this server will use from now on.

[root@localhost hadoop]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.109.140
NETMASK=255.255.255.0
GATEWAY=192.168.109.2
DNS1=192.168.109.2
DNS2=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=a8c3fa22-02e9-4465-b111-8d73b66a8311
DEVICE=ens33
ONBOOT=yes

[Note] The parameter values must be set according to your own PC's environment.

BOOTPROTO=static       // use a static IP; valid values are dhcp, static, and none (unspecified; defaults to static)
IPADDR=192.168.1.128   // IP address; must be on the same subnet as the host
NETMASK=255.255.255.0  // subnet mask; must match the host's
GATEWAY=192.168.1.2    // gateway; must match the host's
DNS1=8.8.8.8
DNS2=114.114.114.114
ONBOOT=yes             // bring the interface up at boot

That completes the explanation.

[Step 2]
As the root user, inspect the DNS configuration file.

[Note] The subnet IP and subnet mask shown in VMware's Virtual Network Editor describe the VMnet8 network; its gateway address is what you configure as the preferred DNS server, i.e. DNS1. In my case that is DNS1=192.168.109.2.

vim /etc/resolv.conf
# The nameservers must match the DNS entries you just configured!!!! Add any that are missing.
[root@localhost hadoop]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.109.2
nameserver 8.8.8.8
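To confirm that name resolution actually goes through these servers, a quick check (any public domain works; baidu.com is only an example):

ping -c 3 www.baidu.com    # should resolve to an address and get replies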

[Step 3] Verify the result

Reload the network configuration on CentOS 8:
Option 1: as root, run nmcli c reload, then restart the VM with reboot.
Option 2: as root, run systemctl restart NetworkManager, then restart the VM with reboot.
After the reboot, run ifconfig or ip addr to check the address and confirm the static IP took effect.

The above has to be done on every server.

Note: if you installed VMware on Windows, created the CentOS VMs there, and then migrate the images to a different computer, the static IP will still let you log in to CentOS, but the guest will have no internet access and cannot ping outside hosts. You therefore have to redo sections 2.2.2 and 2.2.3 to match the new Windows host's IP settings.
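When adapting a migrated image, the same change can also be applied without hand-editing ifcfg-ens33. A minimal sketch using nmcli, assuming the connection is still named ens33; the 192.168.109.x values are placeholders for your new host's VMnet8 settings:

# all addresses below are placeholders - adjust to the new host's VMnet8 subnet
nmcli con mod ens33 ipv4.method manual \
    ipv4.addresses 192.168.109.140/24 \
    ipv4.gateway 192.168.109.2 \
    ipv4.dns "192.168.109.2 8.8.8.8"
nmcli con up ens33    # re-activate the connection so the change takes effect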

2.3 Building a small CentOS 8 server cluster

2.3.1 Set up passwordless SSH login

[Step 1] Turn off the firewall

systemctl stop firewalld
firewall-cmd --state
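Note that systemctl stop only lasts until the next reboot. For an isolated lab cluster like this one it is common to also disable the service so it stays off (not advisable on an internet-facing machine); this line is an addition, not part of the original write-up:

systemctl disable firewalld    # keep the firewall from starting at boot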

[Step 2] Modify the hosts file

vim /etc/hosts

192.168.109.131 hadoop01
192.168.109.132 hadoop02
192.168.109.133 hadoop03
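All three nodes need identical mappings. A small sketch to push the file to the other two machines, assuming it is run on hadoop01 while password authentication still works:

scp /etc/hosts root@hadoop02:/etc/hosts
scp /etc/hosts root@hadoop03:/etc/hosts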

[Step 3] Generate the public key

cd /app && mkdir key && chmod 777 key
cd key
ssh-keygen -t rsa
# [Option 1] just keep pressing Enter at every prompt
# [Option 2] name the key file y and use the passphrase qwertyuiop123..0
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in y.
Your public key has been saved in y.pub.
The key fingerprint is:
SHA256:Yi6elCedkV56um/1rNO46aniOxo4j7tIDviO9GMdITA root@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
|                 |
| E               |
|  o              |
|   . . .         |
|    . * S        |
|.  . B *  .      |
|ooo B O .. =     |
|=+.O Boo. oo+    |
|ooB+*o=Boo*+     |
+----[SHA256]-----+

[Step 4] Copy the public key to the other machines

ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03

[Step 5] Verify the login

ssh hadoop01
ssh hadoop02
ssh hadoop03
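To check all logins non-interactively in one pass, a sketch that assumes the default key location was used (Option 1 above) and that the hostnames resolve through /etc/hosts:

for h in hadoop01 hadoop02 hadoop03; do
    ssh "$h" hostname    # should print each hostname without prompting for a password
done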

2.4 Building a small Hadoop cluster

First, find out where Hadoop is installed:

[root@localhost hadoop-3.3.0]# which hadoop
/app/hadoop-3.3.0/bin/hadoop
[root@localhost hadoop-3.3.0]# pwd
/app/hadoop-3.3.0
[root@localhost hadoop-3.3.0]# ls -lrt
total 92
-rw-rw-r--. 1 1001 1001   175 Mar 25  2020 README.txt
-rw-rw-r--. 1 1001 1001  1541 Mar 25  2020 NOTICE.txt
-rw-rw-r--. 1 1001 1001 27570 Mar 25  2020 NOTICE-binary
-rw-rw-r--. 1 1001 1001 15697 Mar 25  2020 LICENSE.txt
-rw-rw-r--. 1 1001 1001 22976 Jul  5  2020 LICENSE-binary
drwxr-xr-x. 3 1001 1001    20 Jul  7  2020 etc
drwxr-xr-x. 2 1001 1001  4096 Jul  7  2020 licenses-binary
drwxr-xr-x. 3 1001 1001    20 Jul  7  2020 lib
drwxr-xr-x. 2 1001 1001   203 Jul  7  2020 bin
drwxr-xr-x. 2 1001 1001   106 Jul  7  2020 include
drwxr-xr-x. 4 1001 1001  4096 Jul  7  2020 libexec
drwxr-xr-x. 4 1001 1001    31 Jul  7  2020 share
drwxr-xr-x. 4 root root    30 Dec 26  2020 hdfs
drwxr-xr-x. 3 1001 1001  4096 Dec 26  2020 sbin
drwxr-xr-x. 4 root root    37 Jan  2  2021 tmp
drwxr-xr-x. 3 root root  4096 Jul  4 12:55 logs
# The configuration files are all under /app/hadoop-3.3.0/etc/hadoop/
cd /app/hadoop-3.3.0/etc/hadoop/

As you can see, the configuration files are all under the /app/hadoop-3.3.0/etc/hadoop/ directory.

2.4.1 Modify the Hadoop configuration files

1. Edit hadoop-env.sh (append the following at the end):
vim /app/hadoop-3.3.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/opt/jdk1.8.0_191
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
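Since start-yarn.sh is also run as root in the final section, Hadoop 3.x additionally requires the YARN user variables, or the ResourceManager and NodeManager will refuse to start. The original text does not show these two lines, so treat them as an assumed addition to the same file:

export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root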

2. Edit core-site.xml:

vim /app/hadoop-3.3.0/etc/hadoop/core-site.xml

cat /app/hadoop-3.3.0/etc/hadoop/core-site.xml

core-site.xml contains the NameNode host address, the RPC port it listens on, and related settings (the NameNode's default RPC port used to be 8020; releases after Hadoop 3.0 changed it to 9820).
In a distributed setup every node must be given the NameNode host address. A brief example configuration looks like this:

<!-- Communication address of the HDFS master (namenode) -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<!-- Storage path for files Hadoop generates at runtime -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-3.2.1/tmp</value>
</property>

[Before]

----[before]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License, Version 2.0 header -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
        <description>HDFS URI: filesystem://namenode-id:port</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop-3.3.0/tmp</value>
        <description>local Hadoop temp directory on the namenode</description>
    </property>
    <!-- The "root" in hadoop.proxyuser.root.groups is the user; the "root" in the value is the group.
         Added while trying to fix: User: root is not allowed to impersonate root -->
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>root</value>
        <description>Allow the superuser oozie to impersonate any members of the group group1 and group2</description>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>localhost</value>
        <description>The superuser can connect only from host1 and host2 to impersonate a user</description>
    </property>
</configuration>
----[before]

[After]

----[after]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License, Version 2.0 header -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- <value>hdfs://localhost:9000</value> -->
        <!-- Communication address of the HDFS namenode -->
        <value>hdfs://hadoop01:9000</value>
        <description>HDFS URI: filesystem://namenode-id:port</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop-3.3.0/tmp</value>
        <description>local Hadoop temp directory on the namenode</description>
    </property>
    <!-- The "root" in hadoop.proxyuser.root.groups is the user; the "root" in the value is the group.
         Added while trying to fix: User: root is not allowed to impersonate root -->
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>root</value>
        <description>Allow the superuser oozie to impersonate any members of the group group1 and group2</description>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>localhost</value>
        <description>The superuser can connect only from host1 and host2 to impersonate a user</description>
    </property>
</configuration>
----[after]

3. Edit hdfs-site.xml:

vim /app/hadoop-3.3.0/etc/hadoop/hdfs-site.xml
cat /app/hadoop-3.3.0/etc/hadoop/hdfs-site.xml

hdfs-site.xml configures HDFS-related properties, for example the replication factor (the number of copies of each data block) and the directories the NN (NameNode) and DN (DataNode) use to store data.
For a distributed Hadoop deployment the replication factor should be 3; here I set it to 2 to reduce disk usage.
The NN and DN storage directories are the paths created for them in the earlier steps. Directories were also created for the SNN (SecondaryNameNode) earlier, so those settings are enabled here as well.

<!-- HTTP address of the namenode -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
</property>
<!-- HTTP address of the secondarynamenode -->
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slaver01:50090</value>
</property>
<!-- Storage path of the namenode -->
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-3.2.1/name</value>
</property>
<!-- HDFS replication factor -->
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<!-- Storage path of the datanode -->
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-3.2.1/data</value>
</property>

[Before]

----[before]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License, Version 2.0 header -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/app/hadoop-3.3.0/hdfs/name</value>
        <description>where the namenode stores the HDFS namespace metadata</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/app/hadoop-3.3.0/hdfs/data</value>
        <description>physical location of the data blocks on the datanode</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>replication factor; the default is 3 and it should not exceed the number of datanodes</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/app/hadoop-3.3.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/app/hadoop-3.3.0/tmp/dfs/data</value>
    </property>
</configuration>
<!-- I already set hadoop.tmp.dir in core-site.xml; it is the base setting the Hadoop
     file system depends on, and many paths derive from it.
     If hdfs-site.xml does not configure the namenode and datanode locations, they
     default to the following paths:
       NameNode  dfs.name.dir  default: ${hadoop.tmp.dir}/dfs/name
       DataNode  dfs.data.dir  default: ${hadoop.tmp.dir}/dfs/data
<property><name>dfs.namenode.name.dir</name><value>file:/approot/hadoop-3.2.1/tmp/dfs/name</value>
</property>
<property><name>dfs.datanode.data.dir</name><value>file:/approot/hadoop-3.2.1/tmp/dfs/data</value>
</property>-->
----[before]

[After]

----[after]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License, Version 2.0 header -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- HTTP address of the namenode -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- HTTP address of the secondarynamenode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop02:50090</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/app/hadoop-3.3.0/hdfs/name</value>
        <description>where the namenode stores the HDFS namespace metadata</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/app/hadoop-3.3.0/hdfs/data</value>
        <description>physical location of the data blocks on the datanode</description>
    </property>
    <!-- HDFS replication factor -->
    <property>
        <name>dfs.replication</name>
        <!-- <value>1</value> -->
        <value>2</value>
        <description>replication factor; the default is 3 and it should not exceed the number of datanodes</description>
    </property>
    <!-- Storage path of the namenode -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/app/hadoop-3.3.0/tmp/dfs/name</value>
    </property>
    <!-- Storage path of the datanode -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/app/hadoop-3.3.0/tmp/dfs/data</value>
    </property>
</configuration>
<!-- (same commented-out notes about the hadoop.tmp.dir defaults as in the "before" listing) -->
----[after]

4. Edit mapred-site.xml:

vim /app/hadoop-3.3.0/etc/hadoop/mapred-site.xml
cat /app/hadoop-3.3.0/etc/hadoop/mapred-site.xml

mapred-site.xml configures the cluster's MapReduce framework; here it should be set to yarn (the other possible values are local and classic).
In older Hadoop 2.x releases mapred-site.xml did not exist by default and had to be copied from the template file mapred-site.xml.template; in Hadoop 3.x the file is already present.

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

[Before]

----[before]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Apache License, Version 2.0 header -->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Properties added to fix: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster -->
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
</configuration>
----[before]

5. Edit yarn-site.xml:

vim /app/hadoop-3.3.0/etc/hadoop/yarn-site.xml
cat /app/hadoop-3.3.0/etc/hadoop/yarn-site.xml

yarn-site.xml configures the YARN daemons and related YARN properties.
First, specify the host and listening port of the ResourceManager daemon (here the ResourceManager will be installed on the NameNode host);
then specify the scheduler the ResourceManager uses and the NodeManager's auxiliary services. A brief example configuration looks like this:

<!-- Which node runs the resourcemanager -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
</property>
<!-- Reducers fetch data via mapreduce_shuffle -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

[Before]

----[before]
[root@localhost hadoop]# cat /app/hadoop-3.3.0/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<!-- Apache License, Version 2.0 header -->
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
----[before]

6. Create a masters file (under /opt/hadoop-3.2.1/etc/hadoop/):
vim masters

7. Create a workers file (under /opt/hadoop-3.2.1/etc/hadoop/); example contents for both files are sketched below:
vim workers
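The original does not show what goes into these two files, so the following sketch is an assumption based on the role table in section 2.1: workers lists every DataNode/NodeManager host, and masters lists the SecondaryNameNode host (hadoop02, matching dfs.namenode.secondary.http-address above). The /app/hadoop-3.3.0 path is likewise assumed; use your own install directory:

cd /app/hadoop-3.3.0/etc/hadoop/                     # assumed install path
printf 'hadoop01\nhadoop02\nhadoop03\n' > workers    # all DataNode/NodeManager hosts
printf 'hadoop02\n' > masters                        # SecondaryNameNode host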

8. Create the tmp, name, and data directories (under /opt/hadoop-3.2.1):
mkdir tmp name data

9. Copy the files from the master machine to the other nodes. The referenced article does this for slaver01 and slaver02:
scp /etc/profile slaver01:/etc/
scp /etc/profile slaver02:/etc/

scp -r /opt slaver01:/
scp -r /opt slaver02:/

For this cluster the equivalent commands are:

scp /etc/profile hadoop02:/etc/
scp /etc/profile hadoop03:/etc/
scp -r /opt hadoop02:/
scp -r /opt hadoop03:/

Then run source /etc/profile on slaver01 and slaver02 (here: hadoop02 and hadoop03) so the profile takes effect.

Verify that the JDK and Hadoop are installed correctly on the other nodes with java -version and hadoop version, as sketched below.
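A small sketch to run that verification from the master in one pass, assuming the passwordless SSH from section 2.3 and that /etc/profile exports JAVA_HOME and the Hadoop paths:

for h in hadoop02 hadoop03; do
    echo "== $h =="
    ssh "$h" 'source /etc/profile && java -version && hadoop version'
done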

IV. Start Hadoop

Note: the referenced article says to do this on all three VMs, but the format command and the start scripts below only need to run on the master (hadoop01); the start scripts bring up the daemons on the other nodes over SSH.

1. The first start requires formatting the NameNode (run from the Hadoop directory, /opt/hadoop-3.2.1):
./bin/hdfs namenode -format

e/current/edits_0000000000000009479-0000000000000009559, /app/hadoop-3.3.0/tmp/dfs/name/current/edits_0000000000000009560-0000000000000009719, /app/hadoop-3.3.0/tmp/dfs/name/current/edits_0000000000000009720-0000000000000009809, /app/hadoop-3.3.0/tmp/dfs/name/current/edits_0000000000000009810-0000000000000009840, /app/hadoop-3.3.0/tmp/dfs/name/current/edits_inprogress_0000000000000009841]
2021-07-29 00:38:00,817 INFO common.Storage: Storage directory /app/hadoop-3.3.0/tmp/dfs/name has been successfully formatted.
2021-07-29 00:38:00,860 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop-3.3.0/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-07-29 00:38:01,055 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop-3.3.0/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2021-07-29 00:38:01,071 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-07-29 00:38:01,110 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-07-29 00:38:01,111 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

2. Start DFS:
./sbin/start-dfs.sh

3. Start YARN:
./sbin/start-yarn.sh

4. Verify with jps, as sketched below:
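Going by the role table in section 2.1, each node's jps should roughly list the daemons noted in the comments below; this is an expectation sketch, not captured output:

# hadoop01: NameNode, DataNode, ResourceManager, NodeManager
# hadoop02: DataNode, NodeManager, SecondaryNameNode
# hadoop03: DataNode, NodeManager
for h in hadoop01 hadoop02 hadoop03; do
    echo "== $h =="
    ssh "$h" 'source /etc/profile && jps'
done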

V. Access the applications
1. In a browser, open 192.168.137.161:50070
2. In a browser, open 192.168.137.161:8088
Click "Nodes" to view the node list.


[root@localhost hadoop-3.3.0]# ./sbin/start-dfs.sh
Starting namenodes on [hadoop01]
Starting datanodes
Starting secondary namenodes [hadoop02]
[root@localhost hadoop-3.3.0]# ./sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
[root@localhost hadoop-3.3.0]# master
bash: master: command not found...
[root@localhost hadoop-3.3.0]# jps
11264 Jps
9841 NameNode
10858 NodeManager
10670 ResourceManager
[root@localhost hadoop-3.3.0]# localhost: nodemanager is running as process 11160.  Stop it first.
hadoop02: nodemanager is running as process 11160.  Stop it first.
[root@localhost hadoop-3.3.0]# jps
11876 Jps
10661 SecondaryNameNode
11160 NodeManager
[root@localhost hadoop-3.3.0]# hadoop02: nodemanager is running as process 11160.  Stop it first.
[root@localhost hadoop-3.3.0]# jps
11123 Jps
10413 NodeManager
[root@localhost hadoop-3.3.0]#

1. In a browser, open 192.168.109.128:50070
2. In a browser, open 192.168.109.128:8088
