Theory overview:
 HA: concept and role
    HA (High Availability) clusters are an effective way to guarantee business continuity. A cluster has two or more nodes, divided into active and standby nodes. The node currently serving the workload is called the active node; its backup is the standby node. When the active node fails and the running workload can no longer proceed, the standby node detects this and immediately takes over, so the service continues without interruption, or with only a brief one.
HDFS overview
Basic architecture

1. NameNode (Master)
1) Namespace management: supports the usual file-system operations on HDFS directories, files, and blocks, such as create, modify, delete, and list.

2) Block storage management.
NameNode + HA architecture

  As the architecture above shows, a pair of nodes, an Active NameNode and a Standby NameNode, removes the single point of failure. The two nodes share state through the JournalNodes, while ZKFC elects the Active node, monitors its health, and performs automatic failover.

1. Active NameNode
  Accepts and handles RPC requests from clients, writes its own edit log as well as the edit log on shared storage, and receives block reports, block location updates, and heartbeats from the DataNodes.

2. Standby NameNode
  Also receives block reports, block location updates, and heartbeats from the DataNodes, while reading and replaying the edit log from shared storage, keeping its own metadata (namespace information + block locations map) in sync with the Active NameNode. The Standby NameNode is therefore a hot standby: the moment it switches to Active mode, it can serve NameNode requests.

3. JournalNode
  Used to synchronize data between the Active and Standby NameNodes. The JournalNodes form a group with an odd number of members; an edit counts as committed once a majority of them have written it, so a group of 2N+1 JournalNodes tolerates N failures.
4. ZKFC
  Monitors the local NameNode process and performs automatic failover through ZooKeeper-based leader election.
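
Once the cluster is running, the active/standby decision that ZKFC makes can be checked from the command line. A minimal sketch, assuming the NameNode IDs big-master1 and big-master2 that are configured in hdfs-site.xml later in this guide:

## ask which NameNode currently holds the active role
hdfs haadmin -getServiceState big-master1
hdfs haadmin -getServiceState big-master2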

YARN overview
Basic architecture
1. ResourceManager (RM)
  Accepts job requests from clients, receives and monitors resource reports from the NodeManagers (NM), allocates and schedules resources, and launches and monitors the ApplicationMasters (AM).

2. NodeManager
  Manages the resources on its node, launches Containers to run tasks, reports resource and container status to the RM, and reports task progress to the AM.

3. ApplicationMaster
  Manages and schedules the tasks of a single Application (Job): requests resources from the RM, issues launch-container commands to the NMs, and receives task status from the NMs.

4. Web Application Proxy
  Protects YARN against web attacks. It is part of the ResourceManager by default, but can be configured as a separate process. Access to the ResourceManager web UI assumes trusted users; when an ApplicationMaster runs as an untrusted user, the connections it hands to the ResourceManager may be untrusted, and the Web Application Proxy blocks such connections from reaching the RM.

5. Job History Server
    When a NodeManager starts, it initializes the LogAggregationService, which collects the container logs produced on that machine (when each container finishes) and stores them under a configured HDFS directory. The ApplicationMaster writes job-history information to a temporary jobhistory directory in HDFS and moves it to the final directory when the job ends, which also enables job recovery. The History Server exposes web and RPC services, so users can fetch job information through the web UI or over RPC.
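
In Hadoop 2.x the history server is started with the bundled sbin script. A sketch, assuming the default mapred-site.xml history-server addresses:

/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
jps    ## a JobHistoryServer process should now be listed
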
ResourceManager + HA architecture

ResourceManager HA consists of an Active/Standby pair of nodes; internal state and essential application data and markers are persisted through the RMStateStore.

A few points worth noting:
         HDFS HA normally consists of two NameNodes, one Active and one Standby. The Active NameNode serves clients, while the Standby serves nothing and only mirrors the Active NameNode's state, so that it can take over quickly on failure.
Hadoop 2.0 officially offers two HDFS HA solutions, NFS and QJM; here we use the simpler QJM. In this scheme the active and standby NameNodes synchronize metadata through a group of JournalNodes, and a write counts as successful once it reaches a majority of them, which is why an odd number of JournalNodes is usually configured. A ZooKeeper cluster is also configured for ZKFC failover: when the Active NameNode dies, the Standby is switched to Active automatically.
YARN's ResourceManager also used to be a single point of failure; this was solved in hadoop-2.4.1: there are two ResourceManagers, one Active and one Standby, with state coordinated through ZooKeeper.
MapReduce on YARN can enable the JobHistoryServer to record finished jobs; without it, only currently running jobs are visible.
ZooKeeper is responsible for electing the active NameNode in HDFS and the active ResourceManager in YARN.
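
The election state itself lives in ZooKeeper and can be inspected directly. A sketch, assuming the default znode layout of Hadoop 2.x and the names cluster1 and yarn-rm-cluster configured later in this guide:

zkCli.sh -server big-master1:2181
ls /hadoop-ha/cluster1                     ## NameNode election znodes
ls /yarn-leader-election/yarn-rm-cluster   ## ResourceManager election znodes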

(1) Basic environment configuration:
[hadoop@big-master1 hadoop]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## bigdata cluster ##
192.168.41.20  big-master1   #bigdata1  namenode1,zookeeper,resourcemanager
192.168.41.21  big-master2   #bigdata2  namenode2,zookeeper,slave,resourcemanager
192.168.41.22  big-slave01   #bigdata3  datanode1,zookeeper,slave
192.168.41.25  big-slave02   #bigdata4  datanode2,zookeeper,slave
192.168.41.27  big-slave03   #bigdata5  datanode3,zookeeper,slave

Software versions used in this Hadoop cluster:
OS version:
[hadoop@big-master1 hadoop]$ cat /etc/redhat-release 
CentOS Linux release 7.7.1908 (Core)

JDK version:
Download JDK: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[hadoop@big-master1 ~]$ java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)

ZooKeeper version:
3.4.14 
download :  https://zookeeper.apache.org/releases.html 
wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz
yum install glibc-headers gcc gcc-c++ make cmake openssl-devel ncurses-devel -y

Hadoop version:
2.8.5
## apache hadoop
download : http://hadoop.apache.org
https://archive.apache.org/dist/hadoop/common/

Network interface configuration:
[hadoop@big-master1 tools]$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=46ab62bc-448c-443b-9755-6bf8abbdc612
DEVICE=enp0s3
ONBOOT=yes
IPV6_PRIVACY=no
IPADDR=192.168.41.20
NETMASK=255.255.255.0
GATEWAY=192.168.41.1

####
Additions to the global environment variables:
[hadoop@big-master1 hadoop]$ cat /etc/profile
### JDK ###
JAVA_HOME=/usr/local/jdk1.8.0_251
CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH

### zookeeper  ##
export ZK_HOME=/usr/local/zookeeper
export PATH=$ZK_HOME/bin:$PATH

### hadoop ##
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

## tools ##
export PATH=/home/hadoop/tools:$PATH
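
After editing /etc/profile, reload it and spot-check the variables; a quick sanity sketch:

source /etc/profile
echo $JAVA_HOME $ZK_HOME $HADOOP_HOME
which java zkServer.sh hadoop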

####
Disable security policies:
Disable SELinux:
cat > /etc/selinux/config << EOF
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
EOF

Verify:
[root@big-slave03 local]# getenforce   --redhat 7 
Disabled
or 
[root@big-slave03 local]# sestatus -v
SELinux status:                 disabled

## Firewall: iptables/firewalld ##
Firewall and port management on CentOS 7
CentOS 7 manages individual services with systemctl:
Start the firewall: systemctl start firewalld
Check firewall status: systemctl status firewalld
Stop the firewall: systemctl stop firewalld
Enable the firewall at boot: systemctl enable firewalld
Disable the firewall at boot: systemctl disable firewalld
Check whether the firewall starts at boot: systemctl is-enabled firewalld
List enabled services: systemctl list-unit-files|grep enabled
List failed services: systemctl --failed
Instead of switching the whole firewall on or off, you can also open individual ports:
Add a port: firewall-cmd --zone=public --add-port=80/tcp --permanent
Reload the firewall rules: firewall-cmd --reload
Check a port's status: firewall-cmd --zone=public --query-port=80/tcp
Remove an opened port: firewall-cmd --zone=public --remove-port=80/tcp --permanent
Every rule change must be followed by a reload: firewall-cmd --reload
After changing the rules, you can also list all open ports: firewall-cmd --zone=public --list-ports
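
In this guide the firewall is simply disabled; if you would rather keep firewalld running, the cluster ports used later could be opened instead. A sketch based on the ports configured below (9000/50070 NameNode, 2181/2888/3888 ZooKeeper, 8485 JournalNode, 8032/8034/8088 YARN):

for p in 9000 50070 2181 2888 3888 8485 8032 8034 8088; do
  firewall-cmd --zone=public --permanent --add-port=${p}/tcp
done
firewall-cmd --reload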

Check the status:
[root@big-slave03 local]# firewall-cmd --list-port
FirewallD is not running
[root@big-slave03 local]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

## redhat 6  ##
/etc/init.d/iptables stop
chkconfig iptables off
chkconfig iptables --list

####
Change the hostname (RHEL 7 series):
vim /etc/sysconfig/network   set HOSTNAME to your new hostname
vim /etc/hostname            delete the old hostname and add the new one
vim /etc/hosts               change the old hostname mappings to the new hostname

## redhat 7.7 ##
[root@big-slave03 local]# cat /etc/sysconfig/network
# Created by anaconda

[root@big-slave03 local]# cat /etc/hostname 
big-slave03

[root@big-slave03 local]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## bigdata cluster ##
192.168.41.20  big-master1   #bigdata1  namenode1,zookeeper,resourcemanager
192.168.41.21  big-master2   #bigdata2  namenode2,zookeeper,slave,resourcemanager
192.168.41.22  big-slave01   #bigdata3  datanode1,zookeeper,slave
192.168.41.25  big-slave02   #bigdata4  datanode2,zookeeper,slave
192.168.41.27  big-slave03   #bigdata5  datanode3,zookeeper,slave

####
Create the hadoop user:
groupadd -g 1100 hadoop
useradd -m -u 1200 -g hadoop hadoop
[hadoop@big-master1 ~]$ id hadoop
uid=1200(hadoop) gid=1100(hadoop) groups=1100(hadoop)
passwd hadoop

####
SSH equivalence check & passwordless login setup:
The steps below follow an Oracle RAC-style procedure and are the same on RHEL 6 and RHEL 7.
Configure:
mkdir  ~/.ssh
chmod -R 755  ~/.ssh/
ssh-keygen -t rsa
touch ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
(aa)
10.180.53.1:
ssh 10.180.53.1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

10.180.53.2:
ssh 10.180.53.2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

10.180.53.3:
ssh 10.180.53.3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

10.180.53.4: 
ssh 10.180.53.4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
(bb)
10.180.53.1:
ssh 10.180.53.2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
ssh 10.180.53.3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
ssh 10.180.53.4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

(cc)
10.180.53.1:
scp ~/.ssh/authorized_keys 10.180.53.2:.ssh/authorized_keys
scp ~/.ssh/authorized_keys 10.180.53.3:.ssh/authorized_keys
scp ~/.ssh/authorized_keys 10.180.53.4:.ssh/authorized_keys

Addendum:
If a server is added or replaced and SSH trust needs to be redone, only two steps are required:
1. vim /home/mysql/.ssh/known_hosts and delete the stale entry;
2. ssh-copy-id mysql@10.180.53.3  (here 10.180.53.3 is the replacement server).
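
For this five-node cluster the same idea can be scripted. A sketch, run as the hadoop user and assuming the key pair already exists on the current host:

for host in big-master1 big-master2 big-slave01 big-slave02 big-slave03; do
  ssh-copy-id hadoop@$host    ## appends ~/.ssh/id_rsa.pub to the remote authorized_keys
done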

Verify:
[hadoop@big-slave02 ~]$ ssh big-master2 date;date && ssh big-master1 date;date && ssh big-slave01 date;date && ssh big-slave02 date;date && ssh big-slave03 date;date
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020

####
Time synchronization & time zone settings
# Change the time zone ##

Changing the Linux time zone on RedHat 7:
[root@ip-172-31-29-22 sysconfig]# timedatectl
      Local time: Tue 2018-01-09 08:01:21 UTC
  Universal time: Tue 2018-01-09 08:01:21 UTC
        RTC time: Tue 2018-01-09 08:01:20
       Time zone: UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
Set the time zone:
[root@ip-172-31-29-22 sysconfig]# timedatectl set-timezone Asia/Shanghai

Changing the Linux time zone on RedHat 6:
vim /etc/sysconfig/clock
ZONE="Asia/Shanghai"
UTC=false
ARC=false

# Time synchronization with NTP #
Server side:
yum install ntp ntpdate -y

[root@big-slave03 local]# systemctl stop ntpd
[root@big-slave03 local]# systemctl start ntpd
[root@big-master1 ~]# systemctl start ntpd && systemctl status ntpd && ntpq -p
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-05-15 17:10:16 CST; 47ms ago
  Process: 5669 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5670 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─5670 /usr/sbin/ntpd -u ntp:ntp -g

May 15 17:10:16 big-master1 ntpd[5670]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen and drop on 1 v6wildcard :: UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 2 lo 127.0.0.1 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 3 enp0s3 192.168.41.20 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 4 lo ::1 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 5 enp0s3 fe80::cabd:f03f:32db:a908 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listening on routing socket on fd #22 for interface updates
May 15 17:10:16 big-master1 ntpd[5670]: inappropriate address 192.168.0.188 for the fudge command, line ignored
May 15 17:10:16 big-master1 ntpd[5670]: 0.0.0.0 c016 06 restart
May 15 17:10:16 big-master1 ntpd[5670]: 0.0.0.0 c012 02 freq_set kernel 22.512 PPM
     remote           refid      st t when poll reach   delay   offset  jitter
======================================================================
 dc1-1.500wan.co .INIT.          16 u    -   64    0    0.000    0.000   0.000

Client side:
vi /etc/ntp.conf
restrict master_ip_address nomodify notrap noquery
server master_ip_address
fudge master_ip_address stratum 10
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst

Restart:
systemctl restart ntpd

Sync the time against the server:
ntpdate -u master_ip_address
ntpq -p

Enable at boot:
systemctl enable ntpd

PS: I pointed the cluster nodes directly at the company's DNS/NTP server, so the client side was not configured here; configuring only the server side was enough.

####
Basic tools script pool setup:
[hadoop@big-master1 tools]$ pwd
/home/hadoop/tools
[hadoop@big-master1 tools]$ ls
deploy.conf  deploy.sh  runRemoteCmd.sh
[hadoop@big-master1 tools]$ cat deploy.conf 
big-master1,all,namenode,zookeeper,resourcemanager,
big-master2,all,slave,namenode,zookeeper,resourcemanager,
big-slave01,all,slave,datanode,zookeeper,
big-slave02,all,slave,datanode,zookeeper,
big-slave03,all,slave,datanode,zookeeper,
[hadoop@big-master1 tools]$ cat deploy.sh 
#!/bin/bash
#set -x

if [ $# -lt 3 ]
then 
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag"
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag confFile"
  exit 
fi

src=$1
dest=$2
tag=$3
if [ 'a'$4'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else 
  confFile=$4
fi

if [ -f $confFile ]
then
  if [ -f $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp $src $server":"${dest}
    done 
  elif [ -d $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp -r $src $server":"${dest}
    done 
  else
      echo "Error: No source file exist"
  fi

else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
[hadoop@big-master1 tools]$ cat runRemoteCmd.sh 
#!/bin/bash
#set -x

if [ $# -lt 2 ]
then 
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit 
fi

cmd=$1
tag=$2
if [ 'a'$3'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else 
  confFile=$3
fi

if [ -f $confFile ]
then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       echo "*******************$server***************************"
       ssh $server "source /etc/profile; $cmd"
    done 
else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
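
Typical usage of the two helpers, a sketch consistent with the usage strings above (big-master1 holds the master copies):

[hadoop@big-master1 tools]$ deploy.sh /home/hadoop/tools /home/hadoop/ all     ## push the tools dir to every host tagged all
[hadoop@big-master1 tools]$ runRemoteCmd.sh "hostname; date" all               ## run a command on every host tagged all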

######################
(2) JDK installation
2. Software versions, download and installation (the following is adapted from a reference):
 2.1 JDK installation
Download JDK: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
FileZilla is a handy tool for transferring files between a Windows host and the VMs; just download and unpack it:
Link: https://pan.baidu.com/s/193E3bfbHVpn5lsODg2ijJQ  extraction code: kwiw
With an Xshell/Xmanager setup you hardly need an FTP tool at all.

Step 1: Remove the bundled OpenJDK
    Remove the preinstalled OpenJDK before installing the new JDK, mainly for compatibility with the matching Hadoop version.
   #  rpm -qa|grep openjdk -i   # list the installed OpenJDK packages; -i ignores case
   # yum remove java-1.6.0-openjdk-devel-1.6.0.0-6.1.13.4.el7_0.x86_64 java-1.7.0-openjdk-devel-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.7.0-openjdk-headless-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.6.0-openjdk-1.6.0.0-6.1.13.4.el7_0.x86_64
  -- yum (the RedHat-family package manager, analogous to apt-get on Ubuntu) installs, removes, and updates system packages; note that the package names above are separated by spaces
Step 2: Install the JDK

1. Unpack
First unpack the downloaded JDK (the JDK tar.gz lives in ~/dev):
[Randy@localhost ~]$ sudo mkdir /usr/lib/jdk # create the /usr/lib/jdk folder if it does not exist yet
[Randy@localhost ~]$ sudo tar -zxvf jdk-8u11-linux-i586.tar.gz -C /usr/lib/jdk # note: -C, --directory=DIR  change to directory DIR
[Randy@localhost ~]$  ls /usr/lib/jdk
jdk1.8.0_11
[Randy@localhost ~]$ ls /usr/lib/jdk/jdk1.8.0_11/
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip
[Randy@localhost ~]$
Copy the contents of jdk1.8.0_11 into /usr/lib/jdk, then delete the jdk1.8.0_11 folder:

[Randy@localhost ~]$ sudo cp -rf /usr/lib/jdk/jdk1.8.0_11/* /usr/lib/jdk/ # copy
[Randy@localhost ~]$ 
[Randy@localhost ~]$  ls /usr/lib/jdk
bin        javafx-src.zip  LICENSE      src.zip
COPYRIGHT  jdk1.8.0_11     man          THIRDPARTYLICENSEREADME-JAVAFX.txt
db         jre             README.html  THIRDPARTYLICENSEREADME.txt
include    lib             release
[Randy@localhost ~]$ sudo rm -rf /usr/lib/jdk/jdk1.8.0_11/ # delete
[Randy@localhost ~]$  ls /usr/lib/jdk
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip
[Randy@localhost ~]$
 
2. Configure environment variables

[Randy@localhost ~]$ sudo vim /etc/profile
Append at the end:
#JAVA Environment
export JAVA_HOME=/usr/lib/jdk
export JRE_HOME=/usr/lib/jdk/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JRE_HOME/lib

3. Set the system default JDK
[Randy@localhost ~]$ sudo update-alternatives --install /usr/bin/java java /usr/lib/jdk/bin/java 300  # make the java in /usr/lib/jdk/bin the system default java command
[Randy@localhost ~]$ sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jdk/bin/javac 300  # make the javac in /usr/lib/jdk/bin the system default javac command
[Randy@localhost ~]$ sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jdk/bin/jar 300 # make the jar in /usr/lib/jdk/bin the system default jar command
[Randy@localhost ~]$ sudo update-alternatives --config java   # choose the default java command
There is 1 program that provides 'java'.
  Selection    Command
-----------------------------------------------
*+ 1          /usr/lib/jdk/bin/java
Press Enter to keep the current selection [+], or type a selection number: 1
[Randy@localhost ~]$ sudo update-alternatives --config javac   # choose the default javac command
There is 1 program that provides 'javac'.
  Selection    Command
-----------------------------------------------
*+ 1          /usr/lib/jdk/bin/javac
Press Enter to keep the current selection [+], or type a selection number: 1

Step 3: Test the JDK
[Randy@localhost ~]$ java -version
java version "1.8.0_11"
Java(TM) SE Runtime Environment (build 1.8.0_11-b12)
Java HotSpot(TM) Server VM (build 25.11-b03, mixed mode)
[Randy@localhost ~]$ javac -version
javac 1.8.0_11
One problem came up during testing:

[Randy@localhost ~]$ java
-bash: /usr/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[Randy@localhost ~]$ ls /lib/ld-linux
ls: cannot access /lib/ld-linux: No such file or directory
[Randy@localhost ~]$ java -version
-bash: /usr/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[Randy@localhost ~]$
The fix:

[Randy@localhost ~]$ sudo yum install glibc.i686 # running a 32-bit binary on a 64-bit system can fail with /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory; installing the 32-bit glibc fixes it
 
Once this is done, push the JDK and its configuration to every node in the cluster (big-master1, big-master2, big-slave01, big-slave02, big-slave03).
 #####

(3) ZooKeeper installation:
 ZooKeeper is a distributed, open-source coordination service. In a Hadoop cluster it provides distributed locks, the HA mechanism, leader election, and so on.
Download:
download :  https://zookeeper.apache.org/releases.html 
yum install glibc-headers gcc gcc-c++ make cmake openssl-devel ncurses-devel -y
# cd /usr/local
# wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz
# tar -zxvf zookeeper-3.4.12.tar.gz
# cd zookeeper-3.4.12
Step 3: Copy the sample configuration zoo_sample.cfg

# cp conf/zoo_sample.cfg conf/zoo.cfg
Step 4: Start ZooKeeper

ZooKeeper cluster-mode installation:

[hadoop@big-slave02 zookeeper]$ cd conf/
[hadoop@big-slave02 conf]$ pwd
/usr/local/zookeeper/conf
[hadoop@big-slave02 conf]$ ls
configuration.xsl  log4j.properties  zoo.cfg  zoo_sample.cfg
[hadoop@big-slave02 conf]$ cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/zookeeper/zkdata
#dataLogDir=/data/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
#quorumListenOnAllIPs=true
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=big-master1:2888:3888
server.2=big-master2:2888:3888
server.3=big-slave01:2888:3888
server.4=big-slave02:2888:3888
server.5=big-slave03:2888:3888

Notes on the zoo.cfg parameters:
##  Each instance here runs on its own host, so they can all use the same ports (for other layouts, see a ZooKeeper primer)
Parameter details:
tickTime: the basic heartbeat interval between ZooKeeper servers, and between clients and servers; one heartbeat is sent every tickTime milliseconds.
initLimit: how many heartbeat intervals ZooKeeper tolerates for the initial connection of a "client" (here meaning a Follower connecting to the Leader within the ensemble, not a user client). If the server has heard nothing back after 10 heartbeats (tickTimes), the connection attempt is considered failed; the total allowance here is 10*2000 = 20 seconds.
syncLimit: the maximum number of tickTimes allowed for a request/response exchange between Leader and Follower; here 5*2000 = 10 seconds in total.
dataDir: the directory where ZooKeeper stores its data; by default the transaction log is written here as well.
clientPort: the port clients use to connect to the ZooKeeper server; ZooKeeper listens on it and accepts client requests.
server.A=B:C:D: A is a number identifying which server this is; B is the server's IP address (or hostname); C is the port this server uses to exchange information with the ensemble's Leader; D is the port used to run a new leader election if the Leader dies. In a pseudo-cluster all B values are identical, so each ZooKeeper instance must be given distinct port numbers.

Step 5: Assign the server ID, i.e. the myid file under dataDir.
On each node, create the unique id value in the dataDir configured in zoo.cfg above (here /data/zookeeper/zkdata).

[hadoop@big-master1 tools]$ cat /data/zookeeper/zkdata/myid 
1
[hadoop@big-slave02 conf]$ cat /data/zookeeper/zkdata/myid 
4
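
Writing the five files by hand is error-prone; a sketch that writes them in one pass, assuming the server.N numbering from zoo.cfg above:

i=1
for host in big-master1 big-master2 big-slave01 big-slave02 big-slave03; do
  ssh $host "mkdir -p /data/zookeeper/zkdata && echo $i > /data/zookeeper/zkdata/myid"
  i=$((i+1))
done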

The last step: add ZooKeeper to the global environment in /etc/profile
### zookeeper  ##
export ZK_HOME=/usr/local/zookeeper-3.4.14
export PATH=$ZK_HOME/bin:$PATH
source /etc/profile

ZooKeeper startup test:
## start zookeeper

[hadoop@tidb07 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@tidb07 ~]$ ps -ef |grep zookeeper
hadoop   30713     1 13 16:51 pts/0    00:00:01 /usr/local/jdk1.8.0_251/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/usr/local/zookeeper-3.4.14/bin/../build/classes:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/usr/local/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/usr/local/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../conf:/usr/local/jdk1.8.0_251/lib/tools.jar/usr/local/jdk1.8.0_251/lib/dt.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
hadoop   30739 30662  0 16:52 pts/0    00:00:00 grep --color=auto zookeeper
[hadoop@tidb07 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower    ##  this is a follower (standby) node; the leader is the primary

### another node
[hadoop@pd1-500 ~]$ zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

Stop ZooKeeper:
[hadoop@tidb07 ~]$ zkServer.sh stop

Start the remaining ZooKeeper instances with the script:
Use the runRemoteCmd.sh script to start ZooKeeper on all nodes.
-- the tools directory is also on the PATH via /etc/profile:
## tools ##
export PATH=/home/hadoop/tools:$PATH
##

[hadoop@big-master1 ~]$ cd /home/hadoop/tools/
[hadoop@big-master1 tools]$ ls
deploy.conf  deploy.sh  runRemoteCmd.sh
[hadoop@big-master1 ~]$ runRemoteCmd.sh "/usr/local/zookeeper/bin/zkServer.sh start " zookeeper
[hadoop@big-master1 ~]$ runRemoteCmd.sh "/usr/local/zookeeper/bin/zkServer.sh status " zookeeper
*******************big-master1***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-master2***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-slave01***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
*******************big-slave02***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-slave03***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

-- Note on status: Mode is only reported once the nodes are up; if only part of the ensemble has been started, no Mode is shown.
-- One node as leader and the other four as followers means ZooKeeper was installed successfully.

Check that the QuorumPeerMain process is running on every node.

[hadoop@big-master1 ~]$ runRemoteCmd.sh "jps" zookeeper
*******************big-master1***************************
30037 JournalNode
4023 ResourceManager
29642 DFSZKFailoverController
29804 NameNode
9132 Jps
28141 QuorumPeerMain
*******************big-master2***************************
20032 NameNode
20116 JournalNode
20324 DFSZKFailoverController
13722 Jps
18830 QuorumPeerMain
2462 ResourceManager
*******************big-slave01***************************
10161 NodeManager
7702 QuorumPeerMain
8583 DataNode
11751 Jps
8686 JournalNode
*******************big-slave02***************************
5187 DataNode
8227 Jps
6697 NodeManager
4362 QuorumPeerMain
5290 JournalNode
*******************big-slave03***************************
4562 QuorumPeerMain
5442 DataNode
8405 Jps
6903 NodeManager
5545 JournalNode

--- The jps output above was captured after HDFS and YARN were fully deployed, so it shows every running service; the ZooKeeper process among them is QuorumPeerMain.

#############

(4) Hadoop installation:

1. Download the open-source Hadoop release:
## apache hadoop
download : http://hadoop.apache.org
https://archive.apache.org/dist/hadoop/common/

2. Configuration files; the files edited this time:
slaves
hdfs-site.xml
core-site.xml
mapred-env.sh
mapred-site.xml
yarn-env.sh
hadoop-env.sh
yarn-site.xml

Configuration file directory: /usr/local/hadoop-xxxx/etc/hadoop/
[hadoop@big-master1 ~]$ cd /usr/local/hadoop/etc/hadoop/
[hadoop@big-master1 hadoop]$ ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        slaves
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-client.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            ssl-server.xml.example
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.cmd
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-env.sh
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template  yarn-site.xml
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml
hadoop-metrics.properties   kms-env.sh               mapred-site.xml.template

-- hadoop-env.sh and yarn-env.sh are the files that need the JDK environment variable set:
[hadoop@big-master1 hadoop]$ cat hadoop-env.sh
JAVA_HOME=/usr/local/jdk1.8.0_251   -- newly added

[hadoop@big-master1 hadoop]$ cat yarn-env.sh 
export JAVA_HOME=/usr/local/jdk1.8.0_251  -- newly added

2. Configure HDFS
--
2.1 Configure the core-site.xml file
  [hadoop@big-master1 hadoop]$ cat /usr/local/hadoop/etc/hadoop/core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
       <!--  the default HDFS path; the nameservice is named cluster1 -->
    </property>
    <property>
         <name>hadoop.tmp.dir</name>
            <value>/data/hadoop/data/hadoop_${user.name}</value>
        <!-- Hadoop's temp directory; multiple directories are comma-separated; the data directory must be created beforehand -->
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
        <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
         <!--  configure ZooKeeper to manage HDFS HA -->
    </property>
</configuration>
#####

2.2 Configure the hdfs-site.xml file:
[hadoop@big-master1 hadoop]$ cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
   <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- block replication factor: 3 -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <!-- permission checking disabled -->

<property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>
    <!-- the nameservice; must match the value of fs.defaultFS. With NameNode HA there are two NameNodes, and cluster1 is the single external entry point -->

<property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>big-master1,big-master2</value>
    </property>
    <!-- the NameNodes under nameservice cluster1; these are logical names, arbitrary as long as they do not collide -->

<property>
        <name>dfs.namenode.rpc-address.cluster1.big-master1</name>
        <value>big-master1:9000</value>
    </property>
    <!-- big-master1 RPC address -->

<property>
        <name>dfs.namenode.http-address.cluster1.big-master1</name>
        <value>big-master1:50070</value>
    </property>
    <!-- big-master1 HTTP address -->

<property>
        <name>dfs.namenode.rpc-address.cluster1.big-master2</name>
        <value>big-master2:9000</value>
    </property>
    <!-- big-master2 RPC address -->

<property>
        <name>dfs.namenode.http-address.cluster1.big-master2</name>
        <value>big-master2:50070</value>
    </property>
    <!-- big-master2 HTTP address -->

<property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- enable automatic failover -->

<property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://big-master1:8485;big-master2:8485;big-slave01:8485;big-slave02:8485;big-slave03:8485/cluster1</value>
    </property>
    <!-- the shared edits directory on the JournalNode quorum -->

<property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- the class that performs failover for cluster1 when a NameNode fails -->

<property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/data/journaldata/jn</value>
    </property>
    <!-- the local disk path where each JournalNode stores the shared NameNode edits; create the directory in advance -->

<property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>10000</value>
    </property>
    <!-- fencing configuration (split-brain protection) -->

<property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
</configuration>
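
A quick way to confirm these files are picked up; a sketch (hdfs getconf reads the local configuration and does not need the daemons running):

hdfs getconf -confKey fs.defaultFS    ## expect: hdfs://cluster1
hdfs getconf -namenodes               ## expect: big-master1 big-master2
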
#######

2.3 Configure the slaves file
[hadoop@big-master1 hadoop]$ cat slaves 
big-slave01
big-slave02
big-slave03

2.4  scp the finished configuration to every node, NameNodes and DataNodes alike.
This can be done with the deploy.sh script:
 [hadoop@big-master1 tools]$ deploy.sh /usr/local/hadoop /usr/local slave

2.5 Startup order once HDFS is configured (top to bottom):
2.5.1 ZooKeeper on all nodes
2.5.2 The JournalNode daemons on all nodes
2.5.3 Initialization on the big-master1 (primary) node:
 a. Format the namenode: /usr/local/hadoop/bin/hdfs namenode -format
 b. Initialize the HA state: /usr/local/hadoop/bin/hdfs zkfc -formatZK
 b1. Before starting the primary namenode, make sure the journalnodes are all running, otherwise the namenode standby sync will throw errors.
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode" all
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh status journalnode" all
 c. Then start the namenode: /usr/local/hadoop/bin/hdfs namenode
 d. Once the three steps above have completed without errors, run the metadata sync on the namenode (standby) node:
  namenode2 :  /usr/local/hadoop/bin/hdfs namenode -bootstrapStandby
  -- this mainly copies all metadata from the primary, namenode1.
 e. After namenode2 has synced from the primary,
 f. start the zkfc daemons: /usr/local/hadoop/sbin/hadoop-daemon.sh start zkfc  (start it on the primary big-master1 first; if all is well, use the one-command startup for the remaining HDFS daemons).
 -- > At this point you can stop the active namenode and watch the active/standby switchover happen.
 -- > Via the web UIs http://big-master1:50070 (primary) and http://big-master2:50070 (standby) you can see the primary in active state and the standby in standby state.
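
The same switchover can be observed from the command line; a sketch, using the NameNode IDs big-master1/big-master2 from dfs.ha.namenodes.cluster1 above:

hdfs haadmin -getServiceState big-master1    ## expect: active
hdfs haadmin -getServiceState big-master2    ## expect: standby
## kill the NameNode process on big-master1, wait a few seconds, then:
hdfs haadmin -getServiceState big-master2    ## expect: active, promoted by ZKFC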

g. Test by uploading a file; all the commands below are plain hdfs commands.
[hadoop@big-master1 data]$ touch test01.txt    -- create a file in a writable directory
[hadoop@big-master1 data]$ vim test01.txt 
[hadoop@big-master1 data]$ cat test01.txt      -- arbitrary content
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

Verification 1: via hdfs commands
[hadoop@big-master1 data]$ hdfs dfs -mkdir /test01   -- create a directory on HDFS
[hadoop@big-master1 data]$ hdfs dfs -put test01.txt /test01    -- upload a file to HDFS
[hadoop@big-master1 data]$ hdfs dfs -ls /test01                      -- check the upload succeeded
Found 1 items
-rw-r--r--   3 hadoop supergroup         61 2020-05-17 01:14 /test01/test01.txt

Verification 2: via the web UI

View the file:
[hadoop@big-master2 hadoop]$ hdfs dfs -cat /test01/test01.txt
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

[hadoop@big-master2 hadoop]$ hdfs dfs -cat /test/test.txt
hadoop appache
hadoop ywendeng
hadoop tomcat

[hadoop@big-slave02 conf]$ hdfs dfs -cat /test01/test01.txt
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

-- At this point HDFS is installed and working.

3. YARN installation and configuration:

3.1 Configure mapred-site.xml
[hadoop@big-master1 ~]$ cat /usr/local/hadoop/etc/hadoop/mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    </property>
</configuration>

3.2 Configure yarn-site.xml
[hadoop@big-master1 ~]$ cat /usr/local/hadoop/etc/hadoop/yarn-site.xml 
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties --> 
<property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
</property>
<!-- RM connection retry interval -->
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
</property>
<!-- enable HA -->
<property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<!-- enable automatic failover -->
<property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-rm-cluster</value>
</property>
<!-- name the YARN cluster yarn-rm-cluster -->
<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
</property>
<!-- name the two ResourceManagers rm1 and rm2 -->
<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>big-master1</value>
</property>
<!-- hostname for ResourceManager rm1 -->
<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>big-master2</value>
</property>
<!-- hostname for ResourceManager rm2 -->
<property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
</property>
<!-- enable ResourceManager automatic recovery -->
<property>
    <name>yarn.resourcemanager.zk.state-store.address</name>
    <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
</property>
<!-- ZooKeeper address for the RM state store -->
<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
</property>

<!-- ZooKeeper quorum address for RM HA -->
<!-- rm1 <-> big-master1 -->
<property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>big-master1:8032</value>
</property>
<!-- rm1 port -->
<property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>big-master1:8034</value>
</property>
<!-- rm1 scheduler port -->
<property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>big-master1:8088</value>
</property>
<!-- rm1 webapp port -->

<!-- rm2 <-> big-master2 -->
<property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>big-master2:8032</value>
</property>
<!-- rm2 port -->
<property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>big-master2:8034</value>
</property>
<!-- rm2 scheduler port -->
<property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>big-master2:8088</value>
</property>
<!-- rm2 webapp port -->
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- shuffle service required to run MapReduce -->
</configuration>

3.3 Start YARN
 /usr/local/hadoop/sbin/start-yarn.sh   -- on namenode1
 /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager  -- run this on namenode2

Verify both:
 http://big-master1:8088 
 http://big-master2:8088 
 -- At this point only big-master1 (namenode1) serves the web UI; big-master2 shows nothing because no failover has occurred, which is normal.
 -- Stop one of the resourcemanagers and start it again, and watch how the web UI changes during the switchover.

Check the ResourceManager states:
[hadoop@big-master1 data]$ cd /usr/local/hadoop/bin/
[hadoop@big-master1 bin]$ ./yarn rmadmin -getServiceState rm1
active
[hadoop@big-master1 bin]$ ./yarn rmadmin -getServiceState rm2
standby

[hadoop@big-master2 bin]$ ./yarn rmadmin -getServiceState rm1
active
[hadoop@big-master2 bin]$ ./yarn rmadmin -getServiceState rm2
standby
[hadoop@big-slave01 bin]$ ./yarn rmadmin -getServiceState rm2
standby

Wordcount example test: if it runs without errors, the cluster was installed successfully.
[hadoop@big-master1 ~]$ hadoop
hadoop             hadoop.cmd         hadoop-daemon.sh   hadoop-daemons.sh  
[hadoop@big-master1 ~]$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar wordcount /test01/test01.txt /test01/out/
20/05/17 01:37:18 INFO input.FileInputFormat: Total input files to process : 1
20/05/17 01:37:19 INFO mapreduce.JobSubmitter: number of splits:1
20/05/17 01:37:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1589524167213_0002
20/05/17 01:37:21 INFO impl.YarnClientImpl: Submitted application application_1589524167213_0002
20/05/17 01:37:21 INFO mapreduce.Job: The url to track the job: http://big-master1:8088/proxy/application_1589524167213_0002/
20/05/17 01:37:21 INFO mapreduce.Job: Running job: job_1589524167213_0002
20/05/17 01:37:32 INFO mapreduce.Job: Job job_1589524167213_0002 running in uber mode : false
20/05/17 01:37:32 INFO mapreduce.Job:  map 0% reduce 0%
20/05/17 01:37:40 INFO mapreduce.Job:  map 100% reduce 0%
20/05/17 01:37:49 INFO mapreduce.Job:  map 100% reduce 100%
20/05/17 01:37:53 INFO mapreduce.Job: Job job_1589524167213_0002 completed successfully
20/05/17 01:37:53 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=82
        FILE: Number of bytes written=324515
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=159
        HDFS: Number of bytes written=52
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=5484
        Total time spent by all reduces in occupied slots (ms)=6662
        Total time spent by all map tasks (ms)=5484
        Total time spent by all reduce tasks (ms)=6662
        Total vcore-milliseconds taken by all map tasks=5484
        Total vcore-milliseconds taken by all reduce tasks=6662
        Total megabyte-milliseconds taken by all map tasks=5615616
        Total megabyte-milliseconds taken by all reduce tasks=6821888
    Map-Reduce Framework
        Map input records=5
        Map output records=8
        Map output bytes=85
        Map output materialized bytes=82
        Input split bytes=98
        Combine input records=8
        Combine output records=6
        Reduce input groups=6
        Reduce shuffle bytes=82
        Reduce input records=6
        Reduce output records=6
        Spilled Records=12
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=224
        CPU time spent (ms)=2850
        Physical memory (bytes) snapshot=450756608
        Virtual memory (bytes) snapshot=4229697536
        Total committed heap usage (bytes)=317194240
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=61
    File Output Format Counters 
        Bytes Written=52

-- The output above indicates success; the other nodes show the same.
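
To read back the actual wordcount result, cat the reducer output from HDFS; a sketch (part-r-00000 is the usual default name of the single reducer's output file):

[hadoop@big-master1 ~]$ hdfs dfs -ls /test01/out/
[hadoop@big-master1 ~]$ hdfs dfs -cat /test01/out/part-r-00000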

---------------------  The End ----------------------

##### Transcripts from the test run ####
-- start the journalnode
[hadoop@big-master1 hadoop]$ /usr/local/hadoop-2.8.5/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.8.5/logs/hadoop-hadoop-journalnode-big-master1.out
[hadoop@big-master1 hadoop]$ cat /usr/local/hadoop-2.8.5/logs/hadoop-hadoop-journalnode-big-master1.out
ulimit -a for user hadoop
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30816
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
-- If the journalnodes are not running and you start the namenode sync (or other Hadoop daemons) anyway, you get errors like:
20/05/14 23:12:32 INFO ipc.Client: Retrying connect to server: big-master1/192.168.41.20:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/05/14 23:12:32 WARN ipc.Client: Failed to connect to server: big-master1/192.168.41.20:9000: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)

[hadoop@big-master2 bin]$ ./hdfs namenode -bootstrapStandby
20/05/14 23:12:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode

-- They can also be stopped and started on all nodes via the script:
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode" all
*******************big-master1***************************
stopping journalnode
*******************big-master2***************************
stopping journalnode
*******************big-slave01***************************
stopping journalnode
*******************big-slave02***************************
stopping journalnode
*******************big-slave03***************************
stopping journalnode

-- Formatting:
[hadoop@big-master1 bin]$ ./hdfs namenode -format
20/05/14 23:10:00 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master1/192.168.41.20
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/ha
doop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/l
ocal/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compre
ss-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.5.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:10:00 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:10:01 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-4521db8a-5242-4421-a2bc-3c569020af64
20/05/14 23:10:01 INFO namenode.FSEditLog: Edit logging is async:true
20/05/14 23:10:01 INFO namenode.FSNamesystem: KeyProvider: null
20/05/14 23:10:01 INFO namenode.FSNamesystem: fsLock is fair: true
20/05/14 23:10:01 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
20/05/14 23:10:02 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20/05/14 23:10:02 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/05/14 23:10:02 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/05/14 23:10:02 INFO blockmanagement.BlockManager: The block deletion will start around 2020 May 14 23:10:02
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map BlocksMap
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^21 = 2097152 entries
20/05/14 23:10:02 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/05/14 23:10:02 INFO blockmanagement.BlockManager: defaultReplication         = 3
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxReplication             = 512
20/05/14 23:10:02 INFO blockmanagement.BlockManager: minReplication             = 1
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
20/05/14 23:10:02 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/05/14 23:10:02 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
20/05/14 23:10:02 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
20/05/14 23:10:02 INFO namenode.FSNamesystem: supergroup          = supergroup
20/05/14 23:10:02 INFO namenode.FSNamesystem: isPermissionEnabled = false
20/05/14 23:10:02 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
20/05/14 23:10:02 INFO namenode.FSNamesystem: HA Enabled: true
20/05/14 23:10:02 INFO namenode.FSNamesystem: Append Enabled: true
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map INodeMap
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^20 = 1048576 entries
20/05/14 23:10:02 INFO namenode.FSDirectory: ACLs enabled? false
20/05/14 23:10:02 INFO namenode.FSDirectory: XAttrs enabled? true
20/05/14 23:10:02 INFO namenode.NameNode: Caching file names occurring more than 10 times
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map cachedBlocks
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^18 = 262144 entries
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/05/14 23:10:02 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/05/14 23:10:02 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/05/14 23:10:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-822972339-192.168.41.20-1589469004161
20/05/14 23:10:04 INFO common.Storage: Storage directory /data/hadoop/data/hadoop_hadoop/dfs/name has been successfully formatted.
20/05/14 23:10:04 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/data/hadoop_hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
20/05/14 23:10:04 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/data/hadoop_hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
20/05/14 23:10:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/05/14 23:10:04 INFO util.ExitUtil: Exiting with status 0
20/05/14 23:10:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at big-master1/192.168.41.20
************************************************************/

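The format run above wrote a fresh fsimage under /data/hadoop/data/hadoop_hadoop/dfs/name. As a quick sanity check (a minimal sketch using the path reported in the log; adjust if your dfs.namenode.name.dir differs), the newly created directory can be listed:

[hadoop@big-master1 ~]$ ls /data/hadoop/data/hadoop_hadoop/dfs/name/current
# expect roughly: fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION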
-- Format ZKFC (initialize the HA state znode in ZooKeeper):
[hadoop@big-master1 bin]$ ./hdfs zkfc -formatZK
20/05/14 23:11:30 INFO tools.DFSZKFailoverController: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master1/192.168.41.20
STARTUP_MSG:   args = [-formatZK]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (multi-KB classpath omitted; identical jar list to the one printed by the NameNode format run above)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:11:30 INFO tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:11:31 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at big-master1/192.168.41.20:9000
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:host.name=big-master1
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_251
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.8.0_251/jre
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop/etc/hadoop:... (full classpath omitted; same jar list as above)
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/lib/native
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-1062.18.1.el7.x86_64
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop/bin
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@765d7657
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Opening socket connection to server big-master1/192.168.41.20:2181. Will not attempt to authenticate using SASL (unknown error)
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Socket connection established to big-master1/192.168.41.20:2181, initiating session
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Session establishment complete on server big-master1/192.168.41.20:2181, sessionid = 0x100061aaf330000, negotiated timeout = 10000
20/05/14 23:11:31 INFO ha.ActiveStandbyElector: Session connected.
20/05/14 23:11:31 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/cluster1 in ZK.
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Session: 0x100061aaf330000 closed
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: EventThread shut down
20/05/14 23:11:31 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at big-master1/192.168.41.20
************************************************************/

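The formatZK output confirms that the znode /hadoop-ha/cluster1 was created. To verify it from the ZooKeeper side (an optional check; zkCli.sh ships with the ZooKeeper installation):

[hadoop@big-master1 ~]$ zkCli.sh -server big-master1:2181
[zk: big-master1:2181(CONNECTED) 0] ls /hadoop-ha
# should return [cluster1], matching the log line above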
-- NameNode2: synchronize metadata from the primary NameNode
[hadoop@big-master2 bin]$ ./hdfs namenode -bootstrapStandby
20/05/14 23:54:27 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master2/192.168.41.21
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (multi-KB classpath omitted; same jar list as on big-master1)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:54:27 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:54:27 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID big-master2 from:
           Nameservice ID: cluster1
        Other Namenode ID: big-master1
  Other NN's HTTP address: http://big-master1:50070
  Other NN's IPC  address: big-master1/192.168.41.20:9000
             Namespace ID: 1159823603
            Block pool ID: BP-822972339-192.168.41.20-1589469004161
               Cluster ID: CID-4521db8a-5242-4421-a2bc-3c569020af64
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
20/05/14 23:54:29 INFO common.Storage: Storage directory /data/hadoop/data/hadoop_hadoop/dfs/name has been successfully formatted.
20/05/14 23:54:29 INFO namenode.FSEditLog: Edit logging is async:true
20/05/14 23:54:30 INFO namenode.TransferFsImage: Opening connection to http://big-master1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:1159823603:1589469004161:CID-4521db8a-5242-4421-a2bc-3c569020af64&bootstrapstandby=true
20/05/14 23:54:30 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
20/05/14 23:54:30 INFO namenode.TransferFsImage: Transfer took 0.03s at 0.00 KB/s
20/05/14 23:54:30 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 323 bytes.
20/05/14 23:54:30 INFO util.ExitUtil: Exiting with status 0
20/05/14 23:54:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at big-master2/192.168.41.21
************************************************************/

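-bootstrapStandby formats the local name directory on big-master2 and then downloads the current fsimage from big-master1 over HTTP (the :50070 imagetransfer URL in the log), so both NameNodes start from an identical namespace. The downloaded checkpoint can be confirmed on the standby (a sketch, reusing the name dir from the log):

[hadoop@big-master2 ~]$ ls -l /data/hadoop/data/hadoop_hadoop/dfs/name/current/
# the fsimage file should match the 323-byte image reported in the transfer log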
-- Start the ZKFC daemon
[hadoop@big-master1 hadoop]$ sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master1.out
[hadoop@big-master1 hadoop]$ cat /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master1.out
ulimit -a for user hadoop
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30817
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

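Note that the .out file only records the ulimit banner printed at daemon start; the ZKFC's real activity goes to the matching .log file, which is the place to confirm it connected to ZooKeeper and joined the election. (Also worth noting: open files (-n) 1024 is the CentOS default and is commonly raised on Hadoop nodes.)

[hadoop@big-master1 hadoop]$ tail -f /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master1.log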
-- Start all HDFS processes
[hadoop@big-master1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [big-master2 big-master1]
big-master1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-big-master1.out
big-master2: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-big-master2.out
big-slave02: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave02.out
big-slave03: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave03.out
big-slave01: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave01.out
Starting journal nodes [big-slave01 big-slave02 big-slave03 big-master1 big-master2]
big-master1: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-master1.out
big-slave01: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave01.out
big-slave03: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave03.out
big-master2: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-master2.out
big-slave02: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave02.out
Starting ZK Failover Controllers on NN hosts [big-master2 big-master1]
big-master1: zkfc running as process 29642. Stop it first.
big-master2: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master2.out

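Once start-dfs.sh returns, the daemons can be verified with jps on each host, and the NameNode HA roles queried through haadmin. The service IDs big-master1 and big-master2 below come from the bootstrapStandby output earlier; this is a verification sketch, not part of the original run:

[hadoop@big-master1 hadoop]$ jps
# expect NameNode, JournalNode, DFSZKFailoverController (and QuorumPeerMain for ZooKeeper)
[hadoop@big-master1 hadoop]$ bin/hdfs haadmin -getServiceState big-master1
[hadoop@big-master1 hadoop]$ bin/hdfs haadmin -getServiceState big-master2
# one NameNode should report "active" and the other "standby"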
-- Start the YARN processes
[hadoop@big-master1 sbin]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master1.out
big-slave01: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave01.out
big-slave02: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave02.out
big-slave03: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave03.out

[hadoop@big-slave03 ~]$ cat /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave03.out
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
May 15, 2020 2:29:27 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:29:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"

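Before starting the second ResourceManager, it is worth confirming that all three NodeManagers registered with the active RM on big-master1 (an optional check, not from the original session):

[hadoop@big-master1 ~]$ yarn node -list
# expect big-slave01, big-slave02 and big-slave03 in RUNNING state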
-- Start the standby YARN ResourceManager (on big-master2)
[hadoop@big-master2 sbin]$ ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master2.out

[hadoop@big-master2 sbin]$ cat /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master2.out
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 15, 2020 2:32:22 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:32:23 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:32:24 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"

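With both ResourceManagers up, their HA roles can be queried with rmadmin. The IDs rm1 and rm2 here are an assumption: substitute whatever yarn.resourcemanager.ha.rm-ids is set to in your yarn-site.xml:

[hadoop@big-master2 ~]$ yarn rmadmin -getServiceState rm1
[hadoop@big-master2 ~]$ yarn rmadmin -getServiceState rm2
# one RM should report "active" and the other "standby"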
########## Personal test: original configuration files ###########

### zookeeper

### hadoop
