Attempting cross-nameservice access

On zhiyong2:

    ┌──────────────────────────────────────────────────────────────────────┐
    │                 • MobaXterm Personal Edition v21.4 •                 │
    │               (SSH client, X server and network tools)               │
    │                                                                      │
    │ ➤ SSH session to root@192.168.88.101                                 │
    │   • Direct SSH      :  ✔                                             │
    │   • SSH compression :  ✔                                             │
    │   • SSH-browser     :  ✔                                             │
    │   • X11-forwarding  :  ✔  (remote display is forwarded through SSH)  │
    │                                                                      │
    │ ➤ For more info, ctrl+click on help or visit our website.            │
    └──────────────────────────────────────────────────────────────────────┘
Last login: Wed Mar  2 22:16:34 2022
/usr/bin/xauth:  file /root/.Xauthority does not exist
[root@zhiyong2 ~]# cd /opt/usdp-srv/srv/udp/2.0.0.0/hdfs/bin
[root@zhiyong2 bin]# ll
total 804
-rwxr-xr-x. 1 hadoop hadoop     98 Mar  1 23:06 bootstrap-namenode.sh
-rwxr-xr-x. 1 hadoop hadoop 372928 Nov 15  2020 container-executor
-rwxr-xr-x. 1 hadoop hadoop     88 Mar  1 23:06 format-namenode.sh
-rwxr-xr-x. 1 hadoop hadoop     86 Mar  1 23:06 format-zkfc.sh
-rwxr-xr-x. 1 hadoop hadoop   8580 Nov 15  2020 hadoop
-rwxr-xr-x. 1 hadoop hadoop  11417 Mar  1 23:06 hdfs
-rwxr-xr-x. 1 hadoop hadoop   6237 Nov 15  2020 mapred
-rwxr-xr-x. 1 hadoop hadoop 387368 Nov 15  2020 test-container-executor
-rwxr-xr-x. 1 hadoop hadoop  11888 Nov 15  2020 yarn
[root@zhiyong2 bin]# ./hadoop fs -ls /
Found 6 items
drwxr-xr-x   - hadoop supergroup          0 2022-03-02 22:27 /hbase
drwxr-xr-x   - hadoop supergroup          0 2022-03-01 23:08 /tez
drwxrwxr-x   - hadoop supergroup          0 2022-03-01 23:08 /tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 /tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 /user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:12 /zhiyong-1
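
A bare path such as `/` resolves against the client's fs.defaultFS. To confirm which filesystem that is, the stock `hdfs getconf` tool can be queried — a minimal sketch; the expected value is my assumption based on this cluster's nameservice:

# Print the filesystem that bare paths resolve against; on a zhiyong-1
# node this should be the local nameservice, e.g. hdfs://zhiyong-1
hdfs getconf -confKey fs.defaultFS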

Access also works directly via IP or hostname mapping. Note that only the active NameNode answers reads: the standby (currently zhiyong2) rejects the operation, and zhiyong4 runs no NameNode at all, so the connection is refused:

[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong2:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong3:8020/
Found 6 items
drwxr-xr-x   - hadoop supergroup          0 2022-03-02 22:27 hdfs://zhiyong3:8020/hbase
drwxr-xr-x   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong3:8020/tez
drwxrwxr-x   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong3:8020/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong3:8020/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong3:8020/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:12 hdfs://zhiyong3:8020/zhiyong-1
[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong4:8020/
ls: Call From zhiyong2/192.168.88.101 to zhiyong4:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Cross-cluster access works the same way; again only the active NameNode (zhiyong6) answers, since zhiyong5 is in standby and zhiyong7 runs no NameNode:

[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong5:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong6:8020/
Found 6 items
drwxr-xr-x   - hadoop supergroup          0 2022-03-02 22:39 hdfs://zhiyong6:8020/hbase
drwxr-xr-x   - hadoop supergroup          0 2022-03-01 23:34 hdfs://zhiyong6:8020/tez
drwxrwxr-x   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong6:8020/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong6:8020/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong6:8020/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:38 hdfs://zhiyong6:8020/zhiyong-2
[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong7:8020/
ls: Call From zhiyong2/192.168.88.101 to zhiyong7:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

The nameservice (the HA logical name) can also be used directly:

[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong-1/
Found 6 items
drwxr-xr-x   - hadoop supergroup          0 2022-03-02 22:27 hdfs://zhiyong-1/hbase
drwxr-xr-x   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez
drwxrwxr-x   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong-1/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong-1/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:12 hdfs://zhiyong-1/zhiyong-1
[root@zhiyong2 bin]# ./hadoop fs -ls hdfs://zhiyong-2/
-ls: java.net.UnknownHostException: zhiyong-2
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-v] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-head <file>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

Usage: hadoop fs [generic options] -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]

However, the two clusters are not bridged by default: each client configuration only knows its own nameservice, so a freshly deployed USDP cluster cannot reach the other cluster through its nameservice.
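
To see what the local client actually knows before the bridge is in place, the other cluster's HA keys can be queried — they are simply absent. A small sketch using the stock `hdfs getconf` tool:

# On an unbridged zhiyong-1 node: the local HA keys resolve,
# while zhiyong-2's keys are missing and the lookup fails.
hdfs getconf -confKey dfs.ha.namenodes.zhiyong-1   # nn1,nn2
hdfs getconf -confKey dfs.ha.namenodes.zhiyong-2   # fails: key not configured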

Bridging the clusters

Locating the hdfs-site.xml configuration

As an aside, I really rather like this monitoring UI...


The following can be seen in hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.name.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/nn</value></property>
  <property><name>dfs.data.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/data</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/udp/2.0.0.0/hdfs/jnData</value></property>
  <property><name>dfs.ha.namenodes.zhiyong-1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-1.nn1</name><value>zhiyong2:8020</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-1.nn2</name><value>zhiyong3:8020</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-1.nn1</name><value>zhiyong2:50070</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-1.nn2</name><value>zhiyong3:50070</value></property>
  <property><name>ha.zookeeper.quorum</name><value>zhiyong2:2181,zhiyong3:2181,zhiyong4:2181</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://zhiyong2:8485;zhiyong3:8485;zhiyong4:8485/zhiyong-1</value></property>
  <property><name>dfs.client.failover.proxy.provider.zhiyong-1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence(hadoop:22)</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.datanode.max.xcievers</name><value>4096</value></property>
  <property><name>dfs.permissions.enable</name><value>false</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.namenode.heartbeat.recheck-interval</name><value>45000</value></property>
  <property><name>fs.trash.interval</name><value>7320</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>8192</value></property>
  <property><name>dfs.image.compress</name><value>true</value></property>
  <property><name>dfs.namenode.num.checkpoints.retained</name><value>12</value></property>
  <property><name>dfs.datanode.data.dir.perm</name><value>750</value></property>
  <!-- Enable short-circuit read -->
  <!--
  <property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
  <property><name>dfs.domain.socket.path</name><value>/var/lib/hadoop-hdfs/dn_socket</value></property>
  <property><name>dfs.client.use.legacy.blockreader.local</name><value>true</value></property>
  <property><name>dfs.block.local-path-access.user</name><value>hadoop,root</value></property>
  -->
  <property><name>dfs.datanode.handler.count</name><value>50</value></property>
  <property><name>dfs.namenode.handler.count</name><value>50</value></property>
  <property><name>dfs.socket.timeout</name><value>900000</value></property>
  <property><name>dfs.hosts.exclude</name><value>/srv/udp/2.0.0.0/hdfs/etc/hadoop/excludes</value></property>
  <property><name>dfs.namenode.replication.max-streams</name><value>32</value></property>
  <property><name>dfs.namenode.replication.max-streams-hard-limit</name><value>200</value></property>
  <property><name>dfs.namenode.replication.work.multiplier.per.iteration</name><value>200</value></property>
  <property><name>dfs.datanode.balance.bandwidthPerSec</name><value>10485760</value></property>
  <property><name>dfs.disk.balancer.enabled</name><value>true</value></property>
  <property><name>dfs.disk.balancer.max.disk.throughputInMBperSec</name><value>50</value></property>
  <property><name>dfs.disk.balancer.plan.threshold.percent</name><value>2</value></property>
  <property><name>dfs.disk.balancer.block.tolerance.percent</name><value>5</value></property>
</configuration>

The corresponding file on the zhiyong-2 cluster:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.name.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/nn</value></property>
  <property><name>dfs.data.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/data</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/udp/2.0.0.0/hdfs/jnData</value></property>
  <property><name>dfs.ha.namenodes.zhiyong-2</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-2.nn1</name><value>zhiyong5:8020</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-2.nn2</name><value>zhiyong6:8020</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-2.nn1</name><value>zhiyong5:50070</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-2.nn2</name><value>zhiyong6:50070</value></property>
  <property><name>ha.zookeeper.quorum</name><value>zhiyong5:2181,zhiyong6:2181,zhiyong7:2181</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://zhiyong5:8485;zhiyong6:8485;zhiyong7:8485/zhiyong-2</value></property>
  <property><name>dfs.client.failover.proxy.provider.zhiyong-2</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence(hadoop:22)</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.datanode.max.xcievers</name><value>4096</value></property>
  <property><name>dfs.permissions.enable</name><value>false</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.namenode.heartbeat.recheck-interval</name><value>45000</value></property>
  <property><name>fs.trash.interval</name><value>7320</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>8192</value></property>
  <property><name>dfs.image.compress</name><value>true</value></property>
  <property><name>dfs.namenode.num.checkpoints.retained</name><value>12</value></property>
  <property><name>dfs.datanode.data.dir.perm</name><value>750</value></property>
  <!-- Enable short-circuit read -->
  <!--
  <property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
  <property><name>dfs.domain.socket.path</name><value>/var/lib/hadoop-hdfs/dn_socket</value></property>
  <property><name>dfs.client.use.legacy.blockreader.local</name><value>true</value></property>
  <property><name>dfs.block.local-path-access.user</name><value>hadoop,root</value></property>
  -->
  <property><name>dfs.datanode.handler.count</name><value>50</value></property>
  <property><name>dfs.namenode.handler.count</name><value>50</value></property>
  <property><name>dfs.socket.timeout</name><value>900000</value></property>
  <property><name>dfs.hosts.exclude</name><value>/srv/udp/2.0.0.0/hdfs/etc/hadoop/excludes</value></property>
  <property><name>dfs.namenode.replication.max-streams</name><value>32</value></property>
  <property><name>dfs.namenode.replication.max-streams-hard-limit</name><value>200</value></property>
  <property><name>dfs.namenode.replication.work.multiplier.per.iteration</name><value>200</value></property>
  <property><name>dfs.datanode.balance.bandwidthPerSec</name><value>10485760</value></property>
  <property><name>dfs.disk.balancer.enabled</name><value>true</value></property>
  <property><name>dfs.disk.balancer.max.disk.throughputInMBperSec</name><value>50</value></property>
  <property><name>dfs.disk.balancer.plan.threshold.percent</name><value>2</value></property>
  <property><name>dfs.disk.balancer.block.tolerance.percent</name><value>5</value></property>
</configuration>

Picking out the relevant configuration

In zhiyong-1, the configuration relevant to cross-nameservice access is:

<property><name>dfs.ha.namenodes.zhiyong-1</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-1.nn1</name><value>zhiyong2:8020</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-1.nn2</name><value>zhiyong3:8020</value></property>
<property><name>dfs.namenode.http-address.zhiyong-1.nn1</name><value>zhiyong2:50070</value></property>
<property><name>dfs.namenode.http-address.zhiyong-1.nn2</name><value>zhiyong3:50070</value></property>
<property><name>dfs.client.failover.proxy.provider.zhiyong-1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>

In zhiyong-2, the configuration relevant to cross-nameservice access is:

<property><name>dfs.ha.namenodes.zhiyong-2</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-2.nn1</name><value>zhiyong5:8020</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-2.nn2</name><value>zhiyong6:8020</value></property>
<property><name>dfs.namenode.http-address.zhiyong-2.nn1</name><value>zhiyong5:50070</value></property>
<property><name>dfs.namenode.http-address.zhiyong-2.nn2</name><value>zhiyong6:50070</value></property>
<property><name>dfs.client.failover.proxy.provider.zhiyong-2</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>

Adding the configuration

Accordingly, zhiyong-1 needs the following additions:

<property><name>dfs.nameservices</name><value>zhiyong-1,zhiyong-2</value></property>
<property><name>dfs.ha.namenodes.zhiyong-2</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-2.nn1</name><value>zhiyong5:8020</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-2.nn2</name><value>zhiyong6:8020</value></property>
<property><name>dfs.namenode.http-address.zhiyong-2.nn1</name><value>zhiyong5:50070</value></property>
<property><name>dfs.namenode.http-address.zhiyong-2.nn2</name><value>zhiyong6:50070</value></property>
<property><name>dfs.client.failover.proxy.provider.zhiyong-2</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>

Likewise, zhiyong-2 needs:

<property><name>dfs.nameservices</name><value>zhiyong-1,zhiyong-2</value></property>
<property><name>dfs.ha.namenodes.zhiyong-1</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-1.nn1</name><value>zhiyong2:8020</value></property>
<property><name>dfs.namenode.rpc-address.zhiyong-1.nn2</name><value>zhiyong3:8020</value></property>
<property><name>dfs.namenode.http-address.zhiyong-1.nn1</name><value>zhiyong2:50070</value></property>
<property><name>dfs.namenode.http-address.zhiyong-1.nn2</name><value>zhiyong3:50070</value></property>
<property><name>dfs.client.failover.proxy.provider.zhiyong-1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
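
Once both files carry the extra properties, each side can sanity-check that the other nameservice now resolves from its client configuration. A minimal sketch, run on a zhiyong-1 node:

# The merged client config should now expose both nameservices
# and the remote NameNode addresses.
hdfs getconf -confKey dfs.nameservices                        # zhiyong-1,zhiyong-2
hdfs getconf -confKey dfs.namenode.rpc-address.zhiyong-2.nn1  # zhiyong5:8020
hdfs getconf -confKey dfs.namenode.rpc-address.zhiyong-2.nn2  # zhiyong6:8020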

Syncing the configuration
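
The edited hdfs-site.xml has to reach every node of the cluster. USDP can distribute configuration itself; alternatively, a minimal manual sketch with scp, assuming root SSH access between the nodes and the stock USDP path shown earlier:

# Push the merged hdfs-site.xml from zhiyong2 to the other zhiyong-1 nodes.
CONF=/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/etc/hadoop/hdfs-site.xml
for host in zhiyong3 zhiyong4; do
    scp "$CONF" "$host:$CONF"
done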

Rolling restart of HDFS

Then perform a rolling restart of HDFS on both clusters. The affected nodes need to be restarted as well.

Fixing the UnknownHostException error

In fact, this alone is not enough: you will most likely still hit `-ls: java.net.UnknownHostException`. The reason, as the transcript below shows, is that the `hadoop` on the PATH loads hdfs-site.xml from a different configuration directory (here the YARN copy), which does not yet contain the new nameservice entries; copying the merged file over it fixes the error.
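
A quick way to find out which binary and which configuration directory are actually in use — a sketch; `hadoop envvars` is available in Hadoop 3.x:

# The UnknownHostException means the hdfs-site.xml that *this* client loads
# still lacks the new nameservice entries.
which hadoop
hadoop envvars | grep HADOOP_CONF_DIR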

[root@zhiyong5 /]# ./hadoop fs -ls hdfs://zhiyong-1/
-bash: ./hadoop: No such file or directory
[root@zhiyong5 /]# cd /opt/usdp-srv/srv/udp/2.0.0.0/hdfs/bin
[root@zhiyong5 bin]# ll
total 804
-rwxr-xr-x. 1 hadoop hadoop     98 Mar  1 23:32 bootstrap-namenode.sh
-rwxr-xr-x. 1 hadoop hadoop 372928 Nov 15  2020 container-executor
-rwxr-xr-x. 1 hadoop hadoop     88 Mar  1 23:32 format-namenode.sh
-rwxr-xr-x. 1 hadoop hadoop     86 Mar  1 23:32 format-zkfc.sh
-rwxr-xr-x. 1 hadoop hadoop   8580 Nov 15  2020 hadoop
-rwxr-xr-x. 1 hadoop hadoop  11417 Mar 11 23:11 hdfs
-rwxr-xr-x. 1 hadoop hadoop   6237 Nov 15  2020 mapred
-rwxr-xr-x. 1 hadoop hadoop 387368 Nov 15  2020 test-container-executor
-rwxr-xr-x. 1 hadoop hadoop  11888 Nov 15  2020 yarn
[root@zhiyong5 bin]# hadoop fs -ls hdfs://zhiyong-1/
-ls: java.net.UnknownHostException: zhiyong-1
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-v] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-head <file>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-truncate [-w] <length> <path> ...]
        [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

Usage: hadoop fs [generic options] -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]
[root@zhiyong5 bin]# hadoop fs -ls hdfs://zhiyong-2/
Found 7 items
drwxr-xr-x   - root   supergroup          0 2022-03-11 23:04 hdfs://zhiyong-2/a1
drwxrwxrwx   - hadoop supergroup          0 2022-03-02 22:39 hdfs://zhiyong-2/hbase
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:34 hdfs://zhiyong-2/tez
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-11 22:28 hdfs://zhiyong-2/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:38 hdfs://zhiyong-2/zhiyong-2
[root@zhiyong5 bin]# find /** -iname 'hdfs-site.xml'
/opt/usdp-srv/usdp/templated/2.0.0.0/hdfs/hdfs-site.xml
/opt/usdp-srv/usdp/templated/2.0.0.0/yarn/hdfs-site.xml
/opt/usdp-srv/srv/udp/2.0.0.0/hdfs/etc/hadoop/hdfs-site.xml
/opt/usdp-srv/srv/udp/2.0.0.0/yarn/etc/hadoop/hdfs-site.xml
/opt/usdp-srv/srv/udp/2.0.0.0/hbase/conf/hdfs-site.xml
/opt/usdp-srv/srv/udp/2.0.0.0/phoenix/bin/hdfs-site.xml
/opt/usdp-srv/srv/udp/2.0.0.0/dolphinscheduler/conf/hdfs-site.xml
[root@zhiyong5 bin]# cat /opt/usdp-srv/srv/udp/2.0.0.0/hdfs/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property><name>dfs.nameservices</name><value>zhiyong-1,zhiyong-2</value></property>
  <property><name>dfs.ha.namenodes.zhiyong-1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-1.nn1</name><value>zhiyong2:8020</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-1.nn2</name><value>zhiyong3:8020</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-1.nn1</name><value>zhiyong2:50070</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-1.nn2</name><value>zhiyong3:50070</value></property>
  <property><name>dfs.client.failover.proxy.provider.zhiyong-1</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.name.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/nn</value></property>
  <property><name>dfs.data.dir</name><value>/data/udp/2.0.0.0/hdfs/dfs/data</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/udp/2.0.0.0/hdfs/jnData</value></property>
  <property><name>dfs.ha.namenodes.zhiyong-2</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-2.nn1</name><value>zhiyong5:8020</value></property>
  <property><name>dfs.namenode.rpc-address.zhiyong-2.nn2</name><value>zhiyong6:8020</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-2.nn1</name><value>zhiyong5:50070</value></property>
  <property><name>dfs.namenode.http-address.zhiyong-2.nn2</name><value>zhiyong6:50070</value></property>
  <property><name>ha.zookeeper.quorum</name><value>zhiyong5:2181,zhiyong6:2181,zhiyong7:2181</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://zhiyong5:8485;zhiyong6:8485;zhiyong7:8485/zhiyong-2</value></property>
  <property><name>dfs.client.failover.proxy.provider.zhiyong-2</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence(hadoop:22)</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/home/hadoop/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.datanode.max.xcievers</name><value>4096</value></property>
  <property><name>dfs.permissions.enable</name><value>false</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.namenode.heartbeat.recheck-interval</name><value>45000</value></property>
  <property><name>fs.trash.interval</name><value>7320</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>8192</value></property>
  <property><name>dfs.image.compress</name><value>true</value></property>
  <property><name>dfs.namenode.num.checkpoints.retained</name><value>12</value></property>
  <property><name>dfs.datanode.data.dir.perm</name><value>750</value></property>
  <!-- Enable short-circuit read -->
  <!--
  <property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
  <property><name>dfs.domain.socket.path</name><value>/var/lib/hadoop-hdfs/dn_socket</value></property>
  <property><name>dfs.client.use.legacy.blockreader.local</name><value>true</value></property>
  <property><name>dfs.block.local-path-access.user</name><value>hadoop,root</value></property>
  -->
  <property><name>dfs.datanode.handler.count</name><value>50</value></property>
  <property><name>dfs.namenode.handler.count</name><value>50</value></property>
  <property><name>dfs.socket.timeout</name><value>900000</value></property>
  <property><name>dfs.hosts.exclude</name><value>/srv/udp/2.0.0.0/hdfs/etc/hadoop/excludes</value></property>
  <property><name>dfs.namenode.replication.max-streams</name><value>32</value></property>
  <property><name>dfs.namenode.replication.max-streams-hard-limit</name><value>200</value></property>
  <property><name>dfs.namenode.replication.work.multiplier.per.iteration</name><value>200</value></property>
  <property><name>dfs.datanode.balance.bandwidthPerSec</name><value>10485760</value></property>
  <property><name>dfs.disk.balancer.enabled</name><value>true</value></property>
  <property><name>dfs.disk.balancer.max.disk.throughputInMBperSec</name><value>50</value></property>
  <property><name>dfs.disk.balancer.plan.threshold.percent</name><value>2</value></property>
  <property><name>dfs.disk.balancer.block.tolerance.percent</name><value>5</value></property>
</configuration>
[root@zhiyong5 bin]# cp /opt/usdp-srv/srv/udp/2.0.0.0/hdfs/etc/hadoop/hdfs-site.xml /opt/usdp-srv/srv/udp/2.0.0.0/yarn/etc/hadoop/hdfs-site.xml
cp: overwrite '/opt/usdp-srv/srv/udp/2.0.0.0/yarn/etc/hadoop/hdfs-site.xml'? y
[root@zhiyong5 bin]# hadoop fs -ls hdfs://zhiyong-2/
Found 7 items
drwxr-xr-x   - root   supergroup          0 2022-03-11 23:04 hdfs://zhiyong-2/a1
drwxrwxrwx   - hadoop supergroup          0 2022-03-02 22:39 hdfs://zhiyong-2/hbase
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:34 hdfs://zhiyong-2/tez
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-11 22:28 hdfs://zhiyong-2/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:38 hdfs://zhiyong-2/zhiyong-2
[root@zhiyong5 bin]# hadoop fs -ls hdfs://zhiyong-1/
Found 7 items
-rw-r--r--   3 root   supergroup      14444 2022-03-11 22:37 hdfs://zhiyong-1/a1
drwxrwxrwx   - hadoop supergroup          0 2022-03-03 00:35 hdfs://zhiyong-1/hbase
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong-1/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-02 23:51 hdfs://zhiyong-1/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:12 hdfs://zhiyong-1/zhiyong-1
[root@zhiyong5 bin]#

The same operation is needed on zhiyong-1, after which access works normally. (Note that `find` also turned up copies of hdfs-site.xml for HBase, Phoenix and DolphinScheduler; those components may need the same update if they are to resolve both nameservices.)

[root@zhiyong2 bin]# cd /
[root@zhiyong2 /]# hadoop fs -ls hdfs://zhiyong-2/
Found 7 items
drwxr-xr-x   - root   supergroup          0 2022-03-11 23:04 hdfs://zhiyong-2/a1
drwxrwxrwx   - hadoop supergroup          0 2022-03-02 22:39 hdfs://zhiyong-2/hbase
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:34 hdfs://zhiyong-2/tez
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:35 hdfs://zhiyong-2/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-11 22:28 hdfs://zhiyong-2/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:38 hdfs://zhiyong-2/zhiyong-2
[root@zhiyong2 /]# hadoop fs -ls hdfs://zhiyong-1/
Found 7 items
-rw-r--r--   3 root   supergroup      14444 2022-03-11 22:37 hdfs://zhiyong-1/a1
drwxrwxrwx   - hadoop supergroup          0 2022-03-03 00:35 hdfs://zhiyong-1/hbase
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:08 hdfs://zhiyong-1/tez-0.10.0
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:09 hdfs://zhiyong-1/tmp
drwxrwxrwx   - hadoop supergroup          0 2022-03-02 23:51 hdfs://zhiyong-1/user
drwxrwxrwx   - hadoop supergroup          0 2022-03-01 23:12 hdfs://zhiyong-1/zhiyong-1
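
With both logical names resolvable from either side, real cross-cluster work becomes possible, for example copying data between the clusters with DistCp. A minimal sketch (the target path is illustrative, not from the original setup):

# Copy a directory from cluster zhiyong-1 to cluster zhiyong-2 via the
# nameservices; no NameNode hostnames are hard-coded, so HA failover
# stays transparent. DistCp runs as a MapReduce job, so YARN must be up.
hadoop distcp hdfs://zhiyong-1/zhiyong-1 hdfs://zhiyong-2/backup/zhiyong-1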
