Schematic: two clusters --- goal: scale-out

HA federation mode addresses the performance bottleneck of plain HA mode (chiefly the NameNode and ResourceManager). It divides a single HA cluster into two or more clusters joined by Federation, giving the HA cluster the ability to scale horizontally; in theory, ever-growing data can be handled in this mode simply by adding nodes. The federation configuration below is a set of adjustments on top of the original HA setup.

Configuration procedure
federation

 
cp -r local/ federation
    1. Plan the cluster
        ns1: nn1 (s101) + nn2 (s102)
        ns2: nn3 (s103) + nn4 (s104)
    2. Prepare
        [nn1 ~ nn4] passwordless ssh to all nodes.
 
    3. Stop the entire cluster
        
    4. Configuration files
        4.1) hdfs-site.xml on s101 and s102
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <!-- the ns1 NameNodes write their shared edits under journal ID /ns1 -->
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns1</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
 
        4.2) hdfs-site.xml on s103 and s104 (identical to 4.1 except that
             dfs.namenode.shared.edits.dir points at journal ID /ns2 instead of /ns1)
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <!-- the ns2 NameNodes write their shared edits under journal ID /ns2 -->
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns2</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
        
        4.3) core-site.xml on s101 ~ s104
            [hadoop/federation/core-site.xml]
            <?xml version="1.0"?>
            <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
                    <xi:include href="mountTable.xml" />
                    <property>
                                    <name>fs.defaultFS</name>
                                    <value>viewfs://ClusterX</value>
                    </property>
                    <property>
                                    <name>dfs.journalnode.edits.dir</name>
                                    <value>/home/centos/hadoop/federation/journalnode</value>
                    </property>
                    <property>
                                     <name>hadoop.tmp.dir</name>
                                    <value>/home/centos/hadoop/federation</value>
                    </property>
                    <property>
                                    <name>ha.zookeeper.quorum</name>
                                    <value>s102:2181,s103:2181,s104:2181</value>
                    </property>
            </configuration>
        
        4.4) mountTable.xml: the client-side mount table
            [hadoop/federation/mountTable.xml]
            <configuration>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.homedir</name>
                            <value>/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./home</name>
                            <value>hdfs://ns1/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./tmp</name>
                            <value>hdfs://ns2/tmp</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/foo</name>
                            <value>hdfs://ns1/projects/foo</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/bar</name>
                            <value>hdfs://ns2/projects/bar</value>
                    </property>
            </configuration>
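The mount table above is a client-side prefix map: a viewfs://ClusterX path is routed to whichever namespace owns the longest matching mount point. A minimal sketch of that resolution (the `resolve` helper is illustrative only, not a real Hadoop tool):

```shell
# Illustrative only: mimic how a viewfs client maps ClusterX paths to a
# backing namespace using the mount points from mountTable.xml above.
resolve() {
    case "$1" in
        /projects/foo|/projects/foo/*) echo "hdfs://ns1$1" ;;
        /projects/bar|/projects/bar/*) echo "hdfs://ns2$1" ;;
        /home|/home/*)                 echo "hdfs://ns1$1" ;;
        /tmp|/tmp/*)                   echo "hdfs://ns2$1" ;;
        *) echo "no mount point for $1" >&2; return 1 ;;
    esac
}

resolve /home/data        # -> hdfs://ns1/home/data
resolve /projects/bar/x   # -> hdfs://ns2/projects/bar/x
```

This is why the upload in step 5.7 to /home/data actually lands on the ns1 cluster.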
 
    5. Operations
        5.1) Delete the logs and local temp directories on every node
            $>xcall.sh rm -rf /soft/hadoop/logs/*
            $>xcall.sh rm -rf /home/centos/hadoop/federation/*
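xcall.sh is a small helper assumed throughout this guide: it runs one command on every node over ssh. A minimal sketch (the s101 ~ s104 host list and the XCALL_RUNNER override are assumptions for illustration):

```shell
# xcall: run the given command on every node in HOSTS over ssh.
# XCALL_RUNNER can be overridden (e.g. with 'echo') for a dry run.
HOSTS="s101 s102 s103 s104"
xcall() {
    local h
    for h in $HOSTS; do
        echo "===== $h ====="
        ${XCALL_RUNNER:-ssh} "$h" "$@"
    done
}
```

Dry-run example: `XCALL_RUNNER=echo xcall hostname` prints each host banner followed by the command instead of executing it remotely.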
        
        5.2) Repoint the hadoop symlink on every node
            $>xcall.sh ln -sfT /soft/hadoop/etc/federation /soft/hadoop/etc/hadoop
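The -sfT flags matter here: -f replaces an existing link and -T treats the destination as a plain name, so the link itself is repointed rather than a new link being created inside the old target directory. A local demonstration (the mktemp paths are illustrative):

```shell
# Demonstrate ln -sfT repointing a symlink between two config dirs (GNU coreutils).
d=$(mktemp -d)
mkdir -p "$d/etc/local" "$d/etc/federation"

ln -sfT "$d/etc/local" "$d/etc/hadoop"        # hadoop -> local
ln -sfT "$d/etc/federation" "$d/etc/hadoop"   # repointed: hadoop -> federation
readlink "$d/etc/hadoop"                      # prints "$d/etc/federation"

rm -rf "$d"
```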
 
        5.3) Format and initialize the ns1 cluster
            a) Start the JN cluster
                Log in to s102 ~ s104 and start the journalnode process:
                $>hadoop-daemon.sh start journalnode
            b) Format the nn1 node
                [s101]
                $>hdfs namenode -format
            c) Copy s101's metadata to s102.
                [s101]
                $>scp -r
            d) Run the standby bootstrap on s102
                # start the namenode on s101
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s102; do NOT re-format (answer N)
                $>hdfs namenode -bootstrapStandby
 
            e) On s102, initialize the edit log onto the JN cluster (answer N)
                $>hdfs namenode -initializeSharedEdits
            f) On s102, format the zkfc znode in ZooKeeper (answer Y)
                $>hdfs zkfc -formatZK
            g) Start the namenode and zkfc processes on s101 and s102
               (the s101 namenode is already running from step d)
                [s101]
                $>hadoop-daemon.sh start zkfc
                
                [s102]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc
 
            h) Check the web UI
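Besides the web UI, the active/standby state can be checked from the command line. A sketch assuming the standard Hadoop 2.x JMX endpoint (s101:50070 is this guide's host; `nn_state` is an illustrative helper, not a Hadoop command):

```shell
# Pull the "State" field (active/standby) out of NameNodeStatus JMX JSON.
nn_state() {
    grep -o '"State" *: *"[^"]*"' | tail -1 | sed 's/.*"\(.*\)"/\1/'
}

# Against a live cluster (not runnable offline):
#   curl -s 'http://s101:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus' | nn_state
#   hdfs haadmin -ns ns1 -getServiceState nn1   # built-in alternative

echo '{"beans":[{"name":"...","State":"active"}]}' | nn_state   # -> active
```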
                
 
            
        5.4) Format and initialize the ns2 cluster
            a) Format nn3; be sure to pass -clusterId so it matches ns1's cluster ID.
                [s103]
                $>hdfs namenode -format -clusterId CID-e16c5e2f-c0a5-4e51-b789-008e36b7289a
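Rather than copying the CID string by hand, it can be read from the VERSION file that the ns1 format wrote. A sketch, assuming the name dir follows the default ${hadoop.tmp.dir}/dfs/name layout under /home/centos/hadoop/federation:

```shell
# Extract the clusterID=... value from a NameNode VERSION file.
get_cid() { grep '^clusterID=' "$1" | cut -d= -f2; }

# On s103, assuming s101's VERSION file is reachable (e.g. copied over):
#   CID=$(get_cid /home/centos/hadoop/federation/dfs/name/current/VERSION)
#   hdfs namenode -format -clusterId "$CID"
```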
                
            b) Copy s103's metadata to s104.
                $>scp -r /home/centos/hadoop/federation centos@s104:/home/centos/hadoop/
            c) Bootstrap on s104
                # start the namenode on s103
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s104
                $>hdfs namenode -bootstrapStandby
            d) On s104, initialize the edit log
                $>hdfs namenode -initializeSharedEdits
            e) On s104, format the zkfc znode in ZooKeeper (answer Y)
                $>hdfs zkfc -formatZK
            f) Start the namenode and zkfc processes on s103 and s104
               (the s103 namenode is already running from step c)
                [s103]
                $>hadoop-daemon.sh start zkfc
                
                [s104]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc
        5.5) Stop the cluster
            $>stop-dfs.sh
 
        5.6) Restart the DFS cluster
            $>start-dfs.sh
                
        5.7) Create a directory
            # note the -p flag: parent directories are created as needed
            $>hdfs dfs -mkdir -p /home/data
          
            # upload a file, then inspect it in the web UI
            $>hdfs dfs -put 1.txt /home/data
 
 

Reposted from: https://www.cnblogs.com/star521/p/9703171.html
