2013/08/09 Reposted from http://bkeep.blog.163.com/blog/static/123414290201272644422987/

[Case Study] The dfs.datanode.max.xcievers parameter causes errors in an hbase-0.92 cluster

2012-08-26 16:44:22 | Category: HBase

Scenario:

Of the 15 datanodes, most had gone down; only 2 were still alive.

[dwhftp@dw-hbase-1 ~]$ hadoop dfsadmin -report

Configured Capacity: 73837983129600 (67.16 TB)

Present Capacity: 69740285348454 (63.43 TB)

DFS Remaining: 61837580668928 (56.24 TB)

DFS Used: 7902704679526 (7.19 TB)

DFS Used%: 11.33%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Of the 14 regionservers, only 2 were still alive.

[dwhftp@dw-hbase-11 ~]$ hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 0.92.0, r1231986, Mon Jan 16 13:16:35 UTC 2012

hbase(main):001:0> status

14 servers, 0 dead, 4739.8571 average load

The service processes had not actually exited (which is why status above still reported 0 dead servers), but they could no longer serve requests, and each regionserver was consuming 16 GB of memory:

dwhftp   29754  1 15 Jul30 ?        3-07:58:27 /usr/alibaba/java/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx16000m -Dhbase.log.dir=/home/dwhftp/opt/hbase/logs -Dhbase.log.file=hbase-dwhftp-regionserver-dw-hbase-9.hst.ali.dw.alidc.net.log -Dhbase.home.dir=/home/dwhftp/opt/hbase -Dhbase.id.str=dwhftp -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/dwhftp/opt/hbase/lib/native/Linux-amd64-64 -classpath /home/dwhftp/opt/hbase/conf:/usr/alibaba/java/lib/tools.jar:/home/dwhftp/opt/hbase:/home/dwhftp/opt/hbase/hbase-0.92.0.jar:/home/dwhftp/opt/hbase/hbase-0.92.0-tests.jar:/home/dwhftp/opt/hbase/lib/activation-1.1.jar:/home/dwhftp/opt/hbase/lib/asm-3.1.jar:/home/dwhftp/opt/hbase/lib/avro-1.5.3.jar:/home/dwhftp/opt/hbase/lib/avro-ipc-1.5.3.jar:/home/dwhftp/opt/hbase/lib/commons-beanutils-1.7.0.jar:/home/dwhftp/opt/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/dwhftp/opt/hbase/lib/commons-cli-1.2.jar:/home/dwhftp/opt/hbase/lib/commons-codec-1.4.jar:/home/dwhftp/opt/hbase/lib/commons-collections-3.2.1.jar:/home/dwhftp/opt/hbase/lib/commons-configuration-1.6.jar:/home/dwhftp/opt/hbase/lib/commons-digester-1.8.jar:/home/dwhftp/opt/hbase/lib/commons-el-1.0.jar:/home/dwhftp/opt/hbase/lib/commons-httpclient-3.1.jar:/home/dwhftp/opt/hbase/lib/commons-lang-2.5.jar:/home/dwhftp/opt/hbase/lib/commons-logging-1.1.1.jar:/home/dwhftp/opt/hbase/lib/commons-math-2.1.jar:/home/dwhftp/opt/hbase/lib/commons-net-1.4.1.jar:/home/dwhftp/opt/hbase/lib/core-3.1.1.jar:/home/dwhftp/opt/hbase/lib/guava-r09.jar:/home/dwhftp/opt/hbase/lib/guava-r09-jarjar.jar:/home/dwhftp/opt/hbase/lib/hadoop-core-0.20.2-cdh3u3.jar:/home/dwhftp/opt/hbase/lib/high-scale-lib-1.1.1.jar:/home/dwhftp/opt/hbase/lib/httpclient-4.0.1.jar:/home/dwhftp/opt/hbase/lib/httpcore-4.0.1.jar:/home/dwhftp/opt/hbase/lib/jackson-core-asl-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jackson-xc-1.5.5.jar:/home/dwhftp/opt/hbase/lib/jamon-runtime-2.3.1.jar:/home/dwhftp/opt/hbase/lib/jasper-compiler-5.5.23.jar:/home/dwhftp/opt/hbase/lib/jasper-runtime-5.5.23.jar:/home/dwhftp/opt/hbase/lib/jaxb-api-2.1.jar:/home/dwhftp/opt/hbase/lib/jaxb-impl-2.1.12.jar:/home/dwhftp/opt/hbase/lib/jersey-core-1.4.jar:/home/dwhftp/opt/hbase/lib/jersey-json-1.4.jar:/home/dwhftp/opt/hbase/lib/jersey-server-1.4.jar:/home/dwhftp/opt/hbase/lib/jettison-1.1.jar:/home/dwhftp/opt/hbase/lib/jetty-6.1.26.jar:/home/dwhftp/opt/hbase/lib/jetty-util-6.1.26.jar:/home/dwhftp/opt/hbase/lib/jruby-complete-1.6.5.jar:/home/dwhftp/opt/hbase/lib/jsp-2.1-6.1.14.jar:/home/dwhftp/opt/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/dwhftp/opt/hbase/lib/libthrift-0.7.0.jar:/home/dwhftp/opt/hbase/lib/log4j-1.2.16.jar:/home/dwhftp/opt/hbase/lib/netty-3.2.4.Final.jar:/home/dwhftp/opt/hbase/lib/protobuf-java-2.4.0a.jar:/home/dwhftp/opt/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/dwhftp/opt/hbase/lib/servlet-api-2.5.jar:/home/dwhftp/opt/hbase/lib/slf4j-api-1.5.8.jar:/home/dwhftp/opt/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/dwhftp/opt/hbase/lib/snappy-java-1.0.3.2.jar:/home/dwhftp/opt/hbase/lib/stax-api-1.0.1.jar:/home/dwhftp/opt/hbase/lib/velocity-1.7.jar:/home/dwhftp/opt/hbase/lib/xmlenc-0.52.jar:/home/dwhftp/opt/hbase/lib/zookeeper-3.4.2.jar::/home/dwhftp/opt/hadoop/conf:/home/dwhftp/opt/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start

Scanning a table from the hbase shell produced the following error:

hbase(main):001:0> scan '20120819entry'

ROW                           COLUMN+CELL

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/dwhftp/opt/install/hbase-0.92.0/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/dwhftp/opt/install/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1

at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2819)

at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1755)

at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)

at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

Regionserver error log:

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/dw-hbase-17,60020,1345529427507/dw-hbase-17%2C60020%2C1345529427507.1345529431034 File does not exist. [Lease.  Holder: DFSClient_-1871695140, pendingcreates: 4]

(portion of the log omitted ...)

2012-08-21 14:35:13,530 WARN org.apache.hadoop.hdfs.DFSClient: Error while syncing

java.io.IOException: Bad connect ack with firstBadLink as 172.16.197.18:50010

Errors on the HMaster:

[dwhftp@dw-hbase-3 logs]$ pwd

/home/dwhftp/opt/hbase/logs

[dwhftp@dw-hbase-3 logs]$ tail -200f hbase-dwhftp-master-dw-hbase-3.hst.ali.dw.alidc.net.log

2012-08-21 10:37:56,548 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of .META.,,1 at address=dw-hbase-9,60020,1343643389530; org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: .META.,,1

Hadoop logs (the problem was finally found!):

[dwhftp@dw-hbase-13 logs]$ pwd

/home/dwhftp/opt/hadoop/logs

[dwhftp@dw-hbase-13 logs]$ vi hadoop-dwhftp-datanode-dw-hbase-13.hst.ali.dw.alidc.net.log

2012-08-21 14:35:00,203 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.197.25:50010, storageID=DS-2042732685-172.16.197.25-50010-1334122560477, infoPort=50075, ipcPort=50020):DataXceiver

java.io.IOException: xceiverCount 4097 exceeds the limit of concurrent xcievers 4096

at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:156)

Solution: adjust the xcievers parameter

The limit in effect was 4096; raise it to 8192:

vi /home/dwhftp/opt/hadoop/conf/hdfs-site.xml

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>
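To see how close a datanode actually is to this limit, one rough check is to count its active DataXceiver threads. A minimal sketch, assuming jstack from the JDK is on the PATH and the datanode runs as user dwhftp (matching on the thread name like this is only an approximation):

# Count active DataXceiver threads on this datanode (sketch).
DN_PID=$(pgrep -u dwhftp -f 'org.apache.hadoop.hdfs.server.datanode.DataNode' | head -1)
jstack "$DN_PID" | grep -c 'DataXceiver'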

Notes on the dfs.datanode.max.xcievers parameter

A Hadoop HDFS datanode has an upper bound on the number of files it will serve at any one time. The parameter is called xcievers (yes, the Hadoop authors misspelled it). Before doing any loading, make sure you have configured the xceivers parameter in conf/hdfs-site.xml to at least 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

Remember to restart HDFS after changing its configuration.

Without this setting you are likely to run into strange failures: the datanode logs will complain about xcievers exceeded, while on the client side jobs fail with missing-blocks errors, for example: 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry... [5]
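Since the client-side symptoms are this indirect, it helps to alert on the datanode message itself. A minimal cron-able sketch, assuming this cluster's log naming convention (the mail recipient is a placeholder):

# Alert when the datanode log reports the xceiver limit being exceeded (sketch).
LOG=/home/dwhftp/opt/hadoop/logs/hadoop-dwhftp-datanode-$(hostname).log
grep -q 'exceeds the limit of concurrent xcievers' "$LOG" \
  && echo "xceiver limit hit on $(hostname)" | mail -s 'HDFS xceiver alert' admin@example.com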

Restart HBase and HDFS

hbase-1

[dwhftp@dw-hbase-1 ~]$ start-dfs.sh

hbase-3

[dwhftp@dw-hbase-3 ~]$ start-hbase.sh

hbase-3

MapReduce must also be started, since it is used to sync data with Yunti (云梯, Alibaba's internal Hadoop platform):

[dwhftp@dw-hbase-3 ~]$ start-mapred.sh
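After the restart it is worth confirming that all datanodes re-registered with the namenode; a quick sketch (the exact wording of the dfsadmin report line may vary between Hadoop versions):

# All 15 datanodes should show up as available again after the restart (sketch).
hadoop dfsadmin -report | grep -i 'Datanodes available'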

Follow-up actions

Add monitoring

Port monitoring and process monitoring (a port-check sketch follows the host list below):

hbase-1 (namenode) 9000  50070  80  2181  3555

hbase-2 (SecondaryNameNode) 50090  2181  3555

hbase-3 (HMaster) 60010  80  2181  3555

hbase-4 (standby HMaster) 60000  2181  3555

hbase-5 (HRegionServer) 60020  60030  2181  3555  datanode  50010  50075

hbase-6 ~ dw-hbase-18 (HRegionServer) 60020  60030  datanode  50010  50075
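A minimal port-check sketch over the list above, assuming a BSD-style nc is installed (the host names and ports are this cluster's; the 3-second timeout is arbitrary, and the list below is abbreviated):

# Probe each host:port pair and report the ones that do not respond (sketch).
while read host ports; do
  for p in $ports; do
    nc -z -w 3 "$host" "$p" || echo "DOWN: $host:$p"
  done
done <<'EOF'
dw-hbase-1 9000 50070 80 2181 3555
dw-hbase-2 50090 2181 3555
dw-hbase-3 60010 80 2181 3555
dw-hbase-4 60000 2181 3555
dw-hbase-5 60020 60030 2181 3555 50010 50075
EOF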

Write our own monitoring scripts (a sketch follows this short list):

Monitor that the hbase and datanode processes are alive

Monitor the logs for the ERROR keyword
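A minimal sketch of such a script, assuming this cluster's process names and log locations:

# Check that the regionserver and datanode processes are alive (sketch).
for pattern in 'regionserver.HRegionServer' 'datanode.DataNode'; do
  pgrep -f "$pattern" >/dev/null || echo "process missing: $pattern"
done
# Show the most recent ERROR lines across the hbase and hadoop logs (sketch).
grep -h 'ERROR' /home/dwhftp/opt/hbase/logs/*.log /home/dwhftp/opt/hadoop/logs/*.log 2>/dev/null | tail -20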

Capacity

Agree on availability with the developers

The namenode is a single point of failure, so the cluster is certainly not a 24x7 service. If a single machine dies, it takes time to rebuild the HBase cluster, which means minutes-level unavailability. This must be made clear to the developers, and it must be decided who maintains HBase.

In addition, create an HBase maintenance chat group to make communication easier.

HBase resource requests

All applications share the same HBase cluster, so resources must be isolated, and business reviews need a defined process and standards.

Adjust log levels

info, debug, warn, and error messages all land in the same log file; the sheer volume of information makes troubleshooting slow.

Configure log4j to write error-level logs to a separate file.
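One way to do this with the log4j 1.2 properties configuration that HBase 0.92 ships is an extra appender with Threshold=ERROR. A sketch (the ERRORFILE appender name and file path are examples, not this cluster's actual config):

# Sketch: add an ERROR-only appender to HBase's log4j configuration.
cat >> /home/dwhftp/opt/hbase/conf/log4j.properties <<'EOF'
log4j.appender.ERRORFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ERRORFILE.File=/home/dwhftp/opt/hbase/logs/hbase-error.log
log4j.appender.ERRORFILE.Threshold=ERROR
log4j.appender.ERRORFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ERRORFILE.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
EOF
# Then append ",ERRORFILE" to the existing log4j.rootLogger line by hand so
# the new appender actually receives events.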

Learn from Taobao's experience:

routine maintenance actions

monitoring points

Reposted from: https://www.cnblogs.com/DeeFOX/p/3247587.html
