While setting up Hadoop 2.0 today, 'hadoop fs -put' failed with the error below. I found the following fix online and am reposting it here. Problem solved.

Reposted from: http://lykke.iteye.com/blog/1320558

```
Exception in thread "main" java.io.IOException: Bad connect ack with firstBadLink 192.168.1.14:50010
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2903)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
```

The error is thrown when uploading a file with 'hadoop fs -put'.

The exception is raised in DFSClient, in createBlockOutputStream():

```java
// connects to the first datanode in the pipeline
// Returns true if success, otherwise return failure.
//
private boolean createBlockOutputStream(DatanodeInfo[] nodes, String client,
                                        boolean recoveryFlag) {
  String firstBadLink = "";
  if (LOG.isDebugEnabled()) {
    for (int i = 0; i < nodes.length; i++) {
      LOG.debug("pipeline = " + nodes[i].getName());
    }
  }

  // persist blocks on namenode on next flush
  persistBlocks = true;

  try {
    LOG.debug("Connecting to " + nodes[0].getName());
    InetSocketAddress target = NetUtils.createSocketAddr(nodes[0].getName());
    s = socketFactory.createSocket();
    int timeoutValue = 3000 * nodes.length + socketTimeout;
    NetUtils.connect(s, target, timeoutValue);
    s.setSoTimeout(timeoutValue);
    s.setSendBufferSize(DEFAULT_DATA_SOCKET_SIZE);
    LOG.debug("Send buf size " + s.getSendBufferSize());
    long writeTimeout = HdfsConstants.WRITE_TIMEOUT_EXTENSION * nodes.length +
                        datanodeWriteTimeout;

    //
    // Xmit header info to datanode
    //
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(NetUtils.getOutputStream(s, writeTimeout),
                                 DataNode.SMALL_BUFFER_SIZE));
    blockReplyStream = new DataInputStream(NetUtils.getInputStream(s));

    out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
    out.write(DataTransferProtocol.OP_WRITE_BLOCK);
    out.writeLong(block.getBlockId());
    out.writeLong(block.getGenerationStamp());
    out.writeInt(nodes.length);
    out.writeBoolean(recoveryFlag);       // recovery flag
    Text.writeString(out, client);
    out.writeBoolean(false);              // Not sending src node information
    out.writeInt(nodes.length - 1);       // number of downstream datanodes
    for (int i = 1; i < nodes.length; i++) {
      nodes[i].write(out);
    }
    checksum.writeHeader(out);
    out.flush();

    // receive ack for connect
    firstBadLink = Text.readString(blockReplyStream);
    if (firstBadLink.length() != 0) {
      throw new IOException("Bad connect ack with firstBadLink " + firstBadLink);
    }

    blockStream = out;
    return true;     // success
  } catch (IOException ie) {
    // The excerpt in the original post ends here. In the Hadoop source the
    // catch block logs the failure, marks the bad datanode so it can be
    // excluded from the pipeline, and returns false.
    LOG.info("Exception in createBlockOutputStream " + ie);
    hasError = true;
    blockReplyStream = null;
    return false;   // error
  }
}
```

The message means the client did not receive a clean connect ack from the write pipeline: as the code above shows, firstBadLink is non-empty when some datanode in the pipeline could not be connected to, and its value (192.168.1.14:50010 here) names that node. I resolved it with two changes:

1) '/etc/init.d/iptables stop' --> stopped the firewall
2) Set SELINUX=disabled in the '/etc/selinux/config' file --> disabled SELinux (this takes effect after a reboot; 'setenforce 0' applies it immediately)

In general, a Hadoop ack error like this one usually means a firewall is still running on one of the datanodes. Note that '/etc/init.d/iptables stop' does not survive a reboot; on CentOS/RHEL, 'chkconfig iptables off' disables it permanently.
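
Before (or instead of) disabling the firewall outright, it can help to confirm which datanode is actually unreachable. Below is a minimal sketch, not part of the original post: the class name and host list are hypothetical, 50010 is the default dfs.datanode.address port, and the 3-second timeout mirrors the '3000 * nodes.length' base used in the DFSClient excerpt above. It simply attempts a TCP connect to each datanode's data-transfer port:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical probe: check whether each datanode's data-transfer port
// (50010 by default) accepts TCP connections. Replace the host list with
// the datanodes of your own cluster.
public class DataNodePortProbe {
    public static void main(String[] args) {
        String[] datanodes = { "192.168.1.14" };  // e.g. the node named in the error
        int port = 50010;                         // default dfs.datanode.address port
        for (String host : datanodes) {
            try (Socket s = new Socket()) {
                // Connect with an explicit timeout, like the DFS client does.
                s.connect(new InetSocketAddress(host, port), 3000);
                System.out.println(host + ":" + port + " reachable");
            } catch (Exception e) {
                // A timeout or "connection refused" here typically means a
                // firewall is blocking the port or the datanode is not running.
                System.out.println(host + ":" + port + " NOT reachable: " + e);
            }
        }
    }
}
```

If a node shows as unreachable from the client while its datanode process is up, that node's firewall rules are the first thing to check. Keep in mind that every node in the write pipeline must be able to reach each downstream node on this port, not just the client.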

Reposted from: https://blog.51cto.com/gcjava/1426492
