Bad connect ack with firstBadLink 192.168.*.*:50010
While setting up Hadoop 2.0 today, hadoop fs -put failed with the error below. I found the following fix online and am reposting it here; the problem is now solved.
Reposted from: http://lykke.iteye.com/blog/1320558
Exception in thread "main" java.io.IOException: Bad connect ack with firstBadLink 192.168.1.14:50010
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2903)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
This error is raised when putting a file with hadoop fs -put. The code that throws it lives in DFSClient:
// connects to the first datanode in the pipeline
// Returns true if success, otherwise return failure.
//
private boolean createBlockOutputStream(DatanodeInfo[] nodes, String client,
                                        boolean recoveryFlag) {
  String firstBadLink = "";
  if (LOG.isDebugEnabled()) {
    for (int i = 0; i < nodes.length; i++) {
      LOG.debug("pipeline = " + nodes[i].getName());
    }
  }

  // persist blocks on namenode on next flush
  persistBlocks = true;

  try {
    LOG.debug("Connecting to " + nodes[0].getName());
    InetSocketAddress target = NetUtils.createSocketAddr(nodes[0].getName());
    s = socketFactory.createSocket();
    int timeoutValue = 3000 * nodes.length + socketTimeout;
    NetUtils.connect(s, target, timeoutValue);
    s.setSoTimeout(timeoutValue);
    s.setSendBufferSize(DEFAULT_DATA_SOCKET_SIZE);
    LOG.debug("Send buf size " + s.getSendBufferSize());
    long writeTimeout = HdfsConstants.WRITE_TIMEOUT_EXTENSION * nodes.length +
                        datanodeWriteTimeout;

    //
    // Xmit header info to datanode
    //
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(NetUtils.getOutputStream(s, writeTimeout),
                                 DataNode.SMALL_BUFFER_SIZE));
    blockReplyStream = new DataInputStream(NetUtils.getInputStream(s));

    out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
    out.write(DataTransferProtocol.OP_WRITE_BLOCK);
    out.writeLong(block.getBlockId());
    out.writeLong(block.getGenerationStamp());
    out.writeInt(nodes.length);
    out.writeBoolean(recoveryFlag);  // recovery flag
    Text.writeString(out, client);
    out.writeBoolean(false);         // Not sending src node information
    out.writeInt(nodes.length - 1);
    for (int i = 1; i < nodes.length; i++) {
      nodes[i].write(out);
    }
    checksum.writeHeader(out);
    out.flush();

    // receive ack for connect
    firstBadLink = Text.readString(blockReplyStream);
    if (firstBadLink.length() != 0) {
      throw new IOException("Bad connect ack with firstBadLink " + firstBadLink);
    }

    blockStream = out;
    return true;  // success
  } catch (IOException ie) {
    // ... (error handling truncated in the original excerpt)
  }
}
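One detail worth noticing in the excerpt is that the connect timeout grows with the pipeline length: 3000 * nodes.length + socketTimeout. A minimal sketch of that arithmetic, assuming the common default 60 s base socket timeout (the class and method names below are mine, not from DFSClient):

```java
public class PipelineTimeout {
    // Mirrors "3000 * nodes.length + socketTimeout" from createBlockOutputStream:
    // each datanode in the pipeline adds 3 s on top of the base socket timeout.
    static int connectTimeout(int pipelineLength, int socketTimeoutMillis) {
        return 3000 * pipelineLength + socketTimeoutMillis;
    }

    public static void main(String[] args) {
        // With the usual replication factor of 3 and a 60 s base timeout,
        // the client waits up to 69 s on the pipeline before giving up.
        System.out.println(connectTimeout(3, 60000)); // prints 69000
    }
}
```

So "Bad connect ack" is not a quick failure: the client has already connected to the first datanode and waited for the downstream pipeline before the ack came back reporting a bad link.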
The exception means the client never received a clean ack from the datanode pipeline. I resolved it in two ways:
1) '/etc/init.d/iptables stop' --> stopped the firewall
2) SELINUX=disabled in '/etc/selinux/config' --> disabled SELinux (this file takes effect after a reboot; setenforce 0 disables enforcement immediately)
In general, Hadoop ack errors of this kind mean a firewall on one of the nodes was never turned off.
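Before (or after) touching the firewall, it helps to confirm the symptom directly: try a plain TCP connect from the client machine to each datanode's data-transfer port 50010, the same thing NetUtils.connect does in the excerpt above. A small sketch; the class name and the local-listener demo are mine, not part of Hadoop:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class DataNodePortProbe {
    // Plain TCP connect with a timeout, like NetUtils.connect() in the
    // excerpt above; true means the port answered, false means the
    // connection was refused or timed out.
    static boolean canConnect(String host, int port, int timeoutMillis) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // Self-contained demo: probe a listener we open ourselves on an
    // ephemeral port, so the example runs anywhere.
    static boolean probeLocalListener() {
        try (ServerSocket listener = new ServerSocket(0)) {
            return canConnect("127.0.0.1", listener.getLocalPort(), 3000);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Against a real cluster you would probe each datanode, e.g.
        // canConnect("192.168.1.14", 50010, 3000); false there points at
        // a firewall rule blocking the data-transfer port.
        System.out.println("local listener reachable: " + probeLocalListener());
    }
}
```

If the probe fails for exactly the datanode named in firstBadLink, the firewall/SELinux fix above is almost certainly the right one.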
Reposted at: https://blog.51cto.com/gcjava/1426492