dfs.client.block.write.replace-datanode-on-failure
Problem Description
When appending content to a file in HDFS via the HDFS API (from IDEA on a Windows machine, against HDFS on an Aliyun server), the following error occurs: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.25.55.228:9866,DS-6ba619cf-5189-45c7-8dbc-56afa381ab0b,DISK]], original=[DatanodeInfoWithStorage[172.25.55.228:9866,DS-6ba619cf-5189-45c7-8dbc-56afa381ab0b,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:984)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1131)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
Solution
Under the DEFAULT policy, when a datanode in the write pipeline fails during an append, the client tries to replace it with another good datanode. On a small cluster (here, effectively a single datanode) there is no replacement available, so the append fails with the exception above. Setting the policy to NEVER tells the client to continue writing on the remaining pipeline instead of looking for a replacement. Add the following to the client code in IDEA:
configuration.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
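Putting the workaround in context, a minimal append sketch might look like the following. The NameNode URI, user name, and file path are hypothetical placeholders; substitute your own cluster's values. This requires a running HDFS cluster, so it is a sketch rather than a self-verifying example.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Workaround: never try to replace a failed datanode in the pipeline.
        configuration.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Hypothetical NameNode URI and HDFS user; replace with your own.
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), configuration, "hadoop");

        // Append to an existing file (the file must already exist for append()).
        try (FSDataOutputStream out = fs.append(new Path("/tmp/demo.txt"))) {
            out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```

Note that NEVER is only appropriate on small clusters; on clusters with three or more datanodes, disabling replacement sacrifices durability during pipeline recovery, so the DEFAULT policy is usually left in place there.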