1.  Today while writing a MapReduce job, the map phase finished but the reducer did not run for a long time afterward. No error was reported; the job was simply marked as failed.

Solution: check the JobTracker log (hadoop-hadoop-jobtracker-steven.log) for error details. It showed:

2014-05-09 17:42:46,811 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /ies/result/_logs/history/job_201405091001_0005_1399626116239_hadoop_IES%5FResult%5F2014-05-09+17%3A01%3A55
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /ies/result/_logs/history/job_201405091001_0005_1399626116239_hadoop_IES%5FResult%5F2014-05-09+17%3A01%3A55 File does not exist. Holder DFSClient_NONMAPREDUCE_1956983432_28 does not have any open files
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1899)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
	at org.apache.hadoop.ipc.Client.call(Client.java:1113)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
	at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
	at com.sun.proxy.$Proxy7.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
2014-05-09 17:42:46,816 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0005_m_000001_0'
2014-05-09 17:42:46,816 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0005_m_000002_0'
2014-05-09 17:42:46,824 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/hadoop/hadoop1.1.2/hadoop-1.2.1/logs/history/job_201405091001_0005_conf.xml to file:/home/hadoop/hadoop1.1.2/hadoop-1.2.1/logs/history/done/version-1/localhost_1399600913639_/2014/05/09/000000
2014-05-09 17:46:02,621 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201405091001_0006_m_000000_2: Task attempt_201405091001_0006_m_000000_2 failed to report status for 600 seconds. Killing!
2014-05-09 17:46:02,621 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0006_m_000000_2'
2014-05-09 17:46:02,622 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201405091001_0006_m_000000_2' to tip task_201405091001_0006_m_000000, for tracker 'tracker_steven:localhost/127.0.0.1:37363'
2014-05-09 17:46:05,041 INFO org.apache.hadoop.mapred.JobInProgress: Choosing a failed task task_201405091001_0006_m_000000
2014-05-09 17:46:05,042 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201405091001_0006_m_000000_3' to tip task_201405091001_0006_m_000000, for tracker 'tracker_steven:localhost/127.0.0.1:37363'
2014-05-09 17:46:05,042 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201405091001_0006_m_000000_2'
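Incidentally, the "failed to report status for 600 seconds. Killing!" lines correspond to Hadoop 1.x's task liveness timeout, mapred.task.timeout (default 600000 ms). A minimal driver sketch of where that knob lives (the job name here is made up); raising it only postpones the kill and is no substitute for fixing the actual close() problem discussed below:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DriverTimeoutSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "600 seconds" in the log = mapred.task.timeout, default 600000 ms.
        // Raising it (here to 20 minutes) only delays the kill.
        conf.setLong("mapred.task.timeout", 20 * 60 * 1000L);
        Job job = new Job(conf, "ies-result"); // Hadoop 1.x-era constructor
        // ... configure mapper, reducer, input/output paths as usual ...
    }
}
```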

Cause of the error: a file failed to close, so this is a FileSystem issue. But when I went through my MapReduce code, I couldn't find any place where a file was closed. What was going on?!

I then found an article online. Its gist: if the file has already been closed inside map, then when cleanup is called after the map finishes and tries to close the file again, the program will fail.

Generally, you should not call fs.close() when you do a FileSystem.get(...). FileSystem.get(...) won't actually open a "new" FileSystem object. When you do a close() on that FileSystem, you will close it for any upstream process as well. For example, if you close the FileSystem during a mapper, your MapReduce driver will fail when it again tries to close the FileSystem on cleanup.
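To make the pitfall concrete, here is a minimal sketch (hypothetical class and field names, not the original job) of what the quoted advice warns against. FileSystem.get(...) returns a JVM-wide cached instance shared with the framework, so closing it in a task closes it for everyone:

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper illustrating the shared-cache pitfall.
public class FsClosePitfallMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // get() returns the JVM-wide cached FileSystem for this URI + user.
        FileSystem fs = FileSystem.get(context.getConfiguration());
        // ... use fs to read side data ...

        // BAD: closing the cached instance also closes it for the framework,
        // which still needs it later (e.g. to write job history files),
        // leading to errors like the LeaseExpiredException above.
        // fs.close();
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // If a closable handle is genuinely needed, take a private, uncached
        // instance instead (verify FileSystem.newInstance exists on your build):
        FileSystem privateFs = FileSystem.newInstance(context.getConfiguration());
        try {
            // ... final writes with privateFs ...
        } finally {
            privateFs.close(); // safe: does not affect the shared cached instance
        }
    }
}
```

Setting fs.hdfs.impl.disable.cache to true achieves a similar effect globally, making every get() return a fresh instance, at the cost of extra NameNode connections; both options are worth checking against your Hadoop version.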

But the problem is that I never closed the FileSystem in the mapper's cleanup! Looking more closely at the code, I noticed that my setup() called super.setup(context). Could that be where the file stream got closed? I deleted that line and the program ran normally... which still puzzles me.
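One fact worth adding for anyone equally puzzled: org.apache.hadoop.mapreduce.Mapper.setup() has an empty default body, so super.setup(context) is a no-op when the direct superclass really is Mapper. It could only close a stream if the mapper actually extended a custom base class whose setup() manages resources. A minimal reconstruction of the change (class and field names are made up):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class IesResultMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    private FileSystem fs; // hypothetical field

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // super.setup(context); // <- the deleted line; the job ran after removal.
        // Caveat: Mapper.setup() is empty by default, so this call should be a
        // no-op unless the real superclass was a custom base class.
        fs = FileSystem.get(context.getConfiguration());
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // ... job logic; note fs is never closed here or in cleanup() ...
    }
}
```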

Reference: http://stackoverflow.com/questions/20492278/hdfs-filesystem-close-exception
