Symptom

Running Hadoop 2.2.0's built-in wordcount example failed after the initial pseudo-distributed setup.

Cause


I was working from materials I had downloaded back in graduate school and followed the fully distributed setup step by step, assuming that simply running the DataNode on the same machine as the NameNode and setting dfs.replication to 1 in hdfs-site.xml would be enough to get things running.
I searched Baidu through all sorts of Hadoop 2.2.0 distributed-setup guides and tried other approaches, such as editing mapred-queues.xml, before finally discovering that mapred-site.xml is the file that has to be changed for the job to run successfully.
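The post does not show the exact edit; as a hedged sketch, the hdfs-site.xml change described above and the mapred-site.xml property that most 2.x setup guides touch look like this (the mapred-site.xml value is an assumption, not taken from the post):

```xml
<!-- hdfs-site.xml: single-copy replication for a pseudo-distributed node,
     as described above -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- mapred-site.xml: the author's exact change is not shown; the property
     most setup guides modify is mapreduce.framework.name. Note that the
     run log below shows LocalJobRunner, i.e. this job executed in local mode. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value> <!-- value here is an assumption -->
  </property>
</configuration>
```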

Solution

Reference: the CSDN post "Hadoop2.2.0版本多节点集群安装及测试" (Hadoop 2.2.0 multi-node cluster installation and testing).

wordcount now runs successfully.
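The launch command itself is not shown; a plausible reconstruction for Hadoop 2.2.0, with the examples-jar path from the standard 2.2.0 layout (an assumption) and the HDFS paths taken from the log below:

```
# Run the bundled wordcount example against the paths seen in the log
cd /usr/local/hadoop/hadoop-2.2.0
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
    wordcount /data/input /output/wordcount

# Inspect the result
bin/hadoop fs -cat /output/wordcount/part-r-00000 | head
```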

[Detailed run log]
17/07/26 22:23:08 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
17/07/26 22:23:08 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
17/07/26 22:23:09 INFO input.FileInputFormat: Total input paths to process : 1
17/07/26 22:23:09 INFO mapreduce.JobSubmitter: number of splits:1
17/07/26 22:23:09 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
17/07/26 22:23:09 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
17/07/26 22:23:09 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
17/07/26 22:23:09 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
17/07/26 22:23:09 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
17/07/26 22:23:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1749311414_0001
17/07/26 22:23:09 WARN conf.Configuration: file:/usr/local/hadoop/hadoop-2.2.0/tmp/mapred/staging/root1749311414/.staging/job_local1749311414_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
17/07/26 22:23:09 WARN conf.Configuration: file:/usr/local/hadoop/hadoop-2.2.0/tmp/mapred/staging/root1749311414/.staging/job_local1749311414_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
17/07/26 22:23:10 WARN conf.Configuration: file:/usr/local/hadoop/hadoop-2.2.0/tmp/mapred/local/localRunner/root/job_local1749311414_0001/job_local1749311414_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
17/07/26 22:23:10 WARN conf.Configuration: file:/usr/local/hadoop/hadoop-2.2.0/tmp/mapred/local/localRunner/root/job_local1749311414_0001/job_local1749311414_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
17/07/26 22:23:10 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
17/07/26 22:23:10 INFO mapreduce.Job: Running job: job_local1749311414_0001
17/07/26 22:23:10 INFO mapred.LocalJobRunner: OutputCommitter set in config null
17/07/26 22:23:10 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
17/07/26 22:23:10 INFO mapred.LocalJobRunner: Waiting for map tasks
17/07/26 22:23:10 INFO mapred.LocalJobRunner: Starting task: attempt_local1749311414_0001_m_000000_0
17/07/26 22:23:10 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
17/07/26 22:23:10 INFO mapred.MapTask: Processing split: hdfs://Master:9000/data/input/wordcountTest:0+135
17/07/26 22:23:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/07/26 22:23:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/07/26 22:23:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/07/26 22:23:10 INFO mapred.MapTask: soft limit at 83886080
17/07/26 22:23:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/07/26 22:23:10 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/07/26 22:23:10 INFO mapred.LocalJobRunner:
17/07/26 22:23:10 INFO mapred.MapTask: Starting flush of map output
17/07/26 22:23:10 INFO mapred.MapTask: Spilling map output
17/07/26 22:23:10 INFO mapred.MapTask: bufstart = 0; bufend = 178; bufvoid = 104857600
17/07/26 22:23:10 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214340(104857360); length = 57/6553600
17/07/26 22:23:10 INFO mapred.MapTask: Finished spill 0
17/07/26 22:23:10 INFO mapred.Task: Task:attempt_local1749311414_0001_m_000000_0 is done. And is in the process of committing
17/07/26 22:23:10 INFO mapred.LocalJobRunner: map
17/07/26 22:23:10 INFO mapred.Task: Task 'attempt_local1749311414_0001_m_000000_0' done.
17/07/26 22:23:10 INFO mapred.LocalJobRunner: Finishing task: attempt_local1749311414_0001_m_000000_0
17/07/26 22:23:10 INFO mapred.LocalJobRunner: Map task executor complete.
17/07/26 22:23:10 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
17/07/26 22:23:10 INFO mapred.Merger: Merging 1 sorted segments
17/07/26 22:23:10 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 129 bytes
17/07/26 22:23:10 INFO mapred.LocalJobRunner:
17/07/26 22:23:10 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
17/07/26 22:23:11 INFO mapred.Task: Task:attempt_local1749311414_0001_r_000000_0 is done. And is in the process of committing
17/07/26 22:23:11 INFO mapred.LocalJobRunner:
17/07/26 22:23:11 INFO mapred.Task: Task attempt_local1749311414_0001_r_000000_0 is allowed to commit now
17/07/26 22:23:11 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1749311414_0001_r_000000_0' to hdfs://Master:9000/output/wordcount/_temporary/0/task_local1749311414_0001_r_000000
17/07/26 22:23:11 INFO mapred.LocalJobRunner: reduce > reduce
17/07/26 22:23:11 INFO mapred.Task: Task 'attempt_local1749311414_0001_r_000000_0' done.
17/07/26 22:23:11 INFO mapreduce.Job: Job job_local1749311414_0001 running in uber mode : false
17/07/26 22:23:11 INFO mapreduce.Job: map 100% reduce 100%
17/07/26 22:23:11 INFO mapreduce.Job: Job job_local1749311414_0001 completed successfully
17/07/26 22:23:11 INFO mapreduce.Job: Counters: 32
	File System Counters
		FILE: Number of bytes read=540918
		FILE: Number of bytes written=935238
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=270
		HDFS: Number of bytes written=98
		HDFS: Number of read operations=15
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=4
	Map-Reduce Framework
		Map input records=11
		Map output records=15
		Map output bytes=178
		Map output materialized bytes=144
		Input split bytes=108
		Combine input records=15
		Combine output records=10
		Reduce input groups=10
		Reduce shuffle bytes=0
		Reduce input records=10
		Reduce output records=10
		Spilled Records=20
		Shuffled Maps =0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=436731904
	File Input Format Counters
		Bytes Read=135
	File Output Format Counters
		Bytes Written=98
root@Master:/usr/local/hadoop/hadoop-2.2.0/sbin# hadoop fs -cat /output/wordcount/part-r-00000 |head
chengzhi 1
feiyang 4
taoxin 1
wanglin 2
wangliu 1
wei 1
zhangbo 2
zhanglin 1
zhao 1
zhaoqingwei 1
root@Master:/usr/local/hadoop/hadoop-2.2.0/sbin# vim mapred-site.xml
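The counters above show classic word-count behavior: 15 map output records collapse to 10 unique words after the combine step. A minimal Python sketch of the same map → combine/reduce logic (the sample input is hypothetical; the actual HDFS input file is not shown):

```python
from collections import Counter

def wordcount(lines):
    """Mimic WordCount's pipeline: the map phase emits (word, 1) per token;
    the combine/reduce phases sum the counts per unique word."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())  # merge per-token counts as we go
    return dict(counts)

# Hypothetical sample input
sample = ["feiyang wanglin feiyang", "zhangbo feiyang"]
print(wordcount(sample))  # {'feiyang': 3, 'wanglin': 1, 'zhangbo': 1}
```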

Hadoop小兵笔记【五】hadoop2.2.0伪分布式环境搭建疑难-第一个用例wordcount失败