An Example of a YARN OOM Problem
![](https://s3.51cto.com/wyfs02/M01/26/03/wKiom1No-TCyEeI5AAOLNlmRctQ989.jpg)
```
2014-05-06 16:00:00,632 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode: Assigned container container_1399267192386_43455_01_000037 of capacity <memory:1536, vCores:1> on host xxxx:44614, which currently has 4 containers, <memory:6144, vCores:4> used and <memory:79872, vCores:42> available
```
```
2014-05-05 10:14:47,001 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1399203487215_21532_01_000035 by user hdfs
2014-05-05 10:14:47,001 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hdfs IP=10.201.203.111 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1399203487215_21532 CONTAINERID=container_1399203487215_21532_01_000035
2014-05-05 10:14:47,001 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1399203487215_21532_01_000035 to application application_1399203487215_21532
2014-05-05 10:14:47,055 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1399203487215_21532_01_000035 transitioned from NEW to LOCALIZING
2014-05-05 10:14:47,058 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1399203487215_21532_01_000035
2014-05-05 10:14:47,060 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /home/vipshop/hard_disk/10/yarn/local/nmPrivate/container_1399203487215_21532_01_000035.tokens. Credentials list:
2014-05-05 10:14:47,412 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1399203487215_21532_01_000035 transitioned from LOCALIZING to LOCALIZED
2014-05-05 10:14:47,454 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1399203487215_21532_01_000035 transitioned from LOCALIZED to RUNNING
2014-05-05 10:14:47,493 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /home/vipshop/hard_disk/6/yarn/local/usercache/hdfs/appcache/application_1399203487215_21532/container_1399203487215_21532_01_000035/default_container_executor.sh]
2014-05-05 10:14:48,827 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /home/vipshop/hard_disk/10/yarn/local/nmPrivate/container_1399203487215_21532_01_000035.tokens to /home/vipshop/hard_disk/11/yarn/local/usercache/hdfs/appcache/application_1399203487215_21532/container_1399203487215_21532_01_000035.tokens
2014-05-05 10:14:49,169 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1399203487215_21532_01_000035
2014-05-05 10:14:49,305 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 21209 for container-id container_1399203487215_21532_01_000035: 66.7 MB of 1.5 GB physical memory used; 2.1 GB of 3.1 GB virtual memory used
2014-05-05 10:14:53,063 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 21209 for container-id container_1399203487215_21532_01_000035: 984.1 MB of 1.5 GB physical memory used; 2.1 GB of 3.1 GB virtual memory used
2014-05-05 10:14:56,379 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 21209 for container-id container_1399203487215_21532_01_000035: 984.5 MB of 1.5 GB physical memory used; 2.1 GB of 3.1 GB virtual memory used
.......
2014-05-05 10:19:26,823 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 21209 for container-id container_1399203487215_21532_01_000035: 1.1 GB of 1.5 GB physical memory used; 2.1 GB of 3.1 GB virtual memory used
2014-05-05 10:19:27,459 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hdfs IP=10.201.203.111 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1399203487215_21532 CONTAINERID=container_1399203487215_21532_01_000035
2014-05-05 10:19:27,459 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1399203487215_21532_01_000035 transitioned from RUNNING to KILLING
2014-05-05 10:19:27,459 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1399203487215_21532_01_000035
2014-05-05 10:19:27,800 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1399203487215_21532_01_000035 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
```
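The NodeManager log shows physical memory climbing from 66.7 MB to 1.1 GB against a 1.5 GB container limit before the stop request arrives. The log alone does not show who initiated the kill; a quick back-of-the-envelope check, using the container capacity from the logs (1536 MB) and the JVM options from the job configuration below (-Xmx1280m, -XX:MaxPermSize=128m), illustrates how little non-heap headroom the container had. This is only a sketch of the arithmetic, not a reproduction of the original diagnosis:

```python
# Figures taken from the NM log and the mapred-site JVM options in this post.
CONTAINER_LIMIT_MB = 1536   # <memory:1536, vCores:1> from the scheduler log
HEAP_MB = 1280              # -Xmx1280m / -Xms1280m
PERMGEN_MB = 128            # -XX:MaxPermSize=128m

# Everything else (thread stacks, code cache, direct buffers, JVM internals)
# has to fit in whatever is left of the container limit.
jvm_budget = HEAP_MB + PERMGEN_MB
headroom = CONTAINER_LIMIT_MB - jvm_budget

print(f"heap + permgen = {jvm_budget} MB; "
      f"off-heap headroom inside the container = {headroom} MB")
```

With only about 128 MB left for everything outside the heap and permgen, the process tree can plausibly brush against the 1.5 GB physical limit, and a task working set near 1280 MB can also trigger an OutOfMemoryError inside the JVM itself.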
```xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1280m -Xms1280m -Xmn256m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1280m -Xms1280m -Xmn256m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1280m -Xms1280m -Xmn256m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m</value>
  <final>true</final>
</property>
```
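Given these options, a common way to relieve the pressure is to widen the gap between the JVM's maximum footprint and the container size. The fragment below is a hypothetical adjustment (the original post does not state the final fix): raising `mapreduce.map.memory.mb` so heap plus permgen plus native overhead fits comfortably inside the container.

```xml
<!-- Hypothetical adjustment, not the original author's fix: request a
     2048 MB map container so the 1280 MB heap + 128 MB permgen leave
     roughly 640 MB for stacks, code cache, and native allocations. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1280m -Xms1280m -Xmn256m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m</value>
</property>
```

The alternative direction, shrinking -Xmx while keeping the 1536 MB container, trades kill risk for more frequent GC or heap OOMs in the task itself.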
Reposted from: https://blog.51cto.com/caiguangguang/1407424