Original article: http://zh.hortonworks.com/blog/apache-hadoop-yarn-nodemanager/

The NodeManager (NM) is YARN’s per-node agent, and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers’ life-cycle management, monitoring resource usage (memory, CPU) of individual containers, tracking node health, managing logs, and running auxiliary services which may be exploited by different YARN applications.

NodeManager Components

  1. NodeStatusUpdater

On startup, this component registers with the RM and sends information about the resources available on the node. Subsequent NM-RM communication provides updates on container statuses – new containers running on the node, completed containers, etc.

In addition, the RM may signal the NodeStatusUpdater to kill already-running containers.
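The register-then-heartbeat protocol can be sketched as follows. This is a minimal illustration in Python, not the actual YARN API: the class names, method names, and the canned RM response are all hypothetical.

```python
# Minimal sketch of the NodeStatusUpdater protocol: register once with
# the RM, then repeatedly report container statuses; the RM piggybacks
# commands (e.g. containers to kill) on the heartbeat response.
# All names here are illustrative, not the real YARN classes.

class MockRM:
    """Stands in for the ResourceManager end of the protocol."""
    def register_node(self, node_id, memory_mb, vcores):
        return {"accepted": True}

    def node_heartbeat(self, node_id, container_statuses):
        # Canned response: pretend the RM wants container_01 stopped.
        return {"containers_to_kill": ["container_01"]}

class NodeStatusUpdater:
    def __init__(self, rm, node_id, memory_mb, vcores):
        self.rm, self.node_id = rm, node_id
        assert rm.register_node(node_id, memory_mb, vcores)["accepted"]

    def heartbeat(self, containers):
        statuses = [{"id": cid, "state": st} for cid, st in containers.items()]
        response = self.rm.node_heartbeat(self.node_id, statuses)
        return response["containers_to_kill"]

rm = MockRM()
updater = NodeStatusUpdater(rm, "node-1", memory_mb=8192, vcores=4)
to_kill = updater.heartbeat({"container_01": "RUNNING",
                             "container_02": "RUNNING"})
print(to_kill)  # ['container_01']
```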

  2. ContainerManager

This is the core of the NodeManager. It is composed of the following sub-components, each of which performs a subset of the functionality that is needed to manage containers running on the node.

    1. RPC server: ContainerManager accepts requests from Application Masters (AMs) to start new containers, or to stop running ones. It works with ContainerTokenSecretManager (described below) to authorize all requests. All the operations performed on containers running on this node are written to an audit-log which can be post-processed by security tools.
    2. ResourceLocalizationService: Responsible for securely downloading and organizing various file resources needed by containers. It tries its best to distribute the files across all the available disks. It also enforces access control restrictions of the downloaded files and puts appropriate usage limits on them.
    3. ContainersLauncher: Maintains a pool of threads to prepare and launch containers as quickly as possible. Also cleans up the containers’ processes when such a request is sent by the RM or the ApplicationMasters (AMs).
    4. AuxServices: The NM provides a framework for extending its functionality by configuring auxiliary services. This allows per-node custom services that specific frameworks may require, and still sandbox them from the rest of the NM. These services have to be configured before NM starts. Auxiliary services are notified when an application’s first container starts on the node, and when the application is considered to be complete.
    5. ContainersMonitor: After a container is launched, this component starts observing its resource utilization while the container is running. To enforce isolation and fair sharing of resources like memory, each container is allocated some amount of such a resource by the RM. The ContainersMonitor monitors each container’s usage continuously and if a container exceeds its allocation, it signals the container to be killed. This is done to prevent any runaway container from adversely affecting other well-behaved containers running on the same node.
    6. LogHandler: A pluggable component with the option of either keeping the containers’ logs on the local disks or zipping them together and uploading them onto a file-system.
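The enforcement decision made by the ContainersMonitor boils down to a comparison between each container's observed usage and its RM-granted allocation. A minimal sketch (all figures are made up):

```python
# Illustrative sketch of the ContainersMonitor check: compare each
# container's observed memory usage against its RM-granted allocation
# and flag over-limit containers for termination.

def find_over_limit(allocations_mb, usage_mb):
    """Return ids of containers whose usage exceeds their allocation."""
    return [cid for cid, limit in allocations_mb.items()
            if usage_mb.get(cid, 0) > limit]

allocations = {"container_01": 1024, "container_02": 2048}
usage       = {"container_01": 1536, "container_02": 1900}  # observed MB
print(find_over_limit(allocations, usage))  # ['container_01']
```

A real monitor runs this check periodically and then signals the flagged containers, so one runaway process cannot starve well-behaved neighbours.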
  3. ContainerExecutor

Interacts with the underlying operating system to securely place files and directories needed by containers and subsequently to launch and clean up processes corresponding to containers in a secure manner.
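One detail of secure launch and cleanup is starting the container's command in its own session, so that killing the process group later also reaps any children the container spawned. A sketch of that idea in Python (not the actual executor, which is native code and also handles user switching):

```python
# Sketch of one ContainerExecutor concern: launch the container's
# command in its own session (hence its own process group) so that a
# later cleanup can signal the whole group, not just the leader.
import os, signal, subprocess, tempfile

workdir = tempfile.mkdtemp(prefix="container_")   # container work dir
proc = subprocess.Popen(["sleep", "30"], cwd=workdir,
                        start_new_session=True)   # new session/group

# ... later, on a stop request: signal the entire process group.
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()
print(proc.returncode)  # -15: terminated by SIGTERM
```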

  4. NodeHealthCheckerService

Checks the health of the node by frequently running a configured script. It also monitors the health of the disks specifically, by creating temporary files on them every so often. Any change in the health of the system is reported to the NodeStatusUpdater (described above), which in turn passes the information on to the RM.
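The disk part of this check is simple to picture: try to create (and remove) a small temporary file on each monitored directory, and report directories that fail the write as unhealthy. A sketch:

```python
# Sketch of the NodeHealthCheckerService's disk probe: a directory is
# healthy if a small temporary file can be created and removed on it.
import os, tempfile

def check_disks(dirs):
    healthy = []
    for d in dirs:
        try:
            fd, path = tempfile.mkstemp(dir=d)
            os.close(fd)
            os.remove(path)
            healthy.append(d)
        except OSError:
            pass  # disk is full, read-only, or missing
    return healthy

print(check_disks([tempfile.gettempdir(), "/no/such/dir"]))
```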

  5. Security

    1. ApplicationACLsManager: The NM needs to gate user-facing APIs, like the container-logs display on the web UI, so that they are accessible only to authorized users. This component maintains the ACL lists per application and enforces them whenever such a request is received.
    2. ContainerTokenSecretManager: verifies various incoming requests to ensure that all the incoming operations are indeed properly authorized by the RM.
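The core idea behind container-token verification is that the RM signs the token's payload with a key shared with the NM, so the NM can recompute the signature and reject tampered requests. A sketch of that mechanism (the key, payload format, and function names are illustrative; real YARN tokens carry more fields and the key is rolled periodically):

```python
# Illustrative sketch of container-token verification: recompute the
# MAC over the token payload with the RM/NM shared secret and compare.
import hashlib, hmac

SECRET = b"shared-rm-nm-key"   # hypothetical shared key

def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(payload), signature)

token = b"container_01|memory=1024|vcores=1"
good = sign(token)
print(verify(token, good))                        # True
print(verify(b"container_01|memory=9999", good))  # False: tampered
```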
  6. WebServer

Exposes the list of applications and containers running on the node at a given point in time, node-health-related information, and the logs produced by the containers.

Spotlight on Key Functionality

  1. Container Launch

To facilitate container launch, the NM expects to receive detailed information about a container’s runtime as part of the container-specifications. This includes the container’s command line, environment variables, a list of (file) resources required by the container and any security tokens.
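The container-specification can be pictured as a small record. The field names below are simplified stand-ins that loosely mirror YARN's ContainerLaunchContext, with made-up paths:

```python
# Illustrative shape of a container launch request, loosely mirroring
# YARN's ContainerLaunchContext (field names and paths simplified).
launch_context = {
    "commands": ["java -Xmx512m MyTask 1>stdout 2>stderr"],
    "environment": {"CLASSPATH": "./*", "APP_ID": "application_001"},
    "local_resources": {          # files the NM must localize first
        "job.jar": {"url": "hdfs:///apps/app_001/job.jar", "type": "FILE"},
        "job.xml": {"url": "hdfs:///apps/app_001/job.xml", "type": "FILE"},
    },
    "tokens": b"<serialized-security-tokens>",
}
print(sorted(launch_context))
```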

On receiving a container-launch request, the NM first verifies it (if security is enabled): authorizing the user, validating the resource assignment, and so on. The NM then performs the following steps to launch the container.

    1. A local copy of all the specified resources is created (Distributed Cache).
    2. Isolated work directories are created for the container, and the local resources are made available in these directories.
    3. The launch environment and command line are used to start the actual container.
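The three steps above can be sketched end-to-end in Python. This is only an illustration: a real NM downloads resources from a remote FS such as HDFS and execs the container's own command, whereas here the download is faked and a trivial command stands in.

```python
# Illustrative walk-through of the three launch steps: localize
# resources, set up an isolated work directory, then start the command
# with the prepared environment. Paths and the command are made up.
import os, shutil, subprocess, tempfile

cache_dir = tempfile.mkdtemp(prefix="filecache_")   # step 1: local copies
src = os.path.join(cache_dir, "job.jar")
open(src, "wb").close()                             # stand-in for a download

workdir = tempfile.mkdtemp(prefix="container_")     # step 2: isolated workdir
shutil.copy(src, os.path.join(workdir, "job.jar"))  # resource made available

env = dict(os.environ, APP_ID="application_001")    # step 3: env + command
result = subprocess.run(["ls", "job.jar"], cwd=workdir, env=env,
                        capture_output=True, text=True)
print(result.stdout.strip())  # job.jar
```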
  2. Log Aggregation

Handling user logs has been one of the big pain points for Hadoop installations in the past. Instead of truncating user logs and leaving them on individual nodes, as the TaskTracker does, the NM addresses log management by providing the option to move these logs securely onto a file-system (FS), e.g. HDFS, after the application completes.

Logs for all the containers belonging to a single Application and that ran on this NM are aggregated and written out to a single (possibly compressed) log file at a configured location in the FS. Users have access to these logs via YARN command line tools, the web-UI or directly from the FS.
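The aggregation step amounts to concatenating every container's log streams from this node into one (here compressed) file. A sketch, with made-up log contents:

```python
# Sketch of per-application log aggregation: concatenate every
# container's stdout/stderr from this node into a single compressed
# stream, as the NM does when uploading to the FS. Contents made up.
import gzip, io

container_logs = {
    "container_01": {"stdout": "map task ok\n", "stderr": ""},
    "container_02": {"stdout": "reduce task ok\n", "stderr": "warn: spill\n"},
}

buf = io.BytesIO()
with gzip.open(buf, "wt") as out:
    for cid, logs in sorted(container_logs.items()):
        for name, text in sorted(logs.items()):
            out.write(f"==== {cid}/{name} ====\n{text}")

aggregated = gzip.decompress(buf.getvalue()).decode()
headers = [l for l in aggregated.splitlines() if l.startswith("====")]
print(len(headers))  # 4: one header per container log stream
```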

  3. How the MapReduce shuffle takes advantage of the NM's auxiliary services

The shuffle functionality required to run a MapReduce (MR) application is implemented as an auxiliary service. This service starts up a Netty web server and knows how to handle MR-specific shuffle requests from Reduce tasks. The MR AM specifies the service id for the shuffle service, along with security tokens that may be required. The NM provides the AM with the port on which the shuffle service is running, which is passed on to the Reduce tasks.
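The auxiliary-service handshake can be sketched as: a service registers under an id, is notified about application starts and stops, and hands back opaque metadata (here, the shuffle port) that the NM forwards to the AM. The class and method names below are illustrative, not the real AuxServices API; 13562 is used as a placeholder port.

```python
# Sketch of the auxiliary-service lifecycle: register under a service
# id, receive application start/stop notifications, and return opaque
# metadata (the shuffle port) for the NM to forward to the AM.
class ShuffleService:
    service_id = "mapreduce_shuffle"

    def __init__(self, port):
        self.port = port
        self.apps = set()

    def on_application_start(self, app_id, tokens):
        self.apps.add(app_id)        # accept shuffle requests for this app

    def on_application_stop(self, app_id):
        self.apps.discard(app_id)    # app done: clean up its shuffle data

    def metadata(self):
        return {"port": self.port}   # returned to the AM via the NM

aux_services = {}                    # NM's registry of configured services
svc = ShuffleService(port=13562)
aux_services[ShuffleService.service_id] = svc
svc.on_application_start("application_001", tokens=b"...")
print(aux_services["mapreduce_shuffle"].metadata())  # {'port': 13562}
```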

Conclusion

In YARN, the NodeManager is primarily limited to managing abstract containers, i.e. only the processes corresponding to a container, and does not concern itself with per-application state management such as MapReduce tasks. It also does away with the notion of named slots, like map and reduce slots. Because of this clear separation of responsibilities, coupled with the modular architecture described above, the NM can scale much more easily and its code is much more maintainable.

Reposted from: https://www.cnblogs.com/davidwang456/p/4895000.html
