一. Preface

ContainerLaunch implements the Callable interface and is executed by the ExecutorService thread pool inside the ContainersLauncher class.
The ContainerLaunch component creates the working directory for the Container, builds the launch script, and asks the ContainerExecutor to run that script, which moves the Container into the RUNNING state.
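To make this execution model concrete, here is a minimal sketch of handing a Callable such as ContainerLaunch to an ExecutorService. The pool construction and class below are simplified assumptions for illustration, not the actual ContainersLauncher code.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LauncherPoolSketch {
    public static void main(String[] args) throws Exception {
        // ContainersLauncher keeps a thread pool and submits one
        // ContainerLaunch (a Callable<Integer>) per container launch event.
        ExecutorService containerLauncher = Executors.newCachedThreadPool();

        Callable<Integer> containerLaunch = () -> {
            // In the real ContainerLaunch#call(): build dirs, write the launch
            // script and tokens, then run ContainerExecutor#launchContainer.
            return 0; // exit code of the container process
        };

        Future<Integer> exitCode = containerLauncher.submit(containerLaunch);
        System.out.println("container exit code = " + exitCode.get());
        containerLauncher.shutdown();
    }
}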

  • Test command used:
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 1g \
    --executor-memory 1g \
    --executor-cores 1 \
    --queue default \
    examples/jars/spark-examples*.jar \
    10
  • yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>104857600</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>16</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/opt/tools/hadoop-3.2.1/logs/userlogs</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/opt/tools/hadoop-3.2.1/local-dirs</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-container-debug-info.enabled</name>
    <value>true</value>
  </property>
</configuration>

二. Fields

// Pre-launch log name prefix
private static final String CONTAINER_PRE_LAUNCH_PREFIX = "prelaunch";
// File name of the normal (stdout) pre-launch log
public static final String CONTAINER_PRE_LAUNCH_STDOUT = CONTAINER_PRE_LAUNCH_PREFIX + ".out";
// File name of the error (stderr) pre-launch log
public static final String CONTAINER_PRE_LAUNCH_STDERR = CONTAINER_PRE_LAUNCH_PREFIX + ".err";
// Name of the container launch script: launch_container(.sh)
public static final String CONTAINER_SCRIPT = Shell.appendScriptExtension("launch_container");
// Name of the token file
public static final String FINAL_CONTAINER_TOKENS_FILE = "container_tokens";
// sysfs directory
public static final String SYSFS_DIR = "sysfs";
// pid file name format
private static final String PID_FILE_NAME_FMT = "%s.pid";
// Suffix of the process exit-code file
static final String EXIT_CODE_FILE_SUFFIX = ".exitcode";
// Dispatcher
protected final Dispatcher dispatcher;
// ContainerExecutor implementation: DefaultContainerExecutor or LinuxContainerExecutor
protected final ContainerExecutor exec;
// Application information
protected final Application app;
// Container information
protected final Container container;
// Configuration
private final Configuration conf;
// NM context
private final Context context;
// ContainerManager instance
private final ContainerManagerImpl containerManager;
// Atomic flag: whether the Container has already been launched
protected AtomicBoolean containerAlreadyLaunched = new AtomicBoolean(false);
// Atomic flag: whether the Container should be paused
protected AtomicBoolean shouldPauseContainer = new AtomicBoolean(false);
// Atomic flag: whether the Container has completed
protected AtomicBoolean completed = new AtomicBoolean(false);
// Whether the container was killed before it started
private volatile boolean killedBeforeStart = false;
// Maximum wait time for a kill: yarn.nodemanager.process-kill-wait.ms (default 5000)
private long maxKillWaitTime = 2000;
// Path of the pid file
protected Path pidFilePath = null;
// Local dirs handler service
protected final LocalDirsHandlerService dirsHandler;
// Lock guarding the launch
private final Lock launchLock = new ReentrantLock();

三. Breakdown of the call Method

As noted above, ContainerLaunch implements the Callable interface and is executed by the ExecutorService thread pool in ContainersLauncher, so the core logic lives in its call method.

The first part of the call method fetches the container's basic information.

    final ContainerLaunchContext launchContext = container.getLaunchContext();
    // Get the container ID: container_1611680479488_0001_01_000001
    ContainerId containerID = container.getContainerId();
    String containerIdStr = containerID.toString();
    // Get the commands
    final List<String> command = launchContext.getCommands();

3.1. Get the localized resource files

    // Get the localized resource files, e.g.:
    // {Path@9096} "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/usercache/henghe/filecache/62/logging-interceptor-3.12.0.jar" -> {ArrayList@9097}  size = 1
    Map<Path, List<String>> localResources = getLocalizedResources();
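The returned map is keyed by the localized file on local disk, and each value is the list of symlink names the container expects in its working directory; section 四 shows how these pairs become ln -sf lines in launch_container.sh. The following is a hedged, illustrative sketch of that mapping only (the path and link name are taken from the script in section 七; this is not NodeManager code):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.fs.Path;

public class LocalResourceLinksSketch {
    public static void main(String[] args) {
        // Localized file on disk -> symlink names expected in the container work dir
        Map<Path, List<String>> localResources = new HashMap<>();
        localResources.put(
            new Path("/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache/35/spark-examples_2.11-2.4.5.jar"),
            Arrays.asList("__app__.jar"));

        // For every (target, linkName) pair the launch script emits an "ln -sf" line.
        for (Map.Entry<Path, List<String>> e : localResources.entrySet()) {
            for (String link : e.getValue()) {
                System.out.println("ln -sf -- \"" + e.getKey() + "\" \"" + link + "\"");
            }
        }
    }
}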

3.2. Get the user (henghe)

    // Get the user: henghe
    final String user = container.getUser();

3.3. Expand variables and build the new command list

// **************** Variable expansion: replace placeholders and build the new commands ****************
// /// Variable expansion
// Before the container script gets written out.
List<String> newCmds = new ArrayList<String>(command.size());
// Get the appId: application_1611680479488_0001
String appIdStr = app.getAppId().toString();
// Log directory relative to the container: application_1611680479488_0001/container_1611680479488_0001_01_000001
String relativeContainerLogDir = ContainerLaunch.getRelativeContainerLogDir(appIdStr, containerIdStr);
// Container log directory:
// /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611680479488_0001/container_1611680479488_0001_01_000001
containerLogDir = dirsHandler.getLogPathForWrite(relativeContainerLogDir, false);
// Record the container log directory
recordContainerLogDir(containerID, containerLogDir.toString());
for (String str : command) {
  // TODO: Should we instead work via symlinks without this grammar?
  newCmds.add(expandEnvironment(str, containerLogDir));
}
// Set the expanded commands
launchContext.setCommands(newCmds);
// Get the environment variables, e.g.:
//    "SPARK_YARN_STAGING_DIR" -> "hdfs://localhost:8020/user/henghe/.sparkStaging/application_1611680479488_0001"
//    "APPLICATION_WEB_PROXY_BASE" -> "/proxy/application_1611680479488_0001"
//    "SPARK_USER" -> "henghe"
//    "CLASSPATH" -> "$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$PWD/__spark_conf__/__hadoop_conf__"
//    "PYTHONHASHSEED" -> "0"
//    "APP_SUBMIT_TIME_ENV" -> "1611680611847"
Map<String, String> environment = expandAllEnvironmentVars(launchContext, containerLogDir);
// /// End of variable expansion
// **************** End of variable expansion ****************
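As an illustration of what this expansion step does, here is a simplified stand-in for expandEnvironment (not the actual implementation): it only shows the idea of replacing the <LOG_DIR> placeholder that submitters put into their commands with the concrete container log directory, which is why the final script in section 七 carries -Dspark.yarn.app.container.log.dir with a real path.

import org.apache.hadoop.fs.Path;

public class ExpandEnvSketch {
    // Simplified stand-in for ContainerLaunch#expandEnvironment:
    // replace the "<LOG_DIR>" placeholder with the real container log dir.
    static String expand(String var, Path containerLogDir) {
        return var.replace("<LOG_DIR>", containerLogDir.toString());
    }

    public static void main(String[] args) {
        Path logDir = new Path(
            "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611680479488_0001/container_1611680479488_0001_01_000001");
        String cmd = "-Dspark.yarn.app.container.log.dir=<LOG_DIR>";
        // Prints the -D option with the placeholder resolved to the concrete directory
        System.out.println(expand(cmd, logDir));
    }
}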

3.4. Build the paths of the various output files

// Use this to track variables that are added to the environment by the NM.
LinkedHashSet<String> nmEnvVars = new LinkedHashSet<String>();
// Get the local file system context.
// defaultFS : LocalFs@9710
// workingDir : file:/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
FileContext lfs = FileContext.getLocalFSFileContext();
// Path of the private container launch script [nmPrivateContainerScriptPath]
// /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh
Path nmPrivateContainerScriptPath = dirsHandler.getLocalPathForWrite(
    getContainerPrivateDir(appIdStr, containerIdStr) + Path.SEPARATOR + CONTAINER_SCRIPT);
// Path of the private tokens file [nmPrivateTokensPath]
// /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_1611681788558_0001_01_000001.tokens
Path nmPrivateTokensPath = dirsHandler.getLocalPathForWrite(
    getContainerPrivateDir(appIdStr, containerIdStr) + Path.SEPARATOR
        + String.format(ContainerLocalizer.TOKEN_FILE_NAME_FMT, containerIdStr));
// Classpath jar directory [nmPrivateClasspathJarDir]
// /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001
Path nmPrivateClasspathJarDir = dirsHandler.getLocalPathForWrite(
    getContainerPrivateDir(appIdStr, containerIdStr));
// Select the working directory for the container [containerWorkDir]
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001
Path containerWorkDir = deriveContainerWorkDir();
// Record the container working directory
recordContainerWorkDir(containerID, containerWorkDir.toString());
// Relative path of the pid file [pidFileSubpath]
// nmPrivate/application_1611680479488_0001/container_1611680479488_0001_01_000001/container_1611680479488_0001_01_000001.pid
String pidFileSubpath = getPidFileSubpath(appIdStr, containerIdStr);
// Absolute path of the pid file [pidFilePath]
// The pid file should be in the nm private dir so that it is not accessible by users
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/nmPrivate/application_1611680479488_0001/container_1611680479488_0001_01_000001/container_1611680479488_0001_01_000001.pid
pidFilePath = dirsHandler.getLocalPathForWrite(pidFileSubpath);
// Local dirs [localDirs]
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
List<String> localDirs = dirsHandler.getLocalDirs();
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
List<String> localDirsForRead = dirsHandler.getLocalDirsForRead();
// Log dirs
// logDirs : /opt/tools/hadoop-3.2.1/logs/userlogs
List<String> logDirs = dirsHandler.getLogDirs();
// File cache dirs
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache
List<String> filecacheDirs = getNMFilecacheDirs(localDirsForRead);
// User local dirs
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/
List<String> userLocalDirs = getUserLocalDirs(localDirs);
// Container local dirs [containerLocalDirs]
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/
List<String> containerLocalDirs = getContainerLocalDirs(localDirs);
// Container log dirs
// /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001
List<String> containerLogDirs = getContainerLogDirs(logDirs);
// User file cache dirs
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache
List<String> userFilecacheDirs = getUserFilecacheDirs(localDirsForRead);
// Application local dirs
// /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001
List<String> applicationLocalDirs = getApplicationLocalDirs(localDirs, appIdStr);
// Disk check
if (!dirsHandler.areDisksHealthy()) {
  ret = ContainerExitStatus.DISKS_FAILED;
  throw new IOException("Most of the disks failed. "
      + dirsHandler.getDisksHealthReport(false));
}
List<Path> appDirs = new ArrayList<Path>(localDirs.size());
for (String localDir : localDirs) {
  // usercache : /opt/tools/hadoop-3.2.1/local-dirs/usercache
  Path usersdir = new Path(localDir, ContainerLocalizer.USERCACHE);
  // User workspace : /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
  Path userdir = new Path(usersdir, user);
  // appcache : /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache
  Path appsdir = new Path(userdir, ContainerLocalizer.APPCACHE);
  appDirs.add(new Path(appsdir, appIdStr));
}

3.5. Set the token file location for the container

    // Set the token file location:
    // HADOOP_TOKEN_FILE_LOCATION -> /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_tokens
    addToEnvMap(environment, nmEnvVars,
        ApplicationConstants.CONTAINER_TOKEN_FILE_ENV_NAME,
        new Path(containerWorkDir, FINAL_CONTAINER_TOKENS_FILE).toUri().getPath());

3.6. Write the container script into the private nmPrivate space

// /// Write out the container-script in the nmPrivate space.
try (DataOutputStream containerScriptOutStream =
    lfs.create(nmPrivateContainerScriptPath, EnumSet.of(CREATE, OVERWRITE))) {
  // Sanitize the container's environment
  sanitizeEnv(environment, containerWorkDir, appDirs, userLocalDirs,
      containerLogDirs, localResources, nmPrivateClasspathJarDir, nmEnvVars);
  prepareContainer(localResources, containerLocalDirs);
  // Write out the environment
  exec.writeLaunchEnv(containerScriptOutStream, environment,
      localResources, launchContext.getCommands(),
      containerLogDir, user, nmEnvVars);
}
// /// End of writing out container-script

3.7. Write out the container-tokens

    // /// Write out the container-tokens in the nmPrivate space.
    try (DataOutputStream tokensOutStream =
        lfs.create(nmPrivateTokensPath, EnumSet.of(CREATE, OVERWRITE))) {
      Credentials creds = container.getCredentials();
      creds.writeTokenStorageToStream(tokensOutStream);
    }
    // /// End of writing out container-tokens
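For reference, the container process later finds this file through HADOOP_TOKEN_FILE_LOCATION (set in step 3.5). A minimal, hypothetical standalone snippet (not NodeManager code) that reads such a token file back could look like this:

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class ReadTokensSketch {
    public static void main(String[] args) throws Exception {
        // The path is normally taken from the HADOOP_TOKEN_FILE_LOCATION env var.
        File tokenFile = new File(System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
        // Credentials#readTokenStorageFile parses the token-storage format
        // produced by writeTokenStorageToStream above.
        Credentials creds = Credentials.readTokenStorageFile(tokenFile, new Configuration());
        for (Token<?> token : creds.getAllTokens()) {
            System.out.println("token kind = " + token.getKind()
                + ", service = " + token.getService());
        }
    }
}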

3.8. Launch the Container: launchContainer

// Launch the Container
ret = launchContainer(new ContainerStartContext.Builder()
    .setContainer(container)
    .setLocalizedResources(localResources)
    .setNmPrivateContainerScriptPath(nmPrivateContainerScriptPath)
    .setNmPrivateTokensPath(nmPrivateTokensPath)
    .setUser(user)
    .setAppId(appIdStr)
    .setContainerWorkDir(containerWorkDir)
    .setLocalDirs(localDirs)
    .setLogDirs(logDirs)
    .setFilecacheDirs(filecacheDirs)
    .setUserLocalDirs(userLocalDirs)
    .setContainerLocalDirs(containerLocalDirs)
    .setContainerLogDirs(containerLogDirs)
    .setUserFilecacheDirs(userFilecacheDirs)
    .setApplicationLocalDirs(applicationLocalDirs)
    .build());

3.9. Set the execution result

    // Set the execution result.
    setContainerCompletedStatus(ret);

3.10. Handle the result according to the exit code

    // Handle the result according to the exit code.
    handleContainerExitCode(ret, containerLogDir);
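As a rough illustration of what this last step does, the exit code is translated into a container event: 0 means success, the special "forcibly killed"/"terminated" codes mean the kill was requested, and anything else is reported as a failure. The sketch below is hypothetical and simplified (the event names mirror ContainerEventType, and the numeric values are assumptions); it is not the actual handleContainerExitCode source.

// Hypothetical, simplified sketch of exit-code handling; not NodeManager code.
public class ExitCodeSketch {
    static final int FORCE_KILLED = 137; // assumed value (128 + SIGKILL)
    static final int TERMINATED = 143;   // assumed value (128 + SIGTERM)

    static String classify(int exitCode) {
        if (exitCode == 0) {
            return "CONTAINER_EXITED_WITH_SUCCESS";
        } else if (exitCode == FORCE_KILLED || exitCode == TERMINATED) {
            return "CONTAINER_KILLED_ON_REQUEST";
        } else {
            return "CONTAINER_EXITED_WITH_FAILURE";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(0));   // success
        System.out.println(classify(143)); // killed on request
        System.out.println(classify(1));   // failure
    }
}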

四. Generating the launch script [ launch_container.sh ]

ContainerExecutor#writeLaunchEnv is responsible for generating launch_container.sh.
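Before walking through the real method, here is a toy sketch of the same idea (a hypothetical mini builder, not the actual ContainerLaunch.ShellScriptBuilder API): collect export lines and a final command, then write them out as a bash script.

import java.io.PrintStream;
import java.util.LinkedHashMap;
import java.util.Map;

public class MiniLaunchScriptBuilder {
    private final Map<String, String> env = new LinkedHashMap<>();
    private String command;

    MiniLaunchScriptBuilder env(String key, String value) { env.put(key, value); return this; }
    MiniLaunchScriptBuilder command(String cmd) { this.command = cmd; return this; }

    void write(PrintStream out) {
        out.println("#!/bin/bash");
        out.println("set -o pipefail -e");
        for (Map.Entry<String, String> e : env.entrySet()) {
            out.println("export " + e.getKey() + "=\"" + e.getValue() + "\"");
        }
        // The real builder also emits symlink and debug-copy sections here.
        out.println("exec /bin/bash -c \"" + command + "\"");
    }

    public static void main(String[] args) {
        new MiniLaunchScriptBuilder()
            .env("CONTAINER_ID", "container_1611681788558_0001_01_000001")
            .command("$JAVA_HOME/bin/java -version")
            .write(System.out);
    }
}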

/**
 * This method writes out the launch environment of a container to a specified path.
 *
 * @param out the output stream to which the environment is written (usually
 * a script file which will be executed by the Launcher)
 * @param environment the environment variables and their values
 * @param resources the resources which have been localized for this
 * container. Symlinks will be created to these localized resources
 * @param command the command that will be run
 * @param logDir the log dir to which to copy debugging information
 * @param user the username of the job owner
 * @param outFilename the path to which to write the launch environment
 * @param nmVars the set of environment vars that are explicitly set by NM
 * @throws IOException if any errors happened writing to the OutputStream,
 * while creating symlinks
 */
@VisibleForTesting
public void writeLaunchEnv(OutputStream out, Map<String, String> environment,
    Map<Path, List<String>> resources, List<String> command, Path logDir,
    String user, String outFilename, LinkedHashSet<String> nmVars)
    throws IOException {
  ContainerLaunch.ShellScriptBuilder sb = ContainerLaunch.ShellScriptBuilder.create();
  //    #!/bin/bash
  // Add "set -o pipefail -e" to validate launch_container script.
  sb.setExitOnFailure();
  //    set -o pipefail -e

  // Redirect stdout and stderr for the launch_container script
  sb.stdout(logDir, CONTAINER_PRE_LAUNCH_STDOUT);
  //    export PRELAUNCH_OUT="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/prelaunch.out"
  //    exec >"${PRELAUNCH_OUT}"
  sb.stderr(logDir, CONTAINER_PRE_LAUNCH_STDERR);
  //    export PRELAUNCH_ERR="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/prelaunch.err"
  //    exec 2>"${PRELAUNCH_ERR}"
  if (environment != null) {
    sb.echo("Setting up env variables");
    //      echo "Setting up env variables"
    // Whitelist environment variables are treated specially.
    // Only add them if they are not already defined in the environment.
    // Add them using special syntax to prevent them from eclipsing variables
    // that may be set explicitly in the container image (e.g. in a docker image).
    // Put these before the others to ensure the correct expansion is used.
    sb.echo("Setting up env variables#whitelistVars");
    for (String var : whitelistVars) {
      if (!environment.containsKey(var)) {
        String val = getNMEnvVar(var);
        if (val != null) {
          sb.whitelistedEnv(var, val);
        }
      }
    }
    //      echo "Setting up env variables#whitelistVars"
//      export JAVA_HOME=${JAVA_HOME:-"/Library/java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home"}
//      export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/../../../../hadoop-common-project/hadoop-common/target"}
//      export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/opt/tools/hadoop-3.2.1/etc/hadoop"}
//      export HADOOP_HOME=${HADOOP_HOME:-"/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/../../../../hadoop-common-project/hadoop-common/target"}
//      export PATH=${PATH:-"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/tools/apache-maven-3.6.3/bin:/opt/tools/scala-2.12.10/bin:/usr/local/mysql-5.7.28-macos10.14-x86_64/bin:/Library/java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home/bin:/opt/tools/hadoop-3.2.1/bin:/opt/tools/hadoop-3.2.1/etc/hadoop:henghe:/opt/tools/ozone-1.0.0/bin:/opt/tools/spark-2.4.5/bin:/opt/tools/spark-2.4.5/conf:/opt/tools/redis-5.0.7/src:/opt/tools/datax/bin:/opt/tools/apache-ant-1.9.6/bin:/opt/tools/hbase-2.0.2/bin"}

    sb.echo("Setting up env variables#env");
    // Now write vars that were set explicitly by the nodemanager, preserving the order they were written in.
    for (String nmEnvVar : nmVars) {
      sb.env(nmEnvVar, environment.get(nmEnvVar));
    }
    //      echo "Setting up env variables#env"
//      export HADOOP_TOKEN_FILE_LOCATION="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_tokens"
//      export CONTAINER_ID="container_1611681788558_0001_01_000001"
//      export NM_PORT="62016"
//      export NM_HOST="boyi-pro.lan"
//      export NM_HTTP_PORT="8042"
//      export LOCAL_DIRS="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001"
//      export LOCAL_USER_DIRS="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/"
//      export LOG_DIRS="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001"
//      export USER="henghe"
//      export LOGNAME="henghe"
//      export HOME="/home/"
//      export PWD="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001"
//      export JVM_PID="$$"
//      export MALLOC_ARENA_MAX="4"

    sb.echo("Setting up env variables#remaining");
    // Now write the remaining environment variables.
    for (Map.Entry<String, String> env : sb.orderEnvByDependencies(environment).entrySet()) {
      if (!nmVars.contains(env.getKey())) {
        sb.env(env.getKey(), env.getValue());
      }
    }
    //      echo "Setting up env variables#remaining"
//      export SPARK_YARN_STAGING_DIR="hdfs://localhost:8020/user/henghe/.sparkStaging/application_1611681788558_0001"
//      export APPLICATION_WEB_PROXY_BASE="/proxy/application_1611681788558_0001"
//      export CLASSPATH="$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$PWD/__spark_conf__/__hadoop_conf__"
//      export APP_SUBMIT_TIME_ENV="1611681915166"
//      export SPARK_USER="henghe"
//      export PYTHONHASHSEED="0"
  }

  if (resources != null) {
    sb.echo("Setting up job resources");
    Map<Path, Path> symLinks = resolveSymLinks(resources, user);
    for (Map.Entry<Path, Path> symLink : symLinks.entrySet()) {
      // Create a symlink for each localized resource
      sb.symlink(symLink.getKey(), symLink.getValue());
    }
    //      echo "Setting up job resources"
//      mkdir -p __spark_libs__
//      ln -sf -- "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache/35/spark-examples_2.11-2.4.5.jar" "__app__.jar"
//      mkdir -p __spark_libs__
//      ln -sf -- "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache/180/__spark_conf__.zip" "__spark_conf__"
//      # ... many more omitted here
//      # mkdir -p __spark_libs__
//      # ln -sf -- "xxxxx" "__spark_libs__/xxxxx.jar"
  }

  // Dump debugging information if configured
  if (getConf() != null && getConf().getBoolean(
      YarnConfiguration.NM_LOG_CONTAINER_DEBUG_INFO,
      YarnConfiguration.DEFAULT_NM_LOG_CONTAINER_DEBUG_INFO)) {
    sb.echo("Copying debugging information");
    sb.copyDebugInformation(new Path(outFilename), new Path(logDir, outFilename));
    sb.listDebugInformation(new Path(logDir, DIRECTORY_CONTENTS));
    //      echo "Copying debugging information"
//      # Creating copy of launch script
//      cp "launch_container.sh" "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh"
//      chmod 640 "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh"
//      # Determining directory contents
//      echo "ls -l:" 1>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
//      ls -l 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
//      echo "find -L . -maxdepth 5 -ls:" 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
//      find -L . -maxdepth 5 -ls 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
//      echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
//      find -L . -maxdepth 5 -type l -ls 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
  }

  sb.echo("Launching container");
  //    echo "Launching container"
  // Launch the container
  sb.command(command);
  //    exec /bin/bash -c "
//      $JAVA_HOME/bin/java
//      -server
//      -Xmx1024m
//      -Djava.io.tmpdir=$PWD/tmp
//      -Dspark.yarn.app.container.log.dir=/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001
//      org.apache.spark.deploy.yarn.ApplicationMaster
//        --class 'org.apache.spark.examples.SparkPi'
//        --jar file:/opt/tools/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar
//        --arg '10'
//        --properties-file $PWD/__spark_conf__/__spark_conf__.properties
//      1> /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/stdout
//      2> /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/stderr"

  // Log the final script content
  LOG.warn("ContainerExecutor#writeLaunchEnv : " + sb.toString());
  PrintStream pout = null;
  try {
    pout = new PrintStream(out, false, "UTF-8");
    sb.write(pout);
  } finally {
    if (out != null) {
      out.close();
    }
  }
}

五. Complete Code of the call Method

@Override
public Integer call() {
  if (!validateContainerState()) {
    return 0;
  }
  // Get the ContainerLaunchContext
  final ContainerLaunchContext launchContext = container.getLaunchContext();
  // Get the container ID: container_1611680479488_0001_01_000001
  ContainerId containerID = container.getContainerId();
  String containerIdStr = containerID.toString();
  // Get the commands
  final List<String> command = launchContext.getCommands();
  int ret = -1;
  Path containerLogDir;
  try {
    // Get the localized resource files, e.g.:
    // {Path@9096} "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/usercache/henghe/filecache/62/logging-interceptor-3.12.0.jar" -> {ArrayList@9097}  size = 1
    Map<Path, List<String>> localResources = getLocalizedResources();
    // Get the user: henghe
    final String user = container.getUser();

    // **************** Variable expansion: replace placeholders and build the new commands ****************
    // /// Variable expansion
    // Before the container script gets written out.
    List<String> newCmds = new ArrayList<String>(command.size());
    // Get the appId: application_1611680479488_0001
    String appIdStr = app.getAppId().toString();
    // Log directory relative to the container: application_1611680479488_0001/container_1611680479488_0001_01_000001
    String relativeContainerLogDir = ContainerLaunch.getRelativeContainerLogDir(appIdStr, containerIdStr);
    // Container log directory:
    // /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611680479488_0001/container_1611680479488_0001_01_000001
    containerLogDir = dirsHandler.getLogPathForWrite(relativeContainerLogDir, false);
    // Record the container log directory
    recordContainerLogDir(containerID, containerLogDir.toString());
    for (String str : command) {
      // TODO: Should we instead work via symlinks without this grammar?
      newCmds.add(expandEnvironment(str, containerLogDir));
    }
    // Set the expanded commands
    launchContext.setCommands(newCmds);
    // Get the environment variables, e.g.:
    //    "SPARK_YARN_STAGING_DIR" -> "hdfs://localhost:8020/user/henghe/.sparkStaging/application_1611680479488_0001"
    //    "APPLICATION_WEB_PROXY_BASE" -> "/proxy/application_1611680479488_0001"
    //    "SPARK_USER" -> "henghe"
    //    "CLASSPATH" -> "$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$PWD/__spark_conf__/__hadoop_conf__"
    //    "PYTHONHASHSEED" -> "0"
    //    "APP_SUBMIT_TIME_ENV" -> "1611680611847"
    Map<String, String> environment = expandAllEnvironmentVars(launchContext, containerLogDir);
    // /// End of variable expansion
    // **************** End of variable expansion ****************

    // Use this to track variables that are added to the environment by the NM.
    LinkedHashSet<String> nmEnvVars = new LinkedHashSet<String>();
    // Get the local file system context.
    // defaultFS : LocalFs@9710
    // workingDir : file:/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
    FileContext lfs = FileContext.getLocalFSFileContext();
    // Path of the private container launch script [nmPrivateContainerScriptPath]
    // /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh
    Path nmPrivateContainerScriptPath = dirsHandler.getLocalPathForWrite(
        getContainerPrivateDir(appIdStr, containerIdStr) + Path.SEPARATOR + CONTAINER_SCRIPT);
    // Path of the private tokens file [nmPrivateTokensPath]
    // /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_1611681788558_0001_01_000001.tokens
    Path nmPrivateTokensPath = dirsHandler.getLocalPathForWrite(
        getContainerPrivateDir(appIdStr, containerIdStr) + Path.SEPARATOR
            + String.format(ContainerLocalizer.TOKEN_FILE_NAME_FMT, containerIdStr));
    // Classpath jar directory [nmPrivateClasspathJarDir]
    // /opt/tools/hadoop-3.2.1/local-dirs/nmPrivate/application_1611681788558_0001/container_1611681788558_0001_01_000001
    Path nmPrivateClasspathJarDir = dirsHandler.getLocalPathForWrite(
        getContainerPrivateDir(appIdStr, containerIdStr));
    // Select the working directory for the container [containerWorkDir]
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001
    Path containerWorkDir = deriveContainerWorkDir();
    // Record the container working directory
    recordContainerWorkDir(containerID, containerWorkDir.toString());
    // Relative path of the pid file [pidFileSubpath]
    // nmPrivate/application_1611680479488_0001/container_1611680479488_0001_01_000001/container_1611680479488_0001_01_000001.pid
    String pidFileSubpath = getPidFileSubpath(appIdStr, containerIdStr);
    // Absolute path of the pid file [pidFilePath]
    // The pid file should be in the nm private dir so that it is not accessible by users
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/nmPrivate/application_1611680479488_0001/container_1611680479488_0001_01_000001/container_1611680479488_0001_01_000001.pid
    pidFilePath = dirsHandler.getLocalPathForWrite(pidFileSubpath);
    // Local dirs [localDirs]
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
    List<String> localDirs = dirsHandler.getLocalDirs();
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
    List<String> localDirsForRead = dirsHandler.getLocalDirsForRead();
    // Log dirs
    // logDirs : /opt/tools/hadoop-3.2.1/logs/userlogs
    List<String> logDirs = dirsHandler.getLogDirs();
    // File cache dirs
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache
    List<String> filecacheDirs = getNMFilecacheDirs(localDirsForRead);
    // User local dirs
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/
    List<String> userLocalDirs = getUserLocalDirs(localDirs);
    // Container local dirs [containerLocalDirs]
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/
    List<String> containerLocalDirs = getContainerLocalDirs(localDirs);
    // Container log dirs
    // /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001
    List<String> containerLogDirs = getContainerLogDirs(logDirs);
    // User file cache dirs
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache
    List<String> userFilecacheDirs = getUserFilecacheDirs(localDirsForRead);
    // Application local dirs
    // /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001
    List<String> applicationLocalDirs = getApplicationLocalDirs(localDirs, appIdStr);
    // Disk check
    if (!dirsHandler.areDisksHealthy()) {
      ret = ContainerExitStatus.DISKS_FAILED;
      throw new IOException("Most of the disks failed. "
          + dirsHandler.getDisksHealthReport(false));
    }
    List<Path> appDirs = new ArrayList<Path>(localDirs.size());
    for (String localDir : localDirs) {
      // usercache : /opt/tools/hadoop-3.2.1/local-dirs/usercache
      Path usersdir = new Path(localDir, ContainerLocalizer.USERCACHE);
      // User workspace : /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe
      Path userdir = new Path(usersdir, user);
      // appcache : /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache
      Path appsdir = new Path(userdir, ContainerLocalizer.APPCACHE);
      appDirs.add(new Path(appsdir, appIdStr));
    }
    // Set the token file location:
    // HADOOP_TOKEN_FILE_LOCATION -> /opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_tokens
    addToEnvMap(environment, nmEnvVars,
        ApplicationConstants.CONTAINER_TOKEN_FILE_ENV_NAME,
        new Path(containerWorkDir, FINAL_CONTAINER_TOKENS_FILE).toUri().getPath());
    // /// Write out the container-script in the nmPrivate space.
    try (DataOutputStream containerScriptOutStream =
        lfs.create(nmPrivateContainerScriptPath, EnumSet.of(CREATE, OVERWRITE))) {
      // Sanitize the container's environment
      sanitizeEnv(environment, containerWorkDir, appDirs, userLocalDirs,
          containerLogDirs, localResources, nmPrivateClasspathJarDir, nmEnvVars);
      prepareContainer(localResources, containerLocalDirs);
      // Write out the environment
      exec.writeLaunchEnv(containerScriptOutStream, environment,
          localResources, launchContext.getCommands(),
          containerLogDir, user, nmEnvVars);
    }
    // /// End of writing out container-script

    // /// Write out the container-tokens in the nmPrivate space.
    try (DataOutputStream tokensOutStream =
        lfs.create(nmPrivateTokensPath, EnumSet.of(CREATE, OVERWRITE))) {
      Credentials creds = container.getCredentials();
      creds.writeTokenStorageToStream(tokensOutStream);
    }
    // /// End of writing out container-tokens

    // Launch the Container
    ret = launchContainer(new ContainerStartContext.Builder()
        .setContainer(container)
        .setLocalizedResources(localResources)
        .setNmPrivateContainerScriptPath(nmPrivateContainerScriptPath)
        .setNmPrivateTokensPath(nmPrivateTokensPath)
        .setUser(user)
        .setAppId(appIdStr)
        .setContainerWorkDir(containerWorkDir)
        .setLocalDirs(localDirs)
        .setLogDirs(logDirs)
        .setFilecacheDirs(filecacheDirs)
        .setUserLocalDirs(userLocalDirs)
        .setContainerLocalDirs(containerLocalDirs)
        .setContainerLogDirs(containerLogDirs)
        .setUserFilecacheDirs(userFilecacheDirs)
        .setApplicationLocalDirs(applicationLocalDirs)
        .build());
  } catch (ConfigurationException e) {
    LOG.error("Failed to launch container due to configuration error.", e);
    dispatcher.getEventHandler().handle(new ContainerExitEvent(
        containerID, ContainerEventType.CONTAINER_EXITED_WITH_FAILURE, ret,
        e.getMessage()));
    // Mark the node as unhealthy
    context.getNodeStatusUpdater().reportException(e);
    return ret;
  } catch (Throwable e) {
    LOG.warn("Failed to launch container.", e);
    dispatcher.getEventHandler().handle(new ContainerExitEvent(
        containerID, ContainerEventType.CONTAINER_EXITED_WITH_FAILURE, ret,
        e.getMessage()));
    return ret;
  } finally {
    // Set the execution result.
    setContainerCompletedStatus(ret);
  }
  // Handle the result according to the exit code.
  handleContainerExitCode(ret, containerLogDir);
  return ret;
}

六. Contents of launchContext


localResources { key: "__app__.jar" value { resource { scheme: "hdfs" host: "localhost" port: 8020 file: "/user/henghe/.sparkStaging/application_1611718329098_0001/spark-examples_2.11-2.4.5.jar" } size: 1475072 timestamp: 1611718431941 type: FILE visibility: PRIVATE}
}
# ... many more localResources entries omitted ...
tokens: "HDTS\000\001\000\032\n\r\n\t\b\001\020\212\206\230\217\364.\020\002\020\272\276\206\361\370\377\377\377\377\001\024\004\321\263\034\375)\2549\374\035y\346\221l\357`\344\330\242\034\020YARN_AM_RM_TOKEN\000\000"
environment { key: "SPARK_YARN_STAGING_DIR" value: "hdfs://localhost:8020/user/henghe/.sparkStaging/application_1611718329098_0001" }
environment { key: "LOCAL_DIRS" value: "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611718329098_0001" }
environment { key: "APPLICATION_WEB_PROXY_BASE" value: "/proxy/application_1611718329098_0001" }
environment { key: "NM_HTTP_PORT" value: "8042" }
environment { key: "LOG_DIRS" value: "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611718329098_0001/container_1611718329098_0001_02_000001" }
environment { key: "NM_PORT" value: "51146" }
environment { key: "USER" value: "henghe" }
environment { key: "CLASSPATH" value: "$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$PWD/__spark_conf__/__hadoop_conf__" }
environment { key: "APP_SUBMIT_TIME_ENV" value: "1611718435286" }
environment { key: "NM_HOST" value: "192.168.8.188" }
environment { key: "HADOOP_TOKEN_FILE_LOCATION" value: "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611718329098_0001/container_1611718329098_0001_02_000001/container_tokens" }
environment { key: "SPARK_USER" value: "henghe" }
environment { key: "LOCAL_USER_DIRS" value: "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/" }
environment { key: "LOGNAME" value: "henghe" }
environment { key: "JVM_PID" value: "$$" }
environment { key: "PWD" value: "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611718329098_0001/container_1611718329098_0001_02_000001" }
environment { key: "PYTHONHASHSEED" value: "0" }
environment { key: "HOME" value: "/home/" }
environment { key: "CONTAINER_ID" value: "container_1611718329098_0001_02_000001" }
environment { key: "MALLOC_ARENA_MAX" value: "4" }
command: "$JAVA_HOME/bin/java"
command: "-server"
command: "-Xmx1024m"
command: "-Djava.io.tmpdir=$PWD/tmp"
command: "-Dspark.yarn.app.container.log.dir=/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611718329098_0001/container_1611718329098_0001_02_000001"
command: "org.apache.spark.deploy.yarn.ApplicationMaster"
command: "--class"
command: "\'org.apache.spark.examples.SparkPi\'"
command: "--jar"
command: "file:/opt/tools/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar"
command: "--arg"
command: "\'10\'"
command: "--properties-file"
command: "$PWD/__spark_conf__/__spark_conf__.properties"
command: "1>"
command: "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611718329098_0001/container_1611718329098_0001_02_000001/stdout"
command: "2>"
command: "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611718329098_0001/container_1611718329098_0001_02_000001/stderr"
application_ACLs { accessType: APPACCESS_MODIFY_APP acl: "henghe " }
application_ACLs { accessType: APPACCESS_VIEW_APP acl: "henghe " }

七. The launch_container.sh File

# Script header
#!/bin/bash
# Fail fast and check exit statuses (exit codes).
set -o pipefail -e
# Redirect the pre-launch stdout log
export PRELAUNCH_OUT="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/prelaunch.out"
exec >"${PRELAUNCH_OUT}"
# Redirect the pre-launch stderr log
export PRELAUNCH_ERR="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/prelaunch.err"
exec 2>"${PRELAUNCH_ERR}"
# Set up environment variables
echo "Setting up env variables"
# Set up whitelist environment variables
echo "Setting up env variables#whitelistVars"
export JAVA_HOME=${JAVA_HOME:-"/Library/java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home"}
export HADOOP_COMMON_HOME=${HADOOP_COMMON_HOME:-"/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/../../../../hadoop-common-project/hadoop-common/target"}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/opt/tools/hadoop-3.2.1/etc/hadoop"}
export HADOOP_HOME=${HADOOP_HOME:-"/opt/workspace/apache/hadoop-3.2.1-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/../../../../hadoop-common-project/hadoop-common/target"}
export PATH=${PATH:-"/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/tools/apache-maven-3.6.3/bin:/opt/tools/scala-2.12.10/bin:/usr/local/mysql-5.7.28-macos10.14-x86_64/bin:/Library/java/JavaVirtualMachines/jdk1.8.0_271.jdk/Contents/Home/bin:/opt/tools/hadoop-3.2.1/bin:/opt/tools/hadoop-3.2.1/etc/hadoop:henghe:/opt/tools/ozone-1.0.0/bin:/opt/tools/spark-2.4.5/bin:/opt/tools/spark-2.4.5/conf:/opt/tools/redis-5.0.7/src:/opt/tools/datax/bin:/opt/tools/apache-ant-1.9.6/bin:/opt/tools/hbase-2.0.2/bin"}
# Set up environment variables written explicitly by the NodeManager
echo "Setting up env variables#env"
export HADOOP_TOKEN_FILE_LOCATION="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001/container_tokens"
export CONTAINER_ID="container_1611681788558_0001_01_000001"
export NM_PORT="62016"
export NM_HOST="boyi-pro.lan"
export NM_HTTP_PORT="8042"
export LOCAL_DIRS="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001"
export LOCAL_USER_DIRS="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/"
export LOG_DIRS="/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001"
export USER="henghe"
export LOGNAME="henghe"
export HOME="/home/"
export PWD="/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/appcache/application_1611681788558_0001/container_1611681788558_0001_01_000001"
export JVM_PID="$$"
export MALLOC_ARENA_MAX="4"
# Set up the remaining environment variables
echo "Setting up env variables#remaining"
export SPARK_YARN_STAGING_DIR="hdfs://localhost:8020/user/henghe/.sparkStaging/application_1611681788558_0001"
export APPLICATION_WEB_PROXY_BASE="/proxy/application_1611681788558_0001"
export CLASSPATH="$PWD:$PWD/__spark_conf__:$PWD/__spark_libs__/*:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$PWD/__spark_conf__/__hadoop_conf__"
export APP_SUBMIT_TIME_ENV="1611681915166"
export SPARK_USER="henghe"
export PYTHONHASHSEED="0"
# Set up job resources (create symlinks to the required jar/config files via ln -sf)
echo "Setting up job resources"
mkdir -p __spark_libs__
ln -sf -- "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache/35/spark-examples_2.11-2.4.5.jar" "__app__.jar"
mkdir -p __spark_libs__
ln -sf -- "/opt/tools/hadoop-3.2.1/local-dirs/usercache/henghe/filecache/180/__spark_conf__.zip" "__spark_conf__"
# ... many more omitted here
# mkdir -p __spark_libs__
# ln -sf -- "xxxxx" "__spark_libs__/xxxxx.jar"
# Copy debugging information
echo "Copying debugging information"
# Creating copy of launch script
cp "launch_container.sh" "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh"
chmod 640 "/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/launch_container.sh"
# Determining directory contents
echo "ls -l:" 1>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
ls -l 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
echo "find -L . -maxdepth 5 -ls:" 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
find -L . -maxdepth 5 -ls 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
find -L . -maxdepth 5 -type l -ls 1>>"/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/directory.info"
# Launch the container
echo "Launching container"
exec /bin/bash -c "$JAVA_HOME/bin/java -server -Xmx1024m -Djava.io.tmpdir=$PWD/tmp -Dspark.yarn.app.container.log.dir=/opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'org.apache.spark.examples.SparkPi' --jar file:/opt/tools/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar --arg '10' --properties-file $PWD/__spark_conf__/__spark_conf__.properties 1> /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/stdout 2> /opt/tools/hadoop-3.2.1/logs/userlogs/application_1611681788558_0001/container_1611681788558_0001_01_000001/stderr"
