The full error is as follows:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

------------------------------------------------------------------------------------------------------------------------------------------------------------

① Visit http://master:8080/

Click "Running Applications" at the bottom of the page.

② In the RUNNING list, open the application's stderr.

It shows the following error:

java.io.InvalidClassException: scala.collection.mutable.WrappedArray$ofRef; local class incompatible: stream classdesc serialVersionUID = 3456489343829468865, local class serialVersionUID = 1028182004549731694
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1843)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:109)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$deserialize$2(NettyRpcEnv.scala:292)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:345)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$deserialize$1(NettyRpcEnv.scala:291)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:291)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$askAbortable$7(NettyRpcEnv.scala:246)
    at org.apache.spark.rpc.netty.NettyRpcEnv.$anonfun$askAbortable$7$adapted(NettyRpcEnv.scala:246)
    at org.apache.spark.rpc.netty.RpcOutboxMessage.onSuccess(Outbox.scala:90)
    at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:194)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:142)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
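An InvalidClassException on scala.collection.mutable.WrappedArray$ofRef with two different serialVersionUID values is the usual symptom of a Scala version mismatch: the scala-library used to build and drive the job is not the same release as the one the cluster's executors run, so Java serialization rejects the class. As a minimal check (assuming you can open spark-shell on the cluster and run a line of Scala from the project that builds the jar), print the Scala version on both sides and compare:

    // Run once in spark-shell on the cluster and once from the project the jar is
    // built with; the two version strings should match.
    println(scala.util.Properties.versionString)   // e.g. "version 2.12.10"

    // From spark-shell it is also possible to ask an executor directly
    // (assumes the shell has been granted at least one executor):
    sc.parallelize(Seq(0), 1).map(_ => scala.util.Properties.versionString).collect().foreach(println)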

The fix that finally worked:

Run jps on both nodes to check that a Worker process is running on each.

In File -> Project Structure -> Libraries, change the Scala SDK to the one under your own $SCALA_HOME, so the project is built against the same Scala version the cluster runs.

mvn clean

mvn package
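After repackaging, a trivial job can confirm that executors now start and the warning stops repeating. The sketch below is only an illustration of such a check, not part of the original fix; the master URL spark://master:7077 is assumed from the UI address above and should be adjusted to the actual cluster.

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sanity-check job: if workers are registered and the Scala versions
    // match, the reduce finishes instead of looping on the
    // "Initial job has not accepted any resources" warning.
    object ResourceCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ResourceCheck"))
        val sum = sc.parallelize(1 to 100, 2).reduce(_ + _)
        println(s"sum = $sum, scala = ${scala.util.Properties.versionString}")
        sc.stop()
      }
    }

Package it with mvn package and submit it with spark-submit --class ResourceCheck --master spark://master:7077 <path-to-jar>.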
