1. Detailed exception log

java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:314)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:298)
at org.apache.spark.sql.execution.QueryExecution.hiveResultString(QueryExecution.scala:133)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:340)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:248)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.util.concurrent.ExecutionException: java.lang.IndexOutOfBoundsException: Index: 0
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1016)
... 56 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 0
at java.util.Collections$EmptyList.get(Collections.java:4454)
at org.apache.hadoop.hive.ql.io.orc.OrcProto$Type.getSubtypes(OrcProto.java:12240)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getColumnIndicesFromNames(ReaderImpl.java:651)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.getRawDataSizeOfColumns(ReaderImpl.java:634)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:927)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:836)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:702)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

2. Inspecting the table's HDFS directory shows that it contains empty (zero-byte) files

3. The table's storage format is ORC
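Point 2 can be checked quickly from the command line. The sketch below is a minimal helper, assuming an HDFS client is available; the warehouse path in the usage comment is a placeholder, not a path from the original post.

```shell
# filter_empty reads `hdfs dfs -ls -R` output on stdin and prints the
# paths of zero-length files (size is column 5, path is column 8).
filter_empty() {
  awk '$5 == 0 { print $8 }'
}

# Usage (the path is a placeholder -- substitute your table's location):
#   hdfs dfs -ls -R /user/hive/warehouse/mydb.db/mytable | filter_empty
```

Any path printed by the filter is a zero-byte file; these are the files whose empty ORC footers trigger the `IndexOutOfBoundsException` during split generation.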

Solution:

Change the table's storage format from ORC to Parquet (or TEXTFILE).
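One way to apply the fix is to rebuild the table with a CTAS statement. This is a sketch rather than the original author's exact commands: the database and table names are placeholders, and the helper below only generates the DDL string.

```shell
# build_fix_sql prints CTAS DDL that copies a table into a new
# Parquet-backed table; "$1" is a placeholder db.table name.
build_fix_sql() {
  printf 'CREATE TABLE %s_pq STORED AS PARQUET AS SELECT * FROM %s;\n' "$1" "$1"
}

# Usage (requires a Spark or Hive client):
#   spark-sql -e "$(build_fix_sql mydb.mytable)"
```

After verifying the new table, the old ORC table can be dropped or renamed; the `_pq` suffix here is purely illustrative.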

