NoViableAltException(-1@[215:51: ( KW_AS )?])

Error log:
NoViableAltException(-1@[215:51: ( KW_AS )?])
    at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
    at org.antlr.runtime.DFA.predict(DFA.java:144)
    at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.subQuerySource(HiveParser_FromClauseParser.java:5319)
    at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:3741)
    at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1873)
    at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1518)
    at org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:45861)
    at org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:41516)
    at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41402)
    at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40413)
    at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40283)
    at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1590)
    at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1109)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
    at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:276)
    at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
    at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
    at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
    at org.apache.spark.sql.hive.HiveQLDialect$$anonfun$parse$1.apply(HiveContext.scala:66)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
    at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
    at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
    at org.apache.spark.sql.hive.HiveQLDialect.parse(HiveContext.scala:65)
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
    at org.apache.spark.sql.SQLContext$$anonfun$2.apply(SQLContext.scala:211)
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:114)
    at org.apache.spark.sql.execution.SparkSQLParser$$anonfun$org$apache$spark$sql$execution$SparkSQLParser$$others$1.apply(SparkSQLParser.scala:113)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
    at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
    at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
    at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
    at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
    at org.apache.spark.sql.SQLContext$$anonfun$1.apply(SQLContext.scala:208)
    at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
    at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
    at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:331)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:311)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Error in query: cannot recognize input near '<EOF>' '<EOF>' '<EOF>' in subquery source; line 17 pos 94

Roughly translated, the parser cannot recognize the subquery input. Below are the things I tried:
1. Backticks `` (one claim circulating online is that adding backticks fixes this)

SELECT SUM(`day1`) AS `day_num`
,SUM(`month1`) AS `month_num`
,SUM(`year1`) AS `year_num`
,COLLECT_SET(`date_max`) AS `date_max`
FROM
(SELECT
IF(total_3.`date_max` == total_3.date1,1,0) AS `day1`
,IF(total_3.`date1` >= CONCAT(SUBSTR(total_3.`date_max`,1,7),"-01"),1,0) AS `month1`
,IF(total_3.`date1` >= CONCAT(SUBSTR(total_3.`date_max`,1,4),"-01-01"),1,0) AS `year1`
,total_3.`date_max`
FROM
(SELECT `date1`,`date_max` from (
SELECT `date1`,"A" AS flag
FROM `original_service_code.o_bd_workdata_s`) AS total_1
JOIN (SELECT MAX(`date1`) AS date_max -- get the max date
,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS total ON total_1.flag = total.flag) AS total_3);

However, this did not help.
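One detail worth flagging in the query above: in HiveQL a pair of backticks quotes a single identifier, so as far as I know `original_service_code.o_bd_workdata_s` (the whole database.table name inside one pair) is read as one identifier containing a dot, not as a database reference plus a table reference. If quoting is wanted at all, each part should get its own pair:

-- likely wrong: one backtick pair around the entire qualified name
FROM `original_service_code.o_bd_workdata_s`
-- safer: quote the database and the table separately
FROM `original_service_code`.`o_bd_workdata_s`

(Even so, this is not what triggers the NoViableAltException here.)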

2. Breaking the SQL apart and materializing each subquery as its own table

Step 1
drop table apply_service_code.B ;
CREATE TABLE apply_service_code.B AS
SELECT MAX(date1) AS date_max -- get the max date
,"A" AS flag
FROM original_service_code.o_bd_workdata_s;
Step 2
create table apply_service_code.c as
SELECT date1,"A" AS flag
FROM original_service_code.o_bd_workdata_s;
Step 3
SELECT date1,"A" AS flag from (SELECT MAX(date1) AS date_max -- get the max date
,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS total_1
JOIN (SELECT date1,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS total ON total_1.flag = total.flag;
Step 4
create table apply_service_code.d as SELECT
IF(total_3.date_max == total_3.date1,1,0) AS day1
,IF(total_3.date1 >= CONCAT(SUBSTR(total_3.date_max,1,7),"-01"),1,0) AS month1
,IF(total_3.date1 >= CONCAT(SUBSTR(total_3.date_max,1,4),"-01-01"),1,0) AS year1
,total_3.date_max
FROM
(SELECT date1,date_max from (
SELECT date1,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS total_1
JOIN (SELECT MAX(date1) AS date_max -- get the max date
,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS total ON total_1.flag = total.flag) AS total_3;
CREATE TABLE apply_service_code.a_bd_worknumber_s AS
SELECT SUM(day1) AS day_num
,SUM(month1) AS month_num
,SUM(year1) AS year_num
,COLLECT_SET(date_max)[0] AS date_max
FROM apply_service_code.d ;

Tested in practice, this approach does get around the problem (at least every NoViableAltException I hit was resolved this way).
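As an aside, the flag column and the JOIN in these steps exist only to attach the table-wide maximum of date1 to every row. Assuming a Hive build with windowing support (0.11+) that accepts an empty OVER() clause (otherwise OVER (PARTITION BY 1) should behave the same), the whole pipeline can be sketched with no intermediate tables:

SELECT SUM(IF(date1 = date_max,1,0)) AS day_num -- rows on the latest day
,SUM(IF(date1 >= CONCAT(SUBSTR(date_max,1,7),"-01"),1,0)) AS month_num -- rows in the latest month
,SUM(IF(date1 >= CONCAT(SUBSTR(date_max,1,4),"-01-01"),1,0)) AS year_num -- rows in the latest year
,MAX(date_max) AS date_max
FROM
(SELECT date1
,MAX(date1) OVER () AS date_max -- the global max, replacing the constant-flag JOIN
FROM original_service_code.o_bd_workdata_s) AS t; -- the derived table still needs an alias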

3. Switching Hive versions (another theory online is that the version is too old)
I was on Hive 0.13.0; after upgrading to 1.2.2, it made no difference.

So far these are the only workarounds I have. None of the suggestions found online proved reliable, and the second one I worked out myself by trial and error. If I come across a better fix later, I will update this post.

Big data: from getting started to giving up. You deserve it.


4. Update: it works now. All it took was qualifying the column names with their subquery aliases (and giving every subquery an alias of its own). The revised code is below:
DROP TABLE IF EXISTS apply_service_code.a_bd_worknumber_s;
-- create and populate the table
CREATE TABLE apply_service_code.a_bd_worknumber_s AS
SELECT SUM(D.day1) AS day_num
,SUM(D.month1) AS month_num
,SUM(D.year1) AS year_num
,COLLECT_SET(D.date_max) AS date_max
FROM
(SELECT
IF(C.date_max == C.date1,1,0) AS day1
,IF(C.date1 >= CONCAT(SUBSTR(C.date_max,1,7),"-01"),1,0) AS month1
,IF(C.date1 >= CONCAT(SUBSTR(C.date_max,1,4),"-01-01"),1,0) AS year1
,C.date_max
FROM
(SELECT date1,date_max from (
SELECT date1,"A" AS flag
FROM original_service_code.o_bd_workdata_s ) AS A
JOIN (SELECT MAX(date1) AS date_max -- get the max date
,"A" AS flag
FROM original_service_code.o_bd_workdata_s) AS B ON A.flag = B.flag) AS C) AS D;

Comparing this with attempt 1 makes the error's meaning clear: there, the query ended with ") AS total_3);", leaving the outermost derived table with no alias at all, while here it is named AS D. The ( KW_AS )? fragment in the error is the parser looking for exactly that optional AS plus an alias after a subquery source, and it ran into the end of the input instead, hence "cannot recognize input near '<EOF>' ... in subquery source". Now I truly understand what this error means, very good.
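The rule is easy to confirm on a toy query against the same table (the alias t is just a hypothetical name):

-- fails to parse: the derived table has no alias, so the optional
-- ( KW_AS )? plus identifier the parser wants runs into <EOF>
SELECT * FROM (SELECT date1 FROM original_service_code.o_bd_workdata_s);

-- parses fine once the subquery is named
SELECT * FROM (SELECT date1 FROM original_service_code.o_bd_workdata_s) AS t;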

So I won't give up just yet, hahaha.
