Reading Kafka data with Spark

    // Create DataFrame representing the stream of input lines from Kafka
    val lines = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092")
      .option("subscribe", "log_active")
      .load()
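The load() call above only builds the plan; the failure shows up once a streaming query is actually started, because that is when the Kafka source creates its KafkaConsumer. A minimal sketch of starting such a query (the console sink and column casts are assumptions for illustration, not part of the original job):

    // Start a query on `lines`; this is the point where StreamExecution
    // instantiates the Kafka source and the error below is raised.
    val query = lines
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()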

Error message:

21/02/01 10:39:50 WARN consumer.ConsumerConfig: The configuration max.poll.records = 1 was supplied but isn't a known config.
21/02/01 10:39:50 INFO utils.AppInfoParser: Kafka version : 0.9.0-kafka-2.0.2
21/02/01 10:39:50 INFO utils.AppInfoParser: Kafka commitId : unknown
21/02/01 10:39:50 ERROR streaming.StreamExecution: Query [id = 3a0fd490-4f78-4d4f-ac33-a245b04e363f, runId = 2c5f1322-2c8e-4e5a-b992-5b859cb0bdd6] terminated with error
java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
    at org.apache.spark.sql.kafka010.SubscribeStrategy.createConsumer(ConsumerStrategy.scala:63)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader.createConsumer(KafkaOffsetReader.scala:297)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader.<init>(KafkaOffsetReader.scala:78)
    at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:88)
    at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:243)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:158)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2$$anonfun$applyOrElse$1.apply(StreamExecution.scala:155)
    at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
    at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:155)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.applyOrElse(StreamExecution.scala:153)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
    at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan$lzycompute(StreamExecution.scala:153)
    at org.apache.spark.sql.execution.streaming.StreamExecution.logicalPlan(StreamExecution.scala:147)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:276)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
Exception in thread "stream execution thread for [id = 3a0fd490-4f78-4d4f-ac33-a245b04e363f, runId = 2c5f1322-2c8e-4e5a-b992-5b859cb0bdd6]" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V
    (same stack trace as above)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: org.apache.kafka.clients.consumer.KafkaConsumer.subscribe(Ljava/util/Collection;)V

This is covered in the official Cloudera documentation: https://www.cloudera.com/documentation/spark2/latest/topics/spark2_kafka.html#running_jobs

Solution 1: The log output shows which Kafka client is actually loaded at runtime (Kafka version : 0.9.0-kafka-2.0.2), while the jar declared in my pom.xml is built against the 0.10 client, so the two are inconsistent. On the cluster, select the 0.10 client through the SPARK_KAFKA_VERSION environment variable when submitting the job:

# Set the environment variable for the duration of your shell session:
export SPARK_KAFKA_VERSION=0.10
spark-submit arguments

# Or: set the environment variable for the duration of a single command:
SPARK_KAFKA_VERSION=0.10 spark-submit arguments
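After setting SPARK_KAFKA_VERSION=0.10 and resubmitting, it is worth confirming that the driver really picked up the 0.10 client. A small sketch of such a check (the object name is illustrative; submit it the same way as the real job so it sees the same classpath). AppInfoParser.getVersion returns the same string Spark logs as "Kafka version : ..." above:

    // Print where the KafkaConsumer class was loaded from and which
    // kafka-clients version it reports; after the fix this should report a
    // 0.10.x build instead of 0.9.0-kafka-2.0.2.
    object KafkaClientCheck {
      def main(args: Array[String]): Unit = {
        val consumerClass = Class.forName("org.apache.kafka.clients.consumer.KafkaConsumer")
        println("KafkaConsumer loaded from: " +
          consumerClass.getProtectionDomain.getCodeSource.getLocation)
        println("kafka-clients version: " +
          org.apache.kafka.common.utils.AppInfoParser.getVersion)
      }
    }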

Solution 2: refer to https://docs.cloudera.com/documentation/spark2/latest/topics/spark2_kafka.html#running_jobs
