Problem Description

I used a UDF in my query and it failed at runtime, because as of Spark 2.4 the Continuous Processing mode supports only projection-style operations (projections) such as select, map, flatMap, mapPartitions, etc., and selection-style operations (selections) such as where and filter. A minimal sketch that reproduces the failure is shown below.
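For illustration, here is a minimal PySpark sketch of the failing pattern, not my original job: the broker address and the normalize UDF are hypothetical stand-ins; only the topic name cloud_behavior comes from the log below.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("continuous-udf-repro").getOrCreate()

    # Any Python UDF hits the same limitation; this one just lower-cases a string.
    normalize = udf(lambda s: s.lower() if s is not None else None, StringType())

    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
          .option("subscribe", "cloud_behavior")                # topic from the log
          .load())

    # Applying the UDF under a continuous trigger is what fails at runtime:
    query = (df.select(normalize(col("value").cast("string")).alias("value"))
             .writeStream
             .format("console")
             .trigger(continuous="1 second")
             .start())
    query.awaitTermination()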

Official Documentation

As of Spark 2.4, only the following type of queries are supported in the continuous processing mode.

Operations: Only map-like Dataset/DataFrame operations are supported in continuous mode, that is, only projections (select, map, flatMap, mapPartitions, etc.) and selections (where, filter, etc.).
All SQL functions are supported except aggregation functions (since aggregations are not yet supported), current_timestamp() and current_date() (deterministic computations using time is challenging).
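Two workarounds follow directly from these documented limits. This is a hedged sketch that reuses spark, df, and normalize from the repro above; the trigger intervals are arbitrary.

    from pyspark.sql.functions import col

    # Option 1: keep the continuous trigger but use only built-in projections
    # and selections (select/selectExpr, where/filter) -- no UDFs, no aggregations.
    ok_query = (df.selectExpr("CAST(value AS STRING) AS value")
                .where(col("value").isNotNull())
                .writeStream
                .format("console")
                .trigger(continuous="1 second")
                .start())

    # Option 2: keep the UDF but fall back to the default micro-batch engine,
    # which does support Python UDFs, at the cost of higher end-to-end latency.
    mb_query = (df.select(normalize(col("value").cast("string")).alias("value"))
                .writeStream
                .format("console")
                .trigger(processingTime="1 second")
                .start())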

Error Messages

[Stage 0:> (0 + 3) / 36]
2019-09-29 11:26:05 WARN  TaskSetManager:66 - Lost task 2.0 in stage 0.0: java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:347)
    at scala.None$.get(Option.scala:345)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader.next(ContinuousQueuedDataReader.scala:116)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:83)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD$$anon$1.getNext(ContinuousDataSourceRDD.scala:81)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:619)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1124)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
    at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$2.writeIteratorToStream(PythonUDFRunner.scala:50)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)
2019-09-29 11:26:05 WARN  TaskSetManager:66 - Lost task 1.1 in stage 0.0: org.apache.spark.sql.execution.streaming.continuous.ContinuousTaskRetryException: Continuous execution does not support task retry
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD.compute(ContinuousDataSourceRDD.scala:68)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply$mcV$sp(ContinuousWriteRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:76)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-09-29 11:26:05 ERROR TaskSetManager:70 - Task 1 in stage 0.0 failed 4 times; aborting job
2019-09-29 11:26:05 ERROR ContinuousExecution:91 - Query [id = 1c99b68c-b629-4eac-b797-6d41a8050773, runId = 2f4cb83d-6ca9-4fd1-bb9d-2133b277e663] terminated with error
org.apache.spark.SparkException: Writing job aborted.
    at org.apache.spark.sql.execution.streaming.continuous.WriteToContinuousDataSourceExec.doExecute(WriteToContinuousDataSourceExec.scala:62)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution$$anonfun$runContinuous$3$$anonfun$apply$1.apply(ContinuousExecution.scala:262)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution$$anonfun$runContinuous$3$$anonfun$apply$1.apply(ContinuousExecution.scala:262)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution$$anonfun$runContinuous$3.apply(ContinuousExecution.scala:262)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution$$anonfun$runContinuous$3.apply(ContinuousExecution.scala:262)
    at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution.runContinuous(ContinuousExecution.scala:260)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousExecution.runActivatedStream(ContinuousExecution.scala:90)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 : org.apache.spark.sql.execution.streaming.continuous.ContinuousTaskRetryException: Continuous execution does not support task retry
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD.compute(ContinuousDataSourceRDD.scala:68)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply$mcV$sp(ContinuousWriteRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:76)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.sql.execution.streaming.continuous.WriteToContinuousDataSourceExec.doExecute(WriteToContinuousDataSourceExec.scala:53)
    ... 21 more
Caused by: org.apache.spark.sql.execution.streaming.continuous.ContinuousTaskRetryException: Continuous execution does not support task retry
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousDataSourceRDD.compute(ContinuousDataSourceRDD.scala:68)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply$mcV$sp(ContinuousWriteRDD.scala:52)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD$$anonfun$compute$1.apply(ContinuousWriteRDD.scala:51)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
    at org.apache.spark.sql.execution.streaming.continuous.ContinuousWriteRDD.compute(ContinuousWriteRDD.scala:76)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-09-29 11:26:05 WARN  ContinuousExecution:87 - Failed to stop streaming source: KafkaSource[Subscribe[cloud_behavior]]. Resources may have leaked.
java.lang.IllegalStateException: This consumer has already been closed.
    at org.apache.kafka.clients.consumer.KafkaConsumer.ensureNotClosed(KafkaConsumer.java:1416)
    at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1427)
    at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1360)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader.org$apache$spark$sql$kafka010$KafkaOffsetReader$$stopConsumer(KafkaOffsetReader.scala:317)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$close$1.apply$mcV$sp(KafkaOffsetReader.scala:108)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$close$1.apply(KafkaOffsetReader.scala:108)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$close$1.apply(KafkaOffsetReader.scala:108)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:255)
    at org.apache.spark.sql.kafka010.KafkaOffsetReader.close(KafkaOffsetReader.scala:108)
    at org.apache.spark.sql.kafka010.KafkaContinuousReader.stop(KafkaContinuousReader.scala:118)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$stopSources$1.apply(StreamExecution.scala:373)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$stopSources$1.apply(StreamExecution.scala:371)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.sql.execution.streaming.StreamExecution.stopSources(StreamExecution.scala:371)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runStream$2.apply(StreamExecution.scala:319)
    at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:308)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
