Scenario: errors encountered while running Flume.

Problem: there were four errors of this kind in total.

    20/06/20 09:09:37 ERROR hdfs.HDFSEventSink: process failed
    java.lang.IllegalArgumentException: java.net.UnknownHostException: Project
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:445)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:350)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:284)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:255)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:247)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:727)
        at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:724)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.net.UnknownHostException: Project
        ... 20 more
    20/06/20 09:09:37 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
    org.apache.flume.EventDeliveryException: java.lang.IllegalArgumentException: java.net.UnknownHostException: Project
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:464)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: Project
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:445)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:350)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:284)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:255)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:247)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:727)
        at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:724)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
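
In this first trace, Hadoop is trying to resolve the string "Project" as the NameNode hostname (in SecurityUtil.buildTokenService), which suggests the hdfs.path URI in the sink configuration had no valid host:port authority, so part of the path was parsed as the host. The original configuration is not shown in the post; the sketch below is a hypothetical illustration, assuming agent/sink names a1 and k1, the common default NameNode RPC port 8020, and a %Y%m%d date escape for the daily directory (verify all of these against your own setup):

    # Hypothetical broken form: "Project" ends up being treated as the NameNode host
    # a1.sinks.k1.hdfs.path = hdfs://Project/BDP/data/ineer/RAW/01/test_flume/%Y%m%d
    # Fixed form: spell out a resolvable NameNode host and its RPC port before the path
    a1.sinks.k1.hdfs.path = hdfs://bdpcdh6301:8020/Projecat/BDP/data/ineer/RAW/01/test_flume/%Y%m%d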

Another error:

    20/06/20 09:20:55 WARN hdfs.BucketWriter: failed to rename() file (hdfs://bdpcdh6301:9870/Projecat/BDP/data/ineer/RAW/01/test_flume/20200620/logs-.1592616051200.tmp). Exception follows.
    java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "bdpcdh6303/172.18.0.93"; destination host is: "bdpcdh6301":9870;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:808)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1503)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1624)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1495)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1492)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1507)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668)
        at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:680)
        at org.apache.flume.sink.hdfs.BucketWriter$7.call(BucketWriter.java:677)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:727)
        at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:724)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1818)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1166)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062)
    20/06/20 09:23:55 WARN hdfs.BucketWriter: Renaming file: hdfs://bdpcdh6301:9870/Projecat/BDP/data/ineer/RAW/01/test_flume/20200620/logs-.1592616051200.tmp failed. Will retry again in 180 seconds.
    java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6ff77997 rejected from java.util.concurrent.ThreadPoolExecutor@61308a19[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 3]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
        at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:721)
        at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:677)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1600(BucketWriter.java:60)
        at org.apache.flume.sink.hdfs.BucketWriter$ScheduledRenameCallable.call(BucketWriter.java:382)
        at org.apache.flume.sink.hdfs.BucketWriter$ScheduledRenameCallable.call(BucketWriter.java:367)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
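
The telltale detail in this trace is the destination port: in Hadoop 3.x, 9870 is the NameNode's HTTP web UI port, not its RPC port, and pointing an HDFS client at the HTTP port is a classic way to get "RPC response exceeds maximum data length". A hedged sketch of the fix, where the host and port 9870 in the wrong line come from the log above, and the corrected line assumes the common default RPC port 8020 plus hypothetical agent/sink names a1/k1:

    # Wrong: 9870 is the NameNode web UI (HTTP) port
    # a1.sinks.k1.hdfs.path = hdfs://bdpcdh6301:9870/Projecat/BDP/data/ineer/RAW/01/test_flume/%Y%m%d
    # Right (assuming the default RPC port; check fs.defaultFS on your cluster)
    a1.sinks.k1.hdfs.path = hdfs://bdpcdh6301:8020/Projecat/BDP/data/ineer/RAW/01/test_flume/%Y%m%d

The RejectedExecutionException that follows appears to be only a knock-on effect: the sink's rename retry fired after its thread pool had already been shut down, and it stops once the rename itself stops failing.
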
Another error:
    20/06/20 09:29:24 ERROR source.NetcatSource: Unable to bind to socket. Exception follows.
    java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)
        at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
        at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    20/06/20 09:29:24 INFO source.NetcatSource: Source stopping
    20/06/20 09:29:24 ERROR lifecycle.LifecycleSupervisor: Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:STOP} } - Exception follows.
    org.apache.flume.FlumeException: java.net.BindException: Cannot assign requested address
        at org.apache.flume.source.NetcatSource.start(NetcatSource.java:171)
        at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
        at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)
        ... 9 more
    20/06/20 09:29:27 INFO source.NetcatSource: Source starting
    20/06/20 09:29:27 ERROR source.NetcatSource: Unable to bind to socket. Exception follows.
    java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)
        at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
        at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
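
"Cannot assign requested address" from the netcat source means the bind address configured for the source is not an IP that exists on the machine running the agent (for example the IP of another node or a public address). A minimal sketch of a working source block, assuming agent name a1 (the source name r1 does appear in the log above); the port 44444 is illustrative:

    # Bind to an address that belongs to this host: its own IP, 127.0.0.1, or 0.0.0.0 for all interfaces
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = 0.0.0.0
    a1.sources.r1.port = 44444
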
Another error:
    20/06/20 09:45:13 WARN hdfs.HDFSEventSink: HDFS IO error
    java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; Host Details : local host is: "bdpcdh6303/172.18.0.93"; destination host is: "47.102.155.106":9870;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:808)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1503)
        at org.apache.hadoop.ipc.Client.call(Client.java:1445)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy12.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:349)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy13.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:276)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1176)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1155)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1093)
        at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:463)
        at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:460)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:474)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:401)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1103)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1083)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:972)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:960)
        at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:80)
        at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:107)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:257)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:247)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:727)
        at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:724)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1818)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1166)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1062)
    20/06/20 09:45:17 INFO hdfs.BucketWriter: Creating hdfs://47.102.155.106:9870/Projecat/BDP/data/ineer/RAW/01/test_flume/20200620/logs-.1592617512489.tmp
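
This last failure is the same port problem as the earlier rename failure, only with the NameNode addressed by its public IP: the client is still talking to port 9870, the web UI. If you are unsure which port the NameNode actually serves RPC on, you can read it from the cluster configuration, for example:

    hdfs getconf -confKey fs.defaultFS     # e.g. hdfs://bdpcdh6301:8020
    hdfs getconf -nnRpcAddresses           # lists the NameNode RPC address(es)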

Problem analysis: after a lot of searching online and trial and error, all four errors came down to the agent configuration — the bind address of the netcat source and the host/port in the HDFS sink's hdfs.path.

Solution: first, make sure the source's bind IP is an address of the local machine and that its port is set correctly. Then make sure the HDFS address and port in the sink's hdfs.path are correct: the host must resolve to the NameNode, and the port must be the NameNode's RPC port rather than the 9870 web UI port. A full config sketch follows below.
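
Putting it together, here is a hedged end-to-end sketch of an agent configuration that avoids all four errors. The component names (a1, r1, c1, k1), the netcat port 44444, and the NameNode RPC port 8020 are assumptions, not values from the original setup; substitute your own and verify the RPC port with fs.defaultFS.

    # netcat-hdfs.conf (hypothetical file name) -- component names are illustrative
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    # Netcat source: bind only to an address of the local machine
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = 0.0.0.0
    a1.sources.r1.port = 44444

    # Memory channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # HDFS sink: full hdfs:// URI with the NameNode host and its RPC port (not the 9870 web UI port)
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://bdpcdh6301:8020/Projecat/BDP/data/ineer/RAW/01/test_flume/%Y%m%d
    a1.sinks.k1.hdfs.filePrefix = logs-
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.useLocalTimeStamp = true

    # Wiring
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1

Start the agent as usual, e.g. flume-ng agent -n a1 -c conf -f netcat-hdfs.conf, and test the source by sending a line with nc to the agent host on port 44444.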
