Preface: all of the testing here was done on a single CentOS 7.2 64-bit machine.

Elasticsearch should not be installed and run under the root user; otherwise startup fails with:

Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
    at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.

If you insist on installing it under root, starting it with the following command works around the check:

./elasticsearch-2.2.0/bin/elasticsearch -d  -Des.insecure.allow.root=true
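A cleaner alternative is to run Elasticsearch under a dedicated regular user instead. A minimal sketch, assuming the package is unpacked under /opt and the account is called elsearch (both names are just examples):

[root@h153 ~]# useradd elsearch
[root@h153 ~]# chown -R elsearch:elsearch /opt/elasticsearch-2.2.0
[root@h153 ~]# su - elsearch -c "/opt/elasticsearch-2.2.0/bin/elasticsearch -d"

After that the daemon starts without tripping the root check.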

Part 1: Integrating Flume with Elasticsearch

Flume es.conf configuration:
[hadoop@h153 ~]$ cat apache-flume-1.6.0-cdh5.5.2-bin/conf/es.conf

a1.sources = s1
a1.sinks = k1
a1.channels = c1

a1.sources.s1.type = com.urey.flume.source.taildir.TaildirSource
a1.sources.s1.positionFile = /home/hadoop/hui/taildir_position.json
a1.sources.s1.filegroups = f1
a1.sources.s1.filegroups.f1 = /home/hadoop/q1/test.*.log
a1.sources.s1.batchSize = 100
a1.sources.s1.backoffSleepIncrement = 1000
a1.sources.s1.maxBackoffSleep = 5000
a1.sources.s1.channels = c1

a1.sinks.k1.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
a1.sinks.k1.batchSize=10000
a1.sinks.k1.hostNames=192.168.205.153:9300
a1.sinks.k1.indexType = flume_kafka
a1.sinks.k1.indexName=logstash
a1.sinks.k1.clusterName = elasticsearch
a1.sinks.k1.serializer=org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer
# SimpleIndexNameBuilder keeps the index name as-is, without appending the date
a1.sinks.k1.indexNameBuilder = org.apache.flume.sink.elasticsearch.SimpleIndexNameBuilder

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
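Before starting the agent it is worth a quick check that Elasticsearch is actually reachable on the host configured in hostNames above; a simple probe against my single node:

[hadoop@h153 ~]$ curl http://192.168.205.153:9200/

It should return a small JSON document with the cluster name and version.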

Start Flume:
[hadoop@h153 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng agent -c conf/ -f conf/es.conf -n a1 -Dflume.root.logger=INFO,console

Note: I initially used elasticsearch-2.2.0, which failed with:

2017-09-27 03:26:10,849 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3a7f44c9 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError: org.elasticsearch.common.transport.InetSocketTransportAddress.<init>(Ljava/lang/String;I)V
    at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.configureHostnames(ElasticSearchTransportClient.java:143)
    at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.<init>(ElasticSearchTransportClient.java:77)
    at org.apache.flume.sink.elasticsearch.client.ElasticSearchClientFactory.getClient(ElasticSearchClientFactory.java:48)
    at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:357)
    at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
    at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
    at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Cause: the Elasticsearch version is too new, so the Elasticsearch jars bundled with Flume are not compatible with it.
Solution: switch Elasticsearch to version 1.7.1.

Generate some test data:
[hadoop@h153 q1]$ echo "2015-10-30 16:05:00| 967837DE00026C8DB2E|127.0.0.1" >> test.1.log
[hadoop@h153 q1]$ echo "awfeeeeeeeeeeeeeeeeeeeeeeeee" >> test.1.log

References:
http://blog.csdn.net/u010022051/article/details/50515725
Worth trying when there is time:
http://blog.csdn.net/u013673976/article/details/74319879

Part 2: Importing Kafka data into Elasticsearch (elasticsearch-2.2.0 is used here)
I recently needed to build a log monitoring platform. Given the characteristics of the system it boils down to one sentence: the data in Kafka has to be imported into Elasticsearch. So how do you get Kafka data into Elasticsearch? Roughly speaking there are the following options:
(1) Kafka -> logstash -> elasticsearch -> kibana (simple; only one agent process needs to be started)
(2) Kafka -> kafka-connect-elasticsearch -> elasticsearch -> kibana (tightly tied to Confluent, somewhat complex)
(3) Kafka -> elasticsearch-river-kafka-1.2.1-plugin -> elasticsearch -> kibana (the code has not been updated for a long time, so ongoing support is poor)
Installation and configuration of the elasticsearch-river-kafka-1.2.1-plugin is covered at: http://hqiang.me/2015/08/%E5%B0%86kafka%E7%9A%84%E6%95%B0%E6%8D%AE%E5%AF%BC%E5%85%A5%E8%87%B3elasticsearch/

Based on the above, the project settled on option (1) for getting Kafka data into Elasticsearch.

1. Topology
The project topology is as follows:

The overall message flow is: logs/messages => Flume => kafka => logstash => elasticsearch => kibana

2. Flume log collection
[hadoop@h153 ~]$ cat apache-flume-1.6.0-cdh5.5.2-bin/conf/kafka.conf

a1.sources = s1
a1.sinks = k1
a1.channels = c1

a1.sources.s1.type = com.urey.flume.source.taildir.TaildirSource
a1.sources.s1.positionFile = /home/hadoop/hui/taildir_position.json
a1.sources.s1.filegroups = f1
a1.sources.s1.filegroups.f1 = /home/hadoop/q1/test.*.log
a1.sources.s1.batchSize = 100
a1.sources.s1.backoffSleepIncrement = 1000
a1.sources.s1.maxBackoffSleep = 5000
a1.sources.s1.channels = c1

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = h153:9092
a1.sinks.k1.topic = hui
a1.sinks.k1.channel = memoryChannel
a1.sinks.k1.batch-size = 100
a1.sinks.k1.requiredAcks = -1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
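The hui topic referenced by the sink should exist on the broker before Flume starts writing (unless automatic topic creation is enabled). With the kafka_2.10-0.8.2.0 scripts used later in this post, creating and listing it looks roughly like this:

[hadoop@h153 kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --create --zookeeper h153:2181 --replication-factor 1 --partitions 1 --topic hui
[hadoop@h153 kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --list --zookeeper h153:2181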

3. Installing and configuring logstash
A. Download the logstash package: https://download.elastic.co/logstash/logstash/logstash-2.2.2.tar.gz
Tip: if you want a different logstash version, there is no need to hunt around online; just change the version number in the link above. For example, logstash-2.4.0 is at https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz. Elasticsearch works the same way; a handy URL is https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-2.2.2.tar.gz (I later found that this address only serves the 2.x series; another one, https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.0.0.tar.gz, serves the 5.x and 6.x series. Neither serves a 3.x or 4.x series, which is expected, since Elasticsearch skipped versions 3 and 4 and jumped straight from 2.x to 5.x.)
B. Create a new kafka-logstash-es.conf and place it in the logstash/conf directory (my extracted package did not even contain a conf directory, so just create one).
C. Configure kafka-logstash-es.conf as follows:

The logstash configuration syntax is:
input {
  ...  # read data; logstash ships with a large number of input plugins that can read from file, redis, syslog, and so on
}

filter {
  ...  # extract the fields you care about from unstructured logs here; grok, mutate, etc. are the common choices
}

output {
  ...  # write the processed data out to file, elasticsearch, and so on
}
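The kafka-logstash-es.conf used below has no filter block because the events are forwarded as-is. Purely for illustration, a hypothetical grok filter that would split the pipe-delimited test lines from Part 1 (timestamp|id|ip) into named fields could look like this (the field names are made up):

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_time}\|\s*%{DATA:session_id}\|%{IP:client_ip}" }
  }
}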

Example:

input {
  kafka {
    zk_connect => "h153:2181"
    group_id => "elasticconsumer"    # any name you like
    topic_id => "hui"                # the Kafka topic whose data is to be imported
    reset_beginning => false
    consumer_threads => 2
    decorate_events => true
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => "192.168.205.153"
    index => "traceid"               # has no relation to any JSON field in Kafka; note that the index name must be
                                     # lowercase, or it can be written as index => "traceid-%{+YYYY-MM-dd}"
  }
  stdout {
    codec => rubydebug               # could also be json
  }
}

4. Start Flume, Kafka, and Elasticsearch, and generate test data
[hadoop@h153 q1]$ echo "hello world" >> test.1.log

5. Run logstash with: nohup bin/logstash -f /home/hadoop/logstash-2.2.2/conf/kafka-logstash-es.conf & (runs it in the background)
For easier observation I ran it in the foreground here:
[hadoop@h153 logstash-2.2.2]$ bin/logstash -f kafka-logstash-es.conf

log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Settings: Default pipeline workers: 1
Logstash startup completed
JSON parse failure. Falling back to plain-text {:error=>#<LogStash::Json::ParserError: Unrecognized token 'hello': was expecting ('true', 'false' or 'null')at [Source: [B@10e07777; line: 1, column: 7]>, :data=>"hello world", :level=>:error}
{"message" => "hello world","tags" => [[0] "_jsonparsefailure"],"@version" => "1","@timestamp" => "2017-10-11T16:04:44.212Z","kafka" => {"msg_size" => 11,"topic" => "hui","consumer_group" => "elasticconsumer","partition" => 0,"key" => nil}
}
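To confirm the event reached Elasticsearch and not just stdout, the traceid index from the output section can be queried directly; a sketch:

[hadoop@h153 ~]$ curl 'http://192.168.205.153:9200/traceid/_search?pretty'

The hello world message should appear in one of the returned documents.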

Note: at first, to test whether logstash could push data into Elasticsearch at all, I ran this minimal test configuration directly:
input{
    stdin{}
}
output{
    elasticsearch{
        hosts => "192.168.205.153"
     }
   stdout{codec => rubydebug}
}
It failed, however (with logstash-2.2.2 and elasticsearch-2.2.2 or elasticsearch-2.2.0):

The error reported is: SSLConnectionSocketFactory not found in packages org.apache.http.client.methods, org.apache.http.client.entity, org.apache.http.client.config, org.apache.http.config, org.apache.http.conn.socket, org.apache.http.impl, org.apache.http.impl.client, org.apache.http.impl.conn, org.apache.http.impl.auth, org.apache.http.entity, org.apache.http.message, org.apache.http.params, org.apache.http.protocol, org.apache.http.auth, java.util.concurrent, org.apache.http.client.protocol, org.apache.http.conn.ssl, java.security.cert, java.security.spec, java.security, org.apache.http.client.utils; last error: cannot load Java class org.apache.http.client.utils.SSLConnectionSocketFactory

With logstash-2.4.0 and elasticsearch-2.3.3 the error is:

Settings: Default pipeline workers: 1
Pipeline aborted due to error {:exception=>"NameError", :backtrace=>["file:/home/hadoop/logstash-2.4.0/vendor/jruby/lib/jruby.jar!/jruby/java/core_ext/module.rb:45:in `const_missing'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/client.rb:587:in `ssl_socket_factory_from_options'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/client.rb:394:in `pool_builder'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/client.rb:402:in `pool'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/client.rb:208:in `initialize'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/http/manticore.rb:58:in `build_client'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/http/manticore.rb:49:in `initialize'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/client.rb:118:in `initialize'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport.rb:26:in `new'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:129:in `build_client'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in `initialize'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:44:in `build'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch.rb:134:in `build_client'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/common.rb:14:in `register'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:75:in `register'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/home/hadoop/logstash-2.4.0/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
stopping pipeline {:id=>"main"}

(Later, on the same VM, I restored a snapshot and reinstalled everything, and the error went away; the environment at the time was probably broken in some way, and I still do not know the exact cause. I noticed I had a single-node hbase-1.0.0-cdh5.5.2 installed then; removing that HBase entirely, or replacing it with the Apache hbase-1.0.0-bin.tar.gz, made the error disappear, and simply unpacking hbase-1.0.0-cdh5.5.2.tar.gz again, without doing anything else, brought the error back when running bin/logstash -f kafka-logstash-es.conf. But after yet another snapshot restore and reinstall I found there was no direct relationship with hbase-1.0.0-cdh5.5.2.tar.gz either, so I have no idea what was wrong with that environment.)

Appendix: testing whether Kafka and logstash can exchange data properly.
Kafka into logstash:
input{
    kafka{
        codec => "plain"
        group_id => "logstash1"
        auto_offset_reset => "smallest"
        reset_beginning => true 
        topic_id => "hui"  
        zk_connect => "192.168.205.153:2181"
    }
}
output {
     stdout{
         codec => rubydebug
     }
}
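Something has to produce messages on the hui topic for this test to show anything; the stock console producer is the quickest way (a sketch, run from the Kafka install directory):

[hadoop@h153 kafka_2.10-0.8.2.0]$ bin/kafka-console-producer.sh --broker-list 192.168.205.153:9092 --topic hui

Each line typed into the producer should be printed by logstash in rubydebug form.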
logstash into Kafka:
input  {  
     stdin{}  
}  
output {  
     kafka{  
         topic_id => "hui"  
         bootstrap_servers => "192.168.205.153:9092"  
         batch_size => 5  
     }  
     stdout{  
         codec => json
     }  
}
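Conversely, whatever is typed on logstash's stdin should end up on the topic; the console consumer can confirm that (again just a sketch):

[hadoop@h153 kafka_2.10-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper 192.168.205.153:2181 --topic hui --from-beginning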

References:
http://www.cnblogs.com/moonandstar08/p/6556899.html
https://www.slahser.com/2016/04/21/日志监控平台搭建-关于Flume-Kafka-ELK/
http://www.jayveehe.com/2017/02/01/elk-stack/
http://wdxtub.com/2016/11/19/babel-log-analysis-platform-1/

There is one more simple approach: use the Kafka consumer API and the Elasticsearch indexing API directly. Write a Kafka consumer that consumes the relevant business data, then store each consumed message into Elasticsearch through its indexing API under the appropriate index. Without further ado, here is the code (elasticsearch-2.2.0 and kafka_2.10-0.8.2.0):

import java.io.IOException;
import java.net.InetAddress;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class MyConsumer {

    public static void main(String[] args) throws IOException {
        String topic = "hui";
        // Connect to Kafka and ask for a single stream on the "hui" topic
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(createConsumerConfig());
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        int i = 0;
        // Block on the stream and index every message it delivers
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> mam = it.next();
            String hui = new String(mam.message());
            i++;
            String a = String.valueOf(i);
            es(hui, a);
        }
    }

    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        // The group "hehe" will show up under the /consumers node in ZooKeeper
        props.put("group.id", "hehe");
        props.put("zookeeper.connect", "h153:2181");
        props.put("metadata.broker.list", "h153:9092");
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "smallest");
        return new ConsumerConfig(props);
    }

    public static void es(String messages, String i) throws IOException {
        Settings settings = Settings.settingsBuilder().put("cluster.name", "my-application").build();
        Client client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("h153"), 9300));
        // Build a small JSON document around the consumed message
        XContentBuilder builder = XContentFactory.jsonBuilder()
                .startObject()
                    .field("user", "kity")
                    .field("postDate", new Date())
                    .field("message", messages)
                .endObject();
        String json = builder.string();
        // Index it into index "hui", type "emp", using the running counter as the document id
        IndexResponse response = client.prepareIndex("hui", "emp", i).setSource(json).execute().actionGet();
        System.out.println("created " + i);
    }
}
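Since the program uses the running counter i as the document id, the first message it consumes can be fetched back by id once the consumer has processed something; a quick check against the single node (assuming the index and type names from the code above):

[hadoop@h153 ~]$ curl 'http://h153:9200/hui/emp/1?pretty'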

References:
http://www.cnblogs.com/smartloli/p/6978645.html (I could not work out how the author actually runs the project, and after importing the code into MyEclipse it kept throwing errors, apparently due to missing jars)
http://www.cnblogs.com/ygwx/p/5337835.html (not sure why that author transfers the data with Avro, and the content of websense.avsc is not given, so the program cannot be run)
