Operating a Kafka Distributed Cluster

3.1 Client Command-Line Tools

3.1.1 kafka-topics.sh

1. What the script does: create a new topic, delete a topic, describe a topic's details, and alter (update) a topic.
2. Key parameters:
--alter                 Alter (modify) an existing topic.
--create                Create a new topic.
--delete                Delete a topic.
--describe              List details for the given topics.
--list                  List all available topics in the Kafka cluster.
--partitions            The number of partitions to use when creating or altering a topic.
--replication-factor    The number of replicas per partition when creating a topic.
--topic                 The topic name.
--zookeeper             The ZooKeeper ensemble the Kafka cluster registers with.
Creating topics
Requirement 1: create a topic named hadoop with 1 partition and a replication factor of 1.
Requirement 2: create a topic named spark with 2 partitions and a replication factor of 3.
Requirement 3: create a topic named flink with 3 partitions and a replication factor of 3.
In practice:
[root@NODE02 ~]# kafka-topics.sh --create --topic hadoop --zookeeper node01:2181 --partitions 1 --replication-factor 1
Created topic "hadoop".
[root@NODE02 ~]# kafka-topics.sh --create --topic spark --zookeeper node01:2181,node02:2181,node03:2181 --partitions 2 --replication-factor 3
Created topic "spark".
[root@NODE02 ~]# kafka-topics.sh --create --topic flink --zookeeper node01:2181,node02:2181,node03:2181 --partitions 3 --replication-factor 3
Created topic "flink".注意点:
[root@NODE02 ~]# kafka-topics.sh --create --topic storm --zookeeper node01:2181,node02:2181,node03:2181 --partitions 3 --replication-factor 4
Error while executing topic command : Replication factor: 4 larger than available brokers: 3.
Reason: replicas are stored across nodes. For safety, two replicas of the same partition are not allowed on one broker (if they were, a single disk failure could destroy several copies of the same data at once).
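Before picking a replication factor, you can count the live brokers registered in ZooKeeper. A minimal sketch, assuming the ZooKeeper CLI (zkCli.sh, or Kafka's bundled zookeeper-shell.sh) is on the PATH:
# List the ids of the brokers currently registered with ZooKeeper
zkCli.sh -server node01:2181 ls /brokers/ids
# Output of the form [101, 102, 103] means three live brokers,
# so --replication-factor can be at most 3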
Querying topics
Method 1: the --list parameter shows the names of all valid topics in the Kafka cluster.
Method 2: the --describe parameter shows the details of those topics (topic name, partition count, replica placement, and each partition's roles: leader and follower; at any moment, only the leader replica of a partition serves reads and writes).
In practice:
[root@NODE02 ~]# kafka-topics.sh --zookeeper node01:2181 --list
flink
hadoop
spark
[root@NODE02 ~]# kafka-topics.sh --zookeeper node01:2181 --describe
Topic:flink     PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: flink    Partition: 0    Leader: 101     Replicas: 101,102,103   Isr: 101,102,103
        Topic: flink    Partition: 1    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
        Topic: flink    Partition: 2    Leader: 103     Replicas: 103,101,102   Isr: 103,101,102
Topic:hadoop    PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: hadoop   Partition: 0    Leader: 102     Replicas: 102   Isr: 102
Topic:spark     PartitionCount:2        ReplicationFactor:3     Configs:
        Topic: spark    Partition: 0    Leader: 101     Replicas: 101,102,103   Isr: 101,102,103
        Topic: spark    Partition: 1    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
Field meanings:
PartitionCount: the number of partitions in the topic.
ReplicationFactor: the topic's replication factor, i.e., the total number of replicas (including the leader itself, the same convention as HDFS replica counts).
Partition: the partition number, starting at 0.
Leader: the broker.id of the broker currently serving as leader for this partition.
Replicas: the list of broker.ids that hold a replica of this partition.
Isr: the list of broker.ids whose replicas are currently in sync with the leader (the in-sync replica set).

Altering topics
1. The replication factor cannot be changed with --alter; attempting it fails. In practice:
[root@NODE02 ~]# kafka-topics.sh --alter  --zookeeper node02:2181   --topic hadoop --replication-factor 2
Option "[replication-factor]" can't be used with option"[alter]"2,可以修改分区数,实操效果如下:
[root@NODE02 ~]# kafka-topics.sh --alter  --zookeeper node02:2181   --topic hadoop --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
[root@NODE02 ~]# kafka-topics.sh   --zookeeper node02:2181   --topic hadoop --describe
Topic:hadoop    PartitionCount:2        ReplicationFactor:1     Configs:
        Topic: hadoop   Partition: 0    Leader: 102     Replicas: 102   Isr: 102
        Topic: hadoop   Partition: 1    Leader: 103     Replicas: 103   Isr: 103
Note:
① The partition count can only be increased, never decreased. In practice:
[root@NODE02 ~]# kafka-topics.sh --alter  --zookeeper node02:2181   --topic hadoop --partitions 1
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Error while executing topic command : The number of partitions for a topic can only be increased. Topic hadoop currently has 2 partitions, 1 would not be an increase.
[2019-11-12 11:29:04,668] ERROR org.apache.kafka.common.errors.InvalidPartitionsException: The number of partitions for a topic can only be increased. Topic hadoop currently has 2 partitions, 1 would not be an increase. (kafka.admin.TopicCommand$)
② The topic name itself cannot be changed; when altering a topic, the name only serves as the key that identifies which topic to modify.
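Since a topic can neither be renamed nor have its partition count reduced in place, the usual workaround is to delete it and create it again, which discards the topic's data. A minimal, destructive sketch for illustration only, reusing the names and addresses from the examples above:
# Drop the old topic, then recreate it with the desired layout
kafka-topics.sh --delete --topic hadoop --zookeeper node01:2181
kafka-topics.sh --create --topic hadoop --zookeeper node01:2181 --partitions 1 --replication-factor 1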
Deleting topics
Delete the topic named hadoop. In practice:
[root@NODE02 flink-0]# kafka-topics.sh --list --zookeeper node01:2181
flink
hadoop
spark
[root@NODE02 flink-0]# kafka-topics.sh --delete --topic hadoop --zookeeper node03:2181
Topic hadoop is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@NODE02 flink-0]# cd ..
[root@NODE02 kafka-logs]# kafka-topics.sh --list --zookeeper node01:2181
flink
spark
[root@NODE02 kafka-logs]# ll
total 20
-rw-r--r-- 1 root root   4 Nov 12 11:34 cleaner-offset-checkpoint
drwxr-xr-x 2 root root 141 Nov 12 10:51 flink-0
drwxr-xr-x 2 root root 141 Nov 12 10:51 flink-1
drwxr-xr-x 2 root root 141 Nov 12 10:51 flink-2
-rw-r--r-- 1 root root   4 Nov 12 11:34 log-start-offset-checkpoint
-rw-r--r-- 1 root root  56 Nov 12 10:29 meta.properties
-rw-r--r-- 1 root root  54 Nov 12 11:34 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root  54 Nov 12 11:35 replication-offset-checkpoint
drwxr-xr-x 2 root root 141 Nov 12 10:51 spark-0
drwxr-xr-x 2 root root 141 Nov 12 10:51 spark-1
Note:
① In this Kafka version (1.0.2), the parameter delete.topic.enable in server.properties defaults to true, so the deletion is physical, as the directory listing above shows. (In older versions such as 0.10.0.1, deletion really was only logical: the topic was merely marked for deletion.)
② Confirm through ZooKeeper that the topic's metadata was deleted as well:
[zk: node03(CONNECTED) 11] ls /brokers/topics
[flink, spark]
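If delete.topic.enable were set to false, the topic would only be marked for deletion; the marker is kept under /admin/delete_topics in ZooKeeper. A hedged way to check for pending deletions, assuming the ZooKeeper CLI (zkCli.sh, or Kafka's bundled zookeeper-shell.sh) is available:
# List topics currently marked for deletion but not yet removed
zkCli.sh -server node03:2181 ls /admin/delete_topics
# An empty list [] means no topic deletions are pending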

3.1.2 Publishing and Subscribing to Messages

kafka-console-producer.sh: producing messages

Parameter description

kafka-console-producer.sh --broker-list MyDis:9092 --topic hadoop
The parameters are:
--broker-list <String: broker-list>   REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2. Identifies the Kafka servers of the cluster.
--topic <String: topic>               REQUIRED: The topic id to produce messages to, i.e., which topic the messages belong to.
The remaining parameters can be left at their default values.
Notes:
① After the script is launched it blocks, running a process named ConsoleProducer.
② Each line typed at the console is one message; on Enter it is sent to the message queue (MQ) of the Kafka cluster for storage. The producer can also be driven non-interactively, as the sketch below shows.
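The producer reads messages from stdin, so it can be scripted. A minimal sketch, reusing the MyDis:9092 broker address and hadoop topic from above (messages.txt is a hypothetical input file, one message per line):
# Send a single message without entering interactive mode
echo "hello kafka" | kafka-console-producer.sh --broker-list MyDis:9092 --topic hadoop
# Send a whole file of messages
kafka-console-producer.sh --broker-list MyDis:9092 --topic hadoop < messages.txt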
kafka-console-consumer.sh: subscribing to messages
kafka-console-consumer.sh --bootstrap-server MyDis:9092 --topic hadoop --from-beginning
The parameters are:
--blacklist <String: blacklist>       Blacklist of topics to exclude from consumption. Use it when you are interested in almost all topics and want to exclude only a few; put those few names on the blacklist.
--whitelist <String: whitelist>       Whitelist of topics to include for consumption. Use it when you are interested in only a few topics; put those names on the whitelist.
--zookeeper <String: urls>            REQUIRED (only when using the old consumer): The connection string for the zookeeper connection in the form host:port. In old Kafka versions, consumption offsets are maintained through ZooKeeper. An offset records a subscriber's progress, i.e., the number of messages consumed so far.
--bootstrap-server <String: server>   REQUIRED (unless the old consumer is used): The server to connect to. In new Kafka versions, consumption offsets are maintained by the Kafka cluster itself, in an internal topic named __consumer_offsets.
--from-beginning                      If the consumer does not already have an established offset to consume from, start with the earliest message present in the log rather than the latest message. Without this parameter, only newly produced messages are received (so the subscriber has to be started before the producer).
Notes:
① After the script is launched it blocks, running a process named ConsoleConsumer.
② It reads the messages stored in the partitions of the given topic. a) With --from-beginning, it reads the data in every partition of the topic. b) Without --from-beginning, the subscriber receives no historical messages, only those produced after the process starts (a quick demo follows).
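A quick way to see the difference is to run two consumers side by side before producing anything new. A minimal sketch, assuming the spark topic and a broker at node01:9092 (addresses as in the whitelist example later in this section):
# Terminal 1: receives only messages produced from now on
kafka-console-consumer.sh --bootstrap-server node01:9092 --topic spark
# Terminal 2: first replays everything already in the log, then follows new messages
kafka-console-consumer.sh --bootstrap-server node01:9092 --topic spark --from-beginning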
③ With the --zookeeper parameter, the consumption offset is maintained through ZooKeeper, e.g.:
[zk: node03(CONNECTED) 44] get /consumers/console-consumer-37260/offsets/spark/1
2
cZxid = 0x10b0000020e
ctime = Tue Nov 12 14:16:03 CST 2019
mZxid = 0x10b0000020e
mtime = Tue Nov 12 14:16:03 CST 2019
pZxid = 0x10b0000020e
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 1
numChildren = 0
④ For offset maintenance, recent Kafka versions print a deprecation warning when ZooKeeper is used:
[root@NODE03 kafka-logs]# kafka-console-consumer.sh --topic spark  --zookeeper node01:2181
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
how do you do?
⑤ For offset maintenance, recent Kafka versions recommend letting the Kafka cluster itself keep the offsets. A topic named __consumer_offsets is created automatically for this purpose, by default with 50 partitions and one replica per partition (both can be customized in server.properties); a companion inspection tool is sketched after the listing:
[root@NODE03 ~]# kafka-topics.sh --describe --topic __consumer_offsets --zookeeper node01:2181
Topic:__consumer_offsets        PartitionCount:50       ReplicationFactor:1     Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
        Topic: __consumer_offsets       Partition: 0    Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 1    Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 2    Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 3    Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 4    Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 5    Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 6    Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 7    Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 8    Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 9    Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 10   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 11   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 12   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 13   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 14   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 15   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 16   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 17   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 18   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 19   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 20   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 21   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 22   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 23   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 24   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 25   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 26   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 27   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 28   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 29   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 30   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 31   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 32   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 33   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 34   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 35   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 36   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 37   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 38   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 39   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 40   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 41   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 42   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 43   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 44   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 45   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 46   Leader: 103     Replicas: 103   Isr: 103
        Topic: __consumer_offsets       Partition: 47   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 48   Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 49   Leader: 103     Replicas: 103   Isr: 103
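Rather than reading __consumer_offsets directly, the offsets stored there can be inspected with the kafka-consumer-groups.sh tool that ships with Kafka. A minimal sketch (the group name console-consumer-37260 is only an example, borrowed from the ZooKeeper path in note ③; with --bootstrap-server, only Kafka-maintained groups are shown):
# List the consumer groups whose offsets live in __consumer_offsets
kafka-consumer-groups.sh --bootstrap-server node01:9092 --list
# Show per-partition current offset, log-end offset and lag for one group
kafka-consumer-groups.sh --bootstrap-server node01:9092 --describe --group console-consumer-37260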
⑥ On offset maintenance: a) In real projects offsets generally have to be maintained manually; the goal is that each offset is owned exclusively by one process of a given type. b) Many stores can hold the offsets: ZooKeeper, Redis (used most often), HBase, or an RDBMS (MySQL, Oracle, etc.).
⑦ Whitelist:
Case 1: offsets maintained by Kafka:
[root@NODE02 bin]# kafka-console-consumer.sh --whitelist 'storm|spark'  --bootstrap-server node02:9092,node01:9092,node03:9092 --from-beginning
呵呵 大大
storm storm
ok ok ok
最近可好?
how do you do?
hehe da da
今天参加了天猫双十一晚会,很happy!
are you ok?
storm 哦
hehe
are you ok?
好不好啊?
yes, I do.
和 呵呵哒
新的一天哦
hehe da da
Case 2: offsets maintained by ZooKeeper (deprecated in recent versions):
[root@NODE02 bin]# kafka-console-consumer.sh --whitelist storm,spark  --zookeeper node01:2181  --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
are you ok?
好不好啊?
yes, I do.
和 呵呵哒
新的一天哦
hehe da da
ok ok ok
呵呵 大大
storm storm
最近可好?
how do you do?
hehe da da
今天参加了天猫双十一晚会,很happy!
are you ok?
storm 哦
hehe
⑧ Blacklist:
Case 1: offsets maintained by Kafka:
[root@NODE02 bin]# kafka-console-consumer.sh --blacklist storm  --bootstrap-server node02:9092,node01:9092,node03:9092 --from-beginning
Exactly one of whitelist/topic is required.
Note: in this mode the --blacklist parameter cannot be used on its own; it must be combined with --whitelist or --topic, and combining them is clumsy. In general, do not bother with --blacklist.
Case 2: offsets maintained by ZooKeeper (not recommended):
[root@NODE02 bin]# kafka-console-consumer.sh --blacklist storm,spark  --zookeeper node01:2181 --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
flink 哦
On offset maintenance:
Consumer groups have not been introduced yet, so here every consumer belongs to a single default group. In practice, when multiple consumers consume one topic, testing shows that consumers running at the same time all receive the data. But when they do not consume at the same time, data can be missed: given messages 1-20, if consumer 1 pulls messages 3-5, the offset advances to 5, and the other consumers can then only start from the beginning or from offset 5, so part of the range is never delivered to them. The solution is for each consumer to maintain its own offset; in real projects this is done manually, so that the offset is owned exclusively by one process of a given type (a sketch follows the list below).
Many stores can be used to maintain offsets:
Redis: the most commonly used
ZooKeeper
HBase
MySQL
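As an illustration of the Redis option only, an offset can be stored as a plain key/value pair keyed by topic and partition. A hypothetical sketch using redis-cli; the key layout kafka:offset:<topic>:<partition> is an invented convention, not a Kafka or Redis standard:
# Persist the last processed offset for partition 1 of topic spark
redis-cli SET kafka:offset:spark:1 42
# On restart, read it back and resume consuming from offset 42
redis-cli GET kafka:offset:spark:1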

Whitelist and blacklist issues

The whitelist works as expected; the blacklist must be used together with the whitelist or with --topic (the ZooKeeper-based consumer can use the blacklist on its own, but it is deprecated in recent versions, so only the cluster-maintained case is considered here).

(Questions: does that make the blacklist useless? And, as an experiment, can --topic take several topics at once?)

1. kafka-console-producer.sh --broker-list MyDis:9092 --topic 'hadoop|spark'
WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 15 : {hadoop|spark=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
2. kafka-console-producer.sh --broker-list MyDis:9092 --topic hadoop,spark
WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 22 : {hadoop,spark=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)
3. kafka-console-consumer.sh --bootstrap-server MyDis:9092 --topic 'spark|hadoop' --from-beginning
WARN [Consumer clientId=consumer-1, groupId=console-consumer-81455] The following subscribed topics are not assigned to any members: [spark|hadoop]  (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
4. kafka-console-consumer.sh --bootstrap-server MyDis:9092 --topic spark,hadoop --from-beginning
WARN [Consumer clientId=consumer-1, groupId=console-consumer-60372] The following subscribed topics are not assigned to any members: [spark,hadoop]  (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
5. kafka-console-consumer.sh --bootstrap-server MyDis:9092 --topic spark --topic hadoop --blacklist spark --from-beginning
Exception in thread "main" joptsimple.MultipleArgumentsForOptionException: Found multiple arguments for option topic, but you asked for only one
        at joptsimple.OptionSet.valueOf(OptionSet.java:179)
        at kafka.tools.ConsoleConsumer$ConsumerConfig.<init>(ConsoleConsumer.scala:404)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:52)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Conclusion: after --topic, the value 'hadoop|spark' or 'hadoop,spark' is treated as one literal topic name, hence the warnings; --topic also cannot be repeated. The supported alternative is shown in the sketch below.
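With the new consumer, the supported way to subscribe to several topics from the console is a regular expression passed to --whitelist, as in the whitelist examples earlier. A minimal sketch, reusing the MyDis:9092 address:
# Consume both spark and hadoop through one regex subscription
kafka-console-consumer.sh --bootstrap-server MyDis:9092 --whitelist 'spark|hadoop' --from-beginning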

When the whitelist and blacklist are used together, does the blacklist filter out topics the whitelist matched? (Result: it does not; the whitelist's scope prevails.)

kafka-console-consumer.sh --bootstrap-server MyDis:9092 --whitelist '.*' --blacklist spark --from-beginning
Result: spark spark
Not filtered: messages from the spark topic still come through, i.e., the blacklist did not restrict the whitelist.

Explanation from the official documentation:

Sometimes it is easier to say what it is that you don't want. Instead of using --whitelist to say what you want to mirror you can use --blacklist to say what to exclude. This also takes a regular expression argument. However, --blacklist is not supported when the new consumer has been enabled (i.e. when bootstrap.servers has been defined in the consumer configuration).
