接上篇第4章的4.2.4:Kafka(上):Kafka消息队列、Kafka架构、安装部署Kafka集群、命令行操作、工作流程分析、生产过程分析、Broker保存消息、消费过程、低级高级API、Kafka API实战、新旧API

文章目录

4.3 Kafka消费者Java API

4.3.1 高级API

4.3.2 低级API

第5章 Kafka producer拦截器(interceptor)

5.1 拦截器原理

5.2 拦截器案例

第6章 Kafka Streams

6.1 概述

6.1.1 Kafka Streams

6.1.2 Kafka Streams特点

6.1.3 为什么要有Kafka Stream

6.2 Kafka Stream数据清洗案例

第7章 扩展

7.1 Kafka配置信息

7.1.1 Broker配置信息

7.1.2 Producer配置信息

7.1.3 Consumer配置信息

7.2 Flume对接Kafka

7.2.1 Flume拦截器代码

7.2.2 Flume配置文件

4.3 Kafka消费者Java API

4.3.1 高级API

0)在控制台创建发送者

[atguigu@hadoop104 kafka]$ bin/kafka-console-producer.sh \

--broker-list hadoop102:9092 --topic first

>hello world

1)创建消费者(过时API)

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class CustomConsumer {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", "hadoop102:2181");
        properties.put("group.id", "g1");
        properties.put("zookeeper.session.timeout.ms", "500");
        properties.put("zookeeper.sync.time.ms", "250");
        properties.put("auto.commit.interval.ms", "1000");

        // 创建消费者连接器
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));

        HashMap<String, Integer> topicCount = new HashMap<>();
        topicCount.put("first", 1);

        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCount);
        KafkaStream<byte[], byte[]> stream = consumerMap.get("first").get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();

        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}

2)官方提供案例(自动维护消费情况)(新API)

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CustomNewConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // 定义kafka服务的地址,不需要将所有broker指定上
        props.put("bootstrap.servers", "hadoop102:9092");
        // 指定consumer group
        props.put("group.id", "test");
        // 是否自动提交offset
        props.put("enable.auto.commit", "true");
        // 自动提交offset的时间间隔
        props.put("auto.commit.interval.ms", "1000");
        // key的反序列化类
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // value的反序列化类
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // 定义consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // 消费者订阅的topic,可同时订阅多个
        consumer.subscribe(Arrays.asList("first", "second", "third"));

        while (true) {
            // 读取数据,读取超时时间为100ms
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.println("topic:" + record.topic() + "--" +
                        "partition:" + record.partition() + "--" +
                        "offset:" + record.offset() + "--" +
                        "value:" + record.value());
        }
    }
}
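
上面的官方案例由Kafka自动提交offset(enable.auto.commit=true)。作为对照,下面给出一个手动同步提交offset的写法示意(非本文原有案例,类名 ManualCommitConsumer 为示例命名,主题沿用上文的first,仅供参考):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092");
        props.put("group.id", "test");
        // 关闭自动提交,改为手动提交offset
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("first"));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset:" + record.offset() + "--value:" + record.value());
            }
            // 处理完本批消息后再同步提交offset,缩小自动提交可能带来的重复消费窗口
            consumer.commitSync();
        }
    }
}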

代码笔记:

MyConsumer类
package kafkaAPI.consumer;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

/**
 * @author cherry
 * @create 2019-09-01-10:41
 */
public class MyConsumer {

    public static void main(String[] args) {
        // 连接Kafka集群
        Properties props = new Properties();
        // 定义kafka服务的地址,不需要将所有broker指定上
        props.put("bootstrap.servers", "hadoop102:9092");
        // 指定consumer group
        props.put("group.id", "test");
        // 是否自动提交offset
        props.put("enable.auto.commit", "true");
        // 自动提交offset的时间间隔
        props.put("auto.commit.interval.ms", "1000");
        // key的反序列化类
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // value的反序列化类
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // 定义consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // 消费者订阅的topic,可同时订阅多个
        consumer.subscribe(Arrays.asList("first", "second", "third"));

        while (true) {
            // 读取数据,读取超时时间为100ms
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.println("topic:" + record.topic() + "--" +
                        "partition:" + record.partition() + "--" +
                        "offset:" + record.offset() + "--" +
                        "value:" + record.value());
        }
    }
}
MySimpleConsumer类
package kafkaAPI.consumer;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.*;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.MessageAndOffset;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * @author cherry
 * @create 2019-09-01-11:23
 */
public class MySimpleConsumer {

    // 指定主题、分区及偏移量消费数据
    public static void main(String[] args) {
        ArrayList<String> brokers = new ArrayList<>();
        brokers.add("192.168.1.102");
        brokers.add("192.168.1.103");
        brokers.add("192.168.1.104");
        // 端口号
        int port = 9092;
        // 主题
        String topic = "first";
        // 分区号
        int partition = 0;
        // 偏移量
        long offset = 15;

        MySimpleConsumer mySimpleConsumer = new MySimpleConsumer();
        mySimpleConsumer.getData(brokers, port, topic, partition, (int) offset);
    }

    /**
     * 获取分区leader
     */
    public PartitionMetadata getLeader(List<String> brokers, int port, String topic, int partition) {
        SimpleConsumer consumer;
        for (String broker : brokers) {
            consumer = new SimpleConsumer(broker, port, 1000, 1024 * 4, "client");
            TopicMetadataRequest topicMetadataRequest = new TopicMetadataRequest(Collections.singletonList(topic));
            // 获取topic的元数据信息
            TopicMetadataResponse topicMetadataResponse = consumer.send(topicMetadataRequest);
            List<TopicMetadata> topicMetadata = topicMetadataResponse.topicsMetadata();
            for (TopicMetadata topicMetadatum : topicMetadata) {
                // 获取一个topic中所有分区的元数据信息
                List<PartitionMetadata> partitionMetadata = topicMetadatum.partitionsMetadata();
                for (PartitionMetadata partitionMetadatum : partitionMetadata) {
                    if (partitionMetadatum.partitionId() == partition) {
                        consumer.close();
                        return partitionMetadatum;
                    }
                }
            }
        }
        return null;
    }

    public void getData(List<String> brokers, int port, String topic, int partition, int offset) {
        PartitionMetadata partitionMetadata = getLeader(brokers, port, topic, partition);
        // 获取指定分区leader的主机名
        String leader = partitionMetadata.leader().host();
        // 获取consumer对象
        SimpleConsumer consumer = new SimpleConsumer(leader, port, 1000, 1024 * 4, "client");

        FetchRequest fetchRequest = new FetchRequestBuilder().addFetch(topic, partition, offset, 1000).build();
        FetchResponse fetchResponse = consumer.fetch(fetchRequest);
        ByteBufferMessageSet messageAndOffsets = fetchResponse.messageSet(topic, partition);

        for (MessageAndOffset messageAndOffset : messageAndOffsets) {
            // 解析消息体的payload后再打印,直接toString只会输出消息的元信息
            ByteBuffer payload = messageAndOffset.message().payload();
            byte[] bytes = new byte[payload.limit()];
            payload.get(bytes);
            System.out.println(messageAndOffset.offset() + ": " + new String(bytes, StandardCharsets.UTF_8));
        }
        consumer.close();
    }
}

启动三个生产者

消费者监控数据

4.3.2 低级API

1)消费者使用低级API 的主要步骤:

步骤1:根据指定的分区从主题元数据中找到主副本

步骤2:获取分区最新的消费进度

步骤3:从主副本拉取分区的消息

步骤4:识别主副本的变化,重试

2)方法描述:

findLeader():客户端向种子节点发送主题元数据请求,找到分区的主副本,并将副本集加入备用节点

getLastOffset():消费者客户端发送偏移量请求,获取分区最近的偏移量

run():消费者低级API拉取消息的主要方法

findNewLeader():当分区的主副本节点发生故障时,客户端找出新的主副本

3)代码:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.cluster.BrokerEndPoint;
import kafka.common.ErrorMapping;
import kafka.common.TopicAndPartition;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleExample {

    private List<String> m_replicaBrokers = new ArrayList<>();

    public SimpleExample() {
        m_replicaBrokers = new ArrayList<>();
    }

    public static void main(String args[]) {
        SimpleExample example = new SimpleExample();
        // 最大读取消息数量
        long maxReads = Long.parseLong("3");
        // 要订阅的topic
        String topic = "test1";
        // 要查找的分区
        int partition = Integer.parseInt("0");
        // broker节点的ip
        List<String> seeds = new ArrayList<>();
        seeds.add("192.168.9.102");
        seeds.add("192.168.9.103");
        seeds.add("192.168.9.104");
        // 端口
        int port = Integer.parseInt("9092");
        try {
            example.run(maxReads, topic, partition, seeds, port);
        } catch (Exception e) {
            System.out.println("Oops:" + e);
            e.printStackTrace();
        }
    }

    public void run(long a_maxReads, String a_topic, int a_partition, List<String> a_seedBrokers, int a_port) throws Exception {
        // 获取指定Topic partition的元数据
        PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
        if (metadata == null) {
            System.out.println("Can't find metadata for Topic and Partition. Exiting");
            return;
        }
        if (metadata.leader() == null) {
            System.out.println("Can't find Leader for Topic and Partition. Exiting");
            return;
        }
        String leadBroker = metadata.leader().host();
        String clientName = "Client_" + a_topic + "_" + a_partition;

        SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
        long readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);
        int numErrors = 0;
        while (a_maxReads > 0) {
            if (consumer == null) {
                consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
            }
            FetchRequest req = new FetchRequestBuilder().clientId(clientName)
                    .addFetch(a_topic, a_partition, readOffset, 100000).build();
            FetchResponse fetchResponse = consumer.fetch(req);

            if (fetchResponse.hasError()) {
                numErrors++;
                // Something went wrong!
                short code = fetchResponse.errorCode(a_topic, a_partition);
                System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
                if (numErrors > 5)
                    break;
                if (code == ErrorMapping.OffsetOutOfRangeCode()) {
                    // We asked for an invalid offset. For simple case ask for
                    // the last element to reset
                    readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
                    continue;
                }
                consumer.close();
                consumer = null;
                leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
                continue;
            }
            numErrors = 0;

            long numRead = 0;
            for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
                long currentOffset = messageAndOffset.offset();
                if (currentOffset < readOffset) {
                    System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
                    continue;
                }
                readOffset = messageAndOffset.nextOffset();
                ByteBuffer payload = messageAndOffset.message().payload();

                byte[] bytes = new byte[payload.limit()];
                payload.get(bytes);
                System.out.println(String.valueOf(messageAndOffset.offset()) + ": " + new String(bytes, "UTF-8"));
                numRead++;
                a_maxReads--;
            }

            if (numRead == 0) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ie) {
                }
            }
        }
        if (consumer != null)
            consumer.close();
    }

    public static long getLastOffset(SimpleConsumer consumer, String topic, int partition, long whichTime, String clientName) {
        TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);

        if (response.hasError()) {
            System.out.println("Error fetching data Offset Data the Broker. Reason: " + response.errorCode(topic, partition));
            return 0;
        }
        long[] offsets = response.offsets(topic, partition);
        return offsets[0];
    }

    private String findNewLeader(String a_oldLeader, String a_topic, int a_partition, int a_port) throws Exception {
        for (int i = 0; i < 3; i++) {
            boolean goToSleep = false;
            PartitionMetadata metadata = findLeader(m_replicaBrokers, a_port, a_topic, a_partition);
            if (metadata == null) {
                goToSleep = true;
            } else if (metadata.leader() == null) {
                goToSleep = true;
            } else if (a_oldLeader.equalsIgnoreCase(metadata.leader().host()) && i == 0) {
                // first time through if the leader hasn't changed give
                // ZooKeeper a second to recover
                // second time, assume the broker did recover before failover,
                // or it was a non-Broker issue
                goToSleep = true;
            } else {
                return metadata.leader().host();
            }
            if (goToSleep) {
                Thread.sleep(1000);
            }
        }
        System.out.println("Unable to find new leader after Broker failure. Exiting");
        throw new Exception("Unable to find new leader after Broker failure. Exiting");
    }

    private PartitionMetadata findLeader(List<String> a_seedBrokers, int a_port, String a_topic, int a_partition) {
        PartitionMetadata returnMetaData = null;
        loop:
        for (String seed : a_seedBrokers) {
            SimpleConsumer consumer = null;
            try {
                consumer = new SimpleConsumer(seed, a_port, 100000, 64 * 1024, "leaderLookup");
                List<String> topics = Collections.singletonList(a_topic);
                TopicMetadataRequest req = new TopicMetadataRequest(topics);
                kafka.javaapi.TopicMetadataResponse resp = consumer.send(req);

                List<TopicMetadata> metaData = resp.topicsMetadata();
                for (TopicMetadata item : metaData) {
                    for (PartitionMetadata part : item.partitionsMetadata()) {
                        if (part.partitionId() == a_partition) {
                            returnMetaData = part;
                            break loop;
                        }
                    }
                }
            } catch (Exception e) {
                System.out.println("Error communicating with Broker [" + seed + "] to find Leader for [" + a_topic + ", " + a_partition + "] Reason: " + e);
            } finally {
                if (consumer != null)
                    consumer.close();
            }
        }
        if (returnMetaData != null) {
            m_replicaBrokers.clear();
            for (BrokerEndPoint replica : returnMetaData.replicas()) {
                m_replicaBrokers.add(replica.host());
            }
        }
        return returnMetaData;
    }
}

第5章 Kafka producer拦截器(interceptor)

5.1 拦截器原理

Producer拦截器(interceptor)是在Kafka 0.10版本被引入的,主要用于实现clients端的定制化控制逻辑。

对于producer而言,interceptor使得用户在消息发送前以及producer回调逻辑前有机会对消息做一些定制化处理,比如修改消息等。同时,producer允许用户指定多个interceptor按序作用于同一条消息,从而形成一个拦截链(interceptor chain)。Interceptor的实现接口是org.apache.kafka.clients.producer.ProducerInterceptor,其定义的方法包括:

(1)configure(configs)

获取配置信息和初始化数据时调用。

(2)onSend(ProducerRecord):

该方法封装进KafkaProducer.send方法中,即它运行在用户主线程中。Producer确保在消息被序列化以及计算分区前调用该方法。用户可以在该方法中对消息做任何操作,但最好保证不要修改消息所属的topic和分区,否则会影响目标分区的计算

(3)onAcknowledgement(RecordMetadata, Exception):

该方法会在消息被应答或消息发送失败时调用,并且通常都是在producer回调逻辑触发之前。onAcknowledgement运行在producer的IO线程中,因此不要在该方法中放入很重的逻辑,否则会拖慢producer的消息发送效率

(4)close:

关闭interceptor,主要用于执行一些资源清理工作

如前所述,interceptor可能被运行在多个线程中,因此在具体实现时用户需要自行确保线程安全。另外,倘若指定了多个interceptor,则producer将按照指定顺序调用它们,并仅仅把每个interceptor可能抛出的异常捕获后记录到错误日志中,而非向上传递。这在使用过程中要特别留意。
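
为便于对照,下面给出该接口的一个最小实现骨架(示意模板,类名 MyInterceptor 为示例命名;完整案例见5.2节):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MyInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public void configure(Map<String, ?> configs) {
        // 获取配置信息和初始化数据时调用
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // 在消息序列化和计算分区之前调用,可在此修改消息,但不要改动topic和分区
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // 消息被应答或发送失败时调用,运行在producer的IO线程,不要放入很重的逻辑
    }

    @Override
    public void close() {
        // 关闭interceptor,执行资源清理
    }
}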

5.2 拦截器案例

1)需求:

实现一个简单的双interceptor组成的拦截链。第一个interceptor会在消息发送前将时间戳信息加到消息value的最前部;第二个interceptor会在消息发送后更新成功发送消息数或失败发送消息数。

2)案例实操

(1)增加时间戳拦截器

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class TimeInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // 创建一个新的record,把时间戳写入消息体的最前部
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), record.key(),
                System.currentTimeMillis() + "," + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
    }

    @Override
    public void close() {
    }
}

(2)统计发送消息成功和发送失败消息数,并在producer关闭时打印这两个计数器

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CounterInterceptor implements ProducerInterceptor<String, String> {

    private int errorCounter = 0;
    private int successCounter = 0;

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // 统计成功和失败的次数
        if (exception == null) {
            successCounter++;
        } else {
            errorCounter++;
        }
    }

    @Override
    public void close() {
        // 保存结果
        System.out.println("Successful sent: " + successCounter);
        System.out.println("Failed sent: " + errorCounter);
    }
}

(3)producer主程序

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class InterceptorProducer {

    public static void main(String[] args) throws Exception {
        // 1 设置配置信息
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // 2 构建拦截链
        List<String> interceptors = new ArrayList<>();
        interceptors.add("com.atguigu.kafka.interceptor.TimeInterceptor");
        interceptors.add("com.atguigu.kafka.interceptor.CounterInterceptor");
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);

        String topic = "first";
        Producer<String, String> producer = new KafkaProducer<>(props);

        // 3 发送消息
        for (int i = 0; i < 10; i++) {
            ProducerRecord<String, String> record = new ProducerRecord<>(topic, "message" + i);
            producer.send(record);
        }

        // 4 一定要关闭producer,这样才会调用interceptor的close方法
        producer.close();
    }
}

3)测试

(1)在kafka上启动消费者,然后运行客户端java程序。

[atguigu@hadoop102 kafka]$ bin/kafka-console-consumer.sh \

--zookeeper hadoop102:2181 --from-beginning --topic first

1501904047034,message0

1501904047225,message1

1501904047230,message2

1501904047234,message3

1501904047236,message4

1501904047240,message5

1501904047243,message6

1501904047246,message7

1501904047249,message8

1501904047252,message9

(2)观察java平台控制台输出数据如下:

Successful sent: 10

Failed sent: 0

代码笔记:

CounterInterceptor类
package kafkaAPI.intercepter;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

/**
 * @author cherry
 * @create 2019-09-01-16:10
 */
public class CounterInterceptor implements ProducerInterceptor<String, String> {

    private int successCount = 0;
    private int errorCount = 0;

    /**
     * 原样放行record
     */
    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record;
    }

    /**
     * 计数
     */
    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            successCount++;
        } else {
            errorCount++;
        }
    }

    @Override
    public void close() {
        System.out.println("成功次数:" + successCount + ",失败次数:" + errorCount);
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}
TimeInterceptor类
package kafkaAPI.intercepter;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

/**
 * @author cherry
 * @create 2019-09-01-16:06
 */
public class TimeInterceptor implements ProducerInterceptor<String, String> {

    /**
     * 给value添加时间戳
     */
    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), record.key(),
                System.currentTimeMillis() + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}
InterceptorProducer类
package kafkaAPI.intercepter;

import org.apache.kafka.clients.producer.*;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

/**
 * @author cherry
 * @create 2019-09-01-16:15
 */
public class InterceptorProducer {

    public static void main(String[] args) throws Exception {
        // 1 设置配置信息
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // 2 构建拦截链
        List<String> interceptors = new ArrayList<>();
        interceptors.add("kafkaAPI.intercepter.TimeInterceptor");
        interceptors.add("kafkaAPI.intercepter.CounterInterceptor");
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, interceptors);

        Producer<String, String> producer = new KafkaProducer<>(props);

        // 3 发送消息,回调中打印分区号和offset
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("second", String.valueOf(i), String.valueOf(i)),
                    (metadata, exception) -> System.out.println(metadata.partition() + "--" + metadata.offset()));
        }

        // 4 一定要关闭producer,这样才会调用interceptor的close方法
        producer.close();
    }
}

第6章 Kafka Streams

6.1 概述

6.1.1 Kafka Streams 

Kafka Streams是Apache Kafka开源项目的一个组成部分,是一个功能强大、易于使用的库,用于在Kafka之上构建高度分布式、可扩展、容错的应用程序。

6.1.2 Kafka Streams特点

1)功能强大

高扩展性,弹性,容错

2)轻量级

无需专门的集群

一个库,而不是框架

3)完全集成

100%兼容Kafka 0.10.0版本

易于集成到现有的应用程序

4)实时性

毫秒级延迟

并非微批处理

窗口允许乱序数据

允许迟到数据

6.1.3 为什么要有Kafka Stream

当前已经有非常多的流式处理系统,最知名且应用最多的开源流式处理系统有Spark Streaming和Apache Storm。Apache Storm发展多年,应用广泛,提供记录级别的处理能力,当前也支持SQL on Stream。而Spark Streaming基于Apache Spark,可以非常方便与图计算,SQL处理等集成,功能强大,对于熟悉其它Spark应用开发的用户而言使用门槛低。另外,目前主流的Hadoop发行版,如Cloudera和Hortonworks,都集成了Apache Storm和Apache Spark,使得部署更容易。

既然Apache Spark与Apache Storm拥有如此多的优势,那为何还需要Kafka Stream呢?主要有如下原因。

第一,Spark和Storm都是流式处理框架,而Kafka Stream提供的是一个基于Kafka的流式处理类库。框架要求开发者按照特定的方式去开发逻辑部分,供框架调用。开发者很难了解框架的具体运行方式,从而使得调试成本高,并且使用受限。而Kafka Stream作为流式处理类库,直接提供具体的类给开发者调用,整个应用的运行方式主要由开发者控制,方便使用和调试。

第二,虽然Cloudera与Hortonworks方便了Storm和Spark的部署,但是这些框架的部署仍然相对复杂。而Kafka Stream作为类库,可以非常方便的嵌入应用程序中,它对应用的打包和部署基本没有任何要求。

第三,就流式处理系统而言,基本都支持Kafka作为数据源。例如Storm具有专门的kafka-spout,而Spark也提供专门的spark-streaming-kafka模块。事实上,Kafka基本上是主流的流式处理系统的标准数据源。换言之,大部分流式系统中都已部署了Kafka,此时使用Kafka Stream的成本非常低。

第四,使用Storm或Spark Streaming时,需要为框架本身的进程预留资源,如Storm的supervisor和Spark on YARN的node manager。即使对于应用实例而言,框架本身也会占用部分资源,如Spark Streaming需要为shuffle和storage预留内存。而Kafka Stream作为类库,不需要为框架进程预留这类额外资源。

第五,由于Kafka本身提供数据持久化,因此Kafka Stream提供滚动部署和滚动升级以及重新计算的能力。

第六,由于Kafka Consumer Rebalance机制,Kafka Stream可以在线动态调整并行度。

6.2 Kafka Stream数据清洗案例

0)需求:

实时处理带有">>>"前缀的单词内容。例如输入"atguigu>>>ximenqing",最终处理成"ximenqing"。

1)需求分析:数据先发送到first主题,经Kafka Streams中的LogProcessor截取">>>"之后的内容,再写入second主题。

2)案例实操

(1)创建一个工程,并添加jar包,pom中添加如下依赖

<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>0.11.0.0</version>
</dependency>

(2)创建主类

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.processor.TopologyBuilder;

public class Application {

    public static void main(String[] args) {
        // 定义输入的topic
        String from = "first";
        // 定义输出的topic
        String to = "second";

        // 设置参数
        Properties settings = new Properties();
        settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "logFilter");
        settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop102:9092");
        StreamsConfig config = new StreamsConfig(settings);

        // 构建拓扑
        TopologyBuilder builder = new TopologyBuilder();
        builder.addSource("SOURCE", from)
                .addProcessor("PROCESS", new ProcessorSupplier<byte[], byte[]>() {
                    @Override
                    public Processor<byte[], byte[]> get() {
                        // 具体分析处理
                        return new LogProcessor();
                    }
                }, "SOURCE")
                .addSink("SINK", to, "PROCESS");

        // 创建kafka stream
        KafkaStreams streams = new KafkaStreams(builder, config);
        streams.start();
    }
}

(3)具体业务处理

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class LogProcessor implements Processor<byte[], byte[]> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(byte[] key, byte[] value) {
        String input = new String(value);
        // 如果包含">>>"则只保留该标记后面的内容
        if (input.contains(">>>")) {
            input = input.split(">>>")[1].trim();
            // 输出到下一个topic
            context.forward("logProcessor".getBytes(), input.getBytes());
        } else {
            context.forward("logProcessor".getBytes(), input.getBytes());
        }
    }

    @Override
    public void punctuate(long timestamp) {
    }

    @Override
    public void close() {
    }
}

代码笔记

LogProcessor类
package kafkaAPI.stream;

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

/**
 * @author cherry
 * @create 2019-09-01-17:59
 */
public class LogProcessor implements Processor<byte[], byte[]> {

    private ProcessorContext context = null;

    @Override
    public void init(ProcessorContext processorContext) {
        this.context = processorContext;
    }

    @Override
    public void process(byte[] key, byte[] value) {
        String line = new String(value);
        if (line.contains(">>>")) {
            String[] split = line.split(">>>");
            line = split[1];
        }
        context.forward(key, line.getBytes());
    }

    @Override
    public void punctuate(long l) {
    }

    @Override
    public void close() {
    }
}
MyStream类
package kafkaAPI.stream;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.processor.TopologyBuilder;

import java.util.Properties;

/**
 * @author cherry
 * @create 2019-09-01-17:08
 */
public class MyStream {

    public static void main(String[] args) {
        // 配置信息
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop102:9092");
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "logStream");

        // 构建拓扑
        TopologyBuilder builder = new TopologyBuilder();
        builder.addSource("SOURCE", "first")
                .addProcessor("PROCESSOR", (ProcessorSupplier<byte[], byte[]>) LogProcessor::new, "SOURCE")
                .addSink("SINK", "second", "PROCESSOR");

        // 创建Kafka的stream
        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}

(4)运行程序

(5)在hadoop104上启动生产者

[atguigu@hadoop104 kafka]$ bin/kafka-console-producer.sh \

--broker-list hadoop102:9092 --topic first

>hello>>>world

>h>>>atguigu

>hahaha

生产消息

(6)在hadoop103上启动消费者

[atguigu@hadoop103 kafka]$ bin/kafka-console-consumer.sh \

--zookeeper hadoop102:2181 --from-beginning --topic second

world

atguigu

hahaha

消费消息(此处用的是java程序Consumer)

第7章 扩展

7.1 Kafka配置信息

7.1.1 Broker配置信息

属性 | 默认值 | 描述
broker.id | 无 | 必填参数,broker的唯一标识。
log.dirs | /tmp/kafka-logs | Kafka数据存放的目录。可以指定多个目录,中间用逗号分隔,当新partition被创建时会被存放到当前存放partition最少的目录。
port | 9092 | BrokerServer接受客户端连接的端口号。
zookeeper.connect | null | Zookeeper的连接串,格式为:hostname1:port1,hostname2:port2,hostname3:port3。可以填一个或多个,为了提高可靠性,建议都填上。注意,此配置允许我们指定一个zookeeper路径来存放此kafka集群的所有数据,为了与其他应用集群区分开,建议在此配置中指定本集群存放目录,格式为:hostname1:port1,hostname2:port2,hostname3:port3/chroot/path。需要注意的是,消费者的参数要和此参数一致。
message.max.bytes | 1000000 | 服务器可以接收到的最大的消息大小。注意此参数要和consumer的maximum.message.size大小一致,否则会因为生产者生产的消息太大导致消费者无法消费。
num.io.threads | 8 | 服务器用来执行读写请求的IO线程数,此参数的数量至少要等于服务器上磁盘的数量。
queued.max.requests | 500 | I/O线程可以处理请求的队列大小,若实际请求数超过此大小,网络线程将停止接收新的请求。
socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUFF buffer the server prefers for socket connections.
socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUFF buffer the server prefers for socket connections.
socket.request.max.bytes | 100 * 1024 * 1024 | 服务器允许请求的最大值,用来防止内存溢出,其值应该小于Java heap size。
num.partitions | 1 | 默认partition数量,如果topic在创建时没有指定partition数量,默认使用此值,建议改为5。
log.segment.bytes | 1024 * 1024 * 1024 | Segment文件的大小,超过此值将会自动新建一个segment,此值可以被topic级别的参数覆盖。
log.roll.{ms,hours} | 24 * 7 hours | 新建segment文件的时间,此值可以被topic级别的参数覆盖。
log.retention.{ms,minutes,hours} | 7 days | Kafka segment log的保存周期,保存周期超过此时间日志就会被删除。此参数可以被topic级别参数覆盖。数据量大时,建议减小此值。
log.retention.bytes | -1 | 每个partition的最大容量,若数据量超过此值,partition数据将会被删除。注意这个参数控制的是每个partition而不是topic。此参数可以被log级别参数覆盖。
log.retention.check.interval.ms | 5 minutes | 删除策略的检查周期。
auto.create.topics.enable | true | 自动创建topic参数,建议此值设置为false,严格控制topic管理,防止生产者错写topic。
default.replication.factor | 1 | 默认副本数量,建议改为2。
replica.lag.time.max.ms | 10000 | 在此窗口时间内没有收到follower的fetch请求,leader会将其从ISR(in-sync replicas)中移除。
replica.lag.max.messages | 4000 | 如果replica节点落后leader节点此值大小的消息数量,leader节点就会将其从ISR中移除。
replica.socket.timeout.ms | 30 * 1000 | replica向leader发送请求的超时时间。
replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader for replicating data.
replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader.
replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader.
num.replica.fetchers | 1 | Number of threads used to replicate messages from leaders. Increasing this value can increase the degree of I/O parallelism in the follower broker.
fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session超时时间。如果在此时间内server没有向zookeeper发送心跳,zookeeper就会认为此节点已挂掉。此值太低会导致节点容易被误判死亡;太高则会导致太迟发现节点死亡。
zookeeper.connection.timeout.ms | 6000 | 客户端连接zookeeper的超时时间。
zookeeper.sync.time.ms | 2000 | ZK follower落后ZK leader的时间。
controlled.shutdown.enable | true | 允许broker受控关闭。如果启用,broker在关闭自己之前会把它上面的所有leaders转移到其它brokers上,建议启用,增加集群稳定性。
auto.leader.rebalance.enable | true | If this is enabled the controller will automatically try to balance leadership for partitions among the brokers by periodically returning leadership to the "preferred" replica for each partition if it is available.
leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller will rebalance leadership if this ratio goes above the configured value per broker.
leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance.
offset.metadata.max.bytes | 4096 | The maximum amount of metadata to allow clients to save with their offsets.
connections.max.idle.ms | 600000 | Idle connections timeout: the server socket processor threads close the connections that idle more than this.
num.recovery.threads.per.data.dir | 1 | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
unclean.leader.election.enable | true | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
delete.topic.enable | false | 启用delete topic参数,建议设置为true。
offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend using a higher setting for production (e.g., 100-200).
offsets.topic.retention.minutes | 1440 | Offsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the offsets topic.
offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets.
offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended in order to ensure higher availability. If the offsets topic is created when there are fewer brokers than the replication factor, then the offsets topic will be created with fewer replicas.
offsets.topic.segment.bytes | 104857600 | Segment size for the offsets topic. Since it uses a compacted topic, this should be kept relatively low in order to facilitate faster log compaction and loads.
offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes a leader for an offsets topic partition). This setting corresponds to the batch size (in bytes) to use when reading from the offsets segments when loading offsets into the offset manager's cache.
offsets.commit.required.acks | -1 | The number of acknowledgements that are required before the offset commit can be accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden.
offsets.commit.timeout.ms | 5000 | The offset commit will be delayed until this timeout or the required number of replicas have received the offset commit. This is similar to the producer request timeout.
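
下面是按上表中的建议整理的一份 server.properties 配置片段示意(broker.id、log.dirs 路径等均为示例值,需按实际环境修改):

# broker的全局唯一编号,不能重复(示例值)
broker.id=0
# Kafka数据存放目录(示例路径,按实际修改)
log.dirs=/opt/module/kafka/logs
# 按上表建议调整的参数
num.partitions=5
default.replication.factor=2
delete.topic.enable=true
auto.create.topics.enable=false
# Zookeeper连接串,消费者配置需与其保持一致
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181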

7.1.2 Producer配置信息

属性 | 默认值 | 描述
metadata.broker.list | 无 | 启动时producer查询brokers的列表,可以是集群中所有brokers的一个子集。注意,这个参数只是用来获取topic的元信息,producer会从元信息中挑选合适的broker并与之建立socket连接。格式是:host1:port1,host2:port2。
request.required.acks | 0 | 参见3.2节介绍。
request.timeout.ms | 10000 | Broker等待ack的超时时间,若等待时间超过此值,会返回客户端错误信息。
producer.type | sync | 同步异步模式。async表示异步,sync表示同步。如果设置成异步模式,可以允许生产者以batch的形式push数据,这样会极大地提高broker性能,推荐设置为异步。
serializer.class | kafka.serializer.DefaultEncoder | 序列化类,默认序列化成byte[]。
key.serializer.class | 无 | Key的序列化类,默认同上。
partitioner.class | kafka.producer.DefaultPartitioner | Partition类,默认对key进行hash。
compression.codec | none | 指定producer消息的压缩格式,可选参数为:"none"、"gzip"和"snappy"。关于压缩参见4.1节。
compressed.topics | null | 启用压缩的topic名称。若上面参数选择了一个压缩格式,那么压缩仅对本参数指定的topic有效;若本参数为空,则对所有topic有效。
message.send.max.retries | 3 | Producer发送失败时重试次数。若网络出现问题,可能会导致不断重试。
retry.backoff.ms | 100 | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata.
topic.metadata.refresh.interval.ms | 600 * 1000 | The producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available...). It will also poll regularly (default: every 10min so 600000ms). If you set this to a negative value, metadata will only get refreshed on failure. If you set this to zero, the metadata will get refreshed after each message sent (not recommended). Important note: the refresh happens only AFTER the message is sent, so if the producer never sends a message the metadata is never refreshed.
queue.buffering.max.ms | 5000 | 启用异步模式时,producer缓存消息的时间。比如设置成1000时,它会缓存1秒的数据再一次发送出去,这样可以极大地增加broker吞吐量,但也会造成时效性的降低。
queue.buffering.max.messages | 10000 | 采用异步模式时producer buffer队列里最大缓存的消息数量,如果超过这个数值,producer就会阻塞或者丢掉消息。
queue.enqueue.timeout.ms | -1 | 当达到上面参数值时producer阻塞等待的时间。如果值设置为0,buffer队列满时producer不会阻塞,消息直接被丢掉;若值设置为-1,producer会被阻塞,不会丢消息。
batch.num.messages | 200 | 采用异步模式时,一个batch缓存的消息数量。达到这个数量值时producer才会发送消息。
send.buffer.bytes | 100 * 1024 | Socket write buffer size.
client.id | "" | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.
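
下面是按上表整理的一份旧版(Scala)producer配置片段示意(broker列表、压缩格式等均为示例值,仅供参考):

# broker列表(示例值)
metadata.broker.list=hadoop102:9092,hadoop103:9092,hadoop104:9092
# 等待leader写入成功的ack
request.required.acks=1
# 异步批量发送,提高吞吐
producer.type=async
batch.num.messages=200
queue.buffering.max.ms=1000
# 字符串序列化与snappy压缩(示例选择)
serializer.class=kafka.serializer.StringEncoder
compression.codec=snappy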

7.1.3 Consumer配置信息

属性 | 默认值 | 描述
group.id | 无 | Consumer的组ID,相同group.id的consumer属于同一个组。
zookeeper.connect | 无 | Consumer的zookeeper连接串,要和broker的配置一致。
consumer.id | null | 如果不设置会自动生成。
socket.timeout.ms | 30 * 1000 | 网络请求的socket超时时间。实际超时时间由max.fetch.wait + socket.timeout.ms确定。
socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests.
fetch.message.max.bytes | 1024 * 1024 | 查询topic-partition时允许的最大消息大小。consumer会为每个partition缓存此大小的消息到内存,因此,这个参数可以控制consumer的内存使用量。这个值应该至少比server允许的最大消息大小大,以免producer发送的消息大于consumer允许的消息。
num.consumer.fetchers | 1 | The number of fetcher threads used to fetch data.
auto.commit.enable | true | 如果此值设置为true,consumer会周期性地把当前消费的offset值保存到zookeeper。当consumer失败重启之后将会使用此值作为新开始消费的值。
auto.commit.interval.ms | 60 * 1000 | Consumer提交offset值到zookeeper的周期。
queued.max.message.chunks | 2 | 用来被consumer消费的message chunks数量,每个chunk可以缓存fetch.message.max.bytes大小的数据量。
fetch.min.bytes | 1 | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request.
fetch.wait.max.ms | 100 | The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes.
rebalance.backoff.ms | 2000 | Backoff time between retries during rebalance.
refresh.leader.backoff.ms | 200 | Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.
auto.offset.reset | largest | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: smallest — automatically reset the offset to the smallest offset; largest — automatically reset the offset to the largest offset; anything else — throw exception to the consumer.
consumer.timeout.ms | -1 | 若在指定时间内没有消息消费,consumer将会抛出异常。
exclude.internal.topics | true | Whether messages from internal topics (such as offsets) should be exposed to the consumer.
zookeeper.session.timeout.ms | 6000 | ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur.
zookeeper.connection.timeout.ms | 6000 | The max time that the client waits while establishing a connection to zookeeper.
zookeeper.sync.time.ms | 2000 | How far a ZK follower can be behind a ZK leader.
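
下面是按上表整理的一份旧版高级消费者配置片段示意(zookeeper地址、组名等均为示例值,仅供参考):

# zookeeper连接串,需与broker配置一致(示例值)
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181
# 消费者组
group.id=test
# 自动提交offset及其周期
auto.commit.enable=true
auto.commit.interval.ms=1000
# 无初始offset时从最早位置开始消费
auto.offset.reset=smallest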

7.2 Flume对接Kafka

7.2.1 Flume拦截器代码

// 包名需与下方Flume配置中引用的全类名com.atguigu.flume.topic.Interceptor$CustomBuilder保持一致
package com.atguigu.flume.topic;

import org.apache.flume.Context;
import org.apache.flume.Event;

import java.util.List;

/**
 * @author liubo
 */
public class Interceptor implements org.apache.flume.interceptor.Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        // 按事件body的首字符打上topic头,KafkaSink据此把事件路由到number或letter主题
        if ('1' <= event.getBody()[0] && event.getBody()[0] <= '9') {
            event.getHeaders().put("topic", "number");
        } else if ('a' <= event.getBody()[0] && event.getBody()[0] <= 'z') {
            event.getHeaders().put("topic", "letter");
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        // 对批量事件逐条处理后原样返回(原代码返回null会导致事件被全部丢弃)
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
    }

    public static class CustomBuilder implements Interceptor.Builder {

        @Override
        public org.apache.flume.interceptor.Interceptor build() {
            return new Interceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}

7.2.2 Flume配置文件

# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.topic.Interceptor$CustomBuilder

# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
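
配置完成后,可按如下方式启动agent并发送测试数据验证路由效果(命令仅为示意,Flume安装目录与配置文件路径job/flume-kafka.conf为假设值):

# 启动Flume agent(假设配置文件保存为job/flume-kafka.conf)
[atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-kafka.conf

# 向netcat source发送测试数据:字母开头的进letter主题,数字开头的进number主题
[atguigu@hadoop102 ~]$ nc localhost 44444
hello
123

# 分别消费letter和number两个topic进行验证
[atguigu@hadoop102 kafka]$ bin/kafka-console-consumer.sh --zookeeper hadoop102:2181 --from-beginning --topic letter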
