Table of Contents

Getting Started with Flink

Preliminaries

API

Programming Model

Project Setup

pom.xml

log4j.properties

A First Taste of Flink

Requirement

Coding Steps

Code Implementation


Getting Started with Flink

Preliminaries

API

Flink provides several layers of API. The higher layers are more abstract and more convenient to use; the lower layers are closer to the runtime, more powerful, and harder to use.

Note: as of Flink 1.12, stream and batch execution are unified and the DataSet API is no longer recommended. The remaining examples therefore use the DataStream API, which handles both unbounded data (stream processing) and bounded data (batch processing). The Table & SQL API is covered separately.
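The switch between streaming and batch execution happens on the StreamExecutionEnvironment, as the full examples below do. A minimal sketch (the class name is only illustrative):

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RuntimeModeSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // AUTOMATIC runs in batch mode when every source is bounded, otherwise in streaming mode
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        // env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
        // env.setRuntimeMode(RuntimeExecutionMode.BATCH);
    }
}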

Apache Flink 1.12 Documentation: Flink DataSet API Programming Guide

Official announcement | Apache Flink 1.12.0 released: stream-batch unification finally runs as one (Alibaba Cloud Developer Community)

Apache Flink 1.12 Documentation: Flink DataStream API Programming Guide

Programming Model

Apache Flink 1.12 Documentation: Flink DataStream API Programming Guide


A Flink application is built from three main parts: Source, Transformation, and Sink, as sketched below.
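A minimal sketch of that three-part structure in the DataStream API (the element values and the uppercase transformation are placeholders, not part of the original example):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PipelineSkeleton {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Source: where the data comes from
        DataStream<String> source = env.fromElements("flink", "hadoop", "spark");
        // Transformation: how the data is processed
        DataStream<String> transformed = source.map(String::toUpperCase);
        // Sink: where the result goes
        transformed.print();
        // Trigger execution
        env.execute("pipeline skeleton");
    }
}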

Project Setup

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>cn.itcast</groupId>
    <artifactId>flink_study_47</artifactId>
    <version>1.0-SNAPSHOT</version>

    <!-- Repositories, in order: aliyun, apache, cloudera -->
    <repositories>
        <repository><id>aliyun</id><url>http://maven.aliyun.com/nexus/content/groups/public/</url></repository>
        <repository><id>apache</id><url>https://repository.apache.org/content/repositories/snapshots/</url></repository>
        <repository><id>cloudera</id><url>https://repository.cloudera.com/artifactory/cloudera-repos/</url></repository>
    </repositories>

    <properties>
        <encoding>UTF-8</encoding>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <java.version>1.8</java.version>
        <scala.version>2.12</scala.version>
        <flink.version>1.12.0</flink.version>
    </properties>

    <dependencies>
        <!-- Scala language -->
        <dependency><groupId>org.scala-lang</groupId><artifactId>scala-library</artifactId><version>2.12.11</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-clients_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-scala_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-java</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-scala_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-java_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-scala-bridge_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-java-bridge_2.12</artifactId><version>${flink.version}</version></dependency>
        <!-- blink planner, the default since 1.11 -->
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner-blink_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-common</artifactId><version>${flink.version}</version></dependency>
        <!--<dependency><groupId>org.apache.flink</groupId><artifactId>flink-cep_2.12</artifactId><version>${flink.version}</version></dependency>-->

        <!-- Flink connectors -->
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-kafka_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-sql-connector-kafka_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-jdbc_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-csv</artifactId><version>${flink.version}</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-json</artifactId><version>${flink.version}</version></dependency>
        <!--<dependency><groupId>org.apache.flink</groupId><artifactId>flink-parquet_2.12</artifactId><version>${flink.version}</version></dependency>-->
        <!--<dependency><groupId>org.apache.avro</groupId><artifactId>avro</artifactId><version>1.9.2</version></dependency>
        <dependency><groupId>org.apache.parquet</groupId><artifactId>parquet-avro</artifactId><version>1.10.0</version></dependency>-->
        <dependency>
            <groupId>org.apache.bahir</groupId><artifactId>flink-connector-redis_2.11</artifactId><version>1.0</version>
            <exclusions>
                <exclusion><artifactId>flink-streaming-java_2.11</artifactId><groupId>org.apache.flink</groupId></exclusion>
                <exclusion><artifactId>flink-runtime_2.11</artifactId><groupId>org.apache.flink</groupId></exclusion>
                <exclusion><artifactId>flink-core</artifactId><groupId>org.apache.flink</groupId></exclusion>
                <exclusion><artifactId>flink-java</artifactId><groupId>org.apache.flink</groupId></exclusion>
            </exclusions>
        </dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-hive_2.12</artifactId><version>${flink.version}</version></dependency>
        <dependency>
            <groupId>org.apache.hive</groupId><artifactId>hive-metastore</artifactId><version>2.1.0</version>
            <exclusions>
                <exclusion><artifactId>hadoop-hdfs</artifactId><groupId>org.apache.hadoop</groupId></exclusion>
            </exclusions>
        </dependency>
        <dependency><groupId>org.apache.hive</groupId><artifactId>hive-exec</artifactId><version>2.1.0</version></dependency>
        <dependency><groupId>org.apache.flink</groupId><artifactId>flink-shaded-hadoop-2-uber</artifactId><version>2.7.5-10.0</version></dependency>
        <dependency><groupId>org.apache.hbase</groupId><artifactId>hbase-client</artifactId><version>2.1.0</version></dependency>
        <dependency><groupId>mysql</groupId><artifactId>mysql-connector-java</artifactId><version>5.1.38</version><!--<version>8.0.20</version>--></dependency>

        <!-- High-performance async components: Vert.x -->
        <dependency><groupId>io.vertx</groupId><artifactId>vertx-core</artifactId><version>3.9.0</version></dependency>
        <dependency><groupId>io.vertx</groupId><artifactId>vertx-jdbc-client</artifactId><version>3.9.0</version></dependency>
        <dependency><groupId>io.vertx</groupId><artifactId>vertx-redis-client</artifactId><version>3.9.0</version></dependency>

        <!-- Logging -->
        <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-log4j12</artifactId><version>1.7.7</version><scope>runtime</scope></dependency>
        <dependency><groupId>log4j</groupId><artifactId>log4j</artifactId><version>1.2.17</version><scope>runtime</scope></dependency>
        <dependency><groupId>com.alibaba</groupId><artifactId>fastjson</artifactId><version>1.2.44</version></dependency>
        <dependency><groupId>org.projectlombok</groupId><artifactId>lombok</artifactId><version>1.18.2</version><scope>provided</scope></dependency>

        <!-- Reference: https://blog.csdn.net/f641385712/article/details/84109098 -->
        <!--<dependency><groupId>org.apache.commons</groupId><artifactId>commons-collections4</artifactId><version>4.4</version></dependency>-->
        <!--<dependency><groupId>org.apache.thrift</groupId><artifactId>libfb303</artifactId><version>0.9.3</version><type>pom</type><scope>provided</scope></dependency>-->
        <!--<dependency><groupId>com.google.guava</groupId><artifactId>guava</artifactId><version>28.2-jre</version></dependency>-->
    </dependencies>

    <build>
        <sourceDirectory>src/main/java</sourceDirectory>
        <plugins>
            <!-- Compiler plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId><artifactId>maven-compiler-plugin</artifactId><version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <!--<encoding>${project.build.sourceEncoding}</encoding>-->
                </configuration>
            </plugin>
            <!-- Scala compiler plugin -->
            <plugin>
                <groupId>net.alchim31.maven</groupId><artifactId>scala-maven-plugin</artifactId><version>3.2.2</version>
                <executions>
                    <execution>
                        <goals><goal>compile</goal><goal>testCompile</goal></goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId><artifactId>maven-surefire-plugin</artifactId><version>2.18.1</version>
                <configuration>
                    <useFile>false</useFile>
                    <disableXmlReport>true</disableXmlReport>
                    <includes>
                        <include>**/*Test.*</include>
                        <include>**/*Suite.*</include>
                    </includes>
                </configuration>
            </plugin>
            <!-- Shade plugin (builds a fat jar containing all dependencies) -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId><artifactId>maven-shade-plugin</artifactId><version>2.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals><goal>shade</goal></goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <!--zip -d learn_spark.jar META-INF/*.RSA META-INF/*.DSA META-INF/*.SF -->
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <!-- Main class for the jar manifest (optional) -->
                                    <mainClass></mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

log4j.properties

log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n

A First Taste of Flink

Requirement

Implement WordCount with Flink.

Coding Steps

Apache Flink 1.12 Documentation: Flink DataStream API Programming Guide

1. Set up the environment (env)

2. Prepare the data (source)

3. Process the data (transformation)

4. Emit the result (sink)

5. Trigger execution (execute)

The execution environment can be created in any of the following three ways:


getExecutionEnvironment() // recommended
createLocalEnvironment()
createRemoteEnvironment(String host, int port, String... jarFiles)
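A minimal sketch of the three calls on StreamExecutionEnvironment; the host, port, and jar path below are placeholders, not values from this project:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnvCreationSketch {
    public static void main(String[] args) {
        // Picks the right environment automatically: local when run in the IDE,
        // cluster when submitted with `flink run` (recommended)
        StreamExecutionEnvironment env1 = StreamExecutionEnvironment.getExecutionEnvironment();

        // Always a local environment, regardless of how the program is started
        StreamExecutionEnvironment env2 = StreamExecutionEnvironment.createLocalEnvironment();

        // Submit directly to a remote JobManager; host, port and jar path are placeholders
        StreamExecutionEnvironment env3 =
                StreamExecutionEnvironment.createRemoteEnvironment("node1", 8081, "/root/wc.jar");
    }
}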

Code Implementation

Based on the DataSet API

package cn.it.hello;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.UnsortedGrouping;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

/**
 * Author lanson
 * Desc
 * Requirement: implement WordCount with Flink using the DataSet API
 * Coding steps:
 * 1. Set up the environment (env)
 * 2. Prepare the data (source)
 * 3. Process the data (transformation)
 * 4. Emit the result (sink)
 * 5. Trigger execution (execute)
 *    // With print(), a DataSet job does not need execute(); a DataStream job does
 */
public class WordCount1 {
    public static void main(String[] args) throws Exception {
        // The old batch API shown here still works but is no longer recommended
        // 1. Set up the environment (env)
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // 2. Prepare the data (source)
        DataSet<String> lineDS = env.fromElements("it hadoop spark", "it hadoop spark", "it hadoop", "it");

        // 3. Process the data (transformation)
        // 3.1 Split each line on spaces into individual words
        /* public interface FlatMapFunction<T, O> extends Function, Serializable { void flatMap(T value, Collector<O> out) throws Exception; } */
        DataSet<String> wordsDS = lineDS.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                // value is one line of input
                String[] words = value.split(" ");
                for (String word : words) {
                    out.collect(word); // collect and emit each word
                }
            }
        });

        // 3.2 Map each word to (word, 1)
        /* public interface MapFunction<T, O> extends Function, Serializable { O map(T value) throws Exception; } */
        DataSet<Tuple2<String, Integer>> wordAndOnesDS = wordsDS.map(new MapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> map(String value) throws Exception {
                // value is a single word
                return Tuple2.of(value, 1);
            }
        });

        // 3.3 Group by the word (key)
        // 0 means group by the tuple field at index 0, i.e. the word
        UnsortedGrouping<Tuple2<String, Integer>> groupedDS = wordAndOnesDS.groupBy(0);

        // 3.4 Sum the counts (value) within each group
        // 1 means aggregate the tuple field at index 1, i.e. the count
        DataSet<Tuple2<String, Integer>> aggResult = groupedDS.sum(1);

        // 3.5 Sort
        DataSet<Tuple2<String, Integer>> result = aggResult.sortPartition(1, Order.DESCENDING).setParallelism(1);

        // 4. Emit the result (sink)
        result.print();

        // 5. Trigger execution (execute)
        // With print(), a DataSet job does not need execute(); a DataStream job does
        //env.execute();//'execute()', 'count()', 'collect()', or 'print()'.
    }
}

Based on the DataStream API

package cn.it.hello;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

/**
 * Author lanson
 * Desc
 * Requirement: implement WordCount with Flink using the DataStream API
 * Coding steps:
 * 1. Set up the environment (env)
 * 2. Prepare the data (source)
 * 3. Process the data (transformation)
 * 4. Emit the result (sink)
 * 5. Trigger execution (execute)
 */
public class WordCount2 {
    public static void main(String[] args) throws Exception {
        // The new unified API supports both stream and batch processing
        // 1. Set up the environment (env)
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        //env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
        //env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        // 2. Prepare the data (source)
        DataStream<String> linesDS = env.fromElements("it hadoop spark", "it hadoop spark", "it hadoop", "it");

        // 3. Process the data (transformation)
        // 3.1 Split each line on spaces into individual words
        /* public interface FlatMapFunction<T, O> extends Function, Serializable { void flatMap(T value, Collector<O> out) throws Exception; } */
        DataStream<String> wordsDS = linesDS.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) throws Exception {
                // value is one line of input
                String[] words = value.split(" ");
                for (String word : words) {
                    out.collect(word); // collect and emit each word
                }
            }
        });

        // 3.2 Map each word to (word, 1)
        /* public interface MapFunction<T, O> extends Function, Serializable { O map(T value) throws Exception; } */
        DataStream<Tuple2<String, Integer>> wordAndOnesDS = wordsDS.map(new MapFunction<String, Tuple2<String, Integer>>() {
            @Override
            public Tuple2<String, Integer> map(String value) throws Exception {
                // value is a single word
                return Tuple2.of(value, 1);
            }
        });

        // 3.3 Group by the word (key)
        // keyBy(0) would group by the tuple field at index 0, i.e. the word
        //KeyedStream<Tuple2<String, Integer>, Tuple> groupedDS = wordAndOnesDS.keyBy(0);
        KeyedStream<Tuple2<String, Integer>, String> groupedDS = wordAndOnesDS.keyBy(t -> t.f0);

        // 3.4 Sum the counts (value) within each group
        // 1 means aggregate the tuple field at index 1, i.e. the count
        DataStream<Tuple2<String, Integer>> result = groupedDS.sum(1);

        // 4. Emit the result (sink)
        result.print();

        // 5. Trigger execution (execute)
        env.execute(); // a DataStream job must call execute()
    }
}

Lambda Version

Apache Flink 1.12 Documentation: Java Lambda Expressions

package cn.it.hello;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Arrays;

/**
 * Author lanson
 * Desc
 * Requirement: WordCount with the DataStream API, written with lambda expressions
 * Coding steps:
 * 1. Set up the environment (env)
 * 2. Prepare the data (source)
 * 3. Process the data (transformation)
 * 4. Emit the result (sink)
 * 5. Trigger execution (execute)
 */
public class WordCount3_Lambda {
    public static void main(String[] args) throws Exception {
        // 1. Set up the environment (env)
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        //env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
        //env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        // 2. Prepare the data (source)
        DataStream<String> linesDS = env.fromElements("it hadoop spark", "it hadoop spark", "it hadoop", "it");

        // 3. Process the data (transformation)
        // 3.1 Split each line on spaces into individual words
        /* public interface FlatMapFunction<T, O> extends Function, Serializable { void flatMap(T value, Collector<O> out) throws Exception; } */
        // Lambda syntax: (parameters) -> { body }; a lambda is a function, and a function is ultimately an object
        DataStream<String> wordsDS = linesDS
                .flatMap((String value, Collector<String> out) -> Arrays.stream(value.split(" ")).forEach(out::collect))
                .returns(Types.STRING);

        // 3.2 Map each word to (word, 1)
        /* public interface MapFunction<T, O> extends Function, Serializable { O map(T value) throws Exception; } */
        /*
        DataStream<Tuple2<String, Integer>> wordAndOnesDS = wordsDS
                .map((String value) -> Tuple2.of(value, 1))
                .returns(Types.TUPLE(Types.STRING, Types.INT));
        */
        DataStream<Tuple2<String, Integer>> wordAndOnesDS = wordsDS.map(
                (String value) -> Tuple2.of(value, 1),
                TypeInformation.of(new TypeHint<Tuple2<String, Integer>>() {}));

        // 3.3 Group by the word (key)
        //KeyedStream<Tuple2<String, Integer>, Tuple> groupedDS = wordAndOnesDS.keyBy(0);
        //KeyedStream<Tuple2<String, Integer>, String> groupedDS = wordAndOnesDS.keyBy((KeySelector<Tuple2<String, Integer>, String>) t -> t.f0);
        KeyedStream<Tuple2<String, Integer>, String> groupedDS = wordAndOnesDS.keyBy(t -> t.f0);

        // 3.4 Sum the counts (value) within each group
        DataStream<Tuple2<String, Integer>> result = groupedDS.sum(1);

        // 4. Emit the result (sink)
        result.print();

        // 5. Trigger execution (execute)
        env.execute();
    }
}

Running on YARN

Note

If you run into permission problems when writing to HDFS:

apply the following setting:

hadoop fs -chmod -R 777  /

and add this to the code:

System.setProperty("HADOOP_USER_NAME", "root");
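The property has to be set before the HDFS sink runs; in WordCount4_Yarn below it is placed right before writeAsText.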

Modified Code

package cn.it.hello;

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Arrays;

/**
 * Author lanson
 * Desc
 * Requirement: WordCount with the DataStream API and lambdas, adjusted to run on YARN
 * Coding steps:
 * 1. Set up the environment (env)
 * 2. Prepare the data (source)
 * 3. Process the data (transformation)
 * 4. Emit the result (sink)
 * 5. Trigger execution (execute) // not needed for batch, required for streaming
 */
public class WordCount4_Yarn {
    public static void main(String[] args) throws Exception {
        // Read program arguments
        ParameterTool params = ParameterTool.fromArgs(args);
        String output = null;
        if (params.has("output")) {
            output = params.get("output");
        } else {
            output = "hdfs://node1:8020/wordcount/output_" + System.currentTimeMillis();
        }

        // 1. Set up the environment (env)
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setRuntimeMode(RuntimeExecutionMode.AUTOMATIC);
        //env.setRuntimeMode(RuntimeExecutionMode.STREAMING);
        //env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        // 2. Prepare the data (source)
        DataStream<String> linesDS = env.fromElements("it hadoop spark", "it hadoop spark", "it hadoop", "it");

        // 3. Process the data (transformation)
        DataStream<Tuple2<String, Integer>> result = linesDS
                .flatMap((String value, Collector<String> out) -> Arrays.stream(value.split(" ")).forEach(out::collect))
                .returns(Types.STRING)
                .map((String value) -> Tuple2.of(value, 1))
                .returns(Types.TUPLE(Types.STRING, Types.INT))
                //.keyBy(0)
                .keyBy((KeySelector<Tuple2<String, Integer>, String>) t -> t.f0)
                .sum(1);

        // 4. Emit the result (sink)
        result.print();

        // If HDFS permission errors occur, run: hadoop fs -chmod -R 777 /
        System.setProperty("HADOOP_USER_NAME", "root"); // set the HDFS user
        //result.writeAsText("hdfs://node1:8020/wordcount/output_" + System.currentTimeMillis()).setParallelism(1);
        result.writeAsText(output).setParallelism(1);

        // 5. Trigger execution (execute)
        env.execute();
    }
}

Package

Rename

Upload

Submit and Run

Apache Flink 1.12 Documentation: Execution Mode (Batch/Streaming)

/export/server/flink/bin/flink run -Dexecution.runtime-mode=BATCH -m yarn-cluster -yjm 1024 -ytm 1024 -c cn.it.hello.WordCount4_Yarn /root/wc.jar --output hdfs://node1:8020/wordcount/output_xx
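For reference: -Dexecution.runtime-mode=BATCH selects batch execution, -m yarn-cluster submits the job as a per-job cluster on YARN, -yjm and -ytm set the JobManager and TaskManager memory in MB, -c names the main class, and everything after the jar path (here --output ...) is passed to the program as arguments.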

The submitted job can be observed in the web UIs:

http://node1:8088/cluster

http://node1:50070/explorer.html#/

Alternatively, in standalone mode the job can be submitted through the Flink web UI.
