案例一: 复制和多路复用

vim a1.conf
//第一道flume
#各个组件命名
a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 10000
a1.channels.c2.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

#组合 绑定
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
vim a2.conf
//第二道:连接hdfs
#各个组件命名
a2.sources = r1
a2.channels = c1
a2.sinks = k1

#Source
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop103
a2.sources.r1.port = 6666

#Channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 10000
a2.channels.c1.transactionCapacity = 100

#Sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://hadoop102:9820/flume/%Y%m%d/%H
#上传文件的前缀
a2.sinks.k1.hdfs.filePrefix = logs-
###是否按照时间滚动文件夹
a2.sinks.k1.hdfs.round = true
###多少时间单位创建一个新的文件夹
a2.sinks.k1.hdfs.roundValue = 1
###重新定义时间单位
a2.sinks.k1.hdfs.roundUnit = hour
###是否使用本地时间戳
a2.sinks.k1.hdfs.useLocalTimeStamp = true
###积攒多少个Event才flush到HDFS一次
a2.sinks.k1.hdfs.batchSize = 100
###设置文件类型,可支持压缩
a2.sinks.k1.hdfs.fileType = DataStream
###多久生成一个新的文件
a2.sinks.k1.hdfs.rollInterval = 60
###设置每个文件的滚动大小
a2.sinks.k1.hdfs.rollSize = 134217700
###文件的滚动与Event数量无关
a2.sinks.k1.hdfs.rollCount = 0
#组合 绑定
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
vim a3.conf
//第二道flume。输出到控制台中
#各个组件命名
a3.sources = r1
a3.channels = c1
a3.sinks = k1

#Source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop104
a3.sources.r1.port = 8888

#Channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 10000
a3.channels.c1.transactionCapacity = 100

#Sink
a3.sinks.k1.type = logger

#组合 绑定
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
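
下面给出一种可能的启动方式(仅为示意,假设三个配置文件放在flume安装目录的job目录下,实际路径以环境为准)。需要先在hadoop103、hadoop104上启动带avro source的下游agent,最后在hadoop102上启动a1:

# hadoop103
bin/flume-ng agent -c conf/ -f job/a2.conf -n a2
# hadoop104
bin/flume-ng agent -c conf/ -f job/a3.conf -n a3 -Dflume.root.logger=INFO,console
# hadoop102
bin/flume-ng agent -c conf/ -f job/a1.conf -n a1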

案例二:负载均衡

仅修改a1

vim a1.conf
#各个组件命名
a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = random

#组合 绑定
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
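
配置完成后可以做一个简单的验证(示意):向a1的netcat端口发送几条数据,事件会被sink组随机分发给hadoop103和hadoop104上的两个下游agent:

nc hadoop102 4444
hello
OK
flume
OK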

案例三:故障转移

仅仅修改a1

vim a1.conf
#各个组件命名
a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

#Source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 4444

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000

#组合 绑定
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
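
k2的优先级(10)高于k1(5),正常情况下事件都会由k2发往hadoop104。可以按下面的思路验证故障转移(仅为示意,进程号以jps实际查到的为准):

# 在hadoop104上找到并杀掉下游的flume进程
jps -ml | grep flume
kill -9 <pid>
# 再向a1发送数据,事件会改由k1发往hadoop103
nc hadoop102 4444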

案例四:聚合

vim a1.conf
-----------------------
#各个组件命名
a1.sources = r1
a1.channels = c1
a1.sinks = k1

#Source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/flume-1.9.0/datas/hive.log

#Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100

#Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop104
a1.sinks.k1.port = 8888

#组合 绑定
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
vim a2.conf
---------------------
#各个组件命名
a2.sources = r1
a2.channels = c1
a2.sinks = k1

#Source
a2.sources.r1.type = netcat
a2.sources.r1.bind = hadoop103
a2.sources.r1.port = 6666

#Channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 10000
a2.channels.c1.transactionCapacity = 100

#Sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop104
a2.sinks.k1.port = 8888

#组合 绑定
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1
vim a3.conf
--------------------------
#各个组件命名
a3.sources = r1
a3.channels = c1
a3.sinks = k1

#Source
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop104
a3.sources.r1.port = 8888

#Channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 10000
a3.channels.c1.transactionCapacity = 100

#Sink
a3.sinks.k1.type = hdfs
a3.sinks.k1.hdfs.path = hdfs://hadoop102:9820/flume/%Y%m%d/%H
##上传文件的前缀
a3.sinks.k1.hdfs.filePrefix = logs-
###是否按照时间滚动文件夹
a3.sinks.k1.hdfs.round = true
###多少时间单位创建一个新的文件夹
a3.sinks.k1.hdfs.roundValue = 1
###重新定义时间单位
a3.sinks.k1.hdfs.roundUnit = hour
###是否使用本地时间戳
a3.sinks.k1.hdfs.useLocalTimeStamp = true
###积攒多少个Event才flush到HDFS一次
a3.sinks.k1.hdfs.batchSize = 100
###设置文件类型,可支持压缩
a3.sinks.k1.hdfs.fileType = DataStream
###多久生成一个新的文件
a3.sinks.k1.hdfs.rollInterval = 60
###设置每个文件的滚动大小
a3.sinks.k1.hdfs.rollSize = 134217700
###文件的滚动与Event数量无关
a3.sinks.k1.hdfs.rollCount = 0

#组合 绑定
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
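
一种可能的启动顺序(仅为示意,假设配置文件放在job目录下):先在hadoop104上启动带avro source的a3,再分别在hadoop102、hadoop103上启动a1、a2:

# hadoop104
bin/flume-ng agent -c conf/ -f job/a3.conf -n a3
# hadoop102
bin/flume-ng agent -c conf/ -f job/a1.conf -n a1
# hadoop103
bin/flume-ng agent -c conf/ -f job/a2.conf -n a2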

案例五:自定义Interceptor

**1)案例需求**

使用Flume采集服务器本地日志,需要按照日志类型的不同,将不同种类的日志发往不同的分析系统。

**2)需求分析**

在实际的开发中,一台服务器产生的日志类型可能有很多种,不同类型的日志可能需要发送到不同的分析系统。此时会用到Flume拓扑结构中的Multiplexing结构,Multiplexing的原理是,根据event中Header的某个key的值,将不同的event发送到不同的Channel中,所以我们需要自定义一个Interceptor,为不同类型的event的Header中的key赋予不同的值。

在该案例中,我们以端口数据模拟日志,以数字(单个)和字母(单个)模拟不同类型的日志,我们需要自定义interceptor区分数字和字母,将其分别发往不同的分析系统(Channel)。

**3)实现步骤**

(1)创建一个maven项目,并引入以下依赖。

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>

(2)定义LogInterceptor类(自定义Interceptor)并实现Interceptor接口。

package com.atguigu.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.util.List;
import java.util.Map;

public class LogInterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    /**
     * 单个event的处理:
     * 按需求区分字母和数字, 为Header中的type赋不同的值,
     * 与配置文件中 selector.mapping.letter / selector.mapping.number 对应
     */
    @Override
    public Event intercept(Event event) {
        //1. 获取body
        String body = new String(event.getBody());
        //2. 获取headers
        Map<String, String> headers = event.getHeaders();
        //3. 判断处理: 首字符为字母 -> letter, 否则 -> number
        if (Character.isLetter(body.charAt(0))) {
            headers.put("type", "letter");
        } else {
            headers.put("type", "number");
        }
        return event;
    }

    /**
     * 多个event的处理
     */
    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
    }

    public static class MyBuilder implements Builder {

        @Override
        public Interceptor build() {
            return new LogInterceptor();
        }

        /**
         * 用于读取配置信息
         */
        @Override
        public void configure(Context context) {
        }
    }
}

(3)编辑flume配置文件

为hadoop102上的Flume1配置1个netcat source、2个channel和2个avro sink,并配置相应的ChannelSelector和interceptor。

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.LogInterceptor$MyBuilder
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
a1.sources.r1.selector.mapping.letter = c1
a1.sources.r1.selector.mapping.number = c2
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 6666

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 8888

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Use a channel which buffers events in memory
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

为hadoop103上的Flume4配置一个avro source和一个logger sink。

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop103
a1.sources.r1.port = 6666

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1

为hadoop104上的Flume3配置一个avro source和一个logger sink。

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop104
a1.sources.r1.port = 8888

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1

(4)分别在hadoop102,hadoop103,hadoop104上启动flume进程,注意先后顺序。

(5)在hadoop102使用netcat向localhost:44444发送字母和数字。

(6)观察hadoop103和hadoop104打印的日志。
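
按照上面的selector配置,预期效果大致如下(仅为示意):在hadoop102上用nc发送单个字母和数字,字母会被路由到c1,经k1发往hadoop103上的logger;数字会被路由到c2,经k2发往hadoop104上的logger:

nc localhost 44444
a
OK
1
OK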

案例六:自定义Source

**1)介绍**

Source是负责接收数据到Flume Agent的组件。Source组件可以处理各种类型、各种格式的日志数据,包括avro、thrift、exec、jms、spooling directory、netcat、sequence generator、syslog、http、legacy。官方提供的source类型已经很多,但是有时候并不能满足实际开发当中的需求,此时我们就需要根据实际需求自定义某些source。

官方也提供了自定义source的接口:

https://flume.apache.org/FlumeDeveloperGuide.html#source

根据官方说明,自定义MySource需要继承AbstractSource类并实现Configurable和PollableSource接口。

实现相应方法:

getBackOffSleepIncrement() //backoff 步长

getMaxBackOffSleepInterval()//backoff 最长时间

configure(Context context)//初始化context(读取配置文件内容)

process()//获取数据封装成event并写入channel,这个方法将被循环调用。

使用场景:读取MySQL数据或者其他文件系统。

**2)需求**

使用flume接收数据,并给每条数据添加前缀,输出到控制台。前缀可从flume配置文件中配置。


**3)编码**

(1)导入pom依赖

<dependencies>
    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.9.0</version>
    </dependency>
</dependencies>

(2)编写代码

package com.atguigu;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;

import java.util.UUID;
import java.util.concurrent.TimeUnit;

/**
 * 自定义Source需要继承Flume提供的AbstractSource类, 并实现Configurable、PollableSource接口
 */
public class MySource extends AbstractSource implements Configurable, PollableSource {

    private String prefix;

    /**
     * source的核心处理方法, 在flume的内部流程中会循环调用
     */
    @Override
    public Status process() throws EventDeliveryException {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        Status status = null;
        try {
            // 获取event
            Event event = getSomeData();
            // 将event交给ChannelProcessor处理(写入channel)
            getChannelProcessor().processEvent(event);
            // 正确处理
            status = Status.READY;
        } catch (Throwable t) {
            // 出现问题
            status = Status.BACKOFF;
        }
        return status;
    }

    /**
     * 结合实际的业务编写采集数据的过程.
     * 需求:每次调用都随机生成一个UUID
     */
    public Event getSomeData() {
        String uuid = UUID.randomUUID().toString();
        Event event = new SimpleEvent();
        event.setBody((prefix + "--" + uuid).getBytes());
        event.getHeaders().put("flume", "NB");
        return event;
    }

    /**
     * 每次退避增长的超时时间
     */
    @Override
    public long getBackOffSleepIncrement() {
        return 1;
    }

    /**
     * 最大的退避时间
     */
    @Override
    public long getMaxBackOffSleepInterval() {
        return 10;
    }

    /**
     * 读取配置文件中的配置项, 例如: a1.sources.r1.prefix
     */
    @Override
    public void configure(Context context) {
        prefix = context.getString("prefix", "AT");
    }
}

**4)测试**

(1)打包

将写好的代码打包,并放到flume的lib目录(/opt/module/flume)下。
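
打包与拷贝的一种示意做法(jar包名称、flume安装路径以实际为准):

# 在工程根目录下打包
mvn clean package
# 将生成的jar拷贝到flume的lib目录
cp target/mysource-1.0-SNAPSHOT.jar /opt/module/flume-1.9.0/lib/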

(2)配置文件

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = com.atguigu.MySource
a1.sources.r1.prefix = atguigu

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3)开启任务

bin/flume-ng agent -c conf/ -f job/mysource.conf -n a1 -Dflume.root.logger=INFO,console

案例七:自定义Sink

**1)介绍**

Sink不断地轮询Channel中的事件且批量地移除它们,并将这些事件批量写入到存储或索引系统、或者被发送到另一个Flume Agent。

Sink是完全事务性的。在从Channel批量删除数据之前,每个Sink用Channel启动一个事务。批量事件一旦成功写出到存储系统或下一个Flume Agent,Sink就利用Channel提交事务。事务一旦被提交,该Channel从自己的内部缓冲区删除事件。

Sink组件目的地包括hdfs、logger、avro、thrift、ipc、file、null、HBase、solr、自定义。官方提供的Sink类型已经很多,但是有时候并不能满足实际开发当中的需求,此时我们就需要根据实际需求自定义某些Sink。

官方也提供了自定义sink的接口:

https://flume.apache.org/FlumeDeveloperGuide.html#sink

根据官方说明,自定义MySink需要继承AbstractSink类并实现Configurable接口。

实现相应方法:

configure(Context context)//初始化context(读取配置文件内容)

process()//从Channel读取获取数据(event),这个方法将被循环调用。

使用场景:读取Channel数据写入MySQL或者其他文件系统。

**2)需求**

使用flume接收数据,并在Sink端给每条数据添加前缀和后缀,输出到控制台。前后缀可在flume任务配置文件中配置。

**3)编码**

package com.atguigu;

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.TimeUnit;

/**
 * 自定义Sink需要继承Flume提供的AbstractSink类, 实现Configurable接口
 */
public class MySink extends AbstractSink implements Configurable {

    // 定义Logger对象
    private final Logger logger = LoggerFactory.getLogger(MySink.class);

    // 从配置文件中读取的前缀和后缀
    private String prefix;
    private String suffix;

    /**
     * Sink的核心处理方法, 在flume的内部会循环调用process方法
     */
    @Override
    public Status process() throws EventDeliveryException {
        Status status = null;
        // 获取channel对象
        Channel channel = getChannel();
        // 获取事务对象
        Transaction transaction = channel.getTransaction();
        try {
            // 开启事务
            transaction.begin();
            // take数据
            Event event;
            while (true) {
                event = channel.take();
                if (event != null) {
                    break;
                }
                // 没有take到数据,休息一会
                TimeUnit.SECONDS.sleep(1);
            }
            // 处理event
            processEvent(event);
            // 提交事务
            transaction.commit();
            // 正常处理
            status = Status.READY;
        } catch (Throwable t) {
            // 回滚事务
            transaction.rollback();
            // 出现问题
            status = Status.BACKOFF;
        } finally {
            // 关闭事务
            transaction.close();
        }
        return status;
    }

    /**
     * 对event的处理:
     * 按需求为每条数据添加前缀和后缀, 并使用Logger的方式打印到控制台
     */
    public void processEvent(Event event) {
        logger.info(prefix + new String(event.getBody()) + suffix);
    }

    /**
     * 读取配置文件中的前后缀, 例如: a1.sinks.k1.prefix / a1.sinks.k1.suffix
     */
    @Override
    public void configure(Context context) {
        prefix = context.getString("prefix", "");
        suffix = context.getString("suffix", "");
    }
}

**4)测试**

(1)打包

将写好的代码打包,并放到flume的lib目录(/opt/module/flume)下。

(2)配置文件

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = com.atguigu.MySink
#a1.sinks.k1.prefix = atguigu:
a1.sinks.k1.suffix = :atguigu

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3)开启任务

bin/flume-ng agent -c conf/ -f job/mysink.conf -n a1 -Dflume.root.logger=INFO,console
nc localhost 44444
hello
OK

Flume数据流监控

Ganglia的安装与部署

Ganglia由gmond、gmetad和gweb三部分组成。

gmond(Ganglia Monitoring Daemon)是一种轻量级服务,安装在每台需要收集指标数据的节点主机上。使用gmond,你可以很容易收集很多系统指标数据,如CPU、内存、磁盘、网络和活跃进程的数据等。

gmetad(Ganglia Meta Daemon)是一种整合所有信息,并将其以RRD格式存储至磁盘的服务。

gweb(Ganglia Web)是Ganglia的可视化工具,它是一种利用浏览器显示gmetad所存储数据的PHP前端,在Web界面中以图表方式展现集群运行状态下收集的多种指标数据。

**1)安装ganglia**

​ (1)规划

hadoop102:  gweb  gmetad  gmond
hadoop103:  gmond
hadoop104:  gmond

​ (2)在102 103 104分别安装epel-release

sudo yum -y install epel-release

​ (3)在102 安装

sudo yum -y install ganglia-gmetad

sudo yum -y install ganglia-web

sudo yum -y install ganglia-gmond

​ (4)在103 和 104 安装

sudo yum -y install ganglia-gmond

**2)在102修改配置文件/etc/httpd/conf.d/ganglia.conf**

sudo vim /etc/httpd/conf.d/ganglia.conf
修改为如下配置:
# Ganglia monitoring system php web frontend
#
Alias /ganglia /usr/share/ganglia
<Location /ganglia>
  # Require local
  # 通过windows浏览器访问ganglia时,此处需要配置为对应windows主机的ip地址
  Require ip 192.168.202.1
  # Require ip 10.1.2.3
  # Require host example.org
</Location>

**3)在102修改配置文件/etc/ganglia/gmetad.conf**

sudo vim /etc/ganglia/gmetad.conf
修改为:
data_source "my cluster" hadoop102

**4)在102、103、104修改配置文件/etc/ganglia/gmond.conf**

sudo vim /etc/ganglia/gmond.conf
修改为:
cluster {
  name = "my cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  #bind_hostname = yes # Highly recommended, soon to be default.
                       # This option tells gmond to use a source address
                       # that resolves to the machine's hostname.  Without
                       # this, the metrics may appear to come from any
                       # interface and the DNS names associated with
                       # those IPs will be used to create the RRDs.
  # mcast_join = 239.2.11.71
  # 数据发送给hadoop102
  host = hadoop102
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  # 接收来自任意连接的数据
  bind = 0.0.0.0
  retry_bind = true
  # Size of the UDP buffer. If you are handling lots of metrics you really
  # should bump it up to e.g. 10MB or even higher.
  # buffer = 10485760
}

**5)在102修改配置文件/etc/selinux/config**

sudo vim /etc/selinux/config
修改为:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

尖叫提示:关闭selinux需要重启才能永久生效;如果此时不想重启,可以执行以下命令使其临时生效:

sudo setenforce 0

**6)启动ganglia**

(1)在102 103 104 启动

sudo systemctl start gmond

(2)在102 启动

sudo systemctl start httpd

sudo systemctl start gmetad

**7)打开网页浏览ganglia页面**

http://hadoop102/ganglia

操作Flume测试监控

**1)启动Flume任务**

bin/flume-ng agent \
-c conf/ \
-n a1 \
-f datas/netcat-flume-logger.conf \
-Dflume.root.logger=INFO,console \
-Dflume.monitoring.type=ganglia \
-Dflume.monitoring.hosts=hadoop102:8649

**2)发送数据观察ganglia监测图**

nc localhost 44444
