Stateful Stream Processing

What is State?

While many operations in a dataflow simply look at one individual event at a time (for example an event parser), some operations remember information across multiple events (for example window operators). These operations are called stateful.

Some examples of stateful operations:

>When an application searches for certain event patterns, the state will store the sequence of events encountered so far.
>When aggregating events per minute/hour/day, the state holds the pending aggregates.
>When training a machine learning model over a stream of data points, the state holds the current version of the model parameters.
>When historic data needs to be managed, the state allows efficient access to events that occurred in the past.
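
Concretely, a stateful operation in the DataStream API usually declares its state through Flink's managed state primitives. The following minimal sketch (the class and state names are illustrative, not from this article) keeps a per-key running count in a ValueState, which Flink snapshots as part of every checkpoint:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical example: a per-key running count kept in managed keyed state.
public class EventCounter extends KeyedProcessFunction<String, String, Long> {

    private transient ValueState<Long> countState;

    @Override
    public void open(Configuration parameters) {
        // Register the state; Flink stores it in the configured state backend
        // and includes it in checkpoints automatically.
        countState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("event-count", Long.class));
    }

    @Override
    public void processElement(String event, Context ctx, Collector<Long> out) throws Exception {
        // Read, update, and emit the count for the current event's key.
        Long current = countState.value();
        long updated = (current == null) ? 1L : current + 1L;
        countState.update(updated);
        out.collect(updated);
    }
}
```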

Flink needs to be aware of the state in order to make it fault tolerant using checkpoints and savepoints.

Knowledge about the state also allows for rescaling Flink applications, meaning that Flink takes care of redistributing state across parallel instances.

Queryable state allows you to access state from outside of Flink during runtime.

When working with state, it might also be useful to read about Flink’s state backends. Flink provides different state backends that specify how and where state is stored.

Keyed State

Keyed state is maintained in what can be thought of as an embedded key/value store. The state is partitioned and distributed strictly together with the streams that are read by the stateful operators. Hence, access to the key/value state is only possible on keyed streams, i.e. after a keyed/partitioned data exchange, and is restricted to the values associated with the current event’s key. Aligning the keys of streams and state makes sure that all state updates are local operations, guaranteeing consistency without transaction overhead. This alignment also allows Flink to redistribute the state and adjust the stream partitioning transparently.
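
As a usage sketch (types and values are assumed, building on the EventCounter sketched above): keyed state can only be used after a keyBy(), which performs the keyed/partitioned data exchange that routes all records with the same key to the same parallel instance:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyedStateJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("user-a", "user-b", "user-a")
           .keyBy(value -> value)        // keyed/partitioned data exchange
           .process(new EventCounter())  // state access is scoped to the current event's key
           .print();

        env.execute("keyed-state-example");
    }
}
```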

Keyed State is further organized into so-called Key Groups. Key Groups are the atomic unit by which Flink can redistribute Keyed State; there are exactly as many Key Groups as the defined maximum parallelism. During execution each parallel instance of a keyed operator works with the keys for one or more Key Groups.
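
The number of Key Groups is fixed by the maximum parallelism, which can be set explicitly; a minimal sketch with illustrative values:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyGroupConfig {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setMaxParallelism(128); // 128 Key Groups; the job can later be rescaled up to 128
        env.setParallelism(4);      // each of the 4 parallel instances currently handles 32 Key Groups
        // ... define and execute the dataflow as usual
    }
}
```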

State Persistence

Flink implements fault tolerance using a combination of stream replay and checkpointing. A checkpoint marks a specific point in each of the input streams along with the corresponding state for each of the operators. A streaming dataflow can be resumed from a checkpoint while maintaining consistency (exactly-once processing semantics) by restoring the state of the operators and replaying the records from the point of the checkpoint.

The checkpoint interval is a means of trading off the overhead of fault tolerance during execution with the recovery time (the number of records that need to be replayed).

The fault tolerance mechanism continuously draws snapshots of the distributed streaming data flow. For streaming applications with small state, these snapshots are very light-weight and can be drawn frequently without much impact on performance. The state of the streaming applications is stored at a configurable place, usually in a distributed file system.

In case of a program failure (due to machine-, network-, or software failure), Flink stops the distributed streaming dataflow. The system then restarts the operators and resets them to the latest successful checkpoint. The input streams are reset to the point of the state snapshot. Any records that are processed as part of the restarted parallel dataflow are guaranteed to not have affected the previously checkpointed state.

By default, checkpointing is disabled. See Checkpointing for details on how to enable and configure checkpointing.
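
A minimal sketch of enabling checkpointing (interval and timeouts are illustrative values); a shorter interval means fewer records to replay on recovery but more checkpointing overhead at runtime:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointConfigExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpointing is disabled by default; take a checkpoint every 60 seconds.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        // Optional knobs that bound checkpointing overhead and duration.
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(10_000);
        env.getCheckpointConfig().setCheckpointTimeout(120_000);
        // ... define and execute the dataflow as usual
    }
}
```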

For this mechanism to realize its full guarantees, the data stream source (such as message queue or broker) needs to be able to rewind the stream to a defined recent point. Apache Kafka has this ability and Flink’s connector to Kafka exploits this. See Fault Tolerance Guarantees of Data Sources and Sinks for more information about the guarantees provided by Flink’s connectors.

Because Flink’s checkpoints are realized through distributed snapshots, we use the words snapshot and checkpoint interchangeably. Often we also use the term snapshot to mean either checkpoint or savepoint.

Checkpointing

The central part of Flink’s fault tolerance mechanism is drawing consistent snapshots of the distributed data stream and operator state. These snapshots act as consistent checkpoints to which the system can fall back in case of a failure. Flink’s mechanism for drawing these snapshots is described in “Lightweight Asynchronous Snapshots for Distributed Dataflows”. It is inspired by the standard Chandy-Lamport algorithm for distributed snapshots and is specifically tailored to Flink’s execution model.

Keep in mind that everything to do with checkpointing can be done asynchronously. The checkpoint barriers don’t travel in lock step and operations can asynchronously snapshot their state.

Since Flink 1.11, checkpoints can be taken with or without alignment. In this section, we describe aligned checkpoints first.

Barriers

A core element in Flink’s distributed snapshotting are the stream barriers. These barriers are injected into the data stream and flow with the records as part of the data stream. Barriers never overtake records, they flow strictly in line. A barrier separates the records in the data stream into the set of records that goes into the current snapshot, and the records that go into the next snapshot. Each barrier carries the ID of the snapshot whose records it pushed in front of it. Barriers do not interrupt the flow of the stream and are hence very lightweight. Multiple barriers from different snapshots can be in the stream at the same time, which means that various snapshots may happen concurrently.

Stream barriers are injected into the parallel data flow at the stream sources. The point where the barriers for snapshot n are injected (let’s call it Sn) is the position in the source stream up to which the snapshot covers the data. For example, in Apache Kafka, this position would be the last record’s offset in the partition. This position Sn is reported to the checkpoint coordinator (Flink’s JobManager).

The barriers then flow downstream. When an intermediate operator has received a barrier for snapshot n from all of its input streams, it emits a barrier for snapshot n into all of its outgoing streams. Once a sink operator (the end of a streaming DAG) has received the barrier n from all of its input streams, it acknowledges that snapshot n to the checkpoint coordinator. After all sinks have acknowledged a snapshot, it is considered completed.

Once snapshot n has been completed, the job will never again ask the source for records from before Sn, since at that point these records (and their descendant records) will have passed through the entire data flow topology.

Operators that receive more than one input stream need to align the input streams on the snapshot barriers. This alignment proceeds as follows:

>As soon as the operator receives snapshot barrier n from an incoming stream, it cannot process any further records from that stream until it has received the barrier n from the other inputs as well. Otherwise, it would mix records that belong to snapshot n with records that belong to snapshot n+1.

>Once the last stream has received barrier n, the operator emits all pending outgoing records, and then emits snapshot n barriers itself.

>It snapshots the state and resumes processing records from all input streams, processing records from the input buffers before processing the records from the streams.

>Finally, the operator writes the state asynchronously to the state backend.

Note that the alignment is needed for all operators with multiple inputs and for operators after a shuffle when they consume output streams of multiple upstream subtasks.

Snapshotting Operator State

When operators contain any form of state, this state must be part of the snapshots as well.

Operators snapshot their state at the point in time when they have received all snapshot barriers from their input streams, and before emitting the barriers to their output streams. At that point, all updates to the state from records before the barriers have been made, and no updates that depend on records from after the barriers have been applied. Because the state of a snapshot may be large, it is stored in a configurable state backend. By default, this is the JobManager’s memory, but for production use a distributed reliable storage should be configured (such as HDFS). After the state has been stored, the operator acknowledges the checkpoint, emits the snapshot barrier into the output streams, and proceeds.

The resulting snapshot now contains:
>For each parallel stream data source, the offset/position in the stream when the snapshot was started
>For each operator, a pointer to the state that was stored as part of the snapshot

Recovery

Recovery under this mechanism is straightforward: Upon a failure, Flink selects the latest completed checkpoint k. The system then re-deploys the entire distributed dataflow, and gives each operator the state that was snapshotted as part of checkpoint k. The sources are set to start reading the stream from position Sk. For example in Apache Kafka, that means telling the consumer to start fetching from offset Sk.

If state was snapshotted incrementally, the operators start with the state of the latest full snapshot and then apply a series of incremental snapshot updates to that state.

See Restart Strategies for more information.
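
A minimal sketch (values are illustrative) of a restart strategy that lets a failed job be redeployed and resume from the latest completed checkpoint:

```java
import java.util.concurrent.TimeUnit;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategyExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);
        // Restart up to 3 times, waiting 10 seconds between attempts.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
        // ... define and execute the dataflow as usual
    }
}
```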

Unaligned Checkpointing

Checkpointing can also be performed unaligned. The basic idea is that checkpoints can overtake all in-flight data as long as the in-flight data becomes part of the operator state.

Note that this approach is actually closer to the Chandy-Lamport algorithm, but Flink still inserts the barrier in the sources to avoid overloading the checkpoint coordinator.

An operator handles unaligned checkpoint barriers as follows:
>The operator reacts to the first barrier that is stored in its input buffers.
>It immediately forwards the barrier to the downstream operator by adding it to the end of the output buffers.
>The operator marks all overtaken records to be stored asynchronously and creates a snapshot of its own state.

Consequently, the operator only briefly stops the processing of input to mark the buffers, forwards the barrier, and creates the snapshot of the other state.

Unaligned checkpointing ensures that barriers are arriving at the sink as fast as possible. It’s especially suited for applications with at least one slow moving data path, where alignment times can reach hours. However, since it’s adding additional I/O pressure, it doesn’t help when the I/O to the state backends is the bottleneck. See the more in-depth discussion in ops for other limitations.
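
A minimal sketch of switching a job to unaligned checkpoints (available since Flink 1.11; unaligned checkpoints require exactly-once mode), with an illustrative interval:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);
        // Barriers may overtake in-flight records; those records become part of the checkpoint.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
        // ... define and execute the dataflow as usual
    }
}
```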

Note that savepoints will always be aligned.

Unaligned Recovery

In unaligned checkpointing, operators first recover the in-flight data before they start processing any data from upstream operators. Aside from that, recovery performs the same steps as during the recovery of aligned checkpoints.

State Backends

The exact data structures in which the key/value indexes are stored depend on the chosen state backend. One state backend stores data in an in-memory hash map, another state backend uses RocksDB as the key/value store. In addition to defining the data structure that holds the state, the state backends also implement the logic to take a point-in-time snapshot of the key/value state and store that snapshot as part of a checkpoint. State backends can be configured without changing your application logic.
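
A minimal sketch (paths and values are illustrative) of selecting a state backend and a durable checkpoint location, using the newer state-backend classes, without touching the application logic:

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Keep working state in RocksDB on local disk (supports state larger than memory).
        env.setStateBackend(new EmbeddedRocksDBStateBackend());

        // Write checkpoints to a durable, distributed file system.
        env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");

        env.enableCheckpointing(60_000);
        // ... define and execute the dataflow as usual
    }
}
```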

Savepoints

All programs that use checkpointing can resume execution from a savepoint. Savepoints allow both updating your programs and your Flink cluster without losing any state.

Savepoints are manually triggered checkpoints, which take a snapshot of the program and write it out to a state backend. They rely on the regular checkpointing mechanism for this.

Savepoints are similar to checkpoints except that they are triggered by the user and don’t automatically expire when newer checkpoints are completed. To make proper use of savepoints, it’s important to understand the differences between checkpoints and savepoints which is described in checkpoints vs. savepoints.

Exactly Once vs. At Least Once

The alignment step may add latency to the streaming program. Usually, this extra latency is on the order of a few milliseconds, but we have seen cases where the latency of some outliers increased noticeably. For applications that require consistently super low latencies (few milliseconds) for all records, Flink has a switch to skip the stream alignment during a checkpoint. Checkpoint snapshots are still drawn as soon as an operator has seen the checkpoint barrier from each input.
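
The switch mentioned above is the checkpointing mode; a minimal sketch (interval is illustrative) of choosing at-least-once semantics, which skips barrier alignment in exchange for possible duplicates on recovery:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtLeastOnceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Alignment is skipped, so some records may be processed twice after a restore.
        env.enableCheckpointing(60_000, CheckpointingMode.AT_LEAST_ONCE);
        // ... define and execute the dataflow as usual
    }
}
```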

When the alignment is skipped, an operator keeps processing all inputs, even after some checkpoint barriers for checkpoint n arrived. That way, the operator also processes elements that belong to checkpoint n+1 before the state snapshot for checkpoint n was taken. On a restore, these records will occur as duplicates, because they are both included in the state snapshot of checkpoint n, and will be replayed as part of the data after checkpoint n.

Alignment happens only for operators with multiple predecessors (joins) as well as operators with multiple senders (after a stream repartitioning/shuffle). Because of that, dataflows with only embarrassingly parallel streaming operations (map(), flatMap(), filter(), …) actually give exactly once guarantees even in at least once mode.

State and Fault Tolerance in Batch Programs

Flink executes batch programs as a special case of streaming programs, where the streams are bounded (finite number of elements). A DataSet is treated internally as a stream of data. The concepts above thus apply to batch programs in the same way as they apply to streaming programs, with minor exceptions:

Fault tolerance for batch programs does not use checkpointing. Recovery happens by fully replaying the streams. That is possible, because inputs are bounded. This pushes the cost more towards the recovery, but makes the regular processing cheaper, because it avoids checkpoints.

Stateful operations in the DataSet API use simplified in-memory/out-of-core data structures, rather than key/value indexes.

The DataSet API introduces special synchronized (superstep-based) iterations, which are only possible on bounded streams. For details, check out the iteration docs.
