Neurological Time Series / Anomaly Detection: Hierarchical Temporal Memory

I really talked up Hierarchical Temporal Memory a while ago. It’s still rather new and far from the industry standard for deep learning, but its results are hard to argue with. I’m also a big believer in “emulate form to get function”, so I dove right into Numenta’s NuPIC HTM Python library to try and show some results for all my adulation.

Bad news: it's written in Python 2.7. However, the open-source HTM community has put together their own fork, where they recoded the bindings (C++ base) to run in Python 3. I was able to install it on a Mac running Mojave 10.14 via the command-line PyPI option (after running pip install cmake) without too much hassle.

There’s some different syntax and naming conventions (documented on the fork linked above), but it’s the same tech as NuPIC’s official package — just a bit more granular. HTM.Core feels like Pytorch compared to NuPIC’s Keras.

Hitting the Gym

The real strength of HTM lies in pattern recognition, so I explored HTM.Core's hotgym.py example: using a gym's power consumption & timestamps, it trains a simple model to predict the next likely power value & detect anomalous activity. It's an elegant way of showing how HTM deals with patterns, and there's a huge range of industry applications for time series & anomaly detection.

I ran into a few syntax & runtime errors and had to recode some parts, so let's go through the interesting bits. The entire code is up on my Github as alternate_hotgym.ipynb if you want to check out the whole Jupyter notebook.

The process goes something like:

  1. Get data from .CSV
  2. Create Encoder
  3. Create Spatial Pooler
  4. Create Temporal Memory
  5. Training loop, predicting on each iteration
  6. Check outputs, make some graphs

Starts with a CSV, ends with graphs. That’s how you know it’s data science.

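Speaking of the CSV: the records list that the training loop iterates over comes from a plain csv.reader pass. A minimal sketch following the hotgym example's format (the gymdata.csv filename and the three header rows are that example's conventions; adjust for your own file):

import csv

# gymdata.csv has three header rows: column names, column types, special flags
with open("gymdata.csv", "r") as fileIn:
    reader = csv.reader(fileIn)
    headers = next(reader) # column names (timestamp, power value)
    next(reader)           # skip the types row
    next(reader)           # skip the flags row
    records = [row for row in reader] # each row: [timestamp_string, power_string]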

Binary Encodings

The distinguishing factor of HTM models is that they only work with binary inputs. Specifically, Sparse Distributed Representations: bit vectors (generally 2000+ bits long) of 1s and 0s. These can be visualized as a square for easy comparison.

This is possible through any implementation of an Encoder: an object designed to take a data type (int, string, image, etc.) and convert it into an SDR. A good encoder ensures that "similar" input data creates SDRs that are also similar, by way of having overlapping bits.

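You can sanity-check that property directly. A minimal sketch using htm.core's RDSE with the same parameters as the model below: nearby power values should share many active bits, distant values few.

import numpy as np
from htm.bindings.encoders import RDSE, RDSE_Parameters

params = RDSE_Parameters()
params.size       = 700
params.sparsity   = 0.02
params.resolution = 0.88
encoder = RDSE(params)

a = encoder.encode(30.0)
b = encoder.encode(30.5) # close to a
c = encoder.encode(75.0) # far from a

# .sparse lists the indices of active bits; intersect to count shared bits
print(np.intersect1d(a.sparse, b.sparse).size) # large overlap
print(np.intersect1d(a.sparse, c.sparse).size) # little to none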

Here’s an example from Cortical.io: Three SDRs, left to right representing mice, rat, and mammals :

Cortical’s Retina皮质的视网膜

Just by looking at the shared activated bits (1s), you can see that mice and rat are a little closer to each other than to mammals, but there's still some mammalian bit overlap going on. Cortical's Retina HTM model is wicked cool, but we'll talk about their semantic embedding some other time.

So we’ve got power_consumption (float) and timestamp (DateTime). How do we encode this?

dateEncoder = DateEncoder(
    timeOfDay = (30, 1), # DateTime is a composite variable
    weekend = 21         # how many bits to allocate to each part
)

scalarEncoderParams = RDSE_Parameters()   # encoding a continuous var
scalarEncoderParams.size       = 700      # SDR size
scalarEncoderParams.sparsity   = 0.02     # 2% sparsity magic number
scalarEncoderParams.resolution = 0.88
scalarEncoder = RDSE(scalarEncoderParams) # 'random distributed scalar encoder'

encodingWidth = (dateEncoder.size + scalarEncoder.size)
enc_info = Metrics([encodingWidth], 999999999) # performance metrics storage obj

And there’s the Encoder objects set up; we’ll combine those later in the training loop.

A Dip in the Pool

The next step is the Spatial Pooler: the part that takes the encoded input SDR & translates it to a sparse, more ‘balanced’ SDR while maintaining spatial relationships. It’s a little harder to explain without watching Matt’s nifty video, but I’ll give a quick rundown with the image below.

The left side is the Encoder’s output SDR (1s marked in blue), and the right is the Spatial Pooler’s output. The mouse hovers over one pool_cell, and displays circles over every input_cell connected to that pool_cell. As you feed SDR data to the pooler, it reinforces the connections of those green circles; the cells that ‘match’ have their synapses strengthened, and the inverse applies to ‘misses’.

so it’s Battleship, kinda所以是战舰,有点

Note how it says “Spatial Pooler Columns”. Each cell is actually a column of N cells (we’ll use 5); you’re looking from above at the topmost cell in each column. This’ll come into play later with temporal memory.

Initializing the pooler:

sp = SpatialPooler(
    inputDimensions    = (encodingWidth,),
    columnDimensions   = (spParams["columnCount"],),     # 1638
    potentialPct       = spParams["potentialPct"],       # 0.85
    potentialRadius    = encodingWidth,
    globalInhibition   = True,
    localAreaDensity   = spParams["localAreaDensity"],   # 0.04
    synPermInactiveDec = spParams["synPermInactiveDec"], # 0.006
    synPermActiveInc   = spParams["synPermActiveInc"],   # 0.04
    synPermConnected   = spParams["synPermConnected"],   # 0.13
    boostStrength      = spParams["boostStrength"],      # 3
    wrapAround         = True
)
sp_info = Metrics(sp.getColumnDimensions(), 999999999)

A lot of this setup involves well-tested default values that you can pick up from the documentation or examples. There's a lot of room for parameter tweaking via swarming, of course, but that's for another day.

Walking Down Memory Lane

Now the fancy part: Temporal Memory, which also runs on SDRs. I explained this in my last article, but again I believe HTM School’s video is invaluable for visualizing and understanding how the algorithm learns.

Remember how each pooler cell was actually a column? Now we’re looking at those columns from the side.

source

If you feed the TM the sequence A B C D, it "bursts" various columns by activating all cells in a column. The four letters have distinct patterns (we're seeing just a little piece of the same SDRs visualized earlier). This example feeds it X B C Y as well.

Each cell is randomly connected to many others, and if B follows A, the connections between the cells_involved_in_B and cells_involved_in_A are strengthened. Those synaptic connections are what allows TM to ‘remember’ patterns.

The number of cells per column is also important. Note how B’ (B_from_A) uses different cells in the same column as B’’ (B_from_X). If the columns only had one bit, there’d be no way of differentiating the two “past contexts”.

So the number of cells per column essentially sets how far back the TM can remember.

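If you want to watch that happen in isolation, here's a toy sketch: hand-made letter SDRs standing in for pooler output (the 1024-column size and 20-bit letter patterns are made up for illustration), fed through a bare TM until it stops being surprised by the repeating sequence.

import numpy as np
from htm.bindings.sdr import SDR
from htm.bindings.algorithms import TemporalMemory

columns = 1024
toy_tm = TemporalMemory(columnDimensions=(columns,), cellsPerColumn=5)

# a fixed random set of 20 active columns per letter
rng = np.random.default_rng(0)
letters = {ch: np.sort(rng.choice(columns, size=20, replace=False)) for ch in "ABCD"}

for epoch in range(10):
    for ch in "ABCD":
        active = SDR(columns)
        active.sparse = letters[ch]
        toy_tm.compute(active, learn=True)
    print(round(toy_tm.anomaly, 2)) # trends toward 0 as the transitions are learned

The temporal memory for the actual model, set up with the example's tmParams values: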

tm = TemporalMemory(
    columnDimensions          = (spParams["columnCount"],),
    cellsPerColumn            = tmParams["cellsPerColumn"],        # 13
    activationThreshold       = tmParams["activationThreshold"],   # 17
    initialPermanence         = tmParams["initialPerm"],           # 0.21
    connectedPermanence       = spParams["synPermConnected"],
    minThreshold              = tmParams["minThreshold"],          # 19
    maxNewSynapseCount        = tmParams["newSynapseCount"],       # 32
    permanenceIncrement       = tmParams["permanenceInc"],         # 0.1
    permanenceDecrement       = tmParams["permanenceDec"],         # 0.1
    predictedSegmentDecrement = 0.0,
    maxSegmentsPerCell        = tmParams["maxSegmentsPerCell"],    # 128
    maxSynapsesPerSegment     = tmParams["maxSynapsesPerSegment"]  # 64
)
tm_info = Metrics([tm.numberOfCells()], 999999999)

Time to Train

Now we’ve got nearly all the parts set up, we’ll put them all together in a training loop that iterates over each row of data.

Not only is an HTM model unsupervised, it trains & predicts as it goes — no need to tinker with batching like conventional neural nets. Each power_consumption and timestamp pair is encoded, spatial pooled, temporally memorized, and used for predictions on the fly, so we’ll be able to see its predictions improving as it learns from each SDR.

We make use of the sp and tm objects created earlier:

predictor = Predictor(steps=[1, 5], alpha=0.1) # continuous output predictor
predictor_resolution = 1

inputs = [] # create input/output lists
anomaly = []
anomalyProb = []
predictions = {1: [], 5: []}

predictor.reset() # reset the predictor

for count, record in enumerate(records): # iterate through data
    dateString = datetime.datetime.strptime(record[0], "%m/%d/%y %H:%M") # unstring timestamp
    consumption = float(record[1]) # unstring power value
    inputs.append(consumption)     # add power to inputs

    # use encoders: create SDRs for each input value
    dateBits = dateEncoder.encode(dateString)
    consumptionBits = scalarEncoder.encode(consumption)

    # concatenate these encoded SDRs into a larger one for pooling
    encoding = SDR(encodingWidth).concatenate([consumptionBits, dateBits])
    enc_info.addData(encoding) # enc_info is our metrics obj tracking how the encoder fares

    # create SDR to represent active columns; it'll be populated by .compute()
    # notably, this activeColumns SDR has the same dimensions as the spatial pooler
    activeColumns = SDR(sp.getColumnDimensions())

    # throw the input into the spatial pool and hope it swims
    sp.compute(encoding, True, activeColumns) # we're training, so learn=True

    # pass the pooled SDR through temporal memory
    tm.compute(activeColumns, learn=True)
    tm_info.addData(tm.getActiveCells().flatten())

    # make predictions based on current input & memory-context
    pdf = predictor.infer(tm.getActiveCells())
    for n in (1, 5):
        if pdf[n]:
            predictions[n].append(np.argmax(pdf[n]) * predictor_resolution)
        else:
            predictions[n].append(float('nan'))

    # anomaly_history is an AnomalyLikelihood object created earlier (not shown)
    anomalyLikelihood = anomaly_history.anomalyProbability(consumption, tm.anomaly)
    anomaly.append(tm.anomaly)
    anomalyProb.append(anomalyLikelihood)

    # reinforce output connections
    predictor.learn(count, tm.getActiveCells(), int(consumption / predictor_resolution))

The last piece of the puzzle is the predictor object, which is essentially a small conventional neural network that receives the TM's output SDR and outputs the desired prediction — in our case, power consumption. It gets trained through incremental weight increments/decrements like most NNs.

The predictor is the “head” of the model: at each iteration we ask the model “what level of power consumption do you think will happen next?” and record the prediction to compare with the real value later.

Results

We’ve got some neat metrics to inspect the health of our HTM model. You want to keep an eye on connected synapses & overlap, generally speaking.

We calculate the Root-Mean-Squared-Error of the output predictions for two "versions" of the model: 1) predicting one step ahead & 2) predicting five steps ahead.

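The bookkeeping mirrors the hotgym example: a prediction made at step t with horizon n is really about the value at t+n, so we first shift the prediction lists into alignment, then accumulate squared error. A sketch:

import math

# shift predictions so they line up with the inputs they were predicting
for n_steps, pred_list in predictions.items():
    for _ in range(n_steps):
        pred_list.insert(0, float('nan'))
        pred_list.pop()

# Root-Mean-Squared-Error per horizon, skipping steps with no prediction yet
accuracy = {1: 0, 5: 0}
samples  = {1: 0, 5: 0}
for idx, inp in enumerate(inputs):
    for n in (1, 5):
        val = predictions[n][idx]
        if not math.isnan(val):
            accuracy[n] += (inp - val) ** 2
            samples[n]  += 1

rmse = {n: (accuracy[n] / samples[n]) ** 0.5 for n in (1, 5)}
print(rmse)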

{1: 0.07548016042172133, 5: 0.0010324285729320193} # RMSE

power_consumption:
    min: 10
    max: 90.9
    mean: 31.3

For comparison, the units of power consumption range from 10 to 90, so this RMSE is looking pretty good. It's also nice to see that the 5-step model has dramatically higher accuracy, reinforcing the idea that pattern recognition leads to better predictions.

But remember, if you don't make a graph, it's not really data science:

seaborn

The X-axis is 'timesteps': ~4,400 hourly readings from the same gym — about six months. Take a look at the green 5-step line on the top graph: it starts out with some wild miscalculations, but eventually starts to predict in sync with the actual next value (red).

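For reference, a rough re-creation of those two panels (plain matplotlib here; the notebook's styling came from seaborn, and the colors are just matched to the description above):

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12, 6))

# top panel: actual consumption vs the model's 1-step & 5-step predictions
ax1.plot(inputs,         color="red",   label="actual")
ax1.plot(predictions[1], color="blue",  label="1-step prediction")
ax1.plot(predictions[5], color="green", label="5-step prediction")
ax1.set_ylabel("power consumption")
ax1.legend()

# bottom panel: raw anomaly score & smoothed anomaly likelihood
ax2.plot(anomaly,     color="blue",   label="anomaly score")
ax2.plot(anomalyProb, color="orange", label="anomaly likelihood")
ax2.set_xlabel("timestep")
ax2.legend()

plt.show()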

The model does a good job understanding daily and weekly fluctuations (that's why we encoded both timeOfDay and weekend as part of the Encoder SDR). Since we encode the date, and thus the month, this model should pick up on seasonal power consumption shifts as well.

The anomaly prediction seems to pick up on some weekly signal; since there are 26 "double spikes" in the above graph, I'd reckon it's marking the start and end of each weekend as anomalous activity. For a real anomaly detection system, we'd probably want to tune that so it doesn't raise unneeded worries every week.

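One simple way to do that tuning (my addition, not part of the notebook): only flag a timestep when the anomaly likelihood clears a very high bar, so the routine weekend spikes stay below the alarm threshold.

import numpy as np

threshold = 0.999 # a starting point; tune it up or down against labeled incidents
flags = np.where(np.array(anomalyProb) > threshold)[0]
print(f"{len(flags)} anomalous timesteps flagged out of {len(anomalyProb)}")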

Good job, brain-model

Not bad for fixing up an existing template. Again, this is mostly from the lovely lads behind HTM.Core — I just tidied up and commented. There’s tons of other HTM applications; it really depends on how you configure the encoder, but theoretically anything can be turned into an SDR. If you have a meat grinder, anything can be made into a sausage.

I messed around with an MNIST handwritten-digit HTM classifier, which gets ~95% accuracy (though most models do well on MNIST these days). I'd imagine HTM image systems would excel at video-feed object recognition, a tricky task: "5-step memory" can look at "5 frames before this frame". If the model can 'see' wings flapping, it's probably a bird, etc.

If all this sounds interesting, that’s because it is. The most efficient way to get started learning about HTM tech is Numenta’s HTM School, which I found intuitive and quite delightful.

Check out more at the official forums.

Source: https://medium.com/@mark.s.cleverley/neurological-time-series-anomaly-detection-hierarchical-temporal-memory-ad0015c32170
