While fusing conv and bn operators a while ago, I happened to learn that PyTorch 1.10 can already do part of this work, so I am writing up these study notes. The main job of torch.fx is to transform nn.Module instances, in other words to manipulate models.
torch.fx has three main components: a symbolic tracer, an intermediate representation (IR), and Python code generation.
import torch
# Simple module for demonstration
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return self.linear(x + self.param).clamp(min=0.0, max=1.0)

module = MyModule()

from torch.fx import symbolic_trace
# Symbolic tracing frontend - captures the semantics of the module
symbolic_traced : torch.fx.GraphModule = symbolic_trace(module)

# High-level intermediate representation (IR) - Graph representation
print(symbolic_traced.graph)
"""
graph():
    %x : [#users=1] = placeholder[target=x]
    %param : [#users=1] = get_attr[target=param]
    %add : [#users=1] = call_function[target=operator.add](args = (%x, %param), kwargs = {})
    %linear : [#users=1] = call_module[target=linear](args = (%add,), kwargs = {})
    %clamp : [#users=1] = call_method[target=clamp](args = (%linear,), kwargs = {min: 0.0, max: 1.0})
    return clamp
"""

# Code generation - valid Python code
print(symbolic_traced.code)
"""
def forward(self, x):
    param = self.param
    add = x + param;  x = param = None
    linear = self.linear(add);  add = None
    clamp = linear.clamp(min = 0.0, max = 1.0);  linear = None
    return clamp
"""

The symbolic tracer symbolically executes the module's forward code: it feeds in fake inputs called Proxies, and every operation applied to them is recorded.
The trace produces an intermediate representation of the computation, a torch.fx.Graph. The Graph records every operation in detail and is made up of a sequence of torch.fx.Node objects. A Node is the basic unit of a Graph and corresponds to one operation; Node.op records the operation type, which is one of: placeholder, get_attr, call_function, call_module, call_method, output.

  1. An example of the Graph
import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return torch.topk(torch.sum(self.linear(x + self.linear.weight).relu(), dim=-1), 3)

m = MyModule()
gm = torch.fx.symbolic_trace(m)

# Print all nodes in the graph
gm.graph.print_tabular()

Output:

opcode         name           target                                                       args                kwargs
-------------  -------------  -----------------------------------------------------------  ------------------  -----------
placeholder    x              x                                                            ()                  {}
get_attr       linear_weight  linear.weight                                                ()                  {}
call_function  add            <built-in function add>                                      (x, linear_weight)  {}
call_module    linear         linear                                                       (add,)              {}
call_method    relu           relu                                                         (linear,)           {}
call_function  sum_1          <built-in method sum of type object at 0x00007FF80165E360>   (relu,)             {'dim': -1}
call_function  topk           <built-in method topk of type object at 0x00007FF80165E360>  (sum_1, 3)          {}
output         output         output                                                       (topk,)             {}

We define a module MyModule, instantiate and trace it, and then call graph.print_tabular() to display the nodes of the graph. In the printed output you can see that, besides op, each Node also has a name, target, args, and kwargs, whose meaning varies slightly with the op. placeholder is an input of the Graph and output is its output; for both, target and name are the same. get_attr fetches a parameter (or attribute) of the module. call_function calls a free function, and target identifies that function. call_module calls a submodule, and target is the submodule's qualified name. call_method calls a method on a value, for example a Tensor method such as relu. args and kwargs hold the positional (tuple) and keyword (dict) arguments of the op. Notice that many ops' args are simply the names of other Nodes; that is how the Nodes are linked together to form the Graph.
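
To make this concrete, here is a small snippet of my own (it assumes the `gm` traced above) that reads the same fields directly off the Node objects:

# Walk the traced graph; args that refer to other nodes print as those nodes'
# names, which is exactly how the nodes are linked into a Graph.
for node in gm.graph.nodes:
    print(node.op, node.name, node.target, node.args, node.kwargs)

# The last node is the output node; its args point at the topk node above.
output_node = list(gm.graph.nodes)[-1]
print(output_node.op, output_node.args)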

  2. Graph manipulation
    The last component is Python code generation: given a Graph, it automatically generates executable Python code that implements the Graph's semantics.
    torch.fx therefore turns a Module into a static graph. What does this have to do with transforming a Module? If we take the Graph obtained by tracing a Module, transform it, and then apply the Python code generation tool, we have effectively transformed the Module. The whole pipeline is symbolic tracing -> intermediate representation -> transforms -> Python code generation, giving a Python-to-Python translation from one Module to another. The workflow looks like this:
import torch
import torch.fx

def transform(m: torch.nn.Module,
              tracer_class : type = torch.fx.Tracer) -> torch.nn.Module:
    # Step 1: Acquire a Graph representing the code in `m`
    # NOTE: torch.fx.symbolic_trace is a wrapper around a call to
    # fx.Tracer.trace and constructing a GraphModule. We'll
    # split that out in our transform to allow the caller to
    # customize tracing behavior.
    graph : torch.fx.Graph = tracer_class().trace(m)

    # Step 2: Modify this Graph or create a new one
    graph = ...

    # Step 3: Construct a Module to return, built from the new graph
    return torch.fx.GraphModule(m, graph)

The torch.fx.GraphModule obtained at the end behaves just like an ordinary nn.Module, except that it additionally carries graph and code attributes, and its forward executes the code generated from the graph. Here is a simple Module-transformation example in which every torch.add() in the module is replaced with torch.mul():

import torch
import torch.fx as fx

# Sample module
class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

def transform(m: torch.nn.Module,
              tracer_class : type = fx.Tracer) -> torch.nn.Module:
    graph : fx.Graph = tracer_class().trace(m)

    # FX represents its Graph as an ordered list of
    # nodes, so we can iterate through them.
    for node in graph.nodes:
        # Checks if we're calling a function (i.e:
        # torch.add)
        if node.op == 'call_function':
            # The target attribute is the function
            # that call_function calls.
            if node.target == torch.add:
                node.target = torch.mul

    graph.lint()  # Does some checks to make sure the
                  # Graph is well-formed.

    return fx.GraphModule(m, graph)
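
As a quick sanity check (my own usage sketch, not part of the original example), tracing M and running the transform should yield a module that multiplies instead of adds:

m = M()
transformed = transform(m)
x, y = torch.randn(3), torch.randn(3)
# The rewritten module now computes torch.mul(x, y)
assert torch.allclose(transformed(x, y), torch.mul(x, y))
print(transformed.code)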

① Deleting or adding nodes via the FX graph-manipulation APIs, e.g. inserting a torch.relu() node after an existing node (see the sketch below)
② replace_pattern(), a find-and-replace API for editing the graph (also shown below)
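
To make ① and ② concrete, here is a short sketch based on the node-insertion and replace_pattern() examples in the torch.fx documentation (the module M2 and the surrounding test code are my own additions):

import torch
import torch.fx
from torch.fx import symbolic_trace, replace_pattern

class M2(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

traced = symbolic_trace(M2())

# ① Insert a node: wrap the result of every torch.add in torch.relu
for node in list(traced.graph.nodes):
    if node.op == 'call_function' and node.target == torch.add:
        # Nodes created inside this context are inserted after `node`
        with traced.graph.inserting_after(node):
            relu_node = traced.graph.call_function(torch.relu, args=(node,))
        # Route every user of `node` to the new relu node ...
        node.replace_all_uses_with(relu_node)
        # ... then restore `node` itself as the relu node's input
        relu_node.args = (node,)
traced.graph.lint()
traced.recompile()

# ② Find-and-replace: rewrite the add -> relu pattern into a single mul
def pattern(x, y):
    return torch.relu(torch.add(x, y))

def replacement(x, y):
    return torch.mul(x, y)

replace_pattern(traced, pattern, replacement)
print(traced.code)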

  3. Example use cases for graph manipulation
    ① Replace one op
    ② Conv/Batch Norm fusion
    ③ replace_pattern: Basic usage
    ④ Quantization
    ⑤ Invert Transformation

Coming back to the conv/bn fusion case mentioned at the beginning: folding BN into the preceding Conv for inference merges them into a single operation and speeds up inference, and torch.fx turned out to solve this quickly. The code is as follows:

import torch.fx as fx
from torch.fx.node import Argument, Target
from torch.nn.utils.fusion import fuse_conv_bn_eval
from typing import Type, Dict, Any, Tuple, Iterable, Optional, List, cast
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.fx.passes.shape_prop import ShapeProp
import copy
from collections import defaultdict
import torch.utils.mkldnn as th_mkldnn
import operator
import time
import logging
from enum import Enum

def _parent_name(target : str) -> Tuple[str, str]:
    """
    Splits a qualname into parent path and last atom.
    For example, `foo.bar.baz` -> (`foo.bar`, `baz`)
    """
    *parent, name = target.rsplit('.', 1)
    return parent[0] if parent else '', name

# Works for length 2 patterns with 2 modules
def matches_module_pattern(pattern: Iterable[Type], node: fx.Node, modules: Dict[str, Any]):
    if len(node.args) == 0:
        return False
    nodes: Tuple[Any, fx.Node] = (node.args[0], node)
    for expected_type, current_node in zip(pattern, nodes):
        if not isinstance(current_node, fx.Node):
            return False
        if current_node.op != 'call_module':
            return False
        if not isinstance(current_node.target, str):
            return False
        if current_node.target not in modules:
            return False
        if type(modules[current_node.target]) is not expected_type:
            return False
    return True

def replace_node_module(node: fx.Node, modules: Dict[str, Any], new_module: torch.nn.Module):
    assert(isinstance(node.target, str))
    parent_name, name = _parent_name(node.target)
    modules[node.target] = new_module
    setattr(modules[parent_name], name, new_module)

def fuse(model: torch.nn.Module, inplace=False) -> torch.nn.Module:
    """
    Fuses convolution/BN layers for inference purposes. Will deepcopy your
    model by default, but can modify the model inplace as well.
    """
    patterns = [(nn.Conv1d, nn.BatchNorm1d),
                (nn.Conv2d, nn.BatchNorm2d),
                (nn.Conv3d, nn.BatchNorm3d)]
    if not inplace:
        model = copy.deepcopy(model)
    fx_model = fx.symbolic_trace(model)
    modules = dict(fx_model.named_modules())
    new_graph = copy.deepcopy(fx_model.graph)

    for pattern in patterns:
        for node in new_graph.nodes:
            # Find the target Node: args[0] is the Conv, target is the BN
            if matches_module_pattern(pattern, node, modules):
                if len(node.args[0].users) > 1:  # Output of conv is used by other nodes
                    continue
                conv = modules[node.args[0].target]
                bn = modules[node.target]
                # Fuse BN into Conv
                fused_conv = fuse_conv_bn_eval(conv, bn)
                # Replace the Conv Node's module, i.e. point its target at the fused module
                replace_node_module(node.args[0], modules, fused_conv)
                # Redirect all users of the BN Node to the (already fused) Conv Node
                node.replace_all_uses_with(node.args[0])
                # Remove the BN Node
                new_graph.erase_node(node)
    return fx.GraphModule(fx_model, new_graph)

def remove_dropout(model: nn.Module) -> nn.Module:
    """
    Removes all dropout layers from the module.
    """
    fx_model = fx.symbolic_trace(model)

    class DropoutRemover(torch.fx.Transformer):
        def call_module(self, target : Target, args : Tuple[Argument, ...], kwargs : Dict[str, Any]) -> Any:
            if isinstance(self.submodules[target], nn.Dropout):
                assert len(args) == 1
                return args[0]
            else:
                return super().call_module(target, args, kwargs)
    return DropoutRemover(fx_model).transform()

def extract_subgraph(orig_module: nn.Module, nodes: List[fx.Node], inputs: List[fx.Node], outputs: List[fx.Node]):
    """
    Given lists of nodes from an existing graph that represent a subgraph, returns a submodule that executes that subgraph.
    """
    new_graph = fx.Graph()
    env: Dict[fx.Node, fx.Node] = {}
    for input in inputs:
        new_node = new_graph.placeholder(input.name)
        env[input] = new_node
    for node in nodes:
        new_node = new_graph.node_copy(node, lambda x: env[x])
        env[node] = new_node
    new_graph.output([env[output] for output in outputs])
    new_graph.lint()
    return fx.GraphModule(orig_module, new_graph)

mkldnn_supported = [
    nn.Conv2d, nn.Linear, nn.BatchNorm2d, nn.ReLU, nn.MaxPool2d, nn.AvgPool2d, nn.AdaptiveAvgPool2d,
    torch.relu, torch.transpose, torch.sigmoid,
    F.relu, F.avg_pool2d, F.adaptive_avg_pool2d
]
# These are operators that may not be convertible into MKLDNN ops (e.g. the
# args are scalar values). Thus, we only include them in the subgraph if their
# arguments are already in MKLDNN.
# TODO: Determine whether this can be removed after type inference.
mkldnn_supported_unknown = [operator.add, operator.mul]
mkldnn_map = {
    nn.Conv2d: th_mkldnn.MkldnnConv2d,
    nn.Linear: th_mkldnn.MkldnnLinear,
    nn.BatchNorm2d: lambda a, _: th_mkldnn.MkldnnBatchNorm(a)
}

def modules_to_mkldnn(nodes: List[fx.Node], modules: Dict[str, nn.Module]):
    """
    For each node, if it's a module that can be preconverted into MKLDNN,
    then we do so and create a mapping to allow us to convert from the MKLDNN
    version of the module to the original.
    """
    old_modules: Dict[nn.Module, nn.Module] = {}
    for node in nodes:
        if node.op == 'call_module':
            assert(isinstance(node.target, str))
            cur_module = modules[node.target]
            if type(cur_module) in mkldnn_map:
                new_module = mkldnn_map[type(cur_module)](cur_module, torch.float)
                assert(isinstance(new_module, nn.Module))
                old_modules[new_module] = copy.deepcopy(cur_module)
                replace_node_module(node, modules, new_module)
    return old_modules

def reset_modules(nodes: List[fx.Node], modules: Dict[str, nn.Module], old_modules: Dict[nn.Module, nn.Module]):
    """
    Maps each module that's been changed with `modules_to_mkldnn` back to its
    original.
    """
    for node in nodes:
        if node.op == 'call_module':
            assert(isinstance(node.target, str))
            cur_module = modules[node.target]
            if cur_module in old_modules:
                replace_node_module(node, modules, old_modules[cur_module])

class MklSubgraph:
    def __init__(self, fx_graph: fx.Graph):
        self.fx_graph = fx_graph
        self.nodes: List[fx.Node] = []
        self.start_nodes: List[fx.Node] = []
        self.end_nodes: List[fx.Node] = []

def gen_mkl_autotuner(example_inputs, iters=10, warmup=1):
    """
    This generates a heuristic that can be passed into `optimize_for_inference` that
    determines whether a subgraph should be run in MKL by running it with the example_inputs.

    Example usage:
        heuristic = gen_mkl_autotuner(example_inputs, iters=10)
        fast_model = optimization.optimize_for_inference(model, heuristic)
    """
    fx_model = None
    old_modules = None

    def use_mkl_heuristic(graph: MklSubgraph) -> bool:
        nonlocal fx_model, old_modules
        input_nodes = graph.start_nodes
        if fx_model is None:
            fx_model = graph.fx_graph.owning_module
            old_modules = graph.fx_graph.old_modules  # type: ignore[attr-defined]
            ShapeProp(fx_model).propagate(example_inputs)
        sample_inputs = [torch.randn(node.shape) for node in input_nodes]  # type: ignore[attr-defined]
        output_args = cast(List[fx.Node], [node.args[0] for node in graph.end_nodes])
        submodule = extract_subgraph(fx_model, graph.nodes, input_nodes, output_args)

        def benchmark(f):
            for _ in range(warmup):
                f()
            begin = time.time()
            for _ in range(iters):
                out = f()
            return time.time() - begin

        mkl_time = benchmark(lambda: [i.to_dense() for i in submodule(*[i.to_mkldnn() for i in sample_inputs])])

        reset_modules(submodule.graph.nodes, dict(submodule.named_modules()), old_modules)
        no_mkl_time = benchmark(lambda: submodule(*sample_inputs))
        return mkl_time < no_mkl_time
    return use_mkl_heuristic

def use_mkl_length(graph: MklSubgraph) -> bool:
    """
    This is a heuristic that can be passed into `optimize_for_inference` that
    determines whether a subgraph should be run in MKL by checking if there
    are more than 2 nodes in it
    """
    return len(graph.nodes) > 2

class UnionFind:
    def __init__(self, n):
        self.parent: List[Optional[int]] = [None] * n
        self.size: List[int] = [0] * n

    def make_set(self, v: int):
        self.parent[v] = v
        self.size[v] = 1

    def find(self, v: int) -> int:
        par = self.parent[v]
        if v == par:
            return v
        assert(par is not None)
        self.parent[v] = self.find(par)
        return cast(int, self.parent[v])

    def join(self, a: int, b: int):
        a, b = self.find(a), self.find(b)
        if a == b:
            return a
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]

def optimize_for_inference(
    model: torch.nn.Module,
    pass_config: Optional[Dict[str, Any]] = None,
    tracer: Type[fx.Tracer] = fx.Tracer
) -> torch.nn.Module:
    """
    Performs a set of optimization passes to optimize a model for the
    purposes of inference. Specifically, the passes that are run are:
    1. Conv/BN fusion
    2. Dropout removal
    3. MKL layout optimizations

    The third optimization takes a function `use_mkl_heuristic` that's used
    to determine whether a subgraph should be explicity run in MKL layout.

    Note: As FX does not currently handle aliasing, this pass currently
    assumes nothing aliases. If that isn't true, use at your own risk.
    """
    default_pass_config = {
        "conv_bn_fuse": True,
        "remove_dropout": True,
        "mkldnn_layout_optimize": {'heuristic': use_mkl_length},
    }
    if pass_config is None:
        pass_config = {}
    default_pass_config.update(pass_config)

    if default_pass_config["conv_bn_fuse"]:
        model = fuse(model)
    if default_pass_config["remove_dropout"]:
        model = remove_dropout(model)
    if default_pass_config["mkldnn_layout_optimize"] is False:
        return model
    if not isinstance(default_pass_config["mkldnn_layout_optimize"], dict):
        raise RuntimeError("mkldnn_layout_optimize config is not a dict")
    if "heuristic" not in default_pass_config["mkldnn_layout_optimize"]:
        raise RuntimeError("Heuristic not found in mkldnn_layout_optimize config")
    use_mkl_heuristic = default_pass_config["mkldnn_layout_optimize"]["heuristic"]

    cur_tracer = tracer()
    fx_graph = cur_tracer.trace(copy.deepcopy(model))
    fx_model = fx.GraphModule(cur_tracer.root, fx_graph)
    modules: Dict[str, nn.Module] = dict(model.named_modules())

    class MklSupport(Enum):
        NO = 1
        YES = 2
        UNKNOWN = 3

    # Inserts to_mkldnn and to_dense around every node we want to be a MKLDNN node.
    # If the op is in `mkldnn_supported` then we always treat it as a MKLDNN node.
    # However, if it's in `mkldnn_supported_unknown`, then we only treat it as
    # a MKLDNN node if its inputs are MKLDNN nodes.
    for node in list(fx_graph.nodes):
        supports_mkldnn = MklSupport.NO
        if node.op == 'call_module':
            cur_module = modules[node.target]
            if type(cur_module) in mkldnn_supported:
                supports_mkldnn = MklSupport.YES
                sample_parameter = next(cur_module.parameters(), None)
                if sample_parameter is not None:
                    assert(sample_parameter.dtype == torch.float), "this pass is only for torch.float modules"
                    assert(sample_parameter.device == torch.device('cpu')), "this pass is only for CPU modules"
        elif node.op == 'call_function':
            if node.target in mkldnn_supported:
                supports_mkldnn = MklSupport.YES
            elif node.target in mkldnn_supported_unknown:
                supports_mkldnn = MklSupport.UNKNOWN

        if supports_mkldnn != MklSupport.NO:
            if supports_mkldnn == MklSupport.UNKNOWN:
                if not any([arg.target == 'to_dense' for arg in node.args]):
                    continue
            with fx_graph.inserting_before(node):
                mkldnn_args = fx.map_arg(node.args, lambda n: fx_graph.call_method('to_mkldnn', (n, )))

            node.args = cast(Tuple[fx.node.Argument], mkldnn_args)

            with fx_graph.inserting_after(node):
                dense_x = fx_graph.create_node('call_method', 'to_dense', (node,))
                node.replace_all_uses_with(dense_x)
                dense_x.args = (node,)

    # Does pre-conversion of all modules into MKLDNN (when possible)
    old_modules = modules_to_mkldnn(list(fx_graph.nodes), modules)
    fx_graph.old_modules = old_modules  # type: ignore[attr-defined]

    # optimizes all a -> to_dense -> to_mkldnn -> b patterns into a -> b
    for node in fx_graph.nodes:
        if node.op == 'call_method' and node.target == 'to_dense':
            prv_node = node.args[0]
            users = list(node.users)
            for user in users:
                if user.op == 'call_method' and user.target == 'to_mkldnn':
                    user.replace_all_uses_with(prv_node)
                    fx_graph.erase_node(user)
            if len(node.users) == 0:
                fx_graph.erase_node(node)

    num_nodes = len(fx_graph.nodes)
    uf = UnionFind(num_nodes)

    def get_color(n):
        if hasattr(n, 'color'):  # Current node is part of a MKL subgraph
            return uf.find(n.color)
        if hasattr(n, 'start_color'):  # Current node is input to MKL subgraph
            return uf.find(n.start_color)
        return None

    # This code is to find each MKLDNN subgraph. Each MKLDNN subgraph consists
    # of input nodes (which are only `to_mkldnn` calls), output nodes
    # (`to_dense` calls), and intermediate nodes, which are run entirely on
    # MKLDNN layout tensors.
    #
    # Specifically, this code does a flood fill on a directed acyclic graph
    # (DAG), starting from each possible "start node" (i.e: `to_mkldnn` nodes).
    # If every node only had one input, this would be sufficient. However, in
    # the case that a node has multiple inputs coming from different start
    # nodes (i.e. colors), we need to join these 2 colors into 1. That's done
    # using a Disjoint Set Union.
    for cur_idx, node in enumerate(fx_graph.nodes):
        if node.op == 'call_method' and node.target == 'to_mkldnn':
            node.start_color = cur_idx
            uf.make_set(cur_idx)
        elif node.op == 'call_method' and node.target == 'to_dense':
            assert(get_color(node.args[0]) is not None)
            node.end_color = get_color(node.args[0])
        else:
            cur_colors = [get_color(i) for i in node.all_input_nodes if isinstance(i, fx.Node) if get_color(i) is not None]

            if len(cur_colors) == 0:
                continue
            assert(not any(i is None for i in cur_colors))
            cur_colors = sorted(cur_colors)
            node.color = cur_colors[0]
            for other_color in cur_colors[1:]:
                uf.join(cur_colors[0], other_color)

    mkldnn_graphs: Dict[int, MklSubgraph] = defaultdict(lambda: MklSubgraph(fx_graph))
    for node in fx_graph.nodes:
        if hasattr(node, 'color'):
            mkldnn_graphs[uf.find(node.color)].nodes.append(node)
        if hasattr(node, 'start_color'):
            mkldnn_graphs[uf.find(node.start_color)].start_nodes.append(node)
        if hasattr(node, 'end_color'):
            mkldnn_graphs[uf.find(node.end_color)].end_nodes.append(node)

    # Now that we have all the subgraphs, we need to decide which MKLDNN
    # subgraphs we actually want to keep in MKLDNN.
    for graph in mkldnn_graphs.values():
        if not use_mkl_heuristic(graph):
            for node in graph.start_nodes + graph.end_nodes:
                prv = node.args[0]
                node.replace_all_uses_with(prv)
                fx_graph.erase_node(node)
            reset_modules(graph.nodes, modules, old_modules)

    mkldnn_conversions = 0
    for node in fx_graph.nodes:
        if node.target == 'to_mkldnn' or node.target == 'to_dense':
            mkldnn_conversions += 1

    logging.info(f"mkldnn conversions: {mkldnn_conversions}")
    fx_graph.lint()
    result = fx.GraphModule(model, fx_graph)
    return result
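
A minimal usage sketch of the fuse() pass above (the ConvBN test model is my own, not from the original article). BN folding is only valid in eval mode; after fusion the outputs should match and the batch norm node disappears from the graph:

class ConvBN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))

model = ConvBN().eval()      # fuse_conv_bn_eval requires eval mode
fused = fuse(model)
x = torch.randn(1, 3, 32, 32)
assert torch.allclose(model(x), fused(x), atol=1e-6)
print(fused.graph)           # the batch norm call_module node is gone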

Note:
Reference article: https://zhuanlan.zhihu.com/p/428735136
Documentation: https://pytorch.org/docs/stable/fx.html
