HALCON 21.11: Deep Learning Notes --- Glossary (7)

Deep learning methods are implemented in HALCON 21.11.0.0. Below, we describe the most important terms used in the deep learning context:

anchor

Anchors are fixed bounding boxes. They serve as reference boxes, with the aid of which the network proposes bounding boxes for the objects to be localized.

annotation

An annotation is the ground truth information about what a given instance in the data represents, in a form the network can work with. In object detection, for example, this is the bounding box and the corresponding label of an instance.

anomaly

An anomaly is something deviating from the norm, something unknown.

backbone

A backbone is a part of a pretrained classification network. Its task is to generate various feature maps, which is why its classifying layer has been removed.

batch size - hyperparameter 'batch_size'

The dataset is divided into smaller subsets of data, which are called batches. The batch size determines the number of images taken into a batch and thus processed simultaneously.
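As a toy illustration (the numbers are invented), the batch size also fixes how many batch iterations one pass over the dataset takes:

```python
import math

def num_iterations_per_epoch(num_samples: int, batch_size: int) -> int:
    """Number of batches needed to process the whole dataset once."""
    return math.ceil(num_samples / batch_size)

# 350 images with batch size 32: 10 full batches plus one final batch of 30 images.
iterations = num_iterations_per_epoch(350, 32)
```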

bounding box

Bounding boxes are rectangular boxes used to define a part within an image and to specify the localization of an object within an image.

class agnostic

Class agnostic means without knowledge of the different classes. In HALCON, we use the term for the reduction of overlapping predicted bounding boxes: a class-agnostic bounding box suppression removes overlapping instances while ignoring their classes, so strongly overlapping instances are suppressed independently of their class.
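A minimal sketch of class-agnostic suppression (all names and numbers are invented for illustration; this is not HALCON's internal implementation). Boxes are tuples (x1, y1, x2, y2, confidence, class):

```python
def iou(a, b):
    """Overlap of two (x1, y1, x2, y2, ...) boxes as intersection over union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def suppress_class_agnostic(boxes, iou_threshold=0.5):
    """Keep the highest-confidence boxes; drop overlapping ones regardless of class."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_threshold for k in kept):
            kept.append(box)
    return kept

# The 'pear' box strongly overlaps the stronger 'apple' box and is
# suppressed despite belonging to a different class.
boxes = [(0, 0, 10, 10, 0.9, 'apple'),
         (1, 1, 10, 10, 0.8, 'pear'),
         (20, 20, 30, 30, 0.7, 'apple')]
kept = suppress_class_agnostic(boxes)
```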

change strategy

A change strategy denotes the strategy of when and how hyperparameters are changed during the training of a DL model.

class

Classes are discrete categories (e.g., 'apple', 'peach', 'pear') that the network distinguishes. In HALCON, the class of an instance is given by its annotation.

classifier

In the context of deep learning, we use the term classifier as follows. The classifier takes an image as input and returns the inferred confidence values, expressing how likely it is that the image belongs to each of the distinguished classes. E.g., suppose the three classes 'apple', 'peach', and 'pear' are distinguished and we give an image of an apple to the classifier. As a result, the confidences 'apple': 0.92, 'peach': 0.07, and 'pear': 0.01 could be returned.
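Confidence values like the ones above typically come from a softmax over the network's raw outputs; the logit values below are invented for illustration:

```python
import math

def softmax(logits):
    """Map raw network outputs (logits) to confidences that sum to 1."""
    m = max(logits)                          # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ['apple', 'peach', 'pear']
confidences = dict(zip(classes, softmax([4.2, 1.7, -0.3])))
# The largest logit yields by far the largest confidence.
```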

COCO

COCO is an abbreviation for "common objects in context", a large-scale object detection, segmentation, and captioning dataset. There is a common file format for each of the different annotation types.

confidence

Confidence is a number expressing the affinity of an instance to a class. In HALCON, the confidence is a probability, given in the range [0, 1]. Alternative name: score.

confusion matrix

A confusion matrix is a table that compares the classes predicted by the network (top-1) with the ground truth class affiliations. It is often used to visualize the performance of the network on a validation or test set.
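A small sketch of how such a table is built from ground truth labels and top-1 predictions (the toy labels are invented):

```python
from collections import Counter

def confusion_matrix(ground_truth, predicted, classes):
    """Rows: ground truth class; columns: predicted (top-1) class."""
    counts = Counter(zip(ground_truth, predicted))
    return [[counts[(t, p)] for p in classes] for t in classes]

classes = ['apple', 'peach', 'pear']
ground_truth = ['apple', 'apple', 'peach', 'pear', 'pear']
predicted    = ['apple', 'peach', 'peach', 'pear', 'apple']
matrix = confusion_matrix(ground_truth, predicted, classes)
# Diagonal entries are correct predictions; off-diagonal entries are
# misclassifications, e.g. one 'apple' predicted as 'peach'.
```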

Convolutional Neural Networks (CNNs)

Convolutional Neural Networks are neural networks used in deep learning, characterized by the presence of at least one convolutional layer in the network. They are particularly successful for image classification.

data

In the context of deep learning, we use the term data for the instances to be recognized (e.g., images) together with the corresponding information about their predictable characteristics (e.g., the labels in the case of classification).

data augmentation

Data augmentation is the generation of altered copies of samples within a dataset, e.g., through flipping or rotating. This is done in order to increase the richness of the dataset.
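Two of the mentioned augmentations can be sketched on a tiny "image" stored as a list of pixel rows (real augmentation operates on image objects, not nested lists):

```python
def flip_horizontal(image):
    """Mirror the image left-right."""
    return [row[::-1] for row in image]

def rotate_90_clockwise(image):
    """Rotate the image by 90 degrees clockwise."""
    return [list(column) for column in zip(*image[::-1])]

image = [[1, 2],
         [3, 4]]
flipped = flip_horizontal(image)        # [[2, 1], [4, 3]]
rotated = rotate_90_clockwise(image)    # [[3, 1], [4, 2]]
```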

dataset: training, validation, and test set

With dataset we refer to the complete set of data used for a training. The dataset is split into three, ideally disjoint, subsets:

  1. The training set contains the data on which the algorithm optimizes the network directly.
  2. The validation set contains the data to evaluate the network performance during training.
  3. The test set is used to test possible inferences (predictions), i.e., to test the performance on data that had no influence on the network optimization.
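The three-way split can be sketched as follows (the 70/15/15 ratios are a common choice, not a HALCON requirement, and the file names are invented):

```python
import random

def split_dataset(samples, train_fraction=0.7, validation_fraction=0.15, seed=42):
    """Shuffle and split into disjoint training, validation, and test subsets."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_fraction)
    n_val = int(len(shuffled) * validation_fraction)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

samples = [f'image_{i:03d}.png' for i in range(100)]
train, validation, test = split_dataset(samples)
```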

deep learning

The term "deep learning" was originally used to describe the training of neural networks with multiple hidden layers. Today it is rather used as a generic term for several different concepts in machine learning. In HALCON, we use the term deep learning for methods using a neural network with multiple hidden layers.

epoch

In the context of deep learning, an epoch is a single training iteration over the entire training data, i.e., over all batches. Iterations over epochs should not be confused with the iterations over single batches (e.g., within an epoch).
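The distinction between epoch iterations and batch iterations can be made concrete with invented numbers:

```python
import math

num_samples, batch_size, num_epochs = 100, 25, 3

batch_iterations = 0
for epoch in range(num_epochs):                               # iteration over epochs
    for batch in range(math.ceil(num_samples / batch_size)):  # iterations within one epoch
        batch_iterations += 1                                 # one network update per batch
```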

errors

In the context of deep learning, we speak of an error when the inferred class of an instance does not match the real class (e.g., the ground truth label in the case of classification). Within HALCON, the term error in deep learning refers to the top-1 error.

feature map

A feature map is the output of a given layer.

feature pyramid

A feature pyramid is simply a group of feature maps, where every feature map originates from a different level, i.e., it is smaller than the feature maps of the preceding levels.

head

Heads are subnetworks. In certain architectures they attach to selected pyramid levels. These subnetworks process information from previous parts of the network in order to generate spatially resolved output, e.g., for the class predictions. From this they generate the output of the total network and thereby constitute the input of the losses.

hyperparameter

Like every machine learning model, CNNs contain many formulas with many parameters. During training the model learns from the data in the sense of optimizing the parameters. However, such models can have other, additional parameters, which are not directly learned during the regular training. These parameters have values set before starting the training. We refer to this last type of parameters as hyperparameters in order to distinguish them from the network parameters that are optimized during training. Or from another point of view, hyperparameters are solver-specific parameters. Prominent examples are the initial learning rate or the batch size.

inference phase

The inference phase is the stage in which a trained network is applied to predict (infer) instances (which can be the total input image or just a part of it) and, where applicable, their localization. Unlike in the training phase, the network is not changed anymore in the inference phase.

intersection over union

The intersection over union (IoU) is a measure to quantify the overlap of two areas. We determine the area common to both, the intersection, as well as the combined area, the union. The IoU is the ratio between the intersection area and the union area. How this concept is applied may differ between the methods.
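For axis-aligned boxes, the IoU can be computed directly from the corner coordinates (the (x1, y1, x2, y2) convention here is an assumption for illustration):

```python
def intersection_over_union(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - intersection
    return intersection / union if union else 0.0

# Two 2x2 boxes sharing a 1x1 corner: intersection 1, union 4 + 4 - 1 = 7.
score = intersection_over_union((0, 0, 2, 2), (1, 1, 3, 3))
```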

label

Labels are arbitrary strings used to define the class of an image. In HALCON, these labels are given by the image name (possibly followed by a combination of underscore and digits) or by the directory name, e.g., 'apple_01.png', 'pear.png', 'peach/01.png'.

layer and hidden layer

A layer is a building block of a neural network performing a specific task (e.g., convolution, pooling, etc.; for further details we refer to the "Solution Guide on Classification"). It can be seen as a container that receives weighted input, transforms it, and returns the output to the next layer. Input and output layers are connected to the dataset, i.e., the images or the labels, respectively. All layers in between are called hidden layers.

learning rate - hyperparameter 'learning_rate'

The learning rate is the weighting with which the gradient (see the entry for stochastic gradient descent, SGD) is considered when updating the arguments of the loss function. In simple words, when we want to optimize a function, the gradient tells us the direction in which to optimize and the learning rate determines how far along this direction we step. Alternative name: step size.
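The effect of the learning rate can be shown on the toy function f(x) = x², whose gradient is 2x (invented numbers; not HALCON's solver):

```python
def gradient_descent(x, learning_rate, steps):
    """Minimize f(x) = x**2 by stepping against its gradient 2*x."""
    for _ in range(steps):
        gradient = 2 * x                 # direction of steepest ascent
        x -= learning_rate * gradient    # step against it, scaled by the learning rate
    return x

near_minimum = gradient_descent(1.0, 0.1, 50)   # converges towards the minimum at 0
diverged = gradient_descent(1.0, 1.1, 50)       # step too large: overshoots and diverges
```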

level

Within a feature pyramid network, the term level denotes the whole group of layers whose feature maps have the same width and height. Thereby the input image represents level 0.

loss

A loss function compares the prediction of the network with the given information about what it should find in the image (and, if applicable, also where), and penalizes deviations. This is the function we optimize during training to adapt the network to a specific task. Alternative names: objective function, cost function, utility function.

momentum - hyperparameter 'momentum'

The momentum is used for the optimization of the loss function arguments. When the loss function arguments are updated (after the gradient has been calculated), a fraction of the previous update vector is added to the current update. This damps oscillations of the updates.
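A textbook sketch of a gradient step with momentum on the toy function f(x) = x², where a fraction `mu` of the previous update vector is carried over (illustration only; not HALCON's internal implementation):

```python
def sgd_with_momentum(x, learning_rate=0.1, mu=0.9, steps=300):
    """Minimize f(x) = x**2; `mu` is the fraction of the previous update carried over."""
    update = 0.0
    for _ in range(steps):
        gradient = 2 * x
        update = mu * update - learning_rate * gradient  # momentum term + gradient step
        x += update
    return x

result = sgd_with_momentum(1.0)   # converges towards the minimum at 0
```

With mu = 0 the update reduces to plain gradient descent.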
