Supervised vs. Unsupervised Learning

If we don’t know what the objective of a machine learning algorithm is, we may fail to build an accurate model. Knowing the types of machine learning algorithms is essential: it helps us see the bigger picture of machine learning and the goal behind the work being done in the field, and, most importantly, it puts us in a better position to break down a real problem and design a machine learning system.

The goal of most machine learning algorithms is to construct a model, or hypothesis. Machine learning models are commonly categorized as either supervised or unsupervised. In this note, we will discuss these two types, how they work, and how each is used in various fields.

The structure of this note:

  1. Supervised learning: categorization and its applications.
  2. Unsupervised learning: categorization and its applications.
  3. Supervised learning vs. unsupervised learning.

Let’s begin by taking a look at supervised learning.

What is Supervised Learning?

To supervise means to watch over, to provide direction for someone or something. Supervised learning is a process in which we teach or train a machine using data that is well labeled.

The most important concept to remember:

Supervised learning means learning by example.

The objective of a supervised learning model is to predict the correct label for newly presented input data. When training a supervised learning algorithm, the computer learns by example: it learns from past data and applies what it has learned to present data in order to predict future events. Our training data consists of inputs paired with the correct outputs; that is, each input is labeled or tagged with the right answer. In short, the machine already knows the desired output for every training example before it starts learning.
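
As a minimal sketch of “learning by example” (assuming scikit-learn is installed; the feature values and labels below are invented for illustration), the training data is a set of inputs paired with the correct labels, and the fitted model predicts a label for an unseen input:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row of X is an input (e.g., [redness, cone_shape]); each entry of y is its correct label.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = ["ice-cream", "ice-cream", "cupcake", "cupcake"]

model = DecisionTreeClassifier()
model.fit(X, y)                        # learn from labeled examples
print(model.predict([[0.85, 0.75]]))   # predict the label of a new, unseen input
```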

During training, the algorithm searches for patterns in the data that correlate with the desired outputs. After training, a supervised learning algorithm takes in new, unseen inputs and determines their labels based on the prior training data.

That definition might be too academic, so let’s consider a real-life example of the concept. Say we show a picture to a baby and tell her, “These are ice-creams.” The baby here plays the role of the computer; the ice-cream photo is our input, and the annotation is our output data. The baby keeps in mind that if the object is red and cone-shaped, it is an ice-cream; that is how she learns. The next time she sees an ice-cream picture, she will recognize it, because we have already labeled the image and she knows what ice-cream is. That is how supervised learning works.

Supervised Machine Learning Categorization

Supervised learning is classified into two categories: classification and regression.

Classification

A classification problem is one where the output variable is a category, such as pass or fail, red or blue, and so on. We use classification algorithms to predict which group a data point belongs to.

During training, a classification algorithm is given data points, each with an assigned category. The job of the classification algorithm is to take an input value and assign it to the class or group it fits into, based on the training data provided. The most common example of classification is determining whether an email is spam or not, which is called a binary classification problem. The algorithm is given training data consisting of emails labeled as spam or not spam. The model finds the features within the data that correlate with either class and creates a mapping function from inputs to outputs, y = f(x). Then, when given an unseen email, the model uses this function to predict whether the email is spam.
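
Here is a hedged sketch of the spam example, assuming scikit-learn; the tiny email list, the bag-of-words features, and the choice of a naive Bayes classifier are illustrative assumptions rather than the article’s own setup:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting at 10 am tomorrow",
          "free money, click here", "project report attached"]
labels = ["spam", "not spam", "spam", "not spam"]

vectorizer = CountVectorizer()            # turn raw text into word-count features
X = vectorizer.fit_transform(emails)

model = MultinomialNB()                   # learn the mapping y = f(x) from labeled emails
model.fit(X, labels)

new_email = vectorizer.transform(["claim your free prize"])
print(model.predict(new_email))           # expected to print ['spam']
```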

Note:

  • We use binary (or binomial) classification when grouping data with two kinds of labels.
  • We use multi-class (or multinomial) classification when grouping data with more than two kinds of labels.

Here are a few popular classification algorithms; a short code sketch follows the list:

  • Decision Trees are among the simplest yet most useful machine learning algorithms. We split the data according to a certain parameter. The tree has two kinds of entities, namely decision nodes and leaves: the leaves are the decisions or outcomes, and the decision nodes are where we split the data.

  • Random Forest is a set of decision trees built on various subsets of the given dataset; averaging them improves the predictive accuracy on that dataset. Instead of relying on one decision tree, a random forest takes the prediction from each tree and predicts the final output based on the majority vote of those predictions.

  • Support Vector Machines (SVM): The objective of the SVM algorithm is to find a hyperplane in N-dimensional space (where N is the number of features) that distinctly classifies the data points. That is, given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one group or the other, making it a non-probabilistic binary linear classifier.

  • We can use K-Nearest Neighbors (KNN) for both classification and regression problems, although KNN is more widely used for classification in industry. In the KNN algorithm, “k” is the number of nearest neighbors the model will consider. KNN classifies a data point based on the points that are most similar to it, using the labeled data to make an “educated guess” about how an unclassified point should be labeled. If k = 1, the point is simply assigned to the class of its nearest neighbor.
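
The following sketch compares the four classifiers above on scikit-learn’s bundled iris dataset; the dataset choice, the train/test split, and the default hyperparameters are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "SVM": SVC(),
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)               # learn from the labeled training split
    print(name, clf.score(X_test, y_test))  # accuracy on unseen test data
```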

Regression

A regression problem is one where the output variable is a real value, such as weight, height, or a dollar amount. Regression is most often used to predict numerical values based on previous observations. A typical example is predicting the price of future house sales based on the prevailing market price.
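
As a hedged sketch of that housing example (assuming scikit-learn; the sizes and prices are made-up numbers), a simple linear regression learns a continuous mapping from an input feature to a numeric target:

```python
from sklearn.linear_model import LinearRegression

sizes = [[50], [80], [100], [120], [150]]               # house size in square meters
prices = [150_000, 240_000, 300_000, 355_000, 450_000]  # observed sale prices

model = LinearRegression()
model.fit(sizes, prices)          # fit a line: price ≈ slope * size + intercept
print(model.predict([[90]]))      # estimate the price of a 90 m² house
```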

Some of the more familiar regression algorithms include the following; a short code sketch follows the list:

  • Linear regression performs the task of predicting a target y (output) from given features x (input). The input variable is called the independent variable, and the output variable is called the dependent variable. This regression technique finds a linear relationship between the independent variables and the dependent variable. Linear regression falls into two categories: simple linear regression, which has only one x variable and one y variable, and multiple linear regression, which has one y and two or more x variables.

    [Figure: multiple linear regression with Python]

  • Logistic regression predicts discrete values for the set of independent variables passed to it. It does so by mapping unseen data through the logistic (sigmoid) function, so it outputs a probability for the new data, a value between 0 and 1.

  • Polynomial regression is a special case of linear regression. This regression technique finds a curvilinear relationship between the independent variable x and the dependent variable y.

  • Ridge regression is a technique for analyzing multiple-regression data that suffer from multicollinearity, a state of very high intercorrelation among the independent variables. When multicollinearity occurs, least-squares estimates are unbiased, but their variances are large, so the estimates may be far from the actual values.
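
A short sketch contrasting plain linear regression, polynomial regression, and ridge regression, assuming scikit-learn; the noisy quadratic data, the polynomial degree, and the alpha value are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * x.ravel() ** 2 + x.ravel() + rng.normal(scale=0.5, size=50)  # noisy quadratic data

linear = LinearRegression().fit(x, y)                                              # straight line
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(x, y)   # curvilinear fit
ridge = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0)).fit(x, y)    # shrinks coefficients

for name, model in [("linear", linear), ("polynomial", poly), ("ridge", ridge)]:
    print(name, round(model.score(x, y), 3))   # R^2 on the training data
```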

Note:

  • If the label is categorical, the model is known as a “classification.”

  • If the label is numeric, the model is known as a “regression.”

Some practical applications of supervised learning algorithms in real life:

  • Bioinformatics: fingerprints, iris texture, earlobes, and so on.
  • Face detection and spam detection.
  • Signature recognition and speech recognition.
  • Weather forecasting.
  • Stock price prediction, among others.

What is Unsupervised Learning?

Now that we know the basics of supervised learning, it is time to move on to unsupervised learning.

Unsupervised learning is a method in which we train machines on data that is neither classified nor labeled. There is no labeled training set, so the machine must learn by itself: the computer has to be programmed to learn on its own, and to understand and provide insights from both structured and unstructured data.

The idea is to expose the machine to large volumes of varied data and allow it to learn from that data, providing insights that were previously unknown and identifying hidden patterns. As such, there aren’t necessarily defined outcomes from an unsupervised learning algorithm; instead, it determines what is different or interesting in the given dataset.

During unsupervised learning, the system is not given specific, labeled data sets, and the outcomes of most of the problems are largely unknown. In simple terms, the machine learning objective is “blind” when the system goes into operation, and the lack of known input and output pairs makes the process even more challenging.

Let’s make the concept simpler with an example. We show the baby a group of pictures of ice-creams and cupcakes. Assume the baby has not seen ice-creams or cupcakes before, so she does not know what the features of an ice-cream or a cupcake are. Unlike in the supervised learning example, the baby cannot categorize the ice-creams and cupcakes here, because nobody has told her what they are; the supervised process was straightforward precisely because we taught the baby all the details of the pictures.

In unsupervised learning, however, the whole process becomes a little trickier. The algorithm for an unsupervised learning system receives the same kind of input data as its supervised counterpart (in our case, ice-creams and cupcakes of different shapes and colors), but there are no specified outcomes; in simple terms, no label is associated with this learning. Once the baby (the computer) has seen the pictures (our input data), she learns from the information at hand: she recognizes all the similar objects and groups them together. In other words, the computer designs and labels the groups itself. Technically, there are bound to be some wrong answers, since a certain degree of probability is involved. However, just as humans do, machine learning draws its strength from the ability to recognize mistakes, learn from them, and make better estimates next time. This process is known as unsupervised learning.

Unsupervised Machine Learning Categorization

Unsupervised learning is classified into two categories: clustering and association problems.

Clustering: A clustering problem involves organizing unlabeled data into similar groups, such as grouping customers by purchasing behavior. It is one of the most common unsupervised learning methods, and we often use it in marketing campaigns. For example, clustering algorithms can group people with similar traits and a similar likelihood to purchase. Once we have the groups, we can run tests on each group with different marketing copy, which helps us better target our messaging to them in the future. (A short code sketch follows the two algorithms below.)

  • Hierarchical clustering is an algorithm that groups similar objects into groups called clusters. In this technique, each data point is initially considered an individual cluster. The algorithm goes over the various features of the data points and looks for similarity between them; when it finds similar data points, it groups them together. The process continues until the whole dataset has been grouped, which creates a hierarchy of clusters.

  • K-Means clustering works iteratively, with the main goal of producing clusters that can be labeled to identify them. K-means is a centroid-based, or distance-based, algorithm: we calculate distances to assign each point to a cluster. The smallest distance between a data point and a centroid determines which group the point belongs to, while making sure the clusters do not overlap. The centroid acts like the heart of the cluster. This ultimately gives us the clusters, which can be labeled as needed.
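
A hedged sketch of the customer-grouping idea with both algorithms, assuming scikit-learn; the two-feature customer data and the choice of three clusters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

# Unlabeled, made-up customer data: [annual spend, visits per month].
customers = np.array([
    [200,  2], [220,  3], [210,  2],
    [800, 10], [850, 12], [790,  9],
    [500,  1], [520,  2], [480,  1],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
hierarchical = AgglomerativeClustering(n_clusters=3).fit(customers)

print("k-means groups:     ", kmeans.labels_)        # cluster indices found without any labels
print("hierarchical groups:", hierarchical.labels_)
```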

Association: An association problem is one where you want to discover rules that describe large portions of your data; for example, if a person buys hamburger buns, she will likely buy hamburgers. (A short code sketch follows the two algorithms below.)

  • The Apriori algorithm is used for mining frequent itemsets and the relevant association rules. Its support measure captures the dependency of one data item on another, which helps us understand which data items influence the likelihood of something happening with other data items; for example, buying bread may lead the buyer to also buy milk and eggs, and that mapping helps increase profits for the store. This mapping can be learned with the Apriori algorithm, which yields rules as its output.

  • The Frequent Pattern Growth algorithm (FP-Growth) finds frequent patterns without candidate generation. The algorithm counts each repeated pattern, adds the counts to a table, then finds the most plausible item and sets it as the root of a tree. We then add the other data items to the tree and calculate their support; if a particular branch fails to meet the support threshold, it is pruned. Once all iterations are complete, a tree rooted at that item has been created, and it is then used to derive the association rules. FP-Growth is faster than Apriori because support is calculated and checked as the tree is built over successive iterations, rather than by generating candidate itemsets and testing their support against the dataset.
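
A sketch of frequent-itemset mining and association rules; it assumes the third-party mlxtend library (not mentioned in the article), a tiny made-up transaction list, and arbitrary support/confidence thresholds, and the exact call signatures may differ across mlxtend versions:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules, fpgrowth

transactions = [
    ["hamburger buns", "hamburgers", "ketchup"],
    ["hamburger buns", "hamburgers"],
    ["bread", "milk", "eggs"],
    ["bread", "milk"],
]

# One-hot encode the transactions into a boolean DataFrame.
encoder = TransactionEncoder()
df = pd.DataFrame(encoder.fit(transactions).transform(transactions), columns=encoder.columns_)

frequent_ap = apriori(df, min_support=0.5, use_colnames=True)    # Apriori: candidate generation
frequent_fp = fpgrowth(df, min_support=0.5, use_colnames=True)   # FP-Growth: no candidate generation

rules = association_rules(frequent_ap, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```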

Applications of Unsupervised Learning Algorithms

Some practical applications of unsupervised learning algorithms include:

  • Credit-card fraud detection.
  • Identification of human errors during data entry.
  • Amazon uses unsupervised learning to learn customers’ purchases and recommend the products that are most frequently bought together (an example of association rule mining).

Supervised Learning vs. Unsupervised Learning

The most significant difference between supervised and unsupervised learning is that in supervised learning each data point has a label, whereas in unsupervised learning there is NO label for any input, implying that our data has not been classified.
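
A side-by-side sketch of that difference, assuming scikit-learn; the feature matrix and the choice of KNN and k-means are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[1.0, 2.0], [1.1, 1.9], [8.0, 9.0], [8.2, 9.1]]   # the same inputs in both cases
y = ["small", "small", "large", "large"]                # labels exist only in the supervised case

# Supervised: every input is paired with a label.
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[1.05, 2.1]]))    # -> a label taken from y

# Unsupervised: no labels at all; the algorithm discovers the groups itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                    # -> cluster indices, not meaningful names
```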

Note:

  • Supervised learning will always have an input-output pair.
  • Unsupervised learning works with data that has no label or meaning attached, and we try to make some sense out of it.

[Figure: quick summary]

When Should You Choose Supervised Learning vs. Unsupervised Learning?

A good strategy for homing in on the right machine learning approach is to:

  • Evaluate the data: Is our data labeled or unlabeled? Is expert knowledge available to support additional labeling? This helps determine whether we should use a supervised or an unsupervised approach.

  • Review available algorithms that may suit the problem with regard to dimensionality (the number of features, attributes, or characteristics). Candidate algorithms should be tailored to the overall volume of data and its structure.

In general, we use unsupervised machine learning when we do not have data on desired outcomes, such as determining a target market for a new product that a business has never sold before. However, if we are trying to get a better understanding of our existing consumer base, then supervised learning is the optimal technique.

End Notes

Supervised learning and unsupervised learning are critical concepts in the field of machine learning. A proper understanding of the basics is crucial before you jump into the pool of different machine learning algorithms.

Learn on!


Resources:

There are many machine learning books you can read. I certainly didn’t cover enough information here to fill a chapter, but that doesn’t mean you can’t keep learning! Fill your mind with more awesomeness, starting with the excellent links below.

  1. Supervised and unsupervised learning
  2. Machine learning course by Andrew Ng
  3. 5 Beginner Friendly Steps to Learn Machine Learning and Data Science with Python
  4. Machine Learning (ML) vs. AI and their Important Differences
  5. Data Science Dojo: https://datasciencedojo.com/

Translated from: https://medium.com/nothingaholic/supervised-vs-unsupervised-learning-eb4edc1c803b
