Predictive learning vs. representation learning

When you take a machine learning class, there’s a good chance it’s divided into a unit on supervised learning and a unit on unsupervised learning. We certainly care about this distinction for a practical reason: often there’s orders of magnitude more data available if we don’t need to collect ground-truth labels. But we also tend to think it matters for more fundamental reasons. In particular, the following are some common intuitions:

  • In supervised learning, the particular algorithm usually matters less than how well we engineer and tune it. In unsupervised learning, we’d think carefully about the structure of the data and build a model which reflects that structure.
  • In supervised learning, except in small-data settings, we throw whatever features we can think of at the problem. In unsupervised learning, we carefully pick the features we think best represent the aspects of the data we care about.
  • Supervised learning seems to have many algorithms with strong theoretical guarantees, and unsupervised learning very few.
  • Off-the-shelf algorithms perform very well on a wide variety of supervised tasks, but unsupervised learning requires more care and expertise to come up with an appropriate model.

I’d argue that this is deceptive. I think the real division in machine learning isn’t between supervised and unsupervised learning, but between what I’ll term predictive learning and representation learning. I haven’t heard it described in precisely this way before, but I think this distinction reflects a lot of our intuitions about how to approach a given machine learning problem.

In predictive learning, we observe data drawn from some distribution, and we are interested in predicting some aspect of this distribution. In textbook supervised learning, for instance, we observe a bunch of pairs (x_1, y_1), \ldots, (x_N, y_N), and given some new example x, we’re interested in predicting something about the corresponding y. In density modeling (a form of unsupervised learning), we observe unlabeled data x_1, \ldots, x_N, and we are interested in modeling the distribution the data comes from, perhaps so we can perform inference in that distribution. In each of these cases, there is a well-defined predictive task where we try to predict some aspect of the observable values, possibly given some other aspect.
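
To make the contrast concrete, here’s a minimal sketch of both flavors of predictive learning, using scikit-learn on made-up data (the dataset, the logistic regression, and the Gaussian mixture are all illustrative assumptions, not a recommendation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Supervised predictive learning: observe (x_i, y_i) pairs,
# then predict something about y for a new x.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic labels
clf = LogisticRegression().fit(X, y)
x_new = np.array([[0.5, -0.2]])
print("predicted label:", clf.predict(x_new))

# Density modeling (unsupervised, but still predictive): observe
# unlabeled x_i, model the distribution they came from, and then
# answer queries about that distribution.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("log-density of new point:", gmm.score_samples(x_new))
```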

In representation learning, our goal isn’t to predict observables, but to learn something about the underlying structure. In cognitive science and AI, a representation is a formal system which maps to some domain of interest in systematic ways. A good representation allows us to answer queries about the domain by manipulating that system. In machine learning, representations often take the form of vectors, either real- or binary-valued, and we can manipulate these representations with operations like Euclidean distance and matrix multiplication. For instance, PCA learns representations of data points as vectors. We can ask how similar two data points are by computing the Euclidean distance between their PCA representations.
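
As a minimal sketch of that PCA example (scikit-learn on random toy data; the data and the choice of two components are assumptions made purely for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))        # 100 data points with 10 features each

# Learn a 2-D vector representation of each data point.
Z = PCA(n_components=2).fit_transform(X)

# Answer a query about the domain ("how similar are points 0 and 1?")
# by manipulating the representation: Euclidean distance in the embedding.
print("distance in PCA space:", np.linalg.norm(Z[0] - Z[1]))
```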

In representation learning, the goal isn’t to make predictions about observables, but to learn a representation which would later help us to answer various queries. Sometimes the representations are meant for people, such as when we visualize data as a two-dimensional embedding. Sometimes they’re meant for machines, such as when the binary vector representations learned by deep Boltzmann machines are fed into a supervised classifier. In either case, what’s important is that mathematical operations map to the underlying relationships in the data in systematic ways.

Whether your goal is prediction or representation learning influences the sorts of techniques you’ll use to solve the problem. If you’re doing predictive learning, you’ll probably try to engineer a system which exploits as much information as possible about the data, carefully using a validation set to tune parameters and monitor overfitting. If you’re doing representation learning, there’s no good quantitative criterion, so you’ll more likely build a model based on your intuitions about the domain, and then keep staring at the learned representations to see if they make intuitive sense.
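
The two debugging loops look quite different in code. Here’s a rough sketch of each, again with scikit-learn on synthetic data (the models, the hyperparameter grid, and the nearest-neighbor check are all illustrative assumptions rather than a recipe):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.1, size=300)

# Predictive learning: hold out a validation set and tune against a number.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
for alpha in [0.01, 0.1, 1.0, 10.0]:
    score = Ridge(alpha=alpha).fit(X_tr, y_tr).score(X_val, y_val)
    print(f"alpha={alpha}: validation R^2 = {score:.3f}")

# Representation learning: there's no single number to optimize, so we
# inspect the learned representation instead, e.g. via nearest neighbors.
Z = PCA(n_components=2).fit_transform(X)
dists = np.linalg.norm(Z - Z[0], axis=1)
print("nearest neighbors of point 0 in the embedding:", np.argsort(dists)[1:4])
```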

In other words, this distinction parallels the differences I listed above between supervised and unsupervised learning. This shouldn’t be surprising, because the two dimensions are strongly correlated: most supervised learning is predictive learning, and most unsupervised learning is representation learning. So to see which of these dimensions is really the crux of the issue, let’s look at cases where the two come apart.

Language modeling is a perfect example of an application which is unsupervised but predictive. The goal is to take a large corpus of unlabeled text (such as Wikipedia) and learn a distribution over English sentences. The problem is motivated by Bayesian models for speech recognition: a distribution over sentences can be used as a prior for what a person is likely to say. The goal, then, is to model the distribution, and any additional structure is unnecessary. Log-linear models, such as that of Mnih and Hinton [1], are very good at this, and recurrent neural nets [2] are even better. These are the sorts of approaches we’d normally apply in a supervised setting: very good at making predictions, but often hard to interpret. One state-of-the-art algorithm for density modeling of text is PAQ [3], which is a heavily engineered ensemble of sequential predictors, somewhat reminiscent of the winning entries of the Netflix competition.
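
As a toy illustration of why language modeling is predictive even though it’s unsupervised, here’s a sketch of a bigram model estimated from raw text by counting (the tiny corpus and the add-one smoothing are assumptions for illustration; the actual models in [1-3] are far more sophisticated):

```python
from collections import Counter

# Unlabeled text is the only input; no ground-truth labels are needed.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams to estimate P(next word | current word).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab_size = len(set(corpus))

def prob(next_word, word, alpha=1.0):
    # Add-alpha smoothed conditional probability.
    return (bigrams[(word, next_word)] + alpha) / (unigrams[word] + alpha * vocab_size)

# The model's job is purely predictive: assign probabilities to continuations.
print("P(sat | cat) =", prob("sat", "cat"))
print("P(dog | cat) =", prob("dog", "cat"))
```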

On the flip side, supervised neural nets are often used to learn representations. One example is Collobert-Weston networks [4], which attempt to solve a number of supervised NLP tasks by learning representations which are shared between them. Some of the tasks are fairly simple and have a large amount of labeled data, such as predicting which of two words should be used to fill in the blank. Others are harder and have less data available, such as semantic role labeling. The simpler tasks are artificial, and they are there to help learn a representation of words and phrases as vectors, where similar words and phrases map to nearby vectors; this representation should then help performance on the harder tasks. We don’t care about the performance on those tasks per se; we care whether the learned embeddings reflect the underlying structure. To debug and tune the algorithm, we’d focus on whether the representations make intuitive sense, rather than on the quantitative performance. There are no theoretical guarantees that such an approach would work — it all depends on our intuitions of how the different tasks are related.
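
A heavily simplified sketch of that shared-representation pattern, in PyTorch with made-up sizes (this is not the actual architecture of [4], just the general idea of one embedding table shared across task-specific heads):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 50                 # illustrative sizes

# One word-embedding table shared across tasks: this is the representation
# we actually care about; the task heads exist mainly to shape it.
shared_embedding = nn.Embedding(vocab_size, embed_dim)

fill_in_blank_head = nn.Linear(embed_dim, 2)     # easy task, lots of data
role_labeling_head = nn.Linear(embed_dim, 20)    # harder task, less data

word_ids = torch.tensor([3, 17, 42])             # a toy "phrase"
phrase_vec = shared_embedding(word_ids).mean(dim=0)   # crude phrase vector

blank_logits = fill_in_blank_head(phrase_vec)
role_logits = role_labeling_head(phrase_vec)

# Training would sum the per-task losses, so gradients from every task flow
# into the shared embeddings; afterwards, nearby embedding vectors should
# correspond to words that behave similarly across the tasks.
```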

Based on these two examples, it seems like it’s the predictive/representation dimension which determines how we should approach the problem, rather than supervised/unsupervised.

In machine learning, we tend to think there’s no solid theoretical framework for unsupervised learning. But really, the problem is that we haven’t begun to formally characterize the problem of representation learning. If you just want to build a density modeler, that’s about as well understood as the supervised case. But if the goal is to learn representations which capture the underlying structure, that’s much harder to formalize. In my next post, I’ll try to take a stab at characterizing what representation learning is actually about.

[1] Mnih, A., and Hinton, G. E. Three new graphical models for statistical language modelling. ICML 2007

[2] Sutskever, I., Martens, J., and Hinton, G. E. Generating text with recurrent neural networks. ICML 2011

[3] Mahoney, M. Adaptive weighting of context models for lossless data compression. Florida Institute of Technology Tech report, 2005

[4] Collobert, R., and Weston, J. A unified architecture for natural language processing: deep neural networks with multitask learning. ICML 2008

