Article source | 深度学习与NLP (Deep Learning & NLP)

Few-shot learning (FSL) is a problem AI researchers run into constantly. Deep learning needs large amounts of data to train a good model. Take the classic MNIST classification task: it has 10 classes and a training set of 60,000 images, roughly 6,000 per class. But consider ourselves: do we humans need to see thousands of images before we can tell the digits 0 through 9 apart? Obviously not. This shows how large the gap still is between today's deep learning techniques and human intelligence, and few-shot learning is a key problem for closing that gap.

A second important motivation is the cost of building new datasets. Staying with classification as the example: a new dataset requires labeling large amounts of data, and the labeling sometimes requires domain experts (for example, annotating medical images), which is both time-consuming and expensive. If few-shot learning were solved, labeling only a handful of images per class would be enough to automatically label the remaining bulk of the data with high accuracy.

For these two reasons, few-shot learning is an attractive field of substantial research significance and industrial value. This collection gathers recent few-shot learning surveys, datasets, models/algorithms, and application resources from the deep learning literature, shared here for everyone.
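
To make the setting concrete: few-shot classification is usually evaluated with "N-way K-shot" episodes, where the model receives only K labeled examples (the support set) for each of N classes and must classify held-out queries. The sketch below illustrates episode sampling; it is not from the original post, and all names and numbers in it are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode from a dict {class_id: list of examples}.

    The support set holds the only labels the model may use; the query
    set is what it must classify from those few examples.
    """
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))
        support += [(data_by_class[c][i], label) for i in idx[:k_shot]]
        query += [(data_by_class[c][i], label) for i in idx[k_shot:k_shot + n_query]]
    return support, query

# Toy usage: 10 classes of random 64-d "features", 100 examples each.
data = {c: [rng.normal(size=64) for _ in range(100)] for c in range(10)}
support, query = sample_episode(data)
print(len(support), len(query))  # 5 support examples, 75 queries
```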

Compiled from resources on the web; original source: https://github.com/tata1661/FewShotPapers

Contents

Survey papers

Data

Models

    Multi-task learning

    Embedding learning

    Learning with external memory

    Generative modeling

Algorithms

    Fine-tuning existing parameters

    Fine-tuning meta-learned parameters

    Learning the search steps

Applications

    Computer vision

    Robotics

    Natural language processing

    Acoustic signal processing

    Others

Theory

Survey papers

Generalizing from a few examples: A survey on few-shot learning, CSUR, 2020. Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni.

Data

Learning from one example through shared densities on transforms, in CVPR, 2000. E. G. Miller, N. E. Matsakis, and P. A. Viola.

Domain-adaptive discriminative one-shot learning of gestures, in ECCV, 2014. T. Pfister, J. Charles, and A. Zisserman.

One-shot learning of scene locations via feature trajectory transfer, in CVPR, 2016. R. Kwitt, S. Hegenbart, and M. Niethammer.

Low-shot visual recognition by shrinking and hallucinating features, in ICCV, 2017. B. Hariharan and R. Girshick.

Improving one-shot learning through fusing side information, arXiv preprint, 2017. Y.-H. Tsai and R. Salakhutdinov.

Fast parameter adaptation for few-shot image captioning and visual question answering, in ACM MM, 2018. X. Dong, L. Zhu, D. Zhang, Y. Yang, and F. Wu.

Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning, in CVPR, 2018. Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang.

Low-shot learning with large-scale diffusion, in CVPR, 2018. M. Douze, A. Szlam, B. Hariharan, and H. Jégou.

Diverse few-shot text classification with multiple metrics, in NAACL-HLT, 2018. M. Yu, X. Guo, J. Yi, S. Chang, S. Potdar, Y. Cheng, G. Tesauro, H. Wang, and B. Zhou.

Delta-encoder: An effective sample synthesis method for few-shot object recognition, in NeurIPS, 2018. E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, A. Kumar, R. Feris, R. Giryes, and A. Bronstein.

Low-shot learning via covariance-preserving adversarial augmentation networks, in NeurIPS, 2018. H. Gao, Z. Shou, A. Zareian, H. Zhang, and S. Chang.

AutoAugment: Learning augmentation policies from data, in CVPR, 2019. E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le.

EDA: Easy data augmentation techniques for boosting performance on text classification tasks, in EMNLP and IJCNLP, 2019. J. Wei and K. Zou.
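
The last two entries above are data-augmentation methods. EDA (Wei & Zou, 2019) in particular consists of four simple text operations: synonym replacement, random insertion, random swap, and random deletion. Below is a sketch of the two operations that need no thesaurus (the other two use WordNet in the paper); this is an illustrative approximation, not the authors' code:

```python
import random

def random_swap(words, n=1):
    """EDA-style random swap: exchange two randomly chosen word positions n times."""
    words = words[:]
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """EDA-style random deletion: drop each word with probability p,
    keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

print(random_swap("few shot learning needs little data".split()))
print(random_deletion("few shot learning needs little data".split()))
```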

Models

    Multi-task learning

Multi-task transfer methods to improve one-shot learning for multimedia event detection, in BMVC, 2015. W. Yan, J. Yap, and G. Mori.

Label efficient learning of transferable representations across domains and tasks, in NeurIPS, 2017. Z. Luo, Y. Zou, J. Hoffman, and L. Fei-Fei.

Multi-content GAN for few-shot font style transfer, in CVPR, 2018. S. Azadi, M. Fisher, V. G. Kim, Z. Wang, E. Shechtman, and T. Darrell.

Feature space transfer for data augmentation, in CVPR, 2018. B. Liu, X. Wang, M. Dixit, R. Kwitt, and N. Vasconcelos.

One-shot unsupervised cross domain translation, in NeurIPS, 2018. S. Benaim and L. Wolf.

Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data, in ECCV, 2018. Y. Zhang, H. Tang, and K. Jia.

Few-shot charge prediction with discriminative legal attributes, in COLING, 2018. Z. Hu, X. Li, C. Tu, Z. Liu, and M. Sun.

Few-shot adversarial domain adaptation, in NeurIPS, 2017. S. Motiian, Q. Jones, S. Iranmanesh, and G. Doretto.

    Embedding learning

Object classification from a single example utilizing class relevance metrics, in NeurIPS, 2005. M. Fink.

Few-shot learning through an information retrieval lens, in NeurIPS, 2017. E. Triantafillou, R. Zemel, and R. Urtasun.

Optimizing one-shot recognition with micro-set learning, in CVPR, 2010. K. D. Tang, M. F. Tappen, R. Sukthankar, and C. H. Lampert.

Siamese neural networks for one-shot image recognition, in ICML Deep Learning Workshop, 2015. G. Koch, R. Zemel, and R. Salakhutdinov.

Matching networks for one shot learning, in NeurIPS, 2016. O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra et al.

Learning feed-forward one-shot learners, in NeurIPS, 2016. L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi.

Low data drug discovery with one-shot learning, ACS Central Science, 2017. H. Altae-Tran, B. Ramsundar, A. S. Pappu, and V. Pande.

Prototypical networks for few-shot learning, in NeurIPS, 2017. J. Snell, K. Swersky, and R. S. Zemel.

Attentive recurrent comparators, in ICML, 2017. P. Shyam, S. Gupta, and A. Dukkipati.

Learning algorithms for active learning, in ICML, 2017. P. Bachman, A. Sordoni, and A. Trischler.

Active one-shot learning, arXiv preprint, 2017. M. Woodward and C. Finn.

Structured set matching networks for one-shot part labeling, in CVPR, 2018. J. Choi, J. Krishnamurthy, A. Kembhavi, and A. Farhadi.

Low-shot learning from imaginary data, in CVPR, 2018. Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan.

Learning to compare: Relation network for few-shot learning, in CVPR, 2018. F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales.

Dynamic conditional networks for few-shot learning, in ECCV, 2018. F. Zhao, J. Zhao, S. Yan, and J. Feng.

TADAM: Task dependent adaptive metric for improved few-shot learning, in NeurIPS, 2018. B. Oreshkin, P. R. López, and A. Lacoste.

Meta-learning for semi-supervised few-shot classification, in ICLR, 2018. M. Ren, S. Ravi, E. Triantafillou, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel.

Few-shot learning with graph neural networks, in ICLR, 2018. V. G. Satorras and J. B. Estrach.

A simple neural attentive meta-learner, in ICLR, 2018. N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel.

Meta-learning with differentiable closed-form solvers, in ICLR, 2019. L. Bertinetto, J. F. Henriques, P. Torr, and A. Vedaldi.

Learning to propagate labels: Transductive propagation network for few-shot learning, in ICLR, 2019. Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang.
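
Many of the embedding-learning methods above share one idea: learn an embedding in which same-class examples cluster, then classify by distance. Prototypical networks (Snell et al., NeurIPS 2017, listed above) make this especially clear: average each class's support embeddings into a prototype and assign each query to the nearest one. A minimal sketch of the classification step, with the learned embedding network stubbed out (illustrative code, not the authors' implementation):

```python
import numpy as np

def embed(x):
    # Stand-in for the learned embedding network f_phi; identity here.
    return np.asarray(x)

def prototypical_predict(support_x, support_y, query_x, n_way):
    """Assign each query to the class whose prototype (mean support
    embedding) is nearest in squared Euclidean distance."""
    z_s = np.stack([embed(x) for x in support_x])     # (N*K, d)
    z_q = np.stack([embed(x) for x in query_x])       # (Q, d)
    y_s = np.asarray(support_y)
    prototypes = np.stack([z_s[y_s == c].mean(axis=0) for c in range(n_way)])
    d2 = ((z_q[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)                          # predicted class per query

# Toy usage: 2-way 3-shot with 2-d points.
sx = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
sy = [0, 0, 0, 1, 1, 1]
print(prototypical_predict(sx, sy, [[0.2, 0.3], [5.5, 5.4]], n_way=2))  # [0 1]
```

At training time the negative squared distances serve as logits for a softmax cross-entropy loss, optimized over many sampled episodes.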

    Learning with external memory

Meta-learning with memory-augmented neural networks, in ICML, 2016. A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap.

Few-shot object recognition from machine-labeled web images, in CVPR, 2017. Z. Xu, L. Zhu, and Y. Yang.

Learning to remember rare events, in ICLR, 2017. Ł. Kaiser, O. Nachum, A. Roy, and S. Bengio.

Meta networks, in ICML, 2017. T. Munkhdalai and H. Yu.

Memory matching networks for one-shot image recognition, in CVPR, 2018. Q. Cai, Y. Pan, T. Yao, C. Yan, and T. Mei.

Compound memory networks for few-shot video classification, in ECCV, 2018. L. Zhu and Y. Yang.

Memory, show the way: Memory based few shot word representation learning, in EMNLP, 2018. J. Sun, S. Wang, and C. Zong.

Rapid adaptation with conditionally shifted neurons, in ICML, 2018. T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler.

Adaptive posterior learning: Few-shot learning with a surprise-based memory module, in ICLR, 2019. T. Ramalho and M. Garnelo.
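
The methods in this group attach an external key-value memory to the network: support examples are written into memory slots, and a query is answered by matching against the stored keys, so new classes can be absorbed without retraining the weights. A minimal sketch of the soft read operation that many of these designs share (illustrative, not any one paper's implementation):

```python
import numpy as np

def memory_read(query, keys, values, top_k=3):
    """Soft read: cosine-match the query against stored keys and return
    the softmax-weighted combination of the top-k matching values."""
    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
    top = np.argsort(sims)[-top_k:]
    w = np.exp(sims[top] - sims[top].max())
    w /= w.sum()
    return w @ values[top]

# Toy usage: 10 memory slots with 16-d keys and 4-d values.
rng = np.random.default_rng(0)
keys, values = rng.normal(size=(10, 16)), rng.normal(size=(10, 4))
print(memory_read(rng.normal(size=16), keys, values))
```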

    Generative modeling

One-shot learning of object categories, TPAMI, 2006. L. Fei-Fei, R. Fergus, and P. Perona.

Learning to learn with compound HD models, in NeurIPS, 2011. A. Torralba, J. B. Tenenbaum, and R. R. Salakhutdinov.

One-shot learning with a hierarchical nonparametric Bayesian model, in ICML Workshop on Unsupervised and Transfer Learning, 2012. R. Salakhutdinov, J. Tenenbaum, and A. Torralba.

Human-level concept learning through probabilistic program induction, Science, 2015. B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum.

One-shot generalization in deep generative models, in ICML, 2016. D. Rezende, I. Danihelka, K. Gregor, and D. Wierstra.

One-shot video object segmentation, in CVPR, 2017. S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool.

Towards a neural statistician, in ICLR, 2017. H. Edwards and A. Storkey.

Extending a parser to distant domains using a few dozen partially annotated examples, in ACL, 2018. V. Joshi, M. Peters, and M. Hopkins.

MetaGAN: An adversarial approach to few-shot learning, in NeurIPS, 2018. R. Zhang, T. Che, Z. Ghahramani, Y. Bengio, and Y. Song.

Few-shot autoregressive density estimation: Towards learning to learn distributions, in ICLR, 2018. S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, and N. de Freitas.

The variational homoencoder: Learning to learn high capacity generative models from few examples, in UAI, 2018. L. B. Hewitt, M. I. Nye, A. Gane, T. Jaakkola, and J. B. Tenenbaum.

Meta-learning probabilistic inference for prediction, in ICLR, 2019. J. Gordon, J. Bronskill, M. Bauer, S. Nowozin, and R. Turner.

Algorithms

    Fine-tuning existing parameters

Cross-generalization: Learning novel classes from a single example by feature replacement, in CVPR, 2005. E. Bart and S. Ullman.

One-shot adaptation of supervised deep convolutional models, in ICLR, 2013. J. Hoffman, E. Tzeng, J. Donahue, Y. Jia, K. Saenko, and T. Darrell.

Learning to learn: Model regression networks for easy small sample learning, in ECCV, 2016. Y.-X. Wang and M. Hebert.

Learning from small sample sets by combining unsupervised meta-training with CNNs, in NeurIPS, 2016. Y.-X. Wang and M. Hebert.

Efficient k-shot learning with regularized deep networks, in AAAI, 2018. D. Yoo, H. Fan, V. N. Boddeti, and K. M. Kitani.

CLEAR: Cumulative learning for one-shot one-class image recognition, in CVPR, 2018. J. Kozerawski and M. Turk.

Learning structure and strength of CNN filters for small sample size training, in CVPR, 2018. R. Keshari, M. Vatsa, R. Singh, and A. Noore.

Dynamic few-shot visual learning without forgetting, in CVPR, 2018. S. Gidaris and N. Komodakis.

Low-shot learning with imprinted weights, in CVPR, 2018. H. Qi, M. Brown, and D. G. Lowe.

Neural voice cloning with a few samples, in NeurIPS, 2018. S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou.

    Fine-tuning meta-learned parameters

Model-agnostic meta-learning for fast adaptation of deep networks, in ICML, 2017. C. Finn, P. Abbeel, and S. Levine.

Bayesian model-agnostic meta-learning, in NeurIPS, 2018. J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn.

Probabilistic model-agnostic meta-learning, in NeurIPS, 2018. C. Finn, K. Xu, and S. Levine.

Gradient-based meta-learning with learned layerwise metric and subspace, in ICML, 2018. Y. Lee and S. Choi.

Recasting gradient-based meta-learning as hierarchical Bayes, in ICLR, 2018. E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths.

Few-shot human motion prediction via meta-learning, in ECCV, 2018. L.-Y. Gui, Y.-X. Wang, D. Ramanan, and J. Moura.

The effects of negative adaptation in model-agnostic meta-learning, arXiv preprint, 2018. T. Deleu and Y. Bengio.

Amortized Bayesian meta-learning, in ICLR, 2019. S. Ravi and A. Beatson.

Meta-learning with latent embedding optimization, in ICLR, 2019. A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell.
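
MAML (Finn et al., ICML 2017), which opens this group, learns an initialization theta such that a few gradient steps on a new task's support set already give a good model; the outer loop updates theta using the post-adaptation loss on that task's query set. The sketch below shows the two-loop structure on toy linear-regression tasks, using the first-order MAML approximation (the second-derivative term is dropped); it is an illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """MSE loss and gradient for the linear model y_hat = w[0]*x + w[1]."""
    err = w[0] * x + w[1] - y
    return np.mean(err ** 2), np.array([np.mean(2 * err * x), np.mean(2 * err)])

def sample_task(k=5):
    """A toy task y = a*x + c with random a, c; support and query sets of k points."""
    a, c = rng.uniform(-2, 2, size=2)
    xs, xq = rng.uniform(-1, 1, size=k), rng.uniform(-1, 1, size=k)
    return (xs, a * xs + c), (xq, a * xq + c)

theta = np.zeros(2)                    # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01
for step in range(2000):
    (xs, ys), (xq, yq) = sample_task()
    _, g = loss_and_grad(theta, xs, ys)
    adapted = theta - inner_lr * g     # inner loop: adapt to the support set
    _, g_q = loss_and_grad(adapted, xq, yq)
    theta -= outer_lr * g_q            # outer loop: first-order meta-update
```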

    Learning the search steps

Optimization as a model for few-shot learning, in ICLR, 2017. S. Ravi and H. Larochelle.

Applications

    Computer vision

Learning robust visual-semantic embeddings, in CVPR, 2017. Y.-H. Tsai, L.-K. Huang, and R. Salakhutdinov.

Multi-attention network for one shot learning, in CVPR, 2017. P. Wang, L. Liu, C. Shen, Z. Huang, A. van den Hengel, and H. Tao Shen.

One-shot action localization by learning sequence matching network, in CVPR, 2018. H. Yang, X. He, and F. Porikli.

Few-shot and zero-shot multi-label learning for structured label spaces, in EMNLP, 2018. A. Rios and R. Kavuluru.

Meta-dataset: A dataset of datasets for learning to learn from few examples, arXiv preprint, 2019. E. Triantafillou, T. Zhu, V. Dumoulin, P. Lamblin, K. Xu, R. Goroshin, C. Gelada, K. Swersky, P.-A. Manzagol et al.

    Robotics

Towards one shot learning by imitation for humanoid robots, in ICRA, 2010. Y. Wu and Y. Demiris.

Learning manipulation actions from a few demonstrations, in ICRA, 2013. N. Abdo, H. Kretzschmar, L. Spinello, and C. Stachniss.

Learning assistive strategies from a few user-robot interactions: Model-based reinforcement learning approach, in ICRA, 2016. M. Hamaya, T. Matsubara, T. Noda, T. Teramae, and J. Morimoto.

One-shot imitation learning, in NeurIPS, 2017. Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba.

Continuous adaptation via meta-learning in nonstationary and competitive environments, in ICLR, 2018. M. Al-Shedivat, T. Bansal, Y. Burda, I. Sutskever, I. Mordatch, and P. Abbeel.

Deep online learning via meta-learning: Continual adaptation for model-based RL, in ICLR, 2018. A. Nagabandi, C. Finn, and S. Levine.

Meta-learning language-guided policy learning, in ICLR, 2019. J. D. Co-Reyes, A. Gupta, S. Sanjeev, N. Altieri, J. DeNero, P. Abbeel, and S. Levine.

    Natural language processing

High-risk learning: Acquiring new word vectors from tiny data, in EMNLP, 2017. A. Herbelot and M. Baroni.

FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation, in EMNLP, 2018. X. Han, H. Zhu, P. Yu, Z. Wang, Y. Yao, Z. Liu, and M. Sun.

    Acoustic signal processing

One-shot learning of generative speech concepts, in CogSci, 2014. B. Lake, C.-Y. Lee, J. Glass, and J. Tenenbaum.

Machine speech chain with one-shot speaker adaptation, in INTERSPEECH, 2018. A. Tjandra, S. Sakti, and S. Nakamura.

Investigation of using disentangled and interpretable representations for one-shot cross-lingual voice conversion, in INTERSPEECH, 2018. S. H. Mohammadi and T. Kim.

    Others

A meta-learning perspective on cold-start recommendations for items, in NeurIPS, 2017. M. Vartak, A. Thiagarajan, C. Miranda, J. Bratman, and H. Larochelle.

SMASH: One-shot model architecture search through hypernetworks, in ICLR, 2018. A. Brock, T. Lim, J. Ritchie, and N. Weston.

Theory

Learning to learn around a common mean, in NeurIPS, 2018. G. Denevi, C. Ciliberto, D. Stamos, and M. Pontil.

Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm, in ICLR, 2018. C. Finn and S. Levine.
