Foreword:

This post concludes the "Active Learning" series. My current job no longer involves active-learning-related work, so I most likely will not publish further posts on this topic. Everything I am able to share is already contained in these posts. I hope they are helpful!


Posts in the Active Learning series:

【Active Learning - 00】A roundup of important active learning resources (papers with source code, and some AL researchers): https://blog.csdn.net/Houchaoqun_XMU/article/details/85245714

【Active Learning - 01】A deep dive into "active learning": how to significantly reduce annotation cost: https://blog.csdn.net/Houchaoqun_XMU/article/details/80146710

【Active Learning - 02】Fine-tuning Convolutional Neural Networks for Biomedical Image Analysis: Actively and Incrementally: https://blog.csdn.net/Houchaoqun_XMU/article/details/78874834

【Active Learning - 03】Adaptive Active Learning for Image Classification: https://blog.csdn.net/Houchaoqun_XMU/article/details/89553144

【Active Learning - 04】Generative Adversarial Active Learning: https://blog.csdn.net/Houchaoqun_XMU/article/details/89631986

【Active Learning - 05】Adversarial Sampling for Active Learning: https://blog.csdn.net/Houchaoqun_XMU/article/details/89736607

【Active Learning - 06】An active learning system for image classification tasks (theory): https://blog.csdn.net/Houchaoqun_XMU/article/details/89717028

【Active Learning - 07】An active learning system for image classification tasks (practice and demo): https://blog.csdn.net/Houchaoqun_XMU/article/details/89955561

【Active Learning - 08】A collection of active learning materials for sharing: https://blog.csdn.net/Houchaoqun_XMU/article/details/96210160

【Active Learning - 09】Active learning strategies and their application to image classification: research background and significance: https://blog.csdn.net/Houchaoqun_XMU/article/details/100177750

【Active Learning - 10】An overview of image classification techniques and active learning methods: https://blog.csdn.net/Houchaoqun_XMU/article/details/101126055

【Active Learning - 11】A noise-robust semi-supervised active learning framework: https://blog.csdn.net/Houchaoqun_XMU/article/details/102417465

【Active Learning - 12】A two-stage active learning method based on generative adversarial networks: https://blog.csdn.net/Houchaoqun_XMU/article/details/103093810

【Active Learning - 13】Summary and outlook & organized references for sharing (The End...): https://blog.csdn.net/Houchaoqun_XMU/article/details/103094113


Summary and Outlook

6.1 Summary

As a machine learning approach that can significantly reduce annotation cost, active learning has attracted wide attention from both academia and industry. Since 1974, a growing number of active learning strategies and frameworks have been proposed and applied in different fields. Moreover, the rise of deep learning brought with it a demand for large amounts of labeled data, which further highlights the importance of active learning. Image classification, one of the aims of artificial intelligence research, helps people categorize massive collections of images and has broad application prospects in everyday life. Motivated by the importance of active learning and the application prospects of image classification, this work studies active learning strategies and their application to image classification, providing practical experience and suggestions for the future application and development of active learning in image classification tasks. The main contributions are summarized as follows.

First, we surveyed and reviewed active learning methods and image classification techniques. We summarized the basic active learning framework and several common query strategies, briefly discussed extensions of active learning, and outlined mainstream image classification techniques based on both traditional machine learning and deep learning.

Second, we discussed semi-supervised active learning in detail and proposed the NRMSL-BMAL framework. Its core ideas are: 1) to address noisy samples, the NRMSL method both reduces the generation of some noisy samples and improves the model's noise robustness through the SEC-CNN method; 2) to address the heavy redundancy among samples selected by batch-mode active learning (BMAL), a clustering algorithm based on convolutional autoencoders is introduced, which improves the diversity of the selected samples and reduces their mutual redundancy to some extent; 3) experiments on five image classification datasets show that NRMSL-BMAL reduces annotation cost by 44.34% to 95.93%. In addition, we compared single-mode active learning with BMAL in terms of time cost and annotation cost: although single-mode active learning may further reduce annotation cost, its performance is unstable across datasets, the remaining room for improvement is small, and it incurs a large time cost.

Third, we discussed generative adversarial networks and their variants in detail and proposed a two-stage active learning method based on generative adversarial networks. Its core ideas are: 1) fusing AAE and DCGAN and training the resulting AAE-DCGAN model in a semi-supervised way, which fully exploits the labeled samples that active learning produces incrementally; 2) combining generative membership queries with pool-based active learning, which improves the quality of the generated images and reduces the computational cost of the active learning loop while still significantly reducing annotation cost.

Finally, we analyzed practical application scenarios for active learning, then designed and implemented an active learning system for image classification tasks and verified its effectiveness and stability on image classification tasks.
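To make the pool-based loop summarized above concrete, here is a minimal sketch of batch-mode uncertainty sampling. It is an illustrative assumption-laden example, not the NRMSL-BMAL implementation: a logistic regression on synthetic data stands in for the CNN classifiers, and the batch size and round count are made up.

```python
# Minimal pool-based active learning loop with least-confidence batch queries.
# Illustrative sketch only; model, data, and budgets are placeholder choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(range(10))                      # small initial labeled seed set
pool = [i for i in range(500) if i not in labeled]
model = LogisticRegression(max_iter=1000)

for _ in range(5):                             # five query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confidence score
    batch = np.argsort(uncertainty)[-10:]      # batch-mode query (BMAL-style)
    queried = [pool[i] for i in batch]
    labeled.extend(queried)                    # the oracle labels them (we reuse y)
    pool = [i for i in pool if i not in queried]

print(len(labeled))                            # 10 initial + 5 rounds x 10 queried
```

In each round the model is retrained on the growing labeled set and the ten samples it is least confident about are sent to the oracle; real systems replace the synthetic oracle with human annotators.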

6.2 Outlook

Although the two active learning methods and the active learning system proposed in this work achieved some research results on image classification tasks, namely 1) significantly reducing annotation cost and 2) being applied to real-world needs in the form of a system, the proposed methods still leave room for improvement, as follows:

(1) Handling more complex images. The two proposed active learning methods achieved good test results on several common image classification datasets, but those images are relatively simple, whereas practical applications often involve far more complex images. In future work, we will try to improve the relevant components (for example, the convolutional clustering method in the NRMSL-BMAL framework, and the quality of the images generated by AAE-GANs-AL) so that they can handle more complex images.
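The clustering step mentioned above can be sketched as follows. As an illustrative stand-in for the convolutional-autoencoder embedding used in NRMSL-BMAL, this sketch clusters a PCA embedding with k-means and queries one representative per cluster; the dataset, embedding dimension, and query budget are all assumptions.

```python
# Diversity-aware batch selection: cluster the candidate pool and query one
# representative per cluster. KMeans on a PCA embedding stands in for the
# convolutional-autoencoder features described in the text.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=200, centers=8, n_features=20, random_state=0)
emb = PCA(n_components=5, random_state=0).fit_transform(X)  # stand-in embedding

k = 8  # query budget: one representative per cluster
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)

batch = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(emb[members] - km.cluster_centers_[c], axis=1)
    batch.append(int(members[np.argmin(dists)]))  # sample closest to centroid

print(len(batch), len(set(batch)))  # k distinct representatives
```

Because each queried sample comes from a different cluster, the batch carries less redundant information than the top-k most uncertain samples alone would.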

(2) Introducing better active learning strategies. The proposed methods use only basic strategies, such as uncertainty sampling and diversity sampling. In future work, we will introduce more refined strategies as required by the application.
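For reference, the basic uncertainty strategies mentioned here usually reduce to one of a few scores computed from the model's predicted class probabilities. A small sketch (the probability vector is made up purely for illustration):

```python
# Three common uncertainty scores over a predicted probability vector.
import numpy as np

def least_confidence(p):
    return 1.0 - p.max()          # higher = more uncertain

def margin(p):
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]      # smaller top-2 gap = more uncertain

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())  # higher = more uncertain

p = np.array([0.5, 0.3, 0.2])     # illustrative predicted probabilities
print(least_confidence(p), margin(p), entropy(p))
```

More refined strategies (expected model change, query-by-committee, and so on) replace these scores but keep the same query loop.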

(3) Introducing incremental training. Incremental learning can fully reuse previous training results, so samples that have already been trained on need not be trained on again. It can therefore significantly reduce the time cost of repeated retraining during the iterations of an active learning method.
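A minimal sketch of this idea, using scikit-learn's `partial_fit` as a stand-in for warm-starting a neural network between query rounds (the data and round sizes are illustrative assumptions):

```python
# Warm-start training across active learning rounds: keep the model state and
# fit only the newly queried samples, instead of retraining from scratch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=300, random_state=0)
classes = np.unique(y)

model = SGDClassifier(random_state=0)
model.partial_fit(X[:50], y[:50], classes=classes)   # round 0: initial seed set

for start in range(50, 300, 50):                     # later query rounds
    newly_queried = slice(start, start + 50)
    model.partial_fit(X[newly_queried], y[newly_queried])  # incremental update

print(model.predict(X[:5]).shape)  # model stays usable after each update
```

Each round costs only one pass over the newly labeled batch, rather than a full retraining pass over every sample seen so far.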

(4) Limitations of the active learning system. The system implemented in this work mainly targets image classification tasks. In future work, we will keep improving the core functional modules (especially the active learning strategies) to increase the system's stability and interactivity, and extend it to other domains.

Reference list:

[1] A. L. Yuille, C. Liu. Deep nets: What have they ever done for vision?[J]. Computer Science, 2018.

[2] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein. Imagenet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3):211–252.

[3] S. Dasgupta. Two faces of active learning[J]. Theoretical Computer Science, 2011, 412(19):1767–1781.

[4] B. Settles. Active learning[J]. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2012, 6(1):1–114.

[5] Y. Fu, X. Zhu, B. Li. A survey on instance selection for active learning[J]. Knowledge and Information Systems, 2013, 35(2):249–283.

[6] X. Zhu. Semi-supervised learning literature survey[J]. Computer Science, University of Wisconsin-Madison, 2006, 2(3):4.

[7] B. Settles, M. Craven, L. Friedland. Active learning with real annotation costs[C]. Proceedings of the International Conference on Neural Information Processing Systems, 2008, 1–10.

[8] F. Olsson. A literature survey of active machine learning in the context of natural language processing[J], 2009.

[9] M. Ghayoomi. Using variance as a stopping criterion for active learning of frame assignment[C]. Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing, 2010, 1–9.

[10] Z.-J. Zha, M. Wang, Y.-T. Zheng, Y. Yang, R. Hong, T.-S. Chua. Interactive video indexing with statistical active learning[J]. IEEE Transactions on Multimedia, 2012, 14(1):17–27.

[11] J. Zhu, H. Wang, B. K. Tsou, M. Y. Ma. Active learning with sampling by uncertainty and density for data annotations.[J]. IEEE Transactions. Audio, Speech & Language Processing, 2010, 18(6):1323–1331.

[12] S. Tong, D. Koller. Support vector machine active learning with applications to text classification[J]. Machine Learning, 2001, 2(11):45–66.

[13] S. C. Hoi, R. Jin, M. R. Lyu. Batch mode active learning with applications to text categorization and image retrieval[J]. IEEE Transactions on Knowledge and Data Engineering, 2009, 21(9):1233–1248.

[14] Z.-H. Zhou. A brief introduction to weakly supervised learning[J]. National Science Review, 2017, 5(1):44–53.

[15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio. Generative adversarial nets[C]. Proceedings of the International Conference on Neural Information Processing Systems, 2014, 2672–2680.

[16] A. Radford, L. Metz, S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv:1511.06434, 2015.

[17] M. Arjovsky, S. Chintala, L. Bottou. Wasserstein gan[J]. arXiv preprint arXiv: 1701.07875, 2017.

[18] A. Brock, J. Donahue, K. Simonyan. Large scale gan training for high fidelity natural image synthesis[J]. arXiv preprint arXiv:1809.11096, 2018.

[19] J.-J. Zhu, J. Bento. Generative adversarial active learning[J]. arXiv preprint arXiv: 1702.07956, 2017.

[20] M. Huijser, J. C. van Gemert. Active decision boundary annotation with deep generative models[C]. Proceedings of the IEEE International Conference on Computer Vision, 2017, 5286–5295.

[21] Y. Liu, Z. Li, C. Zhou, Y. Jiang, J. Sun, M. Wang, X. He. Generative adversarial active learning for unsupervised outlier detection[J]. arXiv preprint arXiv:1809.10816, 2018.

[22] M. Ducoffe, F. Precioso. Adversarial active learning for deep networks: a margin based approach[J]. arXiv preprint arXiv:1802.09841, 2018.

[23] H. A. Simon, G. Lea. Problem solving and rule induction: A unified view[J], 1974.

[24] D. Angluin. Queries and concept learning[J]. Machine Learning, 1988, 2(4):319–342.

[25] L. E. Atlas, D. A. Cohn, R. E. Ladner. Training connectionist networks with queries and selective sampling[C]. Proceedings of the International Conference on Neural Information Processing Systems, 1990, 566–573.

[26] D. D. Lewis, W. A. Gale. A sequential algorithm for training text classifiers[C]. Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1994, 3–12.

[27] A. Kapoor, G. Hua, A. Akbarzadeh, S. Baker. Which faces to tag: Adding prior constraints into active learning[C]. Proceedings of the IEEE International Conference on Computer Vision, 2009, 1058–1065.

[28] B. Settles, M. Craven. An analysis of active learning strategies for sequence labeling tasks[C]. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2008, 1070–1079.

[29] C. Campbell, N. Cristianini, A. Smola, et al. Query learning with large margin classifiers[C]. Proceedings of International Conference on Machine Learning, 2000, 111–118.

[30] G. Schohn, D. Cohn. Less is more: Active learning with support vector machines[C]. Proceedings of International Conference on Machine Learning, 2000, 839–846.

[31] H. S. Seung, M. Opper, H. Sompolinsky. Query by committee[C]. Proceedings of the Annual Workshop on Computational Learning Theory, 1992, 287–294.

[32] L. Breiman. Bagging predictors[J]. Machine Learning, 1996, 24(2):123–140.

[33] N. Abe, H. Mamitsuka. Query learning strategies using boosting and bagging[C]. Proceedings of International Conference on Machine Learning, 1998.

[34] A. McCallum, K. Nigam. Employing EM in pool-based active learning for text classification[C]. Proceedings of International Conference on Machine Learning, 1998.

[35] C. Zhang, T. Chen. An active learning framework for content-based information retrieval[J]. IEEE Transactions on Multimedia, 2002, 4(2):260–268.

[36] X. Li, Y. Guo. Adaptive active learning for image classification[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, 859–866.

[37] Y. Gu, Z. Jin, S. C. Chiu. Active learning combining uncertainty and diversity for multi-class image classification[J]. IET Computer Vision, 2014, 9(3):400–407.

[38] Z. Zhou, J. Shin, L. Zhang, S. Gurudu, M. Gotway, J. Liang. Fine-tuning convolutional neural networks for biomedical image analysis: Actively and incrementally[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, 4761–4772.

[39] T. N. C. Cardoso, R. M. Silva, S. Canuto, M. M. Moro, M. A. Gonçalves. Ranked batch mode active learning[J]. Information Sciences, 2017, 379:313–337.

[40] S. Xiong, J. Azimi, X. Z. Fern. Active learning of constraints for semi-supervised clustering[J]. IEEE Transactions on Knowledge & Data Engineering, 2014, 26(1):43–54.

[41] S. Patra, L. Bruzzone. A cluster-assumption based batch mode active learning technique[J]. Pattern Recognition Letters, 2012, 33(9):1042–1048.

[42] H. T. Nguyen, A. Smeulders. Active learning using pre-clustering[C]. Proceedings of International Conference on Machine Learning, 2004, 79.

[43] X. Guo, X. Liu, E. Zhu, J. Yin. Deep clustering with convolutional autoencoders[C]. Proceedings of the International Conference on Neural Information Processing, 2017, 373–382.

[44] A. K. McCallum, K. Nigam. Employing EM and pool-based active learning for text classification[C]. Proceedings of International Conference on Machine Learning, 1998, 359–367.

[45] I. Muslea, S. Minton, C. A. Knoblock. Active + semi-supervised learning = robust multiview learning[C]. Proceedings of International Conference on Machine Learning, 2002, 435–442.

[46] Z.-H. Zhou, K.-J. Chen, Y. Jiang. Exploiting unlabeled data in content-based image retrieval[C]. European Conference on Machine Learning, 2004, 525–536.

[47] W. Han, E. Coutinho, H. Ruan, H. Li, B. Schuller, X. Yu, X. Zhu. Semi-supervised active learning for sound classification in hybrid learning environments[J]. PLoS One, 2016, 11(9):e0162075.

[48] K. Tomanek, U. Hahn. Semi-supervised active learning for sequence labeling[C]. Proceedings Meeting of the Association for Computational Linguistics, 2009, 1039–1047.

[49] G. Tur, D. Hakkani-Tür, R. E. Schapire. Combining active and semi-supervised learning for spoken language understanding[J]. Speech Communication, 2005, 45(2):171–186.

[50] C. Mayer, R. Timofte. Adversarial sampling for active learning[J]. Computer Science, 2018.

[51] S.-J. Huang, J.-W. Zhao, Z.-Y. Liu. Cost-effective training of deep cnns with active model adaptation[J]. arXiv preprint arXiv:1802.05394, 2018.

[52] S.-J. Huang, M. Xu, M.-K. Xie, M. Sugiyama, G. Niu, S. Chen. Active feature acquisition with supervised matrix completion[J]. arXiv preprint arXiv:1802.05380, 2018.

[53] H.-M. Chu, H.-T. Lin. Can active learning experience be transferred?[C]. Proceedings of the International Conference on Data Mining, 2016, 841–846.

[54] J. A. Hartigan, M. A. Wong. Algorithm as 136: A k-means clustering algorithm[J]. Journal of the Royal Statistical Society. Series C (Applied Statistics), 1979, 28(1):100– 108.

[55] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods[J]. Proceedings of Annual Meeting of the Association for Computational Linguistics, 1995, 189–196.

[56] T. Ojala, M. Pietikainen, D. Harwood. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions[C]. Proceedings of the International Conference on Pattern Recognition, 1994, 582–585.

[57] N. Dalal, B. Triggs. Histograms of oriented gradients for human detection[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2005, 886–893.

[58] D. G. Lowe. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2):91–110.

[59] T. Ahonen, A. Hadid, M. Pietikäinen. Face recognition with local binary patterns[C]. Proceedings of the European Conference on Computer Vision, 2004, 469–481.

[60] T. Ahonen, A. Hadid, M. Pietikainen. Face description with local binary patterns: Application to face recognition[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2006, (12):2037–2041.

[61] S. Liao, X. Zhu, Z. Lei, L. Zhang, S. Z. Li. Learning multi-scale block local binary patterns for face recognition[C]. Proceedings of the International Conference on Biometrics, 2007, 828–837.

[62] T. M. Cover, P. E. Hart, et al. Nearest neighbor pattern classification[J]. IEEE Transactions on Information Theory, 1967, 13(1):21–27.

[63] C. Cortes, V. Vapnik. Support-vector networks[J]. Machine Learning, 1995, 20(3): 273–297.

[64] Y. Li, L. Guo. An active learning based tcm-knn algorithm for supervised network intrusion detection[J]. Computers & Security, 2007, 26(7-8):459–467.

[65] Y. Yang, Z. Ma, F. Nie, X. Chang, A. G. Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization[J]. International Journal of Computer Vision, 2015, 113(2):113–127.

[66] S. C. Hoi, R. Jin, J. Zhu, M. R. Lyu. Semisupervised svm batch mode active learning with applications to image retrieval[J]. ACM Transactions on Information Systems (TOIS), 2009, 27(3):16.

[67] X. Li, L. Wang, E. Sung. Multilabel SVM active learning for image classification[C]. Proceedings of the International Conference on Image Processing, 2004, 2207–2210.

[68] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11):2278–2324.

[69] A. Krizhevsky, I. Sutskever, G. E. Hinton. Imagenet classification with deep convolutional neural networks[C]. Proceedings of the International Conference on Neural Information Processing Systems, 2012, 1097–1105.

[70] M. D. Zeiler, R. Fergus. Visualizing and understanding convolutional networks[C]. Proceedings of the European Conference on Computer Vision, 2014, 818–833.

[71] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein. Imagenet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3):211–252.

[72] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich. Going deeper with convolutions[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, 1–9.

[73] M. Lin, Q. Chen, S. Yan. Network in network[J]. arXiv preprint arXiv:1312.4400, 2013.

[74] R. K. Srivastava, K. Greff, J. Schmidhuber. Highway networks[J]. arXiv preprint arXiv: 1505.00387, 2015.

[75] S. Hochreiter, J. Schmidhuber. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735–1780.

[76] K. He, X. Zhang, S. Ren, J. Sun. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 770–778.

[77] G. Huang, Z. Liu, L. v. Maaten, K. Q. Weinberger. Densely connected convolutional networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, 2261–2269.

[78] A. Krizhevsky. One weird trick for parallelizing convolutional neural networks[J]. arXiv preprint arXiv:1404.5997, 2014.

[79] A. J. Joshi, F. Porikli, N. Papanikolopoulos. Multi-class active learning for image classification[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, 2372–2379.

[80] D. Tuia, F. Ratle, F. Pacifici, M. F. Kanevski, W. J. Emery. Active learning methods for remote sensing image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2009, 47(7):2218.

[81] A. Atkinson, A. Donev, R. Tobias. Optimum Experimental Designs, with SAS[M]. Oxford University Press, 2007.

[82] M. Ji, J. Han. A variance minimization criterion to active learning on graphs[C]. Artificial Intelligence and Statistics, 2012, 556–564.

[83] S. Patra, L. Bruzzone. A batch-mode active learning technique based on multiple uncertainty for SVM classifier[J]. IEEE Geoscience & Remote Sensing Letters, 2012, 9(3):497–501.

[84] A. Blum, T. Mitchell. Combining labeled and unlabeled data with co-training[C]. Proceedings of the Annual Conference on Computational Learning Theory, 1998, 92–100.

[85] Z.-H. Zhou, M. Li. Tri-training: Exploiting unlabeled data using three classifiers[J]. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11):1529–1541.

[86] A. Krizhevsky, G. Hinton. Learning multiple layers of features from tiny images[R]. Technical Report, 2009.

[87] M. Hon, N. M. Khan. Towards Alzheimer's disease classification through transfer learning[C]. International Conference on Bioinformatics and Biomedicine, 2017, 1166–1169.

[88] X. Liu, S. Li, M. Kan, S. Shan, X. Chen. Self-error-correcting convolutional neural network for learning with noisy labels[C]. IEEE International Conference on Automatic Face & Gesture Recognition, 2017, 111–117.

[89] J. Xie, R. Girshick, A. Farhadi. Unsupervised deep embedding for clustering analysis[C]. Proceedings of International Conference on Machine Learning, 2016, 478–487.

[90] X. Guo, L. Gao, X. Liu, J. Yin. Improved deep embedded clustering with local structure preservation[C]. International Joint Conference on Artificial Intelligence, 2017, 1753–1759.

[91] S. Ioffe, C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. Computer Science, 2015.

[92] B. Xu, N. Wang, T. Chen, M. Li. Empirical evaluation of rectified activations in convolutional network[J]. Computer Science, 2015.

[93] L. Luo, G.-J. Xu. Extended tanh-function method and its applications to nonlinear equations[J]. Physics Letters A, 2000, 277(4):212–218.

[94] A. L. Maas, A. Y. Hannun, A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models[C]. Proceedings of International Conference on Machine Learning, 2013, 1, 3.

[95] M. Arjovsky, L. Bottou. Towards principled methods for training generative adversarial networks[J]. Stat, 2017, 1050.

[96] A. Makhzani, J. Shlens, N. Jaitly, I. J. Goodfellow. Adversarial autoencoders[J]. Computer Science, 2015.

[97] E. B. Baum, K. Lang. Query learning can work poorly when a human oracle is used[C]. International Joint Conference on Neural Networks, 1992, 8.

[98] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4):541–551.

[99] O. Reyes, E. Pérez, M. Del Carmen Rodríguez-Hernández, H. M. Fardoun, S. Ventura. JCLAL: A Java framework for active learning[J]. Journal of Machine Learning Research, 2016, 17(1):3271–3275.

[100] Y.-Y. Yang, S.-C. Lee, Y.-A. Chung, T.-E. Wu, S.-A. Chen, H.-T. Lin. libact: Pool-based active learning in Python[J]. arXiv preprint arXiv:1710.00379, 2017.

[101] Y. Tang, G. Li, S. Huang. ALiPy: Active learning in Python[J]. Computer Science, 2019.

[102] F. P. S. Luus, N. Khan, I. Akhalwaya. Active learning with TensorBoard projector[J]. Computer Science, 2019.

How to download the references:

https://blog.csdn.net/Houchaoqun_XMU/article/details/96210160
