https://mp.weixin.qq.com/s/2VhgEieBwXymAv2qxO3MPw

【Overview】Machine Reading Comprehension (MRC) means having a machine read a text and then answer questions about what it has read. Reading comprehension is an important frontier topic in natural language processing and artificial intelligence: it matters both for raising the level of machine intelligence and for giving machines the ability to keep acquiring knowledge, and in recent years it has drawn wide attention from academia and industry. The Tsinghua NLP group recently open-sourced a must-read list of machine reading comprehension papers on GitHub. It is packed with solid material, and working through the list should bring us a step closer to becoming NLP experts.

GitHub | https://github.com/thunlp/RCPapers

Authors | Yankai Lin, Deming Ye and Haoze Ji

Compiled by | huaiwen
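
Before diving into the list, a toy illustration of the extractive reading-comprehension setting used by datasets such as SQuAD may help: the system is given a passage and a question, and must return a contiguous span of the passage as the answer. The snippet below sketches only the data format, not a model; all strings are made up for illustration.

```python
# Toy illustration of extractive machine reading comprehension:
# the answer is a contiguous character span of the passage.

passage = (
    "Tsinghua University is located in Beijing. "
    "Its NLP group maintains a list of must-read machine reading comprehension papers."
)
question = "Where is Tsinghua University located?"

def answer_span(text: str, start: int, end: int) -> str:
    """An extractive answer is simply text[start:end]."""
    return text[start:end]

# A gold annotation marks the answer by character offsets into the passage.
gold_start = passage.find("Beijing")
gold_end = gold_start + len("Beijing")
print(question, "->", answer_span(passage, gold_start, gold_end))  # -> Beijing
```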

【Model Architectures】(a minimal span-extraction sketch follows this list)

  1. Memory networks. Jason Weston, Sumit Chopra, and Antoine Bordes. arXiv preprint arXiv:1410.3916 (2014).

  2. Teaching Machines to Read and Comprehend. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. NIPS 2015.

  3. Text Understanding with the Attention Sum Reader Network. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. ACL 2016.

  4. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. Danqi Chen, Jason Bolton, and Christopher D. Manning. ACL 2016.

  5. Long Short-Term Memory-Networks for Machine Reading. Jianpeng Cheng, Li Dong, and Mirella Lapata. EMNLP 2016.

  6. Key-Value Memory Networks for Directly Reading Documents. Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. EMNLP 2016.

  7. Modeling Human Reading with Neural Attention. Michael Hahn and Frank Keller. EMNLP 2016.

  8. Learning Recurrent Span Representations for Extractive Question Answering. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. arXiv preprint arXiv:1611.01436 (2016).

  9. Multi-Perspective Context Matching for Machine Comprehension. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. arXiv preprint arXiv:1612.04211 (2016).

  10. Natural Language Comprehension with the EpiReader. Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. EMNLP 2016.

  11. Iterative Alternating Neural Attention for Machine Reading. Alessandro Sordoni, Philip Bachman, Adam Trischler, and Yoshua Bengio. arXiv preprint arXiv:1606.02245 (2016).

  12. Bidirectional Attention Flow for Machine Comprehension. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. ICLR 2017.

  13. Machine Comprehension Using Match-LSTM and Answer Pointer. Shuohang Wang and Jing Jiang. arXiv preprint arXiv:1608.07905 (2016).

  14. Gated Self-Matching Networks for Reading Comprehension and Question Answering. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. ACL 2017.

  15. Attention-over-Attention Neural Networks for Reading Comprehension. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. ACL 2017.

  16. Gated-Attention Readers for Text Comprehension. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. ACL 2017.

  17. A Constituent-Centric Neural Architecture for Reading Comprehension. Pengtao Xie and Eric Xing. ACL 2017.

  18. Structural Embedding of Syntactic Trees for Machine Comprehension. Rui Liu, Junjie Hu, Wei Wei, Zi Yang, and Eric Nyberg. EMNLP 2017.

  19. Accurate Supervised and Semi-Supervised Machine Reading for Long Documents. Izzeddin Gur, Daniel Hewlett, Alexandre Lacoste, and Llion Jones. EMNLP 2017.

  20. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension. Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. arXiv preprint arXiv:1707.09098 (2017).

  21. Dynamic Coattention Networks For Question Answering. Caiming Xiong, Victor Zhong, and Richard Socher. ICLR 2017.

  22. R-NET: Machine Reading Comprehension with Self-matching Networks. Natural Language Computing Group, Microsoft Research Asia.

  23. ReasoNet: Learning to Stop Reading in Machine Comprehension. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. KDD 2017.

  24. FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. ICLR 2018.

  25. Making Neural QA as Simple as Possible but not Simpler. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. CoNLL 2017.

  26. Efficient and Robust Question Answering from Minimal Context over Documents. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. ACL 2018.

  27. Simple and Effective Multi-Paragraph Reading Comprehension. Christopher Clark and Matt Gardner. ACL 2018.

  28. Neural Speed Reading via Skim-RNN. Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. ICLR 2018.

  29. Hierarchical Attention Flow for Multiple-Choice Reading Comprehension. Haichao Zhu, Furu Wei, Bing Qin, and Ting Liu. AAAI 2018.

  30. Towards Reading Comprehension for Long Documents. Yuanxing Zhang, Yangbin Zhang, Kaigui Bian, and Xiaoming Li. IJCAI 2018.

  31. Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension. Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, and Tian Wu. ACL 2018.

  32. Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification. Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, and Haifeng Wang. ACL 2018.

  33. Reinforced Mnemonic Reader for Machine Reading Comprehension. Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. IJCAI 2018.

  34. Stochastic Answer Networks for Machine Reading Comprehension. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. ACL 2018.

  35. Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering. Wei Wang, Ming Yan, and Chen Wu. ACL 2018.

  36. A Multi-Stage Memory Augmented Neural Network for Machine Reading Comprehension. Seunghak Yu, Sathish Indurthi, Seohyun Back, and Haejun Lee. ACL 2018 workshop.

  37. S-NET: From Answer Extraction to Answer Generation for Machine Reading Comprehension. Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. AAAI 2018.

  38. Ask the Right Questions: Active Question Reformulation with Reinforcement Learning. Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. ICLR 2018.

  39. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. ICLR 2018.

  40. Read + Verify: Machine Reading Comprehension with Unanswerable Questions. Minghao Hu, Furu Wei, Yuxing Peng, Zhen Huang, Nan Yang, and Ming Zhou. AAAI 2019.
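
Although the papers above differ widely in detail, most of the extractive models (Match-LSTM with an answer pointer, BiDAF, R-NET, and many others) share a common skeleton: encode the passage and the question, let them interact through attention, and score every passage position as a possible answer start or end. The sketch below is a deliberately minimal version of that skeleton in PyTorch; the layer sizes, the single bilinear attention step, and all names are illustrative assumptions, not a reproduction of any specific paper.

```python
import torch
import torch.nn as nn

class MinimalSpanReader(nn.Module):
    """Bare-bones extractive reader: embed -> BiLSTM encode -> attention -> start/end scores."""

    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.p_enc = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.q_enc = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Bilinear question-aware attention over passage positions.
        self.att = nn.Linear(2 * hidden, 2 * hidden, bias=False)
        self.start = nn.Linear(4 * hidden, 1)
        self.end = nn.Linear(4 * hidden, 1)

    def forward(self, passage_ids, question_ids):
        p, _ = self.p_enc(self.embed(passage_ids))    # (B, Lp, 2h)
        q, _ = self.q_enc(self.embed(question_ids))   # (B, Lq, 2h)
        q_vec = q.mean(dim=1)                         # crude question summary, (B, 2h)
        # Attention weight of each passage position with respect to the question summary.
        scores = torch.bmm(self.att(p), q_vec.unsqueeze(2)).squeeze(2)  # (B, Lp)
        weights = torch.softmax(scores, dim=1).unsqueeze(2)             # (B, Lp, 1)
        fused = torch.cat([p, weights * p], dim=2)                      # (B, Lp, 4h)
        start_logits = self.start(fused).squeeze(2)                     # (B, Lp)
        end_logits = self.end(fused).squeeze(2)                         # (B, Lp)
        return start_logits, end_logits

# Smoke test with random token ids.
model = MinimalSpanReader(vocab_size=1000)
s, e = model(torch.randint(0, 1000, (2, 50)), torch.randint(0, 1000, (2, 10)))
print(s.shape, e.shape)  # torch.Size([2, 50]) torch.Size([2, 50])
```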

【Leveraging External Knowledge】(a sketch of using a pre-trained model for MRC follows this list)

  1. Leveraging Knowledge Bases in LSTMs for Improving Machine Reading. Bishan Yang and Tom Mitchell. ACL 2017.

  2. Learned in Translation: Contextualized Word Vectors. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. arXiv preprint arXiv:1708.00107 (2017).

  3. Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge. Todor Mihaylov and Anette Frank. ACL 2018.

  4. A Comparative Study of Word Embeddings for Reading Comprehension. Bhuwan Dhingra, Hanxiao Liu, Ruslan Salakhutdinov, and William W. Cohen. arXiv preprint arXiv:1703.00993 (2017).

  5. Deep contextualized word representations. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. NAACL 2018.

  6. Improving Language Understanding by Generative Pre-Training. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. OpenAI.

  7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. arXiv preprint arXiv:1810.04805 (2018).
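
The pre-training line of work in this section (CoVe, ELMo, GPT, BERT) has turned the practical recipe for many MRC benchmarks into "fine-tune a pre-trained encoder with a span-prediction head". As a rough picture of what that looks like in practice, the sketch below uses the Hugging Face transformers library; the library choice and the checkpoint name are assumptions made for illustration and are not part of the paper list.

```python
# Minimal extractive QA with a pre-trained Transformer, assuming the Hugging Face
# `transformers` package is installed and the checkpoint below is available.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # illustrative SQuAD-fine-tuned checkpoint
)

result = qa(
    question="Where is Tsinghua University located?",
    context="Tsinghua University is located in Beijing, China.",
)
print(result["answer"], result["score"])  # e.g. "Beijing, China" with a confidence score
```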

【Exploration】

  1. Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, and Percy Liang. EMNLP 2017.

  2. Did the Model Understand the Question? Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. ACL 2018.

【Open-Domain Question Answering】(a retrieve-then-read sketch follows this list)

  1. Reading Wikipedia to Answer Open-Domain Questions. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. ACL 2017.

  2. R^3: Reinforced Ranker-Reader for Open-Domain Question Answering. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. AAAI 2018.

  3. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. ICLR 2018.

  4. Denoising Distantly Supervised Open-Domain Question Answering. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. ACL 2018.
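
The open-domain papers above follow the retrieve-then-read pattern popularized by DrQA: first retrieve candidate documents (with TF-IDF or a learned ranker), then run a reading-comprehension model over the retrieved text. Below is a minimal sketch of the retrieval half, assuming scikit-learn and a toy in-memory document collection; the reader step is left as a placeholder.

```python
# Toy retrieve-then-read skeleton: TF-IDF retrieval over a tiny corpus, assuming scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Tsinghua University is located in Beijing and hosts the THUNLP group.",
    "SQuAD is a reading comprehension dataset built from Wikipedia articles.",
    "The Dream of the Red Chamber is a classic Chinese novel.",
]

def retrieve(question: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the question under TF-IDF cosine similarity."""
    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(docs)
    q_vec = vectorizer.transform([question])
    sims = cosine_similarity(q_vec, doc_vecs)[0]
    ranked = sims.argsort()[::-1][:k]
    return [docs[i] for i in ranked]

question = "Which group maintains the RCPapers list?"
for doc in retrieve(question, documents):
    # A reading-comprehension model (any reader from the list above) would now
    # extract an answer span from `doc`; here we just print the retrieved evidence.
    print(doc)
```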

【Datasets】(a SQuAD-format loading sketch follows this list)

  1. (SQuAD 1.0) SQuAD: 100,000+ Questions for Machine Comprehension of Text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. EMNLP 2016.

  2. (SQuAD 2.0) Know What You Don't Know: Unanswerable Questions for SQuAD. Pranav Rajpurkar, Robin Jia, and Percy Liang. ACL 2018.

  3. (MS MARCO) MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. arXiv preprint arXiv:1611.09268 (2016).

  4. (Quasar) Quasar: Datasets for Question Answering by Search and Reading. Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen. arXiv preprint arXiv:1707.03904 (2017).

  5. (TriviaQA) TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. arXiv preprint arXiv:1705.03551 (2017).

  6. (SearchQA) SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine. Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, and Kyunghyun Cho. arXiv preprint arXiv:1704.05179 (2017).

  7. (QuAC) QuAC : Question Answering in Context. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. arXiv preprint arXiv:1808.07036 (2018).

  8. (CoQA) CoQA: A Conversational Question Answering Challenge. Siva Reddy, Danqi Chen, and Christopher D. Manning. arXiv preprint arXiv:1808.07042 (2018).

  9. (MCTest) MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. EMNLP 2013.

  10. (CNN/Daily Mail) Teaching Machines to Read and Comprehend. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. NIPS 2015.

  11. (CBT) The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. arXiv preprint arXiv:1511.02301 (2015).

  12. (bAbI) Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. arXiv preprint arXiv:1502.05698 (2015).

  13. (LAMBADA) The LAMBADA Dataset: Word Prediction Requiring a Broad Discourse Context. Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. ACL 2016.

  14. (SCT) LSDSem 2017 Shared Task: The Story Cloze Test. Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James F. Allen. ACL 2017 workshop.

  15. (Who did What) Who did What: A Large-Scale Person-Centered Cloze Dataset. Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. EMNLP 2016.

  16. (NewsQA) NewsQA: A Machine Comprehension Dataset. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. arXiv preprint arXiv:1611.09830 (2016).

  17. (RACE) RACE: Large-scale ReAding Comprehension Dataset From Examinations. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. EMNLP 2017.

  18. (ARC) Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. arXiv preprint arXiv:1803.05457 (2018).

  19. (MCScript) MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge. Simon Ostermann, Ashutosh Modi, Michael Roth, Stefan Thater, and Manfred Pinkal. arXiv preprint arXiv:1803.05223 (2018).

  20. (NarrativeQA) The NarrativeQA Reading Comprehension Challenge. Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. TACL 2018.

  21. (DuoRC) DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension. Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan. ACL 2018.

  22. (CLOTH) Large-scale Cloze Test Dataset Created by Teachers. Qizhe Xie, Guokun Lai, Zihang Dai, and Eduard Hovy. EMNLP 2018.

  23. (DuReader) DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications. Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. ACL 2018 Workshop.

  24. (CliCR) CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension. Simon Suster and Walter Daelemans. NAACL 2018.
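
Most of the datasets above ship as JSON in (or close to) the SQuAD format: articles contain paragraphs, each paragraph has a context string and a list of question-answer pairs. The sketch below walks that structure, assuming a SQuAD v1.1-style file has been downloaded locally; the field names follow the published SQuAD schema.

```python
import json

# Walk a SQuAD v1.1-style JSON file (assumed to be saved locally as train-v1.1.json).
with open("train-v1.1.json", encoding="utf-8") as f:
    squad = json.load(f)

examples = []
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            answers = [a["text"] for a in qa["answers"]]  # empty for unanswerable SQuAD 2.0 questions
            examples.append({"id": qa["id"], "context": context,
                             "question": qa["question"], "answers": answers})

print(len(examples), "question-answer examples loaded")
```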

-END-
