Fresh NLP technical content delivered to you every day!


Author: zenRRan

WeChat official account: 深度学习自然语言处理 (Deep Learning & NLP)

The AAAI 2022 acceptances have been out for a while, but it seems no one has compiled the NLP-related papers yet. So, with the last day of the weekend not quite over, I put this list together the hard way — bookmark it if you need it.

Topics covered include: information extraction, relation extraction, machine translation, named entity recognition, multimodal learning, data augmentation, question answering, multilingual NLP, knowledge distillation, text error correction, and more.

Information Extraction

OneRel: Joint Entity and Relation Extraction with One Module in One Step

Yu-Ming Shang, Heyan Huang, Xian-Ling Mao

BROS: A Pre-Trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents

Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park

Selecting Optimal Context Sentences for Event-Event Relation Extraction

Hieu Man Duc Trong, Nghia Ngo Trung, Linh Van Ngo, Thien Huu Nguyen

Hyperbolic Disentangled Representation for Fine-Grained Aspect Extraction

Chang-You Tai, Ming-Yao Li, Lun-Wei Ku

Language Model Priming for Cross-Lingual Event Extraction

Steven Fincke, Shantanu Agarwal, Scott Miller, Elizabeth Boschee

Knowledge Distillation

Content-Variant Reference Image Quality Assessment via Knowledge Distillation

Guanghao Yin, Wei Wang, Zehuan Yuan, Chuchu Han, Wei Ji, Shouqian Sun, Changhu Wang

Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-Trained Transformers

Minjia Zhang, Niranjan Uma Naresh, Yuxiong He

Boosting Contrastive Learning with Relation Knowledge Distillation

Kai Zheng, Yuanjiang Wang, Ye Yuan

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

Kuluhan Binici, Shivam Aggarwal, Nam Trung Pham, Karianto Leman, Tulika Mitra

Cross-Task Knowledge Distillation in Multi-Task Recommendation

Chenxiao Yang, Junwei Pan, Xiaofeng Gao, Tingyu Jiang, Dapeng Liu, Guihai Chen

Improving Neural Cross-Lingual Abstractive Summarization via Employing Optimal Transport Distance for Knowledge Distillation

Thong Nguyen, Luu Anh Tuan

Knowledge Distillation via Constrained Variational Inference

Ardavan Saeedi, Yuria Utsumi, Li Sun, Kayhan Batmanghelich, Li-wei H. Lehman

Up to 100× Faster Data-Free Knowledge Distillation

Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song

Multilingual

Multilingual Code Snippets Training for Program Translation

Ming Zhu, Karthik Suresh, Chandan K. Reddy

Improving Neural Cross-Lingual Abstractive Summarization via Employing Optimal Transport Distance for Knowledge Distillation

Thong Nguyen, Luu Anh Tuan

DetIE: Multilingual Open Information Extraction Inspired by Object Detection

Michael Vasilkovsky, Anton Alekseev, Valentin Malykh, Ilya Shenbin, Elena Tutubalina, Dmitriy Salikhov, Mikhail Stepnov, Andrey Chertok, Sergey Nikolenko

Zero-Shot Cross-Lingual Machine Reading Comprehension via Inter-Sentence Dependency Graph

Liyan Xu, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao, Jinho D. Choi

Language Model Priming for Cross-Lingual Event Extraction

Steven Fincke, Shantanu Agarwal, Scott Miller, Elizabeth Boschee

Cross-Lingual Adversarial Domain Adaptation for Novice Programming

Ye Mao, Farzaneh Khoshnevisan, Thomas Price, Tiffany Barnes, Min Chi

Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters

Marta R. Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle, Ksenia Kharitonova

From Good to Best: Two-Stage Training for Cross-Lingual Machine Reading Comprehension

Nuo Chen, Linjun Shou, Ming Gong, Jian Pei

Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training

Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein

Parameter Differentiation Based Multilingual Neural Machine Translation

Qian Wang, Jiajun Zhang

XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge

Xiaoze Jiang, Yaobo Liang, Weizhu Chen, Nan Duan

Mind the Gap: Cross-Lingual Information Retrieval with Hierarchical Knowledge Enhancement

Fuwei Zhang, Zhao Zhang, Xiang Ao, Dehong Gao, Fuzhen Zhuang, Yi Wei, Qing He

BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles

Yunxiang Zhang, Xiaojun Wan

UNISON: Unpaired Cross-Lingual Image Captioning

Jiahui Gao, Yi Zhou, Philip L. H. Yu, Shafiq Joty, Jiuxiang Gu

Question Answering

Video as Conditional Graph Hierarchy for Multi-Granular Question Answering

Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, Tat-Seng Chua

Block-Skim: Efficient Question Answering for Transformer

Yue Guan, Zhengyi Li, Zhouhan Lin, Yuhao Zhu, Jingwen Leng, Minyi Guo

BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles

Yunxiang Zhang, Xiaojun Wan

(2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering

Anoop Cherian, Chiori Hori, Tim K. Marks, Jonathan Le Roux

Zero-Shot Commonsense Question Answering with Cloze Translation and Consistency Optimization

Zi-Yi Dou, Nanyun (Violet) Peng

Dynamic Key-Value Memory Enhanced Multi-Step Graph Reasoning for Knowledge-Based Visual Question Answering

Mingxiao Li, Marie-Francine Moens

Multimodal

Show Your Faith: Cross-Modal Confidence-Aware Network for Image-Text Matching

Huatian Zhang, Zhendong Mao, Kun Zhang, Yongdong Zhang

Event-Image Fusion Stereo Using Cross-Modality Feature Propagation

Hoonhee Cho, Kuk-Jin Yoon

MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-Based Image Captioning

Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, Yueting Zhuang

Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization

Litian Zhang, Junshu Pan, Xiaoming Zhang, Feiran Huang

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Cross-Modal Mutual Learning for Audio-Visual Speech Recognition and Manipulation

Chih-Chun Yang, Wan-Cyuan Fan, Cheng-Fu Yang, Yu-Chiang Frank Wang

Cross-Modal Coherence for Text-to-Image Retrieval

Malihe Alikhani, Fangda Han, Hareesh Ravi, Mubbasir Kapadia, Vladimir Pavlovic, Matthew Stone

Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective

Emmanuelle Salin, Badreddine Farah, Stéphane Ayache, Benoit Favre

D-Vlog: Multimodal Vlog Dataset for Depression Detection

Jeewoo Yoon, Chaewon Kang, Seungbae Kim, Jinyoung Han

Sentiment and Emotion-Aware Multi-Modal Complaint Identification

Apoorva Singh, Soumyodeep Dey, Anamitra Singha, Sriparna Saha

Machine Translation

Parameter Differentiation Based Multilingual Neural Machine Translation

Qian Wang, Jiajun Zhang

Deep Fusing Pre-Trained Models into Neural Machine Translation

Rongxiang Weng, Heng Yu, Weihua Luo, Min Zhang

Non-Parametric Online Learning from Human Feedback for Neural Machine Translation

Dongqi Wang, Haoran Wei, Zhirui Zhang, Shujian Huang, Jun Xie, Jiajun Chen

Frequency-Aware Contrastive Learning for Neural Machine Translation

Tong Zhang, Wei Ye, Baosong Yang, Long Zhang, Xingzhang Ren, Dayiheng Liu, Jinan Sun, Shikun Zhang, Haibo Zhang, Wen Zhao

From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables

Krtin Kumar, Peyman Passban, Mehdi Rezagholizadeh, Yiusing Lau, Qun Liu

Interpreting Gender Bias in Neural Machine Translation: Multilingual Architecture Matters

Marta R. Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle, Ksenia Kharitonova

Named Entity Recognition

Unified Named Entity Recognition as Word-Word Relation Classification

Jingye Li, Donghong Ji, Jiang Liu, Hao Fei, Meishan Zhang, Shengqiong Wu, Chong Teng, Fei Li

Model Compression

BATUDE: Budget-Aware Neural Network Compression Based on Tucker Decomposition

Miao Yin, Huy Phan, Xiao Zang, Siyu Liao, Bo Yuan

From Dense to Sparse: Contrastive Pruning for Better Pre-Trained Language Model Compression

Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang

Convolutional Neural Network Compression Through Generalized Kronecker Product Decomposition

Marawan Gamal Abdel Hameed, Marzieh Tahaei, Ali Mosleh, Vahid Partovi Nia

Data Augmentation

Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-Trained Transformers

Minjia Zhang, Niranjan Uma Naresh, Yuxiong He

ALP: Data Augmentation Using Lexicalized PCFGs for Few-Shot Text Classification

Hazel Kim, Daecheol Woo, Seong Joon Oh, Jeong-Won Cha, Yo-Sub Han

Reading Comprehension

Zero-Shot Cross-Lingual Machine Reading Comprehension via Inter-Sentence Dependency Graph

Liyan Xu, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao, Jinho D. Choi

From Good to Best: Two-Stage Training for Cross-Lingual Machine Reading Comprehension

Nuo Chen, Linjun Shou, Ming Gong, Jian Pei

Text Error Correction

Sequence-to-Action: Grammatical Error Correction with Action Guided Sequence Generation

Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, Linli Xu

Recent Articles

EMNLP 2022 vs. COLING 2022: which venue is better to submit to?

A brand-new, easy-to-use unified NER model based on word-word relations that sets a new SoTA across 14 datasets

Alibaba + Peking University | Simple masking on gradients turns out to work surprisingly well


Download 1: Chinese editions! A five-piece set covering TensorFlow, PyTorch, machine learning, deep learning, and data structures! Reply 【五件套】 to the account
Download 2: Nanjing University pattern recognition slides — reply 【南大模式识别】 to the account

To submit an article or join the discussion, add a note in the format nickname-school (company)-research area to enter the DL&NLP group.

Research areas include: machine learning, deep learning, Python, sentiment analysis, opinion mining, syntactic parsing, machine translation, dialogue systems, knowledge graphs, speech recognition, and more.

Remember to include the note!

Compiling this was no small effort — please give it a "Looking" (在看)!
