NAACL is a premier academic conference in natural language processing. To further promote international academic exchange, 青源会 will host the 「青源Seminar丨NAACL Session」 online from 09:00 to 12:20 on the morning of August 4. The convener is Xiangru Tang, a member of the 青源 research group and a PhD student at Yale University.

This session focuses on two frontier topics, language modeling and text summarization, and brings together eight first authors of NAACL papers on these topics for talks and a panel discussion.

Click the event website to reserve the live stream (or tap "阅读原文" / Read the Original), and scan the QR code below with WeChat to join the speakers' WeChat group.

Scan the QR code to join the speaker discussion group


Yusheng Su

苏裕胜

PhD student in Computer Science, Tsinghua University

Yusheng Su is a third-year PhD student in computer science at Tsinghua University. His research focuses on natural language processing, in particular pre-trained language models. He has published several papers at venues such as WWW, NAACL, ACL, and IEEE/TASLP, and has served as a reviewer for COLING, EMNLP, ACL, NAACL, ICML, and other conferences.

On Transferability of Prompt Tuning for Natural Language Processing

Prompt tuning (PT), which matches the performance of full-parameter fine-tuning while adjusting only a small number of parameters, is a parameter-efficient way to use very large pre-trained language models (PLMs). However, PT requires more training time than fine-tuning. We therefore explore whether PT can be strengthened through prompt transfer: in this work we empirically study the transferability of prompts across different downstream tasks and across PLMs of different types and scales.

We find that:

(1) In the zero-shot setting, trained prompts transfer effectively to similar tasks on the same PLM, and can also transfer to other PLMs to perform similar tasks.

(2) Moreover, these trained prompts can directly serve as initializations for prompts on similar tasks, accelerating PT training.

(3) To explore what determines transferability, we investigate various transferability indicators and find that the overlap rate of the neurons activated by prompts correlates strongly with transferability (see the sketch below). Our results suggest that prompt transfer is a promising way to strengthen PT, and we encourage future work to look more closely at how prompts activate PLMs to perform various tasks.
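To make the neuron-overlap indicator concrete, here is a minimal PyTorch sketch (not the authors' released code). It assumes you have already extracted each prompt's FFN intermediate activations from the PLM, and it treats the top-k most strongly activated neurons as "activated".

```python
import torch

def activated_neurons(ffn_acts: torch.Tensor, k: int = 100) -> set:
    """Indices of the k most strongly activated FFN neurons, averaged over
    prompt tokens. ffn_acts: [prompt_len, ffn_dim] intermediate activations
    recorded while feeding the soft prompt through the PLM."""
    mean_act = ffn_acts.mean(dim=0)                 # [ffn_dim]
    return set(mean_act.topk(k).indices.tolist())

def overlap_rate(acts_a: torch.Tensor, acts_b: torch.Tensor, k: int = 100) -> float:
    """Overlap rate of the neurons activated by two prompts -- the kind of
    transferability indicator the paper finds to correlate with transfer."""
    a, b = activated_neurons(acts_a, k), activated_neurons(acts_b, k)
    return len(a & b) / k

# Toy example: two 20-token prompts over a 3072-dim FFN layer.
torch.manual_seed(0)
acts_task_a = torch.relu(torch.randn(20, 3072))
acts_task_b = torch.relu(torch.randn(20, 3072))
print(f"overlap rate: {overlap_rate(acts_task_a, acts_task_b):.3f}")
```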

Xuandong Zhao

赵宣栋

PhD student in Computer Science, UC Santa Barbara

Xuandong Zhao is a third-year PhD student in computer science at UC Santa Barbara, advised by Lei Li and Yu-Xiang Wang. He has interned at Alibaba, Microsoft, and other companies; his research interests are machine learning and natural language processing, with a focus on model protection and privacy protection.

Provably Confidential Language Modelling

Large language models have been shown to memorize private information, such as social security numbers, present in their training data. Given the sheer scale of the training corpus, it is challenging to screen and filter all private data, either manually or automatically. In this paper, we propose Confidentially Redacted Training (CRT), a method to train language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method provably prevents unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for both LSTM and GPT language models. Our experimental results show that models trained by CRT obtain almost the same perplexity while preserving strong confidentiality.
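CRT presupposes a (possibly imperfect) screening policy that flags confidential segments before training. The toy policy below illustrates the idea under an assumed setting where SSN-like patterns are the confidential content; the regex and the redaction token are placeholders, not the paper's actual screening procedure.

```python
import re

# Hypothetical screening policy: flag US social-security-number patterns and
# replace them with a special token. CRT then randomizes the parts of training
# that touch the flagged segments.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
REDACT_TOKEN = "<REDACTED>"

def screen_and_redact(text: str) -> tuple[str, bool]:
    """Return the redacted text and whether any confidential span was found."""
    redacted, n_hits = SSN_PATTERN.subn(REDACT_TOKEN, text)
    return redacted, n_hits > 0

print(screen_and_redact("My SSN is 123-45-6789, call me back."))
# ('My SSN is <REDACTED>, call me back.', True)
```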

Weiyan Shi

史唯艳

PhD student, Columbia University

My main research direction is dialogue systems, especially strategic and persuasive dialogue systems (for example, persuasion dialogue systems). My other research interests include dialogue generation and privacy-preserving NLP models.

Selective Differential Privacy for Language Modeling

With the increasing applications of language models, it has become crucial to protect these models from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance as the underlying privacy notion is over-pessimistic and provides undifferentiated protection for all tokens in the data. Given that the private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy, to provide rigorous privacy guarantees on the sensitive portion of the data to improve model utility. To realize such a new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Besides language modeling, we also apply the method to a more concrete application--dialog systems. Experiments on both language modeling and dialog system building show that the proposed privacy-preserving mechanism achieves better utilities while remaining safe under various privacy attacks compared to the baselines.
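As a rough illustration of the selective idea, the sketch below applies a plain gradient update for the loss over non-sensitive tokens and a clipped, noised update for the loss over sensitive tokens. It is a schematic simplification (batch-level clipping, a hypothetical two-loss interface); the actual Selective-DPSGD mechanism works with per-example gradients.

```python
import torch

def selective_dp_step(model, loss_public, loss_private, optimizer,
                      clip_norm=1.0, noise_mult=1.0):
    """One simplified training step in the spirit of Selective-DPSGD:
    only the gradient contribution of sensitive tokens is clipped and noised."""
    optimizer.zero_grad()
    loss_public.backward(retain_graph=True)          # non-sensitive part: as usual
    public_grads = [p.grad.detach().clone() if p.grad is not None else None
                    for p in model.parameters()]
    optimizer.zero_grad()
    loss_private.backward()                          # sensitive part: clip + noise
    grad_norms = [p.grad.norm() for p in model.parameters() if p.grad is not None]
    total_norm = torch.norm(torch.stack(grad_norms))
    scale = min(1.0, float(clip_norm / (total_norm + 1e-12)))
    for p, g_pub in zip(model.parameters(), public_grads):
        if p.grad is None:
            continue
        noisy = p.grad * scale + noise_mult * clip_norm * torch.randn_like(p.grad)
        p.grad = noisy + (g_pub if g_pub is not None else 0.0)
    optimizer.step()
```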

Jingfeng Yang

杨靖锋

Research Scientist, Amazon

Jingfeng Yang is a research scientist at Amazon (having put a PhD offer in NLP from the University of Washington's computer science department on hold). He received his master's degree from Georgia Tech, advised by Prof. Diyi Yang, and a dual bachelor's degree in biology and computer science from Peking University. His research focuses on semantic parsing, text generation, and multilingual NLP. He has published multiple first-author papers at ACL, EMNLP, and NAACL; served as a reviewer for ACL, EMNLP, NAACL, NeurIPS, AAAI, and other conferences; and has done research internships at Google, Amazon, Microsoft, and the University of Edinburgh.

Compositional Generalization in the Large Language Model Era

Compositional generalization remains one of the most important challenges for large models; it is key to reasoning, to out-of-distribution generalization, and to the ultimate goal of artificial general intelligence. Our two NAACL papers propose two ways, from two different perspectives, to strengthen models' compositional generalization. From the model perspective, we improve out-of-distribution generalization while preserving in-distribution performance through sequential prompt filling and by ensembling the pre-trained and fine-tuned models; we find that constrained decoding with the pre-trained model, together with re-normalizing probabilities over the constrained vocabulary, is key to the success of this technique (a sketch of the renormalization step follows below). From the data perspective, we propose data augmentation by substituting subtrees of semantic trees, and then use the augmented data as training data for a Seq2seq generation model. Both methods achieve clear gains on a series of compositional semantic parsing benchmarks.
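Here is a minimal sketch of the vocabulary-renormalization step, over a toy 10-token vocabulary; in practice the allowed-token list would come from the task's decoding constraints, and this illustrates the idea rather than the papers' actual code.

```python
import torch
import torch.nn.functional as F

def renormalize_over_allowed(logits: torch.Tensor, allowed_ids: list) -> torch.Tensor:
    """Constrained decoding step: mask every token outside the allowed
    vocabulary and renormalize the remaining probability mass."""
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_ids] = 0.0
    return F.softmax(logits + mask, dim=-1)

# Toy example: only tokens 2, 5, and 7 are legal next tokens.
logits = torch.randn(10)
probs = renormalize_over_allowed(logits, [2, 5, 7])
print(probs)        # nonzero mass only at indices 2, 5, 7
print(probs.sum())  # tensor(1.)
```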

Jiacheng Xu

徐嘉诚

Research Scientist, Salesforce Research

Jiacheng Xu is a research scientist at Salesforce Research, focusing on natural language processing, especially frontier research in natural language generation and text summarization. He received his PhD from the University of Texas at Austin in 2022, advised by Greg Durrett, and his bachelor's degree from Fudan University in 2017, working with Profs. Xipeng Qiu and Xuanjing Huang. He previously interned at Google (2020) and Microsoft (2019).

Massive-scale Decoding for Text Generation using Lattices

Conditional neural text generation models generate high-quality outputs, but often concentrate around a mode when what we really want is a diverse set of options. We present a search algorithm to construct lattices encoding a massive number of generation options. First, we restructure decoding as a best-first search, which explores the space differently than beam search and improves efficiency by avoiding pruning paths. Second, we revisit the idea of hypothesis recombination: we can identify pairs of similar generation candidates during search and merge them as an approximation. On both summarization and machine translation, we show that our algorithm encodes thousands of diverse options that remain grammatical and high-quality into one lattice. This algorithm provides a foundation for building downstream generation applications on top of massive-scale diverse outputs.
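The two ingredients, best-first exploration and hypothesis recombination, can be sketched as follows. This is a schematic rendition under simplifying assumptions (merging hypotheses that share their last n tokens; user-supplied score_fn and expand_fn), not the released implementation.

```python
import heapq

def best_first_lattice(score_fn, expand_fn, start, max_nodes=10_000, merge_ngram=3):
    """Best-first search over generation prefixes with hypothesis recombination:
    prefixes ending in the same last-n tokens are merged into one lattice node.
    score_fn(prefix) -> model score; expand_fn(prefix) -> candidate next tokens."""
    frontier = [(-score_fn(start), start)]
    lattice = {}                        # merge key -> canonical prefix
    while frontier and len(lattice) < max_nodes:
        neg_score, prefix = heapq.heappop(frontier)
        key = tuple(prefix[-merge_ngram:])
        if key in lattice:              # recombination: merge similar hypotheses
            continue
        lattice[key] = prefix
        for token in expand_fn(prefix):
            candidate = prefix + [token]
            heapq.heappush(frontier, (-score_fn(candidate), candidate))
    return lattice
```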

Xiangru Tang

唐相儒

PhD student, Yale University

Xiangru Tang is a first-year PhD student in computer science at Yale University, advised by Mark Gerstein. He previously received a master's degree in computer science from Yale, working with Dragomir Radev. His research focuses on pre-trained language models, text generation, and computational biology.

CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning

Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained neural language models, substantial amounts of hallucinated content are found during human evaluation. In this work, we first devise a typology of factual errors to better understand the types of hallucinations generated by current models, and conduct a human evaluation on a popular dialogue summarization dataset. We further propose a training strategy, called CONFIT, that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning. To tackle the top factual errors from our annotation, we introduce an additional contrastive loss with carefully designed hard negative samples and a self-supervised dialogue-specific loss to capture the key information between speakers. We show that our model significantly reduces all kinds of factual errors on both SAMSum dialogue summarization and AMI meeting summarization. On both datasets, we achieve significant improvements over state-of-the-art baselines using both automatic metrics (ROUGE and BARTScore) and human evaluation.
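A margin-based contrastive loss of the kind CONFIT describes can be sketched as follows: the model's score for the reference (factually consistent) summary should exceed its score for each hard negative (a perturbed, hallucinated summary) by a margin. This is a generic sketch; the paper's full objective also includes a dialogue-specific self-supervised loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(pos_scores, neg_scores, margin=1.0):
    """pos_scores: [batch] scores of reference summaries;
    neg_scores: [batch, n_negatives] scores of hard negative summaries."""
    gaps = margin - (pos_scores.unsqueeze(1) - neg_scores)
    return F.relu(gaps).mean()          # hinge: penalize negatives within the margin

pos = torch.tensor([2.0, 1.5])
neg = torch.tensor([[1.0, 2.5],         # second negative outscores the reference
                    [0.5, 1.0]])
print(contrastive_loss(pos, neg))
```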

Yue Fang

房越

Graduate student, Beijing University of Posts and Telecommunications

A second-year master's student at the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, researching dialogue summarization.

From spoken dialogue to formal summary: An utterance rewriting for dialogue summarization

Because dialogues have unstructured contexts and multiple parties speaking from a first-person perspective, many otherwise successful text summarization approaches fail on dialogue summarization. In the dialogue summarization task, the input dialogue is usually spoken in style, with ellipsis and co-references, while the output summaries are more formal and complete. A dialogue summarization model should therefore be able to complete the elided content and co-reference information and then produce a suitable summary accordingly. However, current state-of-the-art models pay more attention to the topic or structure of the summary than to the consistency of the summary with its input dialogue context, and may therefore suffer from personal and logical inconsistency problems. In this paper, we propose a new model, named ReWriteSum, to tackle this problem. First, an utterance rewriter is applied to complete the elided content of the dialogue, producing rewritten utterances. Then, a co-reference data augmentation mechanism is used to replace each referential mention of a person with that person's specific name, to enhance the personal information.
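A toy version of the co-reference augmentation step might look like the following: referential mentions are replaced with the specific speaker's name so that personal information stays consistent. A real system would use a coreference resolver; the hand-written mention-to-name map here is a stand-in.

```python
def augment_with_names(utterances, mention_to_name):
    """utterances: list of (speaker, text) pairs;
    mention_to_name: resolved mentions, e.g. {"he": "Tom"}."""
    augmented = []
    for speaker, text in utterances:
        words = [mention_to_name.get(w.lower(), w) for w in text.split()]
        augmented.append((speaker, " ".join(words)))
    return augmented

dialogue = [("Amy", "Tom is late again"), ("Bob", "he missed the bus")]
print(augment_with_names(dialogue, {"he": "Tom"}))
# [('Amy', 'Tom is late again'), ('Bob', 'Tom missed the bus')]
```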

Xiangci Li

李向磁

PhD student in Computer Science, UT Dallas

Xiangci Li is a second-year PhD student at UT Dallas, advised by Prof. Jessica Ouyang. His research focuses on processing scientific literature (information extraction and related-work summarization). He received his master's degree from the University of Southern California, working with Nanyun Peng. He has interned at the Chan Zuckerberg Initiative, Baidu, and Tencent America AI Lab.

CORWA: A Citation-Oriented Related Work Annotation Dataset

Academic research is an exploratory activity to discover new solutions to problems. By this nature, academic research works perform literature reviews to distinguish their novelties from prior work. In natural language processing, this literature review is usually conducted under the “Related Work” section. The task of related work generation aims to automatically generate the related work section given the rest of the research paper and a list of papers to cite. Prior work on this task has focused on the sentence as the basic unit of generation, neglecting the fact that related work sections consist of variable length text fragments derived from different information sources. As a first step toward a linguistically-motivated related work generation framework, we present a Citation Oriented Related Work Annotation (CORWA) dataset that labels different types of citation text fragments from different information sources. We train a strong baseline model that automatically tags the CORWA labels on massive unlabeled related work section texts. We further suggest a novel framework for human-in-the-loop, iterative, abstractive related work generation.
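The baseline tagger CORWA describes can be approximated with a standard transformer token classifier. The sketch below is a skeleton only: the checkpoint and the 3-label scheme are placeholders (and the classification head is untrained, so the printed tags are meaningless until fine-tuned on CORWA), not the paper's actual configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-SPAN", "I-SPAN"]      # placeholder citation-span label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))  # untrained classification head

text = "Prior work on related work generation focuses on the sentence level."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits     # [1, seq_len, num_labels]
pred = [labels[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.tokenize(text), pred[1:-1])))  # skip [CLS]/[SEP]
```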

Click "阅读原文" (Read the Original) at the bottom left to learn more!
