Abstract

Distant supervision for relation extraction provides uniform bag labels for each sentence inside the bag, while accurate sentence labels are important for downstream applications that need the exact relation type.

Directly using bag labels for sentence-level training will introduce much noise, thus severely degrading performance.

In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels regarding that “the instance does not belong to these complementary labels”.

Since the probability of selecting a true label as a complementary label is low, NT provides less noisy information.

Furthermore, the model trained with NT is able to separate the noisy data from the training data.

Based on NT, we propose a sentence-level framework, SENT, for distant relation extraction.

SENT not only filters the noisy data to construct a cleaner dataset, but also performs a relabeling process to transform the noisy data into useful training data, thus further benefiting the model’s performance.

Experimental results show the significant improvement of the proposed method over previous methods in terms of sentence-level evaluation and de-noising effect.

1 Introduction

Relation extraction (RE), which aims to extract the relation between entity pairs from unstructured text, is a fundamental task in natural language processing.

The extracted relation facts can benefit various downstream applications, e.g., knowledge graph completion (Bordes et al., 2013; Wang et al., 2014), information extraction (Wu and Weld, 2010) and question answering (Yao and Van Durme, 2014; Fader et al., 2014).

A significant challenge for relation extraction is the lack of large-scale labeled data.

Thus, distant supervision (Mintz et al., 2009) is proposed to gather training data through automatic alignment between a database and plain text.

Such an annotation paradigm results in an inevitable noise problem, which previous studies alleviate using multi-instance learning (MIL).

In MIL, the training and testing processes are performed at the bag level, where a bag contains noisy sentences mentioning the same entity pair but possibly not describing the same relation.

Studies using MIL can be broadly classified into two categories: 
1) the soft de-noise methods that leverage soft weights to differentiate the influence of each sentence (Lin et al., 2016; Han et al., 2018c; Li et al., 2020; Hu et al., 2019a; Ye and Ling, 2019; Yuan et al., 2019a,b); 
2) the hard de-noise methods that remove noisy sentences from the bag (Zeng et al., 2015; Qin et al., 2018; Han et al., 2018a; Shang, 2019).

However, these bag-level approaches fail to map each sentence inside bags with explicit sentence labels.

This problem limits the application of RE in some downstream tasks that require sentence-level relation types, e.g., Yao and Van Durme (2014) and Xu et al. (2016) use sentence-level relation extraction to identify the relation between the answer and the entity in the question.

Therefore, several studies (Jia et al., 2019; Feng et al., 2018) have made efforts on sentence-level (or instance-level) distant RE, empirically verifying the deficiency of bag-level methods on sentence-level evaluation.

However, the instance selection approaches of these methods depend on rewards (Feng et al., 2018) or frequent patterns (Jia et al., 2019) determined by bag-level labels, which contain much noise.

For one thing, one bag might be assigned to multiple bag labels, leading to difficulties in one-to-one mapping between sentences and labels.

As shown in Fig.1, we have no access to the exact relation between “place of birth” and “employee of” for the sentence “Obama was born in the United States.”

For another, the sentences inside a bag might not express the bag relations. In Fig.1, the sentence “Obama was back to the United States yesterday” actually expresses the relation “live in”, which is not included in the bag labels.

Figure 1: Two types of noise exist in bag-level labels: 1) multi-label noise: the exact label (“place of birth” or “employee of”) for each sentence is unclear; 2) wrong-label noise: the third sentence inside the bag actually expresses “live in”, which is not included in the bag labels.

In this work, we propose the use of negative training (NT) (Kim et al., 2019) for distant RE.

Different from positive training (PT), NT trains a model by selecting the complementary labels of the given label, regarding that “the input sentence does not belong to this complementary label”.

Since the probability of selecting a true label as a complementary label is low, NT decreases the risk of providing noisy information and prevents the model from overfitting the noisy data.
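To make this concrete: when a complementary label is sampled uniformly from all classes except the given one, it coincides with the unknown true label only with probability 1/(C-1), even when the given bag label is wrong. A small simulation (the class count and labels below are illustrative, not from the paper):

```python
import random

def sample_complementary(given_label, num_classes):
    """Uniformly pick a label other than the given (possibly noisy) one."""
    candidates = [c for c in range(num_classes) if c != given_label]
    return random.choice(candidates)

random.seed(0)
C = 53                             # illustrative relation-type count
true_label, given_label = 7, 12    # the given bag label is wrong here
trials = 100_000
hits = sum(sample_complementary(given_label, C) == true_label
           for _ in range(trials))
rate = hits / trials               # ≈ 1/(C-1) ≈ 0.019: noisy signal is rare
print(rate)
```

So in this setting the complementary-label signal is misleading only about 2% of the time, whereas positive training on a wrong label is misleading every time.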

Moreover, the model trained with NT is able to separate the noisy data from the training data (a histogram in Fig.3 shows the separated data distribution during NT).

Based on NT, we propose SENT, a sentence-level framework for distant RE. During SENT training, the noisy instances are not only filtered with a noise-filtering strategy, but also transformed into useful training data with a re-labeling method.

We further design an iterative training algorithm to take full advantage of these data-refining processes, which significantly boosts performance.

To summarize the contribution of this work:
• We propose the use of negative training for sentence-level distant RE, which greatly protects the model from noisy information.

• We present a sentence-level framework, SENT, which includes a noise-filtering and a re-labeling strategy for refining distant data.
• The proposed method achieves significant improvement over previous methods in terms of both RE performance and de-noise effect.


2 Related Work

2.1 Distant Supervision for RE

Supervised relation extraction (RE) has been constrained by the lack of large-scale labeled data. Therefore, distant supervision (DS) is introduced by Mintz et al. (2009), which employs existing knowledge bases (KBs) as the source of supervision instead of annotated text.

Riedel et al. (2010) relax the DS assumption to the express-at-least-once assumption.

As a result, multi-instance learning is introduced (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) for this task, where the training and evaluating processes are performed at the bag level, with potential noisy sentences existing in each bag.

Most following studies in distant RE adopt this paradigm, aiming to decrease the impact of noisy sentences in each bag.

These studies include the attention-based methods that attend to useful information (Lin et al., 2016; Han et al., 2018c; Li et al., 2020; Hu et al., 2019a; Ye and Ling, 2019; Yuan et al., 2019a; Zhu et al., 2019; Yuan et al., 2019b; Wu et al., 2017), the selection strategies such as RL or adversarial training to remove noisy sentences from the bag (Zeng et al., 2015; Shang, 2019; Qin et al., 2018; Han et al., 2018a), and the incorporation of extra information such as KGs, multi-lingual corpora or other information (Ji et al., 2017; Lei et al., 2018; Vashishth et al., 2018; Han et al., 2018b; Zhang et al., 2019; Qu et al., 2019; Verga et al., 2016; Lin et al., 2017; Wang et al., 2018; Deng and Sun, 2019; Beltagy et al., 2019).

In this work, we focus on sentence-level relation extraction. Several previous studies also perform distant RE at the sentence level. Feng et al. (2018) propose a reinforcement learning framework for sentence selection, where the reward is given by the classification scores on bag labels.

Jia et al. (2019) build an initial training set and further select confident instances based on selected patterns. The difference between the proposed work and previous works is that we do not rely on bag-level labels for sentence selection.

Furthermore, we leverage NT to dynamically separate the noisy data from the training data, and thus can make use of diversified clean data.

2.2 Learning with Noisy Data

Learning with noisy data is a widely discussed problem in deep learning, especially in the field of computer vision.

Existing approaches include robust learning methods such as leveraging a robust loss function or regularization method (Lyu and Tsang, 2020; Zhang and Sabuncu, 2018; Hu et al., 2019b; Kim et al., 2019), re-weighting the loss of potential noisy samples (Ren et al., 2018; Jiang et al., 2018), modeling the corruption probability with a transition matrix (Goldberger and Ben-Reuven, 2016; Xia et al.), and so on.

Another line of research tries to recognize or even correct the noisy instances in the training data (Malach and Shalev-Shwartz, 2017; Yu et al., 2019; Arazo et al., 2019; Li et al., 2019).

In this paper, we focus on the noisy label problem in distant RE. We first leverage a robust negative loss (Kim et al., 2019) for model training. Then, we develop a new iterative training algorithm for noise selection and correction.

3 Methodology

In order to achieve sentence-level relation classification using bag-level labels in distant RE, we propose a framework, SENT, which contains three main steps (as shown in Fig.2): 
(1) Separating the noisy data from the training data with negative training (Sec.3.1);
 (2) Filtering the noisy data as well as re-labeling a part of confident instances (Sec.3.2); 
(3) Leveraging an effective training algorithm based on (1) and (2) to further boost the performance (Sec.3.3).

3.1  Negative Training on Distant Data

In order to perform robust training on the noisy distant data, we propose the use of negative training (NT), which trains based on the concept that “the input sentence does not belong to this complementary label”.

We find that NT not only provides less noisy information, but also separates the noisy and clean data during training.

3.1.1 Positive Training

Positive training (PT) trains the model towards predicting the given label, based on the concept that “the input sentence belongs to this label”.
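A minimal stdlib sketch of the PT objective (standard cross-entropy on the given label); the three-class logits are illustrative:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def positive_training_loss(logits, label):
    """PT: maximize the probability of the given label, L = -log p_y."""
    return -math.log(softmax(logits)[label])

loss = positive_training_loss([2.0, 0.5, -1.0], 0)
print(loss)  # small, since the model already favors label 0
```

Under PT, if the given label is noise, this loss still drags the model toward it; that is exactly the failure mode NT is designed to avoid.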

3.1.2 Negative Training

To further illustrate the effect of NT, we train the classifier with PT and NT respectively on a constructed TACRED dataset with 30% noise (details shown in Sec.4.1).

Histograms of the training data after PT and NT are shown in Fig. 3(a) and (b), which reveal that, when training with PT, the confidence of clean data and noisy data increases with no difference, causing the model to overfit the noisy training data.

On the contrary, when training with NT, the confidence of noisy data is much lower than that of clean data. This result confirms that the model trained with NT suffers less from overfitting noisy data, as less noisy information is provided.
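The NT objective here is the complementary-label loss of Kim et al. (2019): for a sampled complementary label k, L = -log(1 - p_k), i.e., the model is only pushed away from k, never pulled toward a possibly-noisy given label. A minimal stdlib sketch (logits are illustrative):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def negative_training_loss(logits, comp_label):
    """NT: minimize the probability of the complementary label,
    L = -log(1 - p_k)."""
    return -math.log(1.0 - softmax(logits)[comp_label])

nt_loss = negative_training_loss([2.0, 0.5, -1.0], 2)
print(nt_loss)  # small: little mass is on the complementary label already
```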

Moreover, as the confidence value of clean data and noisy data separate from each other, we are able to filter noisy data with a certain threshold.

Fig.4 shows the details of the data-filtering effect. After the first iteration of NT, a modest threshold contributes to 97% precision noise-filtering with about 50% recall, which further verifies the effectiveness of NT on noisy data training.
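The threshold-based filtering can be sketched as flagging instances whose confidence falls below a cutoff, then scoring the flags against noise indicators; all numbers below are synthetic, not the paper's:

```python
# Synthetic per-instance confidences and (unknown in practice) noise flags.
confidences = [0.9, 0.8, 0.1, 0.7, 0.2, 0.15]
is_noise    = [False, False, True, False, True, False]

threshold = 0.3
flagged = [c < threshold for c in confidences]  # low confidence => suspected noise

true_pos = sum(f and n for f, n in zip(flagged, is_noise))
precision = true_pos / sum(flagged)  # fraction of flags that are real noise
recall = true_pos / sum(is_noise)    # fraction of real noise that got flagged
print(precision, recall)
```

Raising the threshold trades precision for recall, which is why a single fixed cutoff is replaced below by a class-wise dynamic one.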

3.2 Noise Filtering and Re-labeling

In Section 3.1, we have illustrated the effectiveness of NT on training with noisy data, as well as the capability to recognize noisy instances.

While filtering noisy data is important for training on distant data, these filtered data contain useful information that can boost performance if properly re-labeled.

In this section, we describe the proposed noise filtering and label-recovering strategy for refining distant data based on NT.

3.2.1 Filtering Noisy Data

As discussed before, it is intuitive to construct a filtering strategy based on a certain threshold after NT. However, in distant RE, the long-tail problem cannot be neglected.

During training, the degree of convergence is disparate among different classes. Simply setting a uniform threshold might harm the data distribution, with instances of long-tail relations largely filtered out. Therefore, we leverage a dynamic threshold for filtering noisy data.

In this way, the noise-filtering threshold not only relies on the degree of convergence in each class, but also dynamically changes during the training phase, thus making it more suitable for noise-filtering on long-tail data.
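One way to realize such a dynamic threshold (an assumption of ours, not necessarily the paper's exact rule) is to set each class's cutoff to a fraction of that class's mean confidence, so slowly-converging long-tail classes get proportionally lower cutoffs:

```python
from collections import defaultdict

def classwise_thresholds(instances, fraction=0.5):
    """instances: (label, confidence) pairs. Each class's threshold scales
    with its own mean confidence, so long-tail classes with overall lower
    confidence are not wiped out by a single global cutoff."""
    by_class = defaultdict(list)
    for label, conf in instances:
        by_class[label].append(conf)
    return {c: fraction * sum(v) / len(v) for c, v in by_class.items()}

data = [("head_rel", 0.9), ("head_rel", 0.8), ("tail_rel", 0.3), ("tail_rel", 0.25)]
thresholds = classwise_thresholds(data)
kept = [(label, c) for label, c in data if c >= thresholds[label]]
print(thresholds)
```

A global threshold of, say, 0.4 would discard both `tail_rel` instances; the class-wise rule keeps them.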

3.2.2 Re-labeling Useful Data

After noise-filtering, the noisy instances are regarded as unlabeled data, which also contain useful information for training. Here, we design a simple strategy for re-labeling these unlabeled data.
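One simple rule consistent with this description (the concrete threshold is our illustrative assumption): give a filtered-out instance the model's argmax label only when that prediction is confident enough, otherwise leave it unlabeled:

```python
def relabel(prob_rows, threshold=0.9):
    """prob_rows: per-instance class-probability lists for filtered instances.
    Returns the argmax label when confident, else None (stays unlabeled)."""
    labels = []
    for probs in prob_rows:
        best = max(range(len(probs)), key=probs.__getitem__)
        labels.append(best if probs[best] >= threshold else None)
    return labels

new_labels = relabel([[0.95, 0.03, 0.02],   # confident -> re-labeled as class 0
                      [0.40, 0.35, 0.25]])  # uncertain -> left unlabeled
print(new_labels)  # [0, None]
```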

3.3 Iterative Training Algorithm

Although effective, simply performing a pipeline of NT, noise-filtering and re-labeling fails to take full advantage of each part; thus, the model performance can be further boosted through iterative training.

As shown in Fig.2, for each iteration, we first train the classifier on the noisy data using NT: for each instance, we randomly sample K complementary labels and calculate the loss on these labels with Eq.(2).

After M epochs of negative training, the noise-filtering and re-labeling processes are carried out to update the training data. Next, we perform a new iteration of training on the newly-refined data.

Here, we re-initialize the classifier in every iteration for two reasons: First, re-initialization ensures that in each iteration, the new classifier is trained on a dataset with higher quality. Second, re-initialization introduces randomness, thus contributing to more robust data-filtering.

Finally, we stop the iteration after observing the best result on the dev set. We then perform a round of noise-filtering and re-labeling with the best model in the last iteration to obtain the final refined data.
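The loop described above can be sketched as a skeleton; every component below is a stub standing in for the paper's actual negative training, dev evaluation, and data refinement:

```python
def sent_iterations(data, num_iters, init_model, train_nt, dev_score, refine):
    """Skeleton of SENT's iterative training: each round re-initializes the
    classifier, negatively trains it, tracks the best dev model, and refines
    (noise-filters + re-labels) the data for the next round."""
    best_score, best_model = float("-inf"), None
    for _ in range(num_iters):
        model = init_model()        # fresh classifier every iteration
        train_nt(model, data)       # M epochs of negative training
        score = dev_score(model)
        if score > best_score:
            best_score, best_model = score, model
        data = refine(model, data)  # noise-filtering + re-labeling
    # final refinement pass with the best model
    return refine(best_model, data)

# Toy run with stubs: each refinement pass drops the last instance.
refined = sent_iterations(
    data=[1, 2, 3, 4], num_iters=2,
    init_model=object, train_nt=lambda m, d: None,
    dev_score=lambda m: 0.0, refine=lambda m, d: d[:-1],
)
print(refined)  # [1]
```

The stopping criterion is simplified here to a fixed iteration count; the paper instead stops once the dev result peaks.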

Fig.3(c) shows the data distribution after certain iterations of SENT. As seen, the noisy and clean data are separated by a large margin.

Most noisy data are successfully filtered out, with an acceptable number of clean data mistakenly removed.

However, we can see that the model trained with NT still lacks convergence (with low-confidence predictions). Therefore, we train the classifier on the iteratively-refined data with PT for better convergence.

As shown in Fig.3(d), the model predictions on most of the clean data are in high confidence after PT training.

SENT: Sentence-level Distant Relation Extraction via Negative Training (ACL 2021)
