Maximal Marginal Relevance (MMR) Algorithm

MMR computes the similarity between a query text and the candidate documents, then ranks the documents. The algorithm is defined as

$$\text{MMR}(Q,C,R) = \operatorname*{Arg\,max}_{d_i \in C \setminus R}^{k} \left[ \lambda\, \text{sim}(Q, d_i) - (1-\lambda) \max_{d_j \in R} \text{sim}(d_i, d_j) \right]$$

where Q is the query text, C is the collection of candidate documents, R is an initial set already retrieved on the basis of relevance, and $\operatorname*{Arg\,max}_{d_i \in C \setminus R}^{k}$ denotes the indices of the k sentences returned by the search.

Applied to text summarization, Q and C both stand for the whole document, and $d_i$ is a sentence of that document. The first term inside the brackets is the similarity between a sentence and the whole document; the second term is the similarity between that sentence and the sentences already extracted for the summary. In this way the extracted sentences are meant both to express the meaning of the whole document and to remain diverse, with $\lambda$ weighting the trade-off between the two goals.

With MMR we can extract the most important sentences of a document to form a summary. Many similarity measures can be plugged in, such as TF-IDF, cosine similarity, Euclidean distance, or a neural model. Below we look at how to combine TF-IDF and cosine similarity with MMR to build a simple extractive summarizer.
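Before the concrete similarity measures, here is a minimal sketch of the greedy selection loop that the MMR formula implies, independent of how `sim` is defined. The names `mmr_select`, `sim`, `lam`, etc. are illustrative, not part of the reference implementation below:

def mmr_select(query, candidates, sim, k=3, lam=0.5):
    # Greedy MMR: repeatedly pick the candidate most similar to the query
    # and least similar to anything already selected.
    selected = []
    candidates = list(candidates)
    while candidates and len(selected) < k:
        def mmr(c):
            redundancy = max((sim(c, s) for s in selected), default=0.0)
            return lam * sim(query, c) - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected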

  • TF-IDF + MMR

TF: estimates how important a word is within a single document

$$\text{TF}(w,d) = \frac{n_{w,d}}{\sum_{u \in \{ w_{d}\}} n_{u,d}}$$

where $n_{w,d}$ is the number of times word w occurs in document d, and $\{w_{d}\}$ is the set of all words in document d

IDF: the inverse document frequency

$$\text{IDF}(w) = \log\left(\frac{n}{n_{w}}\right)$$

where n is the total number of documents and $n_{w}$ is the number of documents that contain word w

TF-IDF

$$\text{TF-IDF}(w,d) = \text{TF}(w,d) \cdot \text{IDF}(w)$$
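As a quick sanity check with toy numbers (not taken from any real corpus): if a word occurs 2 times in a 10-word document, TF = 2/10 = 0.2; if 10 out of n = 100 documents contain it, IDF = log10(100/10) = 1, so TF-IDF = 0.2 × 1 = 0.2. A word appearing in every document gets IDF = log10(100/100) = 0 and is discounted entirely.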

def TFs(sentences):
    tfs = dict()  # maps word -> term-frequency count over the document
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        for word, value in wordFreqs.items():
            if tfs.get(word, 0) == 0:  # word not seen yet: take its count from wordFreqs
                tfs[word] = wordFreqs[word]
            else:                      # otherwise accumulate the two counts
                tfs[word] = tfs[word] + wordFreqs[word]
    return tfs

def IDFs(sentences):
    N = len(sentences)
    idf = 0
    idfs, words = dict(), dict()
    w2 = list()
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        for word in preprowords:
            if wordFreqs.get(word, 0) != 0:
                words[word] = words.get(word, 0) + 1  # document frequency of word
    for word in words:
        n = words[word]
        try:
            w2.append(n)
            idf = math.log10(float(N) / n)
        except ZeroDivisionError:
            idf = 0
        idfs[word] = idf
    return idfs

def TF_IDF(sentences):
    tfs = TFs(sentences)
    idfs = IDFs(sentences)
    retval = dict()
    for word, value in tfs.items():
        tf_idfs = tfs[word] * idfs[word]
        if retval.get(tf_idfs, None) is None:
            retval[tf_idfs] = [word]
        else:
            retval[tf_idfs].append(word)  # group words by their TF-IDF value
    return retval

Now that we know how to compute TF-IDF values, the next step is to use them to measure the similarity between sentences.

def sentenceSim(sent1, sent2, IDF_w):
    numerator = 0
    denominator = 0
    # get the words and word frequencies of both sentences
    sent1 = sent1.split(" ")
    preprowords1, wordFreqs1 = preprowords_and_wordFreqs(sent1)
    sent2 = sent2.split(" ")
    preprowords2, wordFreqs2 = preprowords_and_wordFreqs(sent2)
    for word in preprowords2:
        numerator += wordFreqs1.get(word, 0) * wordFreqs2.get(word, 0) * IDF_w.get(word, 0) ** 2
    for word in preprowords1:
        # norm of sent1, built from sent1's own frequencies
        denominator += (wordFreqs1.get(word, 0) * IDF_w.get(word, 0)) ** 2
    try:
        return numerator / math.sqrt(denominator)
    except ZeroDivisionError:
        return float("-inf")
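A toy sanity check (hypothetical words and IDF values; it assumes the helper preprowords_and_wordFreqs from the complete script below is in scope, and in the full pipeline IDF_w comes from IDFs over the sentence list):

idf = {'cat': 1.0, 'mat': 1.0, 'dog': 1.0, 'sat': 1.0, 'ran': 1.0}
same = sentenceSim('cat sat mat', 'cat sat mat', idf)
diff = sentenceSim('cat sat mat', 'dog ran', idf)
print(same > diff)  # expected: True, an identical sentence scores higher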

Computing the MMR score:

def MMRScore(Si, query, Sj, lambta, IDF):
    # relevance term: similarity of candidate Si to the query
    Sim1 = sentenceSim(Si, query, IDF)
    l_expr = lambta * Sim1
    value = [float("-inf")]
    # redundancy term: maximum similarity of Si to any already-selected sentence in Sj
    for sent in Sj:
        Sim2 = sentenceSim(Si, sent, IDF)
        value.append(Sim2)
    r_expr = (1 - lambta) * max(value)
    MMR_SCORE = l_expr - r_expr
    return MMR_SCORE
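For intuition, with toy numbers: if a candidate sentence has similarity 0.8 to the query and its highest similarity to an already-selected sentence is 0.6, then with λ = 0.5 its MMR score is 0.5 × 0.8 − 0.5 × 0.6 = 0.1; a candidate with the same relevance but redundancy 0.9 would score 0.5 × 0.8 − 0.5 × 0.9 = −0.05 and be passed over.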
  • Cosine similarity

A simple approach is to use scikit-learn's CountVectorizer to represent the sentences (a pretrained word-embedding model would of course give a better sentence representation), and cosine_similarity to compute the cosine similarity between the vectors.

def calculateSimilarity(sentence, doc):
    # similarity between one sentence and a list of sentences treated as one document
    if doc == []:
        return 0
    vocab = {}
    for word in sentence.split():
        vocab[word] = 0
    docInOneSentence = ''
    for t in doc:
        docInOneSentence += (t + ' ')
        for word in t.split():
            vocab[word] = 0
    cv = CountVectorizer(vocabulary=vocab.keys())
    docVector = cv.fit_transform([docInOneSentence])
    sentenceVector = cv.fit_transform([sentence])
    return cosine_similarity(docVector, sentenceVector)[0][0]
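A hypothetical call (assuming scikit-learn is installed and the imports shown in the complete script below):

doc = ['the cat sat on the mat', 'dogs bark loudly']
print(calculateSimilarity('the cat sat', doc))
# prints a value in [0, 1]; larger means more overlap with the document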

The complete implementation


# coding: utf-8
from __future__ import absolute_import, print_function, unicode_literals, division
import collections
import unicodedata
import re
import os
import nltk
from nltk.stem.porter import PorterStemmer
import math

with open('article.txt', 'r') as f:  # can be replaced with any plain-text file
    article = f.read()

def unicode_to_ascii(text):
    return ''.join(c for c in unicodedata.normalize('NFD', text)
                   if unicodedata.category(c) != 'Mn')

# text preprocessing
def process_text(text):
    # text = unicode_to_ascii(text.lower().strip())
    # create a space between a word and the punctuation following it
    text = re.sub(r"([?.!,¿])", r" \1 ", text)
    text = re.sub(r'[" "]+', " ", text)
    # replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
    text = re.sub(r"[^a-zA-Z?.!¿,]+", " ", text)
    text = text.replace(',', ' ')
    text = text.replace('\n', ' ')
    text = text.strip()
    return text.lower()

STOPWORDS = frozenset([
    'all', 'six', 'just', 'less', 'being', 'indeed', 'over', 'move', 'anyway', 'four', 'not', 'own', 'through',
    'using', 'fifty', 'where', 'mill', 'only', 'find', 'before', 'one', 'whose', 'system', 'how', 'somewhere',
    'much', 'thick', 'show', 'had', 'enough', 'should', 'to', 'must', 'whom', 'seeming', 'yourselves', 'under',
    'ours', 'two', 'has', 'might', 'thereafter', 'latterly', 'do', 'them', 'his', 'around', 'than', 'get', 'very',
    'de', 'none', 'cannot', 'every', 'un', 'they', 'front', 'during', 'thus', 'now', 'him', 'nor', 'name', 'regarding',
    'several', 'hereafter', 'did', 'always', 'who', 'didn', 'whither', 'this', 'someone', 'either', 'each', 'become',
    'thereupon', 'sometime', 'side', 'towards', 'therein', 'twelve', 'because', 'often', 'ten', 'our', 'doing', 'km',
    'eg', 'some', 'back', 'used', 'up', 'go', 'namely', 'computer', 'are', 'further', 'beyond', 'ourselves', 'yet',
    'out', 'even', 'will', 'what', 'still', 'for', 'bottom', 'mine', 'since', 'please', 'forty', 'per', 'its',
    'everything', 'behind', 'does', 'various', 'above', 'between', 'it', 'neither', 'seemed', 'ever', 'across', 'she',
    'somehow', 'be', 'we', 'full', 'never', 'sixty', 'however', 'here', 'otherwise', 'were', 'whereupon', 'nowhere',
    'although', 'found', 'alone', 're', 'along', 'quite', 'fifteen', 'by', 'both', 'about', 'last', 'would',
    'anything', 'via', 'many', 'could', 'thence', 'put', 'against', 'keep', 'etc', 'amount', 'became', 'ltd', 'hence',
    'onto', 'or', 'con', 'among', 'already', 'co', 'afterwards', 'formerly', 'within', 'seems', 'into', 'others',
    'while', 'whatever', 'except', 'down', 'hers', 'everyone', 'done', 'least', 'another', 'whoever', 'moreover',
    'couldnt', 'throughout', 'anyhow', 'yourself', 'three', 'from', 'her', 'few', 'together', 'top', 'there', 'due',
    'been', 'next', 'anyone', 'eleven', 'cry', 'call', 'therefore', 'interest', 'then', 'thru', 'themselves',
    'hundred', 'really', 'sincere', 'empty', 'more', 'himself', 'elsewhere', 'mostly', 'on', 'fire', 'am', 'becoming',
    'hereby', 'amongst', 'else', 'part', 'everywhere', 'too', 'kg', 'herself', 'former', 'those', 'he', 'me', 'myself',
    'made', 'twenty', 'these', 'was', 'bill', 'cant', 'us', 'until', 'besides', 'nevertheless', 'below', 'anywhere',
    'nine', 'can', 'whether', 'of', 'your', 'toward', 'my', 'say', 'something', 'and', 'whereafter', 'whenever',
    'give', 'almost', 'wherever', 'is', 'describe', 'beforehand', 'herein', 'doesn', 'an', 'as', 'itself', 'at',
    'have', 'in', 'seem', 'whence', 'ie', 'any', 'fill', 'again', 'hasnt', 'inc', 'thereby', 'thin', 'no', 'perhaps',
    'latter', 'meanwhile', 'when', 'detail', 'same', 'wherein', 'beside', 'also', 'that', 'other', 'take', 'which',
    'becomes', 'you', 'if', 'nobody', 'unless', 'whereas', 'see', 'though', 'may', 'after', 'upon', 'most', 'hereupon',
    'eight', 'but', 'serious', 'nothing', 'such', 'why', 'off', 'a', 'don', 'whereby', 'third', 'i', 'whole', 'noone',
    'sometimes', 'well', 'amoungst', 'yours', 'their', 'rather', 'without', 'so', 'five', 'the', 'first', 'with',
    'make', 'once'
])

# stop-word removal
def remove_stopwords(s):
    s = unicode_to_ascii(s)
    return " ".join(w for w in s.split() if w not in STOPWORDS)

def create_dataset(text):
    data = [remove_stopwords(process_text(line)) for line in text.split(".")]
    return data

def preprowords_and_wordFreqs(sent):
    porter_stemmer = PorterStemmer()
    preprowords = list()  # the words after stemming
    s = ""
    for word in sent:
        s += ' ' + porter_stemmer.stem(word)
        preprowords.append(porter_stemmer.stem(word))
    wordFreqs = collections.Counter(str(s).split(" ")).most_common(15)
    wordFreqs = dict(wordFreqs)  # word-frequency dictionary
    return preprowords, wordFreqs

# ### TF
#
# estimates how important a word is within a single document
#
# $$\text{TF}(w,d) = \frac{n_{w,d}}{\sum_{u \in \{ w_{d}\}}n_{u,d}}$$
#
# where $n_{w,d}$ is the number of times word w occurs in document d,
# and $\{w_{d}\}$ is the set of all words in document d

def TFs(sentences):
    tfs = dict()  # maps word -> term-frequency count over the document
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        for word, value in wordFreqs.items():
            if tfs.get(word, 0) == 0:  # word not seen yet: take its count from wordFreqs
                tfs[word] = wordFreqs[word]
            else:                      # otherwise accumulate the two counts
                tfs[word] = tfs[word] + wordFreqs[word]
    return tfs

# ### IDF
#
# inverse document frequency
#
# $$\text{IDF}(w) = \log(\frac{n}{n_{w}})$$
#
# where n is the total number of documents and $n_{w}$ is the number of documents containing word w

def IDFs(sentences):
    N = len(sentences)
    idf = 0
    idfs, words = dict(), dict()
    w2 = list()
    for sent in sentences:
        sent = sent.split(" ")
        preprowords, wordFreqs = preprowords_and_wordFreqs(sent)
        for word in preprowords:
            if wordFreqs.get(word, 0) != 0:
                words[word] = words.get(word, 0) + 1  # document frequency of word
    for word in words:
        n = words[word]
        try:
            w2.append(n)
            idf = math.log10(float(N) / n)
        except ZeroDivisionError:
            idf = 0
        idfs[word] = idf
    return idfs

# ### TF-IDF
#
# $$\text{TF-IDF}(w,d) = \text{TF}(w,d) \cdot \text{IDF}(w)$$

def TF_IDF(sentences):
    tfs = TFs(sentences)
    idfs = IDFs(sentences)
    retval = dict()
    for word, value in tfs.items():
        tf_idfs = tfs[word] * idfs[word]
        if retval.get(tf_idfs, None) is None:
            retval[tf_idfs] = [word]
        else:
            retval[tf_idfs].append(word)  # group words by their TF-IDF value
    return retval

def sentenceSim(sent1, sent2, IDF_w):
    numerator = 0
    denominator = 0
    sent1 = sent1.split(" ")
    preprowords1, wordFreqs1 = preprowords_and_wordFreqs(sent1)
    sent2 = sent2.split(" ")
    preprowords2, wordFreqs2 = preprowords_and_wordFreqs(sent2)
    for word in preprowords2:
        numerator += wordFreqs1.get(word, 0) * wordFreqs2.get(word, 0) * IDF_w.get(word, 0) ** 2
    for word in preprowords1:
        # norm of sent1, built from sent1's own frequencies
        denominator += (wordFreqs1.get(word, 0) * IDF_w.get(word, 0)) ** 2
    try:
        return numerator / math.sqrt(denominator)
    except ZeroDivisionError:
        return float("-inf")

def build_query(sentences, TF_IDF_w, n):
    # build a pseudo-query from the n words with the highest TF-IDF scores
    scores = list(TF_IDF_w.keys())
    scores.sort(reverse=True)
    i = 0
    j = 0
    querywords = list()
    while i < n:
        words = TF_IDF_w[scores[j]]
        for word in words:
            querywords.append(word)
            i += 1
            if i > n:
                break
        j += 1
    s = ""
    for word in querywords:
        s += " " + word
    return s

def best_sentence(sentences, query, IDF):
    # pick (and remove) the sentence most similar to the query
    best_sent = None
    maxVal = float("-inf")
    for sent in sentences:
        similarity = sentenceSim(sent, query, IDF)
        if similarity > maxVal:
            best_sent = sent
            maxVal = similarity
    sentences.remove(best_sent)
    return best_sent

def MMRScore(Si, query, Sj, lambta, IDF):
    # relevance term: similarity of candidate Si to the query
    Sim1 = sentenceSim(Si, query, IDF)
    l_expr = lambta * Sim1
    value = [float("-inf")]
    # redundancy term: maximum similarity of Si to any already-selected sentence
    for sent in Sj:
        Sim2 = sentenceSim(Si, sent, IDF)
        value.append(Sim2)
    r_expr = (1 - lambta) * max(value)
    MMR_SCORE = l_expr - r_expr
    return MMR_SCORE

def make_summary(sentences, best_sentence, query, summary_length, lambta, IDF):
    summary = [best_sentence]
    preprowords, wordFreqs = preprowords_and_wordFreqs(best_sentence)
    sum_len = len(preprowords)
    # greedily add the highest-MMR sentence until the length budget is used up
    while sum_len < summary_length and sentences:
        MMRVal = {}
        for sent in sentences:
            MMRVal[sent] = MMRScore(sent, query, summary, lambta, IDF)
        maxxer = max(MMRVal, key=MMRVal.get)
        summary.append(maxxer)
        sentences.remove(maxxer)
        preprowords, wordFreqs = preprowords_and_wordFreqs(maxxer)
        sum_len += len(preprowords)
    return summary

# split the article into preprocessed sentences first; all the statistics
# below operate on this sentence list, not on the raw string
sentences = create_dataset(article)

TF_w = TFs(sentences)
IDF_w = IDFs(sentences)
TF_IDF_w = TF_IDF(sentences)

query = build_query(sentences, TF_IDF_w, 10)
best1sentence = best_sentence(sentences, query, IDF_w)
summary = make_summary(sentences, best1sentence, query, 100, 0.5, IDF_w)

final_summary = ""
for sent in summary:
    final_summary += " " + sent + "."
final_summary = final_summary[:-1]
print(final_summary)

# #### Using cosine similarity to measure sentence importance within the document
import os
import re
import jieba
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import operator

def calculateSimilarity(sentence, doc):
    # similarity between one sentence and a list of sentences treated as one document
    if doc == []:
        return 0
    vocab = {}
    for word in sentence.split():
        vocab[word] = 0
    docInOneSentence = ''
    for t in doc:
        docInOneSentence += (t + ' ')
        for word in t.split():
            vocab[word] = 0
    cv = CountVectorizer(vocabulary=vocab.keys())
    docVector = cv.fit_transform([docInOneSentence])
    sentenceVector = cv.fit_transform([sentence])
    return cosine_similarity(docVector, sentenceVector)[0][0]

def compute_scores(sentences):
    # score each sentence by its similarity to the rest of the document
    scores = {}
    for sent in sentences:
        others = [s for s in sentences if s != sent]
        scores[sent] = calculateSimilarity(sent, others)
    return scores

# rebuild the sentence list (the MMR run above consumed the previous one)
sentences = create_dataset(article)
scores = compute_scores(sentences)
print(scores)

# summary budget: 25% of the sentences
n = int(25 * len(sentences) / 100)
alpha = 0.7
summarySet = []
while n > 0:
    mmr = {}
    for sentence in scores.keys():
        if sentence not in summarySet:
            mmr[sentence] = alpha * scores[sentence] - (1 - alpha) * calculateSimilarity(sentence, summarySet)
    selected = max(mmr.items(), key=operator.itemgetter(1))[0]
    summarySet.append(selected)
    n -= 1

print(str(summarySet))
