I saw a post on Weibo from @c0d3r_Jia:

"This piece explains LDA in a more 'humane' way, and is much clearer than the papers written to prove that one's own algorithm is correct. It also notes, though, that to get good results from LDA or other probabilistic models, you have to keep filtering features and carefully selecting the tokens you operate on. // Topic modeling made just simple enough http://t.cn/zOpOc4D // Is liking articles like this a big part of why Sheldon looks down on Leonard? Heh."

I opened the link, read the article, and reposted it. Some readers then reported that it was blocked, and I realized it had been published on wordpress.com, so I'm reprinting it here for anyone who needs it. The original is: Topic modeling made just simple enough

Right now, humanists often have to take topic modeling on faith. There are several good posts out there that introduce the principle of the thing (by Matt Jockers, for instance, and Scott Weingart). But it’s a long step up from those posts to the computer-science articles that explain “Latent Dirichlet Allocation” mathematically. My goal in this post is to provide a bridge between those two levels of difficulty.

Computer scientists make LDA seem complicated because they care about proving that their algorithms work. And the proof is indeed brain-squashingly hard. But the practice of topic modeling makes good sense on its own, without proof, and does not require you to spend even a second thinking about “Dirichlet distributions.” When the math is approached in a practical way, I think humanists will find it easy, intuitive, and empowering. This post focuses on LDA as shorthand for a broader family of “probabilistic” techniques. I’m going to ask how they work, what they’re for, and what their limits are.

How does it work? Say we’ve got a collection of documents, and we want to identify underlying “topics” that organize the collection. Assume that each document contains a mixture of different topics. Let’s also assume that a “topic” can be understood as a collection of words that have different probabilities of appearance in passages discussing the topic. One topic might contain many occurrences of “organize,” “committee,” “direct,” and “lead.” Another might contain a lot of “mercury” and “arsenic,” with a few occurrences of “lead.” (Most of the occurrences of “lead” in this second topic, incidentally, are nouns instead of verbs; part of the value of LDA will be that it implicitly sorts out the different contexts/meanings of a written symbol.)
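
To make that concrete, here's a tiny sketch in Python of what two such topics look like as data. The words come from the examples above; the probabilities are invented purely for illustration:

```python
# A "topic" is just a probability distribution over word types.
# These numbers are made up to illustrate the idea.
leadership_topic = {
    "organize": 0.12, "committee": 0.10, "direct": 0.09,
    "lead": 0.08,  # "lead" here is usually the verb
}
contamination_topic = {
    "mercury": 0.15, "arsenic": 0.12,
    "lead": 0.05,  # "lead" here is usually the noun (the metal)
}
```

Notice that "lead" appears in both topics with different weights; the model never sees parts of speech, but the company a word keeps ends up doing the same disambiguating work.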

Of course, we can’t directly observe topics; in reality all we have are documents. Topic modeling is a way of extrapolating backward from a collection of documents to infer the discourses (“topics”) that could have generated them. (The notion that documents are produced by discourses rather than authors is alien to common sense, but not alien to literary theory.) Unfortunately, there is no way to infer the topics exactly: there are too many unknowns. But pretend for a moment that we had the problem mostly solved. Suppose we knew which topic produced every word in the collection, except for this one word in document D. The word happens to be “lead,” which we’ll call word type W. How are we going to decide whether this occurrence of W belongs to topic Z?

We can’t know for sure. But one way to guess is to consider two questions. A) How often does “lead” appear in topic Z elsewhere? If “lead” often occurs in discussions of Z, then this instance of “lead” might belong to Z as well. But a word can be common in more than one topic. And we don’t want to assign “lead” to a topic about leadership if this document is mostly about heavy metal contamination. So we also need to consider B) How common is topic Z in the rest of this document?

Here’s what we’ll do. For each possible topic Z, we’ll multiply the frequency of this word type W in Z by the number of other words in document D that already belong to Z. The result will represent the probability that this word came from Z. Here’s the actual formula:

$$P(Z \mid W, D) \;\propto\; \frac{n_{W,Z} + \beta}{n_Z + V\beta} \times \bigl(n_{D,Z} + \alpha\bigr)$$

Here $n_{W,Z}$ is the number of times word type W has been assigned to topic Z elsewhere in the collection, $n_Z$ is the total number of words assigned to Z, $V$ is the size of the vocabulary, and $n_{D,Z}$ is the number of other words in document D already assigned to Z.
Simple enough. Okay, yes, there are a few Greek letters scattered in there, but they aren’t terribly important. They’re called “hyperparameters” — stop right there! I see you reaching to close that browser tab! — but you can also think of them simply as fudge factors. There’s some chance that this word belongs to topic Z even if it is nowhere else associated with Z; the fudge factors keep that possibility open. The overall emphasis on probability in this technique, of course, is why it’s called probabilistic topic modeling.
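
If it helps to see the formula as code, here's a minimal sketch in Python. The counts and the fudge factors are inputs; the names are mine, not any library's:

```python
def weight_for_topic(n_wz, n_z, n_dz, beta, alpha, vocab_size):
    """Unnormalized probability that this occurrence of word type W
    in document D belongs to topic Z, from the formula above.

    n_wz: how often W is assigned to Z elsewhere in the collection
    n_z:  total number of words currently assigned to Z
    n_dz: how many other words in D are currently assigned to Z
    beta, alpha: the "fudge factors" (hyperparameters)
    """
    word_given_topic = (n_wz + beta) / (n_z + beta * vocab_size)
    topic_given_doc = n_dz + alpha
    return word_given_topic * topic_given_doc
```

Even if n_wz is zero, because the word has never been assigned to Z anywhere, beta keeps the result above zero, which is exactly the possibility the fudge factors are there to protect.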

Now, suppose that instead of having the problem mostly solved, we had only a wild guess which word belonged to which topic. We could still use the strategy outlined above to improve our guess, by making it more internally consistent. We could go through the collection, word by word, and reassign each word to a topic, guided by the formula above. As we do that, a) words will gradually become more common in topics where they are already common. And also, b) topics will become more common in documents where they are already common. Thus our model will gradually become more consistent as topics focus on specific words and documents. But it can’t ever become perfectly consistent, because words and documents don’t line up in one-to-one fashion. So the tendency for topics to concentrate on particular words and documents will eventually be limited by the actual, messy distribution of words across documents.

That’s how topic modeling works in practice. You assign words to topics randomly and then just keep improving the model, to make your guess more internally consistent, until the model reaches an equilibrium that is as consistent as the collection allows.
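
Here's what that whole procedure looks like as a toy Python implementation: a collapsed Gibbs sampler under simplifying assumptions (symmetric alpha and beta, documents given as lists of integer word ids). It's a sketch of the idea, not MALLET's optimized code:

```python
import random

def gibbs_lda(docs, vocab_size, num_topics, alpha=0.1, beta=0.01, iters=200):
    """Toy collapsed Gibbs sampler. docs: list of lists of word ids."""
    # Start from a completely random guess.
    z = [[random.randrange(num_topics) for _ in doc] for doc in docs]
    n_dz = [[0] * num_topics for _ in docs]               # topic counts per document
    n_zw = [[0] * vocab_size for _ in range(num_topics)]  # word counts per topic
    n_z = [0] * num_topics                                # total words per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            n_dz[d][t] += 1; n_zw[t][w] += 1; n_z[t] += 1

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                # Take this word out of the counts...
                n_dz[d][t] -= 1; n_zw[t][w] -= 1; n_z[t] -= 1
                # ...weight every topic by the formula above...
                weights = [(n_zw[k][w] + beta) / (n_z[k] + beta * vocab_size)
                           * (n_dz[d][k] + alpha) for k in range(num_topics)]
                # ...and resample this word's topic in proportion to those weights.
                t = random.choices(range(num_topics), weights=weights)[0]
                z[d][i] = t
                n_dz[d][t] += 1; n_zw[t][w] += 1; n_z[t] += 1
    return z, n_zw
```

Each pass makes the assignments a little more internally consistent, and after enough passes the model settles into the rough equilibrium described above.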

What is it for? Topic modeling gives us a way to infer the latent structure behind a collection of documents. In principle, it could work at any scale, but I tend to think human beings are already pretty good at inferring the latent structure in (say) a single writer’s oeuvre. I suspect this technique becomes more useful as we move toward a scale that is too large to fit into human memory.

So far, most of the humanists who have explored topic modeling have been historians, and I suspect that historians and literary scholars will use this technique differently. Generally, historians have tried to assign a single label to each topic. So in mining the Richmond Daily Dispatch, Robert K. Nelson looks at a topic with words like “hundred,” “cotton,” “year,” “dollars,” and “money,” and identifies it as TRADE — plausibly enough. Then he can graph the frequency of the topic as it varies over the print run of the newspaper.

As a literary scholar, I find that I learn more from ambiguous topics than I do from straightforwardly semantic ones. When I run into a topic like “sea,” “ship,” “boat,” “shore,” “vessel,” “water,” I shrug. Yes, some books discuss sea travel more than others do. But I’m more interested in topics like this:

[Figure: the topic’s most prominent words, including “deep,” “bright,” “wild,” “eye,” “where,” and “when”]
You can tell by looking at the list of words that this is poetry, and plotting the volumes where the topic is prominent confirms the guess.

[Figure: the topic’s prominence plotted across volumes, peaking in poetry published between 1815 and 1835]
This topic is prominent in volumes of poetry from 1815 to 1835, especially in poetry by women, including Felicia Hemans, Letitia Landon, and Caroline Norton. Lord Byron is also well represented. It’s not really a “topic,” of course, because these words aren’t linked by a single referent. Rather it’s a discourse or a kind of poetic rhetoric. In part it seems predictably Romantic (“deep bright wild eye”), but less colorful function words like “where” and “when” may reveal just as much about the rhetoric that binds this topic together.

A topic like this one is hard to interpret. But for a literary scholar, that’s a plus. I want this technique to point me toward something I don’t yet understand, and I almost never find that the results are too ambiguous to be useful. The problematic topics are the intuitive ones — the ones that are clearly about war, or seafaring, or trade. I can’t do much with those.

Now, I have to admit that there’s a bit of fine-tuning required up front, before I start getting “meaningfully ambiguous” results. In particular, a standard list of stopwords is rarely adequate. For instance, in topic-modeling fiction I find it useful to get rid of at least the most common personal pronouns, because otherwise the difference between first-person and third-person point of view becomes a dominant signal that crowds out other interesting phenomena. Personal names also need to be weeded out; otherwise you discover strong but boring connections between books that happen to share a character named “Richard.” This sort of thing is very much a critical judgment call; it’s not a science.
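
As a sketch of what I mean, the pre-filtering might look like this in Python; the extra word lists are examples of the kind of judgment calls involved, not a standard:

```python
# Extend a generic stopword list with corpus-specific choices.
standard_stopwords = {"the", "a", "an", "and", "of", "to", "in"}  # abbreviated
personal_pronouns = {"i", "me", "my", "he", "him", "his", "she", "her", "hers"}
character_names = {"richard"}  # collected per collection, by hand or from a name list

stoplist = standard_stopwords | personal_pronouns | character_names

tokens = "he said that richard would lead the committee".lower().split()
filtered = [t for t in tokens if t not in stoplist]
print(filtered)  # ['said', 'that', 'would', 'lead', 'committee']
```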

I should also admit that, when you’re modeling fiction, the “author” signal can be very strong. I frequently discover topics that are dominated by a single author, and clearly reflect her unique idiom. This could be a feature or a bug, depending on your interests; I tend to view it as a bug, but I find that the author signal does diffuse more or less automatically as the collection expands.

What are the limits of probabilistic topic modeling? I spent a long time resisting the allure of LDA, because it seemed like a fragile and unnecessarily complicated technique. But I have convinced myself that it’s both effective and less complex than I thought. (Matt Jockers, Travis Brown, Neil Fraistat, and Scott Weingart also deserve credit for convincing me to try it.)

This isn’t to say that we need to use probabilistic techniques for everything we do. LDA and its relatives are valuable exploratory methods, but I’m not sure how much value they will have as evidence. For one thing, they require you to make a series of judgment calls that deeply shape the results you get (from choosing stopwords, to the number of topics produced, to the scope of the collection). The resulting model ends up being tailored in difficult-to-explain ways by a researcher’s preferences. Simpler techniques, like corpus comparison, can answer a question more transparently and persuasively, if the question is already well-formed. (In this sense, I think Ben Schmidt is right to feel that topic modeling wouldn’t be particularly useful for the kinds of comparative questions he likes to pose.)

Moreover, probabilistic techniques have an unholy thirst for memory and processing time. You have to create several different variables for every single word in the corpus. The models I’ve been running, with roughly 2,000 volumes, are getting near the edge of what can be done on an average desktop machine, and commonly take a day. To go any further with this, I’m going to have to beg for computing time. That’s not a problem for me here at Urbana-Champaign (you may recall that we invented HAL), but it will become a problem for humanists at other kinds of institutions.

Probabilistic methods are also less robust than, say, vector-space methods. When I started running LDA, I immediately discovered noise in my collection that had not previously been a problem. Running headers at the tops of pages, in particular, left traces: until I took out those headers, topics were suspiciously sensitive to the titles of volumes. But LDA is sensitive to noise precisely because it is sensitive to everything else. On the whole, if you’re just fishing for interesting patterns in a large collection of documents, I think probabilistic techniques are the way to go.

Where to go next
The standard implementation of LDA is the one in MALLET. I haven’t used it yet, because I wanted to build my own version, to make sure I understood everything clearly. But MALLET is better. If you want a few examples of complete topic models on collections of 18/19c volumes, I’ve put some models, with R scripts to load them, in my github folder.

If you want to understand the technique more deeply, the first thing to do is to read up on Bayesian statistics. In this post, I gloss over the Bayesian underpinnings of LDA because I think the implementation (using a strategy called Gibbs sampling, which is actually what I described above!) is intuitive enough without them. And this might be all you need! I doubt most humanists will need to go further. But if you do want to tinker with the algorithm, you’ll need to understand Bayesian probability.

David Blei invented LDA, and writes well, so if you want to understand why this technique has “Dirichlet” in its name, his works are the next things to read. I recommend his Introduction to Probabilistic Topic Models. It recently came out in Communications of the ACM, but I think you get a more readable version by going to his publication page (link above) and clicking the pdf link at the top of the page.

Probably the next place to go is “Rethinking LDA: Why Priors Matter,” a really thoughtful article by Hanna Wallach, David Mimno, and Andrew McCallum that explains the “hyperparameters” I glossed over in a more principled way.

Then there is a whole family of techniques related to LDA — Topics Over Time, Dynamic Topic Modeling, Hierarchical LDA, Pachinko Allocation — that you can explore quickly enough by searching the web. In general, it’s a good idea to approach these skeptically. They all promise to do more than LDA does, but they also add assumptions to the model, and humanists will need to reflect carefully about which assumptions we actually want to make. I do think humanists will want to modify the LDA algorithm, but it’s probably something we’re going to have to do for ourselves; I’m not convinced that computer scientists understand our problems well enough to do this kind of fine-tuning.
