(1) SVD and LSI Tutorial (1): Understanding SVD and LSI

(2) SVD and LSI Tutorial (2): Computing Singular Values

(3) SVD and LSI Tutorial (3): Computing the Full SVD of a Matrix

(4) SVD and LSI Tutorial (4): LSI Calculations

(5) SVD and LSI Tutorial (5): LSI Keyword Research and Co-Occurrence Theory

Dr. E. Garcia

Mi Islita.com

Email | Last Update: 01/07/07

Introduction

In Parts 1 and 2 of this tutorial we covered the Singular Value Decomposition (SVD) algorithm. In Parts 3 and 4 we explained through examples how SVD is used in Latent Semantic Indexing (LSI). We mentioned how the U, S and V matrices and their truncated counterparts take on a meaning not found in plain SVD implementations.

First, we demonstrated that rows of V (or columns of VT) hold document vector coordinates. Thus, any two documents can be compared by computing cosine similarities. This information can be used to group documents (clustering), to classify documents by topic (topic analysis), or to construct collections of similar documents (directories).

Second, we have shown that the diagonal of S represents dimensions. These dimensions are used to embed vectors representing documents, queries, and terms.

Third, we indicated that rows of U hold term vector coordinates. Thus, any two terms can be compared by computing cosine similarities. With this information one should be able to conduct keyword research studies. Such studies could include the construction of a thesaurus, or the generation of lists of candidate terms to be used in web documents or in a keyword bidding program.
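
Since these operations on U, S and V come up repeatedly below, here is a minimal numpy sketch of them. It is not the author's code: it assumes a plain term count matrix A with terms as rows and documents as columns, and simply exposes the term space (rows of Uk) and the document space (rows of Vk) so that any two terms or any two documents can be compared by cosine similarity.

```python
# Minimal sketch (not from the original tutorial): SVD spaces for LSI.
# Assumes A is a term-document count matrix with terms as rows.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def svd_spaces(A, k):
    """Return rank-k term coordinates (rows of Uk), the diagonal Sk,
    and document coordinates (rows of Vk)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # numpy returns V transposed
    return U[:, :k], np.diag(s[:k]), Vt[:k, :].T

# Usage: Uk, Sk, Vk = svd_spaces(A, 2)
#        term_sim = cosine(Uk[i], Uk[j])   # compare two terms
#        doc_sim  = cosine(Vk[0], Vk[2])   # compare two documents
```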

In this article we want to address the last point; i.e., how readers could use U or Uk for keyword research. Since such studies are intimately linked to word usage and co-occurrence, we want to explain the role of keyword co-occurrence in the term-term LSI matrix. In particular, we want to explain how co-occurrence affects LSI scores. In the process we want to debunk another SEO myth: the claim made by some SEO "experts" that in order to make documents "LSI-friendly, -ready or -compliant" they must be stuffed with synonyms or related terms.

We assume that readers have assimilated the previous tutorials. If you haven't done so, please STOP AND READ THEM first, since we will be reusing concepts and examples already discussed.

You might also find useful the following fast tracks:

LSI Keyword Research - A Fast Track Tutorial
Latent Semantic Indexing (LSI) Fast Track Tutorial
Singular Value Decomposition (SVD) Fast Track Tutorial

These are designed to serve as quick references for readers of this series.

Revisiting the Uk Matrix

As mentioned before, rows of Uk hold term vector coordinates. Thus, keyword research can be conducted with LSI by computing term-term cosine similarities.

Luckily, in the example used in Part 4 we worked with three documents and ended up with a two-dimensional space, so we can visualize the vectors. These are shown in Figure 1. For more than three dimensions a visual representation is not possible. We have included the query vector (gold silver truck, in red) to simplify the discussion.

Figure 1. Revisiting the Uk Matrix.

* See important footnote

Note how some terms end up grouped in the reduced space. Some vectors end up completely superimposed and are not shown.
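
Readers who want to reproduce the coordinates behind Figure 1 can do so with the sketch below. It builds the raw term count matrix from Part 4 (terms listed alphabetically, documents d1-d3 as columns) and reuses the svd_spaces helper from the earlier sketch; exact values and signs may differ from the figure depending on the SVD routine and its sign convention.

```python
import numpy as np

terms = ["a", "arrived", "damaged", "delivery", "fire", "gold",
         "in", "of", "shipment", "silver", "truck"]

# Raw term counts for d1, d2, d3 (columns), as in Part 4 of this series.
A = np.array([
    [1, 1, 1],   # a
    [0, 1, 1],   # arrived
    [1, 0, 0],   # damaged
    [0, 1, 0],   # delivery
    [1, 0, 0],   # fire
    [1, 0, 1],   # gold
    [1, 1, 1],   # in
    [1, 1, 1],   # of
    [1, 0, 1],   # shipment
    [0, 2, 0],   # silver
    [0, 1, 1],   # truck
])

Uk, Sk, Vk = svd_spaces(A, 2)   # rank-2 reduction, as in Part 4

# Two-dimensional term coordinates: the points plotted in Figure 1.
for term, (x, y) in zip(terms, Uk):
    print(f"{term:10s} {x: .4f} {y: .4f}")
```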

Now that we have grouped terms, they can be put to several uses: for instance, in new documents, in ads, or to formulate new queries. Terms close to the query vector can also be used to expand or refine the query.

Note that none of these terms are synonyms. We have selected this example to debunk another SEO myth.

Another SEO Myth Debunked: Synonym Stuffing

Let's revisit Figure 1 and the original documents:

  • d1: Shipment of gold damaged in a fire.
  • d2: Delivery of silver arrived in a silver truck.
  • d3: Shipment of gold arrived in a truck.

When we look at Figure 1, the first SEO misconception that gets debunked is that LSI groups terms because they happen to maintain a synonymity association. Clearly this is not the case.

One could argue that gold is more related to silver than to shipment. After all, both can be used as adjectives (both are colors) or nouns (both are metals). Why, then, do gold and shipment form a two-term cluster in this example? The term-document matrix (A) reveals why: they co-occur in d1 and d3, but not in d2.

Also note that the vectors associated with silver and delivery are superimposed. A shows them co-occurring in d2 and being mutually dependent; i.e., one occurs whenever the other occurs.

We can also identify a by-product, or direct consequence, of using a primitive weighting scheme like the Term Count Model: term repetition affects the length of vectors. In d2, silver occurs twice and delivery once. This explains why the lengths of these term vectors stand in a 2:1 ratio.

But wait: there is more.

Damaged and fire end up clustered since they co-occur once in d1, but not in d2 or d3. Arrived and truck are clustered and co-occur once in d2 and d3, but not in d1. The stopwords a, in, and of co-occur once in d1, d2 and d3 and are also clustered by the LSI algorithm. Certainly, these stopwords are not synonyms. In all these cases we are dealing with what is known as first-order co-occurrence.

The case of second-order co-occurrence, that is, two terms not co-occurring with each other while co-occurring with a third term, is also clear. For instance, gold and silver do not co-occur in any of the three documents. However, they co-occur with truck as follows:

  1. in d3, gold and truck co-occur, but silver doesn't.
  2. in d2, silver and truck co-occur, but gold doesn't.

It is the presence of these first- and second-order co-occurrence paths that is at the heart of LSI and makes the technique work --not the fact that terms happen to maintain a synonymity or relatedness association. So, where does this "synonym myth" come from?
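
These first- and second-order paths can be enumerated directly from the term-document matrix. The sketch below is a rough illustration (reusing terms and A from the earlier sketch, with our own variable names): first-order co-occurrence is read off the binary term-term matrix, and a second-order relationship is flagged whenever two terms never co-occur directly but both co-occur with some third term.

```python
import numpy as np

B = (A > 0).astype(int)      # binary incidence: term i appears in document j
C1 = B @ B.T                 # C1[i, j] = number of documents where i and j co-occur
np.fill_diagonal(C1, 0)

# Second-order: i and j never co-occur directly, yet both co-occur with a third term.
C2 = ((C1 @ C1 > 0) & (C1 == 0)).astype(int)
np.fill_diagonal(C2, 0)

g, s_, t = terms.index("gold"), terms.index("silver"), terms.index("truck")
print(C1[g, s_])             # 0 -> gold and silver never co-occur directly
print(C1[g, t], C1[s_, t])   # 1 1 -> each co-occurs with truck
print(C2[g, s_])             # 1 -> a second-order co-occurrence path exists
```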

In the early LSI papers the role of first- and higher-order co-occurrence patterns was mentioned, but not fully addressed. These papers overemphasized the role of LSI as a synonym discovery technique.

It so happens that synonyms are a special class of tokens that do not tend to occur together, but do tend to co-occur in similar contexts (neighboring terms), which is precisely the higher-order co-occurrence phenomenon called second-order co-occurrence. The reverse is not necessarily true; not all terms with a second-order co-occurrence relationship are synonyms. Think of this in terms of the following analogy:

Dogs are four-legged animals, but not all four-legged animals are dogs.

It appears that search marketers looked at only one side of the issue and then arrived at a fallacious conclusion. This might explain why many of them tend to misquote outdated papers and even suggest that in order to make documents "LSI-friendly" these should be stuffed with synonyms and related terms.

There is no such thing as "LSI-Friendly" documents

Heavy use of synonyms and related terms in copy has nothing to do with LSI.

At this DigitalPoint thread I explained that the use of synonyms and related terms is a common-sense practice one should adopt to improve copy style, not something one should do because of LSI.

Some SEOs are giving the wrong advice by saying that one should use synonyms and related terms under the pretense, or mistaken thesis, that this will make a document "LSI-friendly". In fact, when one thinks it through, there is no such thing as making documents "LSI-friendly". This is another SEO myth.

The great thing about phenomena taking place at a global level, like co-occurrence and IDF (inverse document frequency), is that the chances for end users to manipulate them are close to nada, zero, zip, nothing.

In LSI, co-occurrence (especially second-order co-occurrence) is responsible for the LSI scores assigned to terms, not the nature of the terms or whether they happen to be synonyms or related terms. In the early LSI papers this was not fully addressed and emphasis was given to synonyms. Why?

Because the documents selected to conduct those experiments happened to contain synonyms and related terms. It was thought that somehow synonymity associations were responsible for the clustering phenomenon. The fact is that this was a direct result of the co-occurrence patterns present in the LSI matrix.

Two studies (2, 3) have explained the role of co-occurrence patterns in the LSI matrix, although they differ a bit in some of their findings. It seems that SEOs are still quoting the first LSI papers from the late eighties and early nineties, and in the process some have stretched that old research in order to better market whatever they sell.

When LSI is applied to a term-document matrix representing a collection of documents in the zillions, the co-occurrence phenomenon that affects the LSI scores becomes a global effect, occurring between documents across the collection.

Thus, the only way end users (e.g. SEOs) could influence the LSI scores is if they could access and control the content of all the documents in the matrix, or launch a coordinated spam attack on the entire collection. The latter would be the case of a spammer trying to get an LSI-based search engine to index billions of documents (to name a quantity) he/she has created.

If an end user or researcher wants to understand and manipulate the effect of co-occurrence in a single document, he/she would need to deconstruct that document, build a term-passage matrix for it, apply LSI to that matrix, and then experiment by manipulating individual terms. Whatever the results, they will only be valid for the universe represented by that matrix; that is, for that and only that document.

If such a document is then submitted to an LSI-based search engine, that local effect simply vanishes; global co-occurrence "takes over" and spreads throughout the collection, forming the corresponding connectivity paths that eventually force a redistribution of term weights.

Consequently, SEOs that sell this idea of making documents "LSI-friendly", "LSI-ready" or "LSI-compliant" --like the firms sending emails that read "is your site LSI optimized?" or "we can make your documents LSI-valid", or those that promote the notion of "LSI and Link Popularity"-- end up exposed for what they are and for how much they know about search engines. The sad thing is that these illusion sellers find their way via search engine conferences (SES), blogs and forums to deceive the industry with such blogonomies. In the process they give a black eye to the rest of the ethical SEOs/SEMs before the IR community, reinforcing the widespread perception that search marketers are a bunch of spammers or unscrupulous sales people. BTW, here are Two More LSI Blogonomies.

In the next sections we discuss this in more detail. In particular, we want to explain why some terms gain or lose weight and how the first- and second-order co-occurrence paths present in the term-term LSI matrix spread throughout a collection.

Why does d2 score higher than d3?

Revisiting the original documents:

  • d2: Delivery of silver arrived in a silver truck.
  • d3: Shipment of gold arrived in a truck.

The query consists of three terms: gold silver truck.

Note that d2 and d3 both match two of these and miss one query term. d2 misses gold and d3 misses silver. Evidently,

  1. the term missed by d3 (silver) is repeated twice in d2, while the term missed by d2 (gold) occurs only once in d3.
  2. d2 mentions delivery, which co-occurs with silver and truck. Its vector also partially overlaps the silver vector. Note that the vectors for delivery, silver and truck are all close to the query vector. d3, in contrast, mentions shipment, but this term occurs with gold and not with silver or truck. This explains why its vector is far away from the query vector.

This suggests that terms co-occurring with similar neighboring terms are responsible for the observed LSI scores. Whether these happen to be synonyms or not is not the determining factor.
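
This can be checked numerically by folding the query into the reduced space and scoring the documents, as was done in Part 4. The sketch below reuses terms, Uk, Sk, Vk and the cosine helper from the earlier sketches; the exact similarity values depend on the weighting scheme and implementation, but the rank order should come out d2 > d3 > d1.

```python
import numpy as np

# Query "gold silver truck" as a raw count vector over the same term ordering.
q = np.zeros(len(terms))
for w in ("gold", "silver", "truck"):
    q[terms.index(w)] = 1

# Fold the query into the rank-2 space: qk = q^T Uk Sk^-1 (as in Part 4).
qk = q @ Uk @ np.linalg.inv(Sk)

# Query-document cosine similarities in the reduced space.
scores = {f"d{j + 1}": cosine(qk, Vk[j]) for j in range(Vk.shape[0])}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```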

Why does d3 score higher than d1?

A similar reasoning can be used to compare d3 and d1. Revisiting the original documents:

  • d1: Shipment of gold damaged in a fire.
  • d3: Shipment of gold arrived in a truck.

We can see that d1 mentions damaged and fire. These terms co-occur with gold, but never with silver or truck. Note that their vectors are superimposed and far away from the query vector.

In the case of d3, this document mentions arrived and truck. Arrived co-occurs with silver, which is not explicitly present in the document but is part of the query. The document also mentions truck, which definitely is in the query. Arrived and truck also co-occur, and their vectors are closer to the query vector. It is then not surprising to find d3 scoring higher than d1.

Let's look now at delivery and shipment. It can be argued that delivery is more related to shipment than to silver. However, delivery and shipment do not co-occur, and their vectors end up at opposite extremes of the query vector. Again, co-occurrence, and not the nature of the terms, is the determining factor.

A Quantitative Interpretation using Co-Occurrence

So far we have used co-occurrence arguments to provide a qualitative explanation for the observed LSI scores. Let's now reinforce our main arguments with a quantitative description of the problem.

In Figure 2 we have recomputed A as a Rank 2 Approximation.

Figure 2. Truncated Matrix for the Rank 2 Approximation.

Note that LSI has readjusted the term weights of matrix A, which are now either increased or lowered in the truncated matrix Ak. Let us underscore that this redistribution is not based on the nature of the terms --whether they happen to be synonyms or related terms-- but on the type of co-occurrence between them.

To illustrate, let's take a new look at d2 and d3 using Figure 2.

  • d2: Delivery of silver arrived in a silver truck.
  • d3: Shipment of gold arrived in a truck.

The word silver did not appear in d3, but because d3 did contain arrived and truck, and these co-occur with silver, its new weight in d3 is 0.3960. This is an example of second-order co-occurrence. By contrast, the value of 1 for arrived and truck, which each appeared once in d3, has been replaced by 0.5974, reflecting the fact that these terms co-occur with a word not present in d3. This represents a loss of contextuality.
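
The truncated matrix itself can be reconstructed as Ak = Uk Sk VkT. The sketch below does this for the example (again reusing the arrays defined earlier); rounding and weighting conventions may produce figures that differ slightly from Figure 2, but the pattern described above --silver picking up a positive weight in d3 while arrived and truck drop below 1-- should reproduce.

```python
import numpy as np

Ak = Uk @ Sk @ Vk.T      # rank-2 approximation of the term-document matrix
d3 = 2                   # column index of d3

for w in ("silver", "arrived", "truck"):
    print(f"{w:8s} weight in d3: {Ak[terms.index(w), d3]: .4f}")
# silver was absent from d3 (weight 0 in A) yet receives a positive weight in Ak,
# while arrived and truck fall below their original count of 1.
```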

A similar reasoning can be used with d1 and d3.

  • d1: Shipment of gold damaged in a fire.
  • d3: Shipment of gold arrived in a truck.

The words arrived and truck did not appear in d1, but they co-occur with the stopwords a, of, and in, which appear in all three documents, so their weights in d1 are now 0.3003.

This redistribution of term weights (addition and subtraction) occurring in the truncated LSI matrix is better understood with a side-by-side comparison of terms, as illustrated in Figure 3.

Figure 3. Redistribution of Weights.

Inspect this figure thoroughly. Any little change in a term or terms in any given document will provoke a redistribution of term weights across the entire collection. There is no way for end users to predict that redistribution in other documents of the collection. Since end users don't have access to, and don't have control over, other documents of the collection, and don't know when or how someone (or how many) across the entire collection (or the Web) will make changes to a given document, it would be impossible to predict the final redistribution of weights caused by the LSI algorithm, and the subsequent ranking, at any given point in time. And we still don't know the specific implementation of LSI used by search engines like Google or Yahoo!; e.g., how many dimensions were used to truncate the original matrix, which term weight scoring scheme was used to populate the initial term-document matrix, and so forth. Certainly no current search engine uses raw frequencies to assign weights to terms and then uses these to rank documents.

From the common sense side, this is why we say that there is no such thing as "LSI-friendly" or "LSI-compliant" documents. How could you predict the redistribution of term weights in the SVD matrix? Exactly.

Therefore, SEO firms claiming that they can make "LSI-friendly" documents, or that are selling "LSI Tools", "LSI Videos", "LSI link popularity", and other forms of "LSI-based" services, are deceiving the public and prospective clients. Stay away from their businesses and from whatever they claim in any SEO book, forum, or blog, or in any search engine conference & expo or "advanced" marketing seminar.

Most of these folks don't even know how to compute the SVD of a simple matrix and are just about selling something or promoting their image as SEO "experts". Each time I meet an IR researcher or colleague from the academic world and discuss SVD, they simply laugh out loud (LOL) at how these search engine marketers interpret or "explain" LSI and other IR algorithms.

Rant aside, in Figure 3 we have computed row and column totals and a grand total. These deviate from the expected totals. How could we interpret such deviations?

Well, the original term-document data was described by a matrix of rank 3 and embedded in a 3-dimensional space. When we applied the SVD algorithm we removed one dimension and obtained a Rank 2 Approximation, so the truncated data was embedded in a 2-dimensional space. We assumed that the dimension removed was noisy, so any fluctuation (increment or decrement) occurring in this dimension was taken as noise. For all practical purposes the difference 22 - 21.9611 = 0.0389 can be taken as the net change caused by the SVD algorithm after removing the noise.

Note that only four terms gain weights. These are:

  • arrived: 2.0305 - 2 = + 0.0305
  • gold: 2.0119 - 2 = + 0.0119
  • shipment: 2.0119 - 2 = + 0.0119
  • truck: 2.0305 - 2 = + 0.0305

for an accumulated gain of 0.0848 weight units. All these terms are mentioned in d3, yet the rank order was d2 > d3 > d1, for the reasons previously mentioned. This makes sense since ranks were assigned by comparing query-document cosine similarities, not by net weight changes. Surprise: net weight gains did not necessarily make d3 more relevant than d2! Say "adios" to term manipulation efforts.
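
These totals are easy to verify from the truncated matrix. The check below is ours, not the author's spreadsheet (it reuses A, Ak and terms from the earlier sketches); small numerical differences from Figure 3 are to be expected, but the grand totals and the short list of terms with a net gain should match.

```python
import numpy as np

print(A.sum())               # grand total of the original matrix: 22
print(round(Ak.sum(), 4))    # grand total of the truncated matrix: ~21.96

# Per-term change in total weight (row sums of Ak minus row sums of A).
for term, delta in zip(terms, Ak.sum(axis=1) - A.sum(axis=1)):
    if delta > 0:
        print(f"{term:10s} gains {delta:+.4f}")
# Only arrived, gold, shipment, and truck show a net gain.
```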

Beyond Plain Co-Occurrence: Contextual Co-Occurrence

In this exercise we have shown that what accounts for the redistribution of term weights in the truncated LSI matrix is a co-occurrence phenomenon, and not the nature of the terms. In particular, we have limited the discussion to co-occurrence of the first and second kind. The jury is still out as to whether higher orders (third, fourth, fifth, and so forth) play a significant role. At least two studies provide contradictory results (2, 3).

Let us stress that these studies, the above example, and the results herein discussed were obtained under controlled conditions, free from ads and spam. Applying these results to Web collections is far more difficult. This is due to the fact that Web documents, especially long documents, tend to discuss different topics and are full of vested interests and alliances of all kinds. Such documents can be full of ads, headlines, news feeds, etc.

Thus, the mere fact that two terms happen to be found in the same document is not evidence of similarity or relatedness, and a simplistic co-occurrence approach is not recommended. This is why the extraction of terms from commercial documents by means of "LSI-based" tools is a questionable practice.

It should also be pointed out that the mere fact that two terms happen to be synonyms, or happen to co-occur in a document, is not evidence of contextuality. One must look at terms co-occurring with similar neighboring terms. This is an ongoing research area we are looking into.

Conclusion

This tutorial series presents introductory material on Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI). We have shown how SVD is used in LSI. In the process, several SEO myths have been debunked and a few fast track tutorials have been provided.

During our journey toward a better understanding of LSI we searched for clues on what makes LSI work. We have shown that co-occurrence seems to be at the heart of LSI, especially when terms co-occur in the same context (with similar neighboring terms). At the time of writing, the role of higher order co-occurrences is still unclear. We are currently looking into a geometrical framework for understanding the redistribution of term weights.

* BlueBit Important Upgrade

Note 1 After this tutorial was written, BlueBit upgraded its SVD calculator and now returns the transpose matrix VT. We became aware of this today, 10/21/06. This BlueBit upgrade doesn't change the calculations, anyway. Just remember that if you are using VT and want to go back to V, just switch rows for columns.

Note 2 BlueBit also now uses a different subroutine and a different sign convention, which flips the coordinates of the figures given above. Absolutely none of these changes affect the final calculations and main findings of the example given in this tutorial. Why?

Tutorial Review

  1. Rework Figure 3, but this time without truncating the term-document matrix. Explain any deviation in the observed totals.
  2. Explain why some terms receive negative weights in the example given above.
  3. Also in the example given above, explain why some terms end up weighing more than in the original matrix.

References

  1. LSI Keyword Research - A Fast Track Tutorial, E. Garcia (2006).
  2. Understanding LSI via the Truncated Term-term Matrix, Thesis, Regis Newo, Germany (2005).
  3. A Framework for Understanding Latent Semantic Indexing (LSI) Performance, April Kontostathis and William Pottenger, Lehigh University.
