Only the graded portion was completed.

Operations on word vectors

Welcome to your first assignment of this week!

Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.

After this assignment you will be able to:

  • Load pre-trained word vectors, and measure similarity using cosine similarity
  • Use word embeddings to solve word analogy problems such as Man is to Woman as King is to __.
  • Modify word embeddings to reduce their gender bias

Let's get started! Run the following cell to load the packages you will need.

In [ ]:
import numpy as np
from w2v_utils import *

Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.

In [ ]:
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')

You've loaded:

  • words: set of words in the vocabulary.
  • word_to_vec_map: dictionary mapping words to their GloVe vector representation.

You've seen that one-hot vectors do not do a good job capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.

1 - Cosine similarity

To measure how similar two words are, we need a way to measure the degree of similarity between the two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:

$$\text{CosineSimilarity}(u, v) = \frac{u \cdot v}{\|u\|_2 \, \|v\|_2} = \cos(\theta) \tag{1}$$

where $u \cdot v$ is the dot product (or inner product) of the two vectors, $\|u\|_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.
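For example, with the toy vectors $u = (1, 0)$ and $v = (1, 1)$, formula (1) gives

$$\text{CosineSimilarity}(u, v) = \frac{1 \cdot 1 + 0 \cdot 1}{\sqrt{1^2 + 0^2}\,\sqrt{1^2 + 1^2}} = \frac{1}{\sqrt{2}} \approx 0.707 = \cos(45°),$$

matching the 45° angle between the two vectors.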

Figure 1: The cosine of the angle between two vectors is a measure of how similar they are

Exercise: Implement the function cosine_similarity() to evaluate similarity between word vectors.

Reminder: The norm of $u$ is defined as $\|u\|_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$.
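As a side note, NumPy's built-in np.linalg.norm computes this same L2 norm, so it makes a handy sanity check for your implementation:

In [ ]:
import numpy as np

u = np.array([3.0, 4.0])
# Norm from the definition above: sqrt(3^2 + 4^2) = 5.0
print(np.sqrt(np.sum(np.square(u))))  # 5.0
# np.linalg.norm computes the same quantity
print(np.linalg.norm(u))              # 5.0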

In [ ]:
# GRADED FUNCTION: cosine_similarity

def cosine_similarity(u, v):
    """
    Cosine similarity reflects the degree of similarity between u and v

    Arguments:
        u -- a word vector of shape (n,)
        v -- a word vector of shape (n,)

    Returns:
        cosine_similarity -- the cosine similarity between u and v defined by the formula above.
    """

    ### START CODE HERE ###
    # Compute the dot product between u and v (≈1 line)
    dot = np.dot(u, v)
    # Compute the L2 norm of u (≈1 line)
    norm_u = np.sqrt(np.sum(np.square(u)))
    # Compute the L2 norm of v (≈1 line)
    norm_v = np.sqrt(np.sum(np.square(v)))
    # Compute the cosine similarity defined by formula (1) (≈1 line)
    cosine_similarity = dot / (norm_u * norm_v)
    ### END CODE HERE ###

    return cosine_similarity

In [ ]:
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))

Expected Output:

cosine_similarity(father, mother) = 0.890903844289
cosine_similarity(ball, crocodile) = 0.274392462614
cosine_similarity(france - paris, rome - italy) = -0.675147930817

After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around with the cosine similarity of other inputs will give you a better sense of how word vectors behave.
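For instance, here are a couple of extra pairs you might compare (these common words should all be present in the GloVe vocabulary):

In [ ]:
king = word_to_vec_map["king"]
queen = word_to_vec_map["queen"]
# Related words should score high; unrelated words should score low
print("cosine_similarity(king, queen) = ", cosine_similarity(king, queen))
print("cosine_similarity(king, crocodile) = ", cosine_similarity(king, crocodile))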

2 - Word analogy task

In the word analogy task, we complete the sentence "a is to b as c is to __". An example is 'man is to woman as king is to queen'. In detail, we are trying to find a word d, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity.
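Concretely, the function you will implement below searches the whole vocabulary for the word whose vector best satisfies this relationship:

$$d = \arg\max_{w \in \text{vocabulary}} \; \text{CosineSimilarity}(e_b - e_a,\; e_w - e_c)$$

(skipping the input words $a$, $b$, and $c$ themselves).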

Exercise: Complete the code below to be able to perform word analogies!

In [ ]:
# GRADED FUNCTION: complete_analogy

def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
    """
    Performs the word analogy task as explained above: a is to b as c is to ____.

    Arguments:
        word_a -- a word, string
        word_b -- a word, string
        word_c -- a word, string
        word_to_vec_map -- dictionary that maps words to their corresponding vectors.

    Returns:
        best_word -- the word such that e_b - e_a is close to e_best_word - e_c, as measured by cosine similarity
    """

    # convert words to lower case
    word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()

    ### START CODE HERE ###
    # Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
    e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
    ### END CODE HERE ###

    words = word_to_vec_map.keys()
    max_cosine_sim = -100              # Initialize max_cosine_sim to a large negative number
    best_word = None                   # Initialize best_word with None, it will help keep track of the word to output

    # loop over the whole word vector set
    for w in words:
        # to avoid best_word being one of the input words, skip them.
        if w in [word_a, word_b, word_c]:
            continue

        ### START CODE HERE ###
        # Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c)  (≈1 line)
        cosine_sim = cosine_similarity(e_b - e_a, word_to_vec_map[w] - e_c)

        # If the cosine_sim is more than the max_cosine_sim seen so far,
        # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
        if cosine_sim > max_cosine_sim:
            max_cosine_sim = cosine_sim
            best_word = w
        ### END CODE HERE ###

    return best_word

Run the cell below to test your code; this may take 1-2 minutes.

In [ ]:
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
    print('{} -> {} :: {} -> {}'.format(*triad, complete_analogy(*triad, word_to_vec_map)))

Expected Output:

italy -> italian :: spain -> spanish
india -> delhi :: japan -> tokyo
man -> woman :: boy -> girl
small -> smaller :: large -> larger

Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: for example, you can try small->smaller as big->?, as in the quick probe below.
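A minimal way to test that failure case (the result depends on the loaded vectors, so the output here is not guaranteed):

In [ ]:
# The expected answer is 'bigger', but the algorithm often returns something else
triad = ('small', 'smaller', 'big')
print('{} -> {} :: {} -> {}'.format(*triad, complete_analogy(*triad, word_to_vec_map)))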

Congratulations!

You've come to the end of this assignment. Here are the main points you should remember:

  • Cosine similarity is a good way to compare the similarity between pairs of word vectors. (Though L2 distance works too; see the sketch after this list.)
  • For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.
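As a minimal illustration of the L2-distance alternative mentioned above, note that with L2 distance a smaller value means more similar, the opposite direction of cosine similarity:

In [ ]:
# L2 (Euclidean) distance as an alternative measure, reusing vectors loaded earlier
print("L2(father, mother)  = ", np.linalg.norm(father - mother))
print("L2(ball, crocodile) = ", np.linalg.norm(ball - crocodile))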

Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook.

Congratulations on finishing the graded portions of this notebook!
