1. ex6.m

%% Machine Learning Online Class
%  Exercise 6 | Support Vector Machines
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     gaussianKernel.m
%     dataset3Params.m
%     processEmail.m
%     emailFeatures.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% =============== Part 1: Loading and Visualizing Data ================
%  We start the exercise by first loading and visualizing the dataset.
%  The following code will load the dataset into your environment and plot
%  the data.
%

fprintf('Loading and Visualizing Data ...\n')

% Load from ex6data1:
% You will have X, y in your environment
load('ex6data1.mat');

% Plot training data
plotData(X, y);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ==================== Part 2: Training Linear SVM ====================
%  The following code will train a linear SVM on the dataset and plot the
%  decision boundary learned.
%

% Load from ex6data1:
% You will have X, y in your environment
load('ex6data1.mat');

fprintf('\nTraining Linear SVM ...\n')

% You should try to change the C value below and see how the decision
% boundary varies (e.g., try C = 1000)
C = 1;
model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
visualizeBoundaryLinear(X, y, model);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =============== Part 3: Implementing Gaussian Kernel ===============
%  You will now implement the Gaussian kernel to use
%  with the SVM. You should complete the code in gaussianKernel.m
%
fprintf('\nEvaluating the Gaussian Kernel ...\n')

x1 = [1 2 1]; x2 = [0 4 -1]; sigma = 2;
sim = gaussianKernel(x1, x2, sigma);

fprintf(['Gaussian Kernel between x1 = [1; 2; 1], x2 = [0; 4; -1], sigma = 2 :' ...
         '\n\t%f\n(this value should be about 0.324652)\n'], sim);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Visualizing Dataset 2 ================
%  The following code will load the next dataset into your environment and
%  plot the data.
%

fprintf('Loading and Visualizing Data ...\n')

% Load from ex6data2:
% You will have X, y in your environment
load('ex6data2.mat');

% Plot training data
plotData(X, y);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ========== Part 5: Training SVM with RBF Kernel (Dataset 2) ==========
%  After you have implemented the kernel, we can now use it to train the
%  SVM classifier.
%
fprintf('\nTraining SVM with RBF Kernel (this may take 1 to 2 minutes) ...\n');

% Load from ex6data2:
% You will have X, y in your environment
load('ex6data2.mat');

% SVM Parameters
C = 1; sigma = 0.1;

% We set the tolerance and max_passes lower here so that the code will run
% faster. However, in practice, you will want to run the training to
% convergence.
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
visualizeBoundary(X, y, model);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =============== Part 6: Visualizing Dataset 3 ================
%  The following code will load the next dataset into your environment and
%  plot the data.
%

fprintf('Loading and Visualizing Data ...\n')

% Load from ex6data3:
% You will have X, y in your environment
load('ex6data3.mat');

% Plot training data
plotData(X, y);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ========== Part 7: Training SVM with RBF Kernel (Dataset 3) ==========
%  This is a different dataset that you can use to experiment with. Try
%  different values of C and sigma here.
%

% Load from ex6data3:
% You will have X, y in your environment
load('ex6data3.mat');

% Try different SVM Parameters here
[C, sigma] = dataset3Params(X, y, Xval, yval);

% Train the SVM
model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
visualizeBoundary(X, y, model);

fprintf('Program paused. Press enter to continue.\n');
pause;
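Part 2 of ex6.m suggests experimenting with larger values of C. As a minimal sketch, reusing the same helpers the script already calls (svmTrain, linearKernel, visualizeBoundaryLinear), you can retrain on dataset 1 with a large C and compare the resulting boundary against the C = 1 plot:

% Sketch: retrain on dataset 1 with a much larger C, as the Part 2 comment suggests.
load('ex6data1.mat');                      % gives X, y

C = 1000;                                  % a large C penalizes misclassification heavily
model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
visualizeBoundaryLinear(X, y, model);      % with a large C the boundary tends to fit the outlier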

2. gaussianKernel.m

function sim = gaussianKernel(x1, x2, sigma)
%RBFKERNEL returns a radial basis function kernel between x1 and x2
%   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
%   and returns the value in sim
%

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% You need to return the following variables correctly.
sim = 0; % 1*1

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the similarity between x1
%               and x2 computed using a Gaussian kernel with bandwidth
%               sigma
%
%

square_diff = sum((x1 - x2) .^ 2);
sim = exp(-square_diff / 2 / (sigma^2));

% =============================================================

end
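To sanity-check the function, the test case from Part 3 of ex6.m can be run directly; the expected value follows from the kernel formula:

% Sanity check with the values used in ex6.m Part 3.
% ||x1 - x2||^2 = 1 + 4 + 4 = 9, so sim = exp(-9 / (2 * 2^2)), which is about 0.324652
x1 = [1 2 1]; x2 = [0 4 -1]; sigma = 2;
sim = gaussianKernel(x1, x2, sigma);
fprintf('sim = %f (expected about 0.324652)\n', sim);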

3. dataset3Params.m

function [C, sigma] = dataset3Params(X, y, Xval, yval)
%EX6PARAMS returns your choice of C and sigma for Part 3 of the exercise
%where you select the optimal (C, sigma) learning parameters to use for SVM
%with RBF kernel
%   [C, sigma] = EX6PARAMS(X, y, Xval, yval) returns your choice of C and
%   sigma. You should complete this function to return the optimal C and
%   sigma based on a cross-validation set.
%

% You need to return the following variables correctly.
C = 1; % 1*1
sigma = 0.3; % 1*1

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the optimal C and sigma
%               learning parameters found using the cross validation set.
%               You can use svmPredict to predict the labels on the cross
%               validation set. For example,
%                   predictions = svmPredict(model, Xval);
%               will return the predictions on the cross validation set.
%
%  Note: You can compute the prediction error using
%        mean(double(predictions ~= yval))
%

set_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
results = [];
long = numel(set_values);
for i = 1:long
    for j = 1:long
        C_temp = set_values(i);
        sigma_temp = set_values(j);
        model = svmTrain(X, y, C_temp, @(x1, x2) gaussianKernel(x1, x2, sigma_temp));
        predictions = svmPredict(model, Xval);
        pre_error = mean(double(predictions ~= yval));
        results = [results; C_temp, sigma_temp, pre_error];
    end
end

[smallest_error, idx] = min(results(:, 3));
C = results(idx, 1);
sigma = results(idx, 2);
% =========================================================================

end
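The results matrix above makes it easy to inspect the validation error for every (C, sigma) pair. An equivalent sketch that only tracks the running best pair (same svmTrain/svmPredict helpers, same 8 x 8 grid) would be:

% Equivalent grid search that keeps only the best (C, sigma) seen so far.
set_values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30];
best_error = Inf;
for C_temp = set_values
    for sigma_temp = set_values
        model = svmTrain(X, y, C_temp, @(x1, x2) gaussianKernel(x1, x2, sigma_temp));
        pre_error = mean(double(svmPredict(model, Xval) ~= yval));
        if pre_error < best_error
            best_error = pre_error;
            C = C_temp;
            sigma = sigma_temp;
        end
    end
end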

4. processEmail.m

function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices
%   word_indices = PROCESSEMAIL(email_contents) preprocesses
%   the body of an email and returns a list of indices of the
%   words contained in the email.
%

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Look for any expression that starts with < and ends with >, does not
% contain any < or > inside the tag, and replace it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLS
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');

% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str));
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look up the word in the dictionary and add to word_indices if
    % found
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to add the index of str to
    %               word_indices if it is in the vocabulary. At this point
    %               of the code, you have a stemmed word from the email in
    %               the variable str. You should look up str in the
    %               vocabulary list (vocabList). If a match exists, you
    %               should add the index of the word to the word_indices
    %               vector. Concretely, if str = 'action', then you should
    %               look up the vocabulary list to find where in vocabList
    %               'action' appears. For example, if vocabList{18} =
    %               'action', then, you should add 18 to the word_indices
    %               vector (e.g., word_indices = [word_indices ; 18]; ).
    %
    % Note: vocabList{idx} returns the word with index idx in the
    %       vocabulary list.
    %
    % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
    %       str2). It will return 1 only if the two strings are equivalent.
    %

    %%%%%%%%%%%%%%%%%%%%% NOT CORRECT %%%%%%%%%%%%%%%%%%%%%
    % str2 = str(:);
    % long_dic = numel(vocabList2);
    % long_email = numel(str2);
    % for i = 1:long_email
    %     for j = 1:long_dic
    %         if 1 == strcmp(str2(i), vocabList2(j))
    %             word_indices = [word_indices ; j];
    %             break;
    %         end % if-end
    %     end
    % end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

    % CORRECT
    word_indices = [word_indices, find(ismember(vocabList, str))];

    % =============================================================

    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
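The instruction comments also point at strcmp as an option; a minimal sketch of the same vocabulary lookup written as an explicit loop (equivalent to the find/ismember one-liner used above, assuming each word appears once in the vocabulary list) would be:

% Loop-based lookup over the vocabulary, equivalent to
% word_indices = [word_indices, find(ismember(vocabList, str))];
for idx = 1:numel(vocabList)
    if strcmp(vocabList{idx}, str)
        word_indices = [word_indices; idx];
        break;   % the word can only match one vocabulary entry
    end
end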

5. emailFeatures.m

function x = emailFeatures(word_indices)
%EMAILFEATURES takes in a word_indices vector and produces a feature vector
%from the word indices
%   x = EMAILFEATURES(word_indices) takes in a word_indices vector and
%   produces a feature vector from the word indices.

% Total number of words in the dictionary
n = 1899;

% You need to return the following variables correctly.
x = zeros(n, 1); % n*1

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return a feature vector for the
%               given email (word_indices). To help make it easier to
%               process the emails, we have already pre-processed each
%               email and converted each word in the email into an index in
%               a fixed dictionary (of 1899 words). The variable
%               word_indices contains the list of indices of the words
%               which occur in one email.
%
%               Concretely, if an email has the text:
%
%                  The quick brown fox jumped over the lazy dog.
%
%               Then, the word_indices vector for this text might look
%               like:
%
%                   60  100   33   44   10     53  60  58   5
%
%               where, we have mapped each word onto a number, for example:
%
%                   the   -- 60
%                   quick -- 100
%                   ...
%
%              (note: the above numbers are just an example and are not the
%               actual mappings).
%
%              Your task is to take one such word_indices vector and construct
%              a binary feature vector that indicates whether a particular
%              word occurs in the email. That is, x(i) = 1 when word i
%              is present in the email. Concretely, if the word 'the' (say,
%              index 60) appears in the email, then x(60) = 1. The feature
%              vector should look like:
%
%              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
%
%

x([word_indices]) = 1;

% =========================================================================

end
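As a minimal end-to-end sketch (assuming the other exercise helpers processEmail, getVocabList and porterStemmer are on the path, and using a made-up snippet of email text), a raw message can be turned into the 1899-dimensional binary feature vector like this:

% Minimal sketch: raw text -> word indices -> binary feature vector.
email_text   = 'Anyone knows how much it costs to host a web portal? Visit http://www.example.com';
word_indices = processEmail(email_text);     % indices into the 1899-word vocabulary
x            = emailFeatures(word_indices);  % 1899 x 1 vector of 0s and 1s
fprintf('Length of feature vector: %d\n', length(x));
fprintf('Number of non-zero entries: %d\n', sum(x > 0));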

6. Submit results
