1. ex7.m

%% Machine Learning Online Class
%  Exercise 7 | Principal Component Analysis and K-Means Clustering
%
%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  exercise. You will need to complete the following functions:
%
%     pca.m
%     projectData.m
%     recoverData.m
%     computeCentroids.m
%     findClosestCentroids.m
%     kMeansInitCentroids.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc

%% ================= Part 1: Find Closest Centroids ====================
%  To help you implement K-Means, we have divided the learning algorithm
%  into two functions -- findClosestCentroids and computeCentroids. In this
%  part, you should complete the code in the findClosestCentroids function.
%
fprintf('Finding closest centroids.\n\n');

% Load an example dataset that we will be using
load('ex7data2.mat');

% Select an initial set of centroids
K = 3; % 3 Centroids
initial_centroids = [3 3; 6 2; 8 5];

% Find the closest centroids for the examples using the
% initial_centroids
idx = findClosestCentroids(X, initial_centroids);

fprintf('Closest centroids for the first 3 examples: \n')
fprintf(' %d', idx(1:3));
fprintf('\n(the closest centroids should be 1, 3, 2 respectively)\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ===================== Part 2: Compute Means =========================
%  After implementing the closest centroids function, you should now
%  complete the computeCentroids function.
%
fprintf('\nComputing centroid means.\n\n');

%  Compute means based on the closest centroids found in the previous part.
centroids = computeCentroids(X, idx, K);

fprintf('Centroids computed after initial finding of closest centroids: \n')
fprintf(' %f %f \n', centroids');
fprintf('\n(the centroids should be\n');
fprintf('   [ 2.428301 3.157924 ]\n');
fprintf('   [ 5.813503 2.633656 ]\n');
fprintf('   [ 7.119387 3.616684 ]\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 3: K-Means Clustering ======================
%  After you have completed the two functions computeCentroids and
%  findClosestCentroids, you have all the necessary pieces to run the
%  K-Means algorithm. In this part, you will run the K-Means algorithm on
%  the example dataset we have provided.
%
fprintf('\nRunning K-Means clustering on example dataset.\n\n');

% Load an example dataset
load('ex7data2.mat');

% Settings for running K-Means
K = 3;
max_iters = 10;

% For consistency, here we set centroids to specific values
% but in practice you want to generate them automatically, such as by
% setting them to be random examples (as can be seen in
% kMeansInitCentroids).
initial_centroids = [3 3; 6 2; 8 5];

% Run K-Means algorithm. The 'true' at the end tells our function to plot
% the progress of K-Means
[centroids, idx] = runkMeans(X, initial_centroids, max_iters, true);
fprintf('\nK-Means Done.\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ============= Part 4: K-Means Clustering on Pixels ===============
%  In this exercise, you will use K-Means to compress an image. To do this,
%  you will first run K-Means on the colors of the pixels in the image and
%  then you will map each pixel onto its closest centroid.
%
%  You should now complete the code in kMeansInitCentroids.m
%
fprintf('\nRunning K-Means clustering on pixels from an image.\n\n');

%  Load an image of a bird
A = double(imread('bird_small.png'));

% If imread does not work for you, you can try instead
%   load ('bird_small.mat');

A = A / 255; % Divide by 255 so that all values are in the range 0 - 1

% Size of the image
img_size = size(A);

% Reshape the image into an Nx3 matrix where N = number of pixels.
% Each row will contain the Red, Green and Blue pixel values
% This gives us our dataset matrix X that we will use K-Means on.
X = reshape(A, img_size(1) * img_size(2), 3);

% Run your K-Means algorithm on this data
% You should try different values of K and max_iters here
K = 16;
max_iters = 10;

% When using K-Means, it is important to initialize the centroids
% randomly.
% You should complete the code in kMeansInitCentroids.m before proceeding
initial_centroids = kMeansInitCentroids(X, K);

% Run K-Means
[centroids, idx] = runkMeans(X, initial_centroids, max_iters);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 5: Image Compression ======================
%  In this part of the exercise, you will use the clusters of K-Means to
%  compress an image. To do this, we first find the closest clusters for
%  each example. After that, we map each pixel to the value of its
%  closest centroid.

fprintf('\nApplying K-Means to compress an image.\n\n');

% Find closest cluster members
idx = findClosestCentroids(X, centroids);

% Essentially, now we have represented the image X in terms of the
% indices in idx.

% We can now recover the image from the indices (idx) by mapping each pixel
% (specified by its index in idx) to the centroid value
X_recovered = centroids(idx,:);

% Reshape the recovered image into proper dimensions
X_recovered = reshape(X_recovered, img_size(1), img_size(2), 3);

% Display the original image
subplot(1, 2, 1);
imagesc(A);
title('Original');

% Display compressed image side by side
subplot(1, 2, 2);
imagesc(X_recovered)
title(sprintf('Compressed, with %d colors.', K));

fprintf('Program paused. Press enter to continue.\n');
pause;
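
runkMeans.m ships with the assignment, so it is not listed in this post. Its core simply alternates the two functions completed below (assignment step, then update step). A minimal sketch of that loop, with plotting omitted; the name runkMeansSketch is mine, not part of the assignment:

function [centroids, idx] = runkMeansSketch(X, initial_centroids, max_iters)
% Alternate cluster assignment and centroid update for max_iters rounds
K = size(initial_centroids, 1);
centroids = initial_centroids;
for iter = 1:max_iters
    idx = findClosestCentroids(X, centroids);  % assign each example to a centroid
    centroids = computeCentroids(X, idx, K);   % move centroids to cluster means
end
end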

2. findClosestCentroids.m

function idx = findClosestCentroids(X, centroids)
%FINDCLOSESTCENTROIDS computes the centroid memberships for every example
%   idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
%   in idx for a dataset X where each row is a single example. idx = m x 1
%   vector of centroid assignments (i.e. each entry in range [1..K])
%

% Set K
K = size(centroids, 1);   % number of centroids

% You need to return the following variables correctly.
idx = zeros(size(X,1), 1);   % m x 1

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every example, find its closest centroid, and store
%               the index inside idx at the appropriate location.
%               Concretely, idx(i) should contain the index of the centroid
%               closest to example i. Hence, it should be a value in the
%               range 1..K
%
% Note: You can use a for-loop over the examples to compute this.
%

m = size(X, 1);   % number of examples
for i = 1:m
    dist = zeros(K, 1);   % squared distance from example i to each centroid
    for j = 1:K
        dist(j) = sum((X(i, :) - centroids(j, :)) .^ 2);
    end
    [min_dist, min_idx] = min(dist);
    idx(i) = min_idx;
end

% =============================================================

end
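
The double for-loop above follows the hint in the template. As an aside, the assignment step can also be vectorized via the expansion ||x - c||^2 = ||x||^2 + ||c||^2 - 2*x'*c. A sketch, not the submitted solution; it relies on implicit expansion (Octave, or MATLAB R2016b and later):

% D(i, j) = squared distance from example i to centroid j, computed at once:
% sum(X.^2, 2) is m x 1, sum(centroids.^2, 2)' is 1 x K, so D is m x K.
D = sum(X .^ 2, 2) + sum(centroids .^ 2, 2)' - 2 * X * centroids';
[~, idx] = min(D, [], 2);   % row-wise minimum gives the closest centroid index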

3. computeCentroids.m

function centroids = computeCentroids(X, idx, K)
%COMPUTECENTROIDS returns the new centroids by computing the means of the
%data points assigned to each centroid.
%   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
%   computing the means of the data points assigned to each centroid. It is
%   given a dataset X where each row is a single data point, a vector
%   idx of centroid assignments (i.e. each entry in range [1..K]) for each
%   example, and K, the number of centroids. You should return a matrix
%   centroids, where each row of centroids is the mean of the data points
%   assigned to it.
%

% Useful variables
[m n] = size(X);   % m examples, n features

% You need to return the following variables correctly.
centroids = zeros(K, n);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every centroid and compute mean of all points that
%               belong to it. Concretely, the row vector centroids(i, :)
%               should contain the mean of the data points assigned to
%               centroid i.
%
% Note: You can use a for-loop over the centroids to compute this.
%

for i = 1:K
    idx_set = find(i == idx);   % indices of the examples assigned to centroid i
    ck = numel(idx_set);        % number of assigned examples
    if (0 ~= ck)
        % sum along dimension 1 so a single-member cluster is handled correctly
        cen_sum = sum(X(idx_set, :), 1);
        centroids(i, :) = cen_sum / ck;
    end
end

% =============================================================

end
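
Since K is small, the loop over centroids is perfectly adequate. For reference, the same computation can be vectorized with an indicator matrix; a drop-in sketch for the loop above, not the submitted solution (the ./ step also needs implicit expansion):

S = full(sparse((1:m)', idx(:), 1, m, K));  % S(i, k) = 1 iff example i is in cluster k
counts = sum(S, 1)';                        % K x 1 vector of cluster sizes
counts(counts == 0) = 1;                    % empty clusters stay at zeros, as in the loop
centroids = (S' * X) ./ counts;             % per-cluster sums divided by cluster sizes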

4. pca.m

function [U, S] = pca(X)
%PCA Run principal component analysis on the dataset X
%   [U, S] = pca(X) computes eigenvectors of the covariance matrix of X
%   Returns the eigenvectors U, the eigenvalues (on diagonal) in S
%

% Useful values
[m, n] = size(X);

% You need to return the following variables correctly.
U = zeros(n);
S = zeros(n);

% ====================== YOUR CODE HERE ======================
% Instructions: You should first compute the covariance matrix. Then, you
%               should use the "svd" function to compute the eigenvectors
%               and eigenvalues of the covariance matrix.
%
% Note: When computing the covariance matrix, remember to divide by m (the
%       number of examples).
%

Omega = X' * X / m;      % n x n covariance matrix (X is assumed normalized)
[U, S, V] = svd(Omega);  % columns of U are the eigenvectors

% =========================================================================

end
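
Note that pca assumes X has already been mean-normalized; the provided ex7_pca.m calls featureNormalize before pca. A typical usage sketch, assuming the featureNormalize.m that ships with the exercise:

load('ex7data1.mat');                       % the PCA example dataset (provides X)
[X_norm, mu, sigma] = featureNormalize(X);  % subtract mean, scale by std per feature
[U, S] = pca(X_norm);                       % U(:, 1) is the top principal component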

5. projectData.m

function Z = projectData(X, U, K)
%PROJECTDATA Computes the reduced data representation when projecting only
%on to the top k eigenvectors
%   Z = projectData(X, U, K) computes the projection of
%   the normalized inputs X into the reduced dimensional space spanned by
%   the first K columns of U. It returns the projected examples in Z.
%

% You need to return the following variables correctly.
Z = zeros(size(X, 1), K);   % m x K

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the projection of the data using only the top K
%               eigenvectors in U (first K columns).
%               For the i-th example X(i,:), the projection on to the k-th
%               eigenvector is given as follows:
%                    x = X(i, :)';
%                    projection_k = x' * U(:, k);
%

Ureduce = U(:, 1:K);   % first K eigenvectors, n x K
Z = X * Ureduce;       % project all examples at once

% =============================================================

end
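
A usage sketch mirroring the check in the provided ex7_pca.m, continuing from the pca step above:

K = 1;
Z = projectData(X_norm, U, K);   % reduce from 2D to 1D
fprintf('Projection of the first example: %f\n', Z(1));
% ex7_pca.m expects a value of about 1.481274 here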

6. recoverData.m

function X_rec = recoverData(Z, U, K)
%RECOVERDATA Recovers an approximation of the original data when using the
%projected data
%   X_rec = RECOVERDATA(Z, U, K) recovers an approximation the
%   original data that has been reduced to K dimensions. It returns the
%   approximate reconstruction in X_rec.
%

% You need to return the following variables correctly.
X_rec = zeros(size(Z, 1), size(U, 1));   % same size as X

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the approximation of the data by projecting back
%               onto the original space using the top K eigenvectors in U.
%
%               For the i-th example Z(i,:), the (approximate)
%               recovered data for dimension j is given as follows:
%                    v = Z(i, :)';
%                    recovered_j = v' * U(j, 1:K)';
%
%               Notice that U(j, 1:K) is a row vector.
%

Ureduce = U(:, 1:K);    % first K eigenvectors
X_rec = Z * Ureduce';   % map back to the original n-dimensional space

% =============================================================

end
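
And the round trip, again mirroring ex7_pca.m and continuing from the projection above:

X_rec = recoverData(Z, U, K);   % approximate reconstruction in 2D
fprintf('Approximation of the first example: %f %f\n', X_rec(1, 1), X_rec(1, 2));
% ex7_pca.m expects roughly -1.047419 -1.047419 here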

7. Submit results
