Multi-class Classification

1. Data Preprocessing and Visualization

displayData.m

function [h, display_array] = displayData(X, example_width)
%DISPLAYDATA Display 2D data in a nice grid
%   [h, display_array] = DISPLAYDATA(X, example_width) displays 2D data
%   stored in X in a nice grid. It returns the figure handle h and the
%   displayed array if requested.

% Set example_width automatically if not passed in
if ~exist('example_width', 'var') || isempty(example_width)
    example_width = round(sqrt(size(X, 2)));
end

% Gray Image
colormap(gray);

% Compute rows, cols
[m n] = size(X);
example_height = (n / example_width);

% Compute number of items to display
display_rows = floor(sqrt(m));
display_cols = ceil(m / display_rows);

% Between images padding
pad = 1;

% Setup blank display
display_array = - ones(pad + display_rows * (example_height + pad), ...
                       pad + display_cols * (example_width + pad));

% Copy each example into a patch on the display array
curr_ex = 1;
for j = 1:display_rows
    for i = 1:display_cols
        if curr_ex > m, break; end
        % Copy the patch

        % Get the max value of the patch
        max_val = max(abs(X(curr_ex, :)));
        display_array(pad + (j - 1) * (example_height + pad) + (1:example_height), ...
                      pad + (i - 1) * (example_width + pad) + (1:example_width)) = ...
            reshape(X(curr_ex, :), example_height, example_width) / max_val;
        curr_ex = curr_ex + 1;
    end
    if curr_ex > m, break; end
end

% Display Image
h = imagesc(display_array, [-1 1]);

% Do not show axis
axis image off

drawnow;

end

ex3.m

%% Initialization
clear ; close all; clc

%% Setup the parameters you will use for this part of the exercise
input_layer_size  = 400;  % 20x20 Input Images of Digits
num_labels = 10;          % 10 labels, from 1 to 10
                          % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex3data1.mat'); % training data stored in arrays X, y
m = size(X, 1);

% Randomly select 100 data points to display
rand_indices = randperm(m);
sel = X(rand_indices(1:100), :);

displayData(sel);

fprintf('Program paused. Press enter to continue.\n');
%pause;
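In ex3data1.mat, each row of X is one 20x20 grayscale digit image unrolled into a 400-dimensional row vector, so X is 5000 x 400 and y is the corresponding 5000 x 1 label vector (with the digit 0 stored as label 10, as noted above).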

2. Regularized Cost Function and Gradient Descent


lrCostFunction.m

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. the parameters.

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
% J = 0;
% grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
%       efficiently vectorized. For example, consider the computation
%
%           sigmoid(X * theta)
%
%       Each row of the resulting matrix will contain the value of the
%       prediction for that example. You can make use of this to vectorize
%       the cost function and gradient computations.
%
% Hint: When computing the gradient of the regularized cost function,
%       there are many possible vectorized solutions, but one solution
%       looks like:
%           grad = (unregularized gradient for logistic regression)
%           temp = theta;
%           temp(1) = 0;   % because we don't add anything for j = 0
%           grad = grad + YOUR_CODE_HERE (using the temp variable)
%

h_x = sigmoid(X * theta);
y1 = -y' * log(h_x);
y2 = (1 - y)' * log(1 - h_x);
tempTt = theta;
tempTt(1) = 0;   % the bias term theta(1) is not regularized
J = 1 / m * (y1 - y2) + lambda / (2 * m) * (tempTt' * tempTt);
grad = 1 / m .* X' * (h_x - y) + lambda / m .* tempTt;

% =============================================================

grad = grad(:);

end
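For reference, the code above is a vectorized implementation of the regularized logistic regression cost and gradient; zeroing tempTt(1) is what keeps the bias term out of the regularization:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log h_\theta(x^{(i)}) - (1-y^{(i)})\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

$$\frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)\,x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \geq 1; \text{ no regularization term for } j = 0)$$

where $h_\theta(x) = g(\theta^{T}x)$ and $g$ is the sigmoid function.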

ex3.m

%% ============ Part 2a: Vectorize Logistic Regression ============
%  In this part of the exercise, you will reuse your logistic regression
%  code from the last exercise. Your task here is to make sure that your
%  regularized logistic regression implementation is vectorized. After
%  that, you will implement one-vs-all classification for the handwritten
%  digit dataset.

% Test case for lrCostFunction
fprintf('\nTesting lrCostFunction() with regularization');

theta_t = [-2; -1; 1; 2];
X_t = [ones(5,1) reshape(1:15,5,3)/10];
y_t = ([1;0;1;0;1] >= 0.5);
lambda_t = 3;
[J grad] = lrCostFunction(theta_t, X_t, y_t, lambda_t);

fprintf('\nCost: %f\n', J);
fprintf('Expected cost: 2.534819\n');
fprintf('Gradients:\n');
fprintf(' %f \n', grad);
fprintf('Expected gradients:\n');
fprintf(' 0.146561\n -0.548558\n 0.724722\n 1.398003\n');

fprintf('Program paused. Press enter to continue.\n');
%pause;
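Note that lrCostFunction calls sigmoid, which this post does not list; it is the helper function carried over from ex2. A minimal version, in case it is not already in your working directory:

function g = sigmoid(z)
%SIGMOID Compute the logistic function 1 / (1 + e^-z) element-wise on z.
g = 1 ./ (1 + exp(-z));
end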

3. one-vs-all

oneVsAll.m

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta
%corresponds to the classifier for label i
%   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
%   logistic regression classifiers and returns each of these classifiers
%   in a matrix all_theta, where the i-th row of all_theta corresponds
%   to the classifier for label i

% Some useful variables
m = size(X, 1);
n = size(X, 2);

% You need to return the following variables correctly
all_theta = zeros(num_labels, n + 1);

% Add ones to the X data matrix
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the following code to train num_labels
%               logistic regression classifiers with regularization
%               parameter lambda.
%
% Hint: theta(:) will return a column vector.
%
% Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
%       whether the ground truth is true/false for this class.
%
% Note: For this assignment, we recommend using fmincg to optimize the cost
%       function. It is okay to use a for-loop (for c = 1:num_labels) to
%       loop over the different classes.
%
%       fmincg works similarly to fminunc, but is more efficient when we
%       are dealing with a large number of parameters.

% Set Initial theta
initial_theta = zeros(n + 1, 1);

% Set options for fmincg
options = optimset('GradObj', 'on', 'MaxIter', 50);

% Run fmincg to obtain the optimal theta for each class; fmincg returns
% the optimized theta and the cost, and we keep only theta.
for c = 1 : num_labels
    all_theta(c,:) = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                            initial_theta, options);
end
% =========================================================================

end
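The key step in the loop above is the y == c comparison, which converts the multi-class labels into binary targets for classifier c. A small illustration with made-up labels:

y = [1; 2; 3; 1; 2];   % hypothetical labels for 5 examples
c = 2;
targets = (y == c)     % targets = [0; 1; 0; 0; 1], i.e. "is this example class c?"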

predictOneVsAll.m

function p = predictOneVsAll(all_theta, X)
%PREDICT Predict the label for a trained one-vs-all classifier. The labels
%are in the range 1..K, where K = size(all_theta, 1).
%  p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
%  for each example in the matrix X. Note that X contains the examples in
%  rows. all_theta is a matrix where the i-th row is a trained logistic
%  regression theta vector for the i-th class. You should set p to a vector
%  of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
%  for 4 examples)

m = size(X, 1);
num_labels = size(all_theta, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% Add ones to the X data matrix
X = [ones(m, 1) X];

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned logistic regression parameters (one-vs-all).
%               You should set p to a vector of predictions (from 1 to
%               num_labels).
%
% Hint: This code can be done all vectorized using the max function.
%       In particular, the max function can also return the index of the
%       max element, for more information see 'help max'. If your examples
%       are in rows, then, you can use max(A, [], 2) to obtain the max
%       for each row.

% For each example, pick the class whose classifier outputs the highest
% probability; the returned index is the predicted label.
[c, p] = max(sigmoid(X * all_theta'), [], 2);

% =========================================================================

end

max(x,[],dim) returns the maximum of the matrix x along the given dimension.

The empty placeholder [] exists to distinguish max(x,[],dim) from max(x,y); it marks which of the two calling forms is being used. dim is the dimension to reduce over: for a matrix x, dim = 1 takes the maximum of each column, while dim = 2 takes the maximum of each row.
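For example (matrix values made up for illustration):

A = [0.2 0.9 0.1;
     0.7 0.3 0.6];
[vals, idx] = max(A, [], 2)
% vals = [0.9; 0.7]  -- the row-wise maxima
% idx  = [2; 1]      -- the column index of each maximum, which
%                       predictOneVsAll uses as the predicted class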

ex3.m

%% ============ Part 2b: One-vs-All Training ============
fprintf('\nTraining One-vs-All Logistic Regression...\n')

lambda = 0.1;
[all_theta] = oneVsAll(X, y, num_labels, lambda);

fprintf('Program paused. Press enter to continue.\n');
%pause;

%% ================ Part 3: Predict for One-Vs-All ================

pred = predictOneVsAll(all_theta, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
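If the implementation is correct, the exercise handout says this training-set accuracy should come out to about 94.9%.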

Neural Networks

1. Feedforward Propagation and Prediction

predict.m

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of an input given a trained neural network
%   p = PREDICT(Theta1, Theta2, X) outputs the predicted label of X given the
%   trained weights of a neural network (Theta1, Theta2)

% Useful values
m = size(X, 1);
num_labels = size(Theta2, 1);

% You need to return the following variables correctly
p = zeros(size(X, 1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the following code to make predictions using
%               your learned neural network. You should set p to a
%               vector containing labels between 1 to num_labels.
%
% Hint: The max function might come in useful. In particular, the max
%       function can also return the index of the max element, for more
%       information see 'help max'. If your examples are in rows, then, you
%       can use max(A, [], 2) to obtain the max for each row.

a1 = [ones(m, 1) X];        % input layer with bias unit    : 5000 x 401
a2 = sigmoid(a1 * Theta1'); % hidden layer activations      : 5000 x 25
a2 = [ones(m, 1) a2];       % add bias unit to hidden layer : 5000 x 26
a3 = sigmoid(a2 * Theta2'); % output layer activations      : 5000 x 10

[x, p] = max(a3, [], 2);    % index of the largest output is the label

% =========================================================================

end
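Written out with the row-vector convention the code uses (g is the sigmoid), the forward pass for one example x is:

$$a^{(1)} = \begin{bmatrix}1 & x\end{bmatrix}, \quad a^{(2)} = \begin{bmatrix}1 & g\big(a^{(1)}\,\Theta^{(1)\top}\big)\end{bmatrix}, \quad a^{(3)} = g\big(a^{(2)}\,\Theta^{(2)\top}\big), \quad p = \arg\max_k a^{(3)}_k$$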

ex3_nn.m

%% ================= Part 3: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

fprintf('Program paused. Press enter to continue.\n');
%pause;
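With the pre-trained weights loaded from ex3weights.mat, the handout says this training-set accuracy should be about 97.5%.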

ex3_nn.m

%% Examples
%  To give you an idea of the network's output, you can also run
%  through the examples one at a time to see what it is predicting.

%  Randomly permute examples
rp = randperm(m);

for i = 1:m
    % Display
    fprintf('\nDisplaying Example Image\n');
    displayData(X(rp(i), :));

    pred = predict(Theta1, Theta2, X(rp(i),:));
    fprintf('\nNeural Network Prediction: %d (digit %d)\n', pred, mod(pred, 10));

    % Pause with quit option
    s = input('Paused - press enter to continue, q to exit:', 's');
    if s == 'q'
        break
    end
end


Full assignment code:

MLExercise_Ng
