Nearest Neighbor Classifier

L1 distance:

Let I1 and I2 denote a training image and a test image. Subtract the two images pixel by pixel and sum the absolute differences; the result is the L1 distance, d1(I1, I2) = sum_p |I1^p - I2^p| (see the toy example below).

The larger the L1 value, the more the test image differs from the training image.

L2 distance: d2(I1, I2) = sqrt(sum_p (I1^p - I2^p)^2), i.e. square the per-pixel differences, sum them, and take the square root.
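A tiny NumPy illustration of both metrics on hypothetical 2x2 "images" (the values are made up purely for this example):

import numpy as np

I_train = np.array([[56, 32], [10, 18]])        # hypothetical training image
I_test  = np.array([[12, 24], [10, 180]])       # hypothetical test image

l1 = np.sum(np.abs(I_test - I_train))           # sum of per-pixel absolute differences
l2 = np.sqrt(np.sum((I_test - I_train) ** 2))   # square, sum, then take the root
print(l1, l2)                                   # 214  ~168.06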

Code notes:

1. Notebook: import packages and configure plt

# Notebook cell 1
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt                   # matplotlib is the plotting library
%matplotlib inline  # the %matplotlib magic embeds matplotlib figures directly in the notebook
plt.rcParams['figure.figsize'] = (10.0, 8.0)      # default figure size
plt.rcParams['image.interpolation'] = 'nearest'   # nearest-neighbor interpolation: pixels rendered as squares
plt.rcParams['image.cmap'] = 'gray'               # default colormap is grayscale

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

More matplotlib settings can be found here: http://t.csdn.cn/TPgOh

2. Load the dataset

# Notebook cell 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'  # path to the CIFAR-10 dataset; you can download it manually
                                                     # and place it here (cs231n sits under the assignment1 directory)

# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
    del X_train, y_train   # del unbinds the names from the underlying memory
    del X_test, y_test
    print('Clear previously loaded data.')
except:
    pass

X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)  # load the dataset; load_CIFAR10 is defined in data_utils

# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)

# Output:
# Training data shape:  (50000, 32, 32, 3)
# Training labels shape:  (50000,)
# Test data shape:  (10000, 32, 32, 3)
# Test labels shape:  (10000,)

3. Show some examples from the dataset

Detailed explanations of flatnonzero() and random.choice(): http://t.csdn.cn/abuYJ
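A quick toy illustration of the two NumPy helpers used in the cell below (the arrays are made up for this example):

import numpy as np

labels = np.array([3, 0, 3, 1, 3])
idxs = np.flatnonzero(labels == 3)                  # indices where the condition holds -> array([0, 2, 4])
picked = np.random.choice(idxs, 2, replace=False)   # draw 2 of those indices without repetition
print(idxs, picked)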

# Notebook cell 3
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# classes holds the category labels
num_classes = len(classes)   # number of classes
samples_per_class = 7        # number of example images to show per class
for y, cls in enumerate(classes):               # y is the label index, cls the label name (e.g. y=0, cls='plane')
    idxs = np.flatnonzero(y_train == y)         # indices of the training examples whose label equals y
    idxs = np.random.choice(idxs, samples_per_class, replace=False)  # pick samples_per_class of them to form a new array
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1       # subplot index: i is the row position, y the column position of the display grid
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()

The output is a grid of sample images, samples_per_class rows by num_classes columns, one column per class.

4. Subsample the data

# Notebook cell 4
# Subsample the data for more efficient code execution in this exercise
# Use a mask array to slice out the training and test subsets
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]

num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]

# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))  # flatten each image into one row, giving X_train.shape[0] rows
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)

# Output:
# (5000, 3072) (500, 3072)

5. Import k_nearest_neighbor

# Notebook cell 5
from cs231n.classifiers import KNearestNeighbor
# import the KNearestNeighbor class from k_nearest_neighbor.py

# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
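The later notebook cells then compute the distance matrix and check accuracy with k = 1. Roughly (a sketch only; the exact cell wording depends on the assignment version, but the method names match the class below and the variables defined above):

dists = classifier.compute_distances_two_loops(X_test)   # (num_test, num_train) distance matrix
y_test_pred = classifier.predict_labels(dists, k=1)      # label of the single nearest neighbor
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / y_test.shape[0]
print('Got %d / %d correct => accuracy: %f' % (num_correct, y_test.shape[0], accuracy))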

The code of the k_nearest_neighbor.py module is as follows:

from builtins import range
from builtins import object
import numpy as np
from past.builtins import xrange


class KNearestNeighbor(object):
    """ a kNN classifier with L2 distance """

    def __init__(self):
        pass

    def train(self, X, y):
        """
        Train the classifier. For k-nearest neighbors this is just
        memorizing the training data.

        Inputs:
        - X: A numpy array of shape (num_train, D) containing the training data
          consisting of num_train samples each of dimension D.
        - y: A numpy array of shape (N,) containing the training labels, where
          y[i] is the label for X[i].
        """
        self.X_train = X
        self.y_train = y

    def predict(self, X, k=1, num_loops=0):
        """
        Predict labels for test data using this classifier.

        Inputs:
        - X: A numpy array of shape (num_test, D) containing test data consisting
          of num_test samples each of dimension D.
        - k: The number of nearest neighbors that vote for the predicted labels.
        - num_loops: Determines which implementation to use to compute distances
          between training points and testing points.

        Returns:
        - y: A numpy array of shape (num_test,) containing predicted labels for the
          test data, where y[i] is the predicted label for the test point X[i].
        """
        if num_loops == 0:
            dists = self.compute_distances_no_loops(X)
        elif num_loops == 1:
            dists = self.compute_distances_one_loop(X)
        elif num_loops == 2:
            dists = self.compute_distances_two_loops(X)
        else:
            raise ValueError('Invalid value %d for num_loops' % num_loops)

        return self.predict_labels(dists, k=k)

The L2 distance functions are given below. The formula is d2(I1, I2) = sqrt(sum_p (I1^p - I2^p)^2).

This is the two-loop version:

    def compute_distances_two_loops(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using a nested loop over both the training data and the
        test data.

        Inputs:
        - X: A numpy array of shape (num_test, D) containing test data.

        Returns:
        - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
          is the Euclidean distance between the ith test point and the jth training
          point.
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                #####################################################################
                # TODO:                                                             #
                # Compute the l2 distance between the ith test point and the jth   #
                # training point, and store the result in dists[i, j]. You should  #
                # not use a loop over dimension, nor use np.linalg.norm().         #
                #####################################################################
                # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

                dists[i][j] = np.sqrt(np.sum((X[i] - self.X_train[j]) ** 2))
                # dists[i][j] = np.linalg.norm(X[i] - self.X_train[j])

                # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

Using a single loop, the code is as follows:

    def compute_distances_one_loop(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using a single loop over the test data.

        Input / Output: Same as compute_distances_two_loops
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            #######################################################################
            # TODO:                                                               #
            # Compute the l2 distance between the ith test point and all training #
            # points, and store the result in dists[i, :].                        #
            # Do not use np.linalg.norm().                                        #
            #######################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # dists[i, :] = np.sqrt(np.sum((self.X_train - X[i]) ** 2, axis=1))
            dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i]), axis=1))

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

For the derivation of the no-loop formula, see http://t.csdn.cn/upPdB
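The algebra behind the fully vectorized version, written out here as a short reminder (x_i denotes the i-th test row and y_j the j-th training row; these symbols are just notation for this note):

\[
\|x_i - y_j\|_2^2 = \|x_i\|_2^2 + \|y_j\|_2^2 - 2\,x_i^\top y_j
\]

So the whole distance matrix needs only one matrix product X · X_train^T plus two broadcast sums of squared norms, followed by an elementwise square root, which is exactly what the code below computes.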

The code is as follows:

    def compute_distances_no_loops(self, X):
        """
        Compute the distance between each test point in X and each training point
        in self.X_train using no explicit loops.

        Input / Output: Same as compute_distances_two_loops
        """
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        #########################################################################
        # TODO:                                                                 #
        # Compute the l2 distance between all test points and all training     #
        # points without using any explicit loops, and store the result in     #
        # dists.                                                                #
        #                                                                       #
        # You should implement this function using only basic array operations;#
        # in particular you should not use functions from scipy,               #
        # nor use np.linalg.norm().                                            #
        #                                                                       #
        # HINT: Try to formulate the l2 distance using matrix multiplication   #
        #       and two broadcast sums.                                        #
        #########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # reference: https://mlxai.github.io/2017/01/03/finding-distances-between-data-points-with-numpy.html
        dists = X.dot(self.X_train.T)
        X_square = np.sum(np.square(X), axis=1)
        X_train_square = np.square(self.X_train).sum(axis=1)
        dists = np.sqrt(X_square[:, np.newaxis] + X_train_square - 2 * dists)  # np.newaxis adds a dimension so the sums broadcast

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return dists

The label-prediction function is below; for details on argsort, see http://t.csdn.cn/UZ9oe:

    def predict_labels(self, dists, k=1):
        """
        Given a matrix of distances between test points and training points,
        predict a label for each test point.

        Inputs:
        - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
          gives the distance between the ith test point and the jth training point.

        Returns:
        - y: A numpy array of shape (num_test,) containing predicted labels for the
          test data, where y[i] is the predicted label for the test point X[i].
        """
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            # A list of length k storing the labels of the k nearest neighbors to
            # the ith test point.
            closest_y = []
            #########################################################################
            # TODO:                                                                 #
            # Use the distance matrix to find the k nearest neighbors of the ith   #
            # testing point, and use self.y_train to find the labels of these      #
            # neighbors. Store these labels in closest_y.                          #
            # Hint: Look up the function numpy.argsort.                            #
            #########################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # argsort orders the distances from smallest to largest; keep the labels of the first k
            closest_y = self.y_train[np.argsort(dists[i])][0:k]

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            #########################################################################
            # TODO:                                                                 #
            # Now that you have found the labels of the k nearest neighbors, you   #
            # need to find the most common label in the list closest_y of labels.  #
            # Store this label in y_pred[i]. Break ties by choosing the smaller    #
            # label.                                                               #
            #########################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # vote among the k labels and take the most frequent class as the final prediction
            y_pred[i] = max(closest_y, key=list(closest_y).count)

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        return y_pred
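A toy walk-through of the sort-then-vote step (distances and labels are made up for this example):

import numpy as np

dists_row = np.array([0.9, 0.1, 0.5, 0.3])            # distances from one test point to 4 training points
y_train_toy = np.array([2, 7, 7, 1])                   # their labels
closest_y = y_train_toy[np.argsort(dists_row)][:3]     # labels of the 3 nearest -> [7, 1, 7]
pred = max(closest_y, key=list(closest_y).count)       # majority vote -> 7
print(closest_y, pred)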

6. Cross-validation

The idea of cross-validation: split the training set into num_folds folds; for each candidate k, train num_folds times, each time holding out one fold for validation and training on the remaining folds, then average the accuracies.

num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]

X_train_folds = []
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and   #
# y_train_folds should each be lists of length num_folds, where               #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].    #
# Hint: Look up the numpy array_split function.                               #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

y_train = y_train.reshape(-1, 1)
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
# split X_train and y_train into num_folds sub-arrays

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}

################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each       #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,  #
# where in each case you use all but one of the folds as training data and    #
# the last fold as a validation set. Store the accuracies for all fold and    #
# all values of k in the k_to_accuracies dictionary.                          #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

num_fold_test = y_train_folds[0].shape[0]
for k in k_choices:
    classifier = KNearestNeighbor()
    k_to_accuracies[k] = []
    for test_idx in range(num_folds):
        # print(np.concatenate(np.delete(X_train_folds, k, 0), axis=0).shape)
        classifier.train(np.concatenate(np.delete(X_train_folds, test_idx, 0)),
                         np.concatenate(np.delete(y_train_folds, test_idx, 0)))
        # np.delete removes the fold at index test_idx (row-wise);
        # np.concatenate glues the remaining folds back into one array
        dists = classifier.compute_distances_no_loops(X_train_folds[test_idx])
        acc = np.sum(classifier.predict_labels(dists, k) == y_train_folds[test_idx].flatten()) / num_fold_test * 1.0
        # .flatten() makes the held-out labels 1-D again (y_train was reshaped to a column above),
        # so the comparison is elementwise; acc is the cross-validation accuracy on this fold
        k_to_accuracies[k].append(acc)

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
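As a follow-up sketch (not part of the original cell), the mean accuracy per k can be compared to pick the best value, which the assignment then uses to re-run kNN on the subsampled test set:

# Assumes k_to_accuracies has been filled by the loop above.
mean_accuracies = {k: np.mean(v) for k, v in k_to_accuracies.items()}
best_k = max(mean_accuracies, key=mean_accuracies.get)
print('best k = %d, mean cross-validation accuracy = %f' % (best_k, mean_accuracies[best_k]))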
