Problem 1:

You will build a logistic regression model to predict whether a student is admitted to a university. Suppose you are the administrator of a university department and want to determine each applicant's chance of admission based on their scores on two exams. You have historical data from previous applicants, which can serve as a training set for logistic regression. For each training example, you have the applicant's scores on the two exams and the admission decision. Your task is to build a classification model that estimates an applicant's probability of admission from these two exam scores.
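The model in question is the logistic hypothesis h(x) = g(θᵀx), where g is the sigmoid function. A minimal sketch of how a pair of exam scores is turned into an admission probability; the θ values here are made-up placeholders, not fitted parameters:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps any real number into (0, 1)
    return 1 / (1 + np.exp(-z))

# Hypothetical parameters [theta0, theta1, theta2] -- placeholders only;
# the real values come from training on the data set below.
theta = np.array([-25.0, 0.2, 0.2])
scores = np.array([1.0, 45.0, 85.0])  # [intercept term, exam 1, exam 2]

probability = sigmoid(theta @ scores)  # h(x) = g(theta^T x)
print(probability)
```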

Dataset:

34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
75.02474556738889,46.55401354116538,1
76.09878670226257,87.42056971926803,1
84.43281996120035,43.53339331072109,1
95.86155507093572,38.22527805795094,0
75.01365838958247,30.60326323428011,0
82.30705337399482,76.48196330235604,1
69.36458875970939,97.71869196188608,1
39.53833914367223,76.03681085115882,0
53.9710521485623,89.20735013750205,1
69.07014406283025,52.74046973016765,1
67.94685547711617,46.67857410673128,0
70.66150955499435,92.92713789364831,1
76.97878372747498,47.57596364975532,1
67.37202754570876,42.83843832029179,0
89.67677575072079,65.79936592745237,1
50.534788289883,48.85581152764205,0
34.21206097786789,44.20952859866288,0
77.9240914545704,68.9723599933059,1
62.27101367004632,69.95445795447587,1
80.1901807509566,44.82162893218353,1
93.114388797442,38.80067033713209,0
61.83020602312595,50.25610789244621,0
38.78580379679423,64.99568095539578,0
61.379289447425,72.80788731317097,1
85.40451939411645,57.05198397627122,1
52.10797973193984,63.12762376881715,0
52.04540476831827,69.43286012045222,1
40.23689373545111,71.16774802184875,0
54.63510555424817,52.21388588061123,0
33.91550010906887,98.86943574220611,0
64.17698887494485,80.90806058670817,1
74.78925295941542,41.57341522824434,0
34.1836400264419,75.2377203360134,0
83.90239366249155,56.30804621605327,1
51.54772026906181,46.85629026349976,0
94.44336776917852,65.56892160559052,1
82.36875375713919,40.61825515970618,0
51.04775177128865,45.82270145776001,0
62.22267576120188,52.06099194836679,0
77.19303492601364,70.45820000180959,1
97.77159928000232,86.7278223300282,1
62.07306379667647,96.76882412413983,1
91.56497449807442,88.69629254546599,1
79.94481794066932,74.16311935043758,1
99.2725269292572,60.99903099844988,1
90.54671411399852,43.39060180650027,1
34.52451385320009,60.39634245837173,0
50.2864961189907,49.80453881323059,0
49.58667721632031,59.80895099453265,0
97.64563396007767,68.86157272420604,1
32.57720016809309,95.59854761387875,0
74.24869136721598,69.82457122657193,1
71.79646205863379,78.45356224515052,1
75.3956114656803,85.75993667331619,1
35.28611281526193,47.02051394723416,0
56.25381749711624,39.26147251058019,0
30.05882244669796,49.59297386723685,0
44.66826172480893,66.45008614558913,0
66.56089447242954,41.09209807936973,0
40.45755098375164,97.53518548909936,1
49.07256321908844,51.88321182073966,0
80.27957401466998,92.11606081344084,1
66.74671856944039,60.99139402740988,1
32.72283304060323,43.30717306430063,0
64.0393204150601,78.03168802018232,1
72.34649422579923,96.22759296761404,1
60.45788573918959,73.09499809758037,1
58.84095621726802,75.85844831279042,1
99.82785779692128,72.36925193383885,1
47.26426910848174,88.47586499559782,1
50.45815980285988,75.80985952982456,1
60.45555629271532,42.50840943572217,0
82.22666157785568,42.71987853716458,0
88.9138964166533,69.80378889835472,1
94.83450672430196,45.69430680250754,1
67.31925746917527,66.58935317747915,1
57.23870631569862,59.51428198012956,1
80.36675600171273,90.96014789746954,1
68.46852178591112,85.59430710452014,1
42.0754545384731,78.84478600148043,0
75.47770200533905,90.42453899753964,1
78.63542434898018,96.64742716885644,1
52.34800398794107,60.76950525602592,0
94.09433112516793,77.15910509073893,1
90.44855097096364,87.50879176484702,1
55.48216114069585,35.57070347228866,0
74.49269241843041,84.84513684930135,1
89.84580670720979,45.35828361091658,1
83.48916274498238,48.38028579728175,1
42.2617008099817,87.10385094025457,1
99.31500880510394,68.77540947206617,1
55.34001756003703,64.9319380069486,1
74.77589300092767,89.52981289513276,1

Python implementation:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize as opt

# Read the data set
path = "C:/Users/Administrator/Desktop/吴恩达机器学习数据集/week1/ex2data1.txt"
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
print(data.head())

# Split the examples by label: isin([1]) returns a boolean mask that is True
# wherever the Admitted column equals 1
positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]

# Sigmoid and cost function
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def computeCost(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    return -(np.sum(first + second) / len(X))
# Initialize the data: add a column of ones for the intercept term
data.insert(0, 'Ones', 1)
cols = data.shape[1]
X = data.iloc[:, 0:cols-1]
y = data.iloc[:, cols-1:cols]
theta = np.zeros(3)
X = np.matrix(X.values)
y = np.matrix(y.values)
print("Initial cost:")
print(computeCost(theta, X, y))

# Advanced optimization. Despite its name, gradientDescent below does not
# update theta; it only computes and returns the gradient vector, i.e. the
# partial derivative of the cost with respect to each theta value.
def gradientDescent(theta, X, y):
    X = np.matrix(X)
    y = np.matrix(y)
    theta = np.matrix(theta)
    parameters = int(theta.ravel().shape[1])  # flatten theta and count parameters
    grad = np.zeros(parameters)
    error = sigmoid(X * theta.T) - y
    for i in range(parameters):
        term = np.multiply(error, X[:, i])
        grad[i] = np.sum(term) / len(X)
    return grad

# fmin_tnc solves bound-constrained multivariate problems with a truncated
# Newton method. Arguments:
#   func:   the objective to minimize
#   x0:     initial guess
#   fprime: gradient of func (otherwise func must return (value, gradient),
#           or approx_grad=True must be set for a numerical approximation)
#   args:   tuple of extra arguments passed to func and fprime
result = opt.fmin_tnc(func=computeCost, x0=theta, fprime=gradientDescent, args=(X, y))
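The fmin_tnc calling convention described above (func, x0, fprime, args) can be tried in isolation on a toy quadratic whose minimum is known; everything in this sketch is synthetic:

```python
import numpy as np
import scipy.optimize as opt

def f(x, a):
    # Quadratic bowl with its minimum at x = a
    return np.sum((x - a) ** 2)

def fprime(x, a):
    return 2 * (x - a)

target = np.array([3.0, -1.0])
# fmin_tnc returns (solution, number of function evaluations, return code)
x_opt, n_evals, rc = opt.fmin_tnc(func=f, x0=np.zeros(2), fprime=fprime, args=(target,))
print(x_opt)  # close to [3, -1]
```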
print("Result returned by the optimizer:")
print(result)
print("Cost at the fitted theta:")
print(computeCost(result[0], X, y))

# Scatter plot with the decision boundary. The boundary comes from setting
# theta0 + theta1*x1 + theta2*x2 = 0, i.e. x2 = -(theta0 + theta1*x1) / theta2.
plotting_x1 = np.linspace(30, 100, 100)  # 100 evenly spaced scores from 30 to 100
plotting_h1 = (-(result[0][0] + result[0][1] * plotting_x1)) / result[0][2]
fig, ax = plt.subplots(figsize=(8, 6))  # 8 x 6 inch figure
ax.plot(plotting_x1, plotting_h1, 'y', label='Prediction')
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()
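The plotted line is where θᵀx = 0, i.e. where the sigmoid crosses 0.5, so any point on it should receive probability exactly 0.5. A quick check with placeholder θ values (standing in for the fitted result[0]):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Placeholder parameters, standing in for the fitted theta
theta = np.array([-25.0, 0.2, 0.2])

x1 = 60.0
x2 = -(theta[0] + theta[1] * x1) / theta[2]  # point on the boundary line
p = sigmoid(theta @ np.array([1.0, x1, x2]))
print(p)  # probability on the boundary
```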
# Predict the admission probability for one student
def hfunc(theta, X):
    return sigmoid(X * theta.T)

pre = np.matrix([1, 45, 85])
the = np.matrix(result[0])
print("Probability of admission for this student:", hfunc(the, pre))
# Compute the training accuracy
def predict(theta, X):
    # theta is 1x3 and X is 100x3, so X * theta.T is 100x1
    probability = sigmoid(X * theta.T)
    return [1 if x >= 0.5 else 0 for x in probability]  # threshold at 0.5

theta_min = np.matrix(result[0])
# X must already be a matrix (or array) here; a DataFrame would not work
predictions = predict(theta_min, X)
correct = [1 if a == b else 0 for (a, b) in zip(predictions, np.asarray(y).ravel())]
accuracy = sum(correct) / len(correct) * 100  # fraction correct, as a percentage
print('accuracy = {0}%'.format(accuracy))
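At θ = 0 the sigmoid outputs 0.5 for every example, so the initial cost printed earlier should be −ln(0.5) = ln 2 ≈ 0.693 regardless of the data set. A self-contained check on a tiny synthetic set (the values are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def compute_cost(theta, X, y):
    # Same cross-entropy cost as above, in plain ndarray form
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

# Tiny synthetic data set: intercept column plus two made-up features
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 1.0],
              [1.0, 0.5, 2.5]])
y = np.array([1.0, 0.0, 1.0])

print(compute_cost(np.zeros(3), X, y))  # ln(2), independent of the data
```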



Problem 2:

Regularized logistic regression. In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant pass quality assurance (QA). During QA, each microchip goes through various tests to ensure it functions correctly. Suppose you are the product manager of the factory and have the results of two different tests on some microchips. From these two tests, you would like to decide whether a microchip should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model.

Dataset:

0.051267,0.69956,1
-0.092742,0.68494,1
-0.21371,0.69225,1
-0.375,0.50219,1
-0.51325,0.46564,1
-0.52477,0.2098,1
-0.39804,0.034357,1
-0.30588,-0.19225,1
0.016705,-0.40424,1
0.13191,-0.51389,1
0.38537,-0.56506,1
0.52938,-0.5212,1
0.63882,-0.24342,1
0.73675,-0.18494,1
0.54666,0.48757,1
0.322,0.5826,1
0.16647,0.53874,1
-0.046659,0.81652,1
-0.17339,0.69956,1
-0.47869,0.63377,1
-0.60541,0.59722,1
-0.62846,0.33406,1
-0.59389,0.005117,1
-0.42108,-0.27266,1
-0.11578,-0.39693,1
0.20104,-0.60161,1
0.46601,-0.53582,1
0.67339,-0.53582,1
-0.13882,0.54605,1
-0.29435,0.77997,1
-0.26555,0.96272,1
-0.16187,0.8019,1
-0.17339,0.64839,1
-0.28283,0.47295,1
-0.36348,0.31213,1
-0.30012,0.027047,1
-0.23675,-0.21418,1
-0.06394,-0.18494,1
0.062788,-0.16301,1
0.22984,-0.41155,1
0.2932,-0.2288,1
0.48329,-0.18494,1
0.64459,-0.14108,1
0.46025,0.012427,1
0.6273,0.15863,1
0.57546,0.26827,1
0.72523,0.44371,1
0.22408,0.52412,1
0.44297,0.67032,1
0.322,0.69225,1
0.13767,0.57529,1
-0.0063364,0.39985,1
-0.092742,0.55336,1
-0.20795,0.35599,1
-0.20795,0.17325,1
-0.43836,0.21711,1
-0.21947,-0.016813,1
-0.13882,-0.27266,1
0.18376,0.93348,0
0.22408,0.77997,0
0.29896,0.61915,0
0.50634,0.75804,0
0.61578,0.7288,0
0.60426,0.59722,0
0.76555,0.50219,0
0.92684,0.3633,0
0.82316,0.27558,0
0.96141,0.085526,0
0.93836,0.012427,0
0.86348,-0.082602,0
0.89804,-0.20687,0
0.85196,-0.36769,0
0.82892,-0.5212,0
0.79435,-0.55775,0
0.59274,-0.7405,0
0.51786,-0.5943,0
0.46601,-0.41886,0
0.35081,-0.57968,0
0.28744,-0.76974,0
0.085829,-0.75512,0
0.14919,-0.57968,0
-0.13306,-0.4481,0
-0.40956,-0.41155,0
-0.39228,-0.25804,0
-0.74366,-0.25804,0
-0.69758,0.041667,0
-0.75518,0.2902,0
-0.69758,0.68494,0
-0.4038,0.70687,0
-0.38076,0.91886,0
-0.50749,0.90424,0
-0.54781,0.70687,0
0.10311,0.77997,0
0.057028,0.91886,0
-0.10426,0.99196,0
-0.081221,1.1089,0
0.28744,1.087,0
0.39689,0.82383,0
0.63882,0.88962,0
0.82316,0.66301,0
0.67339,0.64108,0
1.0709,0.10015,0
-0.046659,-0.57968,0
-0.23675,-0.63816,0
-0.15035,-0.36769,0
-0.49021,-0.3019,0
-0.46717,-0.13377,0
-0.28859,-0.060673,0
-0.61118,-0.067982,0
-0.66302,-0.21418,0
-0.59965,-0.41886,0
-0.72638,-0.082602,0
-0.83007,0.31213,0
-0.72062,0.53874,0
-0.59389,0.49488,0
-0.48445,0.99927,0
-0.0063364,0.99927,0
0.63265,-0.030612,0

Python implementation:

import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import scipy.optimize as opt

matplotlib.rcParams['font.sans-serif'] = ['SimHei']  # render the Chinese labels with the SimHei font
matplotlib.rcParams['axes.unicode_minus'] = False    # render the minus sign correctly

# Step 1: load the data
df = pd.read_csv('C:/Users/Administrator/Desktop/吴恩达机器学习数据集/week1/ex2data2.txt',
                 names=['测试一', '测试二', '接受'])
print(df.head())

# Part 1: visualize the data with a scatter plot
positive = df[df['接受'].isin([1])]
negetive = df[df['接受'].isin([0])]
fig, ax = plt.subplots(figsize=(8, 5))
ax.scatter(positive['测试一'], positive['测试二'], s=50, c='b', label='接受')
ax.scatter(negetive['测试一'], negetive['测试二'], s=50, c='r', marker='x', label='未接受')
ax.legend()
ax.set_xlabel('测试一')
ax.set_ylabel('测试二')
plt.show()
# Part 2: feature mapping. A straight line cannot separate these classes, so we
# map the features into polynomial terms to obtain a nonlinear decision boundary.
# The mapping turns the low-dimensional feature vector (2-D here) into a
# high-dimensional one (28-D here). With this many features the model easily
# overfits, which is why regularization is needed.

# 1. Define the feature-mapping function
def feature_mapping(x1, x2, power):
    data = {}
    for i in np.arange(power + 1):
        for p in np.arange(i + 1):
            data["f{}{}".format(i - p, p)] = np.power(x1, i - p) * np.power(x2, p)
    return pd.DataFrame(data)
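A degree-6 mapping of two features yields 1 + 2 + ... + 7 = 28 monomials, which is where the 28-dimensional feature vector mentioned above comes from. A quick self-contained check of the same mapping logic:

```python
import numpy as np
import pandas as pd

def feature_mapping(x1, x2, power):
    # All monomials x1^(i-p) * x2^p of total degree i <= power
    data = {}
    for i in range(power + 1):
        for p in range(i + 1):
            data["f{}{}".format(i - p, p)] = np.power(x1, i - p) * np.power(x2, p)
    return pd.DataFrame(data)

mapped = feature_mapping(np.array([0.5]), np.array([-0.5]), 6)
print(mapped.shape)  # (1, 28)
```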
# 2. Apply the feature mapping to the two features
x1 = df['测试一'].values
x2 = df['测试二'].values
df2 = feature_mapping(x1, x2, 6)
print(df2.head())

# 3. Re-select the variables
X = df2.values           # the mapped features become the new X
y = df['接受'].values
theta = np.zeros(X.shape[1])

# Part 3: regularized cost function
# 1. Define the sigmoid function
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# 2. Unregularized cost (@ is matrix multiplication)
def cost(theta, X, y):
    first = y @ np.log(sigmoid(X @ theta))
    second = (1 - y) @ np.log(1 - sigmoid(X @ theta))
    return (first + second) / (-len(X))

# 3. Regularized cost: the penalty skips theta[0]
def costReg(theta, X, y, lam=1):
    _theta = theta[1:]
    reg = (lam / (2 * len(X))) * (_theta @ _theta)
    return cost(theta, X, y) + reg

# 4. Cost at the initial theta
print(costReg(theta, X, y))

# Part 4: find theta with an advanced optimizer
# 1. Define the regularized gradient (the partial derivatives)
def gradient(theta, X, y):
    return (X.T @ (sigmoid(X @ theta) - y)) / len(X)

def gradientReg(theta, X, y, lam=1):
    reg = (lam / len(X)) * theta
    reg[0] = 0  # the intercept term is not penalized
    return gradient(theta, X, y) + reg
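One way to gain confidence in gradientReg before handing it to the optimizer is a finite-difference check: each component of the analytic gradient should match (J(θ + εeᵢ) − J(θ − εeᵢ)) / 2ε. A self-contained sketch on synthetic data (the data and θ here are random placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cost_reg(theta, X, y, lam=1):
    # Cross-entropy cost plus an L2 penalty that skips theta[0]
    h = sigmoid(X @ theta)
    penalty = (lam / (2 * len(X))) * (theta[1:] @ theta[1:])
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h)) + penalty

def gradient_reg(theta, X, y, lam=1):
    grad = X.T @ (sigmoid(X @ theta) - y) / len(X)
    reg = (lam / len(X)) * theta
    reg[0] = 0  # the intercept term is not penalized
    return grad + reg

# Synthetic data: 20 examples, intercept column plus two random features
rng = np.random.default_rng(0)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])
y = (rng.random(20) > 0.5).astype(float)
theta = rng.normal(size=3)

# Central finite differences, one coordinate at a time
eps = 1e-5
numeric = np.array([
    (cost_reg(theta + eps * e, X, y) - cost_reg(theta - eps * e, X, y)) / (2 * eps)
    for e in np.eye(3)
])
analytic = gradient_reg(theta, X, y)
print(np.max(np.abs(numeric - analytic)))  # tiny: finite-difference error only
```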
# 2. Use the optimizer to find theta (lambda = 2 is passed through args)
result = opt.fmin_tnc(func=costReg, x0=theta, fprime=gradientReg, args=(X, y, 2))

# Part 5: evaluate the model by its training accuracy
# 1. Plug the fitted theta and the mapped X into the prediction function
def predict(theta, X):
    probability = sigmoid(X @ theta)
    return [1 if x >= 0.5 else 0 for x in probability]

finally_theta = result[0]
predictions = predict(finally_theta, X)
correct = [1 if a == b else 0 for (a, b) in zip(predictions, y)]
accuracy = sum(correct) / len(X)
print(accuracy)

# Alternatively, evaluate with sklearn:
'''
from sklearn.metrics import classification_report
print(classification_report(predictions, y))
'''

# Part 6: plot the decision boundary
x = np.linspace(-1, 1.5, 250)
xx, yy = np.meshgrid(x, x)  # coordinate matrices for a grid; xx and yy have shape (250, 250)
z = feature_mapping(xx.ravel(), yy.ravel(), 6).values  # xx.ravel() flattens xx to shape (62500,)
z = z @ finally_theta  # z.shape is (62500,)
z = z.reshape(xx.shape)

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(positive['测试一'], positive['测试二'], label='接受')
ax.scatter(negetive['测试一'], negetive['测试二'], marker='x', label='未接受')
# Move the legend above the plot
box = ax.get_position()                                         # current axes position
ax.set_position([box.x0, box.y0, box.width, box.height * 0.8])  # shrink the axes to make room
ax.legend(loc='center left', bbox_to_anchor=(0.2, 1.12), ncol=3)
ax.set_xlabel('测试一')
ax.set_ylabel('测试二')
ax.set_title('决策边界lambda为2')
plt.contour(xx, yy, z, 0)  # draw the contour where z = 0, i.e. the decision boundary
plt.show()

