Table of Contents

  • Regularization
    • 1. One-by-one regularization
    • 2. Flexible regularization
  • Hands-on with the MNIST dataset

Regularization
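Regularization combats overfitting by penalizing large weights: a scaled norm of the trainable parameters is added to the training objective, so the optimizer must trade off fitting the data against keeping the weights small. With the L2 variant used throughout this post, the objective becomes

    loss' = loss + λ · Σᵢ ‖θᵢ‖² / 2

where λ is the regularization strength (0.0001 in the MNIST example below) and the sum runs over the trainable parameters θᵢ; TensorFlow's tf.nn.l2_loss(p) computes exactly sum(p²) / 2 for a single tensor p.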



1. One-by-one regularization
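In the one-by-one style, each layer declares its own penalty through Keras's kernel_regularizer argument, and Keras collects the resulting terms in network.losses. A minimal sketch, assuming an illustrative L2 strength of 0.001 (not a value from the original post):

import tensorflow as tf
from tensorflow.keras import layers, regularizers, Sequential

# Each Dense layer carries its own L2 penalty on its kernel; Keras
# accumulates the per-layer terms in network.losses and adds them to
# the objective automatically when training with model.fit().
network = Sequential([
    layers.Dense(256, activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dense(10, kernel_regularizer=regularizers.l2(0.001)),
])
network.build(input_shape=(None, 28 * 28))
print(network.losses)  # one scalar regularization tensor per regularized layer

The drawback is rigidity: the penalty type and strength are fixed per layer when the model is defined.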

2. Flexible regularization
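In the flexible style, the penalty is assembled by hand inside the training loop, so you decide at every step which variables to penalize and with what coefficient. A minimal sketch of the pattern (the MNIST example below uses exactly this, with coefficient 0.0001):

import tensorflow as tf

def l2_penalty(model):
    # tf.nn.l2_loss(p) returns sum(p ** 2) / 2 for a single tensor;
    # summing over all trainable variables gives the total L2 term.
    return tf.reduce_sum([tf.nn.l2_loss(p) for p in model.trainable_variables])

# inside a tf.GradientTape() block:
#     loss = cross_entropy_loss + 0.0001 * l2_penalty(network)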

Hands-on with the MNIST dataset
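The complete example below trains a five-layer fully connected network on MNIST using the flexible approach: the cross-entropy loss is augmented with 0.0001 times the summed tf.nn.l2_loss over all trainable variables, and validation accuracy is reported every 500 training steps.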

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics


def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


batchsz = 128
(x, y), (x_val, y_val) = datasets.mnist.load_data()
print('datasets:', x.shape, y.shape, x.min(), x.max())

db = tf.data.Dataset.from_tensor_slices((x, y))
db = db.map(preprocess).shuffle(60000).batch(batchsz).repeat(10)

ds_val = tf.data.Dataset.from_tensor_slices((x_val, y_val))
ds_val = ds_val.map(preprocess).batch(batchsz)

network = Sequential([layers.Dense(256, activation='relu'),
                      layers.Dense(128, activation='relu'),
                      layers.Dense(64, activation='relu'),
                      layers.Dense(32, activation='relu'),
                      layers.Dense(10)])
network.build(input_shape=(None, 28 * 28))
network.summary()

optimizer = optimizers.Adam(lr=0.01)

for step, (x, y) in enumerate(db):

    with tf.GradientTape() as tape:
        # [b, 28, 28] => [b, 784]
        x = tf.reshape(x, (-1, 28 * 28))
        # [b, 784] => [b, 10]
        out = network(x)
        # [b] => [b, 10]
        y_onehot = tf.one_hot(y, depth=10)
        # cross-entropy loss averaged over the batch
        loss = tf.reduce_mean(tf.losses.categorical_crossentropy(
            y_onehot, out, from_logits=True))

        # regularization: tf.nn.l2_loss(p) computes the L2 term
        # sum(p ** 2) / 2 for each trainable parameter
        loss_regularization = []
        for p in network.trainable_variables:
            loss_regularization.append(tf.nn.l2_loss(p))
        loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))

        loss = loss + 0.0001 * loss_regularization

    grads = tape.gradient(loss, network.trainable_variables)
    optimizer.apply_gradients(zip(grads, network.trainable_variables))

    if step % 100 == 0:
        print(step, 'loss:', float(loss), 'loss_regularization:', float(loss_regularization))

    # evaluate every 500 steps
    if step % 500 == 0:
        total, total_correct = 0., 0

        # note: this inner loop reuses (and shadows) the name `step`, which is
        # why the evaluation lines in the output print 78, the index of the
        # last validation batch
        for step, (x, y) in enumerate(ds_val):
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # [b, 784] => [b, 10]
            out = network(x)
            # [b, 10] => [b]
            pred = tf.argmax(out, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)
            # bool tensor
            correct = tf.equal(pred, y)
            # bool tensor => int tensor => numpy
            total_correct += tf.reduce_sum(tf.cast(correct, dtype=tf.int32)).numpy()
            total += x.shape[0]

        print(step, 'Evaluate Acc:', total_correct / total)

Output:
datasets: (60000, 28, 28) (60000,) 0 255
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                multiple                  200960
_________________________________________________________________
dense_1 (Dense)              multiple                  32896
_________________________________________________________________
dense_2 (Dense)              multiple                  8256
_________________________________________________________________
dense_3 (Dense)              multiple                  2080
_________________________________________________________________
dense_4 (Dense)              multiple                  330
=================================================================
Total params: 244,522
Trainable params: 244,522
Non-trainable params: 0
_________________________________________________________________
0 loss: 2.3507652282714844 loss_regularization: 350.3747863769531
78 Evaluate Acc: 0.2855
100 loss: 0.36240220069885254 loss_regularization: 589.3790893554688
200 loss: 0.5004981756210327 loss_regularization: 682.1327514648438
300 loss: 0.21314147114753723 loss_regularization: 775.0255737304688
400 loss: 0.27983567118644714 loss_regularization: 828.80859375
500 loss: 0.2916567325592041 loss_regularization: 878.8491821289062
78 Evaluate Acc: 0.9542
600 loss: 0.32055604457855225 loss_regularization: 904.9992065429688
700 loss: 0.1594236195087433 loss_regularization: 942.77978515625
800 loss: 0.221163809299469 loss_regularization: 960.769287109375
900 loss: 0.22233238816261292 loss_regularization: 963.0105590820312
1000 loss: 0.14087724685668945 loss_regularization: 990.5718994140625
78 Evaluate Acc: 0.9661
1100 loss: 0.24112433195114136 loss_regularization: 1028.871826171875
1200 loss: 0.3315228819847107 loss_regularization: 1044.771728515625
1300 loss: 0.25928544998168945 loss_regularization: 1086.576904296875
1400 loss: 0.22301778197288513 loss_regularization: 1110.1475830078125
1500 loss: 0.3217935562133789 loss_regularization: 1083.4852294921875
78 Evaluate Acc: 0.9599
1600 loss: 0.19745567440986633 loss_regularization: 1097.128662109375
1700 loss: 0.3913511633872986 loss_regularization: 1100.0284423828125
1800 loss: 0.24843886494636536 loss_regularization: 1143.0257568359375
1900 loss: 0.28508129715919495 loss_regularization: 1125.083251953125
2000 loss: 0.2747941315174103 loss_regularization: 1099.05908203125
78 Evaluate Acc: 0.9645
2100 loss: 0.18370205163955688 loss_regularization: 1083.613037109375
2200 loss: 0.24575147032737732 loss_regularization: 1096.5216064453125
2300 loss: 0.2671639323234558 loss_regularization: 1119.0755615234375
2400 loss: 0.17508020997047424 loss_regularization: 1075.147216796875
2500 loss: 0.20603394508361816 loss_regularization: 1099.5045166015625
78 Evaluate Acc: 0.9666
2600 loss: 0.20938491821289062 loss_regularization: 1063.99755859375
2700 loss: 0.33030807971954346 loss_regularization: 1058.947265625
2800 loss: 0.2951526343822479 loss_regularization: 1092.590087890625
2900 loss: 0.37690189480781555 loss_regularization: 1113.8955078125
3000 loss: 0.39653170108795166 loss_regularization: 1164.5491943359375
78 Evaluate Acc: 0.9688
3100 loss: 0.3081352710723877 loss_regularization: 1098.758056640625
3200 loss: 0.31366774439811707 loss_regularization: 1121.0487060546875
3300 loss: 0.3593229353427887 loss_regularization: 1124.6781005859375
3400 loss: 0.1733701378107071 loss_regularization: 1143.44140625
3500 loss: 0.2331463098526001 loss_regularization: 1137.5433349609375
78 Evaluate Acc: 0.967
3600 loss: 0.23532089591026306 loss_regularization: 1074.432861328125
3700 loss: 0.19450485706329346 loss_regularization: 1079.4312744140625
3800 loss: 0.15056748688220978 loss_regularization: 1108.001953125
3900 loss: 0.28273844718933105 loss_regularization: 1071.671142578125
4000 loss: 0.15014755725860596 loss_regularization: 1081.7843017578125
78 Evaluate Acc: 0.971
4100 loss: 0.1769871711730957 loss_regularization: 1120.951904296875
4200 loss: 0.21285438537597656 loss_regularization: 1044.5946044921875
4300 loss: 0.2390756756067276 loss_regularization: 1046.773681640625
4400 loss: 0.20340555906295776 loss_regularization: 1036.9803466796875
4500 loss: 0.1344645917415619 loss_regularization: 1021.6719970703125
78 Evaluate Acc: 0.9652
4600 loss: 0.23330935835838318 loss_regularization: 1026.7904052734375
